Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 's390-6.9-1' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux

Pull s390 updates from Heiko Carstens:

- Various virtual vs physical address usage fixes

- Fix error handling in the Processor Activity Instrumentation device
driver, and export the number of counters via a sysfs file

- Allow multiple events when Processor Activity Instrumentation
counters are monitored in system-wide sampling

- Change multiplier and shift values of the Time-of-Day clock source to
improve steering precision

- Remove a couple of unneeded GFP_DMA flags from allocations

- Disable mmap alignment if randomize_va_space is also disabled, to
avoid an overly small heap

- Various changes to allow s390 to be compiled with LLVM=1, since
ld.lld and llvm-objcopy will have proper s390 support with clang 19

- Add __uninitialized macro to Compiler Attributes. This is helpful
with s390's FPU code, where some users have up to 520-byte stack
frames. Clearing such stack frames (if INIT_STACK_ALL_PATTERN or
INIT_STACK_ALL_ZERO is enabled) before they are used defeats the
intention (a performance improvement) of such code sections.

- Convert switch_to() to an out-of-line function, and use the generic
switch_to header file

- Replace the usage of s390's debug feature with pr_debug() calls
within the zcrypt device driver

- Improve hotplug support of the Adjunct Processor device driver

- Improve retry handling in the zcrypt device driver

- Various changes to the in-kernel FPU code:

- Make in-kernel FPU sections preemptible

- Convert various larger inline assemblies and assembler files to
C, mainly by using single-instruction inline assemblies. This
increases readability and also makes it easier to add proper
instrumentation hooks

- Cleanup of the header files

- Provide fast variants of csum_partial() and
csum_partial_copy_nocheck() based on vector instructions

- Introduce and use a lock to synchronize accesses to zpci device data
structures to avoid inconsistent states caused by concurrent accesses

- Compile the kernel without -fPIE. This addresses the following
problems if the kernel is compiled with -fPIE:

- It uses dynamic symbols (.dynsym), for which the linker refuses
to allow more than 64k sections. This can break features which
use '-ffunction-sections' and '-fdata-sections', including
kpatch-build and function granular KASLR

- It unnecessarily uses GOT relocations, adding an extra layer of
indirection for many memory accesses

- Fix shared_cpu_list for CPU-private L2 caches, which were
incorrectly reported as globally shared

* tag 's390-6.9-1' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux: (117 commits)
s390/tools: handle rela R_390_GOTPCDBL/R_390_GOTOFF64
s390/cache: prevent rebuild of shared_cpu_list
s390/crypto: remove retry loop with sleep from PAES pkey invocation
s390/pkey: improve pkey retry behavior
s390/zcrypt: improve zcrypt retry behavior
s390/zcrypt: introduce retries on in-kernel send CPRB functions
s390/ap: introduce mutex to lock the AP bus scan
s390/ap: rework ap_scan_bus() to return true on config change
s390/ap: clarify AP scan bus related functions and variables
s390/ap: rearm APQNs bindings complete completion
s390/configs: increase number of LOCKDEP_BITS
s390/vfio-ap: handle hardware checkstop state on queue reset operation
s390/pai: change sampling event assignment for PMU device driver
s390/boot: fix minor comment style damages
s390/boot: do not check for zero-termination relocation entry
s390/boot: make type of __vmlinux_relocs_64_start|end consistent
s390/boot: sanitize kaslr_adjust_relocs() function prototype
s390/boot: simplify GOT handling
s390: vmlinux.lds.S: fix .got.plt assertion
s390/boot: workaround current 'llvm-objdump -t -j ...' behavior
...

+3242 -1912
+14 -4
arch/s390/Kconfig
···
127 127 select ARCH_WANT_DEFAULT_BPF_JIT
128 128 select ARCH_WANT_IPC_PARSE_VERSION
129 129 select ARCH_WANT_KERNEL_PMD_MKWRITE
130 + select ARCH_WANT_LD_ORPHAN_WARN
130 131 select ARCH_WANT_OPTIMIZE_HUGETLB_VMEMMAP
131 132 select BUILDTIME_TABLE_SORT
132 133 select CLONE_BACKWARDS2
···
449 448 select COMPAT_OLD_SIGACTION
450 449 select HAVE_UID16
451 450 depends on MULTIUSER
452 - depends on !CC_IS_CLANG
451 + depends on !CC_IS_CLANG && !LD_IS_LLD
453 452 help
454 453 Select this option if you want to enable your system kernel to
455 454 handle system-calls from ELF binaries for 31 bit ESA. This option
···
583 582 help
584 583 This builds a kernel image that retains relocation information
585 584 so it can be loaded at an arbitrary address.
586 - The kernel is linked as a position-independent executable (PIE)
587 - and contains dynamic relocations which are processed early in the
588 - bootup process.
589 585 The relocations make the kernel image about 15% larger (compressed
590 586 10%), but are discarded at runtime.
591 587 Note: this option exists only for documentation purposes, please do
592 588 not remove it.
589 + 
590 + config PIE_BUILD
591 + def_bool CC_IS_CLANG && !$(cc-option,-munaligned-symbols)
592 + help
593 + If the compiler is unable to generate code that can manage unaligned
594 + symbols, the kernel is linked as a position-independent executable
595 + (PIE) and includes dynamic relocations that are processed early
596 + during bootup.
597 + 
598 + For kpatch functionality, it is recommended to build the kernel
599 + without the PIE_BUILD option. PIE_BUILD is only enabled when the
600 + compiler lacks proper support for handling unaligned symbols.
593 601 
594 602 config RANDOMIZE_BASE
595 603 bool "Randomize the address of the kernel image (KASLR)"
+8 -2
arch/s390/Makefile
···
14 14 KBUILD_CFLAGS_MODULE += -fPIC
15 15 KBUILD_AFLAGS += -m64
16 16 KBUILD_CFLAGS += -m64
17 + ifdef CONFIG_PIE_BUILD
17 18 KBUILD_CFLAGS += -fPIE
18 - LDFLAGS_vmlinux := -pie
19 + LDFLAGS_vmlinux := -pie -z notext
20 + else
21 + KBUILD_CFLAGS += $(call cc-option,-munaligned-symbols,)
22 + LDFLAGS_vmlinux := --emit-relocs --discard-none
23 + extra_tools := relocs
24 + endif
19 25 aflags_dwarf := -Wa,-gdwarf-2
20 26 KBUILD_AFLAGS_DECOMPRESSOR := $(CLANG_FLAGS) -m64 -D__ASSEMBLY__
21 27 ifndef CONFIG_AS_IS_LLVM
···
149 143 
150 144 archprepare:
151 145 $(Q)$(MAKE) $(build)=$(syscalls) kapi
152 - $(Q)$(MAKE) $(build)=$(tools) kapi
146 + $(Q)$(MAKE) $(build)=$(tools) kapi $(extra_tools)
153 147 ifeq ($(KBUILD_EXTMOD),)
154 148 # We need to generate vdso-offsets.h before compiling certain files in kernel/.
155 149 # In order to do that, we should use the archprepare target, but we can't since
+1
arch/s390/boot/.gitignore
···
1 1 # SPDX-License-Identifier: GPL-2.0-only
2 2 image
3 3 bzImage
4 + relocs.S
4 5 section_cmp.*
5 6 vmlinux
6 7 vmlinux.lds
+19 -6
arch/s390/boot/Makefile
···
37 37 
38 38 obj-y := head.o als.o startup.o physmem_info.o ipl_parm.o ipl_report.o vmem.o
39 39 obj-y += string.o ebcdic.o sclp_early_core.o mem.o ipl_vmparm.o cmdline.o
40 - obj-y += version.o pgm_check_info.o ctype.o ipl_data.o machine_kexec_reloc.o
40 + obj-y += version.o pgm_check_info.o ctype.o ipl_data.o
41 + obj-y += $(if $(CONFIG_PIE_BUILD),machine_kexec_reloc.o,relocs.o)
41 42 obj-$(findstring y, $(CONFIG_PROTECTED_VIRTUALIZATION_GUEST) $(CONFIG_PGSTE)) += uv.o
42 43 obj-$(CONFIG_RANDOMIZE_BASE) += kaslr.o
43 44 obj-y += $(if $(CONFIG_KERNEL_UNCOMPRESSED),,decompressor.o) info.o
···
49 48 targets += vmlinux.lds vmlinux vmlinux.bin vmlinux.bin.gz vmlinux.bin.bz2
50 49 targets += vmlinux.bin.xz vmlinux.bin.lzma vmlinux.bin.lzo vmlinux.bin.lz4
51 50 targets += vmlinux.bin.zst info.bin syms.bin vmlinux.syms $(obj-all)
51 + ifndef CONFIG_PIE_BUILD
52 + targets += relocs.S
53 + endif
52 54 
53 55 OBJECTS := $(addprefix $(obj)/,$(obj-y))
54 56 OBJECTS_ALL := $(addprefix $(obj)/,$(obj-all))
···
60 56 
61 57 quiet_cmd_section_cmp = SECTCMP $*
62 58 define cmd_section_cmp
63 - s1=`$(OBJDUMP) -t -j "$*" "$<" | sort | \
59 + s1=`$(OBJDUMP) -t "$<" | grep "\s$*\s\+" | sort | \
64 60 sed -n "/0000000000000000/! s/.*\s$*\s\+//p" | sha256sum`; \
65 - s2=`$(OBJDUMP) -t -j "$*" "$(word 2,$^)" | sort | \
61 + s2=`$(OBJDUMP) -t "$(word 2,$^)" | grep "\s$*\s\+" | sort | \
66 62 sed -n "/0000000000000000/! s/.*\s$*\s\+//p" | sha256sum`; \
67 63 if [ "$$s1" != "$$s2" ]; then \
68 64 echo "error: section $* differs between $< and $(word 2,$^)" >&2; \
···
77 73 $(obj)/section_cmp%: vmlinux $(obj)/vmlinux FORCE
78 74 $(call if_changed,section_cmp)
79 75 
80 - LDFLAGS_vmlinux := --oformat $(LD_BFD) -e startup $(if $(CONFIG_VMLINUX_MAP),-Map=$(obj)/vmlinux.map) --build-id=sha1 -T
76 + LDFLAGS_vmlinux-$(CONFIG_LD_ORPHAN_WARN) := --orphan-handling=$(CONFIG_LD_ORPHAN_WARN_LEVEL)
77 + LDFLAGS_vmlinux := $(LDFLAGS_vmlinux-y) --oformat $(LD_BFD) -e startup $(if $(CONFIG_VMLINUX_MAP),-Map=$(obj)/vmlinux.map) --build-id=sha1 -T
81 78 $(obj)/vmlinux: $(obj)/vmlinux.lds $(OBJECTS_ALL) FORCE
82 79 $(call if_changed,ld)
83 80 
84 - LDFLAGS_vmlinux.syms := --oformat $(LD_BFD) -e startup -T
81 + LDFLAGS_vmlinux.syms := $(LDFLAGS_vmlinux-y) --oformat $(LD_BFD) -e startup -T
85 82 $(obj)/vmlinux.syms: $(obj)/vmlinux.lds $(OBJECTS) FORCE
86 83 $(call if_changed,ld)
···
98 93 $(obj)/syms.o: $(obj)/syms.bin FORCE
99 94 $(call if_changed,objcopy)
100 95 
101 - OBJCOPYFLAGS_info.bin := -O binary --only-section=.vmlinux.info --set-section-flags .vmlinux.info=load
96 + OBJCOPYFLAGS_info.bin := -O binary --only-section=.vmlinux.info --set-section-flags .vmlinux.info=alloc,load
102 97 $(obj)/info.bin: vmlinux FORCE
103 98 $(call if_changed,objcopy)
···
109 104 OBJCOPYFLAGS_vmlinux.bin := -O binary --remove-section=.comment --remove-section=.vmlinux.info -S
110 105 $(obj)/vmlinux.bin: vmlinux FORCE
111 106 $(call if_changed,objcopy)
107 + 
108 + ifndef CONFIG_PIE_BUILD
109 + CMD_RELOCS=arch/s390/tools/relocs
110 + quiet_cmd_relocs = RELOCS $@
111 + cmd_relocs = $(CMD_RELOCS) $< > $@
112 + $(obj)/relocs.S: vmlinux FORCE
113 + $(call if_changed,relocs)
114 + endif
112 115 
113 116 suffix-$(CONFIG_KERNEL_GZIP) := .gz
114 117 suffix-$(CONFIG_KERNEL_BZIP2) := .bz2
+6
arch/s390/boot/boot.h
···
25 25 unsigned long bootdata_size;
26 26 unsigned long bootdata_preserved_off;
27 27 unsigned long bootdata_preserved_size;
28 + #ifdef CONFIG_PIE_BUILD
28 29 unsigned long dynsym_start;
29 30 unsigned long rela_dyn_start;
30 31 unsigned long rela_dyn_end;
32 + #else
33 + unsigned long got_start;
34 + unsigned long got_end;
35 + #endif
31 36 unsigned long amode31_size;
32 37 unsigned long init_mm_off;
33 38 unsigned long swapper_pg_dir_off;
···
88 83 extern int vmalloc_size_set;
89 84 extern char __boot_data_start[], __boot_data_end[];
90 85 extern char __boot_data_preserved_start[], __boot_data_preserved_end[];
86 + extern char __vmlinux_relocs_64_start[], __vmlinux_relocs_64_end[];
91 87 extern char _decompressor_syms_start[], _decompressor_syms_end[];
92 88 extern char _stack_start[], _stack_end[];
93 89 extern char _end[], _decompressor_end[];
+66 -9
arch/s390/boot/startup.c
··· 141 141 memcpy((void *)vmlinux.bootdata_preserved_off, __boot_data_preserved_start, vmlinux.bootdata_preserved_size); 142 142 } 143 143 144 - static void handle_relocs(unsigned long offset) 144 + #ifdef CONFIG_PIE_BUILD 145 + static void kaslr_adjust_relocs(unsigned long min_addr, unsigned long max_addr, unsigned long offset) 145 146 { 146 147 Elf64_Rela *rela_start, *rela_end, *rela; 147 148 int r_type, r_sym, rc; ··· 172 171 error("Unknown relocation type"); 173 172 } 174 173 } 174 + 175 + static void kaslr_adjust_got(unsigned long offset) {} 176 + static void rescue_relocs(void) {} 177 + static void free_relocs(void) {} 178 + #else 179 + static int *vmlinux_relocs_64_start; 180 + static int *vmlinux_relocs_64_end; 181 + 182 + static void rescue_relocs(void) 183 + { 184 + unsigned long size = __vmlinux_relocs_64_end - __vmlinux_relocs_64_start; 185 + 186 + vmlinux_relocs_64_start = (void *)physmem_alloc_top_down(RR_RELOC, size, 0); 187 + vmlinux_relocs_64_end = (void *)vmlinux_relocs_64_start + size; 188 + memmove(vmlinux_relocs_64_start, __vmlinux_relocs_64_start, size); 189 + } 190 + 191 + static void free_relocs(void) 192 + { 193 + physmem_free(RR_RELOC); 194 + } 195 + 196 + static void kaslr_adjust_relocs(unsigned long min_addr, unsigned long max_addr, unsigned long offset) 197 + { 198 + int *reloc; 199 + long loc; 200 + 201 + /* Adjust R_390_64 relocations */ 202 + for (reloc = vmlinux_relocs_64_start; reloc < vmlinux_relocs_64_end; reloc++) { 203 + loc = (long)*reloc + offset; 204 + if (loc < min_addr || loc > max_addr) 205 + error("64-bit relocation outside of kernel!\n"); 206 + *(u64 *)loc += offset; 207 + } 208 + } 209 + 210 + static void kaslr_adjust_got(unsigned long offset) 211 + { 212 + u64 *entry; 213 + 214 + /* 215 + * Even without -fPIE, Clang still uses a global offset table for some 216 + * reason. Adjust the GOT entries. 
217 + */ 218 + for (entry = (u64 *)vmlinux.got_start; entry < (u64 *)vmlinux.got_end; entry++) 219 + *entry += offset; 220 + } 221 + #endif 175 222 176 223 /* 177 224 * Merge information from several sources into a single ident_map_size value. ··· 348 299 vmalloc_size = max(size, vmalloc_size); 349 300 } 350 301 351 - static void offset_vmlinux_info(unsigned long offset) 302 + static void kaslr_adjust_vmlinux_info(unsigned long offset) 352 303 { 353 304 *(unsigned long *)(&vmlinux.entry) += offset; 354 305 vmlinux.bootdata_off += offset; 355 306 vmlinux.bootdata_preserved_off += offset; 307 + #ifdef CONFIG_PIE_BUILD 356 308 vmlinux.rela_dyn_start += offset; 357 309 vmlinux.rela_dyn_end += offset; 358 310 vmlinux.dynsym_start += offset; 311 + #else 312 + vmlinux.got_start += offset; 313 + vmlinux.got_end += offset; 314 + #endif 359 315 vmlinux.init_mm_off += offset; 360 316 vmlinux.swapper_pg_dir_off += offset; 361 317 vmlinux.invalid_pg_dir_off += offset; ··· 415 361 detect_physmem_online_ranges(max_physmem_end); 416 362 save_ipl_cert_comp_list(); 417 363 rescue_initrd(safe_addr, ident_map_size); 364 + rescue_relocs(); 418 365 419 366 if (kaslr_enabled()) { 420 367 vmlinux_lma = randomize_within_range(vmlinux.image_size + vmlinux.bss_size, ··· 423 368 ident_map_size); 424 369 if (vmlinux_lma) { 425 370 __kaslr_offset = vmlinux_lma - vmlinux.default_lma; 426 - offset_vmlinux_info(__kaslr_offset); 371 + kaslr_adjust_vmlinux_info(__kaslr_offset); 427 372 } 428 373 } 429 374 vmlinux_lma = vmlinux_lma ?: vmlinux.default_lma; ··· 448 393 /* 449 394 * The order of the following operations is important: 450 395 * 451 - * - handle_relocs() must follow clear_bss_section() to establish static 452 - * memory references to data in .bss to be used by setup_vmem() 396 + * - kaslr_adjust_relocs() must follow clear_bss_section() to establish 397 + * static memory references to data in .bss to be used by setup_vmem() 453 398 * (i.e init_mm.pgd) 454 399 * 455 - * - setup_vmem() must 
follow handle_relocs() to be able using 400 + * - setup_vmem() must follow kaslr_adjust_relocs() to be able using 456 401 * static memory references to data in .bss (i.e init_mm.pgd) 457 402 * 458 - * - copy_bootdata() must follow setup_vmem() to propagate changes to 459 - * bootdata made by setup_vmem() 403 + * - copy_bootdata() must follow setup_vmem() to propagate changes 404 + * to bootdata made by setup_vmem() 460 405 */ 461 406 clear_bss_section(vmlinux_lma); 462 - handle_relocs(__kaslr_offset); 407 + kaslr_adjust_relocs(vmlinux_lma, vmlinux_lma + vmlinux.image_size, __kaslr_offset); 408 + kaslr_adjust_got(__kaslr_offset); 409 + free_relocs(); 463 410 setup_vmem(asce_limit); 464 411 copy_bootdata(); 465 412
+48
arch/s390/boot/vmlinux.lds.S
··· 31 31 _text = .; /* Text */ 32 32 *(.text) 33 33 *(.text.*) 34 + INIT_TEXT 34 35 _etext = . ; 35 36 } 36 37 .rodata : { ··· 39 38 *(.rodata) /* read-only data */ 40 39 *(.rodata.*) 41 40 _erodata = . ; 41 + } 42 + .got : { 43 + *(.got) 42 44 } 43 45 NOTES 44 46 .data : { ··· 110 106 _compressed_end = .; 111 107 } 112 108 109 + #ifndef CONFIG_PIE_BUILD 110 + /* 111 + * When the kernel is built with CONFIG_KERNEL_UNCOMPRESSED, the entire 112 + * uncompressed vmlinux.bin is positioned in the bzImage decompressor 113 + * image at the default kernel LMA of 0x100000, enabling it to be 114 + * executed in-place. However, the size of .vmlinux.relocs could be 115 + * large enough to cause an overlap with the uncompressed kernel at the 116 + * address 0x100000. To address this issue, .vmlinux.relocs is 117 + * positioned after the .rodata.compressed. 118 + */ 119 + . = ALIGN(4); 120 + .vmlinux.relocs : { 121 + __vmlinux_relocs_64_start = .; 122 + *(.vmlinux.relocs_64) 123 + __vmlinux_relocs_64_end = .; 124 + } 125 + #endif 126 + 113 127 #define SB_TRAILER_SIZE 32 114 128 /* Trailer needed for Secure Boot */ 115 129 . += SB_TRAILER_SIZE; /* make sure .sb.trailer does not overwrite the previous section */ ··· 140 118 } 141 119 _end = .; 142 120 121 + DWARF_DEBUG 122 + ELF_DETAILS 123 + 124 + /* 125 + * Make sure that the .got.plt is either completely empty or it 126 + * contains only the three reserved double words. 127 + */ 128 + .got.plt : { 129 + *(.got.plt) 130 + } 131 + ASSERT(SIZEOF(.got.plt) == 0 || SIZEOF(.got.plt) == 0x18, "Unexpected GOT/PLT entries detected!") 132 + 133 + /* 134 + * Sections that should stay zero sized, which is safer to 135 + * explicitly check instead of blindly discarding. 
136 + */ 137 + .plt : { 138 + *(.plt) *(.plt.*) *(.iplt) *(.igot .igot.plt) 139 + } 140 + ASSERT(SIZEOF(.plt) == 0, "Unexpected run-time procedure linkages detected!") 141 + .rela.dyn : { 142 + *(.rela.*) *(.rela_*) 143 + } 144 + ASSERT(SIZEOF(.rela.dyn) == 0, "Unexpected run-time relocations (.rela) detected!") 145 + 143 146 /* Sections to be discarded */ 144 147 /DISCARD/ : { 148 + COMMON_DISCARDS 145 149 *(.eh_frame) 146 150 *(__ex_table) 147 151 *(*__ksymtab*)
+2
arch/s390/configs/debug_defconfig
···
824 824 CONFIG_DEBUG_PREEMPT=y
825 825 CONFIG_PROVE_LOCKING=y
826 826 CONFIG_LOCK_STAT=y
827 + CONFIG_LOCKDEP_BITS=16
828 + CONFIG_LOCKDEP_CHAINS_BITS=17
827 829 CONFIG_DEBUG_ATOMIC_SLEEP=y
828 830 CONFIG_DEBUG_LOCKING_API_SELFTESTS=y
829 831 CONFIG_DEBUG_IRQFLAGS=y
+2 -2
arch/s390/crypto/chacha-glue.c
···
15 15 #include <linux/kernel.h>
16 16 #include <linux/module.h>
17 17 #include <linux/sizes.h>
18 - #include <asm/fpu/api.h>
18 + #include <asm/fpu.h>
19 19 #include "chacha-s390.h"
20 20 
21 21 static void chacha20_crypt_s390(u32 *state, u8 *dst, const u8 *src,
22 22 unsigned int nbytes, const u32 *key,
23 23 u32 *counter)
24 24 {
25 - struct kernel_fpu vxstate;
25 + DECLARE_KERNEL_FPU_ONSTACK32(vxstate);
26 26 
27 27 kernel_fpu_begin(&vxstate, KERNEL_VXR);
28 28 chacha20_vx(dst, src, nbytes, key, counter);
+1 -1
arch/s390/crypto/chacha-s390.S
···
8 8 
9 9 #include <linux/linkage.h>
10 10 #include <asm/nospec-insn.h>
11 - #include <asm/vx-insn.h>
11 + #include <asm/fpu-insn.h>
12 12 
13 13 #define SP %r15
14 14 #define FRAME (16 * 8 + 4 * 8)
+3 -8
arch/s390/crypto/crc32-vx.c
···
13 13 #include <linux/cpufeature.h>
14 14 #include <linux/crc32.h>
15 15 #include <crypto/internal/hash.h>
16 - #include <asm/fpu/api.h>
17 - 
16 + #include <asm/fpu.h>
17 + #include "crc32-vx.h"
18 18 
19 19 #define CRC32_BLOCK_SIZE 1
20 20 #define CRC32_DIGEST_SIZE 4
···
31 31 u32 crc;
32 32 };
33 33 
34 - /* Prototypes for functions in assembly files */
35 - u32 crc32_le_vgfm_16(u32 crc, unsigned char const *buf, size_t size);
36 - u32 crc32_be_vgfm_16(u32 crc, unsigned char const *buf, size_t size);
37 - u32 crc32c_le_vgfm_16(u32 crc, unsigned char const *buf, size_t size);
38 - 
39 34 /*
40 35 * DEFINE_CRC32_VX() - Define a CRC-32 function using the vector extension
41 36 *
···
44 49 static u32 __pure ___fname(u32 crc, \
45 50 unsigned char const *data, size_t datalen) \
46 51 { \
47 - struct kernel_fpu vxstate; \
48 52 unsigned long prealign, aligned, remaining; \
53 + DECLARE_KERNEL_FPU_ONSTACK16(vxstate); \
49 54 \
50 55 if (datalen < VX_MIN_LEN + VX_ALIGN_MASK) \
51 56 return ___crc32_sw(crc, data, datalen); \
+12
arch/s390/crypto/crc32-vx.h
···
1 + /* SPDX-License-Identifier: GPL-2.0 */
2 + 
3 + #ifndef _CRC32_VX_S390_H
4 + #define _CRC32_VX_S390_H
5 + 
6 + #include <linux/types.h>
7 + 
8 + u32 crc32_be_vgfm_16(u32 crc, unsigned char const *buf, size_t size);
9 + u32 crc32_le_vgfm_16(u32 crc, unsigned char const *buf, size_t size);
10 + u32 crc32c_le_vgfm_16(u32 crc, unsigned char const *buf, size_t size);
11 + 
12 + #endif /* _CRC32_VX_S390_H */
+66 -105
arch/s390/crypto/crc32be-vx.S arch/s390/crypto/crc32be-vx.c
··· 12 12 * Author(s): Hendrik Brueckner <brueckner@linux.vnet.ibm.com> 13 13 */ 14 14 15 - #include <linux/linkage.h> 16 - #include <asm/nospec-insn.h> 17 - #include <asm/vx-insn.h> 15 + #include <linux/types.h> 16 + #include <asm/fpu.h> 17 + #include "crc32-vx.h" 18 18 19 19 /* Vector register range containing CRC-32 constants */ 20 - #define CONST_R1R2 %v9 21 - #define CONST_R3R4 %v10 22 - #define CONST_R5 %v11 23 - #define CONST_R6 %v12 24 - #define CONST_RU_POLY %v13 25 - #define CONST_CRC_POLY %v14 26 - 27 - .data 28 - .balign 8 20 + #define CONST_R1R2 9 21 + #define CONST_R3R4 10 22 + #define CONST_R5 11 23 + #define CONST_R6 12 24 + #define CONST_RU_POLY 13 25 + #define CONST_CRC_POLY 14 29 26 30 27 /* 31 28 * The CRC-32 constant block contains reduction constants to fold and ··· 55 58 * P'(x) = 0xEDB88320 56 59 */ 57 60 58 - SYM_DATA_START_LOCAL(constants_CRC_32_BE) 59 - .quad 0x08833794c, 0x0e6228b11 # R1, R2 60 - .quad 0x0c5b9cd4c, 0x0e8a45605 # R3, R4 61 - .quad 0x0f200aa66, 1 << 32 # R5, x32 62 - .quad 0x0490d678d, 1 # R6, 1 63 - .quad 0x104d101df, 0 # u 64 - .quad 0x104C11DB7, 0 # P(x) 65 - SYM_DATA_END(constants_CRC_32_BE) 61 + static unsigned long constants_CRC_32_BE[] = { 62 + 0x08833794c, 0x0e6228b11, /* R1, R2 */ 63 + 0x0c5b9cd4c, 0x0e8a45605, /* R3, R4 */ 64 + 0x0f200aa66, 1UL << 32, /* R5, x32 */ 65 + 0x0490d678d, 1, /* R6, 1 */ 66 + 0x104d101df, 0, /* u */ 67 + 0x104C11DB7, 0, /* P(x) */ 68 + }; 66 69 67 - .previous 68 - 69 - GEN_BR_THUNK %r14 70 - 71 - .text 72 - /* 73 - * The CRC-32 function(s) use these calling conventions: 74 - * 75 - * Parameters: 76 - * 77 - * %r2: Initial CRC value, typically ~0; and final CRC (return) value. 78 - * %r3: Input buffer pointer, performance might be improved if the 79 - * buffer is on a doubleword boundary. 80 - * %r4: Length of the buffer, must be 64 bytes or greater. 70 + /** 71 + * crc32_be_vgfm_16 - Compute CRC-32 (BE variant) with vector registers 72 + * @crc: Initial CRC value, typically ~0. 
73 + * @buf: Input buffer pointer, performance might be improved if the 74 + * buffer is on a doubleword boundary. 75 + * @size: Size of the buffer, must be 64 bytes or greater. 81 76 * 82 77 * Register usage: 83 - * 84 - * %r5: CRC-32 constant pool base pointer. 85 78 * V0: Initial CRC value and intermediate constants and results. 86 79 * V1..V4: Data for CRC computation. 87 80 * V5..V8: Next data chunks that are fetched from the input buffer. 88 - * 89 81 * V9..V14: CRC-32 constants. 90 82 */ 91 - SYM_FUNC_START(crc32_be_vgfm_16) 83 + u32 crc32_be_vgfm_16(u32 crc, unsigned char const *buf, size_t size) 84 + { 92 85 /* Load CRC-32 constants */ 93 - larl %r5,constants_CRC_32_BE 94 - VLM CONST_R1R2,CONST_CRC_POLY,0,%r5 86 + fpu_vlm(CONST_R1R2, CONST_CRC_POLY, &constants_CRC_32_BE); 87 + fpu_vzero(0); 95 88 96 89 /* Load the initial CRC value into the leftmost word of V0. */ 97 - VZERO %v0 98 - VLVGF %v0,%r2,0 90 + fpu_vlvgf(0, crc, 0); 99 91 100 92 /* Load a 64-byte data chunk and XOR with CRC */ 101 - VLM %v1,%v4,0,%r3 /* 64-bytes into V1..V4 */ 102 - VX %v1,%v0,%v1 /* V1 ^= CRC */ 103 - aghi %r3,64 /* BUF = BUF + 64 */ 104 - aghi %r4,-64 /* LEN = LEN - 64 */ 93 + fpu_vlm(1, 4, buf); 94 + fpu_vx(1, 0, 1); 95 + buf += 64; 96 + size -= 64; 105 97 106 - /* Check remaining buffer size and jump to proper folding method */ 107 - cghi %r4,64 108 - jl .Lless_than_64bytes 98 + while (size >= 64) { 99 + /* Load the next 64-byte data chunk into V5 to V8 */ 100 + fpu_vlm(5, 8, buf); 109 101 110 - .Lfold_64bytes_loop: 111 - /* Load the next 64-byte data chunk into V5 to V8 */ 112 - VLM %v5,%v8,0,%r3 102 + /* 103 + * Perform a GF(2) multiplication of the doublewords in V1 with 104 + * the reduction constants in V0. The intermediate result is 105 + * then folded (accumulated) with the next data chunk in V5 and 106 + * stored in V1. Repeat this step for the register contents 107 + * in V2, V3, and V4 respectively. 
108 + */ 109 + fpu_vgfmag(1, CONST_R1R2, 1, 5); 110 + fpu_vgfmag(2, CONST_R1R2, 2, 6); 111 + fpu_vgfmag(3, CONST_R1R2, 3, 7); 112 + fpu_vgfmag(4, CONST_R1R2, 4, 8); 113 + buf += 64; 114 + size -= 64; 115 + } 113 116 114 - /* 115 - * Perform a GF(2) multiplication of the doublewords in V1 with 116 - * the reduction constants in V0. The intermediate result is 117 - * then folded (accumulated) with the next data chunk in V5 and 118 - * stored in V1. Repeat this step for the register contents 119 - * in V2, V3, and V4 respectively. 120 - */ 121 - VGFMAG %v1,CONST_R1R2,%v1,%v5 122 - VGFMAG %v2,CONST_R1R2,%v2,%v6 123 - VGFMAG %v3,CONST_R1R2,%v3,%v7 124 - VGFMAG %v4,CONST_R1R2,%v4,%v8 125 - 126 - /* Adjust buffer pointer and length for next loop */ 127 - aghi %r3,64 /* BUF = BUF + 64 */ 128 - aghi %r4,-64 /* LEN = LEN - 64 */ 129 - 130 - cghi %r4,64 131 - jnl .Lfold_64bytes_loop 132 - 133 - .Lless_than_64bytes: 134 117 /* Fold V1 to V4 into a single 128-bit value in V1 */ 135 - VGFMAG %v1,CONST_R3R4,%v1,%v2 136 - VGFMAG %v1,CONST_R3R4,%v1,%v3 137 - VGFMAG %v1,CONST_R3R4,%v1,%v4 118 + fpu_vgfmag(1, CONST_R3R4, 1, 2); 119 + fpu_vgfmag(1, CONST_R3R4, 1, 3); 120 + fpu_vgfmag(1, CONST_R3R4, 1, 4); 138 121 139 - /* Check whether to continue with 64-bit folding */ 140 - cghi %r4,16 141 - jl .Lfinal_fold 122 + while (size >= 16) { 123 + fpu_vl(2, buf); 124 + fpu_vgfmag(1, CONST_R3R4, 1, 2); 125 + buf += 16; 126 + size -= 16; 127 + } 142 128 143 - .Lfold_16bytes_loop: 144 - 145 - VL %v2,0,,%r3 /* Load next data chunk */ 146 - VGFMAG %v1,CONST_R3R4,%v1,%v2 /* Fold next data chunk */ 147 - 148 - /* Adjust buffer pointer and size for folding next data chunk */ 149 - aghi %r3,16 150 - aghi %r4,-16 151 - 152 - /* Process remaining data chunks */ 153 - cghi %r4,16 154 - jnl .Lfold_16bytes_loop 155 - 156 - .Lfinal_fold: 157 129 /* 158 130 * The R5 constant is used to fold a 128-bit value into an 96-bit value 159 131 * that is XORed with the next 96-bit input data chunk. 
To use a single ··· 130 164 * form an intermediate 96-bit value (with appended zeros) which is then 131 165 * XORed with the intermediate reduction result. 132 166 */ 133 - VGFMG %v1,CONST_R5,%v1 167 + fpu_vgfmg(1, CONST_R5, 1); 134 168 135 169 /* 136 170 * Further reduce the remaining 96-bit value to a 64-bit value using a ··· 139 173 * doubleword with R6. The result is a 64-bit value and is subject to 140 174 * the Barret reduction. 141 175 */ 142 - VGFMG %v1,CONST_R6,%v1 176 + fpu_vgfmg(1, CONST_R6, 1); 143 177 144 178 /* 145 179 * The input values to the Barret reduction are the degree-63 polynomial ··· 160 194 */ 161 195 162 196 /* T1(x) = floor( R(x) / x^32 ) GF2MUL u */ 163 - VUPLLF %v2,%v1 164 - VGFMG %v2,CONST_RU_POLY,%v2 197 + fpu_vupllf(2, 1); 198 + fpu_vgfmg(2, CONST_RU_POLY, 2); 165 199 166 200 /* 167 201 * Compute the GF(2) product of the CRC polynomial in VO with T1(x) in 168 202 * V2 and XOR the intermediate result, T2(x), with the value in V1. 169 203 * The final result is in the rightmost word of V2. 170 204 */ 171 - VUPLLF %v2,%v2 172 - VGFMAG %v2,CONST_CRC_POLY,%v2,%v1 173 - 174 - .Ldone: 175 - VLGVF %r2,%v2,3 176 - BR_EX %r14 177 - SYM_FUNC_END(crc32_be_vgfm_16) 178 - 179 - .previous 205 + fpu_vupllf(2, 2); 206 + fpu_vgfmag(2, CONST_CRC_POLY, 2, 1); 207 + return fpu_vlgvf(2, 3); 208 + }
+102 -137
arch/s390/crypto/crc32le-vx.S arch/s390/crypto/crc32le-vx.c
··· 13 13 * Author(s): Hendrik Brueckner <brueckner@linux.vnet.ibm.com> 14 14 */ 15 15 16 - #include <linux/linkage.h> 17 - #include <asm/nospec-insn.h> 18 - #include <asm/vx-insn.h> 16 + #include <linux/types.h> 17 + #include <asm/fpu.h> 18 + #include "crc32-vx.h" 19 19 20 20 /* Vector register range containing CRC-32 constants */ 21 - #define CONST_PERM_LE2BE %v9 22 - #define CONST_R2R1 %v10 23 - #define CONST_R4R3 %v11 24 - #define CONST_R5 %v12 25 - #define CONST_RU_POLY %v13 26 - #define CONST_CRC_POLY %v14 27 - 28 - .data 29 - .balign 8 21 + #define CONST_PERM_LE2BE 9 22 + #define CONST_R2R1 10 23 + #define CONST_R4R3 11 24 + #define CONST_R5 12 25 + #define CONST_RU_POLY 13 26 + #define CONST_CRC_POLY 14 30 27 31 28 /* 32 29 * The CRC-32 constant block contains reduction constants to fold and ··· 56 59 * P'(x) = 0x82F63B78 57 60 */ 58 61 59 - SYM_DATA_START_LOCAL(constants_CRC_32_LE) 60 - .octa 0x0F0E0D0C0B0A09080706050403020100 # BE->LE mask 61 - .quad 0x1c6e41596, 0x154442bd4 # R2, R1 62 - .quad 0x0ccaa009e, 0x1751997d0 # R4, R3 63 - .octa 0x163cd6124 # R5 64 - .octa 0x1F7011641 # u' 65 - .octa 0x1DB710641 # P'(x) << 1 66 - SYM_DATA_END(constants_CRC_32_LE) 62 + static unsigned long constants_CRC_32_LE[] = { 63 + 0x0f0e0d0c0b0a0908, 0x0706050403020100, /* BE->LE mask */ 64 + 0x1c6e41596, 0x154442bd4, /* R2, R1 */ 65 + 0x0ccaa009e, 0x1751997d0, /* R4, R3 */ 66 + 0x0, 0x163cd6124, /* R5 */ 67 + 0x0, 0x1f7011641, /* u' */ 68 + 0x0, 0x1db710641 /* P'(x) << 1 */ 69 + }; 67 70 68 - SYM_DATA_START_LOCAL(constants_CRC_32C_LE) 69 - .octa 0x0F0E0D0C0B0A09080706050403020100 # BE->LE mask 70 - .quad 0x09e4addf8, 0x740eef02 # R2, R1 71 - .quad 0x14cd00bd6, 0xf20c0dfe # R4, R3 72 - .octa 0x0dd45aab8 # R5 73 - .octa 0x0dea713f1 # u' 74 - .octa 0x105ec76f0 # P'(x) << 1 75 - SYM_DATA_END(constants_CRC_32C_LE) 71 + static unsigned long constants_CRC_32C_LE[] = { 72 + 0x0f0e0d0c0b0a0908, 0x0706050403020100, /* BE->LE mask */ 73 + 0x09e4addf8, 0x740eef02, /* R2, R1 */ 74 + 
+	0x14cd00bd6, 0xf20c0dfe,	/* R4, R3 */
+	0x0, 0x0dd45aab8,		/* R5 */
+	0x0, 0x0dea713f1,		/* u' */
+	0x0, 0x105ec76f0		/* P'(x) << 1 */
+};
 
-.previous
-
-	GEN_BR_THUNK %r14
-
-	.text
-
-/*
- * The CRC-32 functions use these calling conventions:
- *
- * Parameters:
- *
- * %r2:	Initial CRC value, typically ~0; and final CRC (return) value.
- * %r3:	Input buffer pointer, performance might be improved if the
- *	buffer is on a doubleword boundary.
- * %r4: Length of the buffer, must be 64 bytes or greater.
+/**
+ * crc32_le_vgfm_generic - Compute CRC-32 (LE variant) with vector registers
+ * @crc: Initial CRC value, typically ~0.
+ * @buf: Input buffer pointer, performance might be improved if the
+ *	 buffer is on a doubleword boundary.
+ * @size: Size of the buffer, must be 64 bytes or greater.
+ * @constants: CRC-32 constant pool base pointer.
  *
  * Register usage:
- *
- * %r5:	CRC-32 constant pool base pointer.
- * V0:	Initial CRC value and intermediate constants and results.
- * V1..V4:	Data for CRC computation.
- * V5..V8:	Next data chunks that are fetched from the input buffer.
- * V9:	Constant for BE->LE conversion and shift operations
- *
+ *	V0:	  Initial CRC value and intermediate constants and results.
+ *	V1..V4:	  Data for CRC computation.
+ *	V5..V8:	  Next data chunks that are fetched from the input buffer.
+ *	V9:	  Constant for BE->LE conversion and shift operations
  *	V10..V14: CRC-32 constants.
  */
-
-SYM_FUNC_START(crc32_le_vgfm_16)
-	larl	%r5,constants_CRC_32_LE
-	j	crc32_le_vgfm_generic
-SYM_FUNC_END(crc32_le_vgfm_16)
-
-SYM_FUNC_START(crc32c_le_vgfm_16)
-	larl	%r5,constants_CRC_32C_LE
-	j	crc32_le_vgfm_generic
-SYM_FUNC_END(crc32c_le_vgfm_16)
-
-SYM_FUNC_START(crc32_le_vgfm_generic)
+static u32 crc32_le_vgfm_generic(u32 crc, unsigned char const *buf, size_t size, unsigned long *constants)
+{
 	/* Load CRC-32 constants */
-	VLM	CONST_PERM_LE2BE,CONST_CRC_POLY,0,%r5
+	fpu_vlm(CONST_PERM_LE2BE, CONST_CRC_POLY, constants);
 
 	/*
 	 * Load the initial CRC value.
···
	 * vector register and is later XORed with the LSB portion
	 * of the loaded input data.
	 */
-	VZERO	%v0			/* Clear V0 */
-	VLVGF	%v0,%r2,3		/* Load CRC into rightmost word */
+	fpu_vzero(0);		/* Clear V0 */
+	fpu_vlvgf(0, crc, 3);	/* Load CRC into rightmost word */
 
 	/* Load a 64-byte data chunk and XOR with CRC */
-	VLM	%v1,%v4,0,%r3		/* 64-bytes into V1..V4 */
-	VPERM	%v1,%v1,%v1,CONST_PERM_LE2BE
-	VPERM	%v2,%v2,%v2,CONST_PERM_LE2BE
-	VPERM	%v3,%v3,%v3,CONST_PERM_LE2BE
-	VPERM	%v4,%v4,%v4,CONST_PERM_LE2BE
+	fpu_vlm(1, 4, buf);
+	fpu_vperm(1, 1, 1, CONST_PERM_LE2BE);
+	fpu_vperm(2, 2, 2, CONST_PERM_LE2BE);
+	fpu_vperm(3, 3, 3, CONST_PERM_LE2BE);
+	fpu_vperm(4, 4, 4, CONST_PERM_LE2BE);
 
-	VX	%v1,%v0,%v1		/* V1 ^= CRC */
-	aghi	%r3,64			/* BUF = BUF + 64 */
-	aghi	%r4,-64			/* LEN = LEN - 64 */
+	fpu_vx(1, 0, 1);	/* V1 ^= CRC */
+	buf += 64;
+	size -= 64;
 
-	cghi	%r4,64
-	jl	.Lless_than_64bytes
+	while (size >= 64) {
+		fpu_vlm(5, 8, buf);
+		fpu_vperm(5, 5, 5, CONST_PERM_LE2BE);
+		fpu_vperm(6, 6, 6, CONST_PERM_LE2BE);
+		fpu_vperm(7, 7, 7, CONST_PERM_LE2BE);
+		fpu_vperm(8, 8, 8, CONST_PERM_LE2BE);
+		/*
+		 * Perform a GF(2) multiplication of the doublewords in V1 with
+		 * the R1 and R2 reduction constants in V0. The intermediate
+		 * result is then folded (accumulated) with the next data chunk
+		 * in V5 and stored in V1. Repeat this step for the register
+		 * contents in V2, V3, and V4 respectively.
+		 */
+		fpu_vgfmag(1, CONST_R2R1, 1, 5);
+		fpu_vgfmag(2, CONST_R2R1, 2, 6);
+		fpu_vgfmag(3, CONST_R2R1, 3, 7);
+		fpu_vgfmag(4, CONST_R2R1, 4, 8);
+		buf += 64;
+		size -= 64;
+	}
 
-.Lfold_64bytes_loop:
-	/* Load the next 64-byte data chunk into V5 to V8 */
-	VLM	%v5,%v8,0,%r3
-	VPERM	%v5,%v5,%v5,CONST_PERM_LE2BE
-	VPERM	%v6,%v6,%v6,CONST_PERM_LE2BE
-	VPERM	%v7,%v7,%v7,CONST_PERM_LE2BE
-	VPERM	%v8,%v8,%v8,CONST_PERM_LE2BE
-
-	/*
-	 * Perform a GF(2) multiplication of the doublewords in V1 with
-	 * the R1 and R2 reduction constants in V0. The intermediate result
-	 * is then folded (accumulated) with the next data chunk in V5 and
-	 * stored in V1. Repeat this step for the register contents
-	 * in V2, V3, and V4 respectively.
-	 */
-	VGFMAG	%v1,CONST_R2R1,%v1,%v5
-	VGFMAG	%v2,CONST_R2R1,%v2,%v6
-	VGFMAG	%v3,CONST_R2R1,%v3,%v7
-	VGFMAG	%v4,CONST_R2R1,%v4,%v8
-
-	aghi	%r3,64			/* BUF = BUF + 64 */
-	aghi	%r4,-64			/* LEN = LEN - 64 */
-
-	cghi	%r4,64
-	jnl	.Lfold_64bytes_loop
-
-.Lless_than_64bytes:
 	/*
 	 * Fold V1 to V4 into a single 128-bit value in V1.  Multiply V1 with R3
 	 * and R4 and accumulating the next 128-bit chunk until a single 128-bit
 	 * value remains.
 	 */
-	VGFMAG	%v1,CONST_R4R3,%v1,%v2
-	VGFMAG	%v1,CONST_R4R3,%v1,%v3
-	VGFMAG	%v1,CONST_R4R3,%v1,%v4
+	fpu_vgfmag(1, CONST_R4R3, 1, 2);
+	fpu_vgfmag(1, CONST_R4R3, 1, 3);
+	fpu_vgfmag(1, CONST_R4R3, 1, 4);
 
-	cghi	%r4,16
-	jl	.Lfinal_fold
+	while (size >= 16) {
+		fpu_vl(2, buf);
+		fpu_vperm(2, 2, 2, CONST_PERM_LE2BE);
+		fpu_vgfmag(1, CONST_R4R3, 1, 2);
+		buf += 16;
+		size -= 16;
+	}
 
-.Lfold_16bytes_loop:
-
-	VL	%v2,0,,%r3		/* Load next data chunk */
-	VPERM	%v2,%v2,%v2,CONST_PERM_LE2BE
-	VGFMAG	%v1,CONST_R4R3,%v1,%v2	/* Fold next data chunk */
-
-	aghi	%r3,16
-	aghi	%r4,-16
-
-	cghi	%r4,16
-	jnl	.Lfold_16bytes_loop
-
-.Lfinal_fold:
 	/*
 	 * Set up a vector register for byte shifts.  The shift value must
 	 * be loaded in bits 1-4 in byte element 7 of a vector register.
 	 * Shift by 8 bytes: 0x40
 	 * Shift by 4 bytes: 0x20
 	 */
-	VLEIB	%v9,0x40,7
+	fpu_vleib(9, 0x40, 7);
 
 	/*
 	 * Prepare V0 for the next GF(2) multiplication: shift V0 by 8 bytes
 	 * to move R4 into the rightmost doubleword and set the leftmost
 	 * doubleword to 0x1.
 	 */
-	VSRLB	%v0,CONST_R4R3,%v9
-	VLEIG	%v0,1,0
+	fpu_vsrlb(0, CONST_R4R3, 9);
+	fpu_vleig(0, 1, 0);
 
 	/*
 	 * Compute GF(2) product of V1 and V0.  The rightmost doubleword
···
	 * multiplied by 0x1 and is then XORed with rightmost product.
	 * Implicitly, the intermediate leftmost product becomes padded
	 */
-	VGFMG	%v1,%v0,%v1
+	fpu_vgfmg(1, 0, 1);
 
 	/*
 	 * Now do the final 32-bit fold by multiplying the rightmost word
···
	 * rightmost doubleword and the leftmost doubleword is zero to ignore
	 * the leftmost product of V1.
 	 */
-	VLEIB	%v9,0x20,7		/* Shift by words */
-	VSRLB	%v2,%v1,%v9		/* Store remaining bits in V2 */
-	VUPLLF	%v1,%v1			/* Split rightmost doubleword */
-	VGFMAG	%v1,CONST_R5,%v1,%v2	/* V1 = (V1 * R5) XOR V2 */
+	fpu_vleib(9, 0x20, 7);		/* Shift by words */
+	fpu_vsrlb(2, 1, 9);		/* Store remaining bits in V2 */
+	fpu_vupllf(1, 1);		/* Split rightmost doubleword */
+	fpu_vgfmag(1, CONST_R5, 1, 2);	/* V1 = (V1 * R5) XOR V2 */
 
 	/*
 	 * Apply a Barret reduction to compute the final 32-bit CRC value.
···
	 */
 
 	/* T1(x) = floor( R(x) / x^32 ) GF2MUL u */
-	VUPLLF	%v2,%v1
-	VGFMG	%v2,CONST_RU_POLY,%v2
+	fpu_vupllf(2, 1);
+	fpu_vgfmg(2, CONST_RU_POLY, 2);
 
 	/*
 	 * Compute the GF(2) product of the CRC polynomial with T1(x) in
 	 * V2 and XOR the intermediate result, T2(x), with the value in V1.
 	 * The final result is stored in word element 2 of V2.
 	 */
-	VUPLLF	%v2,%v2
-	VGFMAG	%v2,CONST_CRC_POLY,%v2,%v1
+	fpu_vupllf(2, 2);
+	fpu_vgfmag(2, CONST_CRC_POLY, 2, 1);
 
-.Ldone:
-	VLGVF	%r2,%v2,2
-	BR_EX	%r14
-SYM_FUNC_END(crc32_le_vgfm_generic)
+	return fpu_vlgvf(2, 2);
+}
 
-.previous
+u32 crc32_le_vgfm_16(u32 crc, unsigned char const *buf, size_t size)
+{
+	return crc32_le_vgfm_generic(crc, buf, size, &constants_CRC_32_LE[0]);
+}
+
+u32 crc32c_le_vgfm_16(u32 crc, unsigned char const *buf, size_t size)
+{
+	return crc32_le_vgfm_generic(crc, buf, size, &constants_CRC_32C_LE[0]);
+}
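For reference, the value the vectorized routine above folds down to is the conventional bit-reflected CRC-32. A minimal portable bitwise sketch (not part of this patch; the two polynomial constants are the well-known CRC-32 and CRC-32C generators, and the usual ~0 pre/post conditioning is done inside for convenience):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define CRC32_POLY_LE	0xedb88320u	/* CRC-32 (IEEE 802.3), reflected */
#define CRC32C_POLY_LE	0x82f63b78u	/* CRC-32C (Castagnoli), reflected */

/* Bit-reflected CRC-32, one bit per iteration. */
static uint32_t crc32_le_generic(uint32_t crc, const unsigned char *buf,
				 size_t size, uint32_t poly)
{
	crc = ~crc;
	while (size--) {
		crc ^= *buf++;
		for (int i = 0; i < 8; i++)
			crc = (crc >> 1) ^ ((crc & 1) ? poly : 0);
	}
	return ~crc;
}
```

The vector code computes the same polynomial arithmetic, but 64 bytes at a time via GF(2) folding and a final Barrett reduction.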
+2 -14
arch/s390/crypto/paes_s390.c
···
 static inline int __paes_keyblob2pkey(struct key_blob *kb,
				      struct pkey_protkey *pk)
 {
-	int i, ret;
-
-	/* try three times in case of failure */
-	for (i = 0; i < 3; i++) {
-		if (i > 0 && ret == -EAGAIN && in_task())
-			if (msleep_interruptible(1000))
-				return -EINTR;
-		ret = pkey_keyblob2pkey(kb->key, kb->keylen,
-					pk->protkey, &pk->len, &pk->type);
-		if (ret == 0)
-			break;
-	}
-
-	return ret;
+	return pkey_keyblob2pkey(kb->key, kb->keylen,
+				 pk->protkey, &pk->len, &pk->type);
 }
 
 static inline int __paes_convert_key(struct s390_paes_ctx *ctx)
+1 -2
arch/s390/hypfs/hypfs_diag0c.c
···
  */
 static void diag0c_fn(void *data)
 {
-	diag_stat_inc(DIAG_STAT_X00C);
-	diag_amode31_ops.diag0c(((void **)data)[smp_processor_id()]);
+	diag0c(((void **)data)[smp_processor_id()]);
 }
 
 /*
+2 -2
arch/s390/hypfs/hypfs_sprp.c
···
 static inline unsigned long __hypfs_sprp_diag304(void *data, unsigned long cmd)
 {
-	union register_pair r1 = { .even = (unsigned long)data, };
+	union register_pair r1 = { .even = virt_to_phys(data), };
 
 	asm volatile("diag %[r1],%[r3],0x304\n"
 		     : [r1] "+&d" (r1.pair)
···
 	int rc;
 
 	rc = -ENOMEM;
-	data = (void *) get_zeroed_page(GFP_KERNEL | GFP_DMA);
+	data = (void *)get_zeroed_page(GFP_KERNEL);
 	diag304 = kzalloc(sizeof(*diag304), GFP_KERNEL);
 	if (!data || !diag304)
 		goto out;
+38
arch/s390/include/asm/access-regs.h
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright IBM Corp. 1999, 2024
+ */
+
+#ifndef __ASM_S390_ACCESS_REGS_H
+#define __ASM_S390_ACCESS_REGS_H
+
+#include <linux/instrumented.h>
+#include <asm/sigcontext.h>
+
+struct access_regs {
+	unsigned int regs[NUM_ACRS];
+};
+
+static inline void save_access_regs(unsigned int *acrs)
+{
+	struct access_regs *regs = (struct access_regs *)acrs;
+
+	instrument_write(regs, sizeof(*regs));
+	asm volatile("stamy 0,15,%[regs]"
+		     : [regs] "=QS" (*regs)
+		     :
+		     : "memory");
+}
+
+static inline void restore_access_regs(unsigned int *acrs)
+{
+	struct access_regs *regs = (struct access_regs *)acrs;
+
+	instrument_read(regs, sizeof(*regs));
+	asm volatile("lamy 0,15,%[regs]"
+		     :
+		     : [regs] "QS" (*regs)
+		     : "memory");
+}
+
+#endif /* __ASM_S390_ACCESS_REGS_H */
+2 -2
arch/s390/include/asm/appldata.h
···
 	parm_list->function = fn;
 	parm_list->parlist_length = sizeof(*parm_list);
 	parm_list->buffer_length = length;
-	parm_list->product_id_addr = (unsigned long) id;
+	parm_list->product_id_addr = virt_to_phys(id);
 	parm_list->buffer_addr = virt_to_phys(buffer);
 	diag_stat_inc(DIAG_STAT_X0DC);
 	asm volatile(
 		"	diag	%1,%0,0xdc"
 		: "=d" (ry)
-		: "d" (parm_list), "m" (*parm_list), "m" (*id)
+		: "d" (virt_to_phys(parm_list)), "m" (*parm_list), "m" (*id)
 		: "cc");
 	return ry;
 }
+1 -1
arch/s390/include/asm/asm-prototypes.h
···
 #include <linux/kvm_host.h>
 #include <linux/ftrace.h>
-#include <asm/fpu/api.h>
+#include <asm/fpu.h>
 #include <asm-generic/asm-prototypes.h>
 
 __int128_t __ashlti3(__int128_t a, int b);
+2 -2
arch/s390/include/asm/bug.h
···
 		".section .rodata.str,\"aMS\",@progbits,1\n"	\
 		"1:	.asciz	\""__FILE__"\"\n"		\
 		".previous\n"					\
-		".section __bug_table,\"awM\",@progbits,%2\n"	\
+		".section __bug_table,\"aw\"\n"			\
 		"2:	.long	0b-.\n"				\
 		"	.long	1b-.\n"				\
 		"	.short	%0,%1\n"			\
···
 #define __EMIT_BUG(x) do {					\
 	asm_inline volatile(					\
 		"0:	mc	0,0\n"				\
-		".section __bug_table,\"awM\",@progbits,%1\n"	\
+		".section __bug_table,\"aw\"\n"			\
 		"1:	.long	0b-.\n"				\
 		"	.short	%0\n"				\
 		"	.org	1b+%1\n"			\
+11 -18
arch/s390/include/asm/checksum.h
···
 #ifndef _S390_CHECKSUM_H
 #define _S390_CHECKSUM_H
 
-#include <linux/kasan-checks.h>
+#include <linux/instrumented.h>
 #include <linux/in6.h>
 
-/*
- * Computes the checksum of a memory block at buff, length len,
- * and adds in "sum" (32-bit).
- *
- * Returns a 32-bit number suitable for feeding into itself
- * or csum_tcpudp_magic.
- *
- * This function must be called with even lengths, except
- * for the last fragment, which may be odd.
- *
- * It's best to have buff aligned on a 32-bit boundary.
- */
-static inline __wsum csum_partial(const void *buff, int len, __wsum sum)
+static inline __wsum cksm(const void *buff, int len, __wsum sum)
 {
 	union register_pair rp = {
-		.even = (unsigned long) buff,
-		.odd = (unsigned long) len,
+		.even = (unsigned long)buff,
+		.odd = (unsigned long)len,
 	};
 
-	kasan_check_read(buff, len);
-	asm volatile(
+	instrument_read(buff, len);
+	asm volatile("\n"
 		"0:	cksm	%[sum],%[rp]\n"
 		"	jo	0b\n"
 		: [sum] "+&d" (sum), [rp] "+&d" (rp.pair) : : "cc", "memory");
 	return sum;
 }
+
+__wsum csum_partial(const void *buff, int len, __wsum sum);
+
+#define _HAVE_ARCH_CSUM_AND_COPY
+__wsum csum_partial_copy_nocheck(const void *src, void *dst, int len);
 
 /*
  * Fold a partial checksum without adding pseudo headers.
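For context (not part of the patch): csum_partial() computes the running 32-bit ones'-complement sum that csum_fold() later reduces to the final 16-bit Internet checksum, which is what the s390 CKSM instruction and the new vector variant accelerate. A portable reference sketch, assuming RFC 1071 semantics; the handling of a trailing odd byte is endian-dependent in the real kernel and simplified here:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* 32-bit ones'-complement sum over 16-bit words (RFC 1071 style),
 * kept in 32 bits like the kernel's __wsum. */
static uint32_t csum_partial_ref(const void *buff, size_t len, uint32_t sum)
{
	const unsigned char *p = buff;
	uint64_t acc = sum;

	while (len >= 2) {
		uint16_t w;

		memcpy(&w, p, 2);	/* host byte order */
		acc += w;
		p += 2;
		len -= 2;
	}
	if (len)
		acc += *p;		/* trailing odd byte, simplified */
	while (acc >> 32)		/* fold carries back into 32 bits */
		acc = (acc & 0xffffffffu) + (acc >> 32);
	return (uint32_t)acc;
}

/* Reduce to the final 16-bit checksum, as csum_fold() does. */
static uint16_t csum_fold_ref(uint32_t sum)
{
	sum = (sum & 0xffff) + (sum >> 16);
	sum = (sum & 0xffff) + (sum >> 16);
	return (uint16_t)~sum;
}
```

The new out-of-line csum_partial() keeps these semantics but uses VCKSM over full vector registers.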
+11 -4
arch/s390/include/asm/diag.h
···
 void diag_stat_inc(enum diag_stat_enum nr);
 void diag_stat_inc_norecursion(enum diag_stat_enum nr);
 
+struct hypfs_diag0c_entry;
+
+/*
+ * Diagnose 0c: Pseudo Timer
+ */
+void diag0c(struct hypfs_diag0c_entry *data);
+
 /*
  * Diagnose 10: Release page range
  */
···
  */
 struct diag_ops {
 	int (*diag210)(struct diag210 *addr);
-	int (*diag26c)(void *req, void *resp, enum diag26c_sc subcode);
+	int (*diag26c)(unsigned long rx, unsigned long rx1, enum diag26c_sc subcode);
 	int (*diag14)(unsigned long rx, unsigned long ry1, unsigned long subcode);
 	int (*diag8c)(struct diag8c *addr, struct ccw_dev_id *devno, size_t len);
-	void (*diag0c)(struct hypfs_diag0c_entry *entry);
+	void (*diag0c)(unsigned long rx);
 	void (*diag308_reset)(void);
 };
 
···
 extern struct diag210 *__diag210_tmp_amode31;
 
 int _diag210_amode31(struct diag210 *addr);
-int _diag26c_amode31(void *req, void *resp, enum diag26c_sc subcode);
+int _diag26c_amode31(unsigned long rx, unsigned long rx1, enum diag26c_sc subcode);
 int _diag14_amode31(unsigned long rx, unsigned long ry1, unsigned long subcode);
-void _diag0c_amode31(struct hypfs_diag0c_entry *entry);
+void _diag0c_amode31(unsigned long rx);
 void _diag308_reset_amode31(void);
 int _diag8c_amode31(struct diag8c *addr, struct ccw_dev_id *devno, size_t len);
+2 -3
arch/s390/include/asm/entry-common.h
···
 #include <linux/processor.h>
 #include <linux/uaccess.h>
 #include <asm/timex.h>
-#include <asm/fpu/api.h>
+#include <asm/fpu.h>
 #include <asm/pai.h>
 
 #define ARCH_EXIT_TO_USER_MODE_WORK (_TIF_GUARDED_STORAGE | _TIF_PER_TRAP)
···
 
 static __always_inline void arch_exit_to_user_mode(void)
 {
-	if (test_cpu_flag(CIF_FPU))
-		__load_fpu_regs();
+	load_user_fpu_regs();
 
 	if (IS_ENABLED(CONFIG_DEBUG_ENTRY))
 		debug_user_asce(1);
+486
arch/s390/include/asm/fpu-insn.h
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Support for Floating Point and Vector Instructions
+ *
+ */
+
+#ifndef __ASM_S390_FPU_INSN_H
+#define __ASM_S390_FPU_INSN_H
+
+#include <asm/fpu-insn-asm.h>
+
+#ifndef __ASSEMBLY__
+
+#include <linux/instrumented.h>
+#include <asm/asm-extable.h>
+
+asm(".include \"asm/fpu-insn-asm.h\"\n");
+
+/*
+ * Various small helper functions, which can and should be used within
+ * kernel fpu code sections. Each function represents only one floating
+ * point or vector instruction (except for helper functions which require
+ * exception handling).
+ *
+ * This allows to use floating point and vector instructions like C
+ * functions, which has the advantage that all supporting code, like
+ * e.g. loops, can be written in easy to read C code.
+ *
+ * Each of the helper functions provides support for code instrumentation,
+ * like e.g. KASAN. Therefore instrumentation is also covered automatically
+ * when using these functions.
+ *
+ * In order to ensure that code generated with the helper functions stays
+ * within kernel fpu sections, which are guarded with kernel_fpu_begin()
+ * and kernel_fpu_end() calls, each function has a mandatory "memory"
+ * barrier.
+ */
+
+static __always_inline void fpu_cefbr(u8 f1, s32 val)
+{
+	asm volatile("cefbr	%[f1],%[val]\n"
+		     :
+		     : [f1] "I" (f1), [val] "d" (val)
+		     : "memory");
+}
+
+static __always_inline unsigned long fpu_cgebr(u8 f2, u8 mode)
+{
+	unsigned long val;
+
+	asm volatile("cgebr	%[val],%[mode],%[f2]\n"
+		     : [val] "=d" (val)
+		     : [f2] "I" (f2), [mode] "I" (mode)
+		     : "memory");
+	return val;
+}
+
+static __always_inline void fpu_debr(u8 f1, u8 f2)
+{
+	asm volatile("debr	%[f1],%[f2]\n"
+		     :
+		     : [f1] "I" (f1), [f2] "I" (f2)
+		     : "memory");
+}
+
+static __always_inline void fpu_ld(unsigned short fpr, freg_t *reg)
+{
+	instrument_read(reg, sizeof(*reg));
+	asm volatile("ld	%[fpr],%[reg]\n"
+		     :
+		     : [fpr] "I" (fpr), [reg] "Q" (reg->ui)
+		     : "memory");
+}
+
+static __always_inline void fpu_ldgr(u8 f1, u32 val)
+{
+	asm volatile("ldgr	%[f1],%[val]\n"
+		     :
+		     : [f1] "I" (f1), [val] "d" (val)
+		     : "memory");
+}
+
+static __always_inline void fpu_lfpc(unsigned int *fpc)
+{
+	instrument_read(fpc, sizeof(*fpc));
+	asm volatile("lfpc	%[fpc]"
+		     :
+		     : [fpc] "Q" (*fpc)
+		     : "memory");
+}
+
+/**
+ * fpu_lfpc_safe - Load floating point control register safely.
+ * @fpc: new value for floating point control register
+ *
+ * Load floating point control register. This may lead to an exception,
+ * since a saved value may have been modified by user space (ptrace,
+ * signal return, kvm registers) to an invalid value. In such a case
+ * set the floating point control register to zero.
+ */
+static inline void fpu_lfpc_safe(unsigned int *fpc)
+{
+	u32 tmp;
+
+	instrument_read(fpc, sizeof(*fpc));
+	asm volatile("\n"
+		"0:	lfpc	%[fpc]\n"
+		"1:	nopr	%%r7\n"
+		".pushsection .fixup, \"ax\"\n"
+		"2:	lghi	%[tmp],0\n"
+		"	sfpc	%[tmp]\n"
+		"	jg	1b\n"
+		".popsection\n"
+		EX_TABLE(1b, 2b)
+		: [tmp] "=d" (tmp)
+		: [fpc] "Q" (*fpc)
+		: "memory");
+}
+
+static __always_inline void fpu_std(unsigned short fpr, freg_t *reg)
+{
+	instrument_write(reg, sizeof(*reg));
+	asm volatile("std	%[fpr],%[reg]\n"
+		     : [reg] "=Q" (reg->ui)
+		     : [fpr] "I" (fpr)
+		     : "memory");
+}
+
+static __always_inline void fpu_sfpc(unsigned int fpc)
+{
+	asm volatile("sfpc	%[fpc]"
+		     :
+		     : [fpc] "d" (fpc)
+		     : "memory");
+}
+
+static __always_inline void fpu_stfpc(unsigned int *fpc)
+{
+	instrument_write(fpc, sizeof(*fpc));
+	asm volatile("stfpc	%[fpc]"
+		     : [fpc] "=Q" (*fpc)
+		     :
+		     : "memory");
+}
+
+static __always_inline void fpu_vab(u8 v1, u8 v2, u8 v3)
+{
+	asm volatile("VAB	%[v1],%[v2],%[v3]"
+		     :
+		     : [v1] "I" (v1), [v2] "I" (v2), [v3] "I" (v3)
+		     : "memory");
+}
+
+static __always_inline void fpu_vcksm(u8 v1, u8 v2, u8 v3)
+{
+	asm volatile("VCKSM	%[v1],%[v2],%[v3]"
+		     :
+		     : [v1] "I" (v1), [v2] "I" (v2), [v3] "I" (v3)
+		     : "memory");
+}
+
+static __always_inline void fpu_vesravb(u8 v1, u8 v2, u8 v3)
+{
+	asm volatile("VESRAVB	%[v1],%[v2],%[v3]"
+		     :
+		     : [v1] "I" (v1), [v2] "I" (v2), [v3] "I" (v3)
+		     : "memory");
+}
+
+static __always_inline void fpu_vgfmag(u8 v1, u8 v2, u8 v3, u8 v4)
+{
+	asm volatile("VGFMAG	%[v1],%[v2],%[v3],%[v4]"
+		     :
+		     : [v1] "I" (v1), [v2] "I" (v2), [v3] "I" (v3), [v4] "I" (v4)
+		     : "memory");
+}
+
+static __always_inline void fpu_vgfmg(u8 v1, u8 v2, u8 v3)
+{
+	asm volatile("VGFMG	%[v1],%[v2],%[v3]"
+		     :
+		     : [v1] "I" (v1), [v2] "I" (v2), [v3] "I" (v3)
+		     : "memory");
+}
+
+#ifdef CONFIG_CC_IS_CLANG
+
+static __always_inline void fpu_vl(u8 v1, const void *vxr)
+{
+	instrument_read(vxr, sizeof(__vector128));
+	asm volatile("\n"
+		"	la	1,%[vxr]\n"
+		"	VL	%[v1],0,,1\n"
+		:
+		: [vxr] "R" (*(__vector128 *)vxr),
+		  [v1] "I" (v1)
+		: "memory", "1");
+}
+
+#else /* CONFIG_CC_IS_CLANG */
+
+static __always_inline void fpu_vl(u8 v1, const void *vxr)
+{
+	instrument_read(vxr, sizeof(__vector128));
+	asm volatile("VL	%[v1],%O[vxr],,%R[vxr]\n"
+		     :
+		     : [vxr] "Q" (*(__vector128 *)vxr),
+		       [v1] "I" (v1)
+		     : "memory");
+}
+
+#endif /* CONFIG_CC_IS_CLANG */
+
+static __always_inline void fpu_vleib(u8 v, s16 val, u8 index)
+{
+	asm volatile("VLEIB	%[v],%[val],%[index]"
+		     :
+		     : [v] "I" (v), [val] "K" (val), [index] "I" (index)
+		     : "memory");
+}
+
+static __always_inline void fpu_vleig(u8 v, s16 val, u8 index)
+{
+	asm volatile("VLEIG	%[v],%[val],%[index]"
+		     :
+		     : [v] "I" (v), [val] "K" (val), [index] "I" (index)
+		     : "memory");
+}
+
+static __always_inline u64 fpu_vlgvf(u8 v, u16 index)
+{
+	u64 val;
+
+	asm volatile("VLGVF	%[val],%[v],%[index]"
+		     : [val] "=d" (val)
+		     : [v] "I" (v), [index] "L" (index)
+		     : "memory");
+	return val;
+}
+
+#ifdef CONFIG_CC_IS_CLANG
+
+static __always_inline void fpu_vll(u8 v1, u32 index, const void *vxr)
+{
+	unsigned int size;
+
+	size = min(index + 1, sizeof(__vector128));
+	instrument_read(vxr, size);
+	asm volatile("\n"
+		"	la	1,%[vxr]\n"
+		"	VLL	%[v1],%[index],0,1\n"
+		:
+		: [vxr] "R" (*(u8 *)vxr),
+		  [index] "d" (index),
+		  [v1] "I" (v1)
+		: "memory", "1");
+}
+
+#else /* CONFIG_CC_IS_CLANG */
+
+static __always_inline void fpu_vll(u8 v1, u32 index, const void *vxr)
+{
+	unsigned int size;
+
+	size = min(index + 1, sizeof(__vector128));
+	instrument_read(vxr, size);
+	asm volatile("VLL	%[v1],%[index],%O[vxr],%R[vxr]\n"
+		     :
+		     : [vxr] "Q" (*(u8 *)vxr),
+		       [index] "d" (index),
+		       [v1] "I" (v1)
+		     : "memory");
+}
+
+#endif /* CONFIG_CC_IS_CLANG */
+
+#ifdef CONFIG_CC_IS_CLANG
+
+#define fpu_vlm(_v1, _v3, _vxrs)					\
+({									\
+	unsigned int size = ((_v3) - (_v1) + 1) * sizeof(__vector128);	\
+	struct {							\
+		__vector128 _v[(_v3) - (_v1) + 1];			\
+	} *_v = (void *)(_vxrs);					\
+									\
+	instrument_read(_v, size);					\
+	asm volatile("\n"						\
+		"	la	1,%[vxrs]\n"				\
+		"	VLM	%[v1],%[v3],0,1\n"			\
+		:							\
+		: [vxrs] "R" (*_v),					\
+		  [v1] "I" (_v1), [v3] "I" (_v3)			\
+		: "memory", "1");					\
+	(_v3) - (_v1) + 1;						\
+})
+
+#else /* CONFIG_CC_IS_CLANG */
+
+#define fpu_vlm(_v1, _v3, _vxrs)					\
+({									\
+	unsigned int size = ((_v3) - (_v1) + 1) * sizeof(__vector128);	\
+	struct {							\
+		__vector128 _v[(_v3) - (_v1) + 1];			\
+	} *_v = (void *)(_vxrs);					\
+									\
+	instrument_read(_v, size);					\
+	asm volatile("VLM	%[v1],%[v3],%O[vxrs],%R[vxrs]\n"	\
+		     :							\
+		     : [vxrs] "Q" (*_v),				\
+		       [v1] "I" (_v1), [v3] "I" (_v3)			\
+		     : "memory");					\
+	(_v3) - (_v1) + 1;						\
+})
+
+#endif /* CONFIG_CC_IS_CLANG */
+
+static __always_inline void fpu_vlr(u8 v1, u8 v2)
+{
+	asm volatile("VLR	%[v1],%[v2]"
+		     :
+		     : [v1] "I" (v1), [v2] "I" (v2)
+		     : "memory");
+}
+
+static __always_inline void fpu_vlvgf(u8 v, u32 val, u16 index)
+{
+	asm volatile("VLVGF	%[v],%[val],%[index]"
+		     :
+		     : [v] "I" (v), [val] "d" (val), [index] "L" (index)
+		     : "memory");
+}
+
+static __always_inline void fpu_vn(u8 v1, u8 v2, u8 v3)
+{
+	asm volatile("VN	%[v1],%[v2],%[v3]"
+		     :
+		     : [v1] "I" (v1), [v2] "I" (v2), [v3] "I" (v3)
+		     : "memory");
+}
+
+static __always_inline void fpu_vperm(u8 v1, u8 v2, u8 v3, u8 v4)
+{
+	asm volatile("VPERM	%[v1],%[v2],%[v3],%[v4]"
+		     :
+		     : [v1] "I" (v1), [v2] "I" (v2), [v3] "I" (v3), [v4] "I" (v4)
+		     : "memory");
+}
+
+static __always_inline void fpu_vrepib(u8 v1, s16 i2)
+{
+	asm volatile("VREPIB	%[v1],%[i2]"
+		     :
+		     : [v1] "I" (v1), [i2] "K" (i2)
+		     : "memory");
+}
+
+static __always_inline void fpu_vsrlb(u8 v1, u8 v2, u8 v3)
+{
+	asm volatile("VSRLB	%[v1],%[v2],%[v3]"
+		     :
+		     : [v1] "I" (v1), [v2] "I" (v2), [v3] "I" (v3)
+		     : "memory");
+}
+
+#ifdef CONFIG_CC_IS_CLANG
+
+static __always_inline void fpu_vst(u8 v1, const void *vxr)
+{
+	instrument_write(vxr, sizeof(__vector128));
+	asm volatile("\n"
+		"	la	1,%[vxr]\n"
+		"	VST	%[v1],0,,1\n"
+		: [vxr] "=R" (*(__vector128 *)vxr)
+		: [v1] "I" (v1)
+		: "memory", "1");
+}
+
+#else /* CONFIG_CC_IS_CLANG */
+
+static __always_inline void fpu_vst(u8 v1, const void *vxr)
+{
+	instrument_write(vxr, sizeof(__vector128));
+	asm volatile("VST	%[v1],%O[vxr],,%R[vxr]\n"
+		     : [vxr] "=Q" (*(__vector128 *)vxr)
+		     : [v1] "I" (v1)
+		     : "memory");
+}
+
+#endif /* CONFIG_CC_IS_CLANG */
+
+#ifdef CONFIG_CC_IS_CLANG
+
+static __always_inline void fpu_vstl(u8 v1, u32 index, const void *vxr)
+{
+	unsigned int size;
+
+	size = min(index + 1, sizeof(__vector128));
+	instrument_write(vxr, size);
+	asm volatile("\n"
+		"	la	1,%[vxr]\n"
+		"	VSTL	%[v1],%[index],0,1\n"
+		: [vxr] "=R" (*(u8 *)vxr)
+		: [index] "d" (index), [v1] "I" (v1)
+		: "memory", "1");
+}
+
+#else /* CONFIG_CC_IS_CLANG */
+
+static __always_inline void fpu_vstl(u8 v1, u32 index, const void *vxr)
+{
+	unsigned int size;
+
+	size = min(index + 1, sizeof(__vector128));
+	instrument_write(vxr, size);
+	asm volatile("VSTL	%[v1],%[index],%O[vxr],%R[vxr]\n"
+		     : [vxr] "=Q" (*(u8 *)vxr)
+		     : [index] "d" (index), [v1] "I" (v1)
+		     : "memory");
+}
+
+#endif /* CONFIG_CC_IS_CLANG */
+
+#ifdef CONFIG_CC_IS_CLANG
+
+#define fpu_vstm(_v1, _v3, _vxrs)					\
+({									\
+	unsigned int size = ((_v3) - (_v1) + 1) * sizeof(__vector128);	\
+	struct {							\
+		__vector128 _v[(_v3) - (_v1) + 1];			\
+	} *_v = (void *)(_vxrs);					\
+									\
+	instrument_write(_v, size);					\
+	asm volatile("\n"						\
+		"	la	1,%[vxrs]\n"				\
+		"	VSTM	%[v1],%[v3],0,1\n"			\
+		: [vxrs] "=R" (*_v)					\
+		: [v1] "I" (_v1), [v3] "I" (_v3)			\
+		: "memory", "1");					\
+	(_v3) - (_v1) + 1;						\
+})
+
+#else /* CONFIG_CC_IS_CLANG */
+
+#define fpu_vstm(_v1, _v3, _vxrs)					\
+({									\
+	unsigned int size = ((_v3) - (_v1) + 1) * sizeof(__vector128);	\
+	struct {							\
+		__vector128 _v[(_v3) - (_v1) + 1];			\
+	} *_v = (void *)(_vxrs);					\
+									\
+	instrument_write(_v, size);					\
+	asm volatile("VSTM	%[v1],%[v3],%O[vxrs],%R[vxrs]\n"	\
+		     : [vxrs] "=Q" (*_v)				\
+		     : [v1] "I" (_v1), [v3] "I" (_v3)			\
+		     : "memory");					\
+	(_v3) - (_v1) + 1;						\
+})
+
+#endif /* CONFIG_CC_IS_CLANG */
+
+static __always_inline void fpu_vupllf(u8 v1, u8 v2)
+{
+	asm volatile("VUPLLF	%[v1],%[v2]"
+		     :
+		     : [v1] "I" (v1), [v2] "I" (v2)
+		     : "memory");
+}
+
+static __always_inline void fpu_vx(u8 v1, u8 v2, u8 v3)
+{
+	asm volatile("VX	%[v1],%[v2],%[v3]"
+		     :
+		     : [v1] "I" (v1), [v2] "I" (v2), [v3] "I" (v3)
+		     : "memory");
+}
+
+static __always_inline void fpu_vzero(u8 v)
+{
+	asm volatile("VZERO	%[v]"
+		     :
+		     : [v] "I" (v)
+		     : "memory");
+}
+
+#endif /* __ASSEMBLY__ */
+#endif	/* __ASM_S390_FPU_INSN_H */
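The VGFM*/VGFMA* wrappers above expose the Galois-field multiply that the CRC folding code is built on: a carry-less polynomial multiplication over GF(2), where partial products are XORed instead of added. As an illustration only (a hypothetical scalar helper, not part of the patch), the 32-bit analogue in plain C:

```c
#include <assert.h>
#include <stdint.h>

/* Carry-less (GF(2)) multiplication of two 32-bit polynomials:
 * partial products are combined with XOR, so there are no carries.
 * This is the scalar analogue of what VGFMG computes on doublewords. */
static uint64_t clmul32(uint32_t a, uint32_t b)
{
	uint64_t r = 0;

	for (int i = 0; i < 32; i++)
		if (b & (1u << i))
			r ^= (uint64_t)a << i;
	return r;
}
```

For example, (x + 1) * (x + 1) = x^2 + 1 in GF(2), because the middle terms cancel: clmul32(3, 3) yields 5, not 9 as integer multiplication would.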
+51
arch/s390/include/asm/fpu-types.h
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * FPU data structures
+ *
+ * Copyright IBM Corp. 2015
+ * Author(s): Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
+ */
+
+#ifndef _ASM_S390_FPU_TYPES_H
+#define _ASM_S390_FPU_TYPES_H
+
+#include <asm/sigcontext.h>
+
+struct fpu {
+	u32 fpc;
+	__vector128 vxrs[__NUM_VXRS] __aligned(8);
+};
+
+struct kernel_fpu_hdr {
+	int mask;
+	u32 fpc;
+};
+
+struct kernel_fpu {
+	struct kernel_fpu_hdr hdr;
+	__vector128 vxrs[] __aligned(8);
+};
+
+#define KERNEL_FPU_STRUCT(vxr_size)				\
+struct kernel_fpu_##vxr_size {					\
+	struct kernel_fpu_hdr hdr;				\
+	__vector128 vxrs[vxr_size] __aligned(8);		\
+}
+
+KERNEL_FPU_STRUCT(8);
+KERNEL_FPU_STRUCT(16);
+KERNEL_FPU_STRUCT(32);
+
+#define DECLARE_KERNEL_FPU_ONSTACK(vxr_size, name)		\
+	struct kernel_fpu_##vxr_size name __uninitialized
+
+#define DECLARE_KERNEL_FPU_ONSTACK8(name)			\
+	DECLARE_KERNEL_FPU_ONSTACK(8, name)
+
+#define DECLARE_KERNEL_FPU_ONSTACK16(name)			\
+	DECLARE_KERNEL_FPU_ONSTACK(16, name)
+
+#define DECLARE_KERNEL_FPU_ONSTACK32(name)			\
+	DECLARE_KERNEL_FPU_ONSTACK(32, name)
+
+#endif /* _ASM_S390_FPU_TYPES_H */
+295
arch/s390/include/asm/fpu.h
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * In-kernel FPU support functions
+ *
+ *
+ * Consider these guidelines before using in-kernel FPU functions:
+ *
+ *  1. Use kernel_fpu_begin() and kernel_fpu_end() to enclose all in-kernel
+ *     use of floating-point or vector registers and instructions.
+ *
+ *  2. For kernel_fpu_begin(), specify the vector register range you want to
+ *     use with the KERNEL_VXR_* constants. Consider these usage guidelines:
+ *
+ *     a) If your function typically runs in process-context, use the lower
+ *	  half of the vector registers, for example, specify KERNEL_VXR_LOW.
+ *     b) If your function typically runs in soft-irq or hard-irq context,
+ *	  prefer using the upper half of the vector registers, for example,
+ *	  specify KERNEL_VXR_HIGH.
+ *
+ *     If you adhere to these guidelines, an interrupted process context
+ *     does not require to save and restore vector registers because of
+ *     disjoint register ranges.
+ *
+ *     Also note that the __kernel_fpu_begin()/__kernel_fpu_end() functions
+ *     includes logic to save and restore up to 16 vector registers at once.
+ *
+ *  3. You can nest kernel_fpu_begin()/kernel_fpu_end() by using different
+ *     struct kernel_fpu states. Vector registers that are in use by outer
+ *     levels are saved and restored. You can minimize the save and restore
+ *     effort by choosing disjoint vector register ranges.
+ *
+ *  5. To use vector floating-point instructions, specify the KERNEL_FPC
+ *     flag to save and restore floating-point controls in addition to any
+ *     vector register range.
+ *
+ *  6. To use floating-point registers and instructions only, specify the
+ *     KERNEL_FPR flag. This flag triggers a save and restore of vector
+ *     registers V0 to V15 and floating-point controls.
+ *
+ * Copyright IBM Corp. 2015
+ * Author(s): Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
+ */
+
+#ifndef _ASM_S390_FPU_H
+#define _ASM_S390_FPU_H
+
+#include <linux/processor.h>
+#include <linux/preempt.h>
+#include <linux/string.h>
+#include <linux/sched.h>
+#include <asm/sigcontext.h>
+#include <asm/fpu-types.h>
+#include <asm/fpu-insn.h>
+#include <asm/facility.h>
+
+static inline bool cpu_has_vx(void)
+{
+	return likely(test_facility(129));
+}
+
+enum {
+	KERNEL_FPC_BIT = 0,
+	KERNEL_VXR_V0V7_BIT,
+	KERNEL_VXR_V8V15_BIT,
+	KERNEL_VXR_V16V23_BIT,
+	KERNEL_VXR_V24V31_BIT,
+};
+
+#define KERNEL_FPC		BIT(KERNEL_FPC_BIT)
+#define KERNEL_VXR_V0V7		BIT(KERNEL_VXR_V0V7_BIT)
+#define KERNEL_VXR_V8V15	BIT(KERNEL_VXR_V8V15_BIT)
+#define KERNEL_VXR_V16V23	BIT(KERNEL_VXR_V16V23_BIT)
+#define KERNEL_VXR_V24V31	BIT(KERNEL_VXR_V24V31_BIT)
+
+#define KERNEL_VXR_LOW		(KERNEL_VXR_V0V7 | KERNEL_VXR_V8V15)
+#define KERNEL_VXR_MID		(KERNEL_VXR_V8V15 | KERNEL_VXR_V16V23)
+#define KERNEL_VXR_HIGH		(KERNEL_VXR_V16V23 | KERNEL_VXR_V24V31)
+
+#define KERNEL_VXR		(KERNEL_VXR_LOW | KERNEL_VXR_HIGH)
+#define KERNEL_FPR		(KERNEL_FPC | KERNEL_VXR_LOW)
+
+void load_fpu_state(struct fpu *state, int flags);
+void save_fpu_state(struct fpu *state, int flags);
+void __kernel_fpu_begin(struct kernel_fpu *state, int flags);
+void __kernel_fpu_end(struct kernel_fpu *state, int flags);
+
+static __always_inline void save_vx_regs(__vector128 *vxrs)
+{
+	fpu_vstm(0, 15, &vxrs[0]);
+	fpu_vstm(16, 31, &vxrs[16]);
+}
+
+static __always_inline void load_vx_regs(__vector128 *vxrs)
+{
+	fpu_vlm(0, 15, &vxrs[0]);
+	fpu_vlm(16, 31, &vxrs[16]);
+}
+
+static __always_inline void __save_fp_regs(freg_t *fprs, unsigned int offset)
+{
+	fpu_std(0, &fprs[0 * offset]);
+	fpu_std(1, &fprs[1 * offset]);
+	fpu_std(2, &fprs[2 * offset]);
+	fpu_std(3, &fprs[3 * offset]);
+	fpu_std(4, &fprs[4 * offset]);
+	fpu_std(5, &fprs[5 * offset]);
+	fpu_std(6, &fprs[6 * offset]);
+	fpu_std(7, &fprs[7 * offset]);
+	fpu_std(8, &fprs[8 * offset]);
+	fpu_std(9, &fprs[9 * offset]);
+	fpu_std(10, &fprs[10 * offset]);
+	fpu_std(11, &fprs[11 * offset]);
+	fpu_std(12, &fprs[12 * offset]);
+	fpu_std(13, &fprs[13 * offset]);
+	fpu_std(14, &fprs[14 * offset]);
+	fpu_std(15, &fprs[15 * offset]);
+}
+
+static __always_inline void __load_fp_regs(freg_t *fprs, unsigned int offset)
+{
+	fpu_ld(0, &fprs[0 * offset]);
+	fpu_ld(1, &fprs[1 * offset]);
+	fpu_ld(2, &fprs[2 * offset]);
+	fpu_ld(3, &fprs[3 * offset]);
+	fpu_ld(4, &fprs[4 * offset]);
+	fpu_ld(5, &fprs[5 * offset]);
+	fpu_ld(6, &fprs[6 * offset]);
+	fpu_ld(7, &fprs[7 * offset]);
+	fpu_ld(8, &fprs[8 * offset]);
+	fpu_ld(9, &fprs[9 * offset]);
+	fpu_ld(10, &fprs[10 * offset]);
+	fpu_ld(11, &fprs[11 * offset]);
+	fpu_ld(12, &fprs[12 * offset]);
+	fpu_ld(13, &fprs[13 * offset]);
+	fpu_ld(14, &fprs[14 * offset]);
+	fpu_ld(15, &fprs[15 * offset]);
+}
+
+static __always_inline void save_fp_regs(freg_t *fprs)
+{
+	__save_fp_regs(fprs, sizeof(freg_t) / sizeof(freg_t));
+}
+
+static __always_inline void load_fp_regs(freg_t *fprs)
+{
+	__load_fp_regs(fprs, sizeof(freg_t) / sizeof(freg_t));
+}
+
+static __always_inline void save_fp_regs_vx(__vector128 *vxrs)
+{
+	freg_t *fprs = (freg_t *)&vxrs[0].high;
+
+	__save_fp_regs(fprs, sizeof(__vector128) / sizeof(freg_t));
+}
+
+static __always_inline void load_fp_regs_vx(__vector128 *vxrs)
+{
+	freg_t *fprs = (freg_t *)&vxrs[0].high;
+
+	__load_fp_regs(fprs, sizeof(__vector128) / sizeof(freg_t));
+}
+
+static inline void load_user_fpu_regs(void)
+{
+	struct thread_struct *thread = &current->thread;
+
+	if (!thread->ufpu_flags)
+		return;
+	load_fpu_state(&thread->ufpu, thread->ufpu_flags);
+	thread->ufpu_flags = 0;
+}
+
+static __always_inline void __save_user_fpu_regs(struct thread_struct *thread, int flags)
+{
+	save_fpu_state(&thread->ufpu, flags);
+	__atomic_or(flags, &thread->ufpu_flags);
+}
+
+static inline void save_user_fpu_regs(void)
+{
+	struct thread_struct *thread = &current->thread;
+	int mask, flags;
+
+	mask = __atomic_or(KERNEL_FPC | KERNEL_VXR, &thread->kfpu_flags);
+	flags = ~READ_ONCE(thread->ufpu_flags) & (KERNEL_FPC | KERNEL_VXR);
+	if (flags)
+		__save_user_fpu_regs(thread, flags);
+	barrier();
+	WRITE_ONCE(thread->kfpu_flags, mask);
+}
+
+static __always_inline void _kernel_fpu_begin(struct kernel_fpu *state, int flags)
+{
+	struct thread_struct *thread = &current->thread;
+	int mask, uflags;
+
+	mask = __atomic_or(flags, &thread->kfpu_flags);
+	state->hdr.mask = mask;
+	uflags = READ_ONCE(thread->ufpu_flags);
+	if ((uflags & flags) != flags)
+		__save_user_fpu_regs(thread, ~uflags & flags);
+	if (mask & flags)
+		__kernel_fpu_begin(state, flags);
+}
+
+static __always_inline void _kernel_fpu_end(struct kernel_fpu *state, int flags)
+{
+	int mask = state->hdr.mask;
+
+	if (mask & flags)
+		__kernel_fpu_end(state, flags);
+	barrier();
+	WRITE_ONCE(current->thread.kfpu_flags, mask);
+}
+
+void __kernel_fpu_invalid_size(void);
+
+static __always_inline void kernel_fpu_check_size(int flags, unsigned int size)
+{
+	unsigned int cnt = 0;
+
+	if (flags & KERNEL_VXR_V0V7)
+		cnt += 8;
+	if (flags & KERNEL_VXR_V8V15)
+		cnt += 8;
+	if (flags & KERNEL_VXR_V16V23)
+		cnt += 8;
+	if (flags & KERNEL_VXR_V24V31)
+		cnt += 8;
+	if (cnt != size)
+ __kernel_fpu_invalid_size(); 232 + } 233 + 234 + #define kernel_fpu_begin(state, flags) \ 235 + { \ 236 + typeof(state) s = (state); \ 237 + int _flags = (flags); \ 238 + \ 239 + kernel_fpu_check_size(_flags, ARRAY_SIZE(s->vxrs)); \ 240 + _kernel_fpu_begin((struct kernel_fpu *)s, _flags); \ 241 + } 242 + 243 + #define kernel_fpu_end(state, flags) \ 244 + { \ 245 + typeof(state) s = (state); \ 246 + int _flags = (flags); \ 247 + \ 248 + kernel_fpu_check_size(_flags, ARRAY_SIZE(s->vxrs)); \ 249 + _kernel_fpu_end((struct kernel_fpu *)s, _flags); \ 250 + } 251 + 252 + static inline void save_kernel_fpu_regs(struct thread_struct *thread) 253 + { 254 + if (!thread->kfpu_flags) 255 + return; 256 + save_fpu_state(&thread->kfpu, thread->kfpu_flags); 257 + } 258 + 259 + static inline void restore_kernel_fpu_regs(struct thread_struct *thread) 260 + { 261 + if (!thread->kfpu_flags) 262 + return; 263 + load_fpu_state(&thread->kfpu, thread->kfpu_flags); 264 + } 265 + 266 + static inline void convert_vx_to_fp(freg_t *fprs, __vector128 *vxrs) 267 + { 268 + int i; 269 + 270 + for (i = 0; i < __NUM_FPRS; i++) 271 + fprs[i].ui = vxrs[i].high; 272 + } 273 + 274 + static inline void convert_fp_to_vx(__vector128 *vxrs, freg_t *fprs) 275 + { 276 + int i; 277 + 278 + for (i = 0; i < __NUM_FPRS; i++) 279 + vxrs[i].high = fprs[i].ui; 280 + } 281 + 282 + static inline void fpregs_store(_s390_fp_regs *fpregs, struct fpu *fpu) 283 + { 284 + fpregs->pad = 0; 285 + fpregs->fpc = fpu->fpc; 286 + convert_vx_to_fp((freg_t *)&fpregs->fprs, fpu->vxrs); 287 + } 288 + 289 + static inline void fpregs_load(_s390_fp_regs *fpregs, struct fpu *fpu) 290 + { 291 + fpu->fpc = fpregs->fpc; 292 + convert_fp_to_vx(fpu->vxrs, (freg_t *)&fpregs->fprs); 293 + } 294 + 295 + #endif /* _ASM_S390_FPU_H */
-126
arch/s390/include/asm/fpu/api.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - /* 3 - * In-kernel FPU support functions 4 - * 5 - * 6 - * Consider these guidelines before using in-kernel FPU functions: 7 - * 8 - * 1. Use kernel_fpu_begin() and kernel_fpu_end() to enclose all in-kernel 9 - * use of floating-point or vector registers and instructions. 10 - * 11 - * 2. For kernel_fpu_begin(), specify the vector register range you want to 12 - * use with the KERNEL_VXR_* constants. Consider these usage guidelines: 13 - * 14 - * a) If your function typically runs in process-context, use the lower 15 - * half of the vector registers, for example, specify KERNEL_VXR_LOW. 16 - * b) If your function typically runs in soft-irq or hard-irq context, 17 - * prefer using the upper half of the vector registers, for example, 18 - * specify KERNEL_VXR_HIGH. 19 - * 20 - * If you adhere to these guidelines, an interrupted process context 21 - * does not require to save and restore vector registers because of 22 - * disjoint register ranges. 23 - * 24 - * Also note that the __kernel_fpu_begin()/__kernel_fpu_end() functions 25 - * includes logic to save and restore up to 16 vector registers at once. 26 - * 27 - * 3. You can nest kernel_fpu_begin()/kernel_fpu_end() by using different 28 - * struct kernel_fpu states. Vector registers that are in use by outer 29 - * levels are saved and restored. You can minimize the save and restore 30 - * effort by choosing disjoint vector register ranges. 31 - * 32 - * 5. To use vector floating-point instructions, specify the KERNEL_FPC 33 - * flag to save and restore floating-point controls in addition to any 34 - * vector register range. 35 - * 36 - * 6. To use floating-point registers and instructions only, specify the 37 - * KERNEL_FPR flag. This flag triggers a save and restore of vector 38 - * registers V0 to V15 and floating-point controls. 39 - * 40 - * Copyright IBM Corp. 
2015 41 - * Author(s): Hendrik Brueckner <brueckner@linux.vnet.ibm.com> 42 - */ 43 - 44 - #ifndef _ASM_S390_FPU_API_H 45 - #define _ASM_S390_FPU_API_H 46 - 47 - #include <linux/preempt.h> 48 - #include <asm/asm-extable.h> 49 - #include <asm/fpu/internal.h> 50 - 51 - void save_fpu_regs(void); 52 - void load_fpu_regs(void); 53 - void __load_fpu_regs(void); 54 - 55 - /** 56 - * sfpc_safe - Set floating point control register safely. 57 - * @fpc: new value for floating point control register 58 - * 59 - * Set floating point control register. This may lead to an exception, 60 - * since a saved value may have been modified by user space (ptrace, 61 - * signal return, kvm registers) to an invalid value. In such a case 62 - * set the floating point control register to zero. 63 - */ 64 - static inline void sfpc_safe(u32 fpc) 65 - { 66 - asm volatile("\n" 67 - "0: sfpc %[fpc]\n" 68 - "1: nopr %%r7\n" 69 - ".pushsection .fixup, \"ax\"\n" 70 - "2: lghi %[fpc],0\n" 71 - " jg 0b\n" 72 - ".popsection\n" 73 - EX_TABLE(1b, 2b) 74 - : [fpc] "+d" (fpc) 75 - : : "memory"); 76 - } 77 - 78 - #define KERNEL_FPC 1 79 - #define KERNEL_VXR_V0V7 2 80 - #define KERNEL_VXR_V8V15 4 81 - #define KERNEL_VXR_V16V23 8 82 - #define KERNEL_VXR_V24V31 16 83 - 84 - #define KERNEL_VXR_LOW (KERNEL_VXR_V0V7|KERNEL_VXR_V8V15) 85 - #define KERNEL_VXR_MID (KERNEL_VXR_V8V15|KERNEL_VXR_V16V23) 86 - #define KERNEL_VXR_HIGH (KERNEL_VXR_V16V23|KERNEL_VXR_V24V31) 87 - 88 - #define KERNEL_VXR (KERNEL_VXR_LOW|KERNEL_VXR_HIGH) 89 - #define KERNEL_FPR (KERNEL_FPC|KERNEL_VXR_LOW) 90 - 91 - struct kernel_fpu; 92 - 93 - /* 94 - * Note the functions below must be called with preemption disabled. 95 - * Do not enable preemption before calling __kernel_fpu_end() to prevent 96 - * an corruption of an existing kernel FPU state. 97 - * 98 - * Prefer using the kernel_fpu_begin()/kernel_fpu_end() pair of functions. 
99 - */ 100 - void __kernel_fpu_begin(struct kernel_fpu *state, u32 flags); 101 - void __kernel_fpu_end(struct kernel_fpu *state, u32 flags); 102 - 103 - 104 - static inline void kernel_fpu_begin(struct kernel_fpu *state, u32 flags) 105 - { 106 - preempt_disable(); 107 - state->mask = S390_lowcore.fpu_flags; 108 - if (!test_cpu_flag(CIF_FPU)) 109 - /* Save user space FPU state and register contents */ 110 - save_fpu_regs(); 111 - else if (state->mask & flags) 112 - /* Save FPU/vector register in-use by the kernel */ 113 - __kernel_fpu_begin(state, flags); 114 - S390_lowcore.fpu_flags |= flags; 115 - } 116 - 117 - static inline void kernel_fpu_end(struct kernel_fpu *state, u32 flags) 118 - { 119 - S390_lowcore.fpu_flags = state->mask; 120 - if (state->mask & flags) 121 - /* Restore FPU/vector register in-use by the kernel */ 122 - __kernel_fpu_end(state, flags); 123 - preempt_enable(); 124 - } 125 - 126 - #endif /* _ASM_S390_FPU_API_H */
-67
arch/s390/include/asm/fpu/internal.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - /* 3 - * FPU state and register content conversion primitives 4 - * 5 - * Copyright IBM Corp. 2015 6 - * Author(s): Hendrik Brueckner <brueckner@linux.vnet.ibm.com> 7 - */ 8 - 9 - #ifndef _ASM_S390_FPU_INTERNAL_H 10 - #define _ASM_S390_FPU_INTERNAL_H 11 - 12 - #include <linux/string.h> 13 - #include <asm/facility.h> 14 - #include <asm/fpu/types.h> 15 - 16 - static inline bool cpu_has_vx(void) 17 - { 18 - return likely(test_facility(129)); 19 - } 20 - 21 - static inline void save_vx_regs(__vector128 *vxrs) 22 - { 23 - asm volatile( 24 - " la 1,%0\n" 25 - " .word 0xe70f,0x1000,0x003e\n" /* vstm 0,15,0(1) */ 26 - " .word 0xe70f,0x1100,0x0c3e\n" /* vstm 16,31,256(1) */ 27 - : "=Q" (*(struct vx_array *) vxrs) : : "1"); 28 - } 29 - 30 - static inline void convert_vx_to_fp(freg_t *fprs, __vector128 *vxrs) 31 - { 32 - int i; 33 - 34 - for (i = 0; i < __NUM_FPRS; i++) 35 - fprs[i].ui = vxrs[i].high; 36 - } 37 - 38 - static inline void convert_fp_to_vx(__vector128 *vxrs, freg_t *fprs) 39 - { 40 - int i; 41 - 42 - for (i = 0; i < __NUM_FPRS; i++) 43 - vxrs[i].high = fprs[i].ui; 44 - } 45 - 46 - static inline void fpregs_store(_s390_fp_regs *fpregs, struct fpu *fpu) 47 - { 48 - fpregs->pad = 0; 49 - fpregs->fpc = fpu->fpc; 50 - if (cpu_has_vx()) 51 - convert_vx_to_fp((freg_t *)&fpregs->fprs, fpu->vxrs); 52 - else 53 - memcpy((freg_t *)&fpregs->fprs, fpu->fprs, 54 - sizeof(fpregs->fprs)); 55 - } 56 - 57 - static inline void fpregs_load(_s390_fp_regs *fpregs, struct fpu *fpu) 58 - { 59 - fpu->fpc = fpregs->fpc; 60 - if (cpu_has_vx()) 61 - convert_fp_to_vx(fpu->vxrs, (freg_t *)&fpregs->fprs); 62 - else 63 - memcpy(fpu->fprs, (freg_t *)&fpregs->fprs, 64 - sizeof(fpregs->fprs)); 65 - } 66 - 67 - #endif /* _ASM_S390_FPU_INTERNAL_H */
-38
arch/s390/include/asm/fpu/types.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - /* 3 - * FPU data structures 4 - * 5 - * Copyright IBM Corp. 2015 6 - * Author(s): Hendrik Brueckner <brueckner@linux.vnet.ibm.com> 7 - */ 8 - 9 - #ifndef _ASM_S390_FPU_TYPES_H 10 - #define _ASM_S390_FPU_TYPES_H 11 - 12 - #include <asm/sigcontext.h> 13 - 14 - struct fpu { 15 - __u32 fpc; /* Floating-point control */ 16 - void *regs; /* Pointer to the current save area */ 17 - union { 18 - /* Floating-point register save area */ 19 - freg_t fprs[__NUM_FPRS]; 20 - /* Vector register save area */ 21 - __vector128 vxrs[__NUM_VXRS]; 22 - }; 23 - }; 24 - 25 - /* VX array structure for address operand constraints in inline assemblies */ 26 - struct vx_array { __vector128 _[__NUM_VXRS]; }; 27 - 28 - /* In-kernel FPU state structure */ 29 - struct kernel_fpu { 30 - u32 mask; 31 - u32 fpc; 32 - union { 33 - freg_t fprs[__NUM_FPRS]; 34 - __vector128 vxrs[__NUM_VXRS]; 35 - }; 36 - }; 37 - 38 - #endif /* _ASM_S390_FPU_TYPES_H */
+3 -2
arch/s390/include/asm/kvm_host.h
··· 23 23 #include <linux/mmu_notifier.h> 24 24 #include <asm/debug.h> 25 25 #include <asm/cpu.h> 26 - #include <asm/fpu/api.h> 26 + #include <asm/fpu.h> 27 27 #include <asm/isc.h> 28 28 #include <asm/guarded_storage.h> 29 29 ··· 743 743 struct kvm_s390_sie_block *vsie_block; 744 744 unsigned int host_acrs[NUM_ACRS]; 745 745 struct gs_cb *host_gscb; 746 - struct fpu host_fpregs; 747 746 struct kvm_s390_local_interrupt local_int; 748 747 struct hrtimer ckc_timer; 749 748 struct kvm_s390_pgm_info pgm; ··· 764 765 __u64 cputm_start; 765 766 bool gs_enabled; 766 767 bool skey_enabled; 768 + /* Indicator if the access registers have been loaded from guest */ 769 + bool acrs_loaded; 767 770 struct kvm_s390_pv_vcpu pv; 768 771 union diag318_info diag318_info; 769 772 };
+1 -1
arch/s390/include/asm/lowcore.h
··· 157 157 __s32 preempt_count; /* 0x03a8 */ 158 158 __u32 spinlock_lockval; /* 0x03ac */ 159 159 __u32 spinlock_index; /* 0x03b0 */ 160 - __u32 fpu_flags; /* 0x03b4 */ 160 + __u8 pad_0x03b4[0x03b8-0x03b4]; /* 0x03b4 */ 161 161 __u64 percpu_offset; /* 0x03b8 */ 162 162 __u8 pad_0x03c0[0x03c8-0x03c0]; /* 0x03c0 */ 163 163 __u64 machine_flags; /* 0x03c8 */
+2 -1
arch/s390/include/asm/pai.h
··· 16 16 u64 header; 17 17 struct { 18 18 u64 : 8; 19 - u64 num_cc : 8; /* # of supported crypto counters */ 19 + u64 num_cc : 8; /* # of supported crypto counters */ 20 20 u64 : 9; 21 21 u64 num_nnpa : 7; /* # of supported NNPA counters */ 22 22 u64 : 32; ··· 81 81 PAI_MODE_COUNTING, 82 82 }; 83 83 84 + #define PAI_SAVE_AREA(x) ((x)->hw.event_base) 84 85 #endif
+2 -1
arch/s390/include/asm/pci.h
··· 122 122 struct rcu_head rcu; 123 123 struct hotplug_slot hotplug_slot; 124 124 125 + struct mutex state_lock; /* protect state changes */ 125 126 enum zpci_state state; 126 127 u32 fid; /* function ID, used by sclp */ 127 128 u32 fh; /* function handle, used by insn's */ ··· 143 142 u8 reserved : 2; 144 143 unsigned int devfn; /* DEVFN part of the RID*/ 145 144 146 - struct mutex lock; 147 145 u8 pfip[CLP_PFIP_NR_SEGMENTS]; /* pci function internal path */ 148 146 u32 uid; /* user defined id */ 149 147 u8 util_str[CLP_UTIL_STR_LEN]; /* utility string */ ··· 170 170 u64 dma_mask; /* DMA address space mask */ 171 171 172 172 /* Function measurement block */ 173 + struct mutex fmb_lock; 173 174 struct zpci_fmb *fmb; 174 175 u16 fmb_update; /* update interval */ 175 176 u16 fmb_length;
+1
arch/s390/include/asm/physmem_info.h
··· 22 22 RR_DECOMPRESSOR, 23 23 RR_INITRD, 24 24 RR_VMLINUX, 25 + RR_RELOC, 25 26 RR_AMODE31, 26 27 RR_IPLREPORT, 27 28 RR_CERT_COMP_LIST,
+5 -6
arch/s390/include/asm/processor.h
··· 15 15 #include <linux/bits.h> 16 16 17 17 #define CIF_NOHZ_DELAY 2 /* delay HZ disable for a tick */ 18 - #define CIF_FPU 3 /* restore FPU registers */ 19 18 #define CIF_ENABLED_WAIT 5 /* in enabled wait state */ 20 19 #define CIF_MCCK_GUEST 6 /* machine check happening in guest */ 21 20 #define CIF_DEDICATED_CPU 7 /* this CPU is dedicated */ 22 21 23 22 #define _CIF_NOHZ_DELAY BIT(CIF_NOHZ_DELAY) 24 - #define _CIF_FPU BIT(CIF_FPU) 25 23 #define _CIF_ENABLED_WAIT BIT(CIF_ENABLED_WAIT) 26 24 #define _CIF_MCCK_GUEST BIT(CIF_MCCK_GUEST) 27 25 #define _CIF_DEDICATED_CPU BIT(CIF_DEDICATED_CPU) ··· 31 33 #include <linux/cpumask.h> 32 34 #include <linux/linkage.h> 33 35 #include <linux/irqflags.h> 36 + #include <asm/fpu-types.h> 34 37 #include <asm/cpu.h> 35 38 #include <asm/page.h> 36 39 #include <asm/ptrace.h> 37 40 #include <asm/setup.h> 38 41 #include <asm/runtime_instr.h> 39 - #include <asm/fpu/types.h> 40 - #include <asm/fpu/internal.h> 41 42 #include <asm/irqflags.h> 42 43 43 44 typedef long (*sys_call_ptr_t)(struct pt_regs *regs); ··· 166 169 unsigned int gmap_write_flag; /* gmap fault write indication */ 167 170 unsigned int gmap_int_code; /* int code of last gmap fault */ 168 171 unsigned int gmap_pfault; /* signal of a pending guest pfault */ 172 + int ufpu_flags; /* user fpu flags */ 173 + int kfpu_flags; /* kernel fpu flags */ 169 174 170 175 /* Per-thread information related to debugging */ 171 176 struct per_regs per_user; /* User specified PER registers */ ··· 183 184 struct gs_cb *gs_cb; /* Current guarded storage cb */ 184 185 struct gs_cb *gs_bc_cb; /* Broadcast guarded storage cb */ 185 186 struct pgm_tdb trap_tdb; /* Transaction abort diagnose block */ 186 - struct fpu fpu; /* FP and VX register save area */ 187 + struct fpu ufpu; /* User FP and VX register save area */ 188 + struct fpu kfpu; /* Kernel FP and VX register save area */ 187 189 }; 188 190 189 191 /* Flag to disable transactions. 
*/ ··· 203 203 204 204 #define INIT_THREAD { \ 205 205 .ksp = sizeof(init_stack) + (unsigned long) &init_stack, \ 206 - .fpu.regs = (void *) init_task.thread.fpu.fprs, \ 207 206 .last_break = 1, \ 208 207 } 209 208
+4
arch/s390/include/asm/ptrace.h
··· 203 203 return ret; 204 204 } 205 205 206 + struct task_struct; 207 + 208 + void update_cr_regs(struct task_struct *task); 209 + 206 210 /* 207 211 * These are defined as per linux/ptrace.h, which see. 208 212 */
-1
arch/s390/include/asm/stacktrace.h
··· 4 4 5 5 #include <linux/uaccess.h> 6 6 #include <linux/ptrace.h> 7 - #include <asm/switch_to.h> 8 7 9 8 struct stack_frame_user { 10 9 unsigned long back_chain;
-49
arch/s390/include/asm/switch_to.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - /* 3 - * Copyright IBM Corp. 1999, 2009 4 - * 5 - * Author(s): Martin Schwidefsky <schwidefsky@de.ibm.com> 6 - */ 7 - 8 - #ifndef __ASM_SWITCH_TO_H 9 - #define __ASM_SWITCH_TO_H 10 - 11 - #include <linux/thread_info.h> 12 - #include <asm/fpu/api.h> 13 - #include <asm/ptrace.h> 14 - #include <asm/guarded_storage.h> 15 - 16 - extern struct task_struct *__switch_to(void *, void *); 17 - extern void update_cr_regs(struct task_struct *task); 18 - 19 - static inline void save_access_regs(unsigned int *acrs) 20 - { 21 - typedef struct { int _[NUM_ACRS]; } acrstype; 22 - 23 - asm volatile("stam 0,15,%0" : "=Q" (*(acrstype *)acrs)); 24 - } 25 - 26 - static inline void restore_access_regs(unsigned int *acrs) 27 - { 28 - typedef struct { int _[NUM_ACRS]; } acrstype; 29 - 30 - asm volatile("lam 0,15,%0" : : "Q" (*(acrstype *)acrs)); 31 - } 32 - 33 - #define switch_to(prev, next, last) do { \ 34 - /* save_fpu_regs() sets the CIF_FPU flag, which enforces \ 35 - * a restore of the floating point / vector registers as \ 36 - * soon as the next task returns to user space \ 37 - */ \ 38 - save_fpu_regs(); \ 39 - save_access_regs(&prev->thread.acrs[0]); \ 40 - save_ri_cb(prev->thread.ri_cb); \ 41 - save_gs_cb(prev->thread.gs_cb); \ 42 - update_cr_regs(next); \ 43 - restore_access_regs(&next->thread.acrs[0]); \ 44 - restore_ri_cb(next->thread.ri_cb, prev->thread.ri_cb); \ 45 - restore_gs_cb(next->thread.gs_cb); \ 46 - prev = __switch_to(prev, next); \ 47 - } while (0) 48 - 49 - #endif /* __ASM_SWITCH_TO_H */
+61 -10
arch/s390/include/asm/vx-insn-asm.h arch/s390/include/asm/fpu-insn-asm.h
··· 9 9 * Author(s): Hendrik Brueckner <brueckner@linux.vnet.ibm.com> 10 10 */ 11 11 12 - #ifndef __ASM_S390_VX_INSN_INTERNAL_H 13 - #define __ASM_S390_VX_INSN_INTERNAL_H 12 + #ifndef __ASM_S390_FPU_INSN_ASM_H 13 + #define __ASM_S390_FPU_INSN_ASM_H 14 14 15 - #ifndef __ASM_S390_VX_INSN_H 16 - #error only <asm/vx-insn.h> can be included directly 15 + #ifndef __ASM_S390_FPU_INSN_H 16 + #error only <asm/fpu-insn.h> can be included directly 17 17 #endif 18 18 19 19 #ifdef __ASSEMBLY__ ··· 195 195 /* RXB - Compute most significant bit used vector registers 196 196 * 197 197 * @rxb: Operand to store computed RXB value 198 - * @v1: First vector register designated operand 199 - * @v2: Second vector register designated operand 200 - * @v3: Third vector register designated operand 201 - * @v4: Fourth vector register designated operand 198 + * @v1: Vector register designated operand whose MSB is stored in 199 + * RXB bit 0 (instruction bit 36) and whose remaining bits 200 + * are stored in instruction bits 8-11. 201 + * @v2: Vector register designated operand whose MSB is stored in 202 + * RXB bit 1 (instruction bit 37) and whose remaining bits 203 + * are stored in instruction bits 12-15. 204 + * @v3: Vector register designated operand whose MSB is stored in 205 + * RXB bit 2 (instruction bit 38) and whose remaining bits 206 + * are stored in instruction bits 16-19. 207 + * @v4: Vector register designated operand whose MSB is stored in 208 + * RXB bit 3 (instruction bit 39) and whose remaining bits 209 + * are stored in instruction bits 32-35. 210 + * 211 + * Note: In most vector instruction formats [1] V1, V2, V3, and V4 directly 212 + * correspond to @v1, @v2, @v3, and @v4. But there are exceptions, such as but 213 + * not limited to the vector instruction formats VRR-g, VRR-h, VRS-a, VRS-d, 214 + * and VSI. 215 + * 216 + * [1] IBM z/Architecture Principles of Operation, chapter "Program 217 + * Execution, section "Instructions", subsection "Instruction Formats". 
202 218 */ 203 219 .macro RXB rxb v1 v2=0 v3=0 v4=0 204 220 \rxb = 0 ··· 239 223 * @v2: Second vector register designated operand (for RXB) 240 224 * @v3: Third vector register designated operand (for RXB) 241 225 * @v4: Fourth vector register designated operand (for RXB) 226 + * 227 + * Note: For @v1, @v2, @v3, and @v4 also refer to the RXB macro 228 + * description for further details. 242 229 */ 243 230 .macro MRXB m v1 v2=0 v3=0 v4=0 244 231 rxb = 0 ··· 257 238 * @v2: Second vector register designated operand (for RXB) 258 239 * @v3: Third vector register designated operand (for RXB) 259 240 * @v4: Fourth vector register designated operand (for RXB) 241 + * 242 + * Note: For @v1, @v2, @v3, and @v4 also refer to the RXB macro 243 + * description for further details. 260 244 */ 261 245 .macro MRXBOPC m opc v1 v2=0 v3=0 v4=0 262 246 MRXB \m, \v1, \v2, \v3, \v4 ··· 372 350 VX_NUM v3, \vr 373 351 .word 0xE700 | (r1 << 4) | (v3&15) 374 352 .word (b2 << 12) | (\disp) 375 - MRXBOPC \m, 0x21, v3 353 + MRXBOPC \m, 0x21, 0, v3 376 354 .endm 377 355 .macro VLGVB gr, vr, disp, base="%r0" 378 356 VLGV \gr, \vr, \disp, \base, 0 ··· 521 499 VMRL \vr1, \vr2, \vr3, 3 522 500 .endm 523 501 502 + /* VECTOR LOAD WITH LENGTH */ 503 + .macro VLL v, gr, disp, base 504 + VX_NUM v1, \v 505 + GR_NUM b2, \base 506 + GR_NUM r3, \gr 507 + .word 0xE700 | ((v1&15) << 4) | r3 508 + .word (b2 << 12) | (\disp) 509 + MRXBOPC 0, 0x37, v1 510 + .endm 511 + 512 + /* VECTOR STORE WITH LENGTH */ 513 + .macro VSTL v, gr, disp, base 514 + VX_NUM v1, \v 515 + GR_NUM b2, \base 516 + GR_NUM r3, \gr 517 + .word 0xE700 | ((v1&15) << 4) | r3 518 + .word (b2 << 12) | (\disp) 519 + MRXBOPC 0, 0x3f, v1 520 + .endm 524 521 525 522 /* Vector integer instructions */ 526 523 ··· 551 510 .word 0xE700 | ((v1&15) << 4) | (v2&15) 552 511 .word ((v3&15) << 12) 553 512 MRXBOPC 0, 0x68, v1, v2, v3 513 + .endm 514 + 515 + /* VECTOR CHECKSUM */ 516 + .macro VCKSM vr1, vr2, vr3 517 + VX_NUM v1, \vr1 518 + VX_NUM v2, \vr2 519 
+ VX_NUM v3, \vr3 520 + .word 0xE700 | ((v1&15) << 4) | (v2&15) 521 + .word ((v3&15) << 12) 522 + MRXBOPC 0, 0x66, v1, v2, v3 554 523 .endm 555 524 556 525 /* VECTOR EXCLUSIVE OR */ ··· 729 678 .endm 730 679 731 680 #endif /* __ASSEMBLY__ */ 732 - #endif /* __ASM_S390_VX_INSN_INTERNAL_H */ 681 + #endif /* __ASM_S390_FPU_INSN_ASM_H */
-19
arch/s390/include/asm/vx-insn.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - /* 3 - * Support for Vector Instructions 4 - * 5 - * This wrapper header file allows to use the vector instruction macros in 6 - * both assembler files as well as in inline assemblies in C files. 7 - */ 8 - 9 - #ifndef __ASM_S390_VX_INSN_H 10 - #define __ASM_S390_VX_INSN_H 11 - 12 - #include <asm/vx-insn-asm.h> 13 - 14 - #ifndef __ASSEMBLY__ 15 - 16 - asm(".include \"asm/vx-insn-asm.h\"\n"); 17 - 18 - #endif /* __ASSEMBLY__ */ 19 - #endif /* __ASM_S390_VX_INSN_H */
+1
arch/s390/kernel/cache.c
··· 166 166 ci_leaf_init(this_leaf++, pvt, ctype, level, cpu); 167 167 } 168 168 } 169 + this_cpu_ci->cpu_map_populated = true; 169 170 return 0; 170 171 }
+11 -11
arch/s390/kernel/compat_signal.c
··· 24 24 #include <linux/tty.h> 25 25 #include <linux/personality.h> 26 26 #include <linux/binfmts.h> 27 + #include <asm/access-regs.h> 27 28 #include <asm/ucontext.h> 28 29 #include <linux/uaccess.h> 29 30 #include <asm/lowcore.h> 30 - #include <asm/switch_to.h> 31 31 #include <asm/vdso.h> 32 - #include <asm/fpu/api.h> 32 + #include <asm/fpu.h> 33 33 #include "compat_linux.h" 34 34 #include "compat_ptrace.h" 35 35 #include "entry.h" ··· 56 56 static void store_sigregs(void) 57 57 { 58 58 save_access_regs(current->thread.acrs); 59 - save_fpu_regs(); 59 + save_user_fpu_regs(); 60 60 } 61 61 62 62 /* Load registers after signal return */ ··· 79 79 user_sregs.regs.gprs[i] = (__u32) regs->gprs[i]; 80 80 memcpy(&user_sregs.regs.acrs, current->thread.acrs, 81 81 sizeof(user_sregs.regs.acrs)); 82 - fpregs_store((_s390_fp_regs *) &user_sregs.fpregs, &current->thread.fpu); 82 + fpregs_store((_s390_fp_regs *) &user_sregs.fpregs, &current->thread.ufpu); 83 83 if (__copy_to_user(sregs, &user_sregs, sizeof(_sigregs32))) 84 84 return -EFAULT; 85 85 return 0; ··· 113 113 regs->gprs[i] = (__u64) user_sregs.regs.gprs[i]; 114 114 memcpy(&current->thread.acrs, &user_sregs.regs.acrs, 115 115 sizeof(current->thread.acrs)); 116 - fpregs_load((_s390_fp_regs *) &user_sregs.fpregs, &current->thread.fpu); 116 + fpregs_load((_s390_fp_regs *)&user_sregs.fpregs, &current->thread.ufpu); 117 117 118 118 clear_pt_regs_flag(regs, PIF_SYSCALL); /* No longer in a system call */ 119 119 return 0; ··· 136 136 /* Save vector registers to signal stack */ 137 137 if (cpu_has_vx()) { 138 138 for (i = 0; i < __NUM_VXRS_LOW; i++) 139 - vxrs[i] = current->thread.fpu.vxrs[i].low; 139 + vxrs[i] = current->thread.ufpu.vxrs[i].low; 140 140 if (__copy_to_user(&sregs_ext->vxrs_low, vxrs, 141 141 sizeof(sregs_ext->vxrs_low)) || 142 142 __copy_to_user(&sregs_ext->vxrs_high, 143 - current->thread.fpu.vxrs + __NUM_VXRS_LOW, 143 + current->thread.ufpu.vxrs + __NUM_VXRS_LOW, 144 144 sizeof(sregs_ext->vxrs_high))) 145 
145 return -EFAULT; 146 146 } ··· 165 165 if (cpu_has_vx()) { 166 166 if (__copy_from_user(vxrs, &sregs_ext->vxrs_low, 167 167 sizeof(sregs_ext->vxrs_low)) || 168 - __copy_from_user(current->thread.fpu.vxrs + __NUM_VXRS_LOW, 168 + __copy_from_user(current->thread.ufpu.vxrs + __NUM_VXRS_LOW, 169 169 &sregs_ext->vxrs_high, 170 170 sizeof(sregs_ext->vxrs_high))) 171 171 return -EFAULT; 172 172 for (i = 0; i < __NUM_VXRS_LOW; i++) 173 - current->thread.fpu.vxrs[i].low = vxrs[i]; 173 + current->thread.ufpu.vxrs[i].low = vxrs[i]; 174 174 } 175 175 return 0; 176 176 } ··· 184 184 if (get_compat_sigset(&set, (compat_sigset_t __user *)frame->sc.oldmask)) 185 185 goto badframe; 186 186 set_current_blocked(&set); 187 - save_fpu_regs(); 187 + save_user_fpu_regs(); 188 188 if (restore_sigregs32(regs, &frame->sregs)) 189 189 goto badframe; 190 190 if (restore_sigregs_ext32(regs, &frame->sregs_ext)) ··· 207 207 set_current_blocked(&set); 208 208 if (compat_restore_altstack(&frame->uc.uc_stack)) 209 209 goto badframe; 210 - save_fpu_regs(); 210 + save_user_fpu_regs(); 211 211 if (restore_sigregs32(regs, &frame->uc.uc_mcontext)) 212 212 goto badframe; 213 213 if (restore_sigregs_ext32(regs, &frame->uc.uc_mcontext_ext))
+1 -1
arch/s390/kernel/crash_dump.c
··· 22 22 #include <asm/ipl.h> 23 23 #include <asm/sclp.h> 24 24 #include <asm/maccess.h> 25 - #include <asm/fpu/api.h> 25 + #include <asm/fpu.h> 26 26 27 27 #define PTR_ADD(x, y) (((char *) (x)) + ((unsigned long) (y))) 28 28 #define PTR_SUB(x, y) (((char *) (x)) - ((unsigned long) (y)))
+30 -1
arch/s390/kernel/diag.c
··· 147 147 EXPORT_SYMBOL(diag_stat_inc_norecursion); 148 148 149 149 /* 150 + * Diagnose 0c: Pseudo Timer 151 + */ 152 + void diag0c(struct hypfs_diag0c_entry *data) 153 + { 154 + diag_stat_inc(DIAG_STAT_X00C); 155 + diag_amode31_ops.diag0c(virt_to_phys(data)); 156 + } 157 + 158 + /* 150 159 * Diagnose 14: Input spool file manipulation 160 + * 161 + * The subcode parameter determines the type of the first parameter rx. 162 + * Currently used are the following 3 subcommands: 163 + * 0x0: Read the Next Spool File Buffer (Data Record) 164 + * 0x28: Position a Spool File to the Designated Record 165 + * 0xfff: Retrieve Next File Descriptor 166 + * 167 + * For subcommands 0x0 and 0xfff, the value of the first parameter is 168 + * a virtual address of a memory buffer and needs virtual to physical 169 + * address translation. For other subcommands the rx parameter is not 170 + * a virtual address. 151 171 */ 152 172 int diag14(unsigned long rx, unsigned long ry1, unsigned long subcode) 153 173 { 154 174 diag_stat_inc(DIAG_STAT_X014); 175 + switch (subcode) { 176 + case 0x0: 177 + case 0xfff: 178 + rx = virt_to_phys((void *)rx); 179 + break; 180 + default: 181 + /* Do nothing */ 182 + break; 183 + } 155 184 return diag_amode31_ops.diag14(rx, ry1, subcode); 156 185 } 157 186 EXPORT_SYMBOL(diag14); ··· 294 265 int diag26c(void *req, void *resp, enum diag26c_sc subcode) 295 266 { 296 267 diag_stat_inc(DIAG_STAT_X26C); 297 - return diag_amode31_ops.diag26c(req, resp, subcode); 268 + return diag_amode31_ops.diag26c(virt_to_phys(req), virt_to_phys(resp), subcode); 298 269 } 299 270 EXPORT_SYMBOL(diag26c);
+2 -1
arch/s390/kernel/early.c
··· 19 19 #include <linux/kernel.h> 20 20 #include <asm/asm-extable.h> 21 21 #include <linux/memblock.h> 22 + #include <asm/access-regs.h> 22 23 #include <asm/diag.h> 23 24 #include <asm/ebcdic.h> 25 + #include <asm/fpu.h> 24 26 #include <asm/ipl.h> 25 27 #include <asm/lowcore.h> 26 28 #include <asm/processor.h> ··· 33 31 #include <asm/sclp.h> 34 32 #include <asm/facility.h> 35 33 #include <asm/boot_data.h> 36 - #include <asm/switch_to.h> 37 34 #include "entry.h" 38 35 39 36 #define decompressor_handled_param(param) \
+6 -13
arch/s390/kernel/entry.S
··· 24 24 #include <asm/page.h> 25 25 #include <asm/sigp.h> 26 26 #include <asm/irq.h> 27 - #include <asm/vx-insn.h> 27 + #include <asm/fpu-insn.h> 28 28 #include <asm/setup.h> 29 29 #include <asm/nmi.h> 30 30 #include <asm/nospec-insn.h> ··· 171 171 nop 0 172 172 173 173 /* 174 - * Scheduler resume function, called by switch_to 175 - * gpr2 = (task_struct *) prev 176 - * gpr3 = (task_struct *) next 174 + * Scheduler resume function, called by __switch_to 175 + * gpr2 = (task_struct *)prev 176 + * gpr3 = (task_struct *)next 177 177 * Returns: 178 178 * gpr2 = prev 179 179 */ 180 - SYM_FUNC_START(__switch_to) 180 + SYM_FUNC_START(__switch_to_asm) 181 181 stmg %r6,%r15,__SF_GPRS(%r15) # store gprs of prev task 182 182 lghi %r4,__TASK_stack 183 183 lghi %r1,__TASK_thread ··· 193 193 lmg %r6,%r15,__SF_GPRS(%r15) # load gprs of next task 194 194 ALTERNATIVE "nop", "lpp _LPP_OFFSET", 40 195 195 BR_EX %r14 196 - SYM_FUNC_END(__switch_to) 196 + SYM_FUNC_END(__switch_to_asm) 197 197 198 198 #if IS_ENABLED(CONFIG_KVM) 199 199 /* ··· 220 220 oi __SIE_PROG0C+3(%r14),1 # we are going into SIE now 221 221 tm __SIE_PROG20+3(%r14),3 # last exit... 
222 222 jnz .Lsie_skip 223 - TSTMSK __LC_CPU_FLAGS,_CIF_FPU 224 - jo .Lsie_skip # exit if fp/vx regs changed 225 223 lg %r14,__SF_SIE_CONTROL_PHYS(%r15) # get sie block phys addr 226 224 BPEXIT __SF_SIE_FLAGS(%r15),_TIF_ISOLATE_BP_GUEST 227 225 .Lsie_entry: ··· 487 489 */ 488 490 SYM_CODE_START(mcck_int_handler) 489 491 BPOFF 490 - la %r1,4095 # validate r1 491 - spt __LC_CPU_TIMER_SAVE_AREA-4095(%r1) # validate cpu timer 492 - LBEAR __LC_LAST_BREAK_SAVE_AREA-4095(%r1) # validate bear 493 - lmg %r0,%r15,__LC_GPREGS_SAVE_AREA # validate gprs 494 492 lmg %r8,%r9,__LC_MCK_OLD_PSW 495 493 TSTMSK __LC_MCCK_CODE,MCCK_CODE_SYSTEM_DAMAGE 496 494 jo .Lmcck_panic # yes -> rest of mcck code invalid 497 495 TSTMSK __LC_MCCK_CODE,MCCK_CODE_CR_VALID 498 496 jno .Lmcck_panic # control registers invalid -> panic 499 - lctlg %c0,%c15,__LC_CREGS_SAVE_AREA # validate ctl regs 500 497 ptlb 501 498 lghi %r14,__LC_CPU_TIMER_SAVE_AREA 502 499 mvc __LC_MCCK_ENTER_TIMER(8),0(%r14)
+1
arch/s390/kernel/entry.h
··· 19 19 void restart_int_handler(void); 20 20 void early_pgm_check_handler(void); 21 21 22 + struct task_struct *__switch_to_asm(struct task_struct *prev, struct task_struct *next); 22 23 void __ret_from_fork(struct task_struct *prev, struct pt_regs *regs); 23 24 void __do_pgm_check(struct pt_regs *regs); 24 25 void __do_syscall(struct pt_regs *regs, int per_trap);
+155 -225
arch/s390/kernel/fpu.c
··· 8 8 #include <linux/kernel.h> 9 9 #include <linux/cpu.h> 10 10 #include <linux/sched.h> 11 - #include <asm/fpu/types.h> 12 - #include <asm/fpu/api.h> 13 - #include <asm/vx-insn.h> 11 + #include <asm/fpu.h> 14 12 15 - void __kernel_fpu_begin(struct kernel_fpu *state, u32 flags) 13 + void __kernel_fpu_begin(struct kernel_fpu *state, int flags) 16 14 { 15 + __vector128 *vxrs = state->vxrs; 16 + int mask; 17 + 17 18 /* 18 19 * Limit the save to the FPU/vector registers already 19 - * in use by the previous context 20 + * in use by the previous context. 20 21 */ 21 - flags &= state->mask; 22 - 22 + flags &= state->hdr.mask; 23 23 if (flags & KERNEL_FPC) 24 - /* Save floating point control */ 25 - asm volatile("stfpc %0" : "=Q" (state->fpc)); 26 - 24 + fpu_stfpc(&state->hdr.fpc); 27 25 if (!cpu_has_vx()) { 28 - if (flags & KERNEL_VXR_V0V7) { 29 - /* Save floating-point registers */ 30 - asm volatile("std 0,%0" : "=Q" (state->fprs[0])); 31 - asm volatile("std 1,%0" : "=Q" (state->fprs[1])); 32 - asm volatile("std 2,%0" : "=Q" (state->fprs[2])); 33 - asm volatile("std 3,%0" : "=Q" (state->fprs[3])); 34 - asm volatile("std 4,%0" : "=Q" (state->fprs[4])); 35 - asm volatile("std 5,%0" : "=Q" (state->fprs[5])); 36 - asm volatile("std 6,%0" : "=Q" (state->fprs[6])); 37 - asm volatile("std 7,%0" : "=Q" (state->fprs[7])); 38 - asm volatile("std 8,%0" : "=Q" (state->fprs[8])); 39 - asm volatile("std 9,%0" : "=Q" (state->fprs[9])); 40 - asm volatile("std 10,%0" : "=Q" (state->fprs[10])); 41 - asm volatile("std 11,%0" : "=Q" (state->fprs[11])); 42 - asm volatile("std 12,%0" : "=Q" (state->fprs[12])); 43 - asm volatile("std 13,%0" : "=Q" (state->fprs[13])); 44 - asm volatile("std 14,%0" : "=Q" (state->fprs[14])); 45 - asm volatile("std 15,%0" : "=Q" (state->fprs[15])); 46 - } 26 + if (flags & KERNEL_VXR_LOW) 27 + save_fp_regs_vx(vxrs); 47 28 return; 48 29 } 49 - 50 - /* Test and save vector registers */ 51 - asm volatile ( 52 - /* 53 - * Test if any vector register must be saved 
and, if so, 54 - * test if all register can be saved. 55 - */ 56 - " la 1,%[vxrs]\n" /* load save area */ 57 - " tmll %[m],30\n" /* KERNEL_VXR */ 58 - " jz 7f\n" /* no work -> done */ 59 - " jo 5f\n" /* -> save V0..V31 */ 60 - /* 61 - * Test for special case KERNEL_FPU_MID only. In this 62 - * case a vstm V8..V23 is the best instruction 63 - */ 64 - " chi %[m],12\n" /* KERNEL_VXR_MID */ 65 - " jne 0f\n" /* -> save V8..V23 */ 66 - " VSTM 8,23,128,1\n" /* vstm %v8,%v23,128(%r1) */ 67 - " j 7f\n" 68 - /* Test and save the first half of 16 vector registers */ 69 - "0: tmll %[m],6\n" /* KERNEL_VXR_LOW */ 70 - " jz 3f\n" /* -> KERNEL_VXR_HIGH */ 71 - " jo 2f\n" /* 11 -> save V0..V15 */ 72 - " brc 2,1f\n" /* 10 -> save V8..V15 */ 73 - " VSTM 0,7,0,1\n" /* vstm %v0,%v7,0(%r1) */ 74 - " j 3f\n" 75 - "1: VSTM 8,15,128,1\n" /* vstm %v8,%v15,128(%r1) */ 76 - " j 3f\n" 77 - "2: VSTM 0,15,0,1\n" /* vstm %v0,%v15,0(%r1) */ 78 - /* Test and save the second half of 16 vector registers */ 79 - "3: tmll %[m],24\n" /* KERNEL_VXR_HIGH */ 80 - " jz 7f\n" 81 - " jo 6f\n" /* 11 -> save V16..V31 */ 82 - " brc 2,4f\n" /* 10 -> save V24..V31 */ 83 - " VSTM 16,23,256,1\n" /* vstm %v16,%v23,256(%r1) */ 84 - " j 7f\n" 85 - "4: VSTM 24,31,384,1\n" /* vstm %v24,%v31,384(%r1) */ 86 - " j 7f\n" 87 - "5: VSTM 0,15,0,1\n" /* vstm %v0,%v15,0(%r1) */ 88 - "6: VSTM 16,31,256,1\n" /* vstm %v16,%v31,256(%r1) */ 89 - "7:" 90 - : [vxrs] "=Q" (*(struct vx_array *) &state->vxrs) 91 - : [m] "d" (flags) 92 - : "1", "cc"); 30 + mask = flags & KERNEL_VXR; 31 + if (mask == KERNEL_VXR) { 32 + vxrs += fpu_vstm(0, 15, vxrs); 33 + vxrs += fpu_vstm(16, 31, vxrs); 34 + return; 35 + } 36 + if (mask == KERNEL_VXR_MID) { 37 + vxrs += fpu_vstm(8, 23, vxrs); 38 + return; 39 + } 40 + mask = flags & KERNEL_VXR_LOW; 41 + if (mask) { 42 + if (mask == KERNEL_VXR_LOW) 43 + vxrs += fpu_vstm(0, 15, vxrs); 44 + else if (mask == KERNEL_VXR_V0V7) 45 + vxrs += fpu_vstm(0, 7, vxrs); 46 + else 47 + vxrs += fpu_vstm(8, 15, vxrs); 48 + } 49 
+ mask = flags & KERNEL_VXR_HIGH; 50 + if (mask) { 51 + if (mask == KERNEL_VXR_HIGH) 52 + vxrs += fpu_vstm(16, 31, vxrs); 53 + else if (mask == KERNEL_VXR_V16V23) 54 + vxrs += fpu_vstm(16, 23, vxrs); 55 + else 56 + vxrs += fpu_vstm(24, 31, vxrs); 57 + } 93 58 } 94 59 EXPORT_SYMBOL(__kernel_fpu_begin); 95 60 96 - void __kernel_fpu_end(struct kernel_fpu *state, u32 flags) 61 + void __kernel_fpu_end(struct kernel_fpu *state, int flags) 97 62 { 63 + __vector128 *vxrs = state->vxrs; 64 + int mask; 65 + 98 66 /* 99 67 * Limit the restore to the FPU/vector registers of the 100 - * previous context that have been overwritte by the 101 - * current context 68 + * previous context that have been overwritten by the 69 + * current context. 102 70 */ 103 - flags &= state->mask; 104 - 71 + flags &= state->hdr.mask; 105 72 if (flags & KERNEL_FPC) 106 - /* Restore floating-point controls */ 107 - asm volatile("lfpc %0" : : "Q" (state->fpc)); 108 - 73 + fpu_lfpc(&state->hdr.fpc); 109 74 if (!cpu_has_vx()) { 110 - if (flags & KERNEL_VXR_V0V7) { 111 - /* Restore floating-point registers */ 112 - asm volatile("ld 0,%0" : : "Q" (state->fprs[0])); 113 - asm volatile("ld 1,%0" : : "Q" (state->fprs[1])); 114 - asm volatile("ld 2,%0" : : "Q" (state->fprs[2])); 115 - asm volatile("ld 3,%0" : : "Q" (state->fprs[3])); 116 - asm volatile("ld 4,%0" : : "Q" (state->fprs[4])); 117 - asm volatile("ld 5,%0" : : "Q" (state->fprs[5])); 118 - asm volatile("ld 6,%0" : : "Q" (state->fprs[6])); 119 - asm volatile("ld 7,%0" : : "Q" (state->fprs[7])); 120 - asm volatile("ld 8,%0" : : "Q" (state->fprs[8])); 121 - asm volatile("ld 9,%0" : : "Q" (state->fprs[9])); 122 - asm volatile("ld 10,%0" : : "Q" (state->fprs[10])); 123 - asm volatile("ld 11,%0" : : "Q" (state->fprs[11])); 124 - asm volatile("ld 12,%0" : : "Q" (state->fprs[12])); 125 - asm volatile("ld 13,%0" : : "Q" (state->fprs[13])); 126 - asm volatile("ld 14,%0" : : "Q" (state->fprs[14])); 127 - asm volatile("ld 15,%0" : : "Q" (state->fprs[15])); 128 
- } 75 + if (flags & KERNEL_VXR_LOW) 76 + load_fp_regs_vx(vxrs); 129 77 return; 130 78 } 131 - 132 - /* Test and restore (load) vector registers */ 133 - asm volatile ( 134 - /* 135 - * Test if any vector register must be loaded and, if so, 136 - * test if all registers can be loaded at once. 137 - */ 138 - " la 1,%[vxrs]\n" /* load restore area */ 139 - " tmll %[m],30\n" /* KERNEL_VXR */ 140 - " jz 7f\n" /* no work -> done */ 141 - " jo 5f\n" /* -> restore V0..V31 */ 142 - /* 143 - * Test for special case KERNEL_FPU_MID only. In this 144 - * case a vlm V8..V23 is the best instruction 145 - */ 146 - " chi %[m],12\n" /* KERNEL_VXR_MID */ 147 - " jne 0f\n" /* -> restore V8..V23 */ 148 - " VLM 8,23,128,1\n" /* vlm %v8,%v23,128(%r1) */ 149 - " j 7f\n" 150 - /* Test and restore the first half of 16 vector registers */ 151 - "0: tmll %[m],6\n" /* KERNEL_VXR_LOW */ 152 - " jz 3f\n" /* -> KERNEL_VXR_HIGH */ 153 - " jo 2f\n" /* 11 -> restore V0..V15 */ 154 - " brc 2,1f\n" /* 10 -> restore V8..V15 */ 155 - " VLM 0,7,0,1\n" /* vlm %v0,%v7,0(%r1) */ 156 - " j 3f\n" 157 - "1: VLM 8,15,128,1\n" /* vlm %v8,%v15,128(%r1) */ 158 - " j 3f\n" 159 - "2: VLM 0,15,0,1\n" /* vlm %v0,%v15,0(%r1) */ 160 - /* Test and restore the second half of 16 vector registers */ 161 - "3: tmll %[m],24\n" /* KERNEL_VXR_HIGH */ 162 - " jz 7f\n" 163 - " jo 6f\n" /* 11 -> restore V16..V31 */ 164 - " brc 2,4f\n" /* 10 -> restore V24..V31 */ 165 - " VLM 16,23,256,1\n" /* vlm %v16,%v23,256(%r1) */ 166 - " j 7f\n" 167 - "4: VLM 24,31,384,1\n" /* vlm %v24,%v31,384(%r1) */ 168 - " j 7f\n" 169 - "5: VLM 0,15,0,1\n" /* vlm %v0,%v15,0(%r1) */ 170 - "6: VLM 16,31,256,1\n" /* vlm %v16,%v31,256(%r1) */ 171 - "7:" 172 - : [vxrs] "=Q" (*(struct vx_array *) &state->vxrs) 173 - : [m] "d" (flags) 174 - : "1", "cc"); 79 + mask = flags & KERNEL_VXR; 80 + if (mask == KERNEL_VXR) { 81 + vxrs += fpu_vlm(0, 15, vxrs); 82 + vxrs += fpu_vlm(16, 31, vxrs); 83 + return; 84 + } 85 + if (mask == KERNEL_VXR_MID) { 86 + vxrs += 
fpu_vlm(8, 23, vxrs); 87 + return; 88 + } 89 + mask = flags & KERNEL_VXR_LOW; 90 + if (mask) { 91 + if (mask == KERNEL_VXR_LOW) 92 + vxrs += fpu_vlm(0, 15, vxrs); 93 + else if (mask == KERNEL_VXR_V0V7) 94 + vxrs += fpu_vlm(0, 7, vxrs); 95 + else 96 + vxrs += fpu_vlm(8, 15, vxrs); 97 + } 98 + mask = flags & KERNEL_VXR_HIGH; 99 + if (mask) { 100 + if (mask == KERNEL_VXR_HIGH) 101 + vxrs += fpu_vlm(16, 31, vxrs); 102 + else if (mask == KERNEL_VXR_V16V23) 103 + vxrs += fpu_vlm(16, 23, vxrs); 104 + else 105 + vxrs += fpu_vlm(24, 31, vxrs); 106 + } 175 107 } 176 108 EXPORT_SYMBOL(__kernel_fpu_end); 177 109 178 - void __load_fpu_regs(void) 110 + void load_fpu_state(struct fpu *state, int flags) 179 111 { 180 - unsigned long *regs = current->thread.fpu.regs; 181 - struct fpu *state = &current->thread.fpu; 112 + __vector128 *vxrs = &state->vxrs[0]; 113 + int mask; 182 114 183 - sfpc_safe(state->fpc); 184 - if (likely(cpu_has_vx())) { 185 - asm volatile("lgr 1,%0\n" 186 - "VLM 0,15,0,1\n" 187 - "VLM 16,31,256,1\n" 188 - : 189 - : "d" (regs) 190 - : "1", "cc", "memory"); 191 - } else { 192 - asm volatile("ld 0,%0" : : "Q" (regs[0])); 193 - asm volatile("ld 1,%0" : : "Q" (regs[1])); 194 - asm volatile("ld 2,%0" : : "Q" (regs[2])); 195 - asm volatile("ld 3,%0" : : "Q" (regs[3])); 196 - asm volatile("ld 4,%0" : : "Q" (regs[4])); 197 - asm volatile("ld 5,%0" : : "Q" (regs[5])); 198 - asm volatile("ld 6,%0" : : "Q" (regs[6])); 199 - asm volatile("ld 7,%0" : : "Q" (regs[7])); 200 - asm volatile("ld 8,%0" : : "Q" (regs[8])); 201 - asm volatile("ld 9,%0" : : "Q" (regs[9])); 202 - asm volatile("ld 10,%0" : : "Q" (regs[10])); 203 - asm volatile("ld 11,%0" : : "Q" (regs[11])); 204 - asm volatile("ld 12,%0" : : "Q" (regs[12])); 205 - asm volatile("ld 13,%0" : : "Q" (regs[13])); 206 - asm volatile("ld 14,%0" : : "Q" (regs[14])); 207 - asm volatile("ld 15,%0" : : "Q" (regs[15])); 115 + if (flags & KERNEL_FPC) 116 + fpu_lfpc(&state->fpc); 117 + if (!cpu_has_vx()) { 118 + if (flags & 
KERNEL_VXR_V0V7) 119 + load_fp_regs_vx(state->vxrs); 120 + return; 208 121 } 209 - clear_cpu_flag(CIF_FPU); 210 - } 211 - 212 - void load_fpu_regs(void) 213 - { 214 - raw_local_irq_disable(); 215 - __load_fpu_regs(); 216 - raw_local_irq_enable(); 217 - } 218 - EXPORT_SYMBOL(load_fpu_regs); 219 - 220 - void save_fpu_regs(void) 221 - { 222 - unsigned long flags, *regs; 223 - struct fpu *state; 224 - 225 - local_irq_save(flags); 226 - 227 - if (test_cpu_flag(CIF_FPU)) 228 - goto out; 229 - 230 - state = &current->thread.fpu; 231 - regs = current->thread.fpu.regs; 232 - 233 - asm volatile("stfpc %0" : "=Q" (state->fpc)); 234 - if (likely(cpu_has_vx())) { 235 - asm volatile("lgr 1,%0\n" 236 - "VSTM 0,15,0,1\n" 237 - "VSTM 16,31,256,1\n" 238 - : 239 - : "d" (regs) 240 - : "1", "cc", "memory"); 241 - } else { 242 - asm volatile("std 0,%0" : "=Q" (regs[0])); 243 - asm volatile("std 1,%0" : "=Q" (regs[1])); 244 - asm volatile("std 2,%0" : "=Q" (regs[2])); 245 - asm volatile("std 3,%0" : "=Q" (regs[3])); 246 - asm volatile("std 4,%0" : "=Q" (regs[4])); 247 - asm volatile("std 5,%0" : "=Q" (regs[5])); 248 - asm volatile("std 6,%0" : "=Q" (regs[6])); 249 - asm volatile("std 7,%0" : "=Q" (regs[7])); 250 - asm volatile("std 8,%0" : "=Q" (regs[8])); 251 - asm volatile("std 9,%0" : "=Q" (regs[9])); 252 - asm volatile("std 10,%0" : "=Q" (regs[10])); 253 - asm volatile("std 11,%0" : "=Q" (regs[11])); 254 - asm volatile("std 12,%0" : "=Q" (regs[12])); 255 - asm volatile("std 13,%0" : "=Q" (regs[13])); 256 - asm volatile("std 14,%0" : "=Q" (regs[14])); 257 - asm volatile("std 15,%0" : "=Q" (regs[15])); 122 + mask = flags & KERNEL_VXR; 123 + if (mask == KERNEL_VXR) { 124 + fpu_vlm(0, 15, &vxrs[0]); 125 + fpu_vlm(16, 31, &vxrs[16]); 126 + return; 258 127 } 259 - set_cpu_flag(CIF_FPU); 260 - out: 261 - local_irq_restore(flags); 128 + if (mask == KERNEL_VXR_MID) { 129 + fpu_vlm(8, 23, &vxrs[8]); 130 + return; 131 + } 132 + mask = flags & KERNEL_VXR_LOW; 133 + if (mask) { 134 + if (mask == 
KERNEL_VXR_LOW) 135 + fpu_vlm(0, 15, &vxrs[0]); 136 + else if (mask == KERNEL_VXR_V0V7) 137 + fpu_vlm(0, 7, &vxrs[0]); 138 + else 139 + fpu_vlm(8, 15, &vxrs[8]); 140 + } 141 + mask = flags & KERNEL_VXR_HIGH; 142 + if (mask) { 143 + if (mask == KERNEL_VXR_HIGH) 144 + fpu_vlm(16, 31, &vxrs[16]); 145 + else if (mask == KERNEL_VXR_V16V23) 146 + fpu_vlm(16, 23, &vxrs[16]); 147 + else 148 + fpu_vlm(24, 31, &vxrs[24]); 149 + } 262 150 } 263 - EXPORT_SYMBOL(save_fpu_regs); 151 + 152 + void save_fpu_state(struct fpu *state, int flags) 153 + { 154 + __vector128 *vxrs = &state->vxrs[0]; 155 + int mask; 156 + 157 + if (flags & KERNEL_FPC) 158 + fpu_stfpc(&state->fpc); 159 + if (!cpu_has_vx()) { 160 + if (flags & KERNEL_VXR_LOW) 161 + save_fp_regs_vx(state->vxrs); 162 + return; 163 + } 164 + mask = flags & KERNEL_VXR; 165 + if (mask == KERNEL_VXR) { 166 + fpu_vstm(0, 15, &vxrs[0]); 167 + fpu_vstm(16, 31, &vxrs[16]); 168 + return; 169 + } 170 + if (mask == KERNEL_VXR_MID) { 171 + fpu_vstm(8, 23, &vxrs[8]); 172 + return; 173 + } 174 + mask = flags & KERNEL_VXR_LOW; 175 + if (mask) { 176 + if (mask == KERNEL_VXR_LOW) 177 + fpu_vstm(0, 15, &vxrs[0]); 178 + else if (mask == KERNEL_VXR_V0V7) 179 + fpu_vstm(0, 7, &vxrs[0]); 180 + else 181 + fpu_vstm(8, 15, &vxrs[8]); 182 + } 183 + mask = flags & KERNEL_VXR_HIGH; 184 + if (mask) { 185 + if (mask == KERNEL_VXR_HIGH) 186 + fpu_vstm(16, 31, &vxrs[16]); 187 + else if (mask == KERNEL_VXR_V16V23) 188 + fpu_vstm(16, 23, &vxrs[16]); 189 + else 190 + fpu_vstm(24, 31, &vxrs[24]); 191 + } 192 + } 193 + EXPORT_SYMBOL(save_fpu_state);
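The mask dispatch used by the rewritten __kernel_fpu_begin()/__kernel_fpu_end() can be sketched in plain C. Everything below is illustrative: fpu_vstm_stub() stands in for the single-instruction vstm wrapper, and the KERNEL_* bit values are hypothetical (the real definitions live in the s390 FPU headers); only the dispatch shape is the point.

```c
#include <assert.h>

/* Hypothetical flag bits mirroring the KERNEL_VXR_* layout: four groups
 * of eight vector registers each, plus the floating point control bit. */
#define KERNEL_FPC        0x01
#define KERNEL_VXR_V0V7   0x02
#define KERNEL_VXR_V8V15  0x04
#define KERNEL_VXR_V16V23 0x08
#define KERNEL_VXR_V24V31 0x10

#define KERNEL_VXR_LOW  (KERNEL_VXR_V0V7 | KERNEL_VXR_V8V15)
#define KERNEL_VXR_MID  (KERNEL_VXR_V8V15 | KERNEL_VXR_V16V23)
#define KERNEL_VXR_HIGH (KERNEL_VXR_V16V23 | KERNEL_VXR_V24V31)
#define KERNEL_VXR      (KERNEL_VXR_LOW | KERNEL_VXR_HIGH)

/* Stand-in for the vstm wrapper: mark registers first..last as saved
 * and return the number of register slots consumed in the save area. */
static int fpu_vstm_stub(int first, int last, unsigned char *saved)
{
	for (int i = first; i <= last; i++)
		saved[i] = 1;
	return last - first + 1;
}

/* Same structure as the new __kernel_fpu_begin(): prefer one wide store
 * when a whole contiguous group is requested, otherwise fall back to
 * the individual eight-register ranges. */
static void save_vxrs(int flags, unsigned char *saved)
{
	int mask = flags & KERNEL_VXR;

	if (mask == KERNEL_VXR) {
		fpu_vstm_stub(0, 15, saved);
		fpu_vstm_stub(16, 31, saved);
		return;
	}
	if (mask == KERNEL_VXR_MID) {
		fpu_vstm_stub(8, 23, saved);
		return;
	}
	mask = flags & KERNEL_VXR_LOW;
	if (mask) {
		if (mask == KERNEL_VXR_LOW)
			fpu_vstm_stub(0, 15, saved);
		else if (mask == KERNEL_VXR_V0V7)
			fpu_vstm_stub(0, 7, saved);
		else
			fpu_vstm_stub(8, 15, saved);
	}
	mask = flags & KERNEL_VXR_HIGH;
	if (mask) {
		if (mask == KERNEL_VXR_HIGH)
			fpu_vstm_stub(16, 31, saved);
		else if (mask == KERNEL_VXR_V16V23)
			fpu_vstm_stub(16, 23, saved);
		else
			fpu_vstm_stub(24, 31, saved);
	}
}
```

Compared with the removed inline assembly, the same special cases survive (all 32 registers, the V8..V23 "mid" case, then low and high halves), but each case is now an ordinary C branch around a single-instruction helper, which is what makes instrumentation hooks straightforward to add.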
+1 -2
arch/s390/kernel/ipl.c
···
1941 1941		    reipl_type == IPL_TYPE_UNKNOWN)
1942 1942			os_info_flags |= OS_INFO_FLAG_REIPL_CLEAR;
1943 1943		os_info_entry_add(OS_INFO_FLAGS_ENTRY, &os_info_flags, sizeof(os_info_flags));
1944      -	csum = (__force unsigned int)
1945      -		csum_partial(reipl_block_actual, reipl_block_actual->hdr.len, 0);
     1944 +	csum = (__force unsigned int)cksm(reipl_block_actual, reipl_block_actual->hdr.len, 0);
1946 1945	abs_lc = get_abs_lowcore();
1947 1946	abs_lc->ipib = __pa(reipl_block_actual);
1948 1947	abs_lc->ipib_checksum = csum;
+2 -1
arch/s390/kernel/machine_kexec.c
···
13 13	#include <linux/reboot.h>
14 14	#include <linux/ftrace.h>
15 15	#include <linux/debug_locks.h>
   16 +	#include <asm/guarded_storage.h>
16 17	#include <asm/pfault.h>
17 18	#include <asm/cio.h>
   19 +	#include <asm/fpu.h>
18 20	#include <asm/setup.h>
19 21	#include <asm/smp.h>
20 22	#include <asm/ipl.h>
···
28 26	#include <asm/os_info.h>
29 27	#include <asm/set_memory.h>
30 28	#include <asm/stacktrace.h>
31    -	#include <asm/switch_to.h>
32 29	#include <asm/nmi.h>
33 30	#include <asm/sclp.h>
34 31
+48 -120
arch/s390/kernel/nmi.c
··· 23 23 #include <linux/export.h> 24 24 #include <asm/lowcore.h> 25 25 #include <asm/ctlreg.h> 26 + #include <asm/fpu.h> 26 27 #include <asm/smp.h> 27 28 #include <asm/stp.h> 28 29 #include <asm/cputime.h> 29 30 #include <asm/nmi.h> 30 31 #include <asm/crw.h> 31 - #include <asm/switch_to.h> 32 32 #include <asm/asm-offsets.h> 33 33 #include <asm/pai.h> 34 - #include <asm/vx-insn.h> 35 - #include <asm/fpu/api.h> 36 34 37 35 struct mcck_struct { 38 36 unsigned int kill_task : 1; ··· 202 204 } 203 205 } 204 206 205 - /* 206 - * returns 0 if register contents could be validated 207 - * returns 1 otherwise 207 + /** 208 + * nmi_registers_valid - verify if registers are valid 209 + * @mci: machine check interruption code 210 + * 211 + * Inspect a machine check interruption code and verify if all required 212 + * registers are valid. For some registers the corresponding validity bit is 213 + * ignored and the registers are set to the expected value. 214 + * Returns true if all registers are valid, otherwise false. 
208 215 */ 209 - static int notrace s390_validate_registers(union mci mci) 216 + static bool notrace nmi_registers_valid(union mci mci) 210 217 { 211 - struct mcesa *mcesa; 212 - void *fpt_save_area; 213 218 union ctlreg2 cr2; 214 - int kill_task; 215 - u64 zero; 216 219 217 - kill_task = 0; 218 - zero = 0; 219 - 220 - if (!mci.gr || !mci.fp) 221 - kill_task = 1; 222 - fpt_save_area = &S390_lowcore.floating_pt_save_area; 223 - if (!mci.fc) { 224 - kill_task = 1; 225 - asm volatile( 226 - " lfpc %0\n" 227 - : 228 - : "Q" (zero)); 229 - } else { 230 - asm volatile( 231 - " lfpc %0\n" 232 - : 233 - : "Q" (S390_lowcore.fpt_creg_save_area)); 234 - } 235 - 236 - mcesa = __va(S390_lowcore.mcesad & MCESA_ORIGIN_MASK); 237 - if (!cpu_has_vx()) { 238 - /* Validate floating point registers */ 239 - asm volatile( 240 - " ld 0,0(%0)\n" 241 - " ld 1,8(%0)\n" 242 - " ld 2,16(%0)\n" 243 - " ld 3,24(%0)\n" 244 - " ld 4,32(%0)\n" 245 - " ld 5,40(%0)\n" 246 - " ld 6,48(%0)\n" 247 - " ld 7,56(%0)\n" 248 - " ld 8,64(%0)\n" 249 - " ld 9,72(%0)\n" 250 - " ld 10,80(%0)\n" 251 - " ld 11,88(%0)\n" 252 - " ld 12,96(%0)\n" 253 - " ld 13,104(%0)\n" 254 - " ld 14,112(%0)\n" 255 - " ld 15,120(%0)\n" 256 - : 257 - : "a" (fpt_save_area) 258 - : "memory"); 259 - } else { 260 - /* Validate vector registers */ 261 - union ctlreg0 cr0; 262 - 263 - /* 264 - * The vector validity must only be checked if not running a 265 - * KVM guest. For KVM guests the machine check is forwarded by 266 - * KVM and it is the responsibility of the guest to take 267 - * appropriate actions. The host vector or FPU values have been 268 - * saved by KVM and will be restored by KVM. 
269 - */ 270 - if (!mci.vr && !test_cpu_flag(CIF_MCCK_GUEST)) 271 - kill_task = 1; 272 - cr0.reg = S390_lowcore.cregs_save_area[0]; 273 - cr0.afp = cr0.vx = 1; 274 - local_ctl_load(0, &cr0.reg); 275 - asm volatile( 276 - " la 1,%0\n" 277 - " VLM 0,15,0,1\n" 278 - " VLM 16,31,256,1\n" 279 - : 280 - : "Q" (*(struct vx_array *)mcesa->vector_save_area) 281 - : "1"); 282 - local_ctl_load(0, &S390_lowcore.cregs_save_area[0]); 283 - } 284 - /* Validate access registers */ 285 - asm volatile( 286 - " lam 0,15,0(%0)\n" 287 - : 288 - : "a" (&S390_lowcore.access_regs_save_area) 289 - : "memory"); 290 - if (!mci.ar) 291 - kill_task = 1; 292 - /* Validate guarded storage registers */ 293 - cr2.reg = S390_lowcore.cregs_save_area[2]; 294 - if (cr2.gse) { 295 - if (!mci.gs) { 296 - /* 297 - * 2 cases: 298 - * - machine check in kernel or userspace 299 - * - machine check while running SIE (KVM guest) 300 - * For kernel or userspace the userspace values of 301 - * guarded storage control can not be recreated, the 302 - * process must be terminated. 303 - * For SIE the guest values of guarded storage can not 304 - * be recreated. This is either due to a bug or due to 305 - * GS being disabled in the guest. The guest will be 306 - * notified by KVM code and the guests machine check 307 - * handling must take care of this. The host values 308 - * are saved by KVM and are not affected. 309 - */ 310 - if (!test_cpu_flag(CIF_MCCK_GUEST)) 311 - kill_task = 1; 312 - } else { 313 - load_gs_cb((struct gs_cb *)mcesa->guarded_storage_save_area); 314 - } 315 - } 316 220 /* 317 - * The getcpu vdso syscall reads CPU number from the programmable 221 + * The getcpu vdso syscall reads the CPU number from the programmable 318 222 * field of the TOD clock. Disregard the TOD programmable register 319 - * validity bit and load the CPU number into the TOD programmable 320 - * field unconditionally. 223 + * validity bit and load the CPU number into the TOD programmable field 224 + * unconditionally. 
321 225 */ 322 226 set_tod_programmable_field(raw_smp_processor_id()); 323 - /* Validate clock comparator register */ 227 + /* 228 + * Set the clock comparator register to the next expected value. 229 + */ 324 230 set_clock_comparator(S390_lowcore.clock_comparator); 325 - 231 + if (!mci.gr || !mci.fp || !mci.fc) 232 + return false; 233 + /* 234 + * The vector validity must only be checked if not running a 235 + * KVM guest. For KVM guests the machine check is forwarded by 236 + * KVM and it is the responsibility of the guest to take 237 + * appropriate actions. The host vector or FPU values have been 238 + * saved by KVM and will be restored by KVM. 239 + */ 240 + if (!mci.vr && !test_cpu_flag(CIF_MCCK_GUEST)) 241 + return false; 242 + if (!mci.ar) 243 + return false; 244 + /* 245 + * Two cases for guarded storage registers: 246 + * - machine check in kernel or userspace 247 + * - machine check while running SIE (KVM guest) 248 + * For kernel or userspace the userspace values of guarded storage 249 + * control can not be recreated, the process must be terminated. 250 + * For SIE the guest values of guarded storage can not be recreated. 251 + * This is either due to a bug or due to GS being disabled in the 252 + * guest. The guest will be notified by KVM code and the guests machine 253 + * check handling must take care of this. The host values are saved by 254 + * KVM and are not affected. 
255 + */ 256 + cr2.reg = S390_lowcore.cregs_save_area[2]; 257 + if (cr2.gse && !mci.gs && !test_cpu_flag(CIF_MCCK_GUEST)) 258 + return false; 326 259 if (!mci.ms || !mci.pm || !mci.ia) 327 - kill_task = 1; 328 - 329 - return kill_task; 260 + return false; 261 + return true; 330 262 } 331 - NOKPROBE_SYMBOL(s390_validate_registers); 263 + NOKPROBE_SYMBOL(nmi_registers_valid); 332 264 333 265 /* 334 266 * Backup the guest's machine check info to its description block ··· 356 428 s390_handle_damage(); 357 429 } 358 430 } 359 - if (s390_validate_registers(mci)) { 431 + if (!nmi_registers_valid(mci)) { 360 432 if (!user_mode(regs)) 361 433 s390_handle_damage(); 362 434 /*
+3 -3
arch/s390/kernel/os_info.c
···
29 29	u32 os_info_csum(struct os_info *os_info)
30 30	{
31 31		int size = sizeof(*os_info) - offsetof(struct os_info, version_major);
32    -		return (__force u32)csum_partial(&os_info->version_major, size, 0);
   32 +		return (__force u32)cksm(&os_info->version_major, size, 0);
33 33	}
34 34
35 35	/*
···
49 49	{
50 50		os_info.entry[nr].addr = __pa(ptr);
51 51		os_info.entry[nr].size = size;
52    -		os_info.entry[nr].csum = (__force u32)csum_partial(ptr, size, 0);
   52 +		os_info.entry[nr].csum = (__force u32)cksm(ptr, size, 0);
53 53		os_info.csum = os_info_csum(&os_info);
54 54	}
55 55
···
98 98			msg = "copy failed";
99 99			goto fail_free;
100 100		}
101     -		csum = (__force u32)csum_partial(buf_align, size, 0);
    101 +		csum = (__force u32)cksm(buf_align, size, 0);
102 102		if (csum != os_info_old->entry[nr].csum) {
103 103			msg = "checksum failed";
104 104			goto fail_free;
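The os_info_csum() pattern, checksumming everything in a struct from a given member to the end while leaving the checksum field itself out, can be sketched as below. The struct layout and cksm_stub() are assumptions: the real cksm() wraps the s390 CKSM instruction, and a plain byte sum stands in for it here.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical cut-down layout: the checksum field comes first, the
 * checksummed region starts at version_major. */
struct os_info_sketch {
	uint32_t csum;
	uint16_t version_major;
	uint16_t version_minor;
	uint64_t crashkernel_addr;
};

/* Stand-in for the CKSM-based cksm() helper: a simple 32-bit byte sum. */
static uint32_t cksm_stub(const void *buf, size_t len)
{
	const unsigned char *p = buf;
	uint32_t sum = 0;

	while (len--)
		sum += *p++;
	return sum;
}

/* Same offsetof() trick as os_info_csum(): cover every byte from
 * version_major through the end of the struct, excluding csum itself. */
static uint32_t os_info_csum_sketch(const struct os_info_sketch *os_info)
{
	size_t size = sizeof(*os_info) -
		      offsetof(struct os_info_sketch, version_major);

	return cksm_stub(&os_info->version_major, size);
}
```

Because the region starts after the csum member, the stored checksum can be rewritten without invalidating itself, which is what lets os_info_entry_add() update os_info.csum after each entry change.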
+55 -28
arch/s390/kernel/perf_pai_crypto.c
··· 98 98 event->attr.config, event->cpu, 99 99 cpump->active_events, cpump->mode, 100 100 refcount_read(&cpump->refcnt)); 101 + free_page(PAI_SAVE_AREA(event)); 101 102 if (refcount_dec_and_test(&cpump->refcnt)) { 102 103 debug_sprintf_event(cfm_dbg, 4, "%s page %#lx save %p\n", 103 104 __func__, (unsigned long)cpump->page, ··· 261 260 { 262 261 struct perf_event_attr *a = &event->attr; 263 262 struct paicrypt_map *cpump; 263 + int rc = 0; 264 264 265 265 /* PAI crypto PMU registered as PERF_TYPE_RAW, check event type */ 266 266 if (a->type != PERF_TYPE_RAW && event->pmu->type != a->type) ··· 276 274 /* Allow only CRYPTO_ALL for sampling. */ 277 275 if (a->sample_period && a->config != PAI_CRYPTO_BASE) 278 276 return -EINVAL; 277 + /* Get a page to store last counter values for sampling */ 278 + if (a->sample_period) { 279 + PAI_SAVE_AREA(event) = get_zeroed_page(GFP_KERNEL); 280 + if (!PAI_SAVE_AREA(event)) { 281 + rc = -ENOMEM; 282 + goto out; 283 + } 284 + } 279 285 280 286 cpump = paicrypt_busy(event); 281 - if (IS_ERR(cpump)) 282 - return PTR_ERR(cpump); 287 + if (IS_ERR(cpump)) { 288 + free_page(PAI_SAVE_AREA(event)); 289 + rc = PTR_ERR(cpump); 290 + goto out; 291 + } 283 292 284 293 event->destroy = paicrypt_event_destroy; 285 294 ··· 306 293 } 307 294 308 295 static_branch_inc(&pai_key); 309 - return 0; 296 + out: 297 + return rc; 310 298 } 311 299 312 300 static void paicrypt_read(struct perf_event *event) ··· 324 310 325 311 static void paicrypt_start(struct perf_event *event, int flags) 326 312 { 313 + struct paicrypt_mapptr *mp = this_cpu_ptr(paicrypt_root.mapptr); 314 + struct paicrypt_map *cpump = mp->mapptr; 327 315 u64 sum; 328 316 329 - /* Event initialization sets last_tag to 0. When later on the events 330 - * are deleted and re-added, do not reset the event count value to zero. 331 - * Events are added, deleted and re-added when 2 or more events 332 - * are active at the same time. 
333 - */ 334 317 if (!event->attr.sample_period) { /* Counting */ 335 - if (!event->hw.last_tag) { 336 - event->hw.last_tag = 1; 337 - sum = paicrypt_getall(event); /* Get current value */ 338 - local64_set(&event->hw.prev_count, sum); 339 - } 318 + sum = paicrypt_getall(event); /* Get current value */ 319 + local64_set(&event->hw.prev_count, sum); 340 320 } else { /* Sampling */ 321 + cpump->event = event; 341 322 perf_sched_cb_inc(event->pmu); 342 323 } 343 324 } ··· 348 339 WRITE_ONCE(S390_lowcore.ccd, ccd); 349 340 local_ctl_set_bit(0, CR0_CRYPTOGRAPHY_COUNTER_BIT); 350 341 } 351 - cpump->event = event; 352 342 if (flags & PERF_EF_START) 353 343 paicrypt_start(event, PERF_EF_RELOAD); 354 344 event->hw.state = 0; ··· 375 367 } 376 368 } 377 369 378 - /* Create raw data and save it in buffer. Returns number of bytes copied. 379 - * Saves only positive counter entries of the form 370 + /* Create raw data and save it in buffer. Calculate the delta for each 371 + * counter between this invocation and the last invocation. 372 + * Returns number of bytes copied. 
373 + * Saves only entries with positive counter difference of the form 380 374 * 2 bytes: Number of counter 381 375 * 8 bytes: Value of counter 382 376 */ 383 377 static size_t paicrypt_copy(struct pai_userdata *userdata, unsigned long *page, 384 - bool exclude_user, bool exclude_kernel) 378 + unsigned long *page_old, bool exclude_user, 379 + bool exclude_kernel) 385 380 { 386 381 int i, outidx = 0; 387 382 388 383 for (i = 1; i <= paicrypt_cnt; i++) { 389 - u64 val = 0; 384 + u64 val = 0, val_old = 0; 390 385 391 - if (!exclude_kernel) 386 + if (!exclude_kernel) { 392 387 val += paicrypt_getctr(page, i, true); 393 - if (!exclude_user) 388 + val_old += paicrypt_getctr(page_old, i, true); 389 + } 390 + if (!exclude_user) { 394 391 val += paicrypt_getctr(page, i, false); 392 + val_old += paicrypt_getctr(page_old, i, false); 393 + } 394 + if (val >= val_old) 395 + val -= val_old; 396 + else 397 + val = (~0ULL - val_old) + val + 1; 395 398 if (val) { 396 399 userdata[outidx].num = i; 397 400 userdata[outidx].value = val; ··· 445 426 446 427 overflow = perf_event_overflow(event, &data, &regs); 447 428 perf_event_update_userpage(event); 448 - /* Clear lowcore page after read */ 449 - memset(cpump->page, 0, PAGE_SIZE); 429 + /* Save crypto counter lowcore page after reading event data. 
*/ 430 + memcpy((void *)PAI_SAVE_AREA(event), cpump->page, PAGE_SIZE); 450 431 return overflow; 451 432 } 452 433 ··· 462 443 if (!event) /* No event active */ 463 444 return 0; 464 445 rawsize = paicrypt_copy(cpump->save, cpump->page, 446 + (unsigned long *)PAI_SAVE_AREA(event), 465 447 cpump->event->attr.exclude_user, 466 448 cpump->event->attr.exclude_kernel); 467 449 if (rawsize) /* No incremented counters */ ··· 714 694 { 715 695 struct perf_pmu_events_attr *pa; 716 696 697 + /* Index larger than array_size, no counter name available */ 698 + if (num >= ARRAY_SIZE(paicrypt_ctrnames)) { 699 + attrs[num] = NULL; 700 + return 0; 701 + } 702 + 717 703 pa = kzalloc(sizeof(*pa), GFP_KERNEL); 718 704 if (!pa) 719 705 return -ENOMEM; ··· 740 714 struct attribute **attrs; 741 715 int ret, i; 742 716 743 - attrs = kmalloc_array(ARRAY_SIZE(paicrypt_ctrnames) + 1, sizeof(*attrs), 744 - GFP_KERNEL); 717 + attrs = kmalloc_array(paicrypt_cnt + 2, sizeof(*attrs), GFP_KERNEL); 745 718 if (!attrs) 746 719 return -ENOMEM; 747 - for (i = 0; i < ARRAY_SIZE(paicrypt_ctrnames); i++) { 720 + for (i = 0; i <= paicrypt_cnt; i++) { 748 721 ret = attr_event_init_one(attrs, i); 749 722 if (ret) { 750 - attr_event_free(attrs, i - 1); 723 + attr_event_free(attrs, i); 751 724 return ret; 752 725 } 753 726 } ··· 767 742 paicrypt_cnt = ib.num_cc; 768 743 if (paicrypt_cnt == 0) 769 744 return 0; 770 - if (paicrypt_cnt >= PAI_CRYPTO_MAXCTR) 771 - paicrypt_cnt = PAI_CRYPTO_MAXCTR - 1; 745 + if (paicrypt_cnt >= PAI_CRYPTO_MAXCTR) { 746 + pr_err("Too many PMU pai_crypto counters %d\n", paicrypt_cnt); 747 + return -E2BIG; 748 + } 772 749 773 750 rc = attr_event_init(); /* Export known PAI crypto events */ 774 751 if (rc) {
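The per-counter delta added to paicrypt_copy() (and paiext_copy() below) handles a counter that wrapped between the saved snapshot and the current read. A minimal standalone version of that computation:

```c
#include <stdint.h>

/* Delta of a free-running counter between two reads, tolerating one
 * wraparound between the old and the new sample. This mirrors the
 * val/val_old computation added to the PAI copy helpers. */
static uint64_t counter_delta(uint64_t val, uint64_t val_old)
{
	if (val >= val_old)
		return val - val_old;
	/* Wrapped: distance from val_old up to ~0ULL, plus val, plus one
	 * for the step from ~0ULL back to 0. (In 64-bit unsigned
	 * arithmetic this equals val - val_old anyway; the branch makes
	 * the wraparound intent explicit.) */
	return (~0ULL - val_old) + val + 1;
}
```

Only entries with a nonzero delta are emitted into the sample, so the saved lowcore page (PAI_SAVE_AREA) replaces the old approach of clearing the page after every read.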
+36 -16
arch/s390/kernel/perf_pai_ext.c
··· 120 120 struct paiext_mapptr *mp = per_cpu_ptr(paiext_root.mapptr, event->cpu); 121 121 struct paiext_map *cpump = mp->mapptr; 122 122 123 + free_page(PAI_SAVE_AREA(event)); 123 124 mutex_lock(&paiext_reserve_mutex); 124 125 cpump->event = NULL; 125 126 if (refcount_dec_and_test(&cpump->refcnt)) /* Last reference gone */ ··· 203 202 } 204 203 205 204 rc = 0; 206 - cpump->event = event; 207 205 208 206 undo: 209 207 if (rc) { ··· 256 256 /* Prohibit exclude_user event selection */ 257 257 if (a->exclude_user) 258 258 return -EINVAL; 259 + /* Get a page to store last counter values for sampling */ 260 + if (a->sample_period) { 261 + PAI_SAVE_AREA(event) = get_zeroed_page(GFP_KERNEL); 262 + if (!PAI_SAVE_AREA(event)) 263 + return -ENOMEM; 264 + } 259 265 260 266 rc = paiext_alloc(a, event); 261 - if (rc) 267 + if (rc) { 268 + free_page(PAI_SAVE_AREA(event)); 262 269 return rc; 270 + } 263 271 event->destroy = paiext_event_destroy; 264 272 265 273 if (a->sample_period) { ··· 327 319 328 320 static void paiext_start(struct perf_event *event, int flags) 329 321 { 322 + struct paiext_mapptr *mp = this_cpu_ptr(paiext_root.mapptr); 323 + struct paiext_map *cpump = mp->mapptr; 330 324 u64 sum; 331 325 332 326 if (!event->attr.sample_period) { /* Counting */ 333 - if (!event->hw.last_tag) { 334 - event->hw.last_tag = 1; 335 - sum = paiext_getall(event); /* Get current value */ 336 - local64_set(&event->hw.prev_count, sum); 337 - } 327 + sum = paiext_getall(event); /* Get current value */ 328 + local64_set(&event->hw.prev_count, sum); 338 329 } else { /* Sampling */ 330 + cpump->event = event; 339 331 perf_sched_cb_inc(event->pmu); 340 332 } 341 333 } ··· 354 346 debug_sprintf_event(paiext_dbg, 4, "%s 1508 %llx acc %llx\n", 355 347 __func__, S390_lowcore.aicd, pcb->acc); 356 348 } 357 - cpump->event = event; 358 349 if (flags & PERF_EF_START) 359 350 paiext_start(event, PERF_EF_RELOAD); 360 351 event->hw.state = 0; ··· 391 384 * 2 bytes: Number of counter 392 385 * 8 
bytes: Value of counter 393 386 */ 394 - static size_t paiext_copy(struct pai_userdata *userdata, unsigned long *area) 387 + static size_t paiext_copy(struct pai_userdata *userdata, unsigned long *area, 388 + unsigned long *area_old) 395 389 { 396 390 int i, outidx = 0; 397 391 398 392 for (i = 1; i <= paiext_cnt; i++) { 399 393 u64 val = paiext_getctr(area, i); 394 + u64 val_old = paiext_getctr(area_old, i); 400 395 396 + if (val >= val_old) 397 + val -= val_old; 398 + else 399 + val = (~0ULL - val_old) + val + 1; 401 400 if (val) { 402 401 userdata[outidx].num = i; 403 402 userdata[outidx].value = val; ··· 459 446 460 447 overflow = perf_event_overflow(event, &data, &regs); 461 448 perf_event_update_userpage(event); 462 - /* Clear lowcore area after read */ 463 - memset(cpump->area, 0, PAIE1_CTRBLOCK_SZ); 449 + /* Save NNPA lowcore area after read in event */ 450 + memcpy((void *)PAI_SAVE_AREA(event), cpump->area, 451 + PAIE1_CTRBLOCK_SZ); 464 452 return overflow; 465 453 } 466 454 ··· 476 462 477 463 if (!event) 478 464 return 0; 479 - rawsize = paiext_copy(cpump->save, cpump->area); 465 + rawsize = paiext_copy(cpump->save, cpump->area, 466 + (unsigned long *)PAI_SAVE_AREA(event)); 480 467 if (rawsize) /* Incremented counters */ 481 468 rc = paiext_push_sample(rawsize, cpump, event); 482 469 return rc; ··· 599 584 { 600 585 struct perf_pmu_events_attr *pa; 601 586 587 + /* Index larger than array_size, no counter name available */ 588 + if (num >= ARRAY_SIZE(paiext_ctrnames)) { 589 + attrs[num] = NULL; 590 + return 0; 591 + } 592 + 602 593 pa = kzalloc(sizeof(*pa), GFP_KERNEL); 603 594 if (!pa) 604 595 return -ENOMEM; ··· 625 604 struct attribute **attrs; 626 605 int ret, i; 627 606 628 - attrs = kmalloc_array(ARRAY_SIZE(paiext_ctrnames) + 1, sizeof(*attrs), 629 - GFP_KERNEL); 607 + attrs = kmalloc_array(paiext_cnt + 2, sizeof(*attrs), GFP_KERNEL); 630 608 if (!attrs) 631 609 return -ENOMEM; 632 - for (i = 0; i < ARRAY_SIZE(paiext_ctrnames); i++) { 610 + for (i 
= 0; i <= paiext_cnt; i++) { 633 611 ret = attr_event_init_one(attrs, i); 634 612 if (ret) { 635 - attr_event_free(attrs, i - 1); 613 + attr_event_free(attrs, i); 636 614 return ret; 637 615 } 638 616 }
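Both PAI drivers now size their sysfs attribute array from the runtime counter count (paicrypt_cnt + 2, paiext_cnt + 2: counter numbers are 1-based, plus one slot for the NULL terminator) and store NULL for counters the name table does not cover. A sketch of that sizing, with an illustrative name table instead of the real paicrypt_ctrnames[]:

```c
#include <stdlib.h>

/* Hypothetical counter name table; the hardware may report more
 * counters than names are known for. */
static const char *ctrnames[] = { "ALL", "KM_DEA", "KM_TDEA" };

/* Build a NULL-terminated array with slots 0..cnt (counter numbers are
 * 1-based, so cnt + 2 pointers in total). Indices past the name table
 * get a NULL entry, matching the attr_event_init_one() bounds check. */
static const char **build_attrs(int cnt)
{
	int names = (int)(sizeof(ctrnames) / sizeof(ctrnames[0]));
	const char **attrs = malloc((cnt + 2) * sizeof(*attrs));

	if (!attrs)
		return NULL;
	for (int i = 0; i <= cnt; i++)
		attrs[i] = (i < names) ? ctrnames[i] : NULL;
	attrs[cnt + 1] = NULL;	/* terminator */
	return attrs;
}
```

Allocating by counter count rather than ARRAY_SIZE of the name table means a machine reporting fewer counters no longer exports attributes it cannot back, and on error the cleanup path frees exactly the entries initialized so far.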
+3 -7
arch/s390/kernel/perf_regs.c
···
5 5	#include <linux/errno.h>
6 6	#include <linux/bug.h>
7 7	#include <asm/ptrace.h>
8   -	#include <asm/fpu/api.h>
9   -	#include <asm/fpu/types.h>
  8 +	#include <asm/fpu.h>
10 9
11 10	u64 perf_reg_value(struct pt_regs *regs, int idx)
12 11	{
···
19 20			return 0;
20 21
21 22		idx -= PERF_REG_S390_FP0;
22    -	if (cpu_has_vx())
23    -		fp = *(freg_t *)(current->thread.fpu.vxrs + idx);
24    -	else
25    -		fp = current->thread.fpu.fprs[idx];
   23 +	fp = *(freg_t *)(current->thread.ufpu.vxrs + idx);
26 24		return fp.ui;
27 25	}
28 26
···
61 65		 */
62 66		regs_user->regs = task_pt_regs(current);
63 67		if (user_mode(regs_user->regs))
64    -		save_fpu_regs();
   68 +		save_user_fpu_regs();
65 69		regs_user->abi = perf_reg_abi(current);
66 70	}
+25 -6
arch/s390/kernel/process.c
··· 31 31 #include <linux/init_task.h> 32 32 #include <linux/entry-common.h> 33 33 #include <linux/io.h> 34 + #include <asm/guarded_storage.h> 35 + #include <asm/access-regs.h> 36 + #include <asm/switch_to.h> 34 37 #include <asm/cpu_mf.h> 35 38 #include <asm/processor.h> 39 + #include <asm/ptrace.h> 36 40 #include <asm/vtimer.h> 37 41 #include <asm/exec.h> 42 + #include <asm/fpu.h> 38 43 #include <asm/irq.h> 39 44 #include <asm/nmi.h> 40 45 #include <asm/smp.h> 41 46 #include <asm/stacktrace.h> 42 - #include <asm/switch_to.h> 43 47 #include <asm/runtime_instr.h> 44 48 #include <asm/unwind.h> 45 49 #include "entry.h" ··· 88 84 { 89 85 /* 90 86 * Save the floating-point or vector register state of the current 91 - * task and set the CIF_FPU flag to lazy restore the FPU register 87 + * task and set the TIF_FPU flag to lazy restore the FPU register 92 88 * state when returning to user space. 93 89 */ 94 - save_fpu_regs(); 90 + save_user_fpu_regs(); 95 91 96 92 *dst = *src; 97 - dst->thread.fpu.regs = dst->thread.fpu.fprs; 93 + dst->thread.kfpu_flags = 0; 98 94 99 95 /* 100 96 * Don't transfer over the runtime instrumentation or the guarded ··· 190 186 191 187 void execve_tail(void) 192 188 { 193 - current->thread.fpu.fpc = 0; 194 - asm volatile("sfpc %0" : : "d" (0)); 189 + current->thread.ufpu.fpc = 0; 190 + fpu_sfpc(0); 191 + } 192 + 193 + struct task_struct *__switch_to(struct task_struct *prev, struct task_struct *next) 194 + { 195 + save_user_fpu_regs(); 196 + save_kernel_fpu_regs(&prev->thread); 197 + save_access_regs(&prev->thread.acrs[0]); 198 + save_ri_cb(prev->thread.ri_cb); 199 + save_gs_cb(prev->thread.gs_cb); 200 + update_cr_regs(next); 201 + restore_kernel_fpu_regs(&next->thread); 202 + restore_access_regs(&next->thread.acrs[0]); 203 + restore_ri_cb(next->thread.ri_cb, prev->thread.ri_cb); 204 + restore_gs_cb(next->thread.gs_cb); 205 + return __switch_to_asm(prev, next); 195 206 } 196 207 197 208 unsigned long __get_wchan(struct task_struct *p)
+32 -69
arch/s390/kernel/ptrace.c
··· 24 24 #include <linux/seccomp.h> 25 25 #include <linux/compat.h> 26 26 #include <trace/syscall.h> 27 + #include <asm/guarded_storage.h> 28 + #include <asm/access-regs.h> 27 29 #include <asm/page.h> 28 30 #include <linux/uaccess.h> 29 31 #include <asm/unistd.h> 30 - #include <asm/switch_to.h> 31 32 #include <asm/runtime_instr.h> 32 33 #include <asm/facility.h> 33 - #include <asm/fpu/api.h> 34 + #include <asm/fpu.h> 34 35 35 36 #include "entry.h" 36 37 ··· 247 246 /* 248 247 * floating point control reg. is in the thread structure 249 248 */ 250 - tmp = child->thread.fpu.fpc; 249 + tmp = child->thread.ufpu.fpc; 251 250 tmp <<= BITS_PER_LONG - 32; 252 251 253 252 } else if (addr < offsetof(struct user, regs.fp_regs) + sizeof(s390_fp_regs)) { 254 253 /* 255 - * floating point regs. are either in child->thread.fpu 256 - * or the child->thread.fpu.vxrs array 254 + * floating point regs. are in the child->thread.ufpu.vxrs array 257 255 */ 258 256 offset = addr - offsetof(struct user, regs.fp_regs.fprs); 259 - if (cpu_has_vx()) 260 - tmp = *(addr_t *) 261 - ((addr_t) child->thread.fpu.vxrs + 2*offset); 262 - else 263 - tmp = *(addr_t *) 264 - ((addr_t) child->thread.fpu.fprs + offset); 265 - 257 + tmp = *(addr_t *)((addr_t)child->thread.ufpu.vxrs + 2 * offset); 266 258 } else if (addr < offsetof(struct user, regs.per_info) + sizeof(per_struct)) { 267 259 /* 268 260 * Handle access to the per_info structure. ··· 389 395 */ 390 396 if ((unsigned int)data != 0) 391 397 return -EINVAL; 392 - child->thread.fpu.fpc = data >> (BITS_PER_LONG - 32); 398 + child->thread.ufpu.fpc = data >> (BITS_PER_LONG - 32); 393 399 394 400 } else if (addr < offsetof(struct user, regs.fp_regs) + sizeof(s390_fp_regs)) { 395 401 /* 396 - * floating point regs. are either in child->thread.fpu 397 - * or the child->thread.fpu.vxrs array 402 + * floating point regs. 
are in the child->thread.ufpu.vxrs array 398 403 */ 399 404 offset = addr - offsetof(struct user, regs.fp_regs.fprs); 400 - if (cpu_has_vx()) 401 - *(addr_t *)((addr_t) 402 - child->thread.fpu.vxrs + 2*offset) = data; 403 - else 404 - *(addr_t *)((addr_t) 405 - child->thread.fpu.fprs + offset) = data; 406 - 405 + *(addr_t *)((addr_t)child->thread.ufpu.vxrs + 2 * offset) = data; 407 406 } else if (addr < offsetof(struct user, regs.per_info) + sizeof(per_struct)) { 408 407 /* 409 408 * Handle access to the per_info structure. ··· 609 622 /* 610 623 * floating point control reg. is in the thread structure 611 624 */ 612 - tmp = child->thread.fpu.fpc; 625 + tmp = child->thread.ufpu.fpc; 613 626 614 627 } else if (addr < offsetof(struct compat_user, regs.fp_regs) + sizeof(s390_fp_regs)) { 615 628 /* 616 - * floating point regs. are either in child->thread.fpu 617 - * or the child->thread.fpu.vxrs array 629 + * floating point regs. are in the child->thread.ufpu.vxrs array 618 630 */ 619 631 offset = addr - offsetof(struct compat_user, regs.fp_regs.fprs); 620 - if (cpu_has_vx()) 621 - tmp = *(__u32 *) 622 - ((addr_t) child->thread.fpu.vxrs + 2*offset); 623 - else 624 - tmp = *(__u32 *) 625 - ((addr_t) child->thread.fpu.fprs + offset); 626 - 632 + tmp = *(__u32 *)((addr_t)child->thread.ufpu.vxrs + 2 * offset); 627 633 } else if (addr < offsetof(struct compat_user, regs.per_info) + sizeof(struct compat_per_struct_kernel)) { 628 634 /* 629 635 * Handle access to the per_info structure. ··· 728 748 /* 729 749 * floating point control reg. is in the thread structure 730 750 */ 731 - child->thread.fpu.fpc = data; 751 + child->thread.ufpu.fpc = data; 732 752 733 753 } else if (addr < offsetof(struct compat_user, regs.fp_regs) + sizeof(s390_fp_regs)) { 734 754 /* 735 - * floating point regs. are either in child->thread.fpu 736 - * or the child->thread.fpu.vxrs array 755 + * floating point regs. 
are in the child->thread.ufpu.vxrs array 737 756 */ 738 757 offset = addr - offsetof(struct compat_user, regs.fp_regs.fprs); 739 - if (cpu_has_vx()) 740 - *(__u32 *)((addr_t) 741 - child->thread.fpu.vxrs + 2*offset) = tmp; 742 - else 743 - *(__u32 *)((addr_t) 744 - child->thread.fpu.fprs + offset) = tmp; 745 - 758 + *(__u32 *)((addr_t)child->thread.ufpu.vxrs + 2 * offset) = tmp; 746 759 } else if (addr < offsetof(struct compat_user, regs.per_info) + sizeof(struct compat_per_struct_kernel)) { 747 760 /* 748 761 * Handle access to the per_info structure. ··· 866 893 _s390_fp_regs fp_regs; 867 894 868 895 if (target == current) 869 - save_fpu_regs(); 896 + save_user_fpu_regs(); 870 897 871 - fp_regs.fpc = target->thread.fpu.fpc; 872 - fpregs_store(&fp_regs, &target->thread.fpu); 898 + fp_regs.fpc = target->thread.ufpu.fpc; 899 + fpregs_store(&fp_regs, &target->thread.ufpu); 873 900 874 901 return membuf_write(&to, &fp_regs, sizeof(fp_regs)); 875 902 } ··· 883 910 freg_t fprs[__NUM_FPRS]; 884 911 885 912 if (target == current) 886 - save_fpu_regs(); 887 - 888 - if (cpu_has_vx()) 889 - convert_vx_to_fp(fprs, target->thread.fpu.vxrs); 890 - else 891 - memcpy(&fprs, target->thread.fpu.fprs, sizeof(fprs)); 892 - 913 + save_user_fpu_regs(); 914 + convert_vx_to_fp(fprs, target->thread.ufpu.vxrs); 893 915 if (count > 0 && pos < offsetof(s390_fp_regs, fprs)) { 894 - u32 ufpc[2] = { target->thread.fpu.fpc, 0 }; 916 + u32 ufpc[2] = { target->thread.ufpu.fpc, 0 }; 895 917 rc = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &ufpc, 896 918 0, offsetof(s390_fp_regs, fprs)); 897 919 if (rc) 898 920 return rc; 899 921 if (ufpc[1] != 0) 900 922 return -EINVAL; 901 - target->thread.fpu.fpc = ufpc[0]; 923 + target->thread.ufpu.fpc = ufpc[0]; 902 924 } 903 925 904 926 if (rc == 0 && count > 0) ··· 901 933 fprs, offsetof(s390_fp_regs, fprs), -1); 902 934 if (rc) 903 935 return rc; 904 - 905 - if (cpu_has_vx()) 906 - convert_fp_to_vx(target->thread.fpu.vxrs, fprs); 907 - else 908 - 
memcpy(target->thread.fpu.fprs, &fprs, sizeof(fprs)); 909 - 936 + convert_fp_to_vx(target->thread.ufpu.vxrs, fprs); 910 937 return rc; 911 938 } 912 939 ··· 951 988 if (!cpu_has_vx()) 952 989 return -ENODEV; 953 990 if (target == current) 954 - save_fpu_regs(); 991 + save_user_fpu_regs(); 955 992 for (i = 0; i < __NUM_VXRS_LOW; i++) 956 - vxrs[i] = target->thread.fpu.vxrs[i].low; 993 + vxrs[i] = target->thread.ufpu.vxrs[i].low; 957 994 return membuf_write(&to, vxrs, sizeof(vxrs)); 958 995 } 959 996 ··· 968 1005 if (!cpu_has_vx()) 969 1006 return -ENODEV; 970 1007 if (target == current) 971 - save_fpu_regs(); 1008 + save_user_fpu_regs(); 972 1009 973 1010 for (i = 0; i < __NUM_VXRS_LOW; i++) 974 - vxrs[i] = target->thread.fpu.vxrs[i].low; 1011 + vxrs[i] = target->thread.ufpu.vxrs[i].low; 975 1012 976 1013 rc = user_regset_copyin(&pos, &count, &kbuf, &ubuf, vxrs, 0, -1); 977 1014 if (rc == 0) 978 1015 for (i = 0; i < __NUM_VXRS_LOW; i++) 979 - target->thread.fpu.vxrs[i].low = vxrs[i]; 1016 + target->thread.ufpu.vxrs[i].low = vxrs[i]; 980 1017 981 1018 return rc; 982 1019 } ··· 988 1025 if (!cpu_has_vx()) 989 1026 return -ENODEV; 990 1027 if (target == current) 991 - save_fpu_regs(); 992 - return membuf_write(&to, target->thread.fpu.vxrs + __NUM_VXRS_LOW, 1028 + save_user_fpu_regs(); 1029 + return membuf_write(&to, target->thread.ufpu.vxrs + __NUM_VXRS_LOW, 993 1030 __NUM_VXRS_HIGH * sizeof(__vector128)); 994 1031 } 995 1032 ··· 1003 1040 if (!cpu_has_vx()) 1004 1041 return -ENODEV; 1005 1042 if (target == current) 1006 - save_fpu_regs(); 1043 + save_user_fpu_regs(); 1007 1044 1008 1045 rc = user_regset_copyin(&pos, &count, &kbuf, &ubuf, 1009 - target->thread.fpu.vxrs + __NUM_VXRS_LOW, 0, -1); 1046 + target->thread.ufpu.vxrs + __NUM_VXRS_LOW, 0, -1); 1010 1047 return rc; 1011 1048 } 1012 1049
+6 -6
arch/s390/kernel/setup.c
··· 504 504  	int j;
505 505  	u64 i;
506 506
507 -    	code_resource.start = (unsigned long) _text;
508 -    	code_resource.end = (unsigned long) _etext - 1;
509 -    	data_resource.start = (unsigned long) _etext;
510 -    	data_resource.end = (unsigned long) _edata - 1;
511 -    	bss_resource.start = (unsigned long) __bss_start;
512 -    	bss_resource.end = (unsigned long) __bss_stop - 1;
507 +    	code_resource.start = __pa_symbol(_text);
508 +    	code_resource.end = __pa_symbol(_etext) - 1;
509 +    	data_resource.start = __pa_symbol(_etext);
510 +    	data_resource.end = __pa_symbol(_edata) - 1;
511 +    	bss_resource.start = __pa_symbol(__bss_start);
512 +    	bss_resource.end = __pa_symbol(__bss_stop) - 1;
513 513
514 514  	for_each_mem_range(i, &start, &end) {
515 515  		res = memblock_alloc(sizeof(*res), 8);
+10 -10
arch/s390/kernel/signal.c
··· 30 30   #include <linux/compat.h>
31 31   #include <asm/ucontext.h>
32 32   #include <linux/uaccess.h>
33 +    #include <asm/access-regs.h>
33 34   #include <asm/lowcore.h>
34 -    #include <asm/switch_to.h>
35 35   #include <asm/vdso.h>
36 36   #include "entry.h"
37 37
··· 109 109  static void store_sigregs(void)
110 110  {
111 111  	save_access_regs(current->thread.acrs);
112 -    	save_fpu_regs();
112 +    	save_user_fpu_regs();
113 113  }
114 114
115 115  /* Load registers after signal return */
··· 131 131  	memcpy(&user_sregs.regs.gprs, &regs->gprs, sizeof(sregs->regs.gprs));
132 132  	memcpy(&user_sregs.regs.acrs, current->thread.acrs,
133 133  	       sizeof(user_sregs.regs.acrs));
134 -    	fpregs_store(&user_sregs.fpregs, &current->thread.fpu);
134 +    	fpregs_store(&user_sregs.fpregs, &current->thread.ufpu);
135 135  	if (__copy_to_user(sregs, &user_sregs, sizeof(_sigregs)))
136 136  		return -EFAULT;
137 137  	return 0;
··· 165 165  	memcpy(&current->thread.acrs, &user_sregs.regs.acrs,
166 166  	       sizeof(current->thread.acrs));
167 167
168 -    	fpregs_load(&user_sregs.fpregs, &current->thread.fpu);
168 +    	fpregs_load(&user_sregs.fpregs, &current->thread.ufpu);
169 169
170 170  	clear_pt_regs_flag(regs, PIF_SYSCALL); /* No longer in a system call */
171 171  	return 0;
··· 181 181  	/* Save vector registers to signal stack */
182 182  	if (cpu_has_vx()) {
183 183  		for (i = 0; i < __NUM_VXRS_LOW; i++)
184 -    			vxrs[i] = current->thread.fpu.vxrs[i].low;
184 +    			vxrs[i] = current->thread.ufpu.vxrs[i].low;
185 185  		if (__copy_to_user(&sregs_ext->vxrs_low, vxrs,
186 186  				   sizeof(sregs_ext->vxrs_low)) ||
187 187  		    __copy_to_user(&sregs_ext->vxrs_high,
188 -    				   current->thread.fpu.vxrs + __NUM_VXRS_LOW,
188 +    				   current->thread.ufpu.vxrs + __NUM_VXRS_LOW,
189 189  				   sizeof(sregs_ext->vxrs_high)))
190 190  			return -EFAULT;
191 191  	}
··· 202 202  	if (cpu_has_vx()) {
203 203  		if (__copy_from_user(vxrs, &sregs_ext->vxrs_low,
204 204  				     sizeof(sregs_ext->vxrs_low)) ||
205 -    		    __copy_from_user(current->thread.fpu.vxrs + __NUM_VXRS_LOW,
205 +    		    __copy_from_user(current->thread.ufpu.vxrs + __NUM_VXRS_LOW,
206 206  				     &sregs_ext->vxrs_high,
207 207  				     sizeof(sregs_ext->vxrs_high)))
208 208  			return -EFAULT;
209 209  		for (i = 0; i < __NUM_VXRS_LOW; i++)
210 -    			current->thread.fpu.vxrs[i].low = vxrs[i];
210 +    			current->thread.ufpu.vxrs[i].low = vxrs[i];
211 211  	}
212 212  	return 0;
213 213  }
··· 222 222  	if (__copy_from_user(&set.sig, &frame->sc.oldmask, _SIGMASK_COPY_SIZE))
223 223  		goto badframe;
224 224  	set_current_blocked(&set);
225 -    	save_fpu_regs();
225 +    	save_user_fpu_regs();
226 226  	if (restore_sigregs(regs, &frame->sregs))
227 227  		goto badframe;
228 228  	if (restore_sigregs_ext(regs, &frame->sregs_ext))
··· 246 246  	set_current_blocked(&set);
247 247  	if (restore_altstack(&frame->uc.uc_stack))
248 248  		goto badframe;
249 -    	save_fpu_regs();
249 +    	save_user_fpu_regs();
250 250  	if (restore_sigregs(regs, &frame->uc.uc_mcontext))
251 251  		goto badframe;
252 252  	if (restore_sigregs_ext(regs, &frame->uc.uc_mcontext_ext))
+2 -1
arch/s390/kernel/smp.c
··· 36 36   #include <linux/sched/task_stack.h>
37 37   #include <linux/crash_dump.h>
38 38   #include <linux/kprobes.h>
39 +    #include <asm/access-regs.h>
39 40   #include <asm/asm-offsets.h>
40 41   #include <asm/ctlreg.h>
41 42   #include <asm/pfault.h>
42 43   #include <asm/diag.h>
43 -    #include <asm/switch_to.h>
44 44   #include <asm/facility.h>
45 +    #include <asm/fpu.h>
45 46   #include <asm/ipl.h>
46 47   #include <asm/setup.h>
47 48   #include <asm/irq.h>
+10 -17
arch/s390/kernel/sysinfo.c
··· 20 20   #include <asm/sysinfo.h>
21 21   #include <asm/cpcmd.h>
22 22   #include <asm/topology.h>
23 -    #include <asm/fpu/api.h>
23 +    #include <asm/fpu.h>
24 24
25 25   int topology_max_mnest;
26 26
··· 426 426   */
427 427  void s390_adjust_jiffies(void)
428 428  {
429 +    	DECLARE_KERNEL_FPU_ONSTACK16(fpu);
429 430  	struct sysinfo_1_2_2 *info;
430 431  	unsigned long capability;
431 -    	struct kernel_fpu fpu;
432 432
433 433  	info = (void *) get_zeroed_page(GFP_KERNEL);
434 434  	if (!info)
··· 447 447  		 * point division ..
448 448  		 */
449 449  		kernel_fpu_begin(&fpu, KERNEL_FPR);
450 -    		asm volatile(
451 -    			"	sfpc	%3\n"
452 -    			"	l	%0,%1\n"
453 -    			"	tmlh	%0,0xff80\n"
454 -    			"	jnz	0f\n"
455 -    			"	cefbr	%%f2,%0\n"
456 -    			"	j	1f\n"
457 -    			"0:	le	%%f2,%1\n"
458 -    			"1:	cefbr	%%f0,%2\n"
459 -    			"	debr	%%f0,%%f2\n"
460 -    			"	cgebr	%0,5,%%f0\n"
461 -    			: "=&d" (capability)
462 -    			: "Q" (info->capability), "d" (10000000), "d" (0)
463 -    			: "cc"
464 -    			);
450 +    		fpu_sfpc(0);
451 +    		if (info->capability & 0xff800000)
452 +    			fpu_ldgr(2, info->capability);
453 +    		else
454 +    			fpu_cefbr(2, info->capability);
455 +    		fpu_cefbr(0, 10000000);
456 +    		fpu_debr(0, 2);
457 +    		capability = fpu_cgebr(0, 5);
465 458  		kernel_fpu_end(&fpu, KERNEL_FPR);
466 459  	} else
467 460  		/*
+1 -1
arch/s390/kernel/text_amode31.S
··· 90 90   SYM_FUNC_END(_diag26c_amode31)
91 91
92 92   /*
93 -    * void _diag0c_amode31(struct hypfs_diag0c_entry *entry)
93 +    * void _diag0c_amode31(unsigned long rx)
94 94    */
95 95   SYM_FUNC_START(_diag0c_amode31)
96 96   	sam31
+3 -3
arch/s390/kernel/time.c
··· 251 251  	.rating		= 400,
252 252  	.read		= read_tod_clock,
253 253  	.mask		= CLOCKSOURCE_MASK(64),
254 -    	.mult		= 1000,
255 -    	.shift		= 12,
254 +    	.mult		= 4096000,
255 +    	.shift		= 24,
256 256  	.flags		= CLOCK_SOURCE_IS_CONTINUOUS,
257 257  	.vdso_clock_mode = VDSO_CLOCKMODE_TOD,
258 258  };
··· 716 716  /*
717 717   * STP subsys sysfs interface functions
718 718   */
719 -    static struct bus_type stp_subsys = {
719 +    static const struct bus_type stp_subsys = {
720 720  	.name		= "stp",
721 721  	.dev_name	= "stp",
722 722  };
+6 -6
arch/s390/kernel/traps.c
··· 28 28   #include <linux/cpu.h>
29 29   #include <linux/entry-common.h>
30 30   #include <asm/asm-extable.h>
31 -    #include <asm/fpu/api.h>
32 31   #include <asm/vtime.h>
32 +    #include <asm/fpu.h>
33 33   #include "entry.h"
34 34
35 35   static inline void __user *get_trap_ip(struct pt_regs *regs)
··· 201 201  	}
202 202
203 203  	/* get vector interrupt code from fpc */
204 -    	save_fpu_regs();
205 -    	vic = (current->thread.fpu.fpc & 0xf00) >> 8;
204 +    	save_user_fpu_regs();
205 +    	vic = (current->thread.ufpu.fpc & 0xf00) >> 8;
206 206  	switch (vic) {
207 207  	case 1: /* invalid vector operation */
208 208  		si_code = FPE_FLTINV;
··· 227 227
228 228  static void data_exception(struct pt_regs *regs)
229 229  {
230 -    	save_fpu_regs();
231 -    	if (current->thread.fpu.fpc & FPC_DXC_MASK)
232 -    		do_fp_trap(regs, current->thread.fpu.fpc);
230 +    	save_user_fpu_regs();
231 +    	if (current->thread.ufpu.fpc & FPC_DXC_MASK)
232 +    		do_fp_trap(regs, current->thread.ufpu.fpc);
233 233  	else
234 234  		do_trap(regs, SIGILL, ILL_ILLOPN, "data exception");
235 235  }
-1
arch/s390/kernel/uprobes.c
··· 12 12   #include <linux/kdebug.h>
13 13   #include <linux/sched/task_stack.h>
14 14
15 -    #include <asm/switch_to.h>
16 15   #include <asm/facility.h>
17 16   #include <asm/kprobes.h>
18 17   #include <asm/dis.h>
+1 -1
arch/s390/kernel/vdso32/Makefile
··· 22 22   KBUILD_CFLAGS_32 := $(filter-out -mno-pic-data-is-text-relative,$(KBUILD_CFLAGS_32))
23 23   KBUILD_CFLAGS_32 += -m31 -fPIC -shared -fno-common -fno-builtin
24 24
25 -    LDFLAGS_vdso32.so.dbg += -fPIC -shared -soname=linux-vdso32.so.1 \
25 +    LDFLAGS_vdso32.so.dbg += -shared -soname=linux-vdso32.so.1 \
26 26   	--hash-style=both --build-id=sha1 -melf_s390 -T
27 27
28 28   $(targets:%=$(obj)/%.dbg): KBUILD_CFLAGS = $(KBUILD_CFLAGS_32)
-1
arch/s390/kernel/vdso32/vdso32.lds.S
··· 9 9
10 10   OUTPUT_FORMAT("elf32-s390", "elf32-s390", "elf32-s390")
11 11   OUTPUT_ARCH(s390:31-bit)
12 -    ENTRY(_start)
13 12
14 13   SECTIONS
15 14   {
+2 -1
arch/s390/kernel/vdso64/Makefile
··· 25 25
26 26   KBUILD_CFLAGS_64 := $(filter-out -m64,$(KBUILD_CFLAGS))
27 27   KBUILD_CFLAGS_64 := $(filter-out -mno-pic-data-is-text-relative,$(KBUILD_CFLAGS_64))
28 +    KBUILD_CFLAGS_64 := $(filter-out -munaligned-symbols,$(KBUILD_CFLAGS_64))
28 29   KBUILD_CFLAGS_64 += -m64 -fPIC -fno-common -fno-builtin
29 -    ldflags-y := -fPIC -shared -soname=linux-vdso64.so.1 \
30 +    ldflags-y := -shared -soname=linux-vdso64.so.1 \
30 31   	     --hash-style=both --build-id=sha1 -T
31 32
32 33   $(targets:%=$(obj)/%.dbg): KBUILD_CFLAGS = $(KBUILD_CFLAGS_64)
-1
arch/s390/kernel/vdso64/vdso64.lds.S
··· 9 9
10 10   OUTPUT_FORMAT("elf64-s390", "elf64-s390", "elf64-s390")
11 11   OUTPUT_ARCH(s390:64-bit)
12 -    ENTRY(_start)
13 12
14 13   SECTIONS
15 14   {
+54
arch/s390/kernel/vmlinux.lds.S
··· 59 59 } :text = 0x0700 60 60 61 61 RO_DATA(PAGE_SIZE) 62 + .data.rel.ro : { 63 + *(.data.rel.ro .data.rel.ro.*) 64 + } 65 + .got : { 66 + __got_start = .; 67 + *(.got) 68 + __got_end = .; 69 + } 62 70 63 71 . = ALIGN(PAGE_SIZE); 64 72 _sdata = .; /* Start of data section */ ··· 81 73 __end_ro_after_init = .; 82 74 83 75 RW_DATA(0x100, PAGE_SIZE, THREAD_SIZE) 76 + .data.rel : { 77 + *(.data.rel*) 78 + } 84 79 BOOT_DATA_PRESERVED 85 80 86 81 . = ALIGN(8); ··· 192 181 193 182 PERCPU_SECTION(0x100) 194 183 184 + #ifdef CONFIG_PIE_BUILD 195 185 .dynsym ALIGN(8) : { 196 186 __dynsym_start = .; 197 187 *(.dynsym) ··· 202 190 __rela_dyn_start = .; 203 191 *(.rela*) 204 192 __rela_dyn_end = .; 193 + } 194 + .dynamic ALIGN(8) : { 195 + *(.dynamic) 196 + } 197 + .dynstr ALIGN(8) : { 198 + *(.dynstr) 199 + } 200 + #endif 201 + .hash ALIGN(8) : { 202 + *(.hash) 203 + } 204 + .gnu.hash ALIGN(8) : { 205 + *(.gnu.hash) 205 206 } 206 207 207 208 . = ALIGN(PAGE_SIZE); ··· 239 214 QUAD(__boot_data_preserved_start) /* bootdata_preserved_off */ 240 215 QUAD(__boot_data_preserved_end - 241 216 __boot_data_preserved_start) /* bootdata_preserved_size */ 217 + #ifdef CONFIG_PIE_BUILD 242 218 QUAD(__dynsym_start) /* dynsym_start */ 243 219 QUAD(__rela_dyn_start) /* rela_dyn_start */ 244 220 QUAD(__rela_dyn_end) /* rela_dyn_end */ 221 + #else 222 + QUAD(__got_start) /* got_start */ 223 + QUAD(__got_end) /* got_end */ 224 + #endif 245 225 QUAD(_eamode31 - _samode31) /* amode31_size */ 246 226 QUAD(init_mm) 247 227 QUAD(swapper_pg_dir) ··· 264 234 STABS_DEBUG 265 235 DWARF_DEBUG 266 236 ELF_DETAILS 237 + 238 + /* 239 + * Make sure that the .got.plt is either completely empty or it 240 + * contains only the three reserved double words. 
241 + */ 242 + .got.plt : { 243 + *(.got.plt) 244 + } 245 + ASSERT(SIZEOF(.got.plt) == 0 || SIZEOF(.got.plt) == 0x18, "Unexpected GOT/PLT entries detected!") 246 + 247 + /* 248 + * Sections that should stay zero sized, which is safer to 249 + * explicitly check instead of blindly discarding. 250 + */ 251 + .plt : { 252 + *(.plt) *(.plt.*) *(.iplt) *(.igot .igot.plt) 253 + } 254 + ASSERT(SIZEOF(.plt) == 0, "Unexpected run-time procedure linkages detected!") 255 + #ifndef CONFIG_PIE_BUILD 256 + .rela.dyn : { 257 + *(.rela.*) *(.rela_*) 258 + } 259 + ASSERT(SIZEOF(.rela.dyn) == 0, "Unexpected run-time relocations (.rela) detected!") 260 + #endif 267 261 268 262 /* Sections to be discarded */ 269 263 DISCARDS
+3 -2
arch/s390/kvm/gaccess.c
··· 11 11   #include <linux/err.h>
12 12   #include <linux/pgtable.h>
13 13   #include <linux/bitfield.h>
14 +    #include <asm/access-regs.h>
14 15   #include <asm/fault.h>
15 16   #include <asm/gmap.h>
16 17   #include "kvm-s390.h"
17 18   #include "gaccess.h"
18 -    #include <asm/switch_to.h>
19 19
20 20   union asce {
21 21   	unsigned long val;
··· 391 391  	if (ar >= NUM_ACRS)
392 392  		return -EINVAL;
393 393
394 -    	save_access_regs(vcpu->run->s.regs.acrs);
394 +    	if (vcpu->arch.acrs_loaded)
395 +    		save_access_regs(vcpu->run->s.regs.acrs);
395 396  	alet.val = vcpu->run->s.regs.acrs[ar];
396 397
397 398  	if (ar == 0 || alet.val == 0) {
+3 -3
arch/s390/kvm/interrupt.c
··· 19 19   #include <linux/slab.h>
20 20   #include <linux/bitmap.h>
21 21   #include <linux/vmalloc.h>
22 +    #include <asm/access-regs.h>
22 23   #include <asm/asm-offsets.h>
23 24   #include <asm/dis.h>
24 25   #include <linux/uaccess.h>
25 26   #include <asm/sclp.h>
26 27   #include <asm/isc.h>
27 28   #include <asm/gmap.h>
28 -    #include <asm/switch_to.h>
29 29   #include <asm/nmi.h>
30 30   #include <asm/airq.h>
31 31   #include <asm/tpi.h>
··· 584 584
585 585  	mci.val = mchk->mcic;
586 586  	/* take care of lazy register loading */
587 -    	save_fpu_regs();
587 +    	kvm_s390_fpu_store(vcpu->run);
588 588  	save_access_regs(vcpu->run->s.regs.acrs);
589 589  	if (MACHINE_HAS_GS && vcpu->arch.gs_enabled)
590 590  		save_gs_cb(current->thread.gs_cb);
··· 648 648  	}
649 649  	rc |= write_guest_lc(vcpu, __LC_GPREGS_SAVE_AREA,
650 650  			     vcpu->run->s.regs.gprs, 128);
651 -    	rc |= put_guest_lc(vcpu, current->thread.fpu.fpc,
651 +    	rc |= put_guest_lc(vcpu, vcpu->run->s.regs.fpc,
652 652  			   (u32 __user *) __LC_FP_CREG_SAVE_AREA);
653 653  	rc |= put_guest_lc(vcpu, vcpu->arch.sie_block->todpr,
654 654  			   (u32 __user *) __LC_TOD_PROGREG_SAVE_AREA);
+11 -22
arch/s390/kvm/kvm-s390.c
··· 33 33 #include <linux/pgtable.h> 34 34 #include <linux/mmu_notifier.h> 35 35 36 + #include <asm/access-regs.h> 36 37 #include <asm/asm-offsets.h> 37 38 #include <asm/lowcore.h> 38 39 #include <asm/stp.h> 39 40 #include <asm/gmap.h> 40 41 #include <asm/nmi.h> 41 - #include <asm/switch_to.h> 42 42 #include <asm/isc.h> 43 43 #include <asm/sclp.h> 44 44 #include <asm/cpacf.h> 45 45 #include <asm/timex.h> 46 + #include <asm/fpu.h> 46 47 #include <asm/ap.h> 47 48 #include <asm/uv.h> 48 - #include <asm/fpu/api.h> 49 49 #include "kvm-s390.h" 50 50 #include "gaccess.h" 51 51 #include "pci.h" ··· 3951 3951 KVM_SYNC_ARCH0 | 3952 3952 KVM_SYNC_PFAULT | 3953 3953 KVM_SYNC_DIAG318; 3954 + vcpu->arch.acrs_loaded = false; 3954 3955 kvm_s390_set_prefix(vcpu, 0); 3955 3956 if (test_kvm_facility(vcpu->kvm, 64)) 3956 3957 vcpu->run->kvm_valid_regs |= KVM_SYNC_RICCB; ··· 4830 4829 vcpu->run->s.regs.gprs, 4831 4830 sizeof(sie_page->pv_grregs)); 4832 4831 } 4833 - if (test_cpu_flag(CIF_FPU)) 4834 - load_fpu_regs(); 4835 4832 exit_reason = sie64a(vcpu->arch.sie_block, 4836 4833 vcpu->run->s.regs.gprs); 4837 4834 if (kvm_s390_pv_cpu_is_protected(vcpu)) { ··· 4950 4951 } 4951 4952 save_access_regs(vcpu->arch.host_acrs); 4952 4953 restore_access_regs(vcpu->run->s.regs.acrs); 4953 - /* save host (userspace) fprs/vrs */ 4954 - save_fpu_regs(); 4955 - vcpu->arch.host_fpregs.fpc = current->thread.fpu.fpc; 4956 - vcpu->arch.host_fpregs.regs = current->thread.fpu.regs; 4957 - if (cpu_has_vx()) 4958 - current->thread.fpu.regs = vcpu->run->s.regs.vrs; 4959 - else 4960 - current->thread.fpu.regs = vcpu->run->s.regs.fprs; 4961 - current->thread.fpu.fpc = vcpu->run->s.regs.fpc; 4962 - 4954 + vcpu->arch.acrs_loaded = true; 4955 + kvm_s390_fpu_load(vcpu->run); 4963 4956 /* Sync fmt2 only data */ 4964 4957 if (likely(!kvm_s390_pv_cpu_is_protected(vcpu))) { 4965 4958 sync_regs_fmt2(vcpu); ··· 5012 5021 kvm_run->s.regs.pfc = vcpu->arch.pfault_compare; 5013 5022 save_access_regs(vcpu->run->s.regs.acrs); 
5014 5023 restore_access_regs(vcpu->arch.host_acrs); 5015 - /* Save guest register state */ 5016 - save_fpu_regs(); 5017 - vcpu->run->s.regs.fpc = current->thread.fpu.fpc; 5018 - /* Restore will be done lazily at return */ 5019 - current->thread.fpu.fpc = vcpu->arch.host_fpregs.fpc; 5020 - current->thread.fpu.regs = vcpu->arch.host_fpregs.regs; 5024 + vcpu->arch.acrs_loaded = false; 5025 + kvm_s390_fpu_store(vcpu->run); 5021 5026 if (likely(!kvm_s390_pv_cpu_is_protected(vcpu))) 5022 5027 store_regs_fmt2(vcpu); 5023 5028 } ··· 5021 5034 int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) 5022 5035 { 5023 5036 struct kvm_run *kvm_run = vcpu->run; 5037 + DECLARE_KERNEL_FPU_ONSTACK32(fpu); 5024 5038 int rc; 5025 5039 5026 5040 /* ··· 5063 5075 goto out; 5064 5076 } 5065 5077 5078 + kernel_fpu_begin(&fpu, KERNEL_FPC | KERNEL_VXR); 5066 5079 sync_regs(vcpu); 5067 5080 enable_cpu_timer_accounting(vcpu); 5068 5081 ··· 5087 5098 5088 5099 disable_cpu_timer_accounting(vcpu); 5089 5100 store_regs(vcpu); 5101 + kernel_fpu_end(&fpu, KERNEL_FPC | KERNEL_VXR); 5090 5102 5091 5103 kvm_sigset_deactivate(vcpu); 5092 5104 ··· 5162 5172 * switch in the run ioctl. Let's update our copies before we save 5163 5173 * it into the save area 5164 5174 */ 5165 - save_fpu_regs(); 5166 - vcpu->run->s.regs.fpc = current->thread.fpu.fpc; 5175 + kvm_s390_fpu_store(vcpu->run); 5167 5176 save_access_regs(vcpu->run->s.regs.acrs); 5168 5177 5169 5178 return kvm_s390_store_status_unloaded(vcpu, addr);
+18
arch/s390/kvm/kvm-s390.h
··· 20 20   #include <asm/processor.h>
21 21   #include <asm/sclp.h>
22 22
23 +    static inline void kvm_s390_fpu_store(struct kvm_run *run)
24 +    {
25 +    	fpu_stfpc(&run->s.regs.fpc);
26 +    	if (cpu_has_vx())
27 +    		save_vx_regs((__vector128 *)&run->s.regs.vrs);
28 +    	else
29 +    		save_fp_regs((freg_t *)&run->s.regs.fprs);
30 +    }
31 +
32 +    static inline void kvm_s390_fpu_load(struct kvm_run *run)
33 +    {
34 +    	fpu_lfpc_safe(&run->s.regs.fpc);
35 +    	if (cpu_has_vx())
36 +    		load_vx_regs((__vector128 *)&run->s.regs.vrs);
37 +    	else
38 +    		load_fp_regs((freg_t *)&run->s.regs.fprs);
39 +    }
40 +
23 41   /* Transactional Memory Execution related macros */
24 42   #define IS_TE_ENABLED(vcpu)	((vcpu->arch.sie_block->ecb & ECB_TE))
25 43   #define TDB_FORMAT1		1
-3
arch/s390/kvm/vsie.c
··· 18 18 #include <asm/sclp.h> 19 19 #include <asm/nmi.h> 20 20 #include <asm/dis.h> 21 - #include <asm/fpu/api.h> 22 21 #include <asm/facility.h> 23 22 #include "kvm-s390.h" 24 23 #include "gaccess.h" ··· 1148 1149 */ 1149 1150 vcpu->arch.sie_block->prog0c |= PROG_IN_SIE; 1150 1151 barrier(); 1151 - if (test_cpu_flag(CIF_FPU)) 1152 - load_fpu_regs(); 1153 1152 if (!kvm_s390_vcpu_sie_inhibited(vcpu)) 1154 1153 rc = sie64a(scb_s, vcpu->run->s.regs.gprs); 1155 1154 barrier();
+1
arch/s390/lib/Makefile
··· 4 4    #
5 5
6 6    lib-y += delay.o string.o uaccess.o find.o spinlock.o tishift.o
7 +    lib-y += csum-partial.o
7 8    obj-y += mem.o xor.o
8 9    lib-$(CONFIG_KPROBES) += probes.o
9 10   lib-$(CONFIG_UPROBES) += probes.o
+91
arch/s390/lib/csum-partial.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + #include <linux/export.h> 4 + #include <asm/checksum.h> 5 + #include <asm/fpu.h> 6 + 7 + /* 8 + * Computes the checksum of a memory block at src, length len, 9 + * and adds in "sum" (32-bit). If copy is true copies to dst. 10 + * 11 + * Returns a 32-bit number suitable for feeding into itself 12 + * or csum_tcpudp_magic. 13 + * 14 + * This function must be called with even lengths, except 15 + * for the last fragment, which may be odd. 16 + * 17 + * It's best to have src and dst aligned on a 64-bit boundary. 18 + */ 19 + static __always_inline __wsum csum_copy(void *dst, const void *src, int len, __wsum sum, bool copy) 20 + { 21 + DECLARE_KERNEL_FPU_ONSTACK8(vxstate); 22 + 23 + if (!cpu_has_vx()) { 24 + if (copy) 25 + memcpy(dst, src, len); 26 + return cksm(dst, len, sum); 27 + } 28 + kernel_fpu_begin(&vxstate, KERNEL_VXR_V16V23); 29 + fpu_vlvgf(16, (__force u32)sum, 1); 30 + fpu_vzero(17); 31 + fpu_vzero(18); 32 + fpu_vzero(19); 33 + while (len >= 64) { 34 + fpu_vlm(20, 23, src); 35 + if (copy) { 36 + fpu_vstm(20, 23, dst); 37 + dst += 64; 38 + } 39 + fpu_vcksm(16, 20, 16); 40 + fpu_vcksm(17, 21, 17); 41 + fpu_vcksm(18, 22, 18); 42 + fpu_vcksm(19, 23, 19); 43 + src += 64; 44 + len -= 64; 45 + } 46 + while (len >= 32) { 47 + fpu_vlm(20, 21, src); 48 + if (copy) { 49 + fpu_vstm(20, 21, dst); 50 + dst += 32; 51 + } 52 + fpu_vcksm(16, 20, 16); 53 + fpu_vcksm(17, 21, 17); 54 + src += 32; 55 + len -= 32; 56 + } 57 + while (len >= 16) { 58 + fpu_vl(20, src); 59 + if (copy) { 60 + fpu_vst(20, dst); 61 + dst += 16; 62 + } 63 + fpu_vcksm(16, 20, 16); 64 + src += 16; 65 + len -= 16; 66 + } 67 + if (len) { 68 + fpu_vll(20, len - 1, src); 69 + if (copy) 70 + fpu_vstl(20, len - 1, dst); 71 + fpu_vcksm(16, 20, 16); 72 + } 73 + fpu_vcksm(18, 19, 18); 74 + fpu_vcksm(16, 17, 16); 75 + fpu_vcksm(16, 18, 16); 76 + sum = (__force __wsum)fpu_vlgvf(16, 1); 77 + kernel_fpu_end(&vxstate, KERNEL_VXR_V16V23); 78 + return sum; 79 + } 80 + 
81 + __wsum csum_partial(const void *buff, int len, __wsum sum) 82 + { 83 + return csum_copy(NULL, buff, len, sum, false); 84 + } 85 + EXPORT_SYMBOL(csum_partial); 86 + 87 + __wsum csum_partial_copy_nocheck(const void *src, void *dst, int len) 88 + { 89 + return csum_copy(dst, src, len, 0, true); 90 + } 91 + EXPORT_SYMBOL(csum_partial_copy_nocheck);
+2 -2
arch/s390/mm/extmem.c
··· 136 136  	unsigned long rx, ry;
137 137  	int rc;
138 138
139 -    	rx = (unsigned long) parameter;
139 +    	rx = virt_to_phys(parameter);
140 140  	ry = (unsigned long) *func;
141 141
142 142  	diag_stat_inc(DIAG_STAT_X064);
··· 178 178
179 179  	/* initialize diag input parameters */
180 180  	qin->qopcode = DCSS_FINDSEGA;
181 -    	qin->qoutptr = (unsigned long) qout;
181 +    	qin->qoutptr = virt_to_phys(qout);
182 182  	qin->qoutlen = sizeof(struct qout64);
183 183  	memcpy (qin->qname, seg->dcss_name, 8);
184 184
+11 -8
arch/s390/mm/mmap.c
··· 71 71   	return PAGE_ALIGN(STACK_TOP - gap - rnd);
72 72   }
73 73
74 +    static int get_align_mask(struct file *filp, unsigned long flags)
75 +    {
76 +    	if (!(current->flags & PF_RANDOMIZE))
77 +    		return 0;
78 +    	if (filp || (flags & MAP_SHARED))
79 +    		return MMAP_ALIGN_MASK << PAGE_SHIFT;
80 +    	return 0;
81 +    }
82 +
74 83   unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr,
75 84   				     unsigned long len, unsigned long pgoff,
76 85   				     unsigned long flags)
··· 106 97   	info.length = len;
107 98   	info.low_limit = mm->mmap_base;
108 99   	info.high_limit = TASK_SIZE;
109 -    	if (filp || (flags & MAP_SHARED))
110 -    		info.align_mask = MMAP_ALIGN_MASK << PAGE_SHIFT;
111 -    	else
112 -    		info.align_mask = 0;
100 +    	info.align_mask = get_align_mask(filp, flags);
113 101   	info.align_offset = pgoff << PAGE_SHIFT;
114 102   	addr = vm_unmapped_area(&info);
115 103   	if (offset_in_page(addr))
··· 144 138  	info.length = len;
145 139  	info.low_limit = PAGE_SIZE;
146 140  	info.high_limit = mm->mmap_base;
147 -    	if (filp || (flags & MAP_SHARED))
148 -    		info.align_mask = MMAP_ALIGN_MASK << PAGE_SHIFT;
149 -    	else
150 -    		info.align_mask = 0;
141 +    	info.align_mask = get_align_mask(filp, flags);
151 142   	info.align_offset = pgoff << PAGE_SHIFT;
152 143   	addr = vm_unmapped_area(&info);
153 144
+14 -6
arch/s390/pci/pci.c
··· 28 28 #include <linux/jump_label.h> 29 29 #include <linux/pci.h> 30 30 #include <linux/printk.h> 31 + #include <linux/lockdep.h> 31 32 32 33 #include <asm/isc.h> 33 34 #include <asm/airq.h> ··· 731 730 * equivalent to its state during boot when first probing a driver. 732 731 * Consequently after reset the PCI function requires re-initialization via the 733 732 * common PCI code including re-enabling IRQs via pci_alloc_irq_vectors() 734 - * and enabling the function via e.g.pci_enablde_device_flags().The caller 733 + * and enabling the function via e.g. pci_enable_device_flags(). The caller 735 734 * must guard against concurrent reset attempts. 736 735 * 737 736 * In most cases this function should not be called directly but through 738 737 * pci_reset_function() or pci_reset_bus() which handle the save/restore and 739 - * locking. 738 + * locking - asserted by lockdep. 740 739 * 741 740 * Return: 0 on success and an error value otherwise 742 741 */ ··· 745 744 u8 status; 746 745 int rc; 747 746 747 + lockdep_assert_held(&zdev->state_lock); 748 748 zpci_dbg(3, "rst fid:%x, fh:%x\n", zdev->fid, zdev->fh); 749 749 if (zdev_enabled(zdev)) { 750 750 /* Disables device access, DMAs and IRQs (reset state) */ ··· 808 806 zdev->state = state; 809 807 810 808 kref_init(&zdev->kref); 811 - mutex_init(&zdev->lock); 809 + mutex_init(&zdev->state_lock); 810 + mutex_init(&zdev->fmb_lock); 812 811 mutex_init(&zdev->kzdev_lock); 813 812 814 813 rc = zpci_init_iommu(zdev); ··· 873 870 { 874 871 int rc; 875 872 873 + lockdep_assert_held(&zdev->state_lock); 874 + if (zdev->state != ZPCI_FN_STATE_CONFIGURED) 875 + return 0; 876 + 876 877 if (zdev->zbus->bus) 877 878 zpci_bus_remove_device(zdev, false); 878 879 ··· 896 889 } 897 890 898 891 /** 899 - * zpci_device_reserved() - Mark device as resverved 892 + * zpci_device_reserved() - Mark device as reserved 900 893 * @zdev: the zpci_dev that was reserved 901 894 * 902 895 * Handle the case that a given zPCI function was reserved 
by another system. ··· 906 899 */ 907 900 void zpci_device_reserved(struct zpci_dev *zdev) 908 901 { 909 - if (zdev->has_hp_slot) 910 - zpci_exit_slot(zdev); 911 902 /* 912 903 * Remove device from zpci_list as it is going away. This also 913 904 * makes sure we ignore subsequent zPCI events for this device. ··· 922 917 { 923 918 struct zpci_dev *zdev = container_of(kref, struct zpci_dev, kref); 924 919 int ret; 920 + 921 + if (zdev->has_hp_slot) 922 + zpci_exit_slot(zdev); 925 923 926 924 if (zdev->zbus->bus) 927 925 zpci_bus_remove_device(zdev, false);
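The pci.c hunk above splits the old single `zdev->lock` into a `state_lock` for state transitions and a separate `fmb_lock` for the measurement block, so the two no longer contend. A minimal userspace sketch of that lock-splitting idea, using pthreads (the `zdev_t` type and field names here are invented for illustration, not the kernel's):

```c
#include <assert.h>
#include <pthread.h>

/* Hypothetical stand-in for struct zpci_dev: two independent pieces of
 * state each get their own mutex, mirroring the state_lock/fmb_lock split. */
typedef struct {
	pthread_mutex_t state_lock; /* guards 'state' transitions */
	pthread_mutex_t fmb_lock;   /* guards measurement-block toggling */
	int state;
	int fmb_enabled;
} zdev_t;

static void zdev_init(zdev_t *z)
{
	pthread_mutex_init(&z->state_lock, NULL);
	pthread_mutex_init(&z->fmb_lock, NULL);
	z->state = 0;
	z->fmb_enabled = 0;
}

/* Toggling the FMB no longer contends with state transitions. */
static int zdev_toggle_fmb(zdev_t *z)
{
	int val;

	pthread_mutex_lock(&z->fmb_lock);
	z->fmb_enabled = !z->fmb_enabled;
	val = z->fmb_enabled;
	pthread_mutex_unlock(&z->fmb_lock);
	return val;
}

static int zdev_set_state(zdev_t *z, int state)
{
	pthread_mutex_lock(&z->state_lock);
	z->state = state;
	pthread_mutex_unlock(&z->state_lock);
	return state;
}
```

In the kernel the split is enforced with `lockdep_assert_held(&zdev->state_lock)` in the functions that require the state lock, as the hunk shows.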
+5 -5
arch/s390/pci/pci_debug.c
··· 91 91 if (!zdev) 92 92 return 0; 93 93 94 - mutex_lock(&zdev->lock); 94 + mutex_lock(&zdev->fmb_lock); 95 95 if (!zdev->fmb) { 96 - mutex_unlock(&zdev->lock); 96 + mutex_unlock(&zdev->fmb_lock); 97 97 seq_puts(m, "FMB statistics disabled\n"); 98 98 return 0; 99 99 } ··· 130 130 } 131 131 132 132 pci_sw_counter_show(m); 133 - mutex_unlock(&zdev->lock); 133 + mutex_unlock(&zdev->fmb_lock); 134 134 return 0; 135 135 } 136 136 ··· 148 148 if (rc) 149 149 return rc; 150 150 151 - mutex_lock(&zdev->lock); 151 + mutex_lock(&zdev->fmb_lock); 152 152 switch (val) { 153 153 case 0: 154 154 rc = zpci_fmb_disable_device(zdev); ··· 157 157 rc = zpci_fmb_enable_device(zdev); 158 158 break; 159 159 } 160 - mutex_unlock(&zdev->lock); 160 + mutex_unlock(&zdev->fmb_lock); 161 161 return rc ? rc : count; 162 162 } 163 163
+12 -3
arch/s390/pci/pci_event.c
··· 267 267 zpci_err_hex(ccdf, sizeof(*ccdf)); 268 268 269 269 if (zdev) { 270 + mutex_lock(&zdev->state_lock); 270 271 zpci_update_fh(zdev, ccdf->fh); 271 272 if (zdev->zbus->bus) 272 273 pdev = pci_get_slot(zdev->zbus->bus, zdev->devfn); ··· 295 294 } 296 295 pci_dev_put(pdev); 297 296 no_pdev: 297 + if (zdev) 298 + mutex_unlock(&zdev->state_lock); 298 299 zpci_zdev_put(zdev); 299 300 } 300 301 ··· 329 326 330 327 zpci_dbg(3, "avl fid:%x, fh:%x, pec:%x\n", 331 328 ccdf->fid, ccdf->fh, ccdf->pec); 329 + 330 + if (existing_zdev) 331 + mutex_lock(&zdev->state_lock); 332 + 332 333 switch (ccdf->pec) { 333 334 case 0x0301: /* Reserved|Standby -> Configured */ 334 335 if (!zdev) { ··· 355 348 break; 356 349 case 0x0303: /* Deconfiguration requested */ 357 350 if (zdev) { 358 - /* The event may have been queued before we confirgured 351 + /* The event may have been queued before we configured 359 352 * the device. 360 353 */ 361 354 if (zdev->state != ZPCI_FN_STATE_CONFIGURED) ··· 366 359 break; 367 360 case 0x0304: /* Configured -> Standby|Reserved */ 368 361 if (zdev) { 369 - /* The event may have been queued before we confirgured 362 + /* The event may have been queued before we configured 370 363 * the device.: 371 364 */ 372 365 if (zdev->state == ZPCI_FN_STATE_CONFIGURED) ··· 390 383 default: 391 384 break; 392 385 } 393 - if (existing_zdev) 386 + if (existing_zdev) { 387 + mutex_unlock(&zdev->state_lock); 394 388 zpci_zdev_put(zdev); 389 + } 395 390 } 396 391 397 392 void zpci_event_availability(void *data)
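The pci_event.c hunk takes `state_lock` only when a `zdev` was actually looked up, and the unlock on the shared exit path is guarded by the same NULL check. A sketch of that conditional lock/unlock shape (the `struct dev` here is a simplified stand-in, not the kernel structure):

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* Hypothetical device with a state lock, loosely modeled on zpci_dev. */
struct dev {
	pthread_mutex_t state_lock;
	int state;
};

/* Mirrors the error-event flow: lock only if the lookup succeeded, do the
 * state-dependent work, and guard the unlock on the common exit path with
 * the same NULL check so the no-device path stays valid. */
static int handle_error_event(struct dev *d)
{
	int handled = 0;

	if (d)
		pthread_mutex_lock(&d->state_lock);

	if (!d)
		goto out;	/* nothing to do for an unknown function */

	d->state = 1;		/* state-dependent handling under the lock */
	handled = 1;
out:
	if (d)
		pthread_mutex_unlock(&d->state_lock);
	return handled;
}
```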
+43 -27
arch/s390/pci/pci_sysfs.c
··· 49 49 } 50 50 static DEVICE_ATTR_RO(mio_enabled); 51 51 52 + static int _do_recover(struct pci_dev *pdev, struct zpci_dev *zdev) 53 + { 54 + u8 status; 55 + int ret; 56 + 57 + pci_stop_and_remove_bus_device(pdev); 58 + if (zdev_enabled(zdev)) { 59 + ret = zpci_disable_device(zdev); 60 + /* 61 + * Due to a z/VM vs LPAR inconsistency in the error 62 + * state the FH may indicate an enabled device but 63 + * disable says the device is already disabled don't 64 + * treat it as an error here. 65 + */ 66 + if (ret == -EINVAL) 67 + ret = 0; 68 + if (ret) 69 + return ret; 70 + } 71 + 72 + ret = zpci_enable_device(zdev); 73 + if (ret) 74 + return ret; 75 + 76 + if (zdev->dma_table) { 77 + ret = zpci_register_ioat(zdev, 0, zdev->start_dma, zdev->end_dma, 78 + virt_to_phys(zdev->dma_table), &status); 79 + if (ret) 80 + zpci_disable_device(zdev); 81 + } 82 + return ret; 83 + } 84 + 52 85 static ssize_t recover_store(struct device *dev, struct device_attribute *attr, 53 86 const char *buf, size_t count) 54 87 { ··· 89 56 struct pci_dev *pdev = to_pci_dev(dev); 90 57 struct zpci_dev *zdev = to_zpci(pdev); 91 58 int ret = 0; 92 - u8 status; 93 59 94 60 /* Can't use device_remove_self() here as that would lead us to lock 95 61 * the pci_rescan_remove_lock while holding the device' kernfs lock. 
··· 102 70 */ 103 71 kn = sysfs_break_active_protection(&dev->kobj, &attr->attr); 104 72 WARN_ON_ONCE(!kn); 73 + 74 + /* Device needs to be configured and state must not change */ 75 + mutex_lock(&zdev->state_lock); 76 + if (zdev->state != ZPCI_FN_STATE_CONFIGURED) 77 + goto out; 78 + 105 79 /* device_remove_file() serializes concurrent calls ignoring all but 106 80 * the first 107 81 */ ··· 120 82 */ 121 83 pci_lock_rescan_remove(); 122 84 if (pci_dev_is_added(pdev)) { 123 - pci_stop_and_remove_bus_device(pdev); 124 - if (zdev_enabled(zdev)) { 125 - ret = zpci_disable_device(zdev); 126 - /* 127 - * Due to a z/VM vs LPAR inconsistency in the error 128 - * state the FH may indicate an enabled device but 129 - * disable says the device is already disabled don't 130 - * treat it as an error here. 131 - */ 132 - if (ret == -EINVAL) 133 - ret = 0; 134 - if (ret) 135 - goto out; 136 - } 137 - 138 - ret = zpci_enable_device(zdev); 139 - if (ret) 140 - goto out; 141 - 142 - if (zdev->dma_table) { 143 - ret = zpci_register_ioat(zdev, 0, zdev->start_dma, zdev->end_dma, 144 - virt_to_phys(zdev->dma_table), &status); 145 - if (ret) 146 - zpci_disable_device(zdev); 147 - } 85 + ret = _do_recover(pdev, zdev); 148 86 } 149 - out: 150 87 pci_rescan_bus(zdev->zbus->bus); 151 88 pci_unlock_rescan_remove(); 89 + 90 + out: 91 + mutex_unlock(&zdev->state_lock); 152 92 if (kn) 153 93 sysfs_unbreak_active_protection(kn); 154 94 return ret ? ret : count;
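The pci_sysfs.c change factors the recovery sequence into `_do_recover()`, and that helper deliberately maps `-EINVAL` from the disable step to success because z/VM and LPAR can disagree about whether the function is still enabled. A small sketch of that error-filtering shape (the `fake_*` stubs are invented; the real code calls `zpci_disable_device()` and friends):

```c
#include <assert.h>
#include <errno.h>

/* The disable step may report -EINVAL when the hypervisor already
 * considers the device disabled; treat that one error as success. */
static int disable_result_filter(int ret)
{
	if (ret == -EINVAL)	/* already disabled: not an error here */
		return 0;
	return ret;
}

/* Stand-ins for the real enable/disable calls, for illustration only. */
static int fake_disable(void) { return -EINVAL; }	/* inconsistent state */
static int fake_enable(void)  { return 0; }

static int do_recover(void)
{
	int ret = disable_result_filter(fake_disable());

	if (ret)
		return ret;
	return fake_enable();
}
```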
+1
arch/s390/tools/.gitignore
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 2 gen_facilities 3 3 gen_opcode_table 4 + relocs
+5
arch/s390/tools/Makefile
··· 25 25 26 26 $(kapi)/dis-defs.h: $(obj)/gen_opcode_table FORCE 27 27 $(call filechk,dis-defs.h) 28 + 29 + hostprogs += relocs 30 + PHONY += relocs 31 + relocs: $(obj)/relocs 32 + @:
+387
arch/s390/tools/relocs.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + #include <stdio.h> 4 + #include <stdarg.h> 5 + #include <stdlib.h> 6 + #include <stdint.h> 7 + #include <inttypes.h> 8 + #include <string.h> 9 + #include <errno.h> 10 + #include <unistd.h> 11 + #include <elf.h> 12 + #include <byteswap.h> 13 + #define USE_BSD 14 + #include <endian.h> 15 + 16 + #define ELF_BITS 64 17 + 18 + #define ELF_MACHINE EM_S390 19 + #define ELF_MACHINE_NAME "IBM S/390" 20 + #define SHT_REL_TYPE SHT_RELA 21 + #define Elf_Rel Elf64_Rela 22 + 23 + #define ELF_CLASS ELFCLASS64 24 + #define ELF_ENDIAN ELFDATA2MSB 25 + #define ELF_R_SYM(val) ELF64_R_SYM(val) 26 + #define ELF_R_TYPE(val) ELF64_R_TYPE(val) 27 + #define ELF_ST_TYPE(o) ELF64_ST_TYPE(o) 28 + #define ELF_ST_BIND(o) ELF64_ST_BIND(o) 29 + #define ELF_ST_VISIBILITY(o) ELF64_ST_VISIBILITY(o) 30 + 31 + #define ElfW(type) _ElfW(ELF_BITS, type) 32 + #define _ElfW(bits, type) __ElfW(bits, type) 33 + #define __ElfW(bits, type) Elf##bits##_##type 34 + 35 + #define Elf_Addr ElfW(Addr) 36 + #define Elf_Ehdr ElfW(Ehdr) 37 + #define Elf_Phdr ElfW(Phdr) 38 + #define Elf_Shdr ElfW(Shdr) 39 + #define Elf_Sym ElfW(Sym) 40 + 41 + static Elf_Ehdr ehdr; 42 + static unsigned long shnum; 43 + static unsigned int shstrndx; 44 + 45 + struct relocs { 46 + uint32_t *offset; 47 + unsigned long count; 48 + unsigned long size; 49 + }; 50 + 51 + static struct relocs relocs64; 52 + #define FMT PRIu64 53 + 54 + struct section { 55 + Elf_Shdr shdr; 56 + struct section *link; 57 + Elf_Rel *reltab; 58 + }; 59 + 60 + static struct section *secs; 61 + 62 + #if BYTE_ORDER == LITTLE_ENDIAN 63 + #define le16_to_cpu(val) (val) 64 + #define le32_to_cpu(val) (val) 65 + #define le64_to_cpu(val) (val) 66 + #define be16_to_cpu(val) bswap_16(val) 67 + #define be32_to_cpu(val) bswap_32(val) 68 + #define be64_to_cpu(val) bswap_64(val) 69 + #endif 70 + 71 + #if BYTE_ORDER == BIG_ENDIAN 72 + #define le16_to_cpu(val) bswap_16(val) 73 + #define le32_to_cpu(val) bswap_32(val) 74 + #define 
le64_to_cpu(val) bswap_64(val) 75 + #define be16_to_cpu(val) (val) 76 + #define be32_to_cpu(val) (val) 77 + #define be64_to_cpu(val) (val) 78 + #endif 79 + 80 + static uint16_t elf16_to_cpu(uint16_t val) 81 + { 82 + if (ehdr.e_ident[EI_DATA] == ELFDATA2LSB) 83 + return le16_to_cpu(val); 84 + else 85 + return be16_to_cpu(val); 86 + } 87 + 88 + static uint32_t elf32_to_cpu(uint32_t val) 89 + { 90 + if (ehdr.e_ident[EI_DATA] == ELFDATA2LSB) 91 + return le32_to_cpu(val); 92 + else 93 + return be32_to_cpu(val); 94 + } 95 + 96 + #define elf_half_to_cpu(x) elf16_to_cpu(x) 97 + #define elf_word_to_cpu(x) elf32_to_cpu(x) 98 + 99 + static uint64_t elf64_to_cpu(uint64_t val) 100 + { 101 + return be64_to_cpu(val); 102 + } 103 + 104 + #define elf_addr_to_cpu(x) elf64_to_cpu(x) 105 + #define elf_off_to_cpu(x) elf64_to_cpu(x) 106 + #define elf_xword_to_cpu(x) elf64_to_cpu(x) 107 + 108 + static void die(char *fmt, ...) 109 + { 110 + va_list ap; 111 + 112 + va_start(ap, fmt); 113 + vfprintf(stderr, fmt, ap); 114 + va_end(ap); 115 + exit(1); 116 + } 117 + 118 + static void read_ehdr(FILE *fp) 119 + { 120 + if (fread(&ehdr, sizeof(ehdr), 1, fp) != 1) 121 + die("Cannot read ELF header: %s\n", strerror(errno)); 122 + if (memcmp(ehdr.e_ident, ELFMAG, SELFMAG) != 0) 123 + die("No ELF magic\n"); 124 + if (ehdr.e_ident[EI_CLASS] != ELF_CLASS) 125 + die("Not a %d bit executable\n", ELF_BITS); 126 + if (ehdr.e_ident[EI_DATA] != ELF_ENDIAN) 127 + die("ELF endian mismatch\n"); 128 + if (ehdr.e_ident[EI_VERSION] != EV_CURRENT) 129 + die("Unknown ELF version\n"); 130 + 131 + /* Convert the fields to native endian */ 132 + ehdr.e_type = elf_half_to_cpu(ehdr.e_type); 133 + ehdr.e_machine = elf_half_to_cpu(ehdr.e_machine); 134 + ehdr.e_version = elf_word_to_cpu(ehdr.e_version); 135 + ehdr.e_entry = elf_addr_to_cpu(ehdr.e_entry); 136 + ehdr.e_phoff = elf_off_to_cpu(ehdr.e_phoff); 137 + ehdr.e_shoff = elf_off_to_cpu(ehdr.e_shoff); 138 + ehdr.e_flags = elf_word_to_cpu(ehdr.e_flags); 139 + 
ehdr.e_ehsize = elf_half_to_cpu(ehdr.e_ehsize); 140 + ehdr.e_phentsize = elf_half_to_cpu(ehdr.e_phentsize); 141 + ehdr.e_phnum = elf_half_to_cpu(ehdr.e_phnum); 142 + ehdr.e_shentsize = elf_half_to_cpu(ehdr.e_shentsize); 143 + ehdr.e_shnum = elf_half_to_cpu(ehdr.e_shnum); 144 + ehdr.e_shstrndx = elf_half_to_cpu(ehdr.e_shstrndx); 145 + 146 + shnum = ehdr.e_shnum; 147 + shstrndx = ehdr.e_shstrndx; 148 + 149 + if ((ehdr.e_type != ET_EXEC) && (ehdr.e_type != ET_DYN)) 150 + die("Unsupported ELF header type\n"); 151 + if (ehdr.e_machine != ELF_MACHINE) 152 + die("Not for %s\n", ELF_MACHINE_NAME); 153 + if (ehdr.e_version != EV_CURRENT) 154 + die("Unknown ELF version\n"); 155 + if (ehdr.e_ehsize != sizeof(Elf_Ehdr)) 156 + die("Bad Elf header size\n"); 157 + if (ehdr.e_phentsize != sizeof(Elf_Phdr)) 158 + die("Bad program header entry\n"); 159 + if (ehdr.e_shentsize != sizeof(Elf_Shdr)) 160 + die("Bad section header entry\n"); 161 + 162 + if (shnum == SHN_UNDEF || shstrndx == SHN_XINDEX) { 163 + Elf_Shdr shdr; 164 + 165 + if (fseek(fp, ehdr.e_shoff, SEEK_SET) < 0) 166 + die("Seek to %" FMT " failed: %s\n", ehdr.e_shoff, strerror(errno)); 167 + 168 + if (fread(&shdr, sizeof(shdr), 1, fp) != 1) 169 + die("Cannot read initial ELF section header: %s\n", strerror(errno)); 170 + 171 + if (shnum == SHN_UNDEF) 172 + shnum = elf_xword_to_cpu(shdr.sh_size); 173 + 174 + if (shstrndx == SHN_XINDEX) 175 + shstrndx = elf_word_to_cpu(shdr.sh_link); 176 + } 177 + 178 + if (shstrndx >= shnum) 179 + die("String table index out of bounds\n"); 180 + } 181 + 182 + static void read_shdrs(FILE *fp) 183 + { 184 + Elf_Shdr shdr; 185 + int i; 186 + 187 + secs = calloc(shnum, sizeof(struct section)); 188 + if (!secs) 189 + die("Unable to allocate %ld section headers\n", shnum); 190 + 191 + if (fseek(fp, ehdr.e_shoff, SEEK_SET) < 0) 192 + die("Seek to %" FMT " failed: %s\n", ehdr.e_shoff, strerror(errno)); 193 + 194 + for (i = 0; i < shnum; i++) { 195 + struct section *sec = &secs[i]; 196 + 197 + if 
(fread(&shdr, sizeof(shdr), 1, fp) != 1) { 198 + die("Cannot read ELF section headers %d/%ld: %s\n", 199 + i, shnum, strerror(errno)); 200 + } 201 + 202 + sec->shdr.sh_name = elf_word_to_cpu(shdr.sh_name); 203 + sec->shdr.sh_type = elf_word_to_cpu(shdr.sh_type); 204 + sec->shdr.sh_flags = elf_xword_to_cpu(shdr.sh_flags); 205 + sec->shdr.sh_addr = elf_addr_to_cpu(shdr.sh_addr); 206 + sec->shdr.sh_offset = elf_off_to_cpu(shdr.sh_offset); 207 + sec->shdr.sh_size = elf_xword_to_cpu(shdr.sh_size); 208 + sec->shdr.sh_link = elf_word_to_cpu(shdr.sh_link); 209 + sec->shdr.sh_info = elf_word_to_cpu(shdr.sh_info); 210 + sec->shdr.sh_addralign = elf_xword_to_cpu(shdr.sh_addralign); 211 + sec->shdr.sh_entsize = elf_xword_to_cpu(shdr.sh_entsize); 212 + 213 + if (sec->shdr.sh_link < shnum) 214 + sec->link = &secs[sec->shdr.sh_link]; 215 + } 216 + 217 + } 218 + 219 + static void read_relocs(FILE *fp) 220 + { 221 + int i, j; 222 + 223 + for (i = 0; i < shnum; i++) { 224 + struct section *sec = &secs[i]; 225 + 226 + if (sec->shdr.sh_type != SHT_REL_TYPE) 227 + continue; 228 + 229 + sec->reltab = malloc(sec->shdr.sh_size); 230 + if (!sec->reltab) 231 + die("malloc of %" FMT " bytes for relocs failed\n", sec->shdr.sh_size); 232 + 233 + if (fseek(fp, sec->shdr.sh_offset, SEEK_SET) < 0) 234 + die("Seek to %" FMT " failed: %s\n", sec->shdr.sh_offset, strerror(errno)); 235 + 236 + if (fread(sec->reltab, 1, sec->shdr.sh_size, fp) != sec->shdr.sh_size) 237 + die("Cannot read symbol table: %s\n", strerror(errno)); 238 + 239 + for (j = 0; j < sec->shdr.sh_size / sizeof(Elf_Rel); j++) { 240 + Elf_Rel *rel = &sec->reltab[j]; 241 + 242 + rel->r_offset = elf_addr_to_cpu(rel->r_offset); 243 + rel->r_info = elf_xword_to_cpu(rel->r_info); 244 + #if (SHT_REL_TYPE == SHT_RELA) 245 + rel->r_addend = elf_xword_to_cpu(rel->r_addend); 246 + #endif 247 + } 248 + } 249 + } 250 + 251 + static void add_reloc(struct relocs *r, uint32_t offset) 252 + { 253 + if (r->count == r->size) { 254 + unsigned long 
newsize = r->size + 50000; 255 + void *mem = realloc(r->offset, newsize * sizeof(r->offset[0])); 256 + 257 + if (!mem) 258 + die("realloc of %ld entries for relocs failed\n", newsize); 259 + 260 + r->offset = mem; 261 + r->size = newsize; 262 + } 263 + r->offset[r->count++] = offset; 264 + } 265 + 266 + static int do_reloc(struct section *sec, Elf_Rel *rel) 267 + { 268 + unsigned int r_type = ELF64_R_TYPE(rel->r_info); 269 + ElfW(Addr) offset = rel->r_offset; 270 + 271 + switch (r_type) { 272 + case R_390_NONE: 273 + case R_390_PC32: 274 + case R_390_PC64: 275 + case R_390_PC16DBL: 276 + case R_390_PC32DBL: 277 + case R_390_PLT32DBL: 278 + case R_390_GOTENT: 279 + case R_390_GOTPCDBL: 280 + case R_390_GOTOFF64: 281 + break; 282 + case R_390_64: 283 + add_reloc(&relocs64, offset); 284 + break; 285 + default: 286 + die("Unsupported relocation type: %d\n", r_type); 287 + break; 288 + } 289 + 290 + return 0; 291 + } 292 + 293 + static void walk_relocs(void) 294 + { 295 + int i; 296 + 297 + /* Walk through the relocations */ 298 + for (i = 0; i < shnum; i++) { 299 + struct section *sec_applies; 300 + int j; 301 + struct section *sec = &secs[i]; 302 + 303 + if (sec->shdr.sh_type != SHT_REL_TYPE) 304 + continue; 305 + 306 + sec_applies = &secs[sec->shdr.sh_info]; 307 + if (!(sec_applies->shdr.sh_flags & SHF_ALLOC)) 308 + continue; 309 + 310 + for (j = 0; j < sec->shdr.sh_size / sizeof(Elf_Rel); j++) { 311 + Elf_Rel *rel = &sec->reltab[j]; 312 + 313 + do_reloc(sec, rel); 314 + } 315 + } 316 + } 317 + 318 + static int cmp_relocs(const void *va, const void *vb) 319 + { 320 + const uint32_t *a, *b; 321 + 322 + a = va; b = vb; 323 + return (*a == *b) ? 0 : (*a > *b) ? 1 : -1; 324 + } 325 + 326 + static void sort_relocs(struct relocs *r) 327 + { 328 + qsort(r->offset, r->count, sizeof(r->offset[0]), cmp_relocs); 329 + } 330 + 331 + static int print_reloc(uint32_t v) 332 + { 333 + return fprintf(stdout, "\t.long 0x%08"PRIx32"\n", v) > 0 ? 
0 : -1; 334 + } 335 + 336 + static void emit_relocs(void) 337 + { 338 + int i; 339 + 340 + walk_relocs(); 341 + sort_relocs(&relocs64); 342 + 343 + printf(".section \".vmlinux.relocs_64\",\"a\"\n"); 344 + for (i = 0; i < relocs64.count; i++) 345 + print_reloc(relocs64.offset[i]); 346 + } 347 + 348 + static void process(FILE *fp) 349 + { 350 + read_ehdr(fp); 351 + read_shdrs(fp); 352 + read_relocs(fp); 353 + emit_relocs(); 354 + } 355 + 356 + static void usage(void) 357 + { 358 + die("relocs vmlinux\n"); 359 + } 360 + 361 + int main(int argc, char **argv) 362 + { 363 + unsigned char e_ident[EI_NIDENT]; 364 + const char *fname; 365 + FILE *fp; 366 + 367 + fname = NULL; 368 + 369 + if (argc != 2) 370 + usage(); 371 + 372 + fname = argv[1]; 373 + 374 + fp = fopen(fname, "r"); 375 + if (!fp) 376 + die("Cannot open %s: %s\n", fname, strerror(errno)); 377 + 378 + if (fread(&e_ident, 1, EI_NIDENT, fp) != EI_NIDENT) 379 + die("Cannot read %s: %s", fname, strerror(errno)); 380 + 381 + rewind(fp); 382 + 383 + process(fp); 384 + 385 + fclose(fp); 386 + return 0; 387 + }
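Two details of the new relocs tool are worth illustrating: it converts big-endian s390 ELF fields to the (usually little-endian) build host's byte order, and it sorts the collected `R_390_64` offsets before emitting them. A self-contained sketch of both, assuming a glibc host (the same `bswap_32`/`qsort` approach the tool uses):

```c
#include <assert.h>
#include <byteswap.h>
#include <endian.h>
#include <stdint.h>
#include <stdlib.h>

/* s390 ELF data is big-endian; swap only on a little-endian build host. */
static uint32_t elf32_to_cpu(uint32_t val)
{
#if __BYTE_ORDER == __LITTLE_ENDIAN
	return bswap_32(val);
#else
	return val;
#endif
}

/* Same comparator shape as the tool's cmp_relocs(). */
static int cmp_relocs(const void *va, const void *vb)
{
	const uint32_t *a = va, *b = vb;

	return (*a == *b) ? 0 : (*a > *b) ? 1 : -1;
}

/* Offsets are sorted so the emitted .vmlinux.relocs_64 section is ordered. */
static void sort_offsets(uint32_t *offsets, size_t n)
{
	qsort(offsets, n, sizeof(offsets[0]), cmp_relocs);
}
```

Note that byte swapping is an involution, so round-tripping a value through `elf32_to_cpu()` twice returns the original on either kind of host.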
+43 -22
drivers/pci/hotplug/s390_pci_hpc.c
··· 26 26 hotplug_slot); 27 27 int rc; 28 28 29 - if (zdev->state != ZPCI_FN_STATE_STANDBY) 30 - return -EIO; 29 + mutex_lock(&zdev->state_lock); 30 + if (zdev->state != ZPCI_FN_STATE_STANDBY) { 31 + rc = -EIO; 32 + goto out; 33 + } 31 34 32 35 rc = sclp_pci_configure(zdev->fid); 33 36 zpci_dbg(3, "conf fid:%x, rc:%d\n", zdev->fid, rc); 34 37 if (rc) 35 - return rc; 38 + goto out; 36 39 zdev->state = ZPCI_FN_STATE_CONFIGURED; 37 40 38 - return zpci_scan_configured_device(zdev, zdev->fh); 41 + rc = zpci_scan_configured_device(zdev, zdev->fh); 42 + out: 43 + mutex_unlock(&zdev->state_lock); 44 + return rc; 39 45 } 40 46 41 47 static int disable_slot(struct hotplug_slot *hotplug_slot) 42 48 { 43 49 struct zpci_dev *zdev = container_of(hotplug_slot, struct zpci_dev, 44 50 hotplug_slot); 45 - struct pci_dev *pdev; 51 + struct pci_dev *pdev = NULL; 52 + int rc; 46 53 47 - if (zdev->state != ZPCI_FN_STATE_CONFIGURED) 48 - return -EIO; 54 + mutex_lock(&zdev->state_lock); 55 + if (zdev->state != ZPCI_FN_STATE_CONFIGURED) { 56 + rc = -EIO; 57 + goto out; 58 + } 49 59 50 60 pdev = pci_get_slot(zdev->zbus->bus, zdev->devfn); 51 61 if (pdev && pci_num_vf(pdev)) { 52 62 pci_dev_put(pdev); 53 - return -EBUSY; 63 + rc = -EBUSY; 64 + goto out; 54 65 } 55 - pci_dev_put(pdev); 56 66 57 - return zpci_deconfigure_device(zdev); 67 + rc = zpci_deconfigure_device(zdev); 68 + out: 69 + mutex_unlock(&zdev->state_lock); 70 + if (pdev) 71 + pci_dev_put(pdev); 72 + return rc; 58 73 } 59 74 60 75 static int reset_slot(struct hotplug_slot *hotplug_slot, bool probe) 61 76 { 62 77 struct zpci_dev *zdev = container_of(hotplug_slot, struct zpci_dev, 63 78 hotplug_slot); 79 + int rc = -EIO; 64 80 65 - if (zdev->state != ZPCI_FN_STATE_CONFIGURED) 66 - return -EIO; 67 81 /* 68 - * We can't take the zdev->lock as reset_slot may be called during 69 - * probing and/or device removal which already happens under the 70 - * zdev->lock. 
Instead the user should use the higher level 71 - * pci_reset_function() or pci_bus_reset() which hold the PCI device 72 - * lock preventing concurrent removal. If not using these functions 73 - * holding the PCI device lock is required. 82 + * If we can't get the zdev->state_lock the device state is 83 + * currently undergoing a transition and we bail out - just 84 + * the same as if the device's state is not configured at all. 74 85 */ 86 + if (!mutex_trylock(&zdev->state_lock)) 87 + return rc; 75 88 76 - /* As long as the function is configured we can reset */ 77 - if (probe) 78 - return 0; 89 + /* We can reset only if the function is configured */ 90 + if (zdev->state != ZPCI_FN_STATE_CONFIGURED) 91 + goto out; 79 92 80 - return zpci_hot_reset_device(zdev); 93 + if (probe) { 94 + rc = 0; 95 + goto out; 96 + } 97 + 98 + rc = zpci_hot_reset_device(zdev); 99 + out: 100 + mutex_unlock(&zdev->state_lock); 101 + return rc; 81 102 } 82 103 83 104 static int get_power_status(struct hotplug_slot *hotplug_slot, u8 *value)
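The `reset_slot()` change above replaces "caller must hold the right lock" documentation with `mutex_trylock()`: if the state lock cannot be taken, the device is mid-transition and the reset bails out with `-EIO` instead of risking a deadlock against probe/removal paths that already hold it. The pattern in miniature, with pthreads:

```c
#include <assert.h>
#include <errno.h>
#include <pthread.h>

static pthread_mutex_t state_lock = PTHREAD_MUTEX_INITIALIZER;

/* A reset racing with a state transition gives up rather than blocking,
 * mirroring the mutex_trylock() in the new reset_slot(). */
static int try_reset(void)
{
	if (pthread_mutex_trylock(&state_lock) != 0)
		return -EIO;	/* state is in transition: bail out */
	/* ... perform the reset under the lock ... */
	pthread_mutex_unlock(&state_lock);
	return 0;
}
```

With a default (non-recursive) mutex, `pthread_mutex_trylock()` returns `EBUSY` whenever the mutex is held, so the contended path is easy to exercise.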
+2 -2
drivers/s390/char/vmur.c
··· 195 195 struct ccw1 *ptr = cpa; 196 196 197 197 while (ptr->cda) { 198 - kfree((void *)(addr_t) ptr->cda); 198 + kfree(phys_to_virt(ptr->cda)); 199 199 ptr++; 200 200 } 201 201 kfree(cpa); ··· 237 237 free_chan_prog(cpa); 238 238 return ERR_PTR(-ENOMEM); 239 239 } 240 - cpa[i].cda = (u32)(addr_t) kbuf; 240 + cpa[i].cda = (u32)virt_to_phys(kbuf); 241 241 if (copy_from_user(kbuf, ubuf, reclen)) { 242 242 free_chan_prog(cpa); 243 243 return ERR_PTR(-EFAULT);
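The vmur.c fix swaps raw address casts for `virt_to_phys()`/`phys_to_virt()` when filling and walking the channel program, but the zero-terminated walk itself is unchanged. A userspace sketch of that walk (here `struct ccw1` is a simplified stand-in and `cda` holds a 64-bit pointer value rather than the real 31-bit physical address):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Simplified channel command word: a zero cda terminates the program. */
struct ccw1 {
	uint64_t cda;	/* data address; 0 ends the chain */
};

/* Free every per-record buffer referenced by the channel program,
 * stopping at the zero terminator, like vmur's free_chan_prog(). */
static int free_chan_prog(struct ccw1 *cpa)
{
	struct ccw1 *ptr = cpa;
	int freed = 0;

	while (ptr->cda) {
		free((void *)(uintptr_t)ptr->cda);
		ptr->cda = 0;
		freed++;
		ptr++;
	}
	return freed;
}
```

In the kernel the conversion matters because `cda` must carry a physical address for the channel subsystem, while `kfree()` needs the virtual one back.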
-1
drivers/s390/char/zcore.c
··· 29 29 #include <asm/irqflags.h> 30 30 #include <asm/checksum.h> 31 31 #include <asm/os_info.h> 32 - #include <asm/switch_to.h> 33 32 #include <asm/maccess.h> 34 33 #include "sclp.h" 35 34
+2 -2
drivers/s390/cio/ccwgroup.c
··· 31 31 * to devices that use multiple subchannels. 32 32 */ 33 33 34 - static struct bus_type ccwgroup_bus_type; 34 + static const struct bus_type ccwgroup_bus_type; 35 35 36 36 static void __ccwgroup_remove_symlinks(struct ccwgroup_device *gdev) 37 37 { ··· 465 465 gdrv->shutdown(gdev); 466 466 } 467 467 468 - static struct bus_type ccwgroup_bus_type = { 468 + static const struct bus_type ccwgroup_bus_type = { 469 469 .name = "ccwgroup", 470 470 .dev_groups = ccwgroup_dev_groups, 471 471 .remove = ccwgroup_remove,
+2 -2
drivers/s390/cio/chsc.c
··· 1091 1091 { 1092 1092 int ret; 1093 1093 1094 - sei_page = (void *)get_zeroed_page(GFP_KERNEL | GFP_DMA); 1095 - chsc_page = (void *)get_zeroed_page(GFP_KERNEL | GFP_DMA); 1094 + sei_page = (void *)get_zeroed_page(GFP_KERNEL); 1095 + chsc_page = (void *)get_zeroed_page(GFP_KERNEL); 1096 1096 if (!sei_page || !chsc_page) { 1097 1097 ret = -ENOMEM; 1098 1098 goto out_err;
+10 -10
drivers/s390/cio/chsc_sch.c
··· 293 293 if (!css_general_characteristics.dynio) 294 294 /* It makes no sense to try. */ 295 295 return -EOPNOTSUPP; 296 - chsc_area = (void *)get_zeroed_page(GFP_DMA | GFP_KERNEL); 296 + chsc_area = (void *)get_zeroed_page(GFP_KERNEL); 297 297 if (!chsc_area) 298 298 return -ENOMEM; 299 299 request = kzalloc(sizeof(*request), GFP_KERNEL); ··· 341 341 ret = -ENOMEM; 342 342 goto out_unlock; 343 343 } 344 - on_close_chsc_area = (void *)get_zeroed_page(GFP_DMA | GFP_KERNEL); 344 + on_close_chsc_area = (void *)get_zeroed_page(GFP_KERNEL); 345 345 if (!on_close_chsc_area) { 346 346 ret = -ENOMEM; 347 347 goto out_free_request; ··· 393 393 struct chsc_sync_area *chsc_area; 394 394 int ret, ccode; 395 395 396 - chsc_area = (void *)get_zeroed_page(GFP_KERNEL | GFP_DMA); 396 + chsc_area = (void *)get_zeroed_page(GFP_KERNEL); 397 397 if (!chsc_area) 398 398 return -ENOMEM; 399 399 if (copy_from_user(chsc_area, user_area, PAGE_SIZE)) { ··· 439 439 u8 data[PAGE_SIZE - 20]; 440 440 } __attribute__ ((packed)) *scpcd_area; 441 441 442 - scpcd_area = (void *)get_zeroed_page(GFP_KERNEL | GFP_DMA); 442 + scpcd_area = (void *)get_zeroed_page(GFP_KERNEL); 443 443 if (!scpcd_area) 444 444 return -ENOMEM; 445 445 cd = kzalloc(sizeof(*cd), GFP_KERNEL); ··· 501 501 u8 data[PAGE_SIZE - 20]; 502 502 } __attribute__ ((packed)) *scucd_area; 503 503 504 - scucd_area = (void *)get_zeroed_page(GFP_KERNEL | GFP_DMA); 504 + scucd_area = (void *)get_zeroed_page(GFP_KERNEL); 505 505 if (!scucd_area) 506 506 return -ENOMEM; 507 507 cd = kzalloc(sizeof(*cd), GFP_KERNEL); ··· 564 564 u8 data[PAGE_SIZE - 20]; 565 565 } __attribute__ ((packed)) *sscud_area; 566 566 567 - sscud_area = (void *)get_zeroed_page(GFP_KERNEL | GFP_DMA); 567 + sscud_area = (void *)get_zeroed_page(GFP_KERNEL); 568 568 if (!sscud_area) 569 569 return -ENOMEM; 570 570 cud = kzalloc(sizeof(*cud), GFP_KERNEL); ··· 626 626 u8 data[PAGE_SIZE - 20]; 627 627 } __attribute__ ((packed)) *sci_area; 628 628 629 - sci_area = (void 
*)get_zeroed_page(GFP_KERNEL | GFP_DMA); 629 + sci_area = (void *)get_zeroed_page(GFP_KERNEL); 630 630 if (!sci_area) 631 631 return -ENOMEM; 632 632 ci = kzalloc(sizeof(*ci), GFP_KERNEL); ··· 697 697 u32 res; 698 698 } __attribute__ ((packed)) *cssids_parm; 699 699 700 - sccl_area = (void *)get_zeroed_page(GFP_KERNEL | GFP_DMA); 700 + sccl_area = (void *)get_zeroed_page(GFP_KERNEL); 701 701 if (!sccl_area) 702 702 return -ENOMEM; 703 703 ccl = kzalloc(sizeof(*ccl), GFP_KERNEL); ··· 757 757 int ret; 758 758 759 759 chpd = kzalloc(sizeof(*chpd), GFP_KERNEL); 760 - scpd_area = (void *)get_zeroed_page(GFP_KERNEL | GFP_DMA); 760 + scpd_area = (void *)get_zeroed_page(GFP_KERNEL); 761 761 if (!scpd_area || !chpd) { 762 762 ret = -ENOMEM; 763 763 goto out_free; ··· 797 797 u8 data[PAGE_SIZE - 36]; 798 798 } __attribute__ ((packed)) *sdcal_area; 799 799 800 - sdcal_area = (void *)get_zeroed_page(GFP_KERNEL | GFP_DMA); 800 + sdcal_area = (void *)get_zeroed_page(GFP_KERNEL); 801 801 if (!sdcal_area) 802 802 return -ENOMEM; 803 803 dcal = kzalloc(sizeof(*dcal), GFP_KERNEL);
+3 -3
drivers/s390/cio/cmf.c
··· 169 169 " lgr 2,%[mbo]\n" 170 170 " schm\n" 171 171 : 172 - : [r1] "d" ((unsigned long)onoff), [mbo] "d" (area) 172 + : [r1] "d" ((unsigned long)onoff), 173 + [mbo] "d" (virt_to_phys(area)) 173 174 : "1", "2"); 174 175 } 175 176 ··· 502 501 WARN_ON(!list_empty(&cmb_area.list)); 503 502 504 503 spin_unlock(&cmb_area.lock); 505 - mem = (void*)__get_free_pages(GFP_KERNEL | GFP_DMA, 506 - get_order(size)); 504 + mem = (void *)__get_free_pages(GFP_KERNEL, get_order(size)); 507 505 spin_lock(&cmb_area.lock); 508 506 509 507 if (cmb_area.mem) {
+2 -2
drivers/s390/cio/css.c
··· 39 39 40 40 #define MAX_CSS_IDX 0 41 41 struct channel_subsystem *channel_subsystems[MAX_CSS_IDX + 1]; 42 - static struct bus_type css_bus_type; 42 + static const struct bus_type css_bus_type; 43 43 44 44 int 45 45 for_each_subchannel(int(*fn)(struct subchannel_id, void *), void *data) ··· 1409 1409 return ret; 1410 1410 } 1411 1411 1412 - static struct bus_type css_bus_type = { 1412 + static const struct bus_type css_bus_type = { 1413 1413 .name = "css", 1414 1414 .match = css_bus_match, 1415 1415 .probe = css_probe,
+2 -2
drivers/s390/cio/device.c
··· 49 49 50 50 static atomic_t ccw_device_init_count = ATOMIC_INIT(0); 51 51 static DECLARE_WAIT_QUEUE_HEAD(ccw_device_init_wq); 52 - static struct bus_type ccw_bus_type; 52 + static const struct bus_type ccw_bus_type; 53 53 54 54 /******************* bus type handling ***********************/ 55 55 ··· 1776 1776 __disable_cmf(cdev); 1777 1777 } 1778 1778 1779 - static struct bus_type ccw_bus_type = { 1779 + static const struct bus_type ccw_bus_type = { 1780 1780 .name = "ccw", 1781 1781 .match = ccw_bus_match, 1782 1782 .uevent = ccw_uevent,
+2 -2
drivers/s390/cio/scm.c
··· 42 42 return add_uevent_var(env, "MODALIAS=scm:scmdev"); 43 43 } 44 44 45 - static struct bus_type scm_bus_type = { 45 + static const struct bus_type scm_bus_type = { 46 46 .name = "scm", 47 47 .probe = scmdev_probe, 48 48 .remove = scmdev_remove, ··· 228 228 size_t num; 229 229 int ret; 230 230 231 - scm_info = (void *)__get_free_page(GFP_KERNEL | GFP_DMA); 231 + scm_info = (void *)__get_free_page(GFP_KERNEL); 232 232 if (!scm_info) 233 233 return -ENOMEM; 234 234
+191 -68
drivers/s390/crypto/ap_bus.c
··· 38 38 #include <linux/debugfs.h> 39 39 #include <linux/ctype.h> 40 40 #include <linux/module.h> 41 + #include <asm/uv.h> 41 42 42 43 #include "ap_bus.h" 43 44 #include "ap_debug.h" ··· 84 83 DEFINE_MUTEX(ap_perms_mutex); 85 84 EXPORT_SYMBOL(ap_perms_mutex); 86 85 87 - /* # of bus scans since init */ 88 - static atomic64_t ap_scan_bus_count; 89 - 90 86 /* # of bindings complete since init */ 91 87 static atomic64_t ap_bindings_complete_count = ATOMIC64_INIT(0); 92 88 93 - /* completion for initial APQN bindings complete */ 94 - static DECLARE_COMPLETION(ap_init_apqn_bindings_complete); 89 + /* completion for APQN bindings complete */ 90 + static DECLARE_COMPLETION(ap_apqn_bindings_complete); 95 91 96 92 static struct ap_config_info *ap_qci_info; 97 93 static struct ap_config_info *ap_qci_info_old; ··· 99 101 debug_info_t *ap_dbf_info; 100 102 101 103 /* 102 - * Workqueue timer for bus rescan. 104 + * AP bus rescan related things. 103 105 */ 104 - static struct timer_list ap_config_timer; 105 - static int ap_config_time = AP_CONFIG_TIME; 106 - static void ap_scan_bus(struct work_struct *); 107 - static DECLARE_WORK(ap_scan_work, ap_scan_bus); 106 + static bool ap_scan_bus(void); 107 + static bool ap_scan_bus_result; /* result of last ap_scan_bus() */ 108 + static DEFINE_MUTEX(ap_scan_bus_mutex); /* mutex ap_scan_bus() invocations */ 109 + static atomic64_t ap_scan_bus_count; /* counter ap_scan_bus() invocations */ 110 + static int ap_scan_bus_time = AP_CONFIG_TIME; 111 + static struct timer_list ap_scan_bus_timer; 112 + static void ap_scan_bus_wq_callback(struct work_struct *); 113 + static DECLARE_WORK(ap_scan_bus_work, ap_scan_bus_wq_callback); 108 114 109 115 /* 110 116 * Tasklet & timer for AP request polling and interrupts ··· 137 135 /* Maximum adapter id, if not given via qci */ 138 136 static int ap_max_adapter_id = 63; 139 137 140 - static struct bus_type ap_bus_type; 138 + static const struct bus_type ap_bus_type; 141 139 142 140 /* Adapter interrupt 
definitions */ 143 141 static void ap_interrupt_handler(struct airq_struct *airq, ··· 755 753 } 756 754 757 755 /* 758 - * After initial ap bus scan do check if all existing APQNs are 756 + * After ap bus scan do check if all existing APQNs are 759 757 * bound to device drivers. 760 758 */ 761 759 static void ap_check_bindings_complete(void) ··· 765 763 if (atomic64_read(&ap_scan_bus_count) >= 1) { 766 764 ap_calc_bound_apqns(&apqns, &bound); 767 765 if (bound == apqns) { 768 - if (!completion_done(&ap_init_apqn_bindings_complete)) { 769 - complete_all(&ap_init_apqn_bindings_complete); 770 - AP_DBF_INFO("%s complete\n", __func__); 766 + if (!completion_done(&ap_apqn_bindings_complete)) { 767 + complete_all(&ap_apqn_bindings_complete); 768 + pr_debug("%s all apqn bindings complete\n", __func__); 771 769 } 772 770 ap_send_bindings_complete_uevent(); 773 771 } ··· 784 782 * -ETIME is returned. On failures negative return values are 785 783 * returned to the caller. 786 784 */ 787 - int ap_wait_init_apqn_bindings_complete(unsigned long timeout) 785 + int ap_wait_apqn_bindings_complete(unsigned long timeout) 788 786 { 787 + int rc = 0; 789 788 long l; 790 789 791 - if (completion_done(&ap_init_apqn_bindings_complete)) 790 + if (completion_done(&ap_apqn_bindings_complete)) 792 791 return 0; 793 792 794 793 if (timeout) 795 794 l = wait_for_completion_interruptible_timeout( 796 - &ap_init_apqn_bindings_complete, timeout); 795 + &ap_apqn_bindings_complete, timeout); 797 796 else 798 797 l = wait_for_completion_interruptible( 799 - &ap_init_apqn_bindings_complete); 798 + &ap_apqn_bindings_complete); 800 799 if (l < 0) 801 - return l == -ERESTARTSYS ? -EINTR : l; 800 + rc = l == -ERESTARTSYS ? 
-EINTR : l; 802 801 else if (l == 0 && timeout) 803 - return -ETIME; 802 + rc = -ETIME; 804 803 805 - return 0; 804 + pr_debug("%s rc=%d\n", __func__, rc); 805 + return rc; 806 806 } 807 - EXPORT_SYMBOL(ap_wait_init_apqn_bindings_complete); 807 + EXPORT_SYMBOL(ap_wait_apqn_bindings_complete); 808 808 809 809 static int __ap_queue_devices_with_id_unregister(struct device *dev, void *data) 810 810 { ··· 830 826 drvres = to_ap_drv(dev->driver)->flags 831 827 & AP_DRIVER_FLAG_DEFAULT; 832 828 if (!!devres != !!drvres) { 833 - AP_DBF_DBG("%s reprobing queue=%02x.%04x\n", 834 - __func__, card, queue); 829 + pr_debug("%s reprobing queue=%02x.%04x\n", 830 + __func__, card, queue); 835 831 rc = device_reprobe(dev); 836 832 if (rc) 837 833 AP_DBF_WARN("%s reprobing queue=%02x.%04x failed\n", ··· 943 939 if (is_queue_dev(dev)) 944 940 hash_del(&to_ap_queue(dev)->hnode); 945 941 spin_unlock_bh(&ap_queues_lock); 946 - } else { 947 - ap_check_bindings_complete(); 948 942 } 949 943 950 944 out: ··· 1014 1012 } 1015 1013 EXPORT_SYMBOL(ap_driver_unregister); 1016 1014 1017 - void ap_bus_force_rescan(void) 1015 + /* 1016 + * Enforce a synchronous AP bus rescan. 1017 + * Returns true if the bus scan finds a change in the AP configuration 1018 + * and AP devices have been added or deleted when this function returns. 
1019 + */ 1020 + bool ap_bus_force_rescan(void) 1018 1021 { 1019 - /* Only trigger AP bus scans after the initial scan is done */ 1020 - if (atomic64_read(&ap_scan_bus_count) <= 0) 1021 - return; 1022 + unsigned long scan_counter = atomic64_read(&ap_scan_bus_count); 1023 + bool rc = false; 1022 1024 1023 - /* processing a asynchronous bus rescan */ 1024 - del_timer(&ap_config_timer); 1025 - queue_work(system_long_wq, &ap_scan_work); 1026 - flush_work(&ap_scan_work); 1025 + pr_debug(">%s scan counter=%lu\n", __func__, scan_counter); 1026 + 1027 + /* Only trigger AP bus scans after the initial scan is done */ 1028 + if (scan_counter <= 0) 1029 + goto out; 1030 + 1031 + /* Try to acquire the AP scan bus mutex */ 1032 + if (mutex_trylock(&ap_scan_bus_mutex)) { 1033 + /* mutex acquired, run the AP bus scan */ 1034 + ap_scan_bus_result = ap_scan_bus(); 1035 + rc = ap_scan_bus_result; 1036 + mutex_unlock(&ap_scan_bus_mutex); 1037 + goto out; 1038 + } 1039 + 1040 + /* 1041 + * Mutex acquire failed. So there is currently another task 1042 + * already running the AP bus scan. Then let's simply wait 1043 + * for the lock, which means the other task has finished and 1044 + * stored the result in ap_scan_bus_result.
1045 + */ 1046 + if (mutex_lock_interruptible(&ap_scan_bus_mutex)) { 1047 + /* some error occurred, ignore and go out */ 1048 + goto out; 1049 + } 1050 + rc = ap_scan_bus_result; 1051 + mutex_unlock(&ap_scan_bus_mutex); 1052 + 1053 + out: 1054 + pr_debug("%s rc=%d\n", __func__, rc); 1055 + return rc; 1027 1056 } 1028 1057 EXPORT_SYMBOL(ap_bus_force_rescan); 1029 1058 ··· 1063 1030 */ 1064 1031 void ap_bus_cfg_chg(void) 1065 1032 { 1066 - AP_DBF_DBG("%s config change, forcing bus rescan\n", __func__); 1033 + pr_debug("%s config change, forcing bus rescan\n", __func__); 1067 1034 1068 1035 ap_bus_force_rescan(); 1069 1036 } ··· 1283 1250 1284 1251 static ssize_t config_time_show(const struct bus_type *bus, char *buf) 1285 1252 { 1286 - return sysfs_emit(buf, "%d\n", ap_config_time); 1253 + return sysfs_emit(buf, "%d\n", ap_scan_bus_time); 1287 1254 } 1288 1255 1289 1256 static ssize_t config_time_store(const struct bus_type *bus, ··· 1293 1260 1294 1261 if (sscanf(buf, "%d\n", &time) != 1 || time < 5 || time > 120) 1295 1262 return -EINVAL; 1296 - ap_config_time = time; 1297 - mod_timer(&ap_config_timer, jiffies + ap_config_time * HZ); 1263 + ap_scan_bus_time = time; 1264 + mod_timer(&ap_scan_bus_timer, jiffies + ap_scan_bus_time * HZ); 1298 1265 return count; 1299 1266 } 1300 1267 ··· 1636 1603 }; 1637 1604 ATTRIBUTE_GROUPS(ap_bus); 1638 1605 1639 - static struct bus_type ap_bus_type = { 1606 + static const struct bus_type ap_bus_type = { 1640 1607 .name = "ap", 1641 1608 .bus_groups = ap_bus_groups, 1642 1609 .match = &ap_bus_match, ··· 1921 1888 aq->last_err_rc = AP_RESPONSE_CHECKSTOPPED; 1922 1889 } 1923 1890 spin_unlock_bh(&aq->lock); 1924 - AP_DBF_DBG("%s(%d,%d) queue dev checkstop on\n", 1925 - __func__, ac->id, dom); 1891 + pr_debug("%s(%d,%d) queue dev checkstop on\n", 1892 + __func__, ac->id, dom); 1926 1893 /* 'receive' pending messages with -EAGAIN */ 1927 1894 ap_flush_queue(aq); 1928 1895 goto put_dev_and_continue; ··· 1932 1899 if (aq->dev_state > 
AP_DEV_STATE_UNINITIATED) 1933 1900 _ap_queue_init_state(aq); 1934 1901 spin_unlock_bh(&aq->lock); 1935 - AP_DBF_DBG("%s(%d,%d) queue dev checkstop off\n", 1936 - __func__, ac->id, dom); 1902 + pr_debug("%s(%d,%d) queue dev checkstop off\n", 1903 + __func__, ac->id, dom); 1937 1904 goto put_dev_and_continue; 1938 1905 } 1939 1906 /* config state change */ ··· 1945 1912 aq->last_err_rc = AP_RESPONSE_DECONFIGURED; 1946 1913 } 1947 1914 spin_unlock_bh(&aq->lock); 1948 - AP_DBF_DBG("%s(%d,%d) queue dev config off\n", 1949 - __func__, ac->id, dom); 1915 + pr_debug("%s(%d,%d) queue dev config off\n", 1916 + __func__, ac->id, dom); 1950 1917 ap_send_config_uevent(&aq->ap_dev, aq->config); 1951 1918 /* 'receive' pending messages with -EAGAIN */ 1952 1919 ap_flush_queue(aq); ··· 1957 1924 if (aq->dev_state > AP_DEV_STATE_UNINITIATED) 1958 1925 _ap_queue_init_state(aq); 1959 1926 spin_unlock_bh(&aq->lock); 1960 - AP_DBF_DBG("%s(%d,%d) queue dev config on\n", 1961 - __func__, ac->id, dom); 1927 + pr_debug("%s(%d,%d) queue dev config on\n", 1928 + __func__, ac->id, dom); 1962 1929 ap_send_config_uevent(&aq->ap_dev, aq->config); 1963 1930 goto put_dev_and_continue; 1964 1931 } ··· 2030 1997 ap_scan_rm_card_dev_and_queue_devs(ac); 2031 1998 put_device(dev); 2032 1999 } else { 2033 - AP_DBF_DBG("%s(%d) no type info (no APQN found), ignored\n", 2034 - __func__, ap); 2000 + pr_debug("%s(%d) no type info (no APQN found), ignored\n", 2001 + __func__, ap); 2035 2002 } 2036 2003 return; 2037 2004 } ··· 2043 2010 ap_scan_rm_card_dev_and_queue_devs(ac); 2044 2011 put_device(dev); 2045 2012 } else { 2046 - AP_DBF_DBG("%s(%d) no valid type (0) info, ignored\n", 2047 - __func__, ap); 2013 + pr_debug("%s(%d) no valid type (0) info, ignored\n", 2014 + __func__, ap); 2048 2015 } 2049 2016 return; 2050 2017 } ··· 2168 2135 sizeof(struct ap_config_info)) != 0; 2169 2136 } 2170 2137 2138 + /* 2139 + * ap_config_has_new_aps - Check current against old qci info if 2140 + * new adapters have 
appeared. Returns true if at least one new 2141 + * adapter in the apm mask is showing up. Existing adapters or 2142 + * receding adapters are not counted. 2143 + */ 2144 + static bool ap_config_has_new_aps(void) 2145 + { 2146 + 2147 + unsigned long m[BITS_TO_LONGS(AP_DEVICES)]; 2148 + 2149 + if (!ap_qci_info) 2150 + return false; 2151 + 2152 + bitmap_andnot(m, (unsigned long *)ap_qci_info->apm, 2153 + (unsigned long *)ap_qci_info_old->apm, AP_DEVICES); 2154 + if (!bitmap_empty(m, AP_DEVICES)) 2155 + return true; 2156 + 2157 + return false; 2158 + } 2159 + 2160 + /* 2161 + * ap_config_has_new_doms - Check current against old qci info if 2162 + * new (usage) domains have appeared. Returns true if at least one 2163 + * new domain in the aqm mask is showing up. Existing domains or 2164 + * receding domains are not counted. 2165 + */ 2166 + static bool ap_config_has_new_doms(void) 2167 + { 2168 + unsigned long m[BITS_TO_LONGS(AP_DOMAINS)]; 2169 + 2170 + if (!ap_qci_info) 2171 + return false; 2172 + 2173 + bitmap_andnot(m, (unsigned long *)ap_qci_info->aqm, 2174 + (unsigned long *)ap_qci_info_old->aqm, AP_DOMAINS); 2175 + if (!bitmap_empty(m, AP_DOMAINS)) 2176 + return true; 2177 + 2178 + return false; 2179 + } 2180 + 2171 2181 /** 2172 2182 * ap_scan_bus(): Scan the AP bus for new devices 2173 - * Runs periodically, workqueue timer (ap_config_time) 2174 - * @unused: Unused pointer. 2183 + * Always run under mutex ap_scan_bus_mutex protection 2184 + * which needs to get locked/unlocked by the caller! 2185 + * Returns true if any config change has been detected 2186 + * during the scan, otherwise false. 
2175 2187 */ 2176 - static void ap_scan_bus(struct work_struct *unused) 2188 + static bool ap_scan_bus(void) 2177 2189 { 2178 - int ap, config_changed = 0; 2190 + bool config_changed; 2191 + int ap; 2179 2192 2180 - /* config change notify */ 2193 + pr_debug(">%s\n", __func__); 2194 + 2195 + /* (re-)fetch configuration via QCI */ 2181 2196 config_changed = ap_get_configuration(); 2182 - if (config_changed) 2197 + if (config_changed) { 2198 + if (ap_config_has_new_aps() || ap_config_has_new_doms()) { 2199 + /* 2200 + * The appearance of new adapters and/or domains requires 2201 + * building new ap devices which need to get bound to a 2202 + * device driver. Thus reset the APQN bindings complete 2203 + * completion. 2204 + */ 2205 + reinit_completion(&ap_apqn_bindings_complete); 2206 + } 2207 + /* post a config change notify */ 2183 2208 notify_config_changed(); 2209 + } 2184 2210 ap_select_domain(); 2185 - 2186 - AP_DBF_DBG("%s running\n", __func__); 2187 2211 2188 2212 /* loop over all possible adapters */ 2189 2213 for (ap = 0; ap <= ap_max_adapter_id; ap++) ··· 2264 2174 } 2265 2175 2266 2176 if (atomic64_inc_return(&ap_scan_bus_count) == 1) { 2267 - AP_DBF_DBG("%s init scan complete\n", __func__); 2177 + pr_debug("%s init scan complete\n", __func__); 2268 2178 ap_send_init_scan_done_uevent(); 2269 - ap_check_bindings_complete(); 2270 2179 } 2271 2180 2272 - mod_timer(&ap_config_timer, jiffies + ap_config_time * HZ); 2181 + ap_check_bindings_complete(); 2182 + 2183 + mod_timer(&ap_scan_bus_timer, jiffies + ap_scan_bus_time * HZ); 2184 + 2185 + pr_debug("<%s config_changed=%d\n", __func__, config_changed); 2186 + 2187 + return config_changed; 2273 2188 } 2274 2189 2275 - static void ap_config_timeout(struct timer_list *unused) 2190 + /* 2191 + * Callback for the ap_scan_bus_timer 2192 + * Runs periodically, workqueue timer (ap_scan_bus_time) 2193 + */ 2194 + static void ap_scan_bus_timer_callback(struct timer_list *unused) 2276 2195 { 2277 - queue_work(system_long_wq,
&ap_scan_work); 2196 + /* 2197 + * Schedule work into the system long wq which, when 2198 + * finally executed, calls the AP bus scan. 2199 + */ 2200 + queue_work(system_long_wq, &ap_scan_bus_work); 2201 + } 2202 + 2203 + /* 2204 + * Callback for the ap_scan_bus_work 2205 + */ 2206 + static void ap_scan_bus_wq_callback(struct work_struct *unused) 2207 + { 2208 + /* 2209 + * Try to invoke ap_scan_bus(). If the mutex acquisition 2210 + * fails, there is currently another task already running the 2211 + * AP bus scan and there is no need to wait and re-trigger the 2212 + * scan again. Please note at the end of the scan bus function 2213 + * the AP scan bus timer is re-armed, which then triggers the 2214 + * ap_scan_bus_timer_callback which enqueues a work into the 2215 + * system_long_wq which invokes this function here again. 2216 + */ 2217 + if (mutex_trylock(&ap_scan_bus_mutex)) { 2218 + ap_scan_bus_result = ap_scan_bus(); 2219 + mutex_unlock(&ap_scan_bus_mutex); 2220 + } 2278 2221 } 2279 2222 2280 2223 static int __init ap_debug_init(void) 2281 2224 { 2282 2225 ap_dbf_info = debug_register("ap", 2, 1, 2283 - DBF_MAX_SPRINTF_ARGS * sizeof(long)); 2226 + AP_DBF_MAX_SPRINTF_ARGS * sizeof(long)); 2284 2227 debug_register_view(ap_dbf_info, &debug_sprintf_view); 2285 2228 debug_set_level(ap_dbf_info, DBF_ERR); 2286 2229 ··· 2397 2274 ap_root_device->bus = &ap_bus_type; 2398 2275 2399 2276 /* Setup the AP bus rescan timer. */ 2400 - timer_setup(&ap_config_timer, ap_config_timeout, 0); 2277 + timer_setup(&ap_scan_bus_timer, ap_scan_bus_timer_callback, 0); 2401 2278 2402 2279 /* 2403 2280 * Setup the high resolution poll timer. ··· 2415 2292 goto out_work; 2416 2293 } 2417 2294 2418 - queue_work(system_long_wq, &ap_scan_work); 2295 + queue_work(system_long_wq, &ap_scan_bus_work); 2419 2296 2420 2297 return 0; 2421 2298
+6 -2
drivers/s390/crypto/ap_bus.h
··· 266 266 bool ap_is_se_guest(void); 267 267 void ap_wait(enum ap_sm_wait wait); 268 268 void ap_request_timeout(struct timer_list *t); 269 - void ap_bus_force_rescan(void); 269 + bool ap_bus_force_rescan(void); 270 270 271 271 int ap_test_config_usage_domain(unsigned int domain); 272 272 int ap_test_config_ctrl_domain(unsigned int domain); ··· 352 352 * the return value is 0. If the timeout (in jiffies) hits instead 353 353 * -ETIME is returned. On failures negative return values are 354 354 * returned to the caller. 355 + * It may be that the AP bus scan finds new devices. Then the 356 + * condition that all APQNs are bound to their device drivers 357 + * is reset to false and this call again blocks until either all 358 + * APQNs are bound to a device driver or the timeout hits again. 355 359 */ 356 - int ap_wait_init_apqn_bindings_complete(unsigned long timeout); 360 + int ap_wait_apqn_bindings_complete(unsigned long timeout); 357 361 358 362 void ap_send_config_uevent(struct ap_device *ap_dev, bool cfg); 359 363 void ap_send_online_uevent(struct ap_device *ap_dev, int online);
+1 -3
drivers/s390/crypto/ap_debug.h
··· 16 16 #define RC2ERR(rc) ((rc) ? DBF_ERR : DBF_INFO) 17 17 #define RC2WARN(rc) ((rc) ? DBF_WARN : DBF_INFO) 18 18 19 - #define DBF_MAX_SPRINTF_ARGS 6 19 + #define AP_DBF_MAX_SPRINTF_ARGS 6 20 20 21 21 #define AP_DBF(...) \ 22 22 debug_sprintf_event(ap_dbf_info, ##__VA_ARGS__) ··· 26 26 debug_sprintf_event(ap_dbf_info, DBF_WARN, ##__VA_ARGS__) 27 27 #define AP_DBF_INFO(...) \ 28 28 debug_sprintf_event(ap_dbf_info, DBF_INFO, ##__VA_ARGS__) 29 - #define AP_DBF_DBG(...) \ 30 - debug_sprintf_event(ap_dbf_info, DBF_DEBUG, ##__VA_ARGS__) 31 29 32 30 extern debug_info_t *ap_dbf_info; 33 31
+19 -12
drivers/s390/crypto/ap_queue.c
··· 136 136 137 137 switch (status.response_code) { 138 138 case AP_RESPONSE_NORMAL: 139 + print_hex_dump_debug("aprpl: ", DUMP_PREFIX_ADDRESS, 16, 1, 140 + aq->reply->msg, aq->reply->len, false); 139 141 aq->queue_count = max_t(int, 0, aq->queue_count - 1); 140 142 if (!status.queue_empty && !aq->queue_count) 141 143 aq->queue_count++; ··· 171 169 aq->queue_count = 0; 172 170 list_splice_init(&aq->pendingq, &aq->requestq); 173 171 aq->requestq_count += aq->pendingq_count; 172 + pr_debug("%s queue 0x%02x.%04x rescheduled %d reqs (new req %d)\n", 173 + __func__, AP_QID_CARD(aq->qid), AP_QID_QUEUE(aq->qid), 174 + aq->pendingq_count, aq->requestq_count); 174 175 aq->pendingq_count = 0; 175 176 break; 176 177 default: ··· 248 243 249 244 /* Start the next request on the queue. */ 250 245 ap_msg = list_entry(aq->requestq.next, struct ap_message, list); 246 + print_hex_dump_debug("apreq: ", DUMP_PREFIX_ADDRESS, 16, 1, 247 + ap_msg->msg, ap_msg->len, false); 251 248 status = __ap_send(qid, ap_msg->psmid, 252 249 ap_msg->msg, ap_msg->len, 253 250 ap_msg->flags & AP_MSG_FLAG_SPECIAL); ··· 453 446 case AP_BS_Q_USABLE: 454 447 /* association is through */ 455 448 aq->sm_state = AP_SM_STATE_IDLE; 456 - AP_DBF_DBG("%s queue 0x%02x.%04x associated with %u\n", 457 - __func__, AP_QID_CARD(aq->qid), 458 - AP_QID_QUEUE(aq->qid), aq->assoc_idx); 449 + pr_debug("%s queue 0x%02x.%04x associated with %u\n", 450 + __func__, AP_QID_CARD(aq->qid), 451 + AP_QID_QUEUE(aq->qid), aq->assoc_idx); 459 452 return AP_SM_WAIT_NONE; 460 453 case AP_BS_Q_USABLE_NO_SECURE_KEY: 461 454 /* association still pending */ ··· 697 690 698 691 status = ap_test_queue(aq->qid, 1, &hwinfo); 699 692 if (status.response_code > AP_RESPONSE_BUSY) { 700 - AP_DBF_DBG("%s RC 0x%02x on tapq(0x%02x.%04x)\n", 701 - __func__, status.response_code, 702 - AP_QID_CARD(aq->qid), AP_QID_QUEUE(aq->qid)); 693 + pr_debug("%s RC 0x%02x on tapq(0x%02x.%04x)\n", 694 + __func__, status.response_code, 695 + AP_QID_CARD(aq->qid), 
AP_QID_QUEUE(aq->qid)); 703 696 return -EIO; 704 697 } 705 698 ··· 853 846 854 847 status = ap_test_queue(aq->qid, 1, &hwinfo); 855 848 if (status.response_code > AP_RESPONSE_BUSY) { 856 - AP_DBF_DBG("%s RC 0x%02x on tapq(0x%02x.%04x)\n", 857 - __func__, status.response_code, 858 - AP_QID_CARD(aq->qid), AP_QID_QUEUE(aq->qid)); 849 + pr_debug("%s RC 0x%02x on tapq(0x%02x.%04x)\n", 850 + __func__, status.response_code, 851 + AP_QID_CARD(aq->qid), AP_QID_QUEUE(aq->qid)); 859 852 return -EIO; 860 853 } 861 854 ··· 981 974 982 975 status = ap_test_queue(aq->qid, 1, &hwinfo); 983 976 if (status.response_code > AP_RESPONSE_BUSY) { 984 - AP_DBF_DBG("%s RC 0x%02x on tapq(0x%02x.%04x)\n", 985 - __func__, status.response_code, 986 - AP_QID_CARD(aq->qid), AP_QID_QUEUE(aq->qid)); 977 + pr_debug("%s RC 0x%02x on tapq(0x%02x.%04x)\n", 978 + __func__, status.response_code, 979 + AP_QID_CARD(aq->qid), AP_QID_QUEUE(aq->qid)); 987 980 return -EIO; 988 981 } 989 982
+117 -109
drivers/s390/crypto/pkey_api.c
··· 42 42 * debug feature data and functions 43 43 */ 44 44 45 - static debug_info_t *debug_info; 45 + static debug_info_t *pkey_dbf_info; 46 46 47 - #define DEBUG_DBG(...) debug_sprintf_event(debug_info, 6, ##__VA_ARGS__) 48 - #define DEBUG_INFO(...) debug_sprintf_event(debug_info, 5, ##__VA_ARGS__) 49 - #define DEBUG_WARN(...) debug_sprintf_event(debug_info, 4, ##__VA_ARGS__) 50 - #define DEBUG_ERR(...) debug_sprintf_event(debug_info, 3, ##__VA_ARGS__) 47 + #define PKEY_DBF_INFO(...) debug_sprintf_event(pkey_dbf_info, 5, ##__VA_ARGS__) 48 + #define PKEY_DBF_WARN(...) debug_sprintf_event(pkey_dbf_info, 4, ##__VA_ARGS__) 49 + #define PKEY_DBF_ERR(...) debug_sprintf_event(pkey_dbf_info, 3, ##__VA_ARGS__) 51 50 52 51 static void __init pkey_debug_init(void) 53 52 { 54 53 /* 5 arguments per dbf entry (including the format string ptr) */ 55 - debug_info = debug_register("pkey", 1, 1, 5 * sizeof(long)); 56 - debug_register_view(debug_info, &debug_sprintf_view); 57 - debug_set_level(debug_info, 3); 54 + pkey_dbf_info = debug_register("pkey", 1, 1, 5 * sizeof(long)); 55 + debug_register_view(pkey_dbf_info, &debug_sprintf_view); 56 + debug_set_level(pkey_dbf_info, 3); 58 57 } 59 58 60 59 static void __exit pkey_debug_exit(void) 61 60 { 62 - debug_unregister(debug_info); 61 + debug_unregister(pkey_dbf_info); 63 62 } 64 63 65 64 /* inside view of a protected key token (only type 0x00 version 0x01) */ ··· 162 163 fc = CPACF_PCKMO_ENC_ECC_ED448_KEY; 163 164 break; 164 165 default: 165 - DEBUG_ERR("%s unknown/unsupported keytype %u\n", 166 - __func__, keytype); 166 + PKEY_DBF_ERR("%s unknown/unsupported keytype %u\n", 167 + __func__, keytype); 167 168 return -EINVAL; 168 169 } 169 170 170 171 if (*protkeylen < keysize + AES_WK_VP_SIZE) { 171 - DEBUG_ERR("%s prot key buffer size too small: %u < %d\n", 172 - __func__, *protkeylen, keysize + AES_WK_VP_SIZE); 172 + PKEY_DBF_ERR("%s prot key buffer size too small: %u < %d\n", 173 + __func__, *protkeylen, keysize + AES_WK_VP_SIZE); 
173 174 return -EINVAL; 174 175 } 175 176 ··· 181 182 } 182 183 /* check for the pckmo subfunction we need now */ 183 184 if (!cpacf_test_func(&pckmo_functions, fc)) { 184 - DEBUG_ERR("%s pckmo functions not available\n", __func__); 185 + PKEY_DBF_ERR("%s pckmo functions not available\n", __func__); 185 186 return -ENODEV; 186 187 } 187 188 ··· 243 244 } 244 245 245 246 if (rc) 246 - DEBUG_DBG("%s failed rc=%d\n", __func__, rc); 247 + pr_debug("%s failed rc=%d\n", __func__, rc); 247 248 248 249 return rc; 249 250 } ··· 282 283 out: 283 284 kfree(apqns); 284 285 if (rc) 285 - DEBUG_DBG("%s failed rc=%d\n", __func__, rc); 286 + pr_debug("%s failed rc=%d\n", __func__, rc); 286 287 return rc; 287 288 } 288 289 ··· 293 294 u8 *protkey, u32 *protkeylen, u32 *protkeytype) 294 295 { 295 296 u32 nr_apqns, *apqns = NULL; 297 + int i, j, rc = -ENODEV; 296 298 u16 card, dom; 297 - int i, rc; 298 299 299 300 zcrypt_wait_api_operational(); 300 301 301 - /* build a list of apqns suitable for this key */ 302 - rc = ep11_findcard2(&apqns, &nr_apqns, 0xFFFF, 0xFFFF, 303 - ZCRYPT_CEX7, 304 - ap_is_se_guest() ? EP11_API_V6 : EP11_API_V4, 305 - ep11_kb_wkvp(key, keylen)); 306 - if (rc) 307 - goto out; 302 + /* try two times in case of failure */ 303 + for (i = 0; i < 2 && rc; i++) { 308 304 309 - /* go through the list of apqns and try to derive an pkey */ 310 - for (rc = -ENODEV, i = 0; i < nr_apqns; i++) { 311 - card = apqns[i] >> 16; 312 - dom = apqns[i] & 0xFFFF; 313 - rc = ep11_kblob2protkey(card, dom, key, keylen, 314 - protkey, protkeylen, protkeytype); 315 - if (rc == 0) 316 - break; 305 + /* build a list of apqns suitable for this key */ 306 + rc = ep11_findcard2(&apqns, &nr_apqns, 0xFFFF, 0xFFFF, 307 + ZCRYPT_CEX7, 308 + ap_is_se_guest() ? 
EP11_API_V6 : EP11_API_V4, 309 + ep11_kb_wkvp(key, keylen)); 310 + if (rc) 311 + continue; /* retry findcard on failure */ 312 + 313 + /* go through the list of apqns and try to derive an pkey */ 314 + for (rc = -ENODEV, j = 0; j < nr_apqns && rc; j++) { 315 + card = apqns[j] >> 16; 316 + dom = apqns[j] & 0xFFFF; 317 + rc = ep11_kblob2protkey(card, dom, key, keylen, 318 + protkey, protkeylen, protkeytype); 319 + } 320 + 321 + kfree(apqns); 317 322 } 318 323 319 - out: 320 - kfree(apqns); 321 324 if (rc) 322 - DEBUG_DBG("%s failed rc=%d\n", __func__, rc); 325 + pr_debug("%s failed rc=%d\n", __func__, rc); 326 + 323 327 return rc; 324 328 } 325 329 ··· 338 336 int rc; 339 337 340 338 /* check the secure key for valid AES secure key */ 341 - rc = cca_check_secaeskeytoken(debug_info, 3, (u8 *)seckey, 0); 339 + rc = cca_check_secaeskeytoken(pkey_dbf_info, 3, (u8 *)seckey, 0); 342 340 if (rc) 343 341 goto out; 344 342 if (pattributes) ··· 353 351 354 352 if (rc > 0) { 355 353 /* key mkvp matches to old master key mkvp */ 356 - DEBUG_DBG("%s secure key has old mkvp\n", __func__); 354 + pr_debug("%s secure key has old mkvp\n", __func__); 357 355 if (pattributes) 358 356 *pattributes |= PKEY_VERIFY_ATTR_OLD_MKVP; 359 357 rc = 0; ··· 365 363 *pdomain = domain; 366 364 367 365 out: 368 - DEBUG_DBG("%s rc=%d\n", __func__, rc); 366 + pr_debug("%s rc=%d\n", __func__, rc); 369 367 return rc; 370 368 } 371 369 ··· 381 379 382 380 keysize = pkey_keytype_aes_to_size(keytype); 383 381 if (!keysize) { 384 - DEBUG_ERR("%s unknown/unsupported keytype %d\n", __func__, 385 - keytype); 382 + PKEY_DBF_ERR("%s unknown/unsupported keytype %d\n", __func__, 383 + keytype); 386 384 return -EINVAL; 387 385 } 388 386 ··· 430 428 fc = CPACF_KMC_PAES_256; 431 429 break; 432 430 default: 433 - DEBUG_ERR("%s unknown/unsupported keytype %u\n", __func__, 434 - protkeytype); 431 + PKEY_DBF_ERR("%s unknown/unsupported keytype %u\n", __func__, 432 + protkeytype); 435 433 return -EINVAL; 436 434 } 437 435 
if (protkeylen != pkeylen) { 438 - DEBUG_ERR("%s invalid protected key size %u for keytype %u\n", 439 - __func__, protkeylen, protkeytype); 436 + PKEY_DBF_ERR("%s invalid protected key size %u for keytype %u\n", 437 + __func__, protkeylen, protkeytype); 440 438 return -EINVAL; 441 439 } 442 440 ··· 448 446 k = cpacf_kmc(fc | CPACF_ENCRYPT, &param, null_msg, dest_buf, 449 447 sizeof(null_msg)); 450 448 if (k != sizeof(null_msg)) { 451 - DEBUG_ERR("%s protected key is not valid\n", __func__); 449 + PKEY_DBF_ERR("%s protected key is not valid\n", __func__); 452 450 return -EKEYREJECTED; 453 451 } 454 452 ··· 466 464 467 465 keysize = pkey_keytype_aes_to_size(t->keytype); 468 466 if (!keysize) { 469 - DEBUG_ERR("%s unknown/unsupported keytype %u\n", 470 - __func__, t->keytype); 467 + PKEY_DBF_ERR("%s unknown/unsupported keytype %u\n", 468 + __func__, t->keytype); 471 469 return -EINVAL; 472 470 } 473 471 if (t->len != keysize) { 474 - DEBUG_ERR("%s non clear key aes token: invalid key len %u\n", 475 - __func__, t->len); 472 + PKEY_DBF_ERR("%s non clear key aes token: invalid key len %u\n", 473 + __func__, t->len); 476 474 return -EINVAL; 477 475 } 478 476 ··· 507 505 goto out; 508 506 509 507 failure: 510 - DEBUG_ERR("%s unable to build protected key from clear", __func__); 508 + PKEY_DBF_ERR("%s unable to build protected key from clear", __func__); 511 509 512 510 out: 513 511 kfree(tmpbuf); ··· 538 536 keylen = 64; 539 537 break; 540 538 default: 541 - DEBUG_ERR("%s unknown/unsupported keytype %u\n", 542 - __func__, t->keytype); 539 + PKEY_DBF_ERR("%s unknown/unsupported keytype %u\n", 540 + __func__, t->keytype); 543 541 return -EINVAL; 544 542 } 545 543 546 544 if (t->len != keylen) { 547 - DEBUG_ERR("%s non clear key ecc token: invalid key len %u\n", 548 - __func__, t->len); 545 + PKEY_DBF_ERR("%s non clear key ecc token: invalid key len %u\n", 546 + __func__, t->len); 549 547 return -EINVAL; 550 548 } 551 549 ··· 553 551 rc = pkey_clr2protkey(t->keytype, 
t->clearkey, 554 552 protkey, protkeylen, protkeytype); 555 553 if (rc) { 556 - DEBUG_ERR("%s unable to build protected key from clear", 557 - __func__); 554 + PKEY_DBF_ERR("%s unable to build protected key from clear", 555 + __func__); 558 556 } 559 557 560 558 return rc; ··· 606 604 protkeylen, protkeytype); 607 605 break; 608 606 default: 609 - DEBUG_ERR("%s unknown/unsupported non cca clear key type %u\n", 610 - __func__, t->keytype); 607 + PKEY_DBF_ERR("%s unknown/unsupported non cca clear key type %u\n", 608 + __func__, t->keytype); 611 609 return -EINVAL; 612 610 } 613 611 break; 614 612 } 615 613 case TOKVER_EP11_AES: { 616 614 /* check ep11 key for exportable as protected key */ 617 - rc = ep11_check_aes_key(debug_info, 3, key, keylen, 1); 615 + rc = ep11_check_aes_key(pkey_dbf_info, 3, key, keylen, 1); 618 616 if (rc) 619 617 goto out; 620 618 rc = pkey_ep11key2pkey(key, keylen, ··· 623 621 } 624 622 case TOKVER_EP11_AES_WITH_HEADER: 625 623 /* check ep11 key with header for exportable as protected key */ 626 - rc = ep11_check_aes_key_with_hdr(debug_info, 3, key, keylen, 1); 624 + rc = ep11_check_aes_key_with_hdr(pkey_dbf_info, 625 + 3, key, keylen, 1); 627 626 if (rc) 628 627 goto out; 629 628 rc = pkey_ep11key2pkey(key, keylen, 630 629 protkey, protkeylen, protkeytype); 631 630 break; 632 631 default: 633 - DEBUG_ERR("%s unknown/unsupported non-CCA token version %d\n", 634 - __func__, hdr->version); 632 + PKEY_DBF_ERR("%s unknown/unsupported non-CCA token version %d\n", 633 + __func__, hdr->version); 635 634 } 636 635 637 636 out: ··· 657 654 return -EINVAL; 658 655 break; 659 656 default: 660 - DEBUG_ERR("%s unknown/unsupported CCA internal token version %d\n", 661 - __func__, hdr->version); 657 + PKEY_DBF_ERR("%s unknown/unsupported CCA internal token version %d\n", 658 + __func__, hdr->version); 662 659 return -EINVAL; 663 660 } 664 661 ··· 675 672 int rc; 676 673 677 674 if (keylen < sizeof(struct keytoken_header)) { 678 - DEBUG_ERR("%s invalid 
keylen %d\n", __func__, keylen); 675 + PKEY_DBF_ERR("%s invalid keylen %d\n", __func__, keylen); 679 676 return -EINVAL; 680 677 } 681 678 ··· 689 686 protkey, protkeylen, protkeytype); 690 687 break; 691 688 default: 692 - DEBUG_ERR("%s unknown/unsupported blob type %d\n", 693 - __func__, hdr->type); 689 + PKEY_DBF_ERR("%s unknown/unsupported blob type %d\n", 690 + __func__, hdr->type); 694 691 return -EINVAL; 695 692 } 696 693 697 - DEBUG_DBG("%s rc=%d\n", __func__, rc); 694 + pr_debug("%s rc=%d\n", __func__, rc); 698 695 return rc; 699 696 } 700 697 EXPORT_SYMBOL(pkey_keyblob2pkey); ··· 842 839 hdr->version == TOKVER_CCA_AES) { 843 840 struct secaeskeytoken *t = (struct secaeskeytoken *)key; 844 841 845 - rc = cca_check_secaeskeytoken(debug_info, 3, key, 0); 842 + rc = cca_check_secaeskeytoken(pkey_dbf_info, 3, key, 0); 846 843 if (rc) 847 844 goto out; 848 845 if (ktype) ··· 872 869 hdr->version == TOKVER_CCA_VLSC) { 873 870 struct cipherkeytoken *t = (struct cipherkeytoken *)key; 874 871 875 - rc = cca_check_secaescipherkey(debug_info, 3, key, 0, 1); 872 + rc = cca_check_secaescipherkey(pkey_dbf_info, 3, key, 0, 1); 876 873 if (rc) 877 874 goto out; 878 875 if (ktype) ··· 910 907 struct ep11keyblob *kb = (struct ep11keyblob *)key; 911 908 int api; 912 909 913 - rc = ep11_check_aes_key(debug_info, 3, key, keylen, 1); 910 + rc = ep11_check_aes_key(pkey_dbf_info, 3, key, keylen, 1); 914 911 if (rc) 915 912 goto out; 916 913 if (ktype) ··· 936 933 struct ep11kblob_header *kh = (struct ep11kblob_header *)key; 937 934 int api; 938 935 939 - rc = ep11_check_aes_key_with_hdr(debug_info, 3, 940 - key, keylen, 1); 936 + rc = ep11_check_aes_key_with_hdr(pkey_dbf_info, 937 + 3, key, keylen, 1); 941 938 if (rc) 942 939 goto out; 943 940 if (ktype) ··· 984 981 if (hdr->version == TOKVER_CCA_AES) { 985 982 if (keylen != sizeof(struct secaeskeytoken)) 986 983 return -EINVAL; 987 - if (cca_check_secaeskeytoken(debug_info, 3, key, 0)) 984 + if 
(cca_check_secaeskeytoken(pkey_dbf_info, 3, key, 0)) 988 985 return -EINVAL; 989 986 } else if (hdr->version == TOKVER_CCA_VLSC) { 990 987 if (keylen < hdr->len || keylen > MAXCCAVLSCTOKENSIZE) 991 988 return -EINVAL; 992 - if (cca_check_secaescipherkey(debug_info, 3, key, 0, 1)) 989 + if (cca_check_secaescipherkey(pkey_dbf_info, 990 + 3, key, 0, 1)) 993 991 return -EINVAL; 994 992 } else { 995 - DEBUG_ERR("%s unknown CCA internal token version %d\n", 996 - __func__, hdr->version); 993 + PKEY_DBF_ERR("%s unknown CCA internal token version %d\n", 994 + __func__, hdr->version); 997 995 return -EINVAL; 998 996 } 999 997 } else if (hdr->type == TOKTYPE_NON_CCA) { 1000 998 if (hdr->version == TOKVER_EP11_AES) { 1001 - if (ep11_check_aes_key(debug_info, 3, key, keylen, 1)) 999 + if (ep11_check_aes_key(pkey_dbf_info, 1000 + 3, key, keylen, 1)) 1002 1001 return -EINVAL; 1003 1002 } else if (hdr->version == TOKVER_EP11_AES_WITH_HEADER) { 1004 - if (ep11_check_aes_key_with_hdr(debug_info, 3, 1005 - key, keylen, 1)) 1003 + if (ep11_check_aes_key_with_hdr(pkey_dbf_info, 1004 + 3, key, keylen, 1)) 1006 1005 return -EINVAL; 1007 1006 } else { 1008 1007 return pkey_nonccatok2pkey(key, keylen, ··· 1012 1007 protkeytype); 1013 1008 } 1014 1009 } else { 1015 - DEBUG_ERR("%s unknown/unsupported blob type %d\n", 1016 - __func__, hdr->type); 1010 + PKEY_DBF_ERR("%s unknown/unsupported blob type %d\n", 1011 + __func__, hdr->type); 1017 1012 return -EINVAL; 1018 1013 } 1019 1014 ··· 1239 1234 hdr->version == TOKVER_EP11_AES_WITH_HEADER && 1240 1235 is_ep11_keyblob(key + sizeof(struct ep11kblob_header))) { 1241 1236 /* EP11 AES key blob with header */ 1242 - if (ep11_check_aes_key_with_hdr(debug_info, 3, key, keylen, 1)) 1237 + if (ep11_check_aes_key_with_hdr(pkey_dbf_info, 1238 + 3, key, keylen, 1)) 1243 1239 return -EINVAL; 1244 1240 } else if (hdr->type == TOKTYPE_NON_CCA && 1245 1241 hdr->version == TOKVER_EP11_ECC_WITH_HEADER && 1246 1242 is_ep11_keyblob(key + sizeof(struct 
ep11kblob_header))) {
1247 1243 /* EP11 ECC key blob with header */
1248 - if (ep11_check_ecc_key_with_hdr(debug_info, 3, key, keylen, 1))
1244 + if (ep11_check_ecc_key_with_hdr(pkey_dbf_info,
1245 + 3, key, keylen, 1))
1249 1246 return -EINVAL;
1250 1247 } else if (hdr->type == TOKTYPE_NON_CCA &&
1251 1248 hdr->version == TOKVER_EP11_AES &&
1252 1249 is_ep11_keyblob(key)) {
1253 1250 /* EP11 AES key blob with header in session field */
1254 - if (ep11_check_aes_key(debug_info, 3, key, keylen, 1))
1251 + if (ep11_check_aes_key(pkey_dbf_info, 3, key, keylen, 1))
1255 1252 return -EINVAL;
1256 1253 } else if (hdr->type == TOKTYPE_CCA_INTERNAL) {
1257 1254 if (hdr->version == TOKVER_CCA_AES) {
1258 1255 /* CCA AES data key */
1259 1256 if (keylen != sizeof(struct secaeskeytoken))
1260 1257 return -EINVAL;
1261 - if (cca_check_secaeskeytoken(debug_info, 3, key, 0))
1258 + if (cca_check_secaeskeytoken(pkey_dbf_info, 3, key, 0))
1262 1259 return -EINVAL;
1263 1260 } else if (hdr->version == TOKVER_CCA_VLSC) {
1264 1261 /* CCA AES cipher key */
1265 1262 if (keylen < hdr->len || keylen > MAXCCAVLSCTOKENSIZE)
1266 1263 return -EINVAL;
1267 - if (cca_check_secaescipherkey(debug_info, 3, key, 0, 1))
1264 + if (cca_check_secaescipherkey(pkey_dbf_info,
1265 + 3, key, 0, 1))
1268 1266 return -EINVAL;
1269 1267 } else {
1270 - DEBUG_ERR("%s unknown CCA internal token version %d\n",
1271 - __func__, hdr->version);
1268 + PKEY_DBF_ERR("%s unknown CCA internal token version %d\n",
1269 + __func__, hdr->version);
1272 1270 return -EINVAL;
1273 1271 }
1274 1272 } else if (hdr->type == TOKTYPE_CCA_INTERNAL_PKA) {
1275 1273 /* CCA ECC (private) key */
1276 1274 if (keylen < sizeof(struct eccprivkeytoken))
1277 1275 return -EINVAL;
1278 - if (cca_check_sececckeytoken(debug_info, 3, key, keylen, 1))
1276 + if (cca_check_sececckeytoken(pkey_dbf_info, 3, key, keylen, 1))
1279 1277 return -EINVAL;
1280 1278 } else if (hdr->type == TOKTYPE_NON_CCA) {
1281 1279 return pkey_nonccatok2pkey(key, keylen,
1282 1280 protkey, protkeylen, protkeytype);
1283 1281 } else {
1284 - DEBUG_ERR("%s unknown/unsupported blob type %d\n",
1285 - __func__, hdr->type);
1282 + PKEY_DBF_ERR("%s unknown/unsupported blob type %d\n",
1283 + __func__, hdr->type);
1286 1284 return -EINVAL;
1287 1285 }
1288 1286
···
1358 1350 return -EFAULT;
1359 1351 rc = cca_genseckey(kgs.cardnr, kgs.domain,
1360 1352 kgs.keytype, kgs.seckey.seckey);
1361 - DEBUG_DBG("%s cca_genseckey()=%d\n", __func__, rc);
1353 + pr_debug("%s cca_genseckey()=%d\n", __func__, rc);
1362 1354 if (rc)
1363 1355 break;
1364 1356 if (copy_to_user(ugs, &kgs, sizeof(kgs)))
···
1373 1365 return -EFAULT;
1374 1366 rc = cca_clr2seckey(kcs.cardnr, kcs.domain, kcs.keytype,
1375 1367 kcs.clrkey.clrkey, kcs.seckey.seckey);
1376 - DEBUG_DBG("%s cca_clr2seckey()=%d\n", __func__, rc);
1368 + pr_debug("%s cca_clr2seckey()=%d\n", __func__, rc);
1377 1369 if (rc)
1378 1370 break;
1379 1371 if (copy_to_user(ucs, &kcs, sizeof(kcs)))
···
1391 1383 rc = cca_sec2protkey(ksp.cardnr, ksp.domain,
1392 1384 ksp.seckey.seckey, ksp.protkey.protkey,
1393 1385 &ksp.protkey.len, &ksp.protkey.type);
1394 - DEBUG_DBG("%s cca_sec2protkey()=%d\n", __func__, rc);
1386 + pr_debug("%s cca_sec2protkey()=%d\n", __func__, rc);
1395 1387 if (rc)
1396 1388 break;
1397 1389 if (copy_to_user(usp, &ksp, sizeof(ksp)))
···
1408 1400 rc = pkey_clr2protkey(kcp.keytype, kcp.clrkey.clrkey,
1409 1401 kcp.protkey.protkey,
1410 1402 &kcp.protkey.len, &kcp.protkey.type);
1411 - DEBUG_DBG("%s pkey_clr2protkey()=%d\n", __func__, rc);
1403 + pr_debug("%s pkey_clr2protkey()=%d\n", __func__, rc);
1412 1404 if (rc)
1413 1405 break;
1414 1406 if (copy_to_user(ucp, &kcp, sizeof(kcp)))
···
1424 1416 return -EFAULT;
1425 1417 rc = cca_findcard(kfc.seckey.seckey,
1426 1418 &kfc.cardnr, &kfc.domain, 1);
1427 - DEBUG_DBG("%s cca_findcard()=%d\n", __func__, rc);
1419 + pr_debug("%s cca_findcard()=%d\n", __func__, rc);
1428 1420 if (rc < 0)
1429 1421 break;
1430 1422 if (copy_to_user(ufc, &kfc, sizeof(kfc)))
···
1440 1432 ksp.protkey.len = sizeof(ksp.protkey.protkey);
1441 1433 rc = pkey_skey2pkey(ksp.seckey.seckey, ksp.protkey.protkey,
1442 1434 &ksp.protkey.len, &ksp.protkey.type);
1443 - DEBUG_DBG("%s pkey_skey2pkey()=%d\n", __func__, rc);
1435 + pr_debug("%s pkey_skey2pkey()=%d\n", __func__, rc);
1444 1436 if (rc)
1445 1437 break;
1446 1438 if (copy_to_user(usp, &ksp, sizeof(ksp)))
···
1455 1447 return -EFAULT;
1456 1448 rc = pkey_verifykey(&kvk.seckey, &kvk.cardnr, &kvk.domain,
1457 1449 &kvk.keysize, &kvk.attributes);
1458 - DEBUG_DBG("%s pkey_verifykey()=%d\n", __func__, rc);
1450 + pr_debug("%s pkey_verifykey()=%d\n", __func__, rc);
1459 1451 if (rc)
1460 1452 break;
1461 1453 if (copy_to_user(uvk, &kvk, sizeof(kvk)))
···
1471 1463 kgp.protkey.len = sizeof(kgp.protkey.protkey);
1472 1464 rc = pkey_genprotkey(kgp.keytype, kgp.protkey.protkey,
1473 1465 &kgp.protkey.len, &kgp.protkey.type);
1474 - DEBUG_DBG("%s pkey_genprotkey()=%d\n", __func__, rc);
1466 + pr_debug("%s pkey_genprotkey()=%d\n", __func__, rc);
1475 1467 if (rc)
1476 1468 break;
1477 1469 if (copy_to_user(ugp, &kgp, sizeof(kgp)))
···
1486 1478 return -EFAULT;
1487 1479 rc = pkey_verifyprotkey(kvp.protkey.protkey,
1488 1480 kvp.protkey.len, kvp.protkey.type);
1489 - DEBUG_DBG("%s pkey_verifyprotkey()=%d\n", __func__, rc);
1481 + pr_debug("%s pkey_verifyprotkey()=%d\n", __func__, rc);
1490 1482 break;
1491 1483 }
1492 1484 case PKEY_KBLOB2PROTK: {
···
1502 1494 ktp.protkey.len = sizeof(ktp.protkey.protkey);
1503 1495 rc = pkey_keyblob2pkey(kkey, ktp.keylen, ktp.protkey.protkey,
1504 1496 &ktp.protkey.len, &ktp.protkey.type);
1505 - DEBUG_DBG("%s pkey_keyblob2pkey()=%d\n", __func__, rc);
1497 + pr_debug("%s pkey_keyblob2pkey()=%d\n", __func__, rc);
1506 1498 memzero_explicit(kkey, ktp.keylen);
1507 1499 kfree(kkey);
1508 1500 if (rc)
···
1531 1523 rc = pkey_genseckey2(apqns, kgs.apqn_entries,
1532 1524 kgs.type, kgs.size, kgs.keygenflags,
1533 1525 kkey, &klen);
1534 - DEBUG_DBG("%s pkey_genseckey2()=%d\n", __func__, rc);
1526 + pr_debug("%s pkey_genseckey2()=%d\n", __func__, rc);
1535 1527 kfree(apqns);
1536 1528 if (rc) {
1537 1529 kfree(kkey);
···
1573 1565 rc = pkey_clr2seckey2(apqns, kcs.apqn_entries,
1574 1566 kcs.type, kcs.size, kcs.keygenflags,
1575 1567 kcs.clrkey.clrkey, kkey, &klen);
1576 - DEBUG_DBG("%s pkey_clr2seckey2()=%d\n", __func__, rc);
1568 + pr_debug("%s pkey_clr2seckey2()=%d\n", __func__, rc);
1577 1569 kfree(apqns);
1578 1570 if (rc) {
1579 1571 kfree(kkey);
···
1609 1601 rc = pkey_verifykey2(kkey, kvk.keylen,
1610 1602 &kvk.cardnr, &kvk.domain,
1611 1603 &kvk.type, &kvk.size, &kvk.flags);
1612 - DEBUG_DBG("%s pkey_verifykey2()=%d\n", __func__, rc);
1604 + pr_debug("%s pkey_verifykey2()=%d\n", __func__, rc);
1613 1605 kfree(kkey);
1614 1606 if (rc)
1615 1607 break;
···
1638 1630 kkey, ktp.keylen,
1639 1631 ktp.protkey.protkey, &ktp.protkey.len,
1640 1632 &ktp.protkey.type);
1641 - DEBUG_DBG("%s pkey_keyblob2pkey2()=%d\n", __func__, rc);
1633 + pr_debug("%s pkey_keyblob2pkey2()=%d\n", __func__, rc);
1642 1634 kfree(apqns);
1643 1635 memzero_explicit(kkey, ktp.keylen);
1644 1636 kfree(kkey);
···
1672 1664 }
1673 1665 rc = pkey_apqns4key(kkey, kak.keylen, kak.flags,
1674 1666 apqns, &nr_apqns);
1675 - DEBUG_DBG("%s pkey_apqns4key()=%d\n", __func__, rc);
1667 + pr_debug("%s pkey_apqns4key()=%d\n", __func__, rc);
1676 1668 kfree(kkey);
1677 1669 if (rc && rc != -ENOSPC) {
1678 1670 kfree(apqns);
···
1715 1707 }
1716 1708 rc = pkey_apqns4keytype(kat.type, kat.cur_mkvp, kat.alt_mkvp,
1717 1709 kat.flags, apqns, &nr_apqns);
1718 - DEBUG_DBG("%s pkey_apqns4keytype()=%d\n", __func__, rc);
1710 + pr_debug("%s pkey_apqns4keytype()=%d\n", __func__, rc);
1719 1711 if (rc && rc != -ENOSPC) {
1720 1712 kfree(apqns);
1721 1713 break;
···
1765 1757 rc = pkey_keyblob2pkey3(apqns, ktp.apqn_entries,
1766 1758 kkey, ktp.keylen,
1767 1759 protkey, &protkeylen, &ktp.pkeytype);
1768 - DEBUG_DBG("%s pkey_keyblob2pkey3()=%d\n", __func__, rc);
1760 + pr_debug("%s pkey_keyblob2pkey3()=%d\n", __func__, rc);
1769 1761 kfree(apqns);
1770 1762 memzero_explicit(kkey, ktp.keylen);
1771 1763 kfree(kkey)
+1 -1
drivers/s390/crypto/vfio_ap_drv.c
···
60 60 kfree(matrix_dev);
61 61 }
62 62
63 - static struct bus_type matrix_bus = {
63 + static const struct bus_type matrix_bus = {
64 64 .name = "matrix",
65 65 };
66 66
+18 -17
drivers/s390/crypto/vfio_ap_ops.c
···
659 659 AP_DOMAINS);
660 660 }
661 661
662 + static bool _queue_passable(struct vfio_ap_queue *q)
663 + {
664 + if (!q)
665 + return false;
666 +
667 + switch (q->reset_status.response_code) {
668 + case AP_RESPONSE_NORMAL:
669 + case AP_RESPONSE_DECONFIGURED:
670 + case AP_RESPONSE_CHECKSTOPPED:
671 + return true;
672 + default:
673 + return false;
674 + }
675 + }
676 +
662 677 /*
663 678 * vfio_ap_mdev_filter_matrix - filter the APQNs assigned to the matrix mdev
664 679 * to ensure no queue devices are passed through to
···
702 687 unsigned long apid, apqi, apqn;
703 688 DECLARE_BITMAP(prev_shadow_apm, AP_DEVICES);
704 689 DECLARE_BITMAP(prev_shadow_aqm, AP_DOMAINS);
705 - struct vfio_ap_queue *q;
706 690
707 691 bitmap_copy(prev_shadow_apm, matrix_mdev->shadow_apcb.apm, AP_DEVICES);
708 692 bitmap_copy(prev_shadow_aqm, matrix_mdev->shadow_apcb.aqm, AP_DOMAINS);
···
730 716 * hardware device.
731 717 */
732 718 apqn = AP_MKQID(apid, apqi);
733 - q = vfio_ap_mdev_get_queue(matrix_mdev, apqn);
734 - if (!q || q->reset_status.response_code) {
719 + if (!_queue_passable(vfio_ap_mdev_get_queue(matrix_mdev, apqn))) {
735 720 clear_bit_inv(apid, matrix_mdev->shadow_apcb.apm);
736 721
737 722 /*
···
1704 1691 switch (status->response_code) {
1705 1692 case AP_RESPONSE_NORMAL:
1706 1693 case AP_RESPONSE_DECONFIGURED:
1694 + case AP_RESPONSE_CHECKSTOPPED:
1707 1695 return 0;
1708 1696 case AP_RESPONSE_RESET_IN_PROGRESS:
1709 1697 case AP_RESPONSE_BUSY:
···
1761 1747 memcpy(&q->reset_status, &status, sizeof(status));
1762 1748 continue;
1763 1749 }
1764 - /*
1765 - * When an AP adapter is deconfigured, the
1766 - * associated queues are reset, so let's set the
1767 - * status response code to 0 so the queue may be
1768 - * passed through (i.e., not filtered)
1769 - */
1770 - if (status.response_code == AP_RESPONSE_DECONFIGURED)
1771 - q->reset_status.response_code = 0;
1772 1750 if (q->saved_isc != VFIO_AP_ISC_INVALID)
1773 1751 vfio_ap_free_aqic_resources(q);
1774 1752 break;
···
1787 1781 queue_work(system_long_wq, &q->reset_work);
1788 1782 break;
1789 1783 case AP_RESPONSE_DECONFIGURED:
1790 - /*
1791 - * When an AP adapter is deconfigured, the associated
1792 - * queues are reset, so let's set the status response code to 0
1793 - * so the queue may be passed through (i.e., not filtered).
1794 - */
1795 - q->reset_status.response_code = 0;
1784 + case AP_RESPONSE_CHECKSTOPPED:
1796 1785 vfio_ap_free_aqic_resources(q);
1797 1786 break;
1798 1787 default:
+123 -103
drivers/s390/crypto/zcrypt_api.c
···
12 12 * Multiple device nodes: Harald Freudenberger <freude@linux.ibm.com>
13 13 */
14 14
15 + #define KMSG_COMPONENT "zcrypt"
16 + #define pr_fmt(fmt) KMSG_COMPONENT ": " fmt
17 +
15 18 #include <linux/module.h>
16 19 #include <linux/init.h>
17 20 #include <linux/interrupt.h>
···
60 57 LIST_HEAD(zcrypt_card_list);
61 58
62 59 static atomic_t zcrypt_open_count = ATOMIC_INIT(0);
63 - static atomic_t zcrypt_rescan_count = ATOMIC_INIT(0);
64 -
65 - atomic_t zcrypt_rescan_req = ATOMIC_INIT(0);
66 - EXPORT_SYMBOL(zcrypt_rescan_req);
67 60
68 61 static LIST_HEAD(zcrypt_ops_list);
···
68 69
69 70 /*
70 71 * Process a rescan of the transport layer.
71 - *
72 - * Returns 1, if the rescan has been processed, otherwise 0.
72 + * Runs a synchronous AP bus rescan.
73 + * Returns true if something has changed (for example the
74 + * bus scan has found and build up new devices) and it is
75 + * worth to do a retry. Otherwise false is returned meaning
76 + * no changes on the AP bus level.
73 77 */
74 - static inline int zcrypt_process_rescan(void)
78 + static inline bool zcrypt_process_rescan(void)
75 79 {
76 - if (atomic_read(&zcrypt_rescan_req)) {
77 - atomic_set(&zcrypt_rescan_req, 0);
78 - atomic_inc(&zcrypt_rescan_count);
79 - ap_bus_force_rescan();
80 - ZCRYPT_DBF_INFO("%s rescan count=%07d\n", __func__,
81 - atomic_inc_return(&zcrypt_rescan_count));
82 - return 1;
83 - }
84 - return 0;
80 + return ap_bus_force_rescan();
85 81 }
86 82
87 83 void zcrypt_msgtype_register(struct zcrypt_ops *zops)
···
709 715 spin_unlock(&zcrypt_list_lock);
710 716
711 717 if (!pref_zq) {
712 - ZCRYPT_DBF_DBG("%s no matching queue found => ENODEV\n",
713 - __func__);
718 + pr_debug("%s no matching queue found => ENODEV\n", __func__);
714 719 rc = -ENODEV;
715 720 goto out;
716 721 }
···
813 820 spin_unlock(&zcrypt_list_lock);
814 821
815 822 if (!pref_zq) {
816 - ZCRYPT_DBF_DBG("%s no matching queue found => ENODEV\n",
817 - __func__);
823 + pr_debug("%s no matching queue found => ENODEV\n", __func__);
818 824 rc = -ENODEV;
819 825 goto out;
820 826 }
···
857 865 rc = prep_cca_ap_msg(userspace, xcrb, &ap_msg, &func_code, &domain);
858 866 if (rc)
859 867 goto out;
868 + print_hex_dump_debug("ccareq: ", DUMP_PREFIX_ADDRESS, 16, 1,
869 + ap_msg.msg, ap_msg.len, false);
860 870
861 871 tdom = *domain;
862 872 if (perms != &ap_perms && tdom < AP_DOMAINS) {
···
934 940 spin_unlock(&zcrypt_list_lock);
935 941
936 942 if (!pref_zq) {
937 - ZCRYPT_DBF_DBG("%s no match for address %02x.%04x => ENODEV\n",
938 - __func__, xcrb->user_defined, *domain);
943 + pr_debug("%s no match for address %02x.%04x => ENODEV\n",
944 + __func__, xcrb->user_defined, *domain);
939 945 rc = -ENODEV;
940 946 goto out;
941 947 }
···
946 952 *domain = AP_QID_QUEUE(qid);
947 953
948 954 rc = pref_zq->ops->send_cprb(userspace, pref_zq, xcrb, &ap_msg);
955 + if (!rc) {
956 + print_hex_dump_debug("ccarpl: ", DUMP_PREFIX_ADDRESS, 16, 1,
957 + ap_msg.msg, ap_msg.len, false);
958 + }
949 959
950 960 spin_lock(&zcrypt_list_lock);
951 961 zcrypt_drop_queue(pref_zc, pref_zq, mod, wgt);
···
968 970
969 971 long zcrypt_send_cprb(struct ica_xcRB *xcrb)
970 972 {
971 - return _zcrypt_send_cprb(false, &ap_perms, NULL, xcrb);
973 + struct zcrypt_track tr;
974 + int rc;
975 +
976 + memset(&tr, 0, sizeof(tr));
977 +
978 + do {
979 + rc = _zcrypt_send_cprb(false, &ap_perms, &tr, xcrb);
980 + } while (rc == -EAGAIN && ++tr.again_counter < TRACK_AGAIN_MAX);
981 +
982 + /* on ENODEV failure: retry once again after a requested rescan */
983 + if (rc == -ENODEV && zcrypt_process_rescan())
984 + do {
985 + rc = _zcrypt_send_cprb(false, &ap_perms, &tr, xcrb);
986 + } while (rc == -EAGAIN && ++tr.again_counter < TRACK_AGAIN_MAX);
987 + if (rc == -EAGAIN && tr.again_counter >= TRACK_AGAIN_MAX)
988 + rc = -EIO;
989 + if (rc)
990 + pr_debug("%s rc=%d\n", __func__, rc);
991 +
992 + return rc;
972 993 }
973 994 EXPORT_SYMBOL(zcrypt_send_cprb);
974 995
···
1062 1045 rc = prep_ep11_ap_msg(userspace, xcrb, &ap_msg, &func_code, &domain);
1063 1046 if (rc)
1064 1047 goto out_free;
1048 + print_hex_dump_debug("ep11req: ", DUMP_PREFIX_ADDRESS, 16, 1,
1049 + ap_msg.msg, ap_msg.len, false);
1065 1050
1066 1051 if (perms != &ap_perms && domain < AUTOSEL_DOM) {
1067 1052 if (ap_msg.flags & AP_MSG_FLAG_ADMIN) {
···
1132 1113
1133 1114 if (!pref_zq) {
1134 1115 if (targets && target_num == 1) {
1135 - ZCRYPT_DBF_DBG("%s no match for address %02x.%04x => ENODEV\n",
1136 - __func__, (int)targets->ap_id,
1137 - (int)targets->dom_id);
1116 + pr_debug("%s no match for address %02x.%04x => ENODEV\n",
1117 + __func__, (int)targets->ap_id,
1118 + (int)targets->dom_id);
1138 1119 } else if (targets) {
1139 - ZCRYPT_DBF_DBG("%s no match for %d target addrs => ENODEV\n",
1140 - __func__, (int)target_num);
1120 + pr_debug("%s no match for %d target addrs => ENODEV\n",
1121 + __func__, (int)target_num);
1141 1122 } else {
1142 - ZCRYPT_DBF_DBG("%s no match for address ff.ffff => ENODEV\n",
1143 - __func__);
1123 + pr_debug("%s no match for address ff.ffff => ENODEV\n",
1124 + __func__);
1144 1125 }
1145 1126 rc = -ENODEV;
1146 1127 goto out_free;
···
1148 1129
1149 1130 qid = pref_zq->queue->qid;
1150 1131 rc = pref_zq->ops->send_ep11_cprb(userspace, pref_zq, xcrb, &ap_msg);
1132 + if (!rc) {
1133 + print_hex_dump_debug("ep11rpl: ", DUMP_PREFIX_ADDRESS, 16, 1,
1134 + ap_msg.msg, ap_msg.len, false);
1135 + }
1151 1136
1152 1137 spin_lock(&zcrypt_list_lock);
1153 1138 zcrypt_drop_queue(pref_zc, pref_zq, mod, wgt);
···
1172 1149
1173 1150 long zcrypt_send_ep11_cprb(struct ep11_urb *xcrb)
1174 1151 {
1175 - return _zcrypt_send_ep11_cprb(false, &ap_perms, NULL, xcrb);
1152 + struct zcrypt_track tr;
1153 + int rc;
1154 +
1155 + memset(&tr, 0, sizeof(tr));
1156 +
1157 + do {
1158 + rc = _zcrypt_send_ep11_cprb(false, &ap_perms, &tr, xcrb);
1159 + } while (rc == -EAGAIN && ++tr.again_counter < TRACK_AGAIN_MAX);
1160 +
1161 + /* on ENODEV failure: retry once again after a requested rescan */
1162 + if (rc == -ENODEV && zcrypt_process_rescan())
1163 + do {
1164 + rc = _zcrypt_send_ep11_cprb(false, &ap_perms, &tr, xcrb);
1165 + } while (rc == -EAGAIN && ++tr.again_counter < TRACK_AGAIN_MAX);
1166 + if (rc == -EAGAIN && tr.again_counter >= TRACK_AGAIN_MAX)
1167 + rc = -EIO;
1168 + if (rc)
1169 + pr_debug("%s rc=%d\n", __func__, rc);
1170 +
1171 + return rc;
1176 1172 }
1177 1173 EXPORT_SYMBOL(zcrypt_send_ep11_cprb);
1178 1174
···
1241 1199 spin_unlock(&zcrypt_list_lock);
1242 1200
1243 1201 if (!pref_zq) {
1244 - ZCRYPT_DBF_DBG("%s no matching queue found => ENODEV\n",
1245 - __func__);
1202 + pr_debug("%s no matching queue found => ENODEV\n", __func__);
1246 1203 rc = -ENODEV;
1247 1204 goto out;
1248 1205 }
···
1472 1431
1473 1432 do {
1474 1433 rc = zcrypt_rsa_modexpo(perms, &tr, &mex);
1475 - if (rc == -EAGAIN)
1476 - tr.again_counter++;
1477 - } while (rc == -EAGAIN && tr.again_counter < TRACK_AGAIN_MAX);
1478 - /* on failure: retry once again after a requested rescan */
1479 - if ((rc == -ENODEV) && (zcrypt_process_rescan()))
1434 + } while (rc == -EAGAIN && ++tr.again_counter < TRACK_AGAIN_MAX);
1435 +
1436 + /* on ENODEV failure: retry once again after a requested rescan */
1437 + if (rc == -ENODEV && zcrypt_process_rescan())
1480 1438 do {
1481 1439 rc = zcrypt_rsa_modexpo(perms, &tr, &mex);
1482 - if (rc == -EAGAIN)
1483 - tr.again_counter++;
1484 - } while (rc == -EAGAIN && tr.again_counter < TRACK_AGAIN_MAX);
1440 + } while (rc == -EAGAIN && ++tr.again_counter < TRACK_AGAIN_MAX);
1485 1441 if (rc == -EAGAIN && tr.again_counter >= TRACK_AGAIN_MAX)
1486 1442 rc = -EIO;
1487 1443 if (rc) {
1488 - ZCRYPT_DBF_DBG("ioctl ICARSAMODEXPO rc=%d\n", rc);
1444 + pr_debug("ioctl ICARSAMODEXPO rc=%d\n", rc);
1489 1445 return rc;
1490 1446 }
1491 1447 return put_user(mex.outputdatalength, &umex->outputdatalength);
···
1501 1463
1502 1464 do {
1503 1465 rc = zcrypt_rsa_crt(perms, &tr, &crt);
1504 - if (rc == -EAGAIN)
1505 - tr.again_counter++;
1506 - } while (rc == -EAGAIN && tr.again_counter < TRACK_AGAIN_MAX);
1507 - /* on failure: retry once again after a requested rescan */
1508 - if ((rc == -ENODEV) && (zcrypt_process_rescan()))
1466 + } while (rc == -EAGAIN && ++tr.again_counter < TRACK_AGAIN_MAX);
1467 +
1468 + /* on ENODEV failure: retry once again after a requested rescan */
1469 + if (rc == -ENODEV && zcrypt_process_rescan())
1509 1470 do {
1510 1471 rc = zcrypt_rsa_crt(perms, &tr, &crt);
1511 - if (rc == -EAGAIN)
1512 - tr.again_counter++;
1513 - } while (rc == -EAGAIN && tr.again_counter < TRACK_AGAIN_MAX);
1472 + } while (rc == -EAGAIN && ++tr.again_counter < TRACK_AGAIN_MAX);
1514 1473 if (rc == -EAGAIN && tr.again_counter >= TRACK_AGAIN_MAX)
1515 1474 rc = -EIO;
1516 1475 if (rc) {
1517 - ZCRYPT_DBF_DBG("ioctl ICARSACRT rc=%d\n", rc);
1476 + pr_debug("ioctl ICARSACRT rc=%d\n", rc);
1518 1477 return rc;
1519 1478 }
1520 1479 return put_user(crt.outputdatalength, &ucrt->outputdatalength);
···
1530 1495
1531 1496 do {
1532 1497 rc = _zcrypt_send_cprb(true, perms, &tr, &xcrb);
1533 - if (rc == -EAGAIN)
1534 - tr.again_counter++;
1535 - } while (rc == -EAGAIN && tr.again_counter < TRACK_AGAIN_MAX);
1536 - /* on failure: retry once again after a requested rescan */
1537 - if ((rc == -ENODEV) && (zcrypt_process_rescan()))
1498 + } while (rc == -EAGAIN && ++tr.again_counter < TRACK_AGAIN_MAX);
1499 +
1500 + /* on ENODEV failure: retry once again after a requested rescan */
1501 + if (rc == -ENODEV && zcrypt_process_rescan())
1538 1502 do {
1539 1503 rc = _zcrypt_send_cprb(true, perms, &tr, &xcrb);
1540 - if (rc == -EAGAIN)
1541 - tr.again_counter++;
1542 - } while (rc == -EAGAIN && tr.again_counter < TRACK_AGAIN_MAX);
1504 + } while (rc == -EAGAIN && ++tr.again_counter < TRACK_AGAIN_MAX);
1543 1505 if (rc == -EAGAIN && tr.again_counter >= TRACK_AGAIN_MAX)
1544 1506 rc = -EIO;
1545 1507 if (rc)
1546 - ZCRYPT_DBF_DBG("ioctl ZSENDCPRB rc=%d status=0x%x\n",
1547 - rc, xcrb.status);
1508 + pr_debug("ioctl ZSENDCPRB rc=%d status=0x%x\n",
1509 + rc, xcrb.status);
1548 1510 if (copy_to_user(uxcrb, &xcrb, sizeof(xcrb)))
1549 1511 return -EFAULT;
1550 1512 return rc;
···
1560 1528
1561 1529 do {
1562 1530 rc = _zcrypt_send_ep11_cprb(true, perms, &tr, &xcrb);
1563 - if (rc == -EAGAIN)
1564 - tr.again_counter++;
1565 - } while (rc == -EAGAIN && tr.again_counter < TRACK_AGAIN_MAX);
1566 - /* on failure: retry once again after a requested rescan */
1567 - if ((rc == -ENODEV) && (zcrypt_process_rescan()))
1531 + } while (rc == -EAGAIN && ++tr.again_counter < TRACK_AGAIN_MAX);
1532 +
1533 + /* on ENODEV failure: retry once again after a requested rescan */
1534 + if (rc == -ENODEV && zcrypt_process_rescan())
1568 1535 do {
1569 1536 rc = _zcrypt_send_ep11_cprb(true, perms, &tr, &xcrb);
1570 - if (rc == -EAGAIN)
1571 - tr.again_counter++;
1572 - } while (rc == -EAGAIN && tr.again_counter < TRACK_AGAIN_MAX);
1537 + } while (rc == -EAGAIN && ++tr.again_counter < TRACK_AGAIN_MAX);
1573 1538 if (rc == -EAGAIN && tr.again_counter >= TRACK_AGAIN_MAX)
1574 1539 rc = -EIO;
1575 1540 if (rc)
1576 - ZCRYPT_DBF_DBG("ioctl ZSENDEP11CPRB rc=%d\n", rc);
1541 + pr_debug("ioctl ZSENDEP11CPRB rc=%d\n", rc);
1577 1542 if (copy_to_user(uxcrb, &xcrb, sizeof(xcrb)))
1578 1543 return -EFAULT;
1579 1544 return rc;
···
1699 1670 }
1700 1671 /* unknown ioctl number */
1701 1672 default:
1702 - ZCRYPT_DBF_DBG("unknown ioctl 0x%08x\n", cmd);
1673 + pr_debug("unknown ioctl 0x%08x\n", cmd);
1703 1674 return -ENOIOCTLCMD;
1704 1675 }
1705 1676 }
···
1737 1708 mex64.n_modulus = compat_ptr(mex32.n_modulus);
1738 1709 do {
1739 1710 rc = zcrypt_rsa_modexpo(perms, &tr, &mex64);
1740 - if (rc == -EAGAIN)
1741 - tr.again_counter++;
1742 - } while (rc == -EAGAIN && tr.again_counter < TRACK_AGAIN_MAX);
1743 - /* on failure: retry once again after a requested rescan */
1744 - if ((rc == -ENODEV) && (zcrypt_process_rescan()))
1711 + } while (rc == -EAGAIN && ++tr.again_counter < TRACK_AGAIN_MAX);
1712 +
1713 + /* on ENODEV failure: retry once again after a requested rescan */
1714 + if (rc == -ENODEV && zcrypt_process_rescan())
1745 1715 do {
1746 1716 rc = zcrypt_rsa_modexpo(perms, &tr, &mex64);
1747 - if (rc == -EAGAIN)
1748 - tr.again_counter++;
1749 - } while (rc == -EAGAIN && tr.again_counter < TRACK_AGAIN_MAX);
1717 + } while (rc == -EAGAIN && ++tr.again_counter < TRACK_AGAIN_MAX);
1750 1718 if (rc == -EAGAIN && tr.again_counter >= TRACK_AGAIN_MAX)
1751 1719 rc = -EIO;
1752 1720 if (rc)
···
1787 1761 crt64.u_mult_inv = compat_ptr(crt32.u_mult_inv);
1788 1762 do {
1789 1763 rc = zcrypt_rsa_crt(perms, &tr, &crt64);
1790 - if (rc == -EAGAIN)
1791 - tr.again_counter++;
1792 - } while (rc == -EAGAIN && tr.again_counter < TRACK_AGAIN_MAX);
1793 - /* on failure: retry once again after a requested rescan */
1794 - if ((rc == -ENODEV) && (zcrypt_process_rescan()))
1764 + } while (rc == -EAGAIN && ++tr.again_counter < TRACK_AGAIN_MAX);
1765 +
1766 + /* on ENODEV failure: retry once again after a requested rescan */
1767 + if (rc == -ENODEV && zcrypt_process_rescan())
1795 1768 do {
1796 1769 rc = zcrypt_rsa_crt(perms, &tr, &crt64);
1797 - if (rc == -EAGAIN)
1798 - tr.again_counter++;
1799 - } while (rc == -EAGAIN && tr.again_counter < TRACK_AGAIN_MAX);
1770 + } while (rc == -EAGAIN && ++tr.again_counter < TRACK_AGAIN_MAX);
1800 1771 if (rc == -EAGAIN && tr.again_counter >= TRACK_AGAIN_MAX)
1801 1772 rc = -EIO;
1802 1773 if (rc)
···
1856 1833 xcrb64.status = xcrb32.status;
1857 1834 do {
1858 1835 rc = _zcrypt_send_cprb(true, perms, &tr, &xcrb64);
1859 - if (rc == -EAGAIN)
1860 - tr.again_counter++;
1861 - } while (rc == -EAGAIN && tr.again_counter < TRACK_AGAIN_MAX);
1862 - /* on failure: retry once again after a requested rescan */
1863 - if ((rc == -ENODEV) && (zcrypt_process_rescan()))
1836 + } while (rc == -EAGAIN && ++tr.again_counter < TRACK_AGAIN_MAX);
1837 +
1838 + /* on ENODEV failure: retry once again after a requested rescan */
1839 + if (rc == -ENODEV && zcrypt_process_rescan())
1864 1840 do {
1865 1841 rc = _zcrypt_send_cprb(true, perms, &tr, &xcrb64);
1866 - if (rc == -EAGAIN)
1867 - tr.again_counter++;
1868 - } while (rc == -EAGAIN && tr.again_counter < TRACK_AGAIN_MAX);
1842 + } while (rc == -EAGAIN && ++tr.again_counter < TRACK_AGAIN_MAX);
1869 1843 if (rc == -EAGAIN && tr.again_counter >= TRACK_AGAIN_MAX)
1870 1844 rc = -EIO;
1871 1845 xcrb32.reply_control_blk_length = xcrb64.reply_control_blk_length;
···
1934 1914 */
1935 1915 if (zcrypt_rng_buffer_index == 0) {
1936 1916 rc = zcrypt_rng((char *)zcrypt_rng_buffer);
1937 - /* on failure: retry once again after a requested rescan */
1938 - if ((rc == -ENODEV) && (zcrypt_process_rescan()))
1917 + /* on ENODEV failure: retry once again after an AP bus rescan */
1918 + if (rc == -ENODEV && zcrypt_process_rescan())
1939 1919 rc = zcrypt_rng((char *)zcrypt_rng_buffer);
1940 1920 if (rc < 0)
1941 1921 return -EIO;
···
1997 1977 * an asynchronous job. This function waits until these initial jobs
1998 1978 * are done and so the zcrypt api should be ready to serve crypto
1999 1979 * requests - if there are resources available. The function uses an
2000 - * internal timeout of 60s. The very first caller will either wait for
1980 + * internal timeout of 30s. The very first caller will either wait for
2001 1981 * ap bus bindings complete or the timeout happens. This state will be
2002 1982 * remembered for further callers which will only be blocked until a
2003 1983 * decision is made (timeout or bindings complete).
···
2016 1996 switch (zcrypt_wait_api_state) {
2017 1997 case 0:
2018 1998 /* initial state, invoke wait for the ap bus complete */
2019 - rc = ap_wait_init_apqn_bindings_complete(
2020 - msecs_to_jiffies(60 * 1000));
1999 + rc = ap_wait_apqn_bindings_complete(
2000 + msecs_to_jiffies(ZCRYPT_WAIT_BINDINGS_COMPLETE_MS));
2021 2001 switch (rc) {
2022 2002 case 0:
2023 2003 /* ap bus bindings are complete */
···
2034 2014 break;
2035 2015 default:
2036 2016 /* other failure */
2037 - ZCRYPT_DBF_DBG("%s ap_wait_init_apqn_bindings_complete()=%d\n",
2038 - __func__, rc);
2017 + pr_debug("%s ap_wait_init_apqn_bindings_complete()=%d\n",
2018 + __func__, rc);
2039 2019 break;
2040 2020 }
2041 2021 break;
···
2058 2038 int __init zcrypt_debug_init(void)
2059 2039 {
2060 2040 zcrypt_dbf_info = debug_register("zcrypt", 2, 1,
2061 - DBF_MAX_SPRINTF_ARGS * sizeof(long));
2041 + ZCRYPT_DBF_MAX_SPRINTF_ARGS * sizeof(long));
2062 2042 debug_register_view(zcrypt_dbf_info, &debug_sprintf_view);
2063 2043 debug_set_level(zcrypt_dbf_info, DBF_ERR);
2064 2044
+9
drivers/s390/crypto/zcrypt_api.h
···
38 38 */
39 39 #define ZCRYPT_RNG_BUFFER_SIZE 4096
40 40
41 + /**
42 + * The zcrypt_wait_api_operational() function waits this
43 + * amount in milliseconds for ap_wait_aqpn_bindings_complete().
44 + * Also on a cprb send failure with ENODEV the send functions
45 + * trigger an ap bus rescan and wait this time in milliseconds
46 + * for ap_wait_aqpn_bindings_complete() before resending.
47 + */
48 + #define ZCRYPT_WAIT_BINDINGS_COMPLETE_MS 30000
49 +
41 50 /*
42 51 * Identifier for Crypto Request Performance Index
43 52 */
+96 -118
drivers/s390/crypto/zcrypt_ccamisc.c
···
23 23 #include "zcrypt_msgtype6.h"
24 24 #include "zcrypt_ccamisc.h"
25 25
26 - #define DEBUG_DBG(...) ZCRYPT_DBF(DBF_DEBUG, ##__VA_ARGS__)
27 - #define DEBUG_INFO(...) ZCRYPT_DBF(DBF_INFO, ##__VA_ARGS__)
28 - #define DEBUG_WARN(...) ZCRYPT_DBF(DBF_WARN, ##__VA_ARGS__)
29 - #define DEBUG_ERR(...) ZCRYPT_DBF(DBF_ERR, ##__VA_ARGS__)
30 -
31 26 /* Size of parameter block used for all cca requests/replies */
32 27 #define PARMBSIZE 512
33 28
···
362 367 memcpy(preqparm->lv1.key_length, "KEYLN32 ", 8);
363 368 break;
364 369 default:
365 - DEBUG_ERR("%s unknown/unsupported keybitsize %d\n",
366 - __func__, keybitsize);
370 + ZCRYPT_DBF_ERR("%s unknown/unsupported keybitsize %d\n",
371 + __func__, keybitsize);
367 372 rc = -EINVAL;
368 373 goto out;
369 374 }
···
381 386 /* forward xcrb with request CPRB and reply CPRB to zcrypt dd */
382 387 rc = zcrypt_send_cprb(&xcrb);
383 388 if (rc) {
384 - DEBUG_ERR("%s zcrypt_send_cprb (cardnr=%d domain=%d) failed, errno %d\n",
385 - __func__, (int)cardnr, (int)domain, rc);
389 + ZCRYPT_DBF_ERR("%s zcrypt_send_cprb (cardnr=%d domain=%d) failed, errno %d\n",
390 + __func__, (int)cardnr, (int)domain, rc);
386 391 goto out;
387 392 }
388 393
389 394 /* check response returncode and reasoncode */
390 395 if (prepcblk->ccp_rtcode != 0) {
391 - DEBUG_ERR("%s secure key generate failure, card response %d/%d\n",
392 - __func__,
396 + ZCRYPT_DBF_ERR("%s secure key generate failure, card response %d/%d\n",
397 + __func__,
393 398 (int)prepcblk->ccp_rtcode,
394 399 (int)prepcblk->ccp_rscode);
395 400 rc = -EIO;
···
406 411 - sizeof(prepparm->lv3.keyblock.toklen)
407 412 - sizeof(prepparm->lv3.keyblock.tokattr);
408 413 if (seckeysize != SECKEYBLOBSIZE) {
409 - DEBUG_ERR("%s secure token size mismatch %d != %d bytes\n",
410 - __func__, seckeysize, SECKEYBLOBSIZE);
414 + ZCRYPT_DBF_ERR("%s secure token size mismatch %d != %d bytes\n",
415 + __func__, seckeysize, SECKEYBLOBSIZE);
411 416 rc = -EIO;
412 417 goto out;
413 418 }
···
500 505 keysize = 32;
501 506 break;
502 507 default:
503 - DEBUG_ERR("%s unknown/unsupported keybitsize %d\n",
504 - __func__, keybitsize);
508 + ZCRYPT_DBF_ERR("%s unknown/unsupported keybitsize %d\n",
509 + __func__, keybitsize);
505 510 rc = -EINVAL;
506 511 goto out;
507 512 }
···
519 524 /* forward xcrb with request CPRB and reply CPRB to zcrypt dd */
520 525 rc = zcrypt_send_cprb(&xcrb);
521 526 if (rc) {
522 - DEBUG_ERR("%s zcrypt_send_cprb (cardnr=%d domain=%d) failed, rc=%d\n",
523 - __func__, (int)cardnr, (int)domain, rc);
527 + ZCRYPT_DBF_ERR("%s zcrypt_send_cprb (cardnr=%d domain=%d) failed, rc=%d\n",
528 + __func__, (int)cardnr, (int)domain, rc);
524 529 goto out;
525 530 }
526 531
527 532 /* check response returncode and reasoncode */
528 533 if (prepcblk->ccp_rtcode != 0) {
529 - DEBUG_ERR("%s clear key import failure, card response %d/%d\n",
530 - __func__,
531 - (int)prepcblk->ccp_rtcode,
532 - (int)prepcblk->ccp_rscode);
534 + ZCRYPT_DBF_ERR("%s clear key import failure, card response %d/%d\n",
535 + __func__,
536 + (int)prepcblk->ccp_rtcode,
537 + (int)prepcblk->ccp_rscode);
533 538 rc = -EIO;
534 539 goto out;
535 540 }
···
544 549 - sizeof(prepparm->lv3.keyblock.toklen)
545 550 - sizeof(prepparm->lv3.keyblock.tokattr);
546 551 if (seckeysize != SECKEYBLOBSIZE) {
547 - DEBUG_ERR("%s secure token size mismatch %d != %d bytes\n",
548 - __func__, seckeysize, SECKEYBLOBSIZE);
552 + ZCRYPT_DBF_ERR("%s secure token size mismatch %d != %d bytes\n",
553 + __func__, seckeysize, SECKEYBLOBSIZE);
549 554 rc = -EIO;
550 555 goto out;
551 556 }
···
646 651 /* forward xcrb with request CPRB and reply CPRB to zcrypt dd */
647 652 rc = zcrypt_send_cprb(&xcrb);
648 653 if (rc) {
649 - DEBUG_ERR("%s zcrypt_send_cprb (cardnr=%d domain=%d) failed, rc=%d\n",
650 - __func__, (int)cardnr, (int)domain, rc);
654 + ZCRYPT_DBF_ERR("%s zcrypt_send_cprb (cardnr=%d domain=%d) failed, rc=%d\n",
655 + __func__, (int)cardnr, (int)domain, rc);
651 656 goto out;
652 657 }
653 658
654 659 /* check response returncode and reasoncode */
655 660 if (prepcblk->ccp_rtcode != 0) {
656 - DEBUG_ERR("%s unwrap secure key failure, card response %d/%d\n",
657 - __func__,
658 - (int)prepcblk->ccp_rtcode,
659 - (int)prepcblk->ccp_rscode);
661 + ZCRYPT_DBF_ERR("%s unwrap secure key failure, card response %d/%d\n",
662 + __func__,
663 + (int)prepcblk->ccp_rtcode,
664 + (int)prepcblk->ccp_rscode);
660 665 if (prepcblk->ccp_rtcode == 8 && prepcblk->ccp_rscode == 2290)
661 666 rc = -EAGAIN;
662 667 else
···
664 669 goto out;
665 670 }
666 671 if (prepcblk->ccp_rscode != 0) {
667 - DEBUG_WARN("%s unwrap secure key warning, card response %d/%d\n",
668 - __func__,
669 - (int)prepcblk->ccp_rtcode,
670 - (int)prepcblk->ccp_rscode);
672 + ZCRYPT_DBF_WARN("%s unwrap secure key warning, card response %d/%d\n",
673 + __func__,
674 + (int)prepcblk->ccp_rtcode,
675 + (int)prepcblk->ccp_rscode);
671 676 }
672 677
673 678 /* process response cprb param block */
···
678 683 /* check the returned keyblock */
679 684 if (prepparm->lv3.ckb.version != 0x01 &&
680 685 prepparm->lv3.ckb.version != 0x02) {
681 - DEBUG_ERR("%s reply param keyblock version mismatch 0x%02x\n",
682 - __func__, (int)prepparm->lv3.ckb.version);
686 + ZCRYPT_DBF_ERR("%s reply param keyblock version mismatch 0x%02x\n",
687 + __func__, (int)prepparm->lv3.ckb.version);
683 688 rc = -EIO;
684 689 goto out;
685 690 }
···
702 707 *protkeytype = PKEY_KEYTYPE_AES_256;
703 708 break;
704 709 default:
705 - DEBUG_ERR("%s unknown/unsupported keylen %d\n",
706 - __func__, prepparm->lv3.ckb.len);
710 + ZCRYPT_DBF_ERR("%s unknown/unsupported keylen %d\n",
711 + __func__, prepparm->lv3.ckb.len);
707 712 rc = -EIO;
708 713 goto out;
709 714 }
···
835 840 case 256:
836 841 break;
837 842 default:
838 - DEBUG_ERR(
839 - "%s unknown/unsupported keybitsize %d\n",
840 - __func__, keybitsize);
843 + ZCRYPT_DBF_ERR("%s unknown/unsupported keybitsize %d\n",
844 + __func__, keybitsize);
841 845 rc = -EINVAL;
842 846 goto out;
843 847 }
···
874 880 /* forward xcrb with request CPRB and reply CPRB to zcrypt dd */
875 881 rc = zcrypt_send_cprb(&xcrb);
876 882 if (rc) {
877 - DEBUG_ERR(
878 - "%s zcrypt_send_cprb (cardnr=%d domain=%d) failed, rc=%d\n",
879 - __func__, (int)cardnr, (int)domain, rc);
883 + ZCRYPT_DBF_ERR("%s zcrypt_send_cprb (cardnr=%d domain=%d) failed, rc=%d\n",
884 + __func__, (int)cardnr, (int)domain, rc);
880 885 goto out;
881 886 }
882 887
883 888 /* check response returncode and reasoncode */
884 889 if (prepcblk->ccp_rtcode != 0) {
885 - DEBUG_ERR(
886 - "%s cipher key generate failure, card response %d/%d\n",
887 - __func__,
888 - (int)prepcblk->ccp_rtcode,
889 - (int)prepcblk->ccp_rscode);
890 + ZCRYPT_DBF_ERR("%s cipher key generate failure, card response %d/%d\n",
891 + __func__,
892 + (int)prepcblk->ccp_rtcode,
893 + (int)prepcblk->ccp_rscode);
890 894 rc = -EIO;
891 895 goto out;
892 896 }
···
897 905 /* do some plausibility checks on the key block */
898 906 if (prepparm->kb.len < 120 + 5 * sizeof(uint16_t) ||
899 907 prepparm->kb.len > 136 + 5 * sizeof(uint16_t)) {
900 - DEBUG_ERR("%s reply with invalid or unknown key block\n",
901 - __func__);
908 + ZCRYPT_DBF_ERR("%s reply with invalid or unknown key block\n",
909 + __func__);
902 910 rc = -EIO;
903 911 goto out;
904 912 }
···
1040 1048 /* forward xcrb with request CPRB and reply CPRB to zcrypt dd */
1041 1049 rc = zcrypt_send_cprb(&xcrb);
1042 1050 if (rc) {
1043 - DEBUG_ERR(
1044 - "%s zcrypt_send_cprb (cardnr=%d domain=%d) failed, rc=%d\n",
1045 - __func__, (int)cardnr, (int)domain, rc);
1051 + ZCRYPT_DBF_ERR("%s zcrypt_send_cprb (cardnr=%d domain=%d) failed, rc=%d\n",
1052 + __func__, (int)cardnr, (int)domain, rc);
1046 1053 goto out;
1047 1054 }
1048 1055
1049 1056 /* check response returncode and reasoncode */
1050 1057 if (prepcblk->ccp_rtcode != 0) {
1051 - DEBUG_ERR(
1052 - "%s CSNBKPI2 failure, card response %d/%d\n",
1053 - __func__,
1054 - (int)prepcblk->ccp_rtcode,
1055 - (int)prepcblk->ccp_rscode);
1058 + ZCRYPT_DBF_ERR("%s CSNBKPI2 failure, card response %d/%d\n",
1059 + __func__,
1060 + (int)prepcblk->ccp_rtcode,
1061 + (int)prepcblk->ccp_rscode);
1056 1062 rc = -EIO;
1057 1063 goto out;
1058 1064 }
···
1063 1073 /* do some plausibility checks on the key block */
1064 1074 if (prepparm->kb.len < 120 + 3 * sizeof(uint16_t) ||
1065 1075 prepparm->kb.len > 136 + 3 * sizeof(uint16_t)) {
1066 - DEBUG_ERR("%s reply with invalid or unknown key block\n",
1067 - __func__);
1076 + ZCRYPT_DBF_ERR("%s reply with invalid or unknown key block\n",
1077 + __func__);
1068 1078 rc = -EIO;
1069 1079 goto out;
1070 1080 }
···
1122 1132 rc = _ip_cprb_helper(card, dom, "AES ", "FIRST ", "MIN3PART",
1123 1133 exorbuf, keybitsize, token, &tokensize);
1124 1134 if (rc) {
1125 - DEBUG_ERR(
1126 - "%s clear key import 1/4 with CSNBKPI2 failed, rc=%d\n",
1127 - __func__, rc);
1135 + ZCRYPT_DBF_ERR("%s clear key import 1/4 with CSNBKPI2 failed, rc=%d\n",
1136 + __func__, rc);
1128 1137 goto out;
1129 1138 }
1130 1139 rc = _ip_cprb_helper(card, dom, "AES ", "ADD-PART", NULL,
1131 1140 clrkey, keybitsize, token, &tokensize);
1132 1141 if (rc) {
1133 - DEBUG_ERR(
1134 - "%s clear key import 2/4 with CSNBKPI2 failed, rc=%d\n",
1135 - __func__, rc);
1142 + ZCRYPT_DBF_ERR("%s clear key import 2/4 with CSNBKPI2 failed, rc=%d\n",
1143 + __func__, rc);
1136 1144 goto out;
1137 1145 }
1138 1146 rc = _ip_cprb_helper(card, dom, "AES ", "ADD-PART", NULL,
1139 1147 exorbuf, keybitsize, token, &tokensize);
1140 1148 if (rc) {
1141 - DEBUG_ERR(
1142 - "%s clear key import 3/4 with CSNBKPI2 failed, rc=%d\n",
1143 - __func__, rc);
1149 + ZCRYPT_DBF_ERR("%s clear key import 3/4 with CSNBKPI2 failed, rc=%d\n",
1150 + __func__, rc);
1144 1151 goto out;
1145 1152 }
1146 1153 rc = _ip_cprb_helper(card, dom, "AES ", "COMPLETE", NULL,
1147 1154 NULL, keybitsize, token, &tokensize);
1148 1155 if (rc) {
1149 1156 -
ZCRYPT_DBF_ERR("%s clear key import 4/4 with CSNBKPI2 failed, rc=%d\n", 1157 + __func__, rc); 1152 1158 goto out; 1153 1159 } 1154 1160 ··· 1251 1265 /* forward xcrb with request CPRB and reply CPRB to zcrypt dd */ 1252 1266 rc = zcrypt_send_cprb(&xcrb); 1253 1267 if (rc) { 1254 - DEBUG_ERR( 1255 - "%s zcrypt_send_cprb (cardnr=%d domain=%d) failed, rc=%d\n", 1256 - __func__, (int)cardnr, (int)domain, rc); 1268 + ZCRYPT_DBF_ERR("%s zcrypt_send_cprb (cardnr=%d domain=%d) failed, rc=%d\n", 1269 + __func__, (int)cardnr, (int)domain, rc); 1257 1270 goto out; 1258 1271 } 1259 1272 1260 1273 /* check response returncode and reasoncode */ 1261 1274 if (prepcblk->ccp_rtcode != 0) { 1262 - DEBUG_ERR( 1263 - "%s unwrap secure key failure, card response %d/%d\n", 1264 - __func__, 1265 - (int)prepcblk->ccp_rtcode, 1266 - (int)prepcblk->ccp_rscode); 1275 + ZCRYPT_DBF_ERR("%s unwrap secure key failure, card response %d/%d\n", 1276 + __func__, 1277 + (int)prepcblk->ccp_rtcode, 1278 + (int)prepcblk->ccp_rscode); 1267 1279 if (prepcblk->ccp_rtcode == 8 && prepcblk->ccp_rscode == 2290) 1268 1280 rc = -EAGAIN; 1269 1281 else ··· 1269 1285 goto out; 1270 1286 } 1271 1287 if (prepcblk->ccp_rscode != 0) { 1272 - DEBUG_WARN( 1273 - "%s unwrap secure key warning, card response %d/%d\n", 1274 - __func__, 1275 - (int)prepcblk->ccp_rtcode, 1276 - (int)prepcblk->ccp_rscode); 1288 + ZCRYPT_DBF_WARN("%s unwrap secure key warning, card response %d/%d\n", 1289 + __func__, 1290 + (int)prepcblk->ccp_rtcode, 1291 + (int)prepcblk->ccp_rscode); 1277 1292 } 1278 1293 1279 1294 /* process response cprb param block */ ··· 1283 1300 /* check the returned keyblock */ 1284 1301 if (prepparm->vud.ckb.version != 0x01 && 1285 1302 prepparm->vud.ckb.version != 0x02) { 1286 - DEBUG_ERR("%s reply param keyblock version mismatch 0x%02x\n", 1287 - __func__, (int)prepparm->vud.ckb.version); 1303 + ZCRYPT_DBF_ERR("%s reply param keyblock version mismatch 0x%02x\n", 1304 + __func__, (int)prepparm->vud.ckb.version); 
1288 1305 rc = -EIO; 1289 1306 goto out; 1290 1307 } 1291 1308 if (prepparm->vud.ckb.algo != 0x02) { 1292 - DEBUG_ERR( 1293 - "%s reply param keyblock algo mismatch 0x%02x != 0x02\n", 1294 - __func__, (int)prepparm->vud.ckb.algo); 1309 + ZCRYPT_DBF_ERR("%s reply param keyblock algo mismatch 0x%02x != 0x02\n", 1310 + __func__, (int)prepparm->vud.ckb.algo); 1295 1311 rc = -EIO; 1296 1312 goto out; 1297 1313 } ··· 1313 1331 *protkeytype = PKEY_KEYTYPE_AES_256; 1314 1332 break; 1315 1333 default: 1316 - DEBUG_ERR("%s unknown/unsupported keylen %d\n", 1317 - __func__, prepparm->vud.ckb.keylen); 1334 + ZCRYPT_DBF_ERR("%s unknown/unsupported keylen %d\n", 1335 + __func__, prepparm->vud.ckb.keylen); 1318 1336 rc = -EIO; 1319 1337 goto out; 1320 1338 } ··· 1414 1432 /* forward xcrb with request CPRB and reply CPRB to zcrypt dd */ 1415 1433 rc = zcrypt_send_cprb(&xcrb); 1416 1434 if (rc) { 1417 - DEBUG_ERR( 1418 - "%s zcrypt_send_cprb (cardnr=%d domain=%d) failed, rc=%d\n", 1419 - __func__, (int)cardnr, (int)domain, rc); 1435 + ZCRYPT_DBF_ERR("%s zcrypt_send_cprb (cardnr=%d domain=%d) failed, rc=%d\n", 1436 + __func__, (int)cardnr, (int)domain, rc); 1420 1437 goto out; 1421 1438 } 1422 1439 1423 1440 /* check response returncode and reasoncode */ 1424 1441 if (prepcblk->ccp_rtcode != 0) { 1425 - DEBUG_ERR( 1426 - "%s unwrap secure key failure, card response %d/%d\n", 1427 - __func__, 1428 - (int)prepcblk->ccp_rtcode, 1429 - (int)prepcblk->ccp_rscode); 1442 + ZCRYPT_DBF_ERR("%s unwrap secure key failure, card response %d/%d\n", 1443 + __func__, 1444 + (int)prepcblk->ccp_rtcode, 1445 + (int)prepcblk->ccp_rscode); 1430 1446 if (prepcblk->ccp_rtcode == 8 && prepcblk->ccp_rscode == 2290) 1431 1447 rc = -EAGAIN; 1432 1448 else ··· 1432 1452 goto out; 1433 1453 } 1434 1454 if (prepcblk->ccp_rscode != 0) { 1435 - DEBUG_WARN( 1436 - "%s unwrap secure key warning, card response %d/%d\n", 1437 - __func__, 1438 - (int)prepcblk->ccp_rtcode, 1439 - (int)prepcblk->ccp_rscode); 1455 + 
ZCRYPT_DBF_WARN("%s unwrap secure key warning, card response %d/%d\n", 1456 + __func__, 1457 + (int)prepcblk->ccp_rtcode, 1458 + (int)prepcblk->ccp_rscode); 1440 1459 } 1441 1460 1442 1461 /* process response cprb param block */ ··· 1445 1466 1446 1467 /* check the returned keyblock */ 1447 1468 if (prepparm->vud.ckb.version != 0x02) { 1448 - DEBUG_ERR("%s reply param keyblock version mismatch 0x%02x != 0x02\n", 1449 - __func__, (int)prepparm->vud.ckb.version); 1469 + ZCRYPT_DBF_ERR("%s reply param keyblock version mismatch 0x%02x != 0x02\n", 1470 + __func__, (int)prepparm->vud.ckb.version); 1450 1471 rc = -EIO; 1451 1472 goto out; 1452 1473 } 1453 1474 if (prepparm->vud.ckb.algo != 0x81) { 1454 - DEBUG_ERR( 1455 - "%s reply param keyblock algo mismatch 0x%02x != 0x81\n", 1456 - __func__, (int)prepparm->vud.ckb.algo); 1475 + ZCRYPT_DBF_ERR("%s reply param keyblock algo mismatch 0x%02x != 0x81\n", 1476 + __func__, (int)prepparm->vud.ckb.algo); 1457 1477 rc = -EIO; 1458 1478 goto out; 1459 1479 } 1460 1480 1461 1481 /* copy the translated protected key */ 1462 1482 if (prepparm->vud.ckb.keylen > *protkeylen) { 1463 - DEBUG_ERR("%s prot keylen mismatch %d > buffersize %u\n", 1464 - __func__, prepparm->vud.ckb.keylen, *protkeylen); 1483 + ZCRYPT_DBF_ERR("%s prot keylen mismatch %d > buffersize %u\n", 1484 + __func__, prepparm->vud.ckb.keylen, *protkeylen); 1465 1485 rc = -EIO; 1466 1486 goto out; 1467 1487 } ··· 1528 1550 /* forward xcrb with request CPRB and reply CPRB to zcrypt dd */ 1529 1551 rc = zcrypt_send_cprb(&xcrb); 1530 1552 if (rc) { 1531 - DEBUG_ERR("%s zcrypt_send_cprb (cardnr=%d domain=%d) failed, rc=%d\n", 1532 - __func__, (int)cardnr, (int)domain, rc); 1553 + ZCRYPT_DBF_ERR("%s zcrypt_send_cprb (cardnr=%d domain=%d) failed, rc=%d\n", 1554 + __func__, (int)cardnr, (int)domain, rc); 1533 1555 goto out; 1534 1556 } 1535 1557 1536 1558 /* check response returncode and reasoncode */ 1537 1559 if (prepcblk->ccp_rtcode != 0) { 1538 - DEBUG_ERR("%s unwrap 
secure key failure, card response %d/%d\n", 1539 - __func__, 1540 - (int)prepcblk->ccp_rtcode, 1541 - (int)prepcblk->ccp_rscode); 1560 + ZCRYPT_DBF_ERR("%s unwrap secure key failure, card response %d/%d\n", 1561 + __func__, 1562 + (int)prepcblk->ccp_rtcode, 1563 + (int)prepcblk->ccp_rscode); 1542 1564 rc = -EIO; 1543 1565 goto out; 1544 1566 }
+1 -3
drivers/s390/crypto/zcrypt_debug.h
···
 #define RC2ERR(rc) ((rc) ? DBF_ERR : DBF_INFO)
 #define RC2WARN(rc) ((rc) ? DBF_WARN : DBF_INFO)

-#define DBF_MAX_SPRINTF_ARGS 6
+#define ZCRYPT_DBF_MAX_SPRINTF_ARGS 6

 #define ZCRYPT_DBF(...) \
 	debug_sprintf_event(zcrypt_dbf_info, ##__VA_ARGS__)
···
 	debug_sprintf_event(zcrypt_dbf_info, DBF_WARN, ##__VA_ARGS__)
 #define ZCRYPT_DBF_INFO(...) \
 	debug_sprintf_event(zcrypt_dbf_info, DBF_INFO, ##__VA_ARGS__)
-#define ZCRYPT_DBF_DBG(...) \
-	debug_sprintf_event(zcrypt_dbf_info, DBF_DEBUG, ##__VA_ARGS__)

 extern debug_info_t *zcrypt_dbf_info;
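The macros in zcrypt_debug.h are shared by all zcrypt files; the surrounding hunks delete the per-file DEBUG_ERR/DEBUG_WARN wrappers in favor of them. A minimal userspace sketch of the pattern follows; here `debug_sprintf_event` and the `DBF_*` levels are printf-based stand-ins, not the kernel's s390 debug feature:

```c
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical stand-ins for the s390 debug-feature levels. */
#define DBF_ERR  3
#define DBF_WARN 4

static char last_msg[256];
static int last_level;

/* Userspace stand-in for debug_sprintf_event(): remember level + text. */
static void debug_sprintf_event(int level, const char *fmt, ...)
{
	va_list ap;

	va_start(ap, fmt);
	last_level = level;
	vsnprintf(last_msg, sizeof(last_msg), fmt, ap);
	va_end(ap);
}

/* One shared macro family, as in zcrypt_debug.h, instead of the
 * per-file DEBUG_ERR/DEBUG_WARN wrappers the hunks above remove. */
#define ZCRYPT_DBF_ERR(...)  debug_sprintf_event(DBF_ERR, __VA_ARGS__)
#define ZCRYPT_DBF_WARN(...) debug_sprintf_event(DBF_WARN, __VA_ARGS__)

/* Typical call-site shape after the conversion. */
static int report_failure(int rc)
{
	if (rc) {
		ZCRYPT_DBF_ERR("%s zcrypt_send_cprb failed, rc=%d\n",
			       __func__, rc);
		return -1;
	}
	return 0;
}
```

The only change at each call site is the macro name and the re-aligned continuation lines; the format strings are untouched.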
+56 -71
drivers/s390/crypto/zcrypt_ep11misc.c
··· 24 24 #include "zcrypt_ep11misc.h" 25 25 #include "zcrypt_ccamisc.h" 26 26 27 - #define DEBUG_DBG(...) ZCRYPT_DBF(DBF_DEBUG, ##__VA_ARGS__) 28 - #define DEBUG_INFO(...) ZCRYPT_DBF(DBF_INFO, ##__VA_ARGS__) 29 - #define DEBUG_WARN(...) ZCRYPT_DBF(DBF_WARN, ##__VA_ARGS__) 30 - #define DEBUG_ERR(...) ZCRYPT_DBF(DBF_ERR, ##__VA_ARGS__) 31 - 32 27 #define EP11_PINBLOB_V1_BYTES 56 33 28 34 29 /* default iv used here */ ··· 505 510 506 511 /* start tag */ 507 512 if (*pl++ != 0x30) { 508 - DEBUG_ERR("%s reply start tag mismatch\n", func); 513 + ZCRYPT_DBF_ERR("%s reply start tag mismatch\n", func); 509 514 return -EIO; 510 515 } 511 516 ··· 522 527 len = *((u16 *)pl); 523 528 pl += 2; 524 529 } else { 525 - DEBUG_ERR("%s reply start tag lenfmt mismatch 0x%02hhx\n", 526 - func, *pl); 530 + ZCRYPT_DBF_ERR("%s reply start tag lenfmt mismatch 0x%02hhx\n", 531 + func, *pl); 527 532 return -EIO; 528 533 } 529 534 530 535 /* len should cover at least 3 fields with 32 bit value each */ 531 536 if (len < 3 * 6) { 532 - DEBUG_ERR("%s reply length %d too small\n", func, len); 537 + ZCRYPT_DBF_ERR("%s reply length %d too small\n", func, len); 533 538 return -EIO; 534 539 } 535 540 536 541 /* function tag, length and value */ 537 542 if (pl[0] != 0x04 || pl[1] != 0x04) { 538 - DEBUG_ERR("%s function tag or length mismatch\n", func); 543 + ZCRYPT_DBF_ERR("%s function tag or length mismatch\n", func); 539 544 return -EIO; 540 545 } 541 546 pl += 6; 542 547 543 548 /* dom tag, length and value */ 544 549 if (pl[0] != 0x04 || pl[1] != 0x04) { 545 - DEBUG_ERR("%s dom tag or length mismatch\n", func); 550 + ZCRYPT_DBF_ERR("%s dom tag or length mismatch\n", func); 546 551 return -EIO; 547 552 } 548 553 pl += 6; 549 554 550 555 /* return value tag, length and value */ 551 556 if (pl[0] != 0x04 || pl[1] != 0x04) { 552 - DEBUG_ERR("%s return value tag or length mismatch\n", func); 557 + ZCRYPT_DBF_ERR("%s return value tag or length mismatch\n", 558 + func); 553 559 return -EIO; 554 560 } 555 
561 pl += 2; 556 562 ret = *((u32 *)pl); 557 563 if (ret != 0) { 558 - DEBUG_ERR("%s return value 0x%04x != 0\n", func, ret); 564 + ZCRYPT_DBF_ERR("%s return value 0x%04x != 0\n", func, ret); 559 565 return -EIO; 560 566 } 561 567 ··· 622 626 623 627 rc = zcrypt_send_ep11_cprb(urb); 624 628 if (rc) { 625 - DEBUG_ERR( 626 - "%s zcrypt_send_ep11_cprb(card=%d dom=%d) failed, rc=%d\n", 627 - __func__, (int)cardnr, (int)domain, rc); 629 + ZCRYPT_DBF_ERR("%s zcrypt_send_ep11_cprb(card=%d dom=%d) failed, rc=%d\n", 630 + __func__, (int)cardnr, (int)domain, rc); 628 631 goto out; 629 632 } 630 633 ··· 631 636 if (rc) 632 637 goto out; 633 638 if (rep_pl->data_tag != 0x04 || rep_pl->data_lenfmt != 0x82) { 634 - DEBUG_ERR("%s unknown reply data format\n", __func__); 639 + ZCRYPT_DBF_ERR("%s unknown reply data format\n", __func__); 635 640 rc = -EIO; 636 641 goto out; 637 642 } 638 643 if (rep_pl->data_len > buflen) { 639 - DEBUG_ERR("%s mismatch between reply data len and buffer len\n", 640 - __func__); 644 + ZCRYPT_DBF_ERR("%s mismatch between reply data len and buffer len\n", 645 + __func__); 641 646 rc = -ENOSPC; 642 647 goto out; 643 648 } ··· 811 816 case 256: 812 817 break; 813 818 default: 814 - DEBUG_ERR( 815 - "%s unknown/unsupported keybitsize %d\n", 816 - __func__, keybitsize); 819 + ZCRYPT_DBF_ERR("%s unknown/unsupported keybitsize %d\n", 820 + __func__, keybitsize); 817 821 rc = -EINVAL; 818 822 goto out; 819 823 } ··· 872 878 873 879 rc = zcrypt_send_ep11_cprb(urb); 874 880 if (rc) { 875 - DEBUG_ERR( 876 - "%s zcrypt_send_ep11_cprb(card=%d dom=%d) failed, rc=%d\n", 877 - __func__, (int)card, (int)domain, rc); 881 + ZCRYPT_DBF_ERR("%s zcrypt_send_ep11_cprb(card=%d dom=%d) failed, rc=%d\n", 882 + __func__, (int)card, (int)domain, rc); 878 883 goto out; 879 884 } 880 885 ··· 881 888 if (rc) 882 889 goto out; 883 890 if (rep_pl->data_tag != 0x04 || rep_pl->data_lenfmt != 0x82) { 884 - DEBUG_ERR("%s unknown reply data format\n", __func__); 891 + ZCRYPT_DBF_ERR("%s 
unknown reply data format\n", __func__); 885 892 rc = -EIO; 886 893 goto out; 887 894 } 888 895 if (rep_pl->data_len > *keybufsize) { 889 - DEBUG_ERR("%s mismatch reply data len / key buffer len\n", 890 - __func__); 896 + ZCRYPT_DBF_ERR("%s mismatch reply data len / key buffer len\n", 897 + __func__); 891 898 rc = -ENOSPC; 892 899 goto out; 893 900 } ··· 1023 1030 1024 1031 rc = zcrypt_send_ep11_cprb(urb); 1025 1032 if (rc) { 1026 - DEBUG_ERR( 1027 - "%s zcrypt_send_ep11_cprb(card=%d dom=%d) failed, rc=%d\n", 1028 - __func__, (int)card, (int)domain, rc); 1033 + ZCRYPT_DBF_ERR("%s zcrypt_send_ep11_cprb(card=%d dom=%d) failed, rc=%d\n", 1034 + __func__, (int)card, (int)domain, rc); 1029 1035 goto out; 1030 1036 } 1031 1037 ··· 1032 1040 if (rc) 1033 1041 goto out; 1034 1042 if (rep_pl->data_tag != 0x04) { 1035 - DEBUG_ERR("%s unknown reply data format\n", __func__); 1043 + ZCRYPT_DBF_ERR("%s unknown reply data format\n", __func__); 1036 1044 rc = -EIO; 1037 1045 goto out; 1038 1046 } ··· 1045 1053 n = *((u16 *)p); 1046 1054 p += 2; 1047 1055 } else { 1048 - DEBUG_ERR("%s unknown reply data length format 0x%02hhx\n", 1049 - __func__, rep_pl->data_lenfmt); 1056 + ZCRYPT_DBF_ERR("%s unknown reply data length format 0x%02hhx\n", 1057 + __func__, rep_pl->data_lenfmt); 1050 1058 rc = -EIO; 1051 1059 goto out; 1052 1060 } 1053 1061 if (n > *outbufsize) { 1054 - DEBUG_ERR("%s mismatch reply data len %d / output buffer %zu\n", 1055 - __func__, n, *outbufsize); 1062 + ZCRYPT_DBF_ERR("%s mismatch reply data len %d / output buffer %zu\n", 1063 + __func__, n, *outbufsize); 1056 1064 rc = -ENOSPC; 1057 1065 goto out; 1058 1066 } ··· 1180 1188 1181 1189 rc = zcrypt_send_ep11_cprb(urb); 1182 1190 if (rc) { 1183 - DEBUG_ERR( 1184 - "%s zcrypt_send_ep11_cprb(card=%d dom=%d) failed, rc=%d\n", 1185 - __func__, (int)card, (int)domain, rc); 1191 + ZCRYPT_DBF_ERR("%s zcrypt_send_ep11_cprb(card=%d dom=%d) failed, rc=%d\n", 1192 + __func__, (int)card, (int)domain, rc); 1186 1193 goto out; 
1187 1194 } 1188 1195 ··· 1189 1198 if (rc) 1190 1199 goto out; 1191 1200 if (rep_pl->data_tag != 0x04 || rep_pl->data_lenfmt != 0x82) { 1192 - DEBUG_ERR("%s unknown reply data format\n", __func__); 1201 + ZCRYPT_DBF_ERR("%s unknown reply data format\n", __func__); 1193 1202 rc = -EIO; 1194 1203 goto out; 1195 1204 } 1196 1205 if (rep_pl->data_len > *keybufsize) { 1197 - DEBUG_ERR("%s mismatch reply data len / key buffer len\n", 1198 - __func__); 1206 + ZCRYPT_DBF_ERR("%s mismatch reply data len / key buffer len\n", 1207 + __func__); 1199 1208 rc = -ENOSPC; 1200 1209 goto out; 1201 1210 } ··· 1334 1343 1335 1344 rc = zcrypt_send_ep11_cprb(urb); 1336 1345 if (rc) { 1337 - DEBUG_ERR( 1338 - "%s zcrypt_send_ep11_cprb(card=%d dom=%d) failed, rc=%d\n", 1339 - __func__, (int)card, (int)domain, rc); 1346 + ZCRYPT_DBF_ERR("%s zcrypt_send_ep11_cprb(card=%d dom=%d) failed, rc=%d\n", 1347 + __func__, (int)card, (int)domain, rc); 1340 1348 goto out; 1341 1349 } 1342 1350 ··· 1343 1353 if (rc) 1344 1354 goto out; 1345 1355 if (rep_pl->data_tag != 0x04 || rep_pl->data_lenfmt != 0x82) { 1346 - DEBUG_ERR("%s unknown reply data format\n", __func__); 1356 + ZCRYPT_DBF_ERR("%s unknown reply data format\n", __func__); 1347 1357 rc = -EIO; 1348 1358 goto out; 1349 1359 } 1350 1360 if (rep_pl->data_len > *datasize) { 1351 - DEBUG_ERR("%s mismatch reply data len / data buffer len\n", 1352 - __func__); 1361 + ZCRYPT_DBF_ERR("%s mismatch reply data len / data buffer len\n", 1362 + __func__); 1353 1363 rc = -ENOSPC; 1354 1364 goto out; 1355 1365 } ··· 1376 1386 if (keybitsize == 128 || keybitsize == 192 || keybitsize == 256) { 1377 1387 clrkeylen = keybitsize / 8; 1378 1388 } else { 1379 - DEBUG_ERR( 1380 - "%s unknown/unsupported keybitsize %d\n", 1381 - __func__, keybitsize); 1389 + ZCRYPT_DBF_ERR("%s unknown/unsupported keybitsize %d\n", 1390 + __func__, keybitsize); 1382 1391 return -EINVAL; 1383 1392 } 1384 1393 ··· 1394 1405 0x00006c00, /* EN/DECRYPT, WRAP/UNWRAP */ 1395 1406 kek, 
&keklen); 1396 1407 if (rc) { 1397 - DEBUG_ERR( 1398 - "%s generate kek key failed, rc=%d\n", 1399 - __func__, rc); 1408 + ZCRYPT_DBF_ERR("%s generate kek key failed, rc=%d\n", 1409 + __func__, rc); 1400 1410 goto out; 1401 1411 } 1402 1412 ··· 1403 1415 rc = ep11_cryptsingle(card, domain, 0, 0, def_iv, kek, keklen, 1404 1416 clrkey, clrkeylen, encbuf, &encbuflen); 1405 1417 if (rc) { 1406 - DEBUG_ERR( 1407 - "%s encrypting key value with kek key failed, rc=%d\n", 1408 - __func__, rc); 1418 + ZCRYPT_DBF_ERR("%s encrypting key value with kek key failed, rc=%d\n", 1419 + __func__, rc); 1409 1420 goto out; 1410 1421 } 1411 1422 ··· 1413 1426 encbuf, encbuflen, 0, def_iv, 1414 1427 keybitsize, 0, keybuf, keybufsize, keytype); 1415 1428 if (rc) { 1416 - DEBUG_ERR( 1417 - "%s importing key value as new key failed,, rc=%d\n", 1418 - __func__, rc); 1429 + ZCRYPT_DBF_ERR("%s importing key value as new key failed,, rc=%d\n", 1430 + __func__, rc); 1419 1431 goto out; 1420 1432 } 1421 1433 ··· 1462 1476 rc = _ep11_wrapkey(card, dom, (u8 *)key, keylen, 1463 1477 0, def_iv, wkbuf, &wkbuflen); 1464 1478 if (rc) { 1465 - DEBUG_ERR( 1466 - "%s rewrapping ep11 key to pkey failed, rc=%d\n", 1467 - __func__, rc); 1479 + ZCRYPT_DBF_ERR("%s rewrapping ep11 key to pkey failed, rc=%d\n", 1480 + __func__, rc); 1468 1481 goto out; 1469 1482 } 1470 1483 wki = (struct wk_info *)wkbuf; 1471 1484 1472 1485 /* check struct version and pkey type */ 1473 1486 if (wki->version != 1 || wki->pkeytype < 1 || wki->pkeytype > 5) { 1474 - DEBUG_ERR("%s wk info version %d or pkeytype %d mismatch.\n", 1475 - __func__, (int)wki->version, (int)wki->pkeytype); 1487 + ZCRYPT_DBF_ERR("%s wk info version %d or pkeytype %d mismatch.\n", 1488 + __func__, (int)wki->version, (int)wki->pkeytype); 1476 1489 rc = -EIO; 1477 1490 goto out; 1478 1491 } ··· 1496 1511 *protkeytype = PKEY_KEYTYPE_AES_256; 1497 1512 break; 1498 1513 default: 1499 - DEBUG_ERR("%s unknown/unsupported AES pkeysize %d\n", 1500 - __func__, 
(int)wki->pkeysize); 1514 + ZCRYPT_DBF_ERR("%s unknown/unsupported AES pkeysize %d\n", 1515 + __func__, (int)wki->pkeysize); 1501 1516 rc = -EIO; 1502 1517 goto out; 1503 1518 } ··· 1510 1525 break; 1511 1526 case 2: /* TDES */ 1512 1527 default: 1513 - DEBUG_ERR("%s unknown/unsupported key type %d\n", 1514 - __func__, (int)wki->pkeytype); 1528 + ZCRYPT_DBF_ERR("%s unknown/unsupported key type %d\n", 1529 + __func__, (int)wki->pkeytype); 1515 1530 rc = -EIO; 1516 1531 goto out; 1517 1532 } 1518 1533 1519 1534 /* copy the translated protected key */ 1520 1535 if (wki->pkeysize > *protkeylen) { 1521 - DEBUG_ERR("%s wk info pkeysize %llu > protkeysize %u\n", 1522 - __func__, wki->pkeysize, *protkeylen); 1536 + ZCRYPT_DBF_ERR("%s wk info pkeysize %llu > protkeysize %u\n", 1537 + __func__, wki->pkeysize, *protkeylen); 1523 1538 rc = -EINVAL; 1524 1539 goto out; 1525 1540 }
+2 -3
drivers/s390/crypto/zcrypt_error.h
···
 	case REP82_ERROR_MESSAGE_TYPE:	/* 0x20 */
 	case REP82_ERROR_TRANSPORT_FAIL:	/* 0x90 */
 		/*
-		 * Msg to wrong type or card/infrastructure failure.
-		 * Trigger rescan of the ap bus, trigger retry request.
+		 * Msg to wrong type or card/infrastructure failure. Return
+		 * EAGAIN, the upper layer may do a retry on the request.
 		 */
-		atomic_set(&zcrypt_rescan_req, 1);
 		/* For type 86 response show the apfs value (failure reason) */
 		if (ehdr->reply_code == REP82_ERROR_TRANSPORT_FAIL &&
 		    ehdr->type == TYPE86_RSP_CODE) {
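With the ap-bus rescan trigger gone, a transport failure now simply surfaces as EAGAIN and retrying becomes the caller's job. A hedged sketch of that upper-layer retry idea; the function names, the failure pattern, and the retry bound are hypothetical, not the driver's actual retry logic:

```c
#include <errno.h>

#define ZCRYPT_RETRY_MAX 3	/* hypothetical bound, not the kernel's */

static int attempts_used;

/* Hypothetical transport: fails twice with -EAGAIN, then succeeds. */
static int send_cprb_once(void)
{
	return (++attempts_used < 3) ? -EAGAIN : 0;
}

/* Upper layer owns the retry loop: repeat while the lower layer
 * reports -EAGAIN, up to a fixed bound, pass other errors through. */
static int send_cprb_with_retry(void)
{
	int i, rc = -EAGAIN;

	for (i = 0; i < ZCRYPT_RETRY_MAX && rc == -EAGAIN; i++)
		rc = send_cprb_once();
	return rc;
}
```

Keeping the loop in one place, rather than setting a global rescan flag from deep inside error parsing, is the design point of the hunk above.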
+7 -7
drivers/s390/crypto/zcrypt_msgtype50.c
···
 	len = t80h->len;
 	if (len > reply->bufsize || len > msg->bufsize ||
 	    len != reply->len) {
-		ZCRYPT_DBF_DBG("%s len mismatch => EMSGSIZE\n", __func__);
+		pr_debug("%s len mismatch => EMSGSIZE\n", __func__);
 		msg->rc = -EMSGSIZE;
 		goto out;
 	}
···
 out:
 	ap_msg->private = NULL;
 	if (rc)
-		ZCRYPT_DBF_DBG("%s send me cprb at dev=%02x.%04x rc=%d\n",
-			       __func__, AP_QID_CARD(zq->queue->qid),
-			       AP_QID_QUEUE(zq->queue->qid), rc);
+		pr_debug("%s send me cprb at dev=%02x.%04x rc=%d\n",
+			 __func__, AP_QID_CARD(zq->queue->qid),
+			 AP_QID_QUEUE(zq->queue->qid), rc);
 	return rc;
 }
···
 out:
 	ap_msg->private = NULL;
 	if (rc)
-		ZCRYPT_DBF_DBG("%s send crt cprb at dev=%02x.%04x rc=%d\n",
-			       __func__, AP_QID_CARD(zq->queue->qid),
-			       AP_QID_QUEUE(zq->queue->qid), rc);
+		pr_debug("%s send crt cprb at dev=%02x.%04x rc=%d\n",
+			 __func__, AP_QID_CARD(zq->queue->qid),
+			 AP_QID_QUEUE(zq->queue->qid), rc);
 	return rc;
 }
+24 -21
drivers/s390/crypto/zcrypt_msgtype6.c
··· 437 437 ap_msg->flags |= AP_MSG_FLAG_ADMIN; 438 438 break; 439 439 default: 440 - ZCRYPT_DBF_DBG("%s unknown CPRB minor version '%c%c'\n", 441 - __func__, msg->cprbx.func_id[0], 442 - msg->cprbx.func_id[1]); 440 + pr_debug("%s unknown CPRB minor version '%c%c'\n", 441 + __func__, msg->cprbx.func_id[0], 442 + msg->cprbx.func_id[1]); 443 443 } 444 444 445 445 /* copy data block */ ··· 629 629 630 630 /* Copy CPRB to user */ 631 631 if (xcrb->reply_control_blk_length < msg->fmt2.count1) { 632 - ZCRYPT_DBF_DBG("%s reply_control_blk_length %u < required %u => EMSGSIZE\n", 633 - __func__, xcrb->reply_control_blk_length, 634 - msg->fmt2.count1); 632 + pr_debug("%s reply_control_blk_length %u < required %u => EMSGSIZE\n", 633 + __func__, xcrb->reply_control_blk_length, 634 + msg->fmt2.count1); 635 635 return -EMSGSIZE; 636 636 } 637 637 if (z_copy_to_user(userspace, xcrb->reply_control_blk_addr, ··· 642 642 /* Copy data buffer to user */ 643 643 if (msg->fmt2.count2) { 644 644 if (xcrb->reply_data_length < msg->fmt2.count2) { 645 - ZCRYPT_DBF_DBG("%s reply_data_length %u < required %u => EMSGSIZE\n", 646 - __func__, xcrb->reply_data_length, 647 - msg->fmt2.count2); 645 + pr_debug("%s reply_data_length %u < required %u => EMSGSIZE\n", 646 + __func__, xcrb->reply_data_length, 647 + msg->fmt2.count2); 648 648 return -EMSGSIZE; 649 649 } 650 650 if (z_copy_to_user(userspace, xcrb->reply_data_addr, ··· 673 673 char *data = reply->msg; 674 674 675 675 if (xcrb->resp_len < msg->fmt2.count1) { 676 - ZCRYPT_DBF_DBG("%s resp_len %u < required %u => EMSGSIZE\n", 677 - __func__, (unsigned int)xcrb->resp_len, 678 - msg->fmt2.count1); 676 + pr_debug("%s resp_len %u < required %u => EMSGSIZE\n", 677 + __func__, (unsigned int)xcrb->resp_len, 678 + msg->fmt2.count1); 679 679 return -EMSGSIZE; 680 680 } 681 681 ··· 875 875 len = sizeof(struct type86x_reply) + t86r->length; 876 876 if (len > reply->bufsize || len > msg->bufsize || 877 877 len != reply->len) { 878 - ZCRYPT_DBF_DBG("%s len 
mismatch => EMSGSIZE\n", __func__); 878 + pr_debug("%s len mismatch => EMSGSIZE\n", 879 + __func__); 879 880 msg->rc = -EMSGSIZE; 880 881 goto out; 881 882 } ··· 890 889 len = t86r->fmt2.offset1 + t86r->fmt2.count1; 891 890 if (len > reply->bufsize || len > msg->bufsize || 892 891 len != reply->len) { 893 - ZCRYPT_DBF_DBG("%s len mismatch => EMSGSIZE\n", __func__); 892 + pr_debug("%s len mismatch => EMSGSIZE\n", 893 + __func__); 894 894 msg->rc = -EMSGSIZE; 895 895 goto out; 896 896 } ··· 941 939 len = t86r->fmt2.offset1 + t86r->fmt2.count1; 942 940 if (len > reply->bufsize || len > msg->bufsize || 943 941 len != reply->len) { 944 - ZCRYPT_DBF_DBG("%s len mismatch => EMSGSIZE\n", __func__); 942 + pr_debug("%s len mismatch => EMSGSIZE\n", 943 + __func__); 945 944 msg->rc = -EMSGSIZE; 946 945 goto out; 947 946 } ··· 1154 1151 1155 1152 out: 1156 1153 if (rc) 1157 - ZCRYPT_DBF_DBG("%s send cprb at dev=%02x.%04x rc=%d\n", 1158 - __func__, AP_QID_CARD(zq->queue->qid), 1159 - AP_QID_QUEUE(zq->queue->qid), rc); 1154 + pr_debug("%s send cprb at dev=%02x.%04x rc=%d\n", 1155 + __func__, AP_QID_CARD(zq->queue->qid), 1156 + AP_QID_QUEUE(zq->queue->qid), rc); 1160 1157 return rc; 1161 1158 } 1162 1159 ··· 1277 1274 1278 1275 out: 1279 1276 if (rc) 1280 - ZCRYPT_DBF_DBG("%s send cprb at dev=%02x.%04x rc=%d\n", 1281 - __func__, AP_QID_CARD(zq->queue->qid), 1282 - AP_QID_QUEUE(zq->queue->qid), rc); 1277 + pr_debug("%s send cprb at dev=%02x.%04x rc=%d\n", 1278 + __func__, AP_QID_CARD(zq->queue->qid), 1279 + AP_QID_QUEUE(zq->queue->qid), rc); 1283 1280 return rc; 1284 1281 } 1285 1282
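The msgtype hunks swap the zcrypt debug-feature macro ZCRYPT_DBF_DBG for pr_debug(), whose call sites compile to nothing unless debugging is enabled. A userspace sketch of that property; this `pr_debug` is a printf stand-in with a compile-time DEBUG switch, not the kernel macro with its dynamic-debug machinery:

```c
#include <stdio.h>

static int debug_lines;	/* counts emitted debug messages */

/* Sketch of the pr_debug() idea: with DEBUG undefined, the call site
 * and its arguments disappear entirely at compile time. */
#ifdef DEBUG
# define pr_debug(fmt, ...) (debug_lines++, printf(fmt, ##__VA_ARGS__))
#else
# define pr_debug(fmt, ...) do { } while (0)
#endif

/* Shape of the converted call sites: the error path still returns
 * the error, only the trace output is conditional. */
static int check_len(int len, int bufsize)
{
	if (len > bufsize) {
		pr_debug("%s len mismatch => EMSGSIZE\n", __func__);
		return -1;
	}
	return 0;
}
```

Error handling is unchanged by the conversion; only where the trace text lands (s390 debug feature vs. kernel log / dynamic debug) differs.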
+12
include/linux/compiler_attributes.h
···
 #define __section(section)		__attribute__((__section__(section)))

 /*
+ * Optional: only supported since gcc >= 12
+ *
+ * gcc: https://gcc.gnu.org/onlinedocs/gcc/Common-Variable-Attributes.html#index-uninitialized-variable-attribute
+ * clang: https://clang.llvm.org/docs/AttributeReference.html#uninitialized
+ */
+#if __has_attribute(__uninitialized__)
+# define __uninitialized __attribute__((__uninitialized__))
+#else
+# define __uninitialized
+#endif
+
+/*
  * gcc: https://gcc.gnu.org/onlinedocs/gcc/Common-Function-Attributes.html#index-unused-function-attribute
  * gcc: https://gcc.gnu.org/onlinedocs/gcc/Common-Type-Attributes.html#index-unused-type-attribute
  * gcc: https://gcc.gnu.org/onlinedocs/gcc/Common-Variable-Attributes.html#index-unused-variable-attribute
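The new `__uninitialized` macro can be exercised outside the kernel too, since it degrades to nothing where the attribute is unsupported. A sketch mirroring the hunk above; `checksum_scratch` is a made-up user (the kernel's motivation is s390 FPU code with stack frames of up to 520 bytes, which auto-initialization would otherwise clear on every call):

```c
/* Same fallback dance as the compiler_attributes.h hunk. */
#ifndef __has_attribute
# define __has_attribute(x) 0
#endif

#if __has_attribute(__uninitialized__)
# define __uninitialized __attribute__((__uninitialized__))
#else
# define __uninitialized
#endif

/* A large scratch buffer that is fully written before it is read;
 * under CONFIG_INIT_STACK_ALL_* style auto-init, __uninitialized
 * tells the compiler to skip the redundant clearing. */
static unsigned int checksum_scratch(const unsigned char *data, int len)
{
	unsigned char buf[520] __uninitialized;
	unsigned int sum = 0;
	int i;

	for (i = 0; i < len; i++)
		buf[i] = data[i] ^ 0x5a;
	for (i = 0; i < len; i++)
		sum += buf[i];
	return sum;
}
```

The attribute is safe only where every read of the variable is provably preceded by a write, which is why it is opt-in per variable rather than a global knob.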
+13 -49
lib/raid6/s390vx.uc
···
  */

 #include <linux/raid/pq.h>
-#include <asm/fpu/api.h>
-#include <asm/vx-insn.h>
+#include <asm/fpu.h>

 #define NSIZE 16

-static inline void LOAD_CONST(void)
+static __always_inline void LOAD_CONST(void)
 {
-	asm volatile("VREPIB %v24,7");
-	asm volatile("VREPIB %v25,0x1d");
+	fpu_vrepib(24, 0x07);
+	fpu_vrepib(25, 0x1d);
 }

 /*
···
  * vector register y left by 1 bit and stores the result in
  * vector register x.
  */
-static inline void SHLBYTE(int x, int y)
-{
-	asm volatile ("VAB %0,%1,%1" : : "i" (x), "i" (y));
-}
+#define SHLBYTE(x, y) fpu_vab(x, y, y)

 /*
···
  * or 0x00 if the high bit is 0. The result is stored in vector
  * register x.
  */
-static inline void MASK(int x, int y)
-{
-	asm volatile ("VESRAVB %0,%1,24" : : "i" (x), "i" (y));
-}
+#define MASK(x, y) fpu_vesravb(x, y, 24)

-static inline void AND(int x, int y, int z)
-{
-	asm volatile ("VN %0,%1,%2" : : "i" (x), "i" (y), "i" (z));
-}
-
-static inline void XOR(int x, int y, int z)
-{
-	asm volatile ("VX %0,%1,%2" : : "i" (x), "i" (y), "i" (z));
-}
-
-static inline void LOAD_DATA(int x, u8 *ptr)
-{
-	typedef struct { u8 _[16 * $#]; } addrtype;
-	register addrtype *__ptr asm("1") = (addrtype *) ptr;
-
-	asm volatile ("VLM %2,%3,0,%1"
-		      : : "m" (*__ptr), "a" (__ptr), "i" (x),
-		      "i" (x + $# - 1));
-}
-
-static inline void STORE_DATA(int x, u8 *ptr)
-{
-	typedef struct { u8 _[16 * $#]; } addrtype;
-	register addrtype *__ptr asm("1") = (addrtype *) ptr;
-
-	asm volatile ("VSTM %2,%3,0,1"
-		      : "=m" (*__ptr) : "a" (__ptr), "i" (x),
-		      "i" (x + $# - 1));
-}
-
-static inline void COPY_VEC(int x, int y)
-{
-	asm volatile ("VLR %0,%1" : : "i" (x), "i" (y));
-}
+#define AND(x, y, z) fpu_vn(x, y, z)
+#define XOR(x, y, z) fpu_vx(x, y, z)
+#define LOAD_DATA(x, ptr) fpu_vlm(x, x + $# - 1, ptr)
+#define STORE_DATA(x, ptr) fpu_vstm(x, x + $# - 1, ptr)
+#define COPY_VEC(x, y) fpu_vlr(x, y)

 static void raid6_s390vx$#_gen_syndrome(int disks, size_t bytes, void **ptrs)
 {
-	struct kernel_fpu vxstate;
+	DECLARE_KERNEL_FPU_ONSTACK32(vxstate);
 	u8 **dptr, *p, *q;
 	int d, z, z0;

···
 static void raid6_s390vx$#_xor_syndrome(int disks, int start, int stop,
 					size_t bytes, void **ptrs)
 {
-	struct kernel_fpu vxstate;
+	DECLARE_KERNEL_FPU_ONSTACK32(vxstate);
 	u8 **dptr, *p, *q;
 	int d, z, z0;
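The fpu_* helpers above are single-instruction wrappers; what SHLBYTE/MASK/AND/XOR jointly compute per byte lane is the RAID6 GF(2^8) multiply-by-2 with polynomial 0x1d, the two constants LOAD_CONST places in v24 (shift count 7) and v25 (0x1d). A scalar C sketch of that math, not the vectorized kernel code:

```c
#include <stdint.h>

/* One byte lane of the vector sequence:
 * MASK():    arithmetic shift right by 7 replicates the high bit,
 * SHLBYTE(): adding a byte to itself shifts it left by 1,
 * AND()+XOR(): conditionally fold in the field polynomial 0x1d. */
static uint8_t gf_mul2(uint8_t v)
{
	uint8_t mask = (v & 0x80) ? 0xff : 0x00;
	uint8_t shl  = (uint8_t)(v + v);

	return shl ^ (mask & 0x1d);
}

/* One byte lane of the Q syndrome over 'disks' data bytes,
 * mirroring the gen_syndrome loop structure (highest disk first). */
static uint8_t q_syndrome(const uint8_t *data, int disks)
{
	uint8_t q = 0;
	int z;

	for (z = disks - 1; z >= 0; z--)
		q = (uint8_t)(gf_mul2(q) ^ data[z]);
	return q;
}
```

The rewrite changes only how these operations are expressed (instrumentable C helpers instead of opaque inline assembly blocks); the computed syndromes are identical.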
+31
tools/testing/selftests/kvm/s390x/memop.c
···
 	kvm_vm_free(t.kvm_vm);
 }

+static void test_copy_access_register(void)
+{
+	struct test_default t = test_default_init(guest_copy);
+
+	HOST_SYNC(t.vcpu, STAGE_INITED);
+
+	prepare_mem12();
+	t.run->psw_mask &= ~(3UL << (63 - 17));
+	t.run->psw_mask |= 1UL << (63 - 17);	/* Enable AR mode */
+
+	/*
+	 * Primary address space gets used if an access register
+	 * contains zero. The host makes use of AR[1] so is a good
+	 * candidate to ensure the guest AR (of zero) is used.
+	 */
+	CHECK_N_DO(MOP, t.vcpu, LOGICAL, WRITE, mem1, t.size,
+		   GADDR_V(mem1), AR(1));
+	HOST_SYNC(t.vcpu, STAGE_COPIED);
+
+	CHECK_N_DO(MOP, t.vcpu, LOGICAL, READ, mem2, t.size,
+		   GADDR_V(mem2), AR(1));
+	ASSERT_MEM_EQ(mem1, mem2, t.size);
+
+	kvm_vm_free(t.kvm_vm);
+}
+
 static void set_storage_key_range(void *addr, size_t len, uint8_t key)
 {
 	uintptr_t _addr, abs, i;
···
 		.name = "copy with key fetch protection override",
 		.test = test_copy_key_fetch_prot_override,
 		.requirements_met = extension_cap > 0,
+	},
+	{
+		.name = "copy with access register mode",
+		.test = test_copy_access_register,
+		.requirements_met = true,
 	},
 	{
 		.name = "error checks with key",
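The new selftest sets PSW address-space-control bits 16-17 to binary 01 to run the memop in access-register mode. That bit manipulation can be checked in isolation; `psw_enable_ar_mode` is a hypothetical helper mirroring the two statements in the test:

```c
#include <stdint.h>

/* Bits 16-17 of the 64-bit s390 PSW mask select the address space;
 * clear both, then set bit 17 (1 << (63 - 17)) for 01 = AR mode.
 * All other PSW bits must be left untouched. */
static uint64_t psw_enable_ar_mode(uint64_t psw_mask)
{
	psw_mask &= ~(3ULL << (63 - 17));
	psw_mask |= 1ULL << (63 - 17);
	return psw_mask;
}
```

With a guest access register containing zero, the translation falls back to the primary space; the test deliberately passes AR(1) on the host side, since the host itself uses AR[1], to prove the guest's (zero) register is the one consulted.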