Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux

Pull s390 updates from Heiko Carstens:
"Since Martin is on vacation, you get the s390 pull request for the
v4.15 merge window this time from me.

Besides a lot of cleanups and bug fixes, these are the most important
changes:

- a new regset for runtime instrumentation registers

- hardware accelerated AES-GCM support for the aes_s390 module

- support for the new CEX6S crypto cards

- support for FORTIFY_SOURCE

- addition of missing z13 and new z14 instructions to the in-kernel
disassembler

- generate opcode tables for the in-kernel disassembler out of a
simple text file instead of having to manually maintain those
tables

- fast memset16, memset32 and memset64 implementations

- removal of named saved segment support

- hardware counter support for z14

- queued spinlocks and queued rwlocks implementations for s390

- use the stack_depth tracking feature for s390 BPF JIT

- a new s390_sthyi system call which emulates the sthyi (store
hypervisor information) instruction

- removal of the old KVM virtio transport

- an s390 specific CPU alternatives implementation which is used in
the new spinlock code"
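The memset16/memset32/memset64 helpers mentioned above fill a buffer with a repeating 2-, 4- or 8-byte pattern rather than a single byte. A rough model of their semantics (pure Python, illustration only; `memset_n` is a hypothetical name, not a kernel API, and big-endian element layout is assumed to match s390):

```python
def memset_n(buf: bytearray, offset: int, value: int, count: int, width: int) -> None:
    """Fill `count` elements of `width` bytes each at `offset` with `value`,
    the way memset16/32/64 fill 2-, 4- and 8-byte patterns (memset fills 1)."""
    pattern = value.to_bytes(width, "big")  # s390 is big-endian
    buf[offset:offset + count * width] = pattern * count

buf = bytearray(8)
memset_n(buf, 0, 0x0700, 4, 2)  # memset16-style fill of four u16 elements
```

The fast s390 versions get the same effect with a single MVC-based pattern propagation instead of a per-element loop.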

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux: (88 commits)
MAINTAINERS: add virtio-ccw.h to virtio/s390 section
s390/noexec: execute kexec datamover without DAT
s390: fix transactional execution control register handling
s390/bpf: take advantage of stack_depth tracking
s390: simplify transactional execution elf hwcap handling
s390/zcrypt: Rework struct ap_qact_ap_info.
s390/virtio: remove unused header file kvm_virtio.h
s390: avoid undefined behaviour
s390/disassembler: generate opcode tables from text file
s390/disassembler: remove insn_to_mnemonic()
s390/dasd: avoid calling do_gettimeofday()
s390: vfio-ccw: Do not attempt to free no-op, test and tic cda.
s390: remove named saved segment support
s390/archrandom: Reconsider s390 arch random implementation
s390/pci: do not require AIS facility
s390/qdio: sanitize put_indicator
s390/qdio: use atomic_cmpxchg
s390/nmi: avoid using long-displacement facility
s390: pass endianness info to sparse
s390/decompressor: remove informational messages
...

+4787 -4616
+3
Documentation/admin-guide/kernel-parameters.txt
···
 
 	noalign		[KNL,ARM]
 
+	noaltinstr	[S390] Disables alternative instructions patching
+			(CPU alternatives feature).
+
 	noapic		[SMP,APIC] Tells the kernel to not make use of any
 			IOAPICs that may be present in the system.
 
+1
MAINTAINERS
···
 L:	kvm@vger.kernel.org
 S:	Supported
 F:	drivers/s390/virtio/
+F:	arch/s390/include/uapi/asm/virtio-ccw.h
 
 VIRTIO GPU DRIVER
 M:	David Airlie <airlied@linux.ie>
+17 -26
arch/s390/Kconfig
···
 	select ARCH_BINFMT_ELF_STATE
 	select ARCH_HAS_DEVMEM_IS_ALLOWED
 	select ARCH_HAS_ELF_RANDOMIZE
+	select ARCH_HAS_FORTIFY_SOURCE
 	select ARCH_HAS_GCOV_PROFILE_ALL
 	select ARCH_HAS_GIGANTIC_PAGE if (MEMORY_ISOLATION && COMPACTION) || CMA
 	select ARCH_HAS_KCOV
···
 	select HAVE_DYNAMIC_FTRACE
 	select HAVE_DYNAMIC_FTRACE_WITH_REGS
 	select HAVE_EFFICIENT_UNALIGNED_ACCESS
-	select HAVE_EXIT_THREAD
 	select HAVE_FTRACE_MCOUNT_RECORD
 	select HAVE_FUNCTION_GRAPH_TRACER
 	select HAVE_FUNCTION_TRACER
···
 
 	  If unsure, say Y.
 
+config ALTERNATIVES
+	def_bool y
+	prompt "Patch optimized instructions for running CPU type"
+	help
+	  When enabled the kernel code is compiled with additional
+	  alternative instructions blocks optimized for newer CPU types.
+	  These alternative instructions blocks are patched at kernel boot
+	  time when running CPU supports them. This mechanism is used to
+	  optimize some critical code paths (i.e. spinlocks) for newer CPUs
+	  even if kernel is build to support older machine generations.
+
+	  This mechanism could be disabled by appending "noaltinstr"
+	  option to the kernel command line.
+
+	  If unsure, say Y.
+
 endmenu
 
 menu "Memory setup"
···
 	  Everybody who wants to run Linux under VM != VM4.2 should select
 	  this option.
 
-config SHARED_KERNEL
-	bool "VM shared kernel support"
-	depends on !JUMP_LABEL
-	help
-	  Select this option, if you want to share the text segment of the
-	  Linux kernel between different VM guests. This reduces memory
-	  usage with lots of guests but greatly increases kernel size.
-	  Also if a kernel was IPL'ed from a shared segment the kexec system
-	  call will not work.
-	  You should only select this option if you know what you are
-	  doing and want to exploit this feature.
-
 config CMM
 	def_tristate n
 	prompt "Cooperative memory management"
···
 
 	  Select this option if you want to run the kernel as a guest under
 	  the KVM hypervisor.
-
-config S390_GUEST_OLD_TRANSPORT
-	def_bool y
-	prompt "Guest support for old s390 virtio transport (DEPRECATED)"
-	depends on S390_GUEST
-	help
-	  Enable this option to add support for the old s390-virtio
-	  transport (i.e. virtio devices NOT based on virtio-ccw). This
-	  type of virtio devices is only available on the experimental
-	  kuli userspace or with old (< 2.6) qemu. If you are running
-	  with a modern version of qemu (which supports virtio-ccw since
-	  1.4 and uses it by default since version 2.4), you probably won't
-	  need this.
 
 endmenu
+2 -1
arch/s390/Makefile
···
 KBUILD_AFLAGS	+= -m64
 UTS_MACHINE	:= s390x
 STACK_SIZE	:= 16384
-CHECKFLAGS	+= -D__s390__ -D__s390x__
+CHECKFLAGS	+= -D__s390__ -D__s390x__ -mbig-endian
 
 export LD_BFD
 
···
 
 archprepare:
 	$(Q)$(MAKE) $(build)=$(tools) include/generated/facilities.h
+	$(Q)$(MAKE) $(build)=$(tools) include/generated/dis.h
 
 # Don't use tabs in echo arguments
 define archhelp
+1 -1
arch/s390/boot/compressed/Makefile
···
 targets += misc.o piggy.o sizes.h head.o
 
 KBUILD_CFLAGS := -m64 -D__KERNEL__ -O2
-KBUILD_CFLAGS += -DDISABLE_BRANCH_PROFILING
+KBUILD_CFLAGS += -DDISABLE_BRANCH_PROFILING -D__NO_FORTIFY
 KBUILD_CFLAGS += $(cflags-y) -fno-delete-null-pointer-checks -msoft-float
 KBUILD_CFLAGS += $(call cc-option,-mpacked-stack)
 KBUILD_CFLAGS += $(call cc-option,-ffreestanding)
-2
arch/s390/boot/compressed/misc.c
···
 	free_mem_ptr = (unsigned long) &_end;
 	free_mem_end_ptr = free_mem_ptr + HEAP_SIZE;
 
-	puts("Uncompressing Linux... ");
 	__decompress(input_data, input_len, NULL, NULL, output, 0, NULL, error);
-	puts("Ok, booting the kernel.\n");
 	return (unsigned long) output;
 }
 
+4 -7
arch/s390/configs/default_defconfig
···
 CONFIG_TRANSPARENT_HUGEPAGE=y
 CONFIG_CLEANCACHE=y
 CONFIG_FRONTSWAP=y
-CONFIG_CMA=y
 CONFIG_CMA_DEBUG=y
 CONFIG_CMA_DEBUGFS=y
 CONFIG_MEM_SOFT_DIRTY=y
···
 CONFIG_BLK_DEV_CRYPTOLOOP=m
 CONFIG_BLK_DEV_DRBD=m
 CONFIG_BLK_DEV_NBD=m
-CONFIG_BLK_DEV_OSD=m
 CONFIG_BLK_DEV_RAM=y
 CONFIG_BLK_DEV_RAM_SIZE=32768
 CONFIG_BLK_DEV_RAM_DAX=y
···
 CONFIG_MD=y
 CONFIG_BLK_DEV_MD=y
 CONFIG_MD_LINEAR=m
-CONFIG_MD_RAID0=m
 CONFIG_MD_MULTIPATH=m
 CONFIG_MD_FAULTY=m
 CONFIG_BLK_DEV_DM=m
···
 CONFIG_INFINIBAND_USER_ACCESS=m
 CONFIG_MLX4_INFINIBAND=m
 CONFIG_MLX5_INFINIBAND=m
+CONFIG_VFIO=m
+CONFIG_VFIO_PCI=m
 CONFIG_VIRTIO_BALLOON=m
 CONFIG_EXT4_FS=y
 CONFIG_EXT4_FS_POSIX_ACL=y
···
 CONFIG_WQ_WATCHDOG=y
 CONFIG_PANIC_ON_OOPS=y
 CONFIG_DEBUG_TIMEKEEPING=y
-CONFIG_DEBUG_RT_MUTEXES=y
 CONFIG_DEBUG_WW_MUTEX_SLOWPATH=y
 CONFIG_PROVE_LOCKING=y
 CONFIG_LOCK_STAT=y
···
 CONFIG_FTRACE_SYSCALLS=y
 CONFIG_STACK_TRACER=y
 CONFIG_BLK_DEV_IO_TRACE=y
-CONFIG_UPROBE_EVENTS=y
 CONFIG_FUNCTION_PROFILER=y
 CONFIG_HIST_TRIGGERS=y
-CONFIG_TRACE_ENUM_MAP_FILE=y
 CONFIG_LKDTM=m
 CONFIG_TEST_LIST_SORT=y
 CONFIG_TEST_SORT=y
···
 CONFIG_SECURITY=y
 CONFIG_SECURITY_NETWORK=y
 CONFIG_HARDENED_USERCOPY=y
+CONFIG_FORTIFY_SOURCE=y
 CONFIG_SECURITY_SELINUX=y
 CONFIG_SECURITY_SELINUX_BOOTPARAM=y
 CONFIG_SECURITY_SELINUX_BOOTPARAM_VALUE=0
···
 CONFIG_CRYPTO_USER_API_AEAD=m
 CONFIG_ZCRYPT=m
 CONFIG_PKEY=m
+CONFIG_CRYPTO_PAES_S390=m
 CONFIG_CRYPTO_SHA1_S390=m
 CONFIG_CRYPTO_SHA256_S390=m
 CONFIG_CRYPTO_SHA512_S390=m
 CONFIG_CRYPTO_DES_S390=m
 CONFIG_CRYPTO_AES_S390=m
-CONFIG_CRYPTO_PAES_S390=m
 CONFIG_CRYPTO_GHASH_S390=m
 CONFIG_CRYPTO_CRC32_S390=y
 CONFIG_ASYMMETRIC_KEY_TYPE=y
+3 -6
arch/s390/configs/gcov_defconfig
···
 CONFIG_TRANSPARENT_HUGEPAGE=y
 CONFIG_CLEANCACHE=y
 CONFIG_FRONTSWAP=y
-CONFIG_CMA=y
 CONFIG_MEM_SOFT_DIRTY=y
 CONFIG_ZSWAP=y
 CONFIG_ZBUD=m
···
 CONFIG_BLK_DEV_CRYPTOLOOP=m
 CONFIG_BLK_DEV_DRBD=m
 CONFIG_BLK_DEV_NBD=m
-CONFIG_BLK_DEV_OSD=m
 CONFIG_BLK_DEV_RAM=y
 CONFIG_BLK_DEV_RAM_SIZE=32768
 CONFIG_BLK_DEV_RAM_DAX=y
···
 CONFIG_MD=y
 CONFIG_BLK_DEV_MD=y
 CONFIG_MD_LINEAR=m
-CONFIG_MD_RAID0=m
 CONFIG_MD_MULTIPATH=m
 CONFIG_MD_FAULTY=m
 CONFIG_BLK_DEV_DM=m
···
 CONFIG_INFINIBAND_USER_ACCESS=m
 CONFIG_MLX4_INFINIBAND=m
 CONFIG_MLX5_INFINIBAND=m
+CONFIG_VFIO=m
+CONFIG_VFIO_PCI=m
 CONFIG_VIRTIO_BALLOON=m
 CONFIG_EXT4_FS=y
 CONFIG_EXT4_FS_POSIX_ACL=y
···
 CONFIG_FTRACE_SYSCALLS=y
 CONFIG_STACK_TRACER=y
 CONFIG_BLK_DEV_IO_TRACE=y
-CONFIG_UPROBE_EVENTS=y
 CONFIG_FUNCTION_PROFILER=y
 CONFIG_HIST_TRIGGERS=y
-CONFIG_TRACE_ENUM_MAP_FILE=y
 CONFIG_LKDTM=m
 CONFIG_PERCPU_TEST=m
 CONFIG_ATOMIC64_SELFTEST=y
···
 CONFIG_CRYPTO_USER_API_AEAD=m
 CONFIG_ZCRYPT=m
 CONFIG_PKEY=m
+CONFIG_CRYPTO_PAES_S390=m
 CONFIG_CRYPTO_SHA1_S390=m
 CONFIG_CRYPTO_SHA256_S390=m
 CONFIG_CRYPTO_SHA512_S390=m
 CONFIG_CRYPTO_DES_S390=m
 CONFIG_CRYPTO_AES_S390=m
-CONFIG_CRYPTO_PAES_S390=m
 CONFIG_CRYPTO_GHASH_S390=m
 CONFIG_CRYPTO_CRC32_S390=y
 CONFIG_CRC7=m
+3 -6
arch/s390/configs/performance_defconfig
···
 CONFIG_TRANSPARENT_HUGEPAGE=y
 CONFIG_CLEANCACHE=y
 CONFIG_FRONTSWAP=y
-CONFIG_CMA=y
 CONFIG_MEM_SOFT_DIRTY=y
 CONFIG_ZSWAP=y
 CONFIG_ZBUD=m
···
 CONFIG_BLK_DEV_CRYPTOLOOP=m
 CONFIG_BLK_DEV_DRBD=m
 CONFIG_BLK_DEV_NBD=m
-CONFIG_BLK_DEV_OSD=m
 CONFIG_BLK_DEV_RAM=y
 CONFIG_BLK_DEV_RAM_SIZE=32768
 CONFIG_BLK_DEV_RAM_DAX=y
···
 CONFIG_MD=y
 CONFIG_BLK_DEV_MD=y
 CONFIG_MD_LINEAR=m
-CONFIG_MD_RAID0=m
 CONFIG_MD_MULTIPATH=m
 CONFIG_MD_FAULTY=m
 CONFIG_BLK_DEV_DM=m
···
 CONFIG_INFINIBAND_USER_ACCESS=m
 CONFIG_MLX4_INFINIBAND=m
 CONFIG_MLX5_INFINIBAND=m
+CONFIG_VFIO=m
+CONFIG_VFIO_PCI=m
 CONFIG_VIRTIO_BALLOON=m
 CONFIG_EXT4_FS=y
 CONFIG_EXT4_FS_POSIX_ACL=y
···
 CONFIG_FTRACE_SYSCALLS=y
 CONFIG_STACK_TRACER=y
 CONFIG_BLK_DEV_IO_TRACE=y
-CONFIG_UPROBE_EVENTS=y
 CONFIG_FUNCTION_PROFILER=y
 CONFIG_HIST_TRIGGERS=y
-CONFIG_TRACE_ENUM_MAP_FILE=y
 CONFIG_LKDTM=m
 CONFIG_PERCPU_TEST=m
 CONFIG_ATOMIC64_SELFTEST=y
···
 CONFIG_CRYPTO_USER_API_AEAD=m
 CONFIG_ZCRYPT=m
 CONFIG_PKEY=m
+CONFIG_CRYPTO_PAES_S390=m
 CONFIG_CRYPTO_SHA1_S390=m
 CONFIG_CRYPTO_SHA256_S390=m
 CONFIG_CRYPTO_SHA512_S390=m
 CONFIG_CRYPTO_DES_S390=m
 CONFIG_CRYPTO_AES_S390=m
-CONFIG_CRYPTO_PAES_S390=m
 CONFIG_CRYPTO_GHASH_S390=m
 CONFIG_CRYPTO_CRC32_S390=y
 CONFIG_CRC7=m
+293 -3
arch/s390/crypto/aes_s390.c
···
 /*
  * s390 implementation of the AES Cipher Algorithm.
  *
  * s390 Version:
- *   Copyright IBM Corp. 2005, 2007
+ *   Copyright IBM Corp. 2005, 2017
  *   Author(s): Jan Glauber (jang@de.ibm.com)
  *		Sebastian Siewior (sebastian@breakpoint.cc> SW-Fallback
+ *		Patrick Steuer <patrick.steuer@de.ibm.com>
+ *		Harald Freudenberger <freude@de.ibm.com>
  *
  * Derived from "crypto/aes_generic.c"
  *
···
 
 #include <crypto/aes.h>
 #include <crypto/algapi.h>
+#include <crypto/ghash.h>
+#include <crypto/internal/aead.h>
 #include <crypto/internal/skcipher.h>
+#include <crypto/scatterwalk.h>
 #include <linux/err.h>
 #include <linux/module.h>
 #include <linux/cpufeature.h>
 #include <linux/init.h>
 #include <linux/spinlock.h>
 #include <linux/fips.h>
+#include <linux/string.h>
 #include <crypto/xts.h>
 #include <asm/cpacf.h>
 
 static u8 *ctrblk;
 static DEFINE_SPINLOCK(ctrblk_lock);
 
-static cpacf_mask_t km_functions, kmc_functions, kmctr_functions;
+static cpacf_mask_t km_functions, kmc_functions, kmctr_functions,
+		    kma_functions;
 
 struct s390_aes_ctx {
 	u8 key[AES_MAX_KEY_SIZE];
···
 	int key_len;
 	unsigned long fc;
 	struct crypto_skcipher *fallback;
+};
+
+struct gcm_sg_walk {
+	struct scatter_walk walk;
+	unsigned int walk_bytes;
+	u8 *walk_ptr;
+	unsigned int walk_bytes_remain;
+	u8 buf[AES_BLOCK_SIZE];
+	unsigned int buf_bytes;
+	u8 *ptr;
+	unsigned int nbytes;
 };
 
 static int setkey_fallback_cip(struct crypto_tfm *tfm, const u8 *in_key,
···
 	}
 };
 
+static int gcm_aes_setkey(struct crypto_aead *tfm, const u8 *key,
+			  unsigned int keylen)
+{
+	struct s390_aes_ctx *ctx = crypto_aead_ctx(tfm);
+
+	switch (keylen) {
+	case AES_KEYSIZE_128:
+		ctx->fc = CPACF_KMA_GCM_AES_128;
+		break;
+	case AES_KEYSIZE_192:
+		ctx->fc = CPACF_KMA_GCM_AES_192;
+		break;
+	case AES_KEYSIZE_256:
+		ctx->fc = CPACF_KMA_GCM_AES_256;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	memcpy(ctx->key, key, keylen);
+	ctx->key_len = keylen;
+	return 0;
+}
+
+static int gcm_aes_setauthsize(struct crypto_aead *tfm, unsigned int authsize)
+{
+	switch (authsize) {
+	case 4:
+	case 8:
+	case 12:
+	case 13:
+	case 14:
+	case 15:
+	case 16:
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static void gcm_sg_walk_start(struct gcm_sg_walk *gw, struct scatterlist *sg,
+			      unsigned int len)
+{
+	memset(gw, 0, sizeof(*gw));
+	gw->walk_bytes_remain = len;
+	scatterwalk_start(&gw->walk, sg);
+}
+
+static int gcm_sg_walk_go(struct gcm_sg_walk *gw, unsigned int minbytesneeded)
+{
+	int n;
+
+	/* minbytesneeded <= AES_BLOCK_SIZE */
+	if (gw->buf_bytes && gw->buf_bytes >= minbytesneeded) {
+		gw->ptr = gw->buf;
+		gw->nbytes = gw->buf_bytes;
+		goto out;
+	}
+
+	if (gw->walk_bytes_remain == 0) {
+		gw->ptr = NULL;
+		gw->nbytes = 0;
+		goto out;
+	}
+
+	gw->walk_bytes = scatterwalk_clamp(&gw->walk, gw->walk_bytes_remain);
+	if (!gw->walk_bytes) {
+		scatterwalk_start(&gw->walk, sg_next(gw->walk.sg));
+		gw->walk_bytes = scatterwalk_clamp(&gw->walk,
+						   gw->walk_bytes_remain);
+	}
+	gw->walk_ptr = scatterwalk_map(&gw->walk);
+
+	if (!gw->buf_bytes && gw->walk_bytes >= minbytesneeded) {
+		gw->ptr = gw->walk_ptr;
+		gw->nbytes = gw->walk_bytes;
+		goto out;
+	}
+
+	while (1) {
+		n = min(gw->walk_bytes, AES_BLOCK_SIZE - gw->buf_bytes);
+		memcpy(gw->buf + gw->buf_bytes, gw->walk_ptr, n);
+		gw->buf_bytes += n;
+		gw->walk_bytes_remain -= n;
+		scatterwalk_unmap(&gw->walk);
+		scatterwalk_advance(&gw->walk, n);
+		scatterwalk_done(&gw->walk, 0, gw->walk_bytes_remain);
+
+		if (gw->buf_bytes >= minbytesneeded) {
+			gw->ptr = gw->buf;
+			gw->nbytes = gw->buf_bytes;
+			goto out;
+		}
+
+		gw->walk_bytes = scatterwalk_clamp(&gw->walk,
+						   gw->walk_bytes_remain);
+		if (!gw->walk_bytes) {
+			scatterwalk_start(&gw->walk, sg_next(gw->walk.sg));
+			gw->walk_bytes = scatterwalk_clamp(&gw->walk,
+							gw->walk_bytes_remain);
+		}
+		gw->walk_ptr = scatterwalk_map(&gw->walk);
+	}
+
+out:
+	return gw->nbytes;
+}
+
+static void gcm_sg_walk_done(struct gcm_sg_walk *gw, unsigned int bytesdone)
+{
+	int n;
+
+	if (gw->ptr == NULL)
+		return;
+
+	if (gw->ptr == gw->buf) {
+		n = gw->buf_bytes - bytesdone;
+		if (n > 0) {
+			memmove(gw->buf, gw->buf + bytesdone, n);
+			gw->buf_bytes -= n;
+		} else
+			gw->buf_bytes = 0;
+	} else {
+		gw->walk_bytes_remain -= bytesdone;
+		scatterwalk_unmap(&gw->walk);
+		scatterwalk_advance(&gw->walk, bytesdone);
+		scatterwalk_done(&gw->walk, 0, gw->walk_bytes_remain);
+	}
+}
+
+static int gcm_aes_crypt(struct aead_request *req, unsigned int flags)
+{
+	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+	struct s390_aes_ctx *ctx = crypto_aead_ctx(tfm);
+	unsigned int ivsize = crypto_aead_ivsize(tfm);
+	unsigned int taglen = crypto_aead_authsize(tfm);
+	unsigned int aadlen = req->assoclen;
+	unsigned int pclen = req->cryptlen;
+	int ret = 0;
+
+	unsigned int len, in_bytes, out_bytes,
+		     min_bytes, bytes, aad_bytes, pc_bytes;
+	struct gcm_sg_walk gw_in, gw_out;
+	u8 tag[GHASH_DIGEST_SIZE];
+
+	struct {
+		u32 _[3];		/* reserved */
+		u32 cv;			/* Counter Value */
+		u8 t[GHASH_DIGEST_SIZE];/* Tag */
+		u8 h[AES_BLOCK_SIZE];	/* Hash-subkey */
+		u64 taadl;		/* Total AAD Length */
+		u64 tpcl;		/* Total Plain-/Cipher-text Length */
+		u8 j0[GHASH_BLOCK_SIZE];/* initial counter value */
+		u8 k[AES_MAX_KEY_SIZE];	/* Key */
+	} param;
+
+	/*
+	 * encrypt
+	 *   req->src: aad||plaintext
+	 *   req->dst: aad||ciphertext||tag
+	 * decrypt
+	 *   req->src: aad||ciphertext||tag
+	 *   req->dst: aad||plaintext, return 0 or -EBADMSG
+	 * aad, plaintext and ciphertext may be empty.
+	 */
+	if (flags & CPACF_DECRYPT)
+		pclen -= taglen;
+	len = aadlen + pclen;
+
+	memset(&param, 0, sizeof(param));
+	param.cv = 1;
+	param.taadl = aadlen * 8;
+	param.tpcl = pclen * 8;
+	memcpy(param.j0, req->iv, ivsize);
+	*(u32 *)(param.j0 + ivsize) = 1;
+	memcpy(param.k, ctx->key, ctx->key_len);
+
+	gcm_sg_walk_start(&gw_in, req->src, len);
+	gcm_sg_walk_start(&gw_out, req->dst, len);
+
+	do {
+		min_bytes = min_t(unsigned int,
+				  aadlen > 0 ? aadlen : pclen, AES_BLOCK_SIZE);
+		in_bytes = gcm_sg_walk_go(&gw_in, min_bytes);
+		out_bytes = gcm_sg_walk_go(&gw_out, min_bytes);
+		bytes = min(in_bytes, out_bytes);
+
+		if (aadlen + pclen <= bytes) {
+			aad_bytes = aadlen;
+			pc_bytes = pclen;
+			flags |= CPACF_KMA_LAAD | CPACF_KMA_LPC;
+		} else {
+			if (aadlen <= bytes) {
+				aad_bytes = aadlen;
+				pc_bytes = (bytes - aadlen) &
+					   ~(AES_BLOCK_SIZE - 1);
+				flags |= CPACF_KMA_LAAD;
+			} else {
+				aad_bytes = bytes & ~(AES_BLOCK_SIZE - 1);
+				pc_bytes = 0;
+			}
+		}
+
+		if (aad_bytes > 0)
+			memcpy(gw_out.ptr, gw_in.ptr, aad_bytes);
+
+		cpacf_kma(ctx->fc | flags, &param,
+			  gw_out.ptr + aad_bytes,
+			  gw_in.ptr + aad_bytes, pc_bytes,
+			  gw_in.ptr, aad_bytes);
+
+		gcm_sg_walk_done(&gw_in, aad_bytes + pc_bytes);
+		gcm_sg_walk_done(&gw_out, aad_bytes + pc_bytes);
+		aadlen -= aad_bytes;
+		pclen -= pc_bytes;
+	} while (aadlen + pclen > 0);
+
+	if (flags & CPACF_DECRYPT) {
+		scatterwalk_map_and_copy(tag, req->src, len, taglen, 0);
+		if (crypto_memneq(tag, param.t, taglen))
+			ret = -EBADMSG;
+	} else
+		scatterwalk_map_and_copy(param.t, req->dst, len, taglen, 1);
+
+	memzero_explicit(&param, sizeof(param));
+	return ret;
+}
+
+static int gcm_aes_encrypt(struct aead_request *req)
+{
+	return gcm_aes_crypt(req, CPACF_ENCRYPT);
+}
+
+static int gcm_aes_decrypt(struct aead_request *req)
+{
+	return gcm_aes_crypt(req, CPACF_DECRYPT);
+}
+
+static struct aead_alg gcm_aes_aead = {
+	.setkey			= gcm_aes_setkey,
+	.setauthsize		= gcm_aes_setauthsize,
+	.encrypt		= gcm_aes_encrypt,
+	.decrypt		= gcm_aes_decrypt,
+
+	.ivsize			= GHASH_BLOCK_SIZE - sizeof(u32),
+	.maxauthsize		= GHASH_DIGEST_SIZE,
+	.chunksize		= AES_BLOCK_SIZE,
+
+	.base = {
+		.cra_flags		= CRYPTO_ALG_TYPE_AEAD,
+		.cra_blocksize		= 1,
+		.cra_ctxsize		= sizeof(struct s390_aes_ctx),
+		.cra_priority		= 900,
+		.cra_name		= "gcm(aes)",
+		.cra_driver_name	= "gcm-aes-s390",
+		.cra_module		= THIS_MODULE,
+	},
+};
+
 static struct crypto_alg *aes_s390_algs_ptr[5];
 static int aes_s390_algs_num;
 
···
 		crypto_unregister_alg(aes_s390_algs_ptr[aes_s390_algs_num]);
 	if (ctrblk)
 		free_page((unsigned long) ctrblk);
+
+	crypto_unregister_aead(&gcm_aes_aead);
 }
 
 static int __init aes_s390_init(void)
 {
 	int ret;
 
-	/* Query available functions for KM, KMC and KMCTR */
+	/* Query available functions for KM, KMC, KMCTR and KMA */
 	cpacf_query(CPACF_KM, &km_functions);
 	cpacf_query(CPACF_KMC, &kmc_functions);
 	cpacf_query(CPACF_KMCTR, &kmctr_functions);
+	cpacf_query(CPACF_KMA, &kma_functions);
 
 	if (cpacf_test_func(&km_functions, CPACF_KM_AES_128) ||
 	    cpacf_test_func(&km_functions, CPACF_KM_AES_192) ||
···
 			goto out_err;
 	}
 		ret = aes_s390_register_alg(&ctr_aes_alg);
+		if (ret)
+			goto out_err;
+	}
+
+	if (cpacf_test_func(&kma_functions, CPACF_KMA_GCM_AES_128) ||
+	    cpacf_test_func(&kma_functions, CPACF_KMA_GCM_AES_192) ||
+	    cpacf_test_func(&kma_functions, CPACF_KMA_GCM_AES_256)) {
+		ret = crypto_register_aead(&gcm_aes_aead);
 		if (ret)
 			goto out_err;
 	}
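The KMA parameter block that gcm_aes_crypt() fills in follows the GCM spec: for the 12-byte IVs this driver advertises (ivsize = GHASH_BLOCK_SIZE - sizeof(u32)), the initial counter block J0 is IV || 0x00000001 (the driver likewise starts param.cv at 1), and taadl/tpcl carry the AAD and text lengths in bits. A small stand-alone sketch of just that bookkeeping (pure Python; `gcm_j0` and `gcm_len_fields` are illustrative names, not driver functions):

```python
def gcm_j0(iv: bytes) -> bytes:
    """Initial counter block: for a 96-bit IV, J0 = IV || 0x00000001,
    matching what the driver writes into param.j0."""
    if len(iv) != 12:
        raise ValueError("this sketch only covers the 96-bit IV case")
    return iv + (1).to_bytes(4, "big")

def gcm_len_fields(aadlen: int, pclen: int) -> tuple:
    """taadl/tpcl: total AAD and plaintext/ciphertext lengths in *bits*,
    i.e. the byte counts times 8."""
    return (aadlen * 8, pclen * 8)
```

The heavy lifting (CTR encryption plus GHASH over aad||ciphertext) is then done in one pass by the KMA instruction itself.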
-3
arch/s390/defconfig
···
 CONFIG_TRANSPARENT_HUGEPAGE=y
 CONFIG_CLEANCACHE=y
 CONFIG_FRONTSWAP=y
-CONFIG_CMA=y
 CONFIG_ZSWAP=y
 CONFIG_ZBUD=m
 CONFIG_ZSMALLOC=m
···
 CONFIG_DEBUG_PAGEALLOC=y
 CONFIG_DETECT_HUNG_TASK=y
 CONFIG_PANIC_ON_OOPS=y
-CONFIG_DEBUG_RT_MUTEXES=y
 CONFIG_PROVE_LOCKING=y
 CONFIG_LOCK_STAT=y
 CONFIG_DEBUG_LOCKDEP=y
···
 CONFIG_STACK_TRACER=y
 CONFIG_BLK_DEV_IO_TRACE=y
 CONFIG_FUNCTION_PROFILER=y
-CONFIG_TRACE_ENUM_MAP_FILE=y
 CONFIG_KPROBES_SANITY_TEST=y
 CONFIG_S390_PTDUMP=y
 CONFIG_CRYPTO_CRYPTD=m
+1
arch/s390/include/asm/Kbuild
···
 generic-y += mcs_spinlock.h
 generic-y += mm-arch-hooks.h
 generic-y += preempt.h
+generic-y += rwsem.h
 generic-y += trace_clock.h
 generic-y += unaligned.h
 generic-y += word-at-a-time.h
+163
arch/s390/include/asm/alternative.h
···
+#ifndef _ASM_S390_ALTERNATIVE_H
+#define _ASM_S390_ALTERNATIVE_H
+
+#ifndef __ASSEMBLY__
+
+#include <linux/types.h>
+#include <linux/stddef.h>
+#include <linux/stringify.h>
+
+struct alt_instr {
+	s32 instr_offset;	/* original instruction */
+	s32 repl_offset;	/* offset to replacement instruction */
+	u16 facility;		/* facility bit set for replacement */
+	u8  instrlen;		/* length of original instruction */
+	u8  replacementlen;	/* length of new instruction */
+} __packed;
+
+#ifdef CONFIG_ALTERNATIVES
+extern void apply_alternative_instructions(void);
+extern void apply_alternatives(struct alt_instr *start, struct alt_instr *end);
+#else
+static inline void apply_alternative_instructions(void) {};
+static inline void apply_alternatives(struct alt_instr *start,
+				      struct alt_instr *end) {};
+#endif
+/*
+ * |661:       |662:      |6620      |663:
+ * +-----------+---------------------+
+ * | oldinstr  | oldinstr_padding    |
+ * |           +----------+----------+
+ * |           |          |          |
+ * |           | >6 bytes |6/4/2 nops|
+ * |           |6 bytes jg----------->
+ * +-----------+---------------------+
+ *              ^^ static padding ^^
+ *
+ * .altinstr_replacement section
+ * +---------------------+-----------+
+ * |6641:                            |6651:
+ * | alternative instr 1             |
+ * +-----------+---------+- - - - - -+
+ * |6642:                |6652:      |
+ * | alternative instr 2 | padding
+ * +---------------------+- - - - - -+
+ *                        ^ runtime ^
+ *
+ * .altinstructions section
+ * +---------------------------------+
+ * | alt_instr entries for each      |
+ * | alternative instr               |
+ * +---------------------------------+
+ */
+
+#define b_altinstr(num)		"664"#num
+#define e_altinstr(num)		"665"#num
+
+#define e_oldinstr_pad_end	"663"
+#define oldinstr_len		"662b-661b"
+#define oldinstr_total_len	e_oldinstr_pad_end"b-661b"
+#define altinstr_len(num)	e_altinstr(num)"b-"b_altinstr(num)"b"
+#define oldinstr_pad_len(num) \
+	"-(((" altinstr_len(num) ")-(" oldinstr_len ")) > 0) * " \
+	"((" altinstr_len(num) ")-(" oldinstr_len "))"
+
+#define INSTR_LEN_SANITY_CHECK(len)					\
+	".if " len " > 254\n"						\
+	"\t.error \"cpu alternatives does not support instructions "	\
+		"blocks > 254 bytes\"\n"				\
+	".endif\n"							\
+	".if (" len ") %% 2\n"						\
+	"\t.error \"cpu alternatives instructions length is odd\"\n"	\
+	".endif\n"
+
+#define OLDINSTR_PADDING(oldinstr, num)					\
+	".if " oldinstr_pad_len(num) " > 6\n"				\
+	"\tjg " e_oldinstr_pad_end "f\n"				\
+	"6620:\n"							\
+	"\t.fill (" oldinstr_pad_len(num) " - (6620b-662b)) / 2, 2, 0x0700\n" \
+	".else\n"							\
+	"\t.fill " oldinstr_pad_len(num) " / 6, 6, 0xc0040000\n"	\
+	"\t.fill " oldinstr_pad_len(num) " %% 6 / 4, 4, 0x47000000\n"	\
+	"\t.fill " oldinstr_pad_len(num) " %% 6 %% 4 / 2, 2, 0x0700\n"	\
+	".endif\n"
+
+#define OLDINSTR(oldinstr, num)						\
+	"661:\n\t" oldinstr "\n662:\n"					\
+	OLDINSTR_PADDING(oldinstr, num)					\
+	e_oldinstr_pad_end ":\n"					\
+	INSTR_LEN_SANITY_CHECK(oldinstr_len)
+
+#define OLDINSTR_2(oldinstr, num1, num2)				\
+	"661:\n\t" oldinstr "\n662:\n"					\
+	".if " altinstr_len(num1) " < " altinstr_len(num2) "\n"		\
+	OLDINSTR_PADDING(oldinstr, num2)				\
+	".else\n"							\
+	OLDINSTR_PADDING(oldinstr, num1)				\
+	".endif\n"							\
+	e_oldinstr_pad_end ":\n"					\
+	INSTR_LEN_SANITY_CHECK(oldinstr_len)
+
+#define ALTINSTR_ENTRY(facility, num)					\
+	"\t.long 661b - .\n"			/* old instruction */	\
+	"\t.long " b_altinstr(num)"b - .\n"	/* alt instruction */	\
+	"\t.word " __stringify(facility) "\n"	/* facility bit    */	\
+	"\t.byte " oldinstr_total_len "\n"	/* source len      */	\
+	"\t.byte " altinstr_len(num) "\n"	/* alt instruction len */
+
+#define ALTINSTR_REPLACEMENT(altinstr, num)	/* replacement */	\
+	b_altinstr(num)":\n\t" altinstr "\n" e_altinstr(num) ":\n"	\
+	INSTR_LEN_SANITY_CHECK(altinstr_len(num))
+
+#ifdef CONFIG_ALTERNATIVES
+/* alternative assembly primitive: */
+#define ALTERNATIVE(oldinstr, altinstr, facility) \
+	".pushsection .altinstr_replacement, \"ax\"\n"			\
+	ALTINSTR_REPLACEMENT(altinstr, 1)				\
+	".popsection\n"							\
+	OLDINSTR(oldinstr, 1)						\
+	".pushsection .altinstructions,\"a\"\n"				\
+	ALTINSTR_ENTRY(facility, 1)					\
+	".popsection\n"
+
+#define ALTERNATIVE_2(oldinstr, altinstr1, facility1, altinstr2, facility2)\
+	".pushsection .altinstr_replacement, \"ax\"\n"			\
+	ALTINSTR_REPLACEMENT(altinstr1, 1)				\
+	ALTINSTR_REPLACEMENT(altinstr2, 2)				\
+	".popsection\n"							\
+	OLDINSTR_2(oldinstr, 1, 2)					\
+	".pushsection .altinstructions,\"a\"\n"				\
+	ALTINSTR_ENTRY(facility1, 1)					\
+	ALTINSTR_ENTRY(facility2, 2)					\
+	".popsection\n"
+#else
+/* Alternative instructions are disabled, let's put just oldinstr in */
+#define ALTERNATIVE(oldinstr, altinstr, facility) \
+	oldinstr "\n"
+
+#define ALTERNATIVE_2(oldinstr, altinstr1, facility1, altinstr2, facility2) \
+	oldinstr "\n"
+#endif
+
+/*
+ * Alternative instructions for different CPU types or capabilities.
+ *
+ * This allows to use optimized instructions even on generic binary
+ * kernels.
+ *
+ * oldinstr is padded with jump and nops at compile time if altinstr is
+ * longer. altinstr is padded with jump and nops at run-time during patching.
+ *
+ * For non barrier like inlines please define new variants
+ * without volatile and memory clobber.
+ */
+#define alternative(oldinstr, altinstr, facility)			\
+	asm volatile(ALTERNATIVE(oldinstr, altinstr, facility) : : : "memory")
+
+#define alternative_2(oldinstr, altinstr1, facility1, altinstr2, facility2) \
+	asm volatile(ALTERNATIVE_2(oldinstr, altinstr1, facility1,	\
+				   altinstr2, facility2) ::: "memory")
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_S390_ALTERNATIVE_H */
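The compile-time padding in OLDINSTR_PADDING fills the gap between oldinstr and the longest alternative with s390 nops: 6-byte nops (0xc0040000) first, then at most one 4-byte (0x47000000) and one 2-byte (0x0700) nop; for gaps larger than 6 bytes a jg over the padding is emitted instead. The size split used by the `.fill` directives can be sketched as (Python, illustration only; `nop_fill` is not a kernel function):

```python
def nop_fill(pad_len: int) -> list:
    """Split an even padding length into the nop sizes the .fill
    directives emit: pad/6 six-byte nops, then pad%6/4 four-byte nops,
    then pad%6%4/2 two-byte nops. (The macro only takes this branch
    for gaps of at most 6 bytes; larger gaps are skipped with a jg.)"""
    sizes = []
    sizes += [6] * (pad_len // 6)
    sizes += [4] * (pad_len % 6 // 4)
    sizes += [2] * (pad_len % 6 % 4 // 2)
    return sizes
```

Because the sanity checks reject odd instruction lengths, every padding length decomposes exactly into these sizes.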
+13 -13
arch/s390/include/asm/archrandom.h
···
 
 static inline bool arch_has_random(void)
 {
-	if (static_branch_likely(&s390_arch_random_available))
-		return true;
 	return false;
 }
 
 static inline bool arch_has_random_seed(void)
 {
-	return arch_has_random();
+	if (static_branch_likely(&s390_arch_random_available))
+		return true;
+	return false;
 }
 
 static inline bool arch_get_random_long(unsigned long *v)
 {
-	if (static_branch_likely(&s390_arch_random_available)) {
-		s390_arch_random_generate((u8 *)v, sizeof(*v));
-		return true;
-	}
 	return false;
 }
 
 static inline bool arch_get_random_int(unsigned int *v)
 {
+	return false;
+}
+
+static inline bool arch_get_random_seed_long(unsigned long *v)
+{
 	if (static_branch_likely(&s390_arch_random_available)) {
 		s390_arch_random_generate((u8 *)v, sizeof(*v));
 		return true;
···
 	return false;
 }
 
-static inline bool arch_get_random_seed_long(unsigned long *v)
-{
-	return arch_get_random_long(v);
-}
-
 static inline bool arch_get_random_seed_int(unsigned int *v)
 {
-	return arch_get_random_int(v);
+	if (static_branch_likely(&s390_arch_random_available)) {
+		s390_arch_random_generate((u8 *)v, sizeof(*v));
+		return true;
+	}
+	return false;
 }
 
 #endif /* CONFIG_ARCH_RANDOM */
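The archrandom rework above narrows the interface: the plain arch_get_random_long/int callbacks now always report no data, and only the seed-oriented arch_get_random_seed_long/int draw from the TRNG facility when it is available. A toy model of that split (Python, illustration only; `ArchRandom` is a hypothetical stand-in, not kernel code):

```python
class ArchRandom:
    """Model of the reworked interface: only the *_seed_* calls are
    backed by the (simulated) TRNG facility; the plain getters always
    return None, mirroring the kernel's unconditional `return false`."""
    def __init__(self, trng_available: bool):
        self.trng_available = trng_available

    def arch_get_random_long(self):
        return None  # no longer backed by the facility

    def arch_get_random_seed_long(self):
        if self.trng_available:
            return 0x1234567890ABCDEF  # stand-in for generated entropy
        return None
```

The effect is that the slow TRNG feeds only the entropy-seeding paths instead of every random-number request.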
+21 -11
arch/s390/include/asm/atomic_ops.h
···
 #undef __ATOMIC_OPS
 #undef __ATOMIC_OP
 
-static inline void __atomic_add_const(int val, int *ptr)
-{
-	asm volatile(
-		"	asi	%[ptr],%[val]\n"
-		: [ptr] "+Q" (*ptr) : [val] "i" (val) : "cc");
+#define __ATOMIC_CONST_OP(op_name, op_type, op_string, op_barrier)	\
+static inline void op_name(op_type val, op_type *ptr)			\
+{									\
+	asm volatile(							\
+		op_string "	%[ptr],%[val]\n"			\
+		op_barrier						\
+		: [ptr] "+Q" (*ptr) : [val] "i" (val) : "cc", "memory");\
 }
 
-static inline void __atomic64_add_const(long val, long *ptr)
-{
-	asm volatile(
-		"	agsi	%[ptr],%[val]\n"
-		: [ptr] "+Q" (*ptr) : [val] "i" (val) : "cc");
-}
+#define __ATOMIC_CONST_OPS(op_name, op_type, op_string)			\
+	__ATOMIC_CONST_OP(op_name, op_type, op_string, "\n")		\
+	__ATOMIC_CONST_OP(op_name##_barrier, op_type, op_string, "bcr 14,0\n")
+
+__ATOMIC_CONST_OPS(__atomic_add_const, int, "asi")
+__ATOMIC_CONST_OPS(__atomic64_add_const, long, "agsi")
+
+#undef __ATOMIC_CONST_OPS
+#undef __ATOMIC_CONST_OP
 
 #else /* CONFIG_HAVE_MARCH_Z196_FEATURES */
 
···
 __ATOMIC64_OPS(__atomic64_xor, "xgr")
 
 #undef __ATOMIC64_OPS
+
+#define __atomic_add_const(val, ptr)		__atomic_add(val, ptr)
+#define __atomic_add_const_barrier(val, ptr)	__atomic_add(val, ptr)
+#define __atomic64_add_const(val, ptr)		__atomic64_add(val, ptr)
+#define __atomic64_add_const_barrier(val, ptr)	__atomic64_add(val, ptr)
 
 #endif /* CONFIG_HAVE_MARCH_Z196_FEATURES */
 
+2
arch/s390/include/asm/ccwgroup.h
··· 42 42 * @thaw: undo work done in @freeze 43 43 * @restore: callback for restoring after hibernation 44 44 * @driver: embedded driver structure 45 + * @ccw_driver: supported ccw_driver (optional) 45 46 */ 46 47 struct ccwgroup_driver { 47 48 int (*setup) (struct ccwgroup_device *); ··· 57 56 int (*restore)(struct ccwgroup_device *); 58 57 59 58 struct device_driver driver; 59 + struct ccw_driver *ccw_driver; 60 60 }; 61 61 62 62 extern int ccwgroup_driver_register (struct ccwgroup_driver *cdriver);
+51 -1
arch/s390/include/asm/cpacf.h
··· 2 2 /* 3 3 * CP Assist for Cryptographic Functions (CPACF) 4 4 * 5 - * Copyright IBM Corp. 2003, 2016 5 + * Copyright IBM Corp. 2003, 2017 6 6 * Author(s): Thomas Spatzier 7 7 * Jan Glauber 8 8 * Harald Freudenberger (freude@de.ibm.com) ··· 134 134 #define CPACF_PRNO_TRNG_Q_R2C_RATIO 0x70 135 135 #define CPACF_PRNO_TRNG 0x72 136 136 137 + /* 138 + * Function codes for the KMA (CIPHER MESSAGE WITH AUTHENTICATION) 139 + * instruction 140 + */ 141 + #define CPACF_KMA_QUERY 0x00 142 + #define CPACF_KMA_GCM_AES_128 0x12 143 + #define CPACF_KMA_GCM_AES_192 0x13 144 + #define CPACF_KMA_GCM_AES_256 0x14 145 + 146 + /* 147 + * Flags for the KMA (CIPHER MESSAGE WITH AUTHENTICATION) instruction 148 + */ 149 + #define CPACF_KMA_LPC 0x100 /* Last-Plaintext/Ciphertext */ 150 + #define CPACF_KMA_LAAD 0x200 /* Last-AAD */ 151 + #define CPACF_KMA_HS 0x400 /* Hash-subkey Supplied */ 152 + 137 153 typedef struct { unsigned char bytes[16]; } cpacf_mask_t; 138 154 139 155 /** ··· 195 179 return test_facility(77); /* check for MSA4 */ 196 180 case CPACF_PRNO: 197 181 return test_facility(57); /* check for MSA5 */ 182 + case CPACF_KMA: 183 + return test_facility(146); /* check for MSA8 */ 198 184 default: 199 185 BUG(); 200 186 } ··· 485 467 " .insn rre,%[opc] << 16,0,0\n" /* PCKMO opcode */ 486 468 : 487 469 : [fc] "d" (r0), [pba] "a" (r1), [opc] "i" (CPACF_PCKMO) 470 + : "cc", "memory"); 471 + } 472 + 473 + /** 474 + * cpacf_kma() - executes the KMA (CIPHER MESSAGE WITH AUTHENTICATION) 475 + * instruction 476 + * @func: the function code passed to KMA; see CPACF_KMA_xxx defines 477 + * @param: address of parameter block; see POP for details on each func 478 + * @dest: address of destination memory area 479 + * @src: address of source memory area 480 + * @src_len: length of src operand in bytes 481 + * @aad: address of additional authenticated data memory area 482 + * @aad_len: length of aad operand in bytes 483 + */ 484 + static inline void cpacf_kma(unsigned long func, void *param, u8 *dest, 485 + const u8 *src, unsigned long src_len, 486 + const u8 *aad, unsigned long aad_len) 487 + { 488 + register unsigned long r0 asm("0") = (unsigned long) func; 489 + register unsigned long r1 asm("1") = (unsigned long) param; 490 + register unsigned long r2 asm("2") = (unsigned long) src; 491 + register unsigned long r3 asm("3") = (unsigned long) src_len; 492 + register unsigned long r4 asm("4") = (unsigned long) aad; 493 + register unsigned long r5 asm("5") = (unsigned long) aad_len; 494 + register unsigned long r6 asm("6") = (unsigned long) dest; 495 + 496 + asm volatile( 497 + "0: .insn rrf,%[opc] << 16,%[dst],%[src],%[aad],0\n" 498 + " brc 1,0b\n" /* handle partial completion */ 499 + : [dst] "+a" (r6), [src] "+a" (r2), [slen] "+d" (r3), 500 + [aad] "+a" (r4), [alen] "+d" (r5) 501 + : [fc] "d" (r0), [pba] "a" (r1), [opc] "i" (CPACF_KMA) 488 502 : "cc", "memory"); 489 503 } 490 504
+31 -1
arch/s390/include/asm/ctl_reg.h
··· 8 8 #ifndef __ASM_CTL_REG_H 9 9 #define __ASM_CTL_REG_H 10 10 11 + #include <linux/const.h> 12 + 13 + #define CR2_GUARDED_STORAGE _BITUL(63 - 59) 14 + 15 + #define CR14_CHANNEL_REPORT_SUBMASK _BITUL(63 - 35) 16 + #define CR14_RECOVERY_SUBMASK _BITUL(63 - 36) 17 + #define CR14_DEGRADATION_SUBMASK _BITUL(63 - 37) 18 + #define CR14_EXTERNAL_DAMAGE_SUBMASK _BITUL(63 - 38) 19 + #define CR14_WARNING_SUBMASK _BITUL(63 - 39) 20 + 21 + #ifndef __ASSEMBLY__ 22 + 11 23 #include <linux/bug.h> 12 24 13 25 #define __ctl_load(array, low, high) do { \ ··· 67 55 union ctlreg0 { 68 56 unsigned long val; 69 57 struct { 70 - unsigned long : 32; 58 + unsigned long : 8; 59 + unsigned long tcx : 1; /* Transactional-Execution control */ 60 + unsigned long pifo : 1; /* Transactional-Execution Program- 61 + Interruption-Filtering Override */ 62 + unsigned long : 22; 71 63 unsigned long : 3; 72 64 unsigned long lap : 1; /* Low-address-protection control */ 73 65 unsigned long : 4; ··· 87 71 }; 88 72 }; 89 73 74 + union ctlreg2 { 75 + unsigned long val; 76 + struct { 77 + unsigned long : 33; 78 + unsigned long ducto : 25; 79 + unsigned long : 1; 80 + unsigned long gse : 1; 81 + unsigned long : 1; 82 + unsigned long tds : 1; 83 + unsigned long tdc : 2; 84 + }; 85 + }; 86 + 90 87 #ifdef CONFIG_SMP 91 88 # define ctl_set_bit(cr, bit) smp_ctl_set_bit(cr, bit) 92 89 # define ctl_clear_bit(cr, bit) smp_ctl_clear_bit(cr, bit) ··· 108 79 # define ctl_clear_bit(cr, bit) __ctl_clear_bit(cr, bit) 109 80 #endif 110 81 82 + #endif /* __ASSEMBLY__ */ 111 83 #endif /* __ASM_CTL_REG_H */
+96 -90
arch/s390/include/asm/debug.h
··· 14 14 #include <linux/refcount.h> 15 15 #include <uapi/asm/debug.h> 16 16 17 - #define DEBUG_MAX_LEVEL 6 /* debug levels range from 0 to 6 */ 18 - #define DEBUG_OFF_LEVEL -1 /* level where debug is switched off */ 19 - #define DEBUG_FLUSH_ALL -1 /* parameter to flush all areas */ 20 - #define DEBUG_MAX_VIEWS 10 /* max number of views in proc fs */ 21 - #define DEBUG_MAX_NAME_LEN 64 /* max length for a debugfs file name */ 22 - #define DEBUG_DEFAULT_LEVEL 3 /* initial debug level */ 17 + #define DEBUG_MAX_LEVEL 6 /* debug levels range from 0 to 6 */ 18 + #define DEBUG_OFF_LEVEL -1 /* level where debug is switched off */ 19 + #define DEBUG_FLUSH_ALL -1 /* parameter to flush all areas */ 20 + #define DEBUG_MAX_VIEWS 10 /* max number of views in proc fs */ 21 + #define DEBUG_MAX_NAME_LEN 64 /* max length for a debugfs file name */ 22 + #define DEBUG_DEFAULT_LEVEL 3 /* initial debug level */ 23 23 24 24 #define DEBUG_DIR_ROOT "s390dbf" /* name of debug root directory in proc fs */ 25 25 26 - #define DEBUG_DATA(entry) (char*)(entry + 1) /* data is stored behind */ 27 - /* the entry information */ 26 + #define DEBUG_DATA(entry) (char *)(entry + 1) /* data is stored behind */ 27 + /* the entry information */ 28 28 29 29 typedef struct __debug_entry debug_entry_t; 30 30 31 31 struct debug_view; 32 32 33 - typedef struct debug_info { 34 - struct debug_info* next; 35 - struct debug_info* prev; 33 + typedef struct debug_info { 34 + struct debug_info *next; 35 + struct debug_info *prev; 36 36 refcount_t ref_count; 37 - spinlock_t lock; 37 + spinlock_t lock; 38 38 int level; 39 39 int nr_areas; 40 40 int pages_per_area; 41 41 int buf_size; 42 - int entry_size; 43 - debug_entry_t*** areas; 42 + int entry_size; 43 + debug_entry_t ***areas; 44 44 int active_area; 45 45 int *active_pages; 46 46 int *active_entries; 47 - struct dentry* debugfs_root_entry; 48 - struct dentry* debugfs_entries[DEBUG_MAX_VIEWS]; 49 - struct debug_view* views[DEBUG_MAX_VIEWS]; 47 + struct dentry *debugfs_root_entry; 48 + struct dentry *debugfs_entries[DEBUG_MAX_VIEWS]; 49 + struct debug_view *views[DEBUG_MAX_VIEWS]; 50 50 char name[DEBUG_MAX_NAME_LEN]; 51 51 umode_t mode; 52 52 } debug_info_t; 53 53 54 - typedef int (debug_header_proc_t) (debug_info_t* id, 55 - struct debug_view* view, 54 + typedef int (debug_header_proc_t) (debug_info_t *id, 55 + struct debug_view *view, 56 56 int area, 57 - debug_entry_t* entry, 58 - char* out_buf); 57 + debug_entry_t *entry, 58 + char *out_buf); 59 59 60 - typedef int (debug_format_proc_t) (debug_info_t* id, 61 - struct debug_view* view, char* out_buf, 62 - const char* in_buf); 63 - typedef int (debug_prolog_proc_t) (debug_info_t* id, 64 - struct debug_view* view, 65 - char* out_buf); 66 - typedef int (debug_input_proc_t) (debug_info_t* id, 67 - struct debug_view* view, 68 - struct file* file, 60 + typedef int (debug_format_proc_t) (debug_info_t *id, 61 + struct debug_view *view, char *out_buf, 62 + const char *in_buf); 63 + typedef int (debug_prolog_proc_t) (debug_info_t *id, 64 + struct debug_view *view, 65 + char *out_buf); 66 + typedef int (debug_input_proc_t) (debug_info_t *id, 67 + struct debug_view *view, 68 + struct file *file, 69 69 const char __user *user_buf, 70 - size_t in_buf_size, loff_t* offset); 70 + size_t in_buf_size, loff_t *offset); 71 71 72 - int debug_dflt_header_fn(debug_info_t* id, struct debug_view* view, 73 - int area, debug_entry_t* entry, char* out_buf); 74 - 72 + int debug_dflt_header_fn(debug_info_t *id, struct debug_view *view, 73 + int area, debug_entry_t *entry, char *out_buf); 74 + 75 75 struct debug_view { 76 76 char name[DEBUG_MAX_NAME_LEN]; 77 - debug_prolog_proc_t* prolog_proc; 78 - debug_header_proc_t* header_proc; 79 - debug_format_proc_t* format_proc; 80 - debug_input_proc_t* input_proc; 81 - void* private_data; 77 + debug_prolog_proc_t *prolog_proc; 78 + debug_header_proc_t *header_proc; 79 + debug_format_proc_t *format_proc; 80 + debug_input_proc_t *input_proc; 81 + void *private_data; 82 82 }; 83 83 84 84 extern struct debug_view debug_hex_ascii_view; ··· 87 87 88 88 /* do NOT use the _common functions */ 89 89 90 - debug_entry_t* debug_event_common(debug_info_t* id, int level, 91 - const void* data, int length); 90 + debug_entry_t *debug_event_common(debug_info_t *id, int level, 91 + const void *data, int length); 92 92 93 - debug_entry_t* debug_exception_common(debug_info_t* id, int level, 94 - const void* data, int length); 93 + debug_entry_t *debug_exception_common(debug_info_t *id, int level, 94 + const void *data, int length); 95 95 96 96 /* Debug Feature API: */ 97 97 98 98 debug_info_t *debug_register(const char *name, int pages, int nr_areas, 99 - int buf_size); 99 + int buf_size); 100 100 101 101 debug_info_t *debug_register_mode(const char *name, int pages, int nr_areas, 102 102 int buf_size, umode_t mode, uid_t uid, 103 103 gid_t gid); 104 104 105 - void debug_unregister(debug_info_t* id); 105 + void debug_unregister(debug_info_t *id); 106 106 107 - void debug_set_level(debug_info_t* id, int new_level); 107 + void debug_set_level(debug_info_t *id, int new_level); 108 108 109 109 void debug_set_critical(void); 110 110 void debug_stop_all(void); 111 111 112 - static inline bool debug_level_enabled(debug_info_t* id, int level) 112 + static inline bool debug_level_enabled(debug_info_t *id, int level) 113 113 { 114 114 return level <= id->level; 115 115 } 116 116 117 - static inline debug_entry_t* 118 - debug_event(debug_info_t* id, int level, void* data, int length) 117 + static inline debug_entry_t *debug_event(debug_info_t *id, int level, 118 + void *data, int length) 119 119 { 120 120 if ((!id) || (level > id->level) || (id->pages_per_area == 0)) 121 121 return NULL; 122 - return debug_event_common(id,level,data,length); 122 + return debug_event_common(id, level, data, length); 123 123 } 124 124 125 - static inline debug_entry_t* 126 - debug_int_event(debug_info_t* id, int level, unsigned int tag) 125 + static inline debug_entry_t *debug_int_event(debug_info_t *id, int level, 126 + unsigned int tag) 127 127 { 128 - unsigned int t=tag; 128 + unsigned int t = tag; 129 + 129 130 if ((!id) || (level > id->level) || (id->pages_per_area == 0)) 130 131 return NULL; 131 - return debug_event_common(id,level,&t,sizeof(unsigned int)); 132 + return debug_event_common(id, level, &t, sizeof(unsigned int)); 132 133 } 133 134 134 - static inline debug_entry_t * 135 - debug_long_event (debug_info_t* id, int level, unsigned long tag) 135 + static inline debug_entry_t *debug_long_event(debug_info_t *id, int level, 136 + unsigned long tag) 136 137 { 137 - unsigned long t=tag; 138 + unsigned long t = tag; 139 + 138 140 if ((!id) || (level > id->level) || (id->pages_per_area == 0)) 139 141 return NULL; 140 - return debug_event_common(id,level,&t,sizeof(unsigned long)); 142 + return debug_event_common(id, level, &t, sizeof(unsigned long)); 141 143 } 142 144 143 - static inline debug_entry_t* 144 - debug_text_event(debug_info_t* id, int level, const char* txt) 145 + static inline debug_entry_t *debug_text_event(debug_info_t *id, int level, 146 + const char *txt) 145 147 { 146 148 if ((!id) || (level > id->level) || (id->pages_per_area == 0)) 147 149 return NULL; 148 - return debug_event_common(id,level,txt,strlen(txt)); 150 + return debug_event_common(id, level, txt, strlen(txt)); 149 151 } 150 152 151 153 /* ··· 163 161 debug_entry_t *__ret; \ 164 162 debug_info_t *__id = _id; \ 165 163 int __level = _level; \ 164 + \ 166 165 if ((!__id) || (__level > __id->level)) \ 167 166 __ret = NULL; \ 168 167 else \ ··· 172 169 __ret; \ 173 170 }) 174 171 175 - static inline debug_entry_t* 176 - debug_exception(debug_info_t* id, int level, void* data, int length) 172 + static inline debug_entry_t *debug_exception(debug_info_t *id, int level, 173 + void *data, int length) 177 174 { 178 175 if ((!id) || (level > id->level) || (id->pages_per_area == 0)) 179 176 return NULL; 180 - return debug_exception_common(id,level,data,length); 177 + return debug_exception_common(id, level, data, length); 181 178 } 182 179 183 - static inline debug_entry_t* 184 - debug_int_exception(debug_info_t* id, int level, unsigned int tag) 180 + static inline debug_entry_t *debug_int_exception(debug_info_t *id, int level, 181 + unsigned int tag) 185 182 { 186 - unsigned int t=tag; 183 + unsigned int t = tag; 184 + 187 185 if ((!id) || (level > id->level) || (id->pages_per_area == 0)) 188 186 return NULL; 189 - return debug_exception_common(id,level,&t,sizeof(unsigned int)); 187 + return debug_exception_common(id, level, &t, sizeof(unsigned int)); 190 188 } 191 189 192 - static inline debug_entry_t * 193 - debug_long_exception (debug_info_t* id, int level, unsigned long tag) 190 + static inline debug_entry_t *debug_long_exception (debug_info_t *id, int level, 191 + unsigned long tag) 194 192 { 195 - unsigned long t=tag; 193 + unsigned long t = tag; 194 + 196 195 if ((!id) || (level > id->level) || (id->pages_per_area == 0)) 197 196 return NULL; 198 - return debug_exception_common(id,level,&t,sizeof(unsigned long)); 197 + return debug_exception_common(id, level, &t, sizeof(unsigned long)); 199 198 } 200 199 201 - static inline debug_entry_t* 202 - debug_text_exception(debug_info_t* id, int level, const char* txt) 200 + static inline debug_entry_t *debug_text_exception(debug_info_t *id, int level, 201 + const char *txt) 203 202 { 204 203 if ((!id) || (level > id->level) || (id->pages_per_area == 0)) 205 204 return NULL; 206 - return debug_exception_common(id,level,txt,strlen(txt)); 205 + return debug_exception_common(id, level, txt, strlen(txt)); 207 206 } 208 207 209 208 /* ··· 221 216 debug_entry_t *__ret; \ 222 217 debug_info_t *__id = _id; \ 223 218 int __level = _level; \ 219 + \ 224 220 if ((!__id) || (__level > __id->level)) \ 225 221 __ret = NULL; \ 226 222 else \ ··· 230 224 __ret; \ 231 225 }) 232 226 233 - int debug_register_view(debug_info_t* id, struct debug_view* view); 234 - int debug_unregister_view(debug_info_t* id, struct debug_view* view); 227 + int debug_register_view(debug_info_t *id, struct debug_view *view); 228 + int debug_unregister_view(debug_info_t *id, struct debug_view *view); 235 229 236 230 /* 237 231 define the debug levels: 238 232 - 0 No debugging output to console or syslog 239 - - 1 Log internal errors to syslog, ignore check conditions 233 + - 1 Log internal errors to syslog, ignore check conditions 240 234 - 2 Log internal errors and check conditions to syslog 241 235 - 3 Log internal errors to console, log check conditions to syslog 242 236 - 4 Log internal errors and check conditions to console ··· 254 248 #define INTERNAL_DEBMSG(x,y...) "D" __FILE__ "%d: " x, __LINE__, y 255 249 256 250 #if DEBUG_LEVEL > 0 257 - #define PRINT_DEBUG(x...) printk ( KERN_DEBUG PRINTK_HEADER x ) 258 - #define PRINT_INFO(x...) printk ( KERN_INFO PRINTK_HEADER x ) 259 - #define PRINT_WARN(x...) printk ( KERN_WARNING PRINTK_HEADER x ) 260 - #define PRINT_ERR(x...) printk ( KERN_ERR PRINTK_HEADER x ) 261 - #define PRINT_FATAL(x...) panic ( PRINTK_HEADER x ) 251 + #define PRINT_DEBUG(x...) printk(KERN_DEBUG PRINTK_HEADER x) 252 + #define PRINT_INFO(x...) printk(KERN_INFO PRINTK_HEADER x) 253 + #define PRINT_WARN(x...) printk(KERN_WARNING PRINTK_HEADER x) 254 + #define PRINT_ERR(x...) printk(KERN_ERR PRINTK_HEADER x) 255 + #define PRINT_FATAL(x...) panic(PRINTK_HEADER x) 262 256 #else 263 - #define PRINT_DEBUG(x...) printk ( KERN_DEBUG PRINTK_HEADER x ) 264 - #define PRINT_INFO(x...) printk ( KERN_DEBUG PRINTK_HEADER x ) 265 - #define PRINT_WARN(x...) printk ( KERN_DEBUG PRINTK_HEADER x ) 266 - #define PRINT_ERR(x...) printk ( KERN_DEBUG PRINTK_HEADER x ) 267 - #define PRINT_FATAL(x...) printk ( KERN_DEBUG PRINTK_HEADER x ) 268 - #endif /* DASD_DEBUG */ 257 + #define PRINT_DEBUG(x...) printk(KERN_DEBUG PRINTK_HEADER x) 258 + #define PRINT_INFO(x...) printk(KERN_DEBUG PRINTK_HEADER x) 259 + #define PRINT_WARN(x...) printk(KERN_DEBUG PRINTK_HEADER x) 260 + #define PRINT_ERR(x...) printk(KERN_DEBUG PRINTK_HEADER x) 261 + #define PRINT_FATAL(x...) printk(KERN_DEBUG PRINTK_HEADER x) 262 + #endif /* DASD_DEBUG */ 269 263 270 - #endif /* DEBUG_H */ 264 + #endif /* DEBUG_H */
+1 -27
arch/s390/include/asm/dis.h
··· 9 9 #ifndef __ASM_S390_DIS_H__ 10 10 #define __ASM_S390_DIS_H__ 11 11 12 - /* Type of operand */ 13 - #define OPERAND_GPR 0x1 /* Operand printed as %rx */ 14 - #define OPERAND_FPR 0x2 /* Operand printed as %fx */ 15 - #define OPERAND_AR 0x4 /* Operand printed as %ax */ 16 - #define OPERAND_CR 0x8 /* Operand printed as %cx */ 17 - #define OPERAND_VR 0x10 /* Operand printed as %vx */ 18 - #define OPERAND_DISP 0x20 /* Operand printed as displacement */ 19 - #define OPERAND_BASE 0x40 /* Operand printed as base register */ 20 - #define OPERAND_INDEX 0x80 /* Operand printed as index register */ 21 - #define OPERAND_PCREL 0x100 /* Operand printed as pc-relative symbol */ 22 - #define OPERAND_SIGNED 0x200 /* Operand printed as signed value */ 23 - #define OPERAND_LENGTH 0x400 /* Operand printed as length (+1) */ 24 - 25 - 26 - struct s390_operand { 27 - int bits; /* The number of bits in the operand. */ 28 - int shift; /* The number of bits to shift. */ 29 - int flags; /* One bit syntax flags. */ 30 - }; 31 - 32 - struct s390_insn { 33 - const char name[5]; 34 - unsigned char opfrag; 35 - unsigned char format; 36 - }; 37 - 12 + #include <generated/dis.h> 38 13 39 14 static inline int insn_length(unsigned char code) 40 15 { ··· 20 45 21 46 void show_code(struct pt_regs *regs); 22 47 void print_fn_code(unsigned char *code, unsigned long len); 23 - int insn_to_mnemonic(unsigned char *instruction, char *buf, unsigned int len); 24 48 struct s390_insn *find_insn(unsigned char *code); 25 49 26 50 static inline int is_known_insn(unsigned char *code)
+2 -1
arch/s390/include/asm/ipl.h
··· 13 13 #include <asm/cio.h> 14 14 #include <asm/setup.h> 15 15 16 + #define NSS_NAME_SIZE 8 17 + 16 18 #define IPL_PARMBLOCK_ORIGIN 0x2000 17 19 18 20 #define IPL_PARM_BLK_FCP_LEN (sizeof(struct ipl_list_hdr) + \ ··· 108 106 enum { 109 107 IPL_DEVNO_VALID = 1, 110 108 IPL_PARMBLOCK_VALID = 2, 111 - IPL_NSS_VALID = 4, 112 109 }; 113 110 114 111 enum ipl_type {
-2
arch/s390/include/asm/kprobes.h
··· 63 63 64 64 #define kretprobe_blacklist_size 0 65 65 66 - #define KPROBE_SWAP_INST 0x10 67 - 68 66 /* Architecture specific copy of original instruction */ 69 67 struct arch_specific_insn { 70 68 /* copy of original instruction */
-1
arch/s390/include/asm/kvm_host.h
··· 736 736 wait_queue_head_t ipte_wq; 737 737 int ipte_lock_count; 738 738 struct mutex ipte_mutex; 739 - struct ratelimit_state sthyi_limit; 740 739 spinlock_t start_stop_lock; 741 740 struct sie_page2 *sie_page2; 742 741 struct kvm_s390_cpu_model model;
+3 -2
arch/s390/include/asm/lowcore.h
··· 134 134 __u8 pad_0x03b4[0x03b8-0x03b4]; /* 0x03b4 */ 135 135 __u64 gmap; /* 0x03b8 */ 136 136 __u32 spinlock_lockval; /* 0x03c0 */ 137 - __u32 fpu_flags; /* 0x03c4 */ 138 - __u8 pad_0x03c8[0x0400-0x03c8]; /* 0x03c8 */ 137 + __u32 spinlock_index; /* 0x03c4 */ 138 + __u32 fpu_flags; /* 0x03c8 */ 139 + __u8 pad_0x03cc[0x0400-0x03cc]; /* 0x03cc */ 139 140 140 141 /* Per cpu primary space access list */ 141 142 __u32 paste[16]; /* 0x0400 */
+11 -8
arch/s390/include/asm/nmi.h
··· 26 26 #define MCCK_CODE_CPU_TIMER_VALID _BITUL(63 - 46) 27 27 #define MCCK_CODE_PSW_MWP_VALID _BITUL(63 - 20) 28 28 #define MCCK_CODE_PSW_IA_VALID _BITUL(63 - 23) 29 - 30 - #define MCCK_CR14_CR_PENDING_SUB_MASK (1 << 28) 31 - #define MCCK_CR14_RECOVERY_SUB_MASK (1 << 27) 32 - #define MCCK_CR14_DEGRAD_SUB_MASK (1 << 26) 33 - #define MCCK_CR14_EXT_DAMAGE_SUB_MASK (1 << 25) 34 - #define MCCK_CR14_WARN_SUB_MASK (1 << 24) 29 + #define MCCK_CODE_CR_VALID _BITUL(63 - 29) 30 + #define MCCK_CODE_GS_VALID _BITUL(63 - 36) 31 + #define MCCK_CODE_FC_VALID _BITUL(63 - 43) 35 32 36 33 #ifndef __ASSEMBLY__ 37 34 ··· 84 87 85 88 #define MCESA_ORIGIN_MASK (~0x3ffUL) 86 89 #define MCESA_LC_MASK (0xfUL) 90 + #define MCESA_MIN_SIZE (1024) 91 + #define MCESA_MAX_SIZE (2048) 87 92 88 93 struct mcesa { 89 94 u8 vector_save_area[1024]; ··· 94 95 95 96 struct pt_regs; 96 97 97 - extern void s390_handle_mcck(void); 98 - extern void s390_do_machine_check(struct pt_regs *regs); 98 + void nmi_alloc_boot_cpu(struct lowcore *lc); 99 + int nmi_alloc_per_cpu(struct lowcore *lc); 100 + void nmi_free_per_cpu(struct lowcore *lc); 101 + 102 + void s390_handle_mcck(void); 103 + void s390_do_machine_check(struct pt_regs *regs); 99 104 100 105 #endif /* __ASSEMBLY__ */ 101 106 #endif /* _ASM_S390_NMI_H */
+1 -5
arch/s390/include/asm/pci_debug.h
··· 19 19 20 20 static inline void zpci_err_hex(void *addr, int len) 21 21 { 22 - while (len > 0) { 23 - debug_event(pci_debug_err_id, 0, (void *) addr, len); 24 - len -= pci_debug_err_id->buf_size; 25 - addr += pci_debug_err_id->buf_size; 26 - } 22 + debug_event(pci_debug_err_id, 0, addr, len); 27 23 } 28 24 29 25 #endif
+1 -1
arch/s390/include/asm/pci_insn.h
··· 82 82 int zpci_load(u64 *data, u64 req, u64 offset); 83 83 int zpci_store(u64 data, u64 req, u64 offset); 84 84 int zpci_store_block(const u64 *data, u64 req, u64 offset); 85 - void zpci_set_irq_ctrl(u16 ctl, char *unused, u8 isc); 85 + int zpci_set_irq_ctrl(u16 ctl, char *unused, u8 isc); 86 86 87 87 #endif
+2 -16
arch/s390/include/asm/pgalloc.h
··· 13 13 #define _S390_PGALLOC_H 14 14 15 15 #include <linux/threads.h> 16 + #include <linux/string.h> 16 17 #include <linux/gfp.h> 17 18 #include <linux/mm.h> 18 19 ··· 29 28 void page_table_free_pgste(struct page *page); 30 29 extern int page_table_allocate_pgste; 31 30 32 - static inline void clear_table(unsigned long *s, unsigned long val, size_t n) 33 - { 34 - struct addrtype { char _[256]; }; 35 - int i; 36 - 37 - for (i = 0; i < n; i += 256) { 38 - *s = val; 39 - asm volatile( 40 - "mvc 8(248,%[s]),0(%[s])\n" 41 - : "+m" (*(struct addrtype *) s) 42 - : [s] "a" (s)); 43 - s += 256 / sizeof(long); 44 - } 45 - } 46 - 47 31 static inline void crst_table_init(unsigned long *crst, unsigned long entry) 48 32 { 49 - clear_table(crst, entry, _CRST_TABLE_SIZE); 33 + memset64((u64 *)crst, entry, _CRST_ENTRIES); 50 34 } 51 35 52 36 static inline unsigned long pgd_entry_type(struct mm_struct *mm)
+5 -3
arch/s390/include/asm/processor.h
··· 22 22 #define CIF_IGNORE_IRQ 5 /* ignore interrupt (for udelay) */ 23 23 #define CIF_ENABLED_WAIT 6 /* in enabled wait state */ 24 24 #define CIF_MCCK_GUEST 7 /* machine check happening in guest */ 25 + #define CIF_DEDICATED_CPU 8 /* this CPU is dedicated */ 25 26 26 27 #define _CIF_MCCK_PENDING _BITUL(CIF_MCCK_PENDING) 27 28 #define _CIF_ASCE_PRIMARY _BITUL(CIF_ASCE_PRIMARY) ··· 32 31 #define _CIF_IGNORE_IRQ _BITUL(CIF_IGNORE_IRQ) 33 32 #define _CIF_ENABLED_WAIT _BITUL(CIF_ENABLED_WAIT) 34 33 #define _CIF_MCCK_GUEST _BITUL(CIF_MCCK_GUEST) 34 + #define _CIF_DEDICATED_CPU _BITUL(CIF_DEDICATED_CPU) 35 35 36 36 #ifndef __ASSEMBLY__ 37 37 ··· 221 219 void show_cacheinfo(struct seq_file *m); 222 220 223 221 /* Free all resources held by a thread. */ 224 - extern void release_thread(struct task_struct *); 222 + static inline void release_thread(struct task_struct *tsk) { } 225 223 226 - /* Free guarded storage control block for current */ 227 - void exit_thread_gs(void); 224 + /* Free guarded storage control block */ 225 + void guarded_storage_release(struct task_struct *tsk); 228 226 229 227 unsigned long get_wchan(struct task_struct *p); 230 228 #define task_pt_regs(tsk) ((struct pt_regs *) \
+44 -42
arch/s390/include/asm/runtime_instr.h
··· 6 6 #define S390_RUNTIME_INSTR_STOP 0x2 7 7 8 8 struct runtime_instr_cb { 9 - __u64 buf_current; 10 - __u64 buf_origin; 11 - __u64 buf_limit; 9 + __u64 rca; 10 + __u64 roa; 11 + __u64 rla; 12 12 13 - __u32 valid : 1; 14 - __u32 pstate : 1; 15 - __u32 pstate_set_buf : 1; 16 - __u32 home_space : 1; 17 - __u32 altered : 1; 18 - __u32 : 3; 19 - __u32 pstate_sample : 1; 20 - __u32 sstate_sample : 1; 21 - __u32 pstate_collect : 1; 22 - __u32 sstate_collect : 1; 23 - __u32 : 1; 24 - __u32 halted_int : 1; 25 - __u32 int_requested : 1; 26 - __u32 buffer_full_int : 1; 13 + __u32 v : 1; 14 + __u32 s : 1; 15 + __u32 k : 1; 16 + __u32 h : 1; 17 + __u32 a : 1; 18 + __u32 reserved1 : 3; 19 + __u32 ps : 1; 20 + __u32 qs : 1; 21 + __u32 pc : 1; 22 + __u32 qc : 1; 23 + __u32 reserved2 : 1; 24 + __u32 g : 1; 25 + __u32 u : 1; 26 + __u32 l : 1; 27 27 __u32 key : 4; 28 - __u32 : 9; 28 + __u32 reserved3 : 8; 29 + __u32 t : 1; 29 30 __u32 rgs : 3; 30 31 31 - __u32 mode : 4; 32 - __u32 next : 1; 32 + __u32 m : 4; 33 + __u32 n : 1; 33 34 __u32 mae : 1; 34 - __u32 : 2; 35 - __u32 call_type_br : 1; 36 - __u32 return_type_br : 1; 37 - __u32 other_type_br : 1; 38 - __u32 bc_other_type : 1; 39 - __u32 emit : 1; 40 - __u32 tx_abort : 1; 41 - __u32 : 2; 42 - __u32 bp_xn : 1; 43 - __u32 bp_xt : 1; 44 - __u32 bp_ti : 1; 45 - __u32 bp_ni : 1; 46 - __u32 suppr_y : 1; 47 - __u32 suppr_z : 1; 35 + __u32 reserved4 : 2; 36 + __u32 c : 1; 37 + __u32 r : 1; 38 + __u32 b : 1; 39 + __u32 j : 1; 40 + __u32 e : 1; 41 + __u32 x : 1; 42 + __u32 reserved5 : 2; 43 + __u32 bpxn : 1; 44 + __u32 bpxt : 1; 45 + __u32 bpti : 1; 46 + __u32 bpni : 1; 47 + __u32 reserved6 : 2; 48 48 49 - __u32 dc_miss_extra : 1; 50 - __u32 lat_lev_ignore : 1; 51 - __u32 ic_lat_lev : 4; 52 - __u32 dc_lat_lev : 4; 49 + __u32 d : 1; 50 + __u32 f : 1; 51 + __u32 ic : 4; 52 + __u32 dc : 4; 53 53 54 - __u64 reserved1; 55 - __u64 scaling_factor; 54 + __u64 reserved7; 55 + __u64 sf; 56 56 __u64 rsic; 57 - __u64 reserved2; 57 + __u64 reserved8; 58 58 } __packed __aligned(8); 59 59 60 60 extern struct runtime_instr_cb runtime_instr_empty_cb; ··· 86 86 load_runtime_instr_cb(&runtime_instr_empty_cb); 87 87 } 88 88 89 - void exit_thread_runtime_instr(void); 89 + struct task_struct; 90 + 91 + void runtime_instr_release(struct task_struct *tsk); 90 92 91 93 #endif /* _RUNTIME_INSTR_H */
-211
arch/s390/include/asm/rwsem.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - #ifndef _S390_RWSEM_H 3 - #define _S390_RWSEM_H 4 - 5 - /* 6 - * S390 version 7 - * Copyright IBM Corp. 2002 8 - * Author(s): Martin Schwidefsky (schwidefsky@de.ibm.com) 9 - * 10 - * Based on asm-alpha/semaphore.h and asm-i386/rwsem.h 11 - */ 12 - 13 - /* 14 - * 15 - * The MSW of the count is the negated number of active writers and waiting 16 - * lockers, and the LSW is the total number of active locks 17 - * 18 - * The lock count is initialized to 0 (no active and no waiting lockers). 19 - * 20 - * When a writer subtracts WRITE_BIAS, it'll get 0xffff0001 for the case of an 21 - * uncontended lock. This can be determined because XADD returns the old value. 22 - * Readers increment by 1 and see a positive value when uncontended, negative 23 - * if there are writers (and maybe) readers waiting (in which case it goes to 24 - * sleep). 25 - * 26 - * The value of WAITING_BIAS supports up to 32766 waiting processes. This can 27 - * be extended to 65534 by manually checking the whole MSW rather than relying 28 - * on the S flag. 29 - * 30 - * The value of ACTIVE_BIAS supports up to 65535 active processes. 31 - * 32 - * This should be totally fair - if anything is waiting, a process that wants a 33 - * lock will go to the back of the queue. When the currently active lock is 34 - * released, if there's a writer at the front of the queue, then that and only 35 - * that will be woken up; if there's a bunch of consecutive readers at the 36 - * front, then they'll all be woken up, but no other readers will be. 
37 - */ 38 - 39 - #ifndef _LINUX_RWSEM_H 40 - #error "please don't include asm/rwsem.h directly, use linux/rwsem.h instead" 41 - #endif 42 - 43 - #define RWSEM_UNLOCKED_VALUE 0x0000000000000000L 44 - #define RWSEM_ACTIVE_BIAS 0x0000000000000001L 45 - #define RWSEM_ACTIVE_MASK 0x00000000ffffffffL 46 - #define RWSEM_WAITING_BIAS (-0x0000000100000000L) 47 - #define RWSEM_ACTIVE_READ_BIAS RWSEM_ACTIVE_BIAS 48 - #define RWSEM_ACTIVE_WRITE_BIAS (RWSEM_WAITING_BIAS + RWSEM_ACTIVE_BIAS) 49 - 50 - /* 51 - * lock for reading 52 - */ 53 - static inline void __down_read(struct rw_semaphore *sem) 54 - { 55 - signed long old, new; 56 - 57 - asm volatile( 58 - " lg %0,%2\n" 59 - "0: lgr %1,%0\n" 60 - " aghi %1,%4\n" 61 - " csg %0,%1,%2\n" 62 - " jl 0b" 63 - : "=&d" (old), "=&d" (new), "=Q" (sem->count) 64 - : "Q" (sem->count), "i" (RWSEM_ACTIVE_READ_BIAS) 65 - : "cc", "memory"); 66 - if (old < 0) 67 - rwsem_down_read_failed(sem); 68 - } 69 - 70 - /* 71 - * trylock for reading -- returns 1 if successful, 0 if contention 72 - */ 73 - static inline int __down_read_trylock(struct rw_semaphore *sem) 74 - { 75 - signed long old, new; 76 - 77 - asm volatile( 78 - " lg %0,%2\n" 79 - "0: ltgr %1,%0\n" 80 - " jm 1f\n" 81 - " aghi %1,%4\n" 82 - " csg %0,%1,%2\n" 83 - " jl 0b\n" 84 - "1:" 85 - : "=&d" (old), "=&d" (new), "=Q" (sem->count) 86 - : "Q" (sem->count), "i" (RWSEM_ACTIVE_READ_BIAS) 87 - : "cc", "memory"); 88 - return old >= 0 ? 1 : 0; 89 - } 90 - 91 - /* 92 - * lock for writing 93 - */ 94 - static inline long ___down_write(struct rw_semaphore *sem) 95 - { 96 - signed long old, new, tmp; 97 - 98 - tmp = RWSEM_ACTIVE_WRITE_BIAS; 99 - asm volatile( 100 - " lg %0,%2\n" 101 - "0: lgr %1,%0\n" 102 - " ag %1,%4\n" 103 - " csg %0,%1,%2\n" 104 - " jl 0b" 105 - : "=&d" (old), "=&d" (new), "=Q" (sem->count) 106 - : "Q" (sem->count), "m" (tmp) 107 - : "cc", "memory"); 108 - 109 - return old; 110 - } 111 - 112 - static inline void __down_write(struct rw_semaphore *sem) 113 - { 114 - if (___down_write(sem)) 115 - rwsem_down_write_failed(sem); 116 - } 117 - 118 - static inline int __down_write_killable(struct rw_semaphore *sem) 119 - { 120 - if (___down_write(sem)) 121 - if (IS_ERR(rwsem_down_write_failed_killable(sem))) 122 - return -EINTR; 123 - 124 - return 0; 125 - } 126 - 127 - /* 128 - * trylock for writing -- returns 1 if successful, 0 if contention 129 - */ 130 - static inline int __down_write_trylock(struct rw_semaphore *sem) 131 - { 132 - signed long old; 133 - 134 - asm volatile( 135 - " lg %0,%1\n" 136 - "0: ltgr %0,%0\n" 137 - " jnz 1f\n" 138 - " csg %0,%3,%1\n" 139 - " jl 0b\n" 140 - "1:" 141 - : "=&d" (old), "=Q" (sem->count) 142 - : "Q" (sem->count), "d" (RWSEM_ACTIVE_WRITE_BIAS) 143 - : "cc", "memory"); 144 - return (old == RWSEM_UNLOCKED_VALUE) ? 1 : 0; 145 - } 146 - 147 - /* 148 - * unlock after reading 149 - */ 150 - static inline void __up_read(struct rw_semaphore *sem) 151 - { 152 - signed long old, new; 153 - 154 - asm volatile( 155 - " lg %0,%2\n" 156 - "0: lgr %1,%0\n" 157 - " aghi %1,%4\n" 158 - " csg %0,%1,%2\n" 159 - " jl 0b" 160 - : "=&d" (old), "=&d" (new), "=Q" (sem->count) 161 - : "Q" (sem->count), "i" (-RWSEM_ACTIVE_READ_BIAS) 162 - : "cc", "memory"); 163 - if (new < 0) 164 - if ((new & RWSEM_ACTIVE_MASK) == 0) 165 - rwsem_wake(sem); 166 - } 167 - 168 - /* 169 - * unlock after writing 170 - */ 171 - static inline void __up_write(struct rw_semaphore *sem) 172 - { 173 - signed long old, new, tmp; 174 - 175 - tmp = -RWSEM_ACTIVE_WRITE_BIAS; 176 - asm volatile( 177 - " lg %0,%2\n" 178 - "0: lgr %1,%0\n" 179 - " ag %1,%4\n" 180 - " csg %0,%1,%2\n" 181 - " jl 0b" 182 - : "=&d" (old), "=&d" (new), "=Q" (sem->count) 183 - : "Q" (sem->count), "m" (tmp) 184 - : "cc", "memory"); 185 - if (new < 0) 186 - if ((new & RWSEM_ACTIVE_MASK) == 0) 187 - rwsem_wake(sem); 188 - } 189 - 190 - /* 191 - * downgrade write lock to read lock 192 - */ 193 - static inline void __downgrade_write(struct rw_semaphore *sem) 194 - { 195 - signed long old, new, tmp; 196 - 197 - tmp = -RWSEM_WAITING_BIAS; 198 - asm volatile( 199 - " lg %0,%2\n" 200 - "0: lgr %1,%0\n" 201 - " ag %1,%4\n" 202 - " csg %0,%1,%2\n" 203 - " jl 0b" 204 - : "=&d" (old), "=&d" (new), "=Q" (sem->count) 205 - : "Q" (sem->count), "m" (tmp) 206 - : "cc", "memory"); 207 - if (new > 1) 208 - rwsem_downgrade_wake(sem); 209 - } 210 - 211 - #endif /* _S390_RWSEM_H */
+1 -1
arch/s390/include/asm/sections.h
··· 4 4 5 5 #include <asm-generic/sections.h> 6 6 7 - extern char _eshared[], _ehead[]; 7 + extern char _ehead[]; 8 8 9 9 #endif
-3
arch/s390/include/asm/setup.h
··· 98 98 #define SET_CONSOLE_VT220 do { console_mode = 4; } while (0) 99 99 #define SET_CONSOLE_HVC do { console_mode = 5; } while (0) 100 100 101 - #define NSS_NAME_SIZE 8 102 - extern char kernel_nss_name[]; 103 - 104 101 #ifdef CONFIG_PFAULT 105 102 extern int pfault_init(void); 106 103 extern void pfault_fini(void);
+5
arch/s390/include/asm/smp.h
··· 28 28 29 29 extern void smp_call_online_cpu(void (*func)(void *), void *); 30 30 extern void smp_call_ipl_cpu(void (*func)(void *), void *); 31 + extern void smp_emergency_stop(void); 31 32 32 33 extern int smp_find_processor_id(u16 address); 33 34 extern int smp_store_status(int cpu); ··· 52 51 static inline void smp_call_online_cpu(void (*func)(void *), void *data) 53 52 { 54 53 func(data); 54 + } 55 + 56 + static inline void smp_emergency_stop(void) 57 + { 55 58 } 56 59 57 60 static inline int smp_find_processor_id(u16 address) { return 0; }
+31 -138
arch/s390/include/asm/spinlock.h
··· 14 14 #include <asm/atomic_ops.h> 15 15 #include <asm/barrier.h> 16 16 #include <asm/processor.h> 17 + #include <asm/alternative.h> 17 18 18 19 #define SPINLOCK_LOCKVAL (S390_lowcore.spinlock_lockval) 19 20 ··· 37 36 * (the type definitions are in asm/spinlock_types.h) 38 37 */ 39 38 40 - void arch_lock_relax(int cpu); 39 + void arch_spin_relax(arch_spinlock_t *lock); 41 40 42 41 void arch_spin_lock_wait(arch_spinlock_t *); 43 42 int arch_spin_trylock_retry(arch_spinlock_t *); 44 - void arch_spin_lock_wait_flags(arch_spinlock_t *, unsigned long flags); 45 - 46 - static inline void arch_spin_relax(arch_spinlock_t *lock) 47 - { 48 - arch_lock_relax(lock->lock); 49 - } 43 + void arch_spin_lock_setup(int cpu); 50 44 51 45 static inline u32 arch_spin_lockval(int cpu) 52 46 { 53 - return ~cpu; 47 + return cpu + 1; 54 48 } 55 49 56 50 static inline int arch_spin_value_unlocked(arch_spinlock_t lock) ··· 61 65 static inline int arch_spin_trylock_once(arch_spinlock_t *lp) 62 66 { 63 67 barrier(); 64 - return likely(arch_spin_value_unlocked(*lp) && 65 - __atomic_cmpxchg_bool(&lp->lock, 0, SPINLOCK_LOCKVAL)); 68 + return likely(__atomic_cmpxchg_bool(&lp->lock, 0, SPINLOCK_LOCKVAL)); 66 69 } 67 70 68 71 static inline void arch_spin_lock(arch_spinlock_t *lp) ··· 74 79 unsigned long flags) 75 80 { 76 81 if (!arch_spin_trylock_once(lp)) 77 - arch_spin_lock_wait_flags(lp, flags); 82 + arch_spin_lock_wait(lp); 78 83 } 79 84 80 85 static inline int arch_spin_trylock(arch_spinlock_t *lp) ··· 88 93 { 89 94 typecheck(int, lp->lock); 90 95 asm volatile( 91 - #ifdef CONFIG_HAVE_MARCH_ZEC12_FEATURES 92 - " .long 0xb2fa0070\n" /* NIAI 7 */ 93 - #endif 94 - " st %1,%0\n" 95 - : "=Q" (lp->lock) : "d" (0) : "cc", "memory"); 96 + ALTERNATIVE("", ".long 0xb2fa0070", 49) /* NIAI 7 */ 97 + " sth %1,%0\n" 98 + : "=Q" (((unsigned short *) &lp->lock)[1]) 99 + : "d" (0) : "cc", "memory"); 96 100 } 97 101 98 102 /* ··· 109 115 * read_can_lock - would read_trylock() succeed? 
110 116 * @lock: the rwlock in question. 111 117 */ 112 - #define arch_read_can_lock(x) ((int)(x)->lock >= 0) 118 + #define arch_read_can_lock(x) (((x)->cnts & 0xffff0000) == 0) 113 119 114 120 /** 115 121 * write_can_lock - would write_trylock() succeed? 116 122 * @lock: the rwlock in question. 117 123 */ 118 - #define arch_write_can_lock(x) ((x)->lock == 0) 119 - 120 - extern int _raw_read_trylock_retry(arch_rwlock_t *lp); 121 - extern int _raw_write_trylock_retry(arch_rwlock_t *lp); 124 + #define arch_write_can_lock(x) ((x)->cnts == 0) 122 125 123 126 #define arch_read_lock_flags(lock, flags) arch_read_lock(lock) 124 127 #define arch_write_lock_flags(lock, flags) arch_write_lock(lock) 128 + #define arch_read_relax(rw) barrier() 129 + #define arch_write_relax(rw) barrier() 125 130 126 - static inline int arch_read_trylock_once(arch_rwlock_t *rw) 127 - { 128 - int old = ACCESS_ONCE(rw->lock); 129 - return likely(old >= 0 && 130 - __atomic_cmpxchg_bool(&rw->lock, old, old + 1)); 131 - } 132 - 133 - static inline int arch_write_trylock_once(arch_rwlock_t *rw) 134 - { 135 - int old = ACCESS_ONCE(rw->lock); 136 - return likely(old == 0 && 137 - __atomic_cmpxchg_bool(&rw->lock, 0, 0x80000000)); 138 - } 139 - 140 - #ifdef CONFIG_HAVE_MARCH_Z196_FEATURES 141 - 142 - #define __RAW_OP_OR "lao" 143 - #define __RAW_OP_AND "lan" 144 - #define __RAW_OP_ADD "laa" 145 - 146 - #define __RAW_LOCK(ptr, op_val, op_string) \ 147 - ({ \ 148 - int old_val; \ 149 - \ 150 - typecheck(int *, ptr); \ 151 - asm volatile( \ 152 - op_string " %0,%2,%1\n" \ 153 - "bcr 14,0\n" \ 154 - : "=d" (old_val), "+Q" (*ptr) \ 155 - : "d" (op_val) \ 156 - : "cc", "memory"); \ 157 - old_val; \ 158 - }) 159 - 160 - #define __RAW_UNLOCK(ptr, op_val, op_string) \ 161 - ({ \ 162 - int old_val; \ 163 - \ 164 - typecheck(int *, ptr); \ 165 - asm volatile( \ 166 - op_string " %0,%2,%1\n" \ 167 - : "=d" (old_val), "+Q" (*ptr) \ 168 - : "d" (op_val) \ 169 - : "cc", "memory"); \ 170 - old_val; \ 171 - }) 172 - 173 - 
extern void _raw_read_lock_wait(arch_rwlock_t *lp); 174 - extern void _raw_write_lock_wait(arch_rwlock_t *lp, int prev); 131 + void arch_read_lock_wait(arch_rwlock_t *lp); 132 + void arch_write_lock_wait(arch_rwlock_t *lp); 175 133 176 134 static inline void arch_read_lock(arch_rwlock_t *rw) 177 135 { 178 136 int old; 179 137 180 - old = __RAW_LOCK(&rw->lock, 1, __RAW_OP_ADD); 181 - if (old < 0) 182 - _raw_read_lock_wait(rw); 138 + old = __atomic_add(1, &rw->cnts); 139 + if (old & 0xffff0000) 140 + arch_read_lock_wait(rw); 183 141 } 184 142 185 143 static inline void arch_read_unlock(arch_rwlock_t *rw) 186 144 { 187 - __RAW_UNLOCK(&rw->lock, -1, __RAW_OP_ADD); 145 + __atomic_add_const_barrier(-1, &rw->cnts); 188 146 } 189 147 190 148 static inline void arch_write_lock(arch_rwlock_t *rw) 191 149 { 192 - int old; 193 - 194 - old = __RAW_LOCK(&rw->lock, 0x80000000, __RAW_OP_OR); 195 - if (old != 0) 196 - _raw_write_lock_wait(rw, old); 197 - rw->owner = SPINLOCK_LOCKVAL; 150 + if (!__atomic_cmpxchg_bool(&rw->cnts, 0, 0x30000)) 151 + arch_write_lock_wait(rw); 198 152 } 199 153 200 154 static inline void arch_write_unlock(arch_rwlock_t *rw) 201 155 { 202 - rw->owner = 0; 203 - __RAW_UNLOCK(&rw->lock, 0x7fffffff, __RAW_OP_AND); 156 + __atomic_add_barrier(-0x30000, &rw->cnts); 204 157 } 205 158 206 - #else /* CONFIG_HAVE_MARCH_Z196_FEATURES */ 207 - 208 - extern void _raw_read_lock_wait(arch_rwlock_t *lp); 209 - extern void _raw_write_lock_wait(arch_rwlock_t *lp); 210 - 211 - static inline void arch_read_lock(arch_rwlock_t *rw) 212 - { 213 - if (!arch_read_trylock_once(rw)) 214 - _raw_read_lock_wait(rw); 215 - } 216 - 217 - static inline void arch_read_unlock(arch_rwlock_t *rw) 218 - { 219 - int old; 220 - 221 - do { 222 - old = ACCESS_ONCE(rw->lock); 223 - } while (!__atomic_cmpxchg_bool(&rw->lock, old, old - 1)); 224 - } 225 - 226 - static inline void arch_write_lock(arch_rwlock_t *rw) 227 - { 228 - if (!arch_write_trylock_once(rw)) 229 - _raw_write_lock_wait(rw); 230 - 
rw->owner = SPINLOCK_LOCKVAL; 231 - } 232 - 233 - static inline void arch_write_unlock(arch_rwlock_t *rw) 234 - { 235 - typecheck(int, rw->lock); 236 - 237 - rw->owner = 0; 238 - asm volatile( 239 - "st %1,%0\n" 240 - : "+Q" (rw->lock) 241 - : "d" (0) 242 - : "cc", "memory"); 243 - } 244 - 245 - #endif /* CONFIG_HAVE_MARCH_Z196_FEATURES */ 246 159 247 160 static inline int arch_read_trylock(arch_rwlock_t *rw) 248 161 { 249 - if (!arch_read_trylock_once(rw)) 250 - return _raw_read_trylock_retry(rw); 251 - return 1; 162 + int old; 163 + 164 + old = READ_ONCE(rw->cnts); 165 + return (!(old & 0xffff0000) && 166 + __atomic_cmpxchg_bool(&rw->cnts, old, old + 1)); 252 167 } 253 168 254 169 static inline int arch_write_trylock(arch_rwlock_t *rw) 255 170 { 256 - if (!arch_write_trylock_once(rw) && !_raw_write_trylock_retry(rw)) 257 - return 0; 258 - rw->owner = SPINLOCK_LOCKVAL; 259 - return 1; 260 - } 171 + int old; 261 172 262 - static inline void arch_read_relax(arch_rwlock_t *rw) 263 - { 264 - arch_lock_relax(rw->owner); 265 - } 266 - 267 - static inline void arch_write_relax(arch_rwlock_t *rw) 268 - { 269 - arch_lock_relax(rw->owner); 173 + old = READ_ONCE(rw->cnts); 174 + return !old && __atomic_cmpxchg_bool(&rw->cnts, 0, 0x30000); 270 175 } 271 176 272 177 #endif /* __ASM_SPINLOCK_H */
+2 -2
arch/s390/include/asm/spinlock_types.h
··· 13 13 #define __ARCH_SPIN_LOCK_UNLOCKED { .lock = 0, } 14 14 15 15 typedef struct { 16 - int lock; 17 - int owner; 16 + int cnts; 17 + arch_spinlock_t wait; 18 18 } arch_rwlock_t; 19 19 20 20 #define __ARCH_RW_LOCK_UNLOCKED { 0 }
+34 -12
arch/s390/include/asm/string.h
··· 18 18 #define __HAVE_ARCH_MEMMOVE /* gcc builtin & arch function */ 19 19 #define __HAVE_ARCH_MEMSCAN /* inline & arch function */ 20 20 #define __HAVE_ARCH_MEMSET /* gcc builtin & arch function */ 21 + #define __HAVE_ARCH_MEMSET16 /* arch function */ 22 + #define __HAVE_ARCH_MEMSET32 /* arch function */ 23 + #define __HAVE_ARCH_MEMSET64 /* arch function */ 21 24 #define __HAVE_ARCH_STRCAT /* inline & arch function */ 22 25 #define __HAVE_ARCH_STRCMP /* arch function */ 23 26 #define __HAVE_ARCH_STRCPY /* inline & arch function */ ··· 34 31 #define __HAVE_ARCH_STRSTR /* arch function */ 35 32 36 33 /* Prototypes for non-inlined arch strings functions. */ 37 - extern int memcmp(const void *, const void *, size_t); 38 - extern void *memcpy(void *, const void *, size_t); 39 - extern void *memset(void *, int, size_t); 40 - extern void *memmove(void *, const void *, size_t); 41 - extern int strcmp(const char *,const char *); 42 - extern size_t strlcat(char *, const char *, size_t); 43 - extern size_t strlcpy(char *, const char *, size_t); 44 - extern char *strncat(char *, const char *, size_t); 45 - extern char *strncpy(char *, const char *, size_t); 46 - extern char *strrchr(const char *, int); 47 - extern char *strstr(const char *, const char *); 34 + int memcmp(const void *s1, const void *s2, size_t n); 35 + void *memcpy(void *dest, const void *src, size_t n); 36 + void *memset(void *s, int c, size_t n); 37 + void *memmove(void *dest, const void *src, size_t n); 38 + int strcmp(const char *s1, const char *s2); 39 + size_t strlcat(char *dest, const char *src, size_t n); 40 + size_t strlcpy(char *dest, const char *src, size_t size); 41 + char *strncat(char *dest, const char *src, size_t n); 42 + char *strncpy(char *dest, const char *src, size_t n); 43 + char *strrchr(const char *s, int c); 44 + char *strstr(const char *s1, const char *s2); 48 45 49 46 #undef __HAVE_ARCH_STRCHR 50 47 #undef __HAVE_ARCH_STRNCHR ··· 53 50 #undef __HAVE_ARCH_STRSEP 54 51 #undef 
__HAVE_ARCH_STRSPN 55 52 56 - #if !defined(IN_ARCH_STRING_C) 53 + void *__memset16(uint16_t *s, uint16_t v, size_t count); 54 + void *__memset32(uint32_t *s, uint32_t v, size_t count); 55 + void *__memset64(uint64_t *s, uint64_t v, size_t count); 56 + 57 + static inline void *memset16(uint16_t *s, uint16_t v, size_t count) 58 + { 59 + return __memset16(s, v, count * sizeof(v)); 60 + } 61 + 62 + static inline void *memset32(uint32_t *s, uint32_t v, size_t count) 63 + { 64 + return __memset32(s, v, count * sizeof(v)); 65 + } 66 + 67 + static inline void *memset64(uint64_t *s, uint64_t v, size_t count) 68 + { 69 + return __memset64(s, v, count * sizeof(v)); 70 + } 71 + 72 + #if !defined(IN_ARCH_STRING_C) && (!defined(CONFIG_FORTIFY_SOURCE) || defined(__NO_FORTIFY)) 57 73 58 74 static inline void *memchr(const void * s, int c, size_t n) 59 75 {
+1 -1
arch/s390/include/asm/switch_to.h
··· 37 37 save_ri_cb(prev->thread.ri_cb); \ 38 38 save_gs_cb(prev->thread.gs_cb); \ 39 39 } \ 40 + update_cr_regs(next); \ 40 41 if (next->mm) { \ 41 - update_cr_regs(next); \ 42 42 set_cpu_flag(CIF_FPU); \ 43 43 restore_access_regs(&next->thread.acrs[0]); \ 44 44 restore_ri_cb(next->thread.ri_cb, prev->thread.ri_cb); \
+3 -1
arch/s390/include/asm/sysinfo.h
··· 156 156 struct topology_core { 157 157 unsigned char nl; 158 158 unsigned char reserved0[3]; 159 - unsigned char :6; 159 + unsigned char :5; 160 + unsigned char d:1; 160 161 unsigned char pp:2; 161 162 unsigned char reserved1; 162 163 unsigned short origin; ··· 199 198 int register_service_level(struct service_level *); 200 199 int unregister_service_level(struct service_level *); 201 200 201 + int sthyi_fill(void *dst, u64 *rc); 202 202 #endif /* __ASM_S390_SYSINFO_H */
+2
arch/s390/include/asm/topology.h
··· 17 17 unsigned short book_id; 18 18 unsigned short drawer_id; 19 19 unsigned short node_id; 20 + unsigned short dedicated : 1; 20 21 cpumask_t thread_mask; 21 22 cpumask_t core_mask; 22 23 cpumask_t book_mask; ··· 36 35 #define topology_book_cpumask(cpu) (&cpu_topology[cpu].book_mask) 37 36 #define topology_drawer_id(cpu) (cpu_topology[cpu].drawer_id) 38 37 #define topology_drawer_cpumask(cpu) (&cpu_topology[cpu].drawer_mask) 38 + #define topology_cpu_dedicated(cpu) (cpu_topology[cpu].dedicated) 39 39 40 40 #define mc_capable() 1 41 41
+1
arch/s390/include/asm/vdso.h
··· 47 47 48 48 extern struct vdso_data *vdso_data; 49 49 50 + void vdso_alloc_boot_cpu(struct lowcore *lowcore); 50 51 int vdso_alloc_per_cpu(struct lowcore *lowcore); 51 52 void vdso_free_per_cpu(struct lowcore *lowcore); 52 53
-65
arch/s390/include/uapi/asm/kvm_virtio.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ 2 - /* 3 - * definition for virtio for kvm on s390 4 - * 5 - * Copyright IBM Corp. 2008 6 - * 7 - * This program is free software; you can redistribute it and/or modify 8 - * it under the terms of the GNU General Public License (version 2 only) 9 - * as published by the Free Software Foundation. 10 - * 11 - * Author(s): Christian Borntraeger <borntraeger@de.ibm.com> 12 - */ 13 - 14 - #ifndef __KVM_S390_VIRTIO_H 15 - #define __KVM_S390_VIRTIO_H 16 - 17 - #include <linux/types.h> 18 - 19 - struct kvm_device_desc { 20 - /* The device type: console, network, disk etc. Type 0 terminates. */ 21 - __u8 type; 22 - /* The number of virtqueues (first in config array) */ 23 - __u8 num_vq; 24 - /* 25 - * The number of bytes of feature bits. Multiply by 2: one for host 26 - * features and one for guest acknowledgements. 27 - */ 28 - __u8 feature_len; 29 - /* The number of bytes of the config array after virtqueues. */ 30 - __u8 config_len; 31 - /* A status byte, written by the Guest. */ 32 - __u8 status; 33 - __u8 config[0]; 34 - }; 35 - 36 - /* 37 - * This is how we expect the device configuration field for a virtqueue 38 - * to be laid out in config space. 39 - */ 40 - struct kvm_vqconfig { 41 - /* The token returned with an interrupt. Set by the guest */ 42 - __u64 token; 43 - /* The address of the virtio ring */ 44 - __u64 address; 45 - /* The number of entries in the virtio_ring */ 46 - __u16 num; 47 - 48 - }; 49 - 50 - #define KVM_S390_VIRTIO_NOTIFY 0 51 - #define KVM_S390_VIRTIO_RESET 1 52 - #define KVM_S390_VIRTIO_SET_STATUS 2 53 - 54 - /* The alignment to use between consumer and producer parts of vring. 55 - * This is pagesize for historical reasons. 
*/ 56 - #define KVM_S390_VIRTIO_RING_ALIGN 4096 57 - 58 - 59 - /* These values are supposed to be in ext_params on an interrupt */ 60 - #define VIRTIO_PARAM_MASK 0xff 61 - #define VIRTIO_PARAM_VRING_INTERRUPT 0x0 62 - #define VIRTIO_PARAM_CONFIG_CHANGED 0x1 63 - #define VIRTIO_PARAM_DEV_ADD 0x2 64 - 65 - #endif
+6
arch/s390/include/uapi/asm/sthyi.h
··· 1 + #ifndef _UAPI_ASM_STHYI_H 2 + #define _UAPI_ASM_STHYI_H 3 + 4 + #define STHYI_FC_CP_IFL_CAP 0 5 + 6 + #endif /* _UAPI_ASM_STHYI_H */
+2 -1
arch/s390/include/uapi/asm/unistd.h
··· 316 316 #define __NR_pwritev2 377 317 317 #define __NR_s390_guarded_storage 378 318 318 #define __NR_statx 379 319 - #define NR_syscalls 380 319 + #define __NR_s390_sthyi 380 320 + #define NR_syscalls 381 320 321 321 322 /* 322 323 * There are some system calls that are not present on 64 bit, some
+4 -1
arch/s390/kernel/Makefile
··· 34 34 AFLAGS_head.o += -march=z900 35 35 endif 36 36 37 + CFLAGS_als.o += -D__NO_FORTIFY 38 + 37 39 # 38 40 # Passing null pointers is ok for smp code, since we access the lowcore here. 39 41 # ··· 58 56 obj-y += processor.o sys_s390.o ptrace.o signal.o cpcmd.o ebcdic.o nmi.o 59 57 obj-y += debug.o irq.o ipl.o dis.o diag.o vdso.o als.o 60 58 obj-y += sysinfo.o jump_label.o lgr.o os_info.o machine_kexec.o pgm_check.o 61 - obj-y += runtime_instr.o cache.o fpu.o dumpstack.o guarded_storage.o 59 + obj-y += runtime_instr.o cache.o fpu.o dumpstack.o guarded_storage.o sthyi.o 62 60 obj-y += entry.o reipl.o relocate_kernel.o kdebugfs.o 63 61 64 62 extra-y += head.o head64.o vmlinux.lds ··· 77 75 obj-$(CONFIG_FUNCTION_TRACER) += mcount.o ftrace.o 78 76 obj-$(CONFIG_CRASH_DUMP) += crash_dump.o 79 77 obj-$(CONFIG_UPROBES) += uprobes.o 78 + obj-$(CONFIG_ALTERNATIVES) += alternative.o 80 79 81 80 obj-$(CONFIG_PERF_EVENTS) += perf_event.o perf_cpum_cf.o perf_cpum_sf.o 82 81 obj-$(CONFIG_PERF_EVENTS) += perf_cpum_cf_events.o
+110
arch/s390/kernel/alternative.c
··· 1 + #include <linux/module.h> 2 + #include <asm/alternative.h> 3 + #include <asm/facility.h> 4 + 5 + #define MAX_PATCH_LEN (255 - 1) 6 + 7 + static int __initdata_or_module alt_instr_disabled; 8 + 9 + static int __init disable_alternative_instructions(char *str) 10 + { 11 + alt_instr_disabled = 1; 12 + return 0; 13 + } 14 + 15 + early_param("noaltinstr", disable_alternative_instructions); 16 + 17 + struct brcl_insn { 18 + u16 opc; 19 + s32 disp; 20 + } __packed; 21 + 22 + static u16 __initdata_or_module nop16 = 0x0700; 23 + static u32 __initdata_or_module nop32 = 0x47000000; 24 + static struct brcl_insn __initdata_or_module nop48 = { 25 + 0xc004, 0 26 + }; 27 + 28 + static const void *nops[] __initdata_or_module = { 29 + &nop16, 30 + &nop32, 31 + &nop48 32 + }; 33 + 34 + static void __init_or_module add_jump_padding(void *insns, unsigned int len) 35 + { 36 + struct brcl_insn brcl = { 37 + 0xc0f4, 38 + len / 2 39 + }; 40 + 41 + memcpy(insns, &brcl, sizeof(brcl)); 42 + insns += sizeof(brcl); 43 + len -= sizeof(brcl); 44 + 45 + while (len > 0) { 46 + memcpy(insns, &nop16, 2); 47 + insns += 2; 48 + len -= 2; 49 + } 50 + } 51 + 52 + static void __init_or_module add_padding(void *insns, unsigned int len) 53 + { 54 + if (len > 6) 55 + add_jump_padding(insns, len); 56 + else if (len >= 2) 57 + memcpy(insns, nops[len / 2 - 1], len); 58 + } 59 + 60 + static void __init_or_module __apply_alternatives(struct alt_instr *start, 61 + struct alt_instr *end) 62 + { 63 + struct alt_instr *a; 64 + u8 *instr, *replacement; 65 + u8 insnbuf[MAX_PATCH_LEN]; 66 + 67 + /* 68 + * The scan order should be from start to end. A later scanned 69 + * alternative code can overwrite previously scanned alternative code. 
70 + */ 71 + for (a = start; a < end; a++) { 72 + int insnbuf_sz = 0; 73 + 74 + instr = (u8 *)&a->instr_offset + a->instr_offset; 75 + replacement = (u8 *)&a->repl_offset + a->repl_offset; 76 + 77 + if (!test_facility(a->facility)) 78 + continue; 79 + 80 + if (unlikely(a->instrlen % 2 || a->replacementlen % 2)) { 81 + WARN_ONCE(1, "cpu alternatives instructions length is " 82 + "odd, skipping patching\n"); 83 + continue; 84 + } 85 + 86 + memcpy(insnbuf, replacement, a->replacementlen); 87 + insnbuf_sz = a->replacementlen; 88 + 89 + if (a->instrlen > a->replacementlen) { 90 + add_padding(insnbuf + a->replacementlen, 91 + a->instrlen - a->replacementlen); 92 + insnbuf_sz += a->instrlen - a->replacementlen; 93 + } 94 + 95 + s390_kernel_write(instr, insnbuf, insnbuf_sz); 96 + } 97 + } 98 + 99 + void __init_or_module apply_alternatives(struct alt_instr *start, 100 + struct alt_instr *end) 101 + { 102 + if (!alt_instr_disabled) 103 + __apply_alternatives(start, end); 104 + } 105 + 106 + extern struct alt_instr __alt_instructions[], __alt_instructions_end[]; 107 + void __init apply_alternative_instructions(void) 108 + { 109 + apply_alternatives(__alt_instructions, __alt_instructions_end); 110 + }
+5
arch/s390/kernel/asm-offsets.c
··· 14 14 #include <asm/vdso.h> 15 15 #include <asm/pgtable.h> 16 16 #include <asm/gmap.h> 17 + #include <asm/nmi.h> 17 18 18 19 /* 19 20 * Make sure that the compiler is new enough. We want a compiler that ··· 160 159 OFFSET(__LC_LAST_UPDATE_CLOCK, lowcore, last_update_clock); 161 160 OFFSET(__LC_INT_CLOCK, lowcore, int_clock); 162 161 OFFSET(__LC_MCCK_CLOCK, lowcore, mcck_clock); 162 + OFFSET(__LC_CLOCK_COMPARATOR, lowcore, clock_comparator); 163 163 OFFSET(__LC_BOOT_CLOCK, lowcore, boot_clock); 164 164 OFFSET(__LC_CURRENT, lowcore, current_task); 165 165 OFFSET(__LC_KERNEL_STACK, lowcore, kernel_stack); ··· 195 193 OFFSET(__LC_AREGS_SAVE_AREA, lowcore, access_regs_save_area); 196 194 OFFSET(__LC_CREGS_SAVE_AREA, lowcore, cregs_save_area); 197 195 OFFSET(__LC_PGM_TDB, lowcore, pgm_tdb); 196 + BLANK(); 197 + /* extended machine check save area */ 198 + OFFSET(__MCESA_GS_SAVE_AREA, mcesa, guarded_storage_save_area); 198 199 BLANK(); 199 200 /* gmap/sie offsets */ 200 201 OFFSET(__GMAP_ASCE, gmap, asce);
+1
arch/s390/kernel/compat_wrapper.c
··· 181 181 COMPAT_SYSCALL_WRAP6(copy_file_range, int, fd_in, loff_t __user *, off_in, int, fd_out, loff_t __user *, off_out, size_t, len, unsigned int, flags); 182 182 COMPAT_SYSCALL_WRAP2(s390_guarded_storage, int, command, struct gs_cb *, gs_cb); 183 183 COMPAT_SYSCALL_WRAP5(statx, int, dfd, const char __user *, path, unsigned, flags, unsigned, mask, struct statx __user *, buffer); 184 + COMPAT_SYSCALL_WRAP4(s390_sthyi, unsigned long, code, void __user *, info, u64 __user *, rc, unsigned long, flags);
+414 -498
arch/s390/kernel/debug.c
··· 5 5 * Copyright IBM Corp. 1999, 2012 6 6 * 7 7 * Author(s): Michael Holzheu (holzheu@de.ibm.com), 8 - * Holger Smolinski (Holger.Smolinski@de.ibm.com) 8 + * Holger Smolinski (Holger.Smolinski@de.ibm.com) 9 9 * 10 10 * Bugreports to: <Linux390@de.ibm.com> 11 11 */ ··· 37 37 38 38 typedef struct file_private_info { 39 39 loff_t offset; /* offset of last read in file */ 40 - int act_area; /* number of last formated area */ 41 - int act_page; /* act page in given area */ 42 - int act_entry; /* last formated entry (offset */ 43 - /* relative to beginning of last */ 44 - /* formated page) */ 45 - size_t act_entry_offset; /* up to this offset we copied */ 40 + int act_area; /* number of last formated area */ 41 + int act_page; /* act page in given area */ 42 + int act_entry; /* last formated entry (offset */ 43 + /* relative to beginning of last */ 44 + /* formated page) */ 45 + size_t act_entry_offset; /* up to this offset we copied */ 46 46 /* in last read the last formated */ 47 47 /* entry to userland */ 48 48 char temp_buf[2048]; /* buffer for output */ 49 - debug_info_t *debug_info_org; /* original debug information */ 49 + debug_info_t *debug_info_org; /* original debug information */ 50 50 debug_info_t *debug_info_snap; /* snapshot of debug information */ 51 51 struct debug_view *view; /* used view of debug info */ 52 52 } file_private_info_t; 53 53 54 - typedef struct 55 - { 54 + typedef struct { 56 55 char *string; 57 - /* 58 - * This assumes that all args are converted into longs 59 - * on L/390 this is the case for all types of parameter 60 - * except of floats, and long long (32 bit) 56 + /* 57 + * This assumes that all args are converted into longs 58 + * on L/390 this is the case for all types of parameter 59 + * except of floats, and long long (32 bit) 61 60 * 62 61 */ 63 62 long args[0]; 64 63 } debug_sprintf_entry_t; 65 64 66 - 67 65 /* internal function prototyes */ 68 66 69 67 static int debug_init(void); 70 68 static ssize_t debug_output(struct 
file *file, char __user *user_buf, 71 - size_t user_len, loff_t * offset); 69 + size_t user_len, loff_t *offset); 72 70 static ssize_t debug_input(struct file *file, const char __user *user_buf, 73 - size_t user_len, loff_t * offset); 71 + size_t user_len, loff_t *offset); 74 72 static int debug_open(struct inode *inode, struct file *file); 75 73 static int debug_close(struct inode *inode, struct file *file); 76 74 static debug_info_t *debug_info_create(const char *name, int pages_per_area, 77 - int nr_areas, int buf_size, umode_t mode); 75 + int nr_areas, int buf_size, umode_t mode); 78 76 static void debug_info_get(debug_info_t *); 79 77 static void debug_info_put(debug_info_t *); 80 - static int debug_prolog_level_fn(debug_info_t * id, 81 - struct debug_view *view, char *out_buf); 82 - static int debug_input_level_fn(debug_info_t * id, struct debug_view *view, 83 - struct file *file, const char __user *user_buf, 84 - size_t user_buf_size, loff_t * offset); 85 - static int debug_prolog_pages_fn(debug_info_t * id, 86 - struct debug_view *view, char *out_buf); 87 - static int debug_input_pages_fn(debug_info_t * id, struct debug_view *view, 88 - struct file *file, const char __user *user_buf, 89 - size_t user_buf_size, loff_t * offset); 90 - static int debug_input_flush_fn(debug_info_t * id, struct debug_view *view, 91 - struct file *file, const char __user *user_buf, 92 - size_t user_buf_size, loff_t * offset); 93 - static int debug_hex_ascii_format_fn(debug_info_t * id, struct debug_view *view, 94 - char *out_buf, const char *in_buf); 95 - static int debug_raw_format_fn(debug_info_t * id, 96 - struct debug_view *view, char *out_buf, 97 - const char *in_buf); 98 - static int debug_raw_header_fn(debug_info_t * id, struct debug_view *view, 99 - int area, debug_entry_t * entry, char *out_buf); 78 + static int debug_prolog_level_fn(debug_info_t *id, 79 + struct debug_view *view, char *out_buf); 80 + static int debug_input_level_fn(debug_info_t *id, struct debug_view 
*view, 81 + struct file *file, const char __user *user_buf, 82 + size_t user_buf_size, loff_t *offset); 83 + static int debug_prolog_pages_fn(debug_info_t *id, 84 + struct debug_view *view, char *out_buf); 85 + static int debug_input_pages_fn(debug_info_t *id, struct debug_view *view, 86 + struct file *file, const char __user *user_buf, 87 + size_t user_buf_size, loff_t *offset); 88 + static int debug_input_flush_fn(debug_info_t *id, struct debug_view *view, 89 + struct file *file, const char __user *user_buf, 90 + size_t user_buf_size, loff_t *offset); 91 + static int debug_hex_ascii_format_fn(debug_info_t *id, struct debug_view *view, 92 + char *out_buf, const char *in_buf); 93 + static int debug_raw_format_fn(debug_info_t *id, 94 + struct debug_view *view, char *out_buf, 95 + const char *in_buf); 96 + static int debug_raw_header_fn(debug_info_t *id, struct debug_view *view, 97 + int area, debug_entry_t *entry, char *out_buf); 100 98 101 - static int debug_sprintf_format_fn(debug_info_t * id, struct debug_view *view, 102 - char *out_buf, debug_sprintf_entry_t *curr_event); 99 + static int debug_sprintf_format_fn(debug_info_t *id, struct debug_view *view, 100 + char *out_buf, debug_sprintf_entry_t *curr_event); 103 101 104 102 /* globals */ 105 103 ··· 140 142 }; 141 143 142 144 static struct debug_view debug_flush_view = { 143 - "flush", 144 - NULL, 145 - NULL, 146 - NULL, 147 - &debug_input_flush_fn, 148 - NULL 145 + "flush", 146 + NULL, 147 + NULL, 148 + NULL, 149 + &debug_input_flush_fn, 150 + NULL 149 151 }; 150 152 151 153 struct debug_view debug_sprintf_view = { 152 154 "sprintf", 153 155 NULL, 154 156 &debug_dflt_header_fn, 155 - (debug_format_proc_t*)&debug_sprintf_format_fn, 157 + (debug_format_proc_t *)&debug_sprintf_format_fn, 156 158 NULL, 157 159 NULL 158 160 }; ··· 163 165 164 166 /* static globals */ 165 167 166 - static debug_info_t *debug_area_first = NULL; 167 - static debug_info_t *debug_area_last = NULL; 168 + static debug_info_t 
*debug_area_first; 169 + static debug_info_t *debug_area_last; 168 170 static DEFINE_MUTEX(debug_mutex); 169 171 170 172 static int initialized; 171 173 static int debug_critical; 172 174 173 175 static const struct file_operations debug_file_ops = { 174 - .owner = THIS_MODULE, 175 - .read = debug_output, 176 - .write = debug_input, 177 - .open = debug_open, 176 + .owner = THIS_MODULE, 177 + .read = debug_output, 178 + .write = debug_input, 179 + .open = debug_open, 178 180 .release = debug_close, 179 181 .llseek = no_llseek, 180 182 }; ··· 189 191 * areas[areanumber][pagenumber][pageoffset] 190 192 */ 191 193 192 - static debug_entry_t*** 193 - debug_areas_alloc(int pages_per_area, int nr_areas) 194 + static debug_entry_t ***debug_areas_alloc(int pages_per_area, int nr_areas) 194 195 { 195 - debug_entry_t*** areas; 196 - int i,j; 196 + debug_entry_t ***areas; 197 + int i, j; 197 198 198 - areas = kmalloc(nr_areas * 199 - sizeof(debug_entry_t**), 200 - GFP_KERNEL); 199 + areas = kmalloc(nr_areas * sizeof(debug_entry_t **), GFP_KERNEL); 201 200 if (!areas) 202 201 goto fail_malloc_areas; 203 202 for (i = 0; i < nr_areas; i++) { 204 - areas[i] = kmalloc(pages_per_area * 205 - sizeof(debug_entry_t*),GFP_KERNEL); 206 - if (!areas[i]) { 203 + areas[i] = kmalloc(pages_per_area * sizeof(debug_entry_t *), GFP_KERNEL); 204 + if (!areas[i]) 207 205 goto fail_malloc_areas2; 208 - } 209 - for(j = 0; j < pages_per_area; j++) { 206 + for (j = 0; j < pages_per_area; j++) { 210 207 areas[i][j] = kzalloc(PAGE_SIZE, GFP_KERNEL); 211 - if(!areas[i][j]) { 212 - for(j--; j >=0 ; j--) { 208 + if (!areas[i][j]) { 209 + for (j--; j >= 0 ; j--) 213 210 kfree(areas[i][j]); 214 - } 215 211 kfree(areas[i]); 216 212 goto fail_malloc_areas2; 217 213 } ··· 214 222 return areas; 215 223 216 224 fail_malloc_areas2: 217 - for(i--; i >= 0; i--){ 218 - for(j=0; j < pages_per_area;j++){ 225 + for (i--; i >= 0; i--) { 226 + for (j = 0; j < pages_per_area; j++) 219 227 kfree(areas[i][j]); 220 - } 221 
 		kfree(areas[i]);
 	}
 	kfree(areas);
 fail_malloc_areas:
 	return NULL;
-
 }
-
 
 /*
  * debug_info_alloc
  * - alloc new debug-info
  */
-
-static debug_info_t*
-debug_info_alloc(const char *name, int pages_per_area, int nr_areas,
-		 int buf_size, int level, int mode)
+static debug_info_t *debug_info_alloc(const char *name, int pages_per_area,
+				      int nr_areas, int buf_size, int level,
+				      int mode)
 {
-	debug_info_t* rc;
+	debug_info_t *rc;
 
 	/* alloc everything */
-
 	rc = kmalloc(sizeof(debug_info_t), GFP_KERNEL);
-	if(!rc)
+	if (!rc)
 		goto fail_malloc_rc;
 	rc->active_entries = kcalloc(nr_areas, sizeof(int), GFP_KERNEL);
-	if(!rc->active_entries)
+	if (!rc->active_entries)
 		goto fail_malloc_active_entries;
 	rc->active_pages = kcalloc(nr_areas, sizeof(int), GFP_KERNEL);
-	if(!rc->active_pages)
+	if (!rc->active_pages)
 		goto fail_malloc_active_pages;
-	if((mode == ALL_AREAS) && (pages_per_area != 0)){
+	if ((mode == ALL_AREAS) && (pages_per_area != 0)) {
 		rc->areas = debug_areas_alloc(pages_per_area, nr_areas);
-		if(!rc->areas)
+		if (!rc->areas)
 			goto fail_malloc_areas;
 	} else {
 		rc->areas = NULL;
 	}
 
 	/* initialize members */
-
 	spin_lock_init(&rc->lock);
 	rc->pages_per_area = pages_per_area;
-	rc->nr_areas = nr_areas;
+	rc->nr_areas	   = nr_areas;
 	rc->active_area = 0;
-	rc->level = level;
-	rc->buf_size = buf_size;
-	rc->entry_size = sizeof(debug_entry_t) + buf_size;
+	rc->level	   = level;
+	rc->buf_size	   = buf_size;
+	rc->entry_size	   = sizeof(debug_entry_t) + buf_size;
 	strlcpy(rc->name, name, sizeof(rc->name));
 	memset(rc->views, 0, DEBUG_MAX_VIEWS * sizeof(struct debug_view *));
-	memset(rc->debugfs_entries, 0 ,DEBUG_MAX_VIEWS *
-		sizeof(struct dentry*));
+	memset(rc->debugfs_entries, 0, DEBUG_MAX_VIEWS * sizeof(struct dentry *));
 	refcount_set(&(rc->ref_count), 0);
 
 	return rc;
···
 * debug_areas_free
 * - free all debug areas
 */
-
-static void
-debug_areas_free(debug_info_t* db_info)
+static void debug_areas_free(debug_info_t *db_info)
 {
-	int i,j;
+	int i, j;
 
-	if(!db_info->areas)
+	if (!db_info->areas)
 		return;
 	for (i = 0; i < db_info->nr_areas; i++) {
-		for(j = 0; j < db_info->pages_per_area; j++) {
+		for (j = 0; j < db_info->pages_per_area; j++)
 			kfree(db_info->areas[i][j]);
-		}
 		kfree(db_info->areas[i]);
 	}
 	kfree(db_info->areas);
···
 * debug_info_free
 * - free memory debug-info
 */
-
-static void
-debug_info_free(debug_info_t* db_info){
+static void debug_info_free(debug_info_t *db_info)
+{
 	debug_areas_free(db_info);
 	kfree(db_info->active_entries);
 	kfree(db_info->active_pages);
···
 * - create new debug-info
 */
 
-static debug_info_t*
-debug_info_create(const char *name, int pages_per_area, int nr_areas,
-		  int buf_size, umode_t mode)
+static debug_info_t *debug_info_create(const char *name, int pages_per_area,
+				       int nr_areas, int buf_size, umode_t mode)
 {
-	debug_info_t* rc;
+	debug_info_t *rc;
 
-	rc = debug_info_alloc(name, pages_per_area, nr_areas, buf_size,
-		DEBUG_DEFAULT_LEVEL, ALL_AREAS);
-	if(!rc)
+	rc = debug_info_alloc(name, pages_per_area, nr_areas, buf_size,
+			      DEBUG_DEFAULT_LEVEL, ALL_AREAS);
+	if (!rc)
 		goto out;
 
 	rc->mode = mode & ~S_IFMT;
 
 	/* create root directory */
-	rc->debugfs_root_entry = debugfs_create_dir(rc->name,
-					debug_debugfs_root_entry);
+	rc->debugfs_root_entry = debugfs_create_dir(rc->name,
+						    debug_debugfs_root_entry);
 
 	/* append new element to linked list */
-	if (debug_area_first == NULL) {
-		/* first element in list */
-		debug_area_first = rc;
-		rc->prev = NULL;
-	} else {
-		/* append element to end of list */
-		debug_area_last->next = rc;
-		rc->prev = debug_area_last;
-	}
-	debug_area_last = rc;
-	rc->next = NULL;
+	if (!debug_area_first) {
+		/* first element in list */
+		debug_area_first = rc;
+		rc->prev = NULL;
+	} else {
+		/* append element to end of list */
+		debug_area_last->next = rc;
+		rc->prev = debug_area_last;
+	}
+	debug_area_last = rc;
+	rc->next = NULL;
 
 	refcount_set(&rc->ref_count, 1);
 out:
···
 * debug_info_copy
 * - copy debug-info
 */
-
-static debug_info_t*
-debug_info_copy(debug_info_t* in, int mode)
+static debug_info_t *debug_info_copy(debug_info_t *in, int mode)
 {
-	int i,j;
-	debug_info_t* rc;
-	unsigned long flags;
+	unsigned long flags;
+	debug_info_t *rc;
+	int i, j;
 
 	/* get a consistent copy of the debug areas */
 	do {
 		rc = debug_info_alloc(in->name, in->pages_per_area,
 			in->nr_areas, in->buf_size, in->level, mode);
 		spin_lock_irqsave(&in->lock, flags);
-		if(!rc)
+		if (!rc)
 			goto out;
 		/* has something changed in the meantime ? */
-		if((rc->pages_per_area == in->pages_per_area) &&
-		   (rc->nr_areas == in->nr_areas)) {
+		if ((rc->pages_per_area == in->pages_per_area) &&
+		    (rc->nr_areas == in->nr_areas)) {
 			break;
 		}
 		spin_unlock_irqrestore(&in->lock, flags);
···
 	} while (1);
 
 	if (mode == NO_AREAS)
-		goto out;
+		goto out;
 
-	for(i = 0; i < in->nr_areas; i++){
-		for(j = 0; j < in->pages_per_area; j++) {
-			memcpy(rc->areas[i][j], in->areas[i][j],PAGE_SIZE);
-		}
-	}
+	for (i = 0; i < in->nr_areas; i++) {
+		for (j = 0; j < in->pages_per_area; j++)
+			memcpy(rc->areas[i][j], in->areas[i][j], PAGE_SIZE);
+	}
 out:
-	spin_unlock_irqrestore(&in->lock, flags);
-	return rc;
+	spin_unlock_irqrestore(&in->lock, flags);
+	return rc;
 }
 
 /*
 * debug_info_get
 * - increments reference count for debug-info
 */
-
-static void
-debug_info_get(debug_info_t * db_info)
+static void debug_info_get(debug_info_t *db_info)
 {
 	if (db_info)
 		refcount_inc(&db_info->ref_count);
···
 * debug_info_put:
 * - decreases reference count for debug-info and frees it if necessary
 */
-
-static void
-debug_info_put(debug_info_t *db_info)
+static void debug_info_put(debug_info_t *db_info)
 {
 	int i;
 
···
 			debugfs_remove(db_info->debugfs_entries[i]);
 		}
 		debugfs_remove(db_info->debugfs_root_entry);
-		if(db_info == debug_area_first)
+		if (db_info == debug_area_first)
 			debug_area_first = db_info->next;
-		if(db_info == debug_area_last)
+		if (db_info == debug_area_last)
 			debug_area_last = db_info->prev;
-		if(db_info->prev) db_info->prev->next = db_info->next;
-		if(db_info->next) db_info->next->prev = db_info->prev;
+		if (db_info->prev)
+			db_info->prev->next = db_info->next;
+		if (db_info->next)
+			db_info->next->prev = db_info->prev;
 		debug_info_free(db_info);
 	}
 }
···
 * debug_format_entry:
 * - format one debug entry and return size of formated data
 */
-
-static int
-debug_format_entry(file_private_info_t *p_info)
+static int debug_format_entry(file_private_info_t *p_info)
 {
-	debug_info_t *id_snap = p_info->debug_info_snap;
+	debug_info_t *id_snap	= p_info->debug_info_snap;
 	struct debug_view *view = p_info->view;
 	debug_entry_t *act_entry;
 	size_t len = 0;
-	if(p_info->act_entry == DEBUG_PROLOG_ENTRY){
+
+	if (p_info->act_entry == DEBUG_PROLOG_ENTRY) {
 		/* print prolog */
-		if (view->prolog_proc)
-			len += view->prolog_proc(id_snap,view,p_info->temp_buf);
+		if (view->prolog_proc)
+			len += view->prolog_proc(id_snap, view, p_info->temp_buf);
 		goto out;
 	}
 	if (!id_snap->areas) /* this is true, if we have a prolog only view */
 		goto out;    /* or if 'pages_per_area' is 0 */
-	act_entry = (debug_entry_t *) ((char*)id_snap->areas[p_info->act_area]
-				[p_info->act_page] + p_info->act_entry);
-
+	act_entry = (debug_entry_t *) ((char *)id_snap->areas[p_info->act_area]
+				       [p_info->act_page] + p_info->act_entry);
+
 	if (act_entry->id.stck == 0LL)
-		goto out;  /* empty entry */
+		goto out; /* empty entry */
 	if (view->header_proc)
 		len += view->header_proc(id_snap, view, p_info->act_area,
-					act_entry, p_info->temp_buf + len);
+					 act_entry, p_info->temp_buf + len);
 	if (view->format_proc)
 		len += view->format_proc(id_snap, view, p_info->temp_buf + len,
-					DEBUG_DATA(act_entry));
+					 DEBUG_DATA(act_entry));
 out:
-	return len;
+	return len;
 }
 
 /*
 * debug_next_entry:
 * - goto next entry in p_info
 */
-
-static inline int
-debug_next_entry(file_private_info_t *p_info)
+static inline int debug_next_entry(file_private_info_t *p_info)
 {
 	debug_info_t *id;
 
 	id = p_info->debug_info_snap;
-	if(p_info->act_entry == DEBUG_PROLOG_ENTRY){
+	if (p_info->act_entry == DEBUG_PROLOG_ENTRY) {
 		p_info->act_entry = 0;
 		p_info->act_page = 0;
 		goto out;
 	}
-	if(!id->areas)
+	if (!id->areas)
 		return 1;
 	p_info->act_entry += id->entry_size;
 	/* switch to next page, if we reached the end of the page */
-	if (p_info->act_entry > (PAGE_SIZE - id->entry_size)){
+	if (p_info->act_entry > (PAGE_SIZE - id->entry_size)) {
 		/* next page */
 		p_info->act_entry = 0;
 		p_info->act_page += 1;
-		if((p_info->act_page % id->pages_per_area) == 0) {
+		if ((p_info->act_page % id->pages_per_area) == 0) {
 			/* next area */
-			p_info->act_area++;
-			p_info->act_page=0;
+			p_info->act_area++;
+			p_info->act_page = 0;
 		}
-		if(p_info->act_area >= id->nr_areas)
+		if (p_info->act_area >= id->nr_areas)
 			return 1;
 	}
 out:
-	return 0;
+	return 0;
 }
 
 /*
···
 * - called for user read()
 * - copies formated debug entries to the user buffer
 */
-
-static ssize_t
-debug_output(struct file *file,		/* file descriptor */
-	     char __user *user_buf,	/* user buffer */
-	     size_t len,		/* length of buffer */
-	     loff_t *offset)		/* offset in the file */
+static ssize_t debug_output(struct file *file,		/* file descriptor */
+			    char __user *user_buf,	/* user buffer */
+			    size_t len,			/* length of buffer */
+			    loff_t *offset)		/* offset in the file */
 {
 	size_t count = 0;
 	size_t entry_offset;
 	file_private_info_t *p_info;
 
-	p_info = ((file_private_info_t *) file->private_data);
-	if (*offset != p_info->offset)
+	p_info = (file_private_info_t *) file->private_data;
+	if (*offset != p_info->offset)
 		return -EPIPE;
-	if(p_info->act_area >= p_info->debug_info_snap->nr_areas)
+	if (p_info->act_area >= p_info->debug_info_snap->nr_areas)
 		return 0;
 	entry_offset = p_info->act_entry_offset;
-	while(count < len){
-		int formatted_line_size;
+	while (count < len) {
 		int formatted_line_residue;
+		int formatted_line_size;
 		int user_buf_residue;
 		size_t copy_size;
 
···
 		formatted_line_residue = formatted_line_size - entry_offset;
 		user_buf_residue = len-count;
 		copy_size = min(user_buf_residue, formatted_line_residue);
-		if(copy_size){
+		if (copy_size) {
 			if (copy_to_user(user_buf + count, p_info->temp_buf
-					+ entry_offset, copy_size))
+					 + entry_offset, copy_size))
 				return -EFAULT;
 			count += copy_size;
 			entry_offset += copy_size;
 		}
-		if(copy_size == formatted_line_residue){
+		if (copy_size == formatted_line_residue) {
 			entry_offset = 0;
-			if(debug_next_entry(p_info))
+			if (debug_next_entry(p_info))
 				goto out;
 		}
 	}
 out:
-	p_info->offset = *offset + count;
+	p_info->offset = *offset + count;
 	p_info->act_entry_offset = entry_offset;
 	*offset = p_info->offset;
 	return count;
···
 * - called for user write()
 * - calls input function of view
 */
-
-static ssize_t
-debug_input(struct file *file, const char __user *user_buf, size_t length,
-	    loff_t *offset)
+static ssize_t debug_input(struct file *file, const char __user *user_buf,
+			   size_t length, loff_t *offset)
 {
-	int rc = 0;
 	file_private_info_t *p_info;
+	int rc = 0;
 
 	mutex_lock(&debug_mutex);
 	p_info = ((file_private_info_t *) file->private_data);
-	if (p_info->view->input_proc)
+	if (p_info->view->input_proc) {
 		rc = p_info->view->input_proc(p_info->debug_info_org,
 					      p_info->view, file, user_buf,
 					      length, offset);
-	else
+	} else {
 		rc = -EPERM;
+	}
 	mutex_unlock(&debug_mutex);
-	return rc;		/* number of input characters */
+	return rc; /* number of input characters */
 }
 
 /*
···
 * - copies formated output to private_data area of the file
 *   handle
 */
-
-static int
-debug_open(struct inode *inode, struct file *file)
+static int debug_open(struct inode *inode, struct file *file)
 {
-	int i, rc = 0;
-	file_private_info_t *p_info;
 	debug_info_t *debug_info, *debug_info_snapshot;
+	file_private_info_t *p_info;
+	int i, rc = 0;
 
 	mutex_lock(&debug_mutex);
 	debug_info = file_inode(file)->i_private;
···
 	for (i = 0; i < DEBUG_MAX_VIEWS; i++) {
 		if (!debug_info->views[i])
 			continue;
-		else if (debug_info->debugfs_entries[i] ==
-			 file->f_path.dentry) {
-			goto found;	/* found view ! */
-		}
+		else if (debug_info->debugfs_entries[i] == file->f_path.dentry)
+			goto found; /* found view ! */
 	}
 	/* no entry found */
 	rc = -EINVAL;
···
 
 found:
 
-	/* Make snapshot of current debug areas to get it consistent.     */
+	/* Make snapshot of current debug areas to get it consistent.	  */
 	/* To copy all the areas is only needed, if we have a view which  */
 	/* formats the debug areas. */
 
-	if(!debug_info->views[i]->format_proc &&
-	   !debug_info->views[i]->header_proc){
+	if (!debug_info->views[i]->format_proc && !debug_info->views[i]->header_proc)
 		debug_info_snapshot = debug_info_copy(debug_info, NO_AREAS);
-	} else {
+	else
 		debug_info_snapshot = debug_info_copy(debug_info, ALL_AREAS);
-	}
 
-	if(!debug_info_snapshot){
+	if (!debug_info_snapshot) {
 		rc = -ENOMEM;
 		goto out;
 	}
-	p_info = kmalloc(sizeof(file_private_info_t),
-			 GFP_KERNEL);
-	if(!p_info){
+	p_info = kmalloc(sizeof(file_private_info_t), GFP_KERNEL);
+	if (!p_info) {
 		debug_info_free(debug_info_snapshot);
 		rc = -ENOMEM;
 		goto out;
 	}
 	p_info->offset = 0;
 	p_info->debug_info_snap = debug_info_snapshot;
-	p_info->debug_info_org = debug_info;
+	p_info->debug_info_org	= debug_info;
 	p_info->view = debug_info->views[i];
 	p_info->act_area = 0;
 	p_info->act_page = 0;
···
 * - called for user close()
 * - deletes private_data area of the file handle
 */
-
-static int
-debug_close(struct inode *inode, struct file *file)
+static int debug_close(struct inode *inode, struct file *file)
 {
 	file_private_info_t *p_info;
+
 	p_info = (file_private_info_t *) file->private_data;
-	if(p_info->debug_info_snap)
+	if (p_info->debug_info_snap)
 		debug_info_free(p_info->debug_info_snap);
 	debug_info_put(p_info->debug_info_org);
 	kfree(file->private_data);
-	return 0;		/* success */
+	return 0; /* success */
 }
 
 /*
···
 * The mode parameter allows to specify access rights for the s390dbf files
 * - Returns handle for debug area
 */
-
 debug_info_t *debug_register_mode(const char *name, int pages_per_area,
 				  int nr_areas, int buf_size, umode_t mode,
 				  uid_t uid, gid_t gid)
···
 	BUG_ON(!initialized);
 	mutex_lock(&debug_mutex);
 
-	/* create new debug_info */
-
+	/* create new debug_info */
 	rc = debug_info_create(name, pages_per_area, nr_areas, buf_size, mode);
-	if(!rc)
+	if (!rc)
 		goto out;
 	debug_register_view(rc, &debug_level_view);
-	debug_register_view(rc, &debug_flush_view);
+	debug_register_view(rc, &debug_flush_view);
 	debug_register_view(rc, &debug_pages_view);
 out:
-	if (!rc){
+	if (!rc)
 		pr_err("Registering debug feature %s failed\n", name);
-	}
 	mutex_unlock(&debug_mutex);
 	return rc;
 }
···
 * - creates and initializes debug area for the caller
 * - returns handle for debug area
 */
-
 debug_info_t *debug_register(const char *name, int pages_per_area,
 			     int nr_areas, int buf_size)
 {
···
 * debug_unregister:
 * - give back debug area
 */
-
-void
-debug_unregister(debug_info_t * id)
+void debug_unregister(debug_info_t *id)
 {
 	if (!id)
-		goto out;
+		return;
 	mutex_lock(&debug_mutex);
 	debug_info_put(id);
 	mutex_unlock(&debug_mutex);
-
-out:
-	return;
 }
 EXPORT_SYMBOL(debug_unregister);
 
···
 * debug_set_size:
 * - set area size (number of pages) and number of areas
 */
-static int
-debug_set_size(debug_info_t* id, int nr_areas, int pages_per_area)
+static int debug_set_size(debug_info_t *id, int nr_areas, int pages_per_area)
 {
+	debug_entry_t ***new_areas;
 	unsigned long flags;
-	debug_entry_t *** new_areas;
-	int rc=0;
+	int rc = 0;
 
-	if(!id || (nr_areas <= 0) || (pages_per_area < 0))
+	if (!id || (nr_areas <= 0) || (pages_per_area < 0))
 		return -EINVAL;
-	if(pages_per_area > 0){
+	if (pages_per_area > 0) {
 		new_areas = debug_areas_alloc(pages_per_area, nr_areas);
-		if(!new_areas) {
+		if (!new_areas) {
 			pr_info("Allocating memory for %i pages failed\n",
 				pages_per_area);
 			rc = -ENOMEM;
···
 	} else {
 		new_areas = NULL;
 	}
-	spin_lock_irqsave(&id->lock,flags);
+	spin_lock_irqsave(&id->lock, flags);
 	debug_areas_free(id);
 	id->areas = new_areas;
 	id->nr_areas = nr_areas;
 	id->pages_per_area = pages_per_area;
 	id->active_area = 0;
-	memset(id->active_entries,0,sizeof(int)*id->nr_areas);
+	memset(id->active_entries, 0, sizeof(int)*id->nr_areas);
 	memset(id->active_pages, 0, sizeof(int)*id->nr_areas);
-	spin_unlock_irqrestore(&id->lock,flags);
-	pr_info("%s: set new size (%i pages)\n" ,id->name, pages_per_area);
+	spin_unlock_irqrestore(&id->lock, flags);
+	pr_info("%s: set new size (%i pages)\n", id->name, pages_per_area);
 out:
 	return rc;
 }
···
 * debug_set_level:
 * - set actual debug level
 */
-
-void
-debug_set_level(debug_info_t* id, int new_level)
+void debug_set_level(debug_info_t *id, int new_level)
 {
 	unsigned long flags;
-	if(!id)
-		return;
-	spin_lock_irqsave(&id->lock,flags);
-	if(new_level == DEBUG_OFF_LEVEL){
-		id->level = DEBUG_OFF_LEVEL;
-		pr_info("%s: switched off\n",id->name);
-	} else if ((new_level > DEBUG_MAX_LEVEL) || (new_level < 0)) {
+
+	if (!id)
+		return;
+	spin_lock_irqsave(&id->lock, flags);
+	if (new_level == DEBUG_OFF_LEVEL) {
+		id->level = DEBUG_OFF_LEVEL;
+		pr_info("%s: switched off\n", id->name);
+	} else if ((new_level > DEBUG_MAX_LEVEL) || (new_level < 0)) {
 		pr_info("%s: level %i is out of range (%i - %i)\n",
-			id->name, new_level, 0, DEBUG_MAX_LEVEL);
-	} else {
-		id->level = new_level;
-	}
-	spin_unlock_irqrestore(&id->lock,flags);
+			id->name, new_level, 0, DEBUG_MAX_LEVEL);
+	} else {
+		id->level = new_level;
+	}
+	spin_unlock_irqrestore(&id->lock, flags);
 }
 EXPORT_SYMBOL(debug_set_level);
 
···
 * proceed_active_entry:
 * - set active entry to next in the ring buffer
 */
-
-static inline void
-proceed_active_entry(debug_info_t * id)
+static inline void proceed_active_entry(debug_info_t *id)
 {
 	if ((id->active_entries[id->active_area] += id->entry_size)
-	    > (PAGE_SIZE - id->entry_size)){
+	    > (PAGE_SIZE - id->entry_size)) {
 		id->active_entries[id->active_area] = 0;
 		id->active_pages[id->active_area] =
 			(id->active_pages[id->active_area] + 1) %
···
 * proceed_active_area:
 * - set active area to next in the ring buffer
 */
-
-static inline void
-proceed_active_area(debug_info_t * id)
+static inline void proceed_active_area(debug_info_t *id)
 {
 	id->active_area++;
 	id->active_area = id->active_area % id->nr_areas;
···
 /*
 * get_active_entry:
 */
-
-static inline debug_entry_t*
-get_active_entry(debug_info_t * id)
+static inline debug_entry_t *get_active_entry(debug_info_t *id)
 {
 	return (debug_entry_t *) (((char *) id->areas[id->active_area]
-				   [id->active_pages[id->active_area]]) +
-				  id->active_entries[id->active_area]);
+				   [id->active_pages[id->active_area]]) +
+				  id->active_entries[id->active_area]);
 }
 
 /*
···
 * - set timestamp, caller address, cpu number etc.
 */
 
-static inline void
-debug_finish_entry(debug_info_t * id, debug_entry_t* active, int level,
-		   int exception)
+static inline void debug_finish_entry(debug_info_t *id, debug_entry_t *active,
+				      int level, int exception)
 {
 	active->id.stck = get_tod_clock_fast() -
 		*(unsigned long long *) &tod_clock_base[1];
 	active->id.fields.cpuid = smp_processor_id();
 	active->caller = __builtin_return_address(0);
 	active->id.fields.exception = exception;
-	active->id.fields.level = level;
+	active->id.fields.level = level;
 	proceed_active_entry(id);
-	if(exception)
+	if (exception)
 		proceed_active_area(id);
 }
 
-static int debug_stoppable=1;
-static int debug_active=1;
+static int debug_stoppable = 1;
+static int debug_active = 1;
 
 #define CTL_S390DBF_STOPPABLE 5678
 #define CTL_S390DBF_ACTIVE 5679
···
 * always allow read, allow write only if debug_stoppable is set or
 * if debug_active is already off
 */
-static int
-s390dbf_procactive(struct ctl_table *table, int write,
-		   void __user *buffer, size_t *lenp, loff_t *ppos)
+static int s390dbf_procactive(struct ctl_table *table, int write,
+			      void __user *buffer, size_t *lenp, loff_t *ppos)
 {
 	if (!write || debug_stoppable || !debug_active)
 		return proc_dointvec(table, write, buffer, lenp, ppos);
···
 	return 0;
 }
 
-
 static struct ctl_table s390dbf_table[] = {
 	{
-		.procname       = "debug_stoppable",
+		.procname	= "debug_stoppable",
 		.data		= &debug_stoppable,
 		.maxlen		= sizeof(int),
-		.mode           = S_IRUGO | S_IWUSR,
-		.proc_handler   = proc_dointvec,
+		.mode		= S_IRUGO | S_IWUSR,
+		.proc_handler	= proc_dointvec,
 	},
-	 {
-		.procname       = "debug_active",
+	{
+		.procname	= "debug_active",
 		.data		= &debug_active,
 		.maxlen		= sizeof(int),
-		.mode           = S_IRUGO | S_IWUSR,
-		.proc_handler   = s390dbf_procactive,
+		.mode		= S_IRUGO | S_IWUSR,
+		.proc_handler	= s390dbf_procactive,
 	},
 	{ }
 };
 
 static struct ctl_table s390dbf_dir_table[] = {
 	{
-		.procname       = "s390dbf",
-		.maxlen         = 0,
-		.mode           = S_IRUGO | S_IXUGO,
-		.child          = s390dbf_table,
+		.procname	= "s390dbf",
+		.maxlen		= 0,
+		.mode		= S_IRUGO | S_IXUGO,
+		.child		= s390dbf_table,
 	},
 	{ }
 };
 
 static struct ctl_table_header *s390dbf_sysctl_header;
 
-void
-debug_stop_all(void)
+void debug_stop_all(void)
 {
 	if (debug_stoppable)
 		debug_active = 0;
···
 * debug_event_common:
 * - write debug entry with given size
 */
-
-debug_entry_t*
-debug_event_common(debug_info_t * id, int level, const void *buf, int len)
+debug_entry_t *debug_event_common(debug_info_t *id, int level, const void *buf,
+				  int len)
 {
-	unsigned long flags;
 	debug_entry_t *active;
+	unsigned long flags;
 
 	if (!debug_active || !id->areas)
 		return NULL;
 	if (debug_critical) {
 		if (!spin_trylock_irqsave(&id->lock, flags))
 			return NULL;
-	} else
+	} else {
 		spin_lock_irqsave(&id->lock, flags);
-	active = get_active_entry(id);
-	memset(DEBUG_DATA(active), 0, id->buf_size);
-	memcpy(DEBUG_DATA(active), buf, min(len, id->buf_size));
-	debug_finish_entry(id, active, level, 0);
-	spin_unlock_irqrestore(&id->lock, flags);
+	}
+	do {
+		active = get_active_entry(id);
+		memcpy(DEBUG_DATA(active), buf, min(len, id->buf_size));
+		if (len < id->buf_size)
+			memset((DEBUG_DATA(active)) + len, 0, id->buf_size - len);
+		debug_finish_entry(id, active, level, 0);
+		len -= id->buf_size;
+		buf += id->buf_size;
+	} while (len > 0);
 
+	spin_unlock_irqrestore(&id->lock, flags);
 	return active;
 }
 EXPORT_SYMBOL(debug_event_common);
···
 * debug_exception_common:
 * - write debug entry with given size and switch to next debug area
 */
-
-debug_entry_t
-*debug_exception_common(debug_info_t * id, int level, const void *buf, int len)
+debug_entry_t *debug_exception_common(debug_info_t *id, int level,
+				      const void *buf, int len)
 {
-	unsigned long flags;
 	debug_entry_t *active;
+	unsigned long flags;
 
 	if (!debug_active || !id->areas)
 		return NULL;
 	if (debug_critical) {
 		if (!spin_trylock_irqsave(&id->lock, flags))
 			return NULL;
-	} else
+	} else {
 		spin_lock_irqsave(&id->lock, flags);
-	active = get_active_entry(id);
-	memset(DEBUG_DATA(active), 0, id->buf_size);
-	memcpy(DEBUG_DATA(active), buf, min(len, id->buf_size));
-	debug_finish_entry(id, active, level, 1);
-	spin_unlock_irqrestore(&id->lock, flags);
+	}
+	do {
+		active = get_active_entry(id);
+		memcpy(DEBUG_DATA(active), buf, min(len, id->buf_size));
+		if (len < id->buf_size)
+			memset((DEBUG_DATA(active)) + len, 0, id->buf_size - len);
+		debug_finish_entry(id, active, level, len <= id->buf_size);
+		len -= id->buf_size;
+		buf += id->buf_size;
+	} while (len > 0);
 
+	spin_unlock_irqrestore(&id->lock, flags);
 	return active;
 }
 EXPORT_SYMBOL(debug_exception_common);
···
 /*
 * counts arguments in format string for sprintf view
 */
-
-static inline int
-debug_count_numargs(char *string)
+static inline int debug_count_numargs(char *string)
 {
-	int numargs=0;
+	int numargs = 0;
 
-	while(*string) {
-		if(*string++=='%')
+	while (*string) {
+		if (*string++ == '%')
 			numargs++;
 	}
-	return(numargs);
+	return numargs;
 }
 
 /*
 * debug_sprintf_event:
 */
-
-debug_entry_t*
-__debug_sprintf_event(debug_info_t *id, int level, char *string, ...)
+debug_entry_t *__debug_sprintf_event(debug_info_t *id, int level, char *string, ...)
 {
-	va_list ap;
-	int numargs,idx;
-	unsigned long flags;
 	debug_sprintf_entry_t *curr_event;
 	debug_entry_t *active;
+	unsigned long flags;
+	int numargs, idx;
+	va_list ap;
 
 	if (!debug_active || !id->areas)
 		return NULL;
-	numargs=debug_count_numargs(string);
+	numargs = debug_count_numargs(string);
 
 	if (debug_critical) {
 		if (!spin_trylock_irqsave(&id->lock, flags))
 			return NULL;
-	} else
+	} else {
 		spin_lock_irqsave(&id->lock, flags);
+	}
 	active = get_active_entry(id);
-	curr_event=(debug_sprintf_entry_t *) DEBUG_DATA(active);
-	va_start(ap,string);
-	curr_event->string=string;
-	for(idx=0;idx<min(numargs,(int)(id->buf_size / sizeof(long))-1);idx++)
-		curr_event->args[idx]=va_arg(ap,long);
+	curr_event = (debug_sprintf_entry_t *) DEBUG_DATA(active);
+	va_start(ap, string);
+	curr_event->string = string;
+	for (idx = 0; idx < min(numargs, (int)(id->buf_size / sizeof(long)) - 1); idx++)
+		curr_event->args[idx] = va_arg(ap, long);
 	va_end(ap);
 	debug_finish_entry(id, active, level, 0);
 	spin_unlock_irqrestore(&id->lock, flags);
···
 /*
 * debug_sprintf_exception:
 */
-
-debug_entry_t*
-__debug_sprintf_exception(debug_info_t *id, int level, char *string, ...)
+debug_entry_t *__debug_sprintf_exception(debug_info_t *id, int level, char *string, ...)
 {
-	va_list ap;
-	int numargs,idx;
-	unsigned long flags;
 	debug_sprintf_entry_t *curr_event;
 	debug_entry_t *active;
+	unsigned long flags;
+	int numargs, idx;
+	va_list ap;
 
 	if (!debug_active || !id->areas)
 		return NULL;
 
-	numargs=debug_count_numargs(string);
+	numargs = debug_count_numargs(string);
 
 	if (debug_critical) {
 		if (!spin_trylock_irqsave(&id->lock, flags))
 			return NULL;
-	} else
+	} else {
 		spin_lock_irqsave(&id->lock, flags);
+	}
 	active = get_active_entry(id);
-	curr_event=(debug_sprintf_entry_t *)DEBUG_DATA(active);
-	va_start(ap,string);
-	curr_event->string=string;
-	for(idx=0;idx<min(numargs,(int)(id->buf_size / sizeof(long))-1);idx++)
-		curr_event->args[idx]=va_arg(ap,long);
+	curr_event = (debug_sprintf_entry_t *)DEBUG_DATA(active);
+	va_start(ap, string);
+	curr_event->string = string;
+	for (idx = 0; idx < min(numargs, (int)(id->buf_size / sizeof(long)) - 1); idx++)
+		curr_event->args[idx] = va_arg(ap, long);
 	va_end(ap);
 	debug_finish_entry(id, active, level, 1);
 	spin_unlock_irqrestore(&id->lock, flags);
···
 /*
 * debug_register_view:
 */
-
-int
-debug_register_view(debug_info_t * id, struct debug_view *view)
+int debug_register_view(debug_info_t *id, struct debug_view *view)
 {
+	unsigned long flags;
+	struct dentry *pde;
+	umode_t mode;
 	int rc = 0;
 	int i;
-	unsigned long flags;
-	umode_t mode;
-	struct dentry *pde;
 
 	if (!id)
 		goto out;
···
 	if (!view->input_proc)
 		mode &= ~(S_IWUSR | S_IWGRP | S_IWOTH);
 	pde = debugfs_create_file(view->name, mode, id->debugfs_root_entry,
-				  id , &debug_file_ops);
-	if (!pde){
+				  id, &debug_file_ops);
+	if (!pde) {
 		pr_err("Registering view %s/%s failed due to out of "
-		       "memory\n", id->name,view->name);
+		       "memory\n", id->name, view->name);
 		rc = -1;
 		goto out;
 	}
···
 /*
 * debug_unregister_view:
 */
-
-int
-debug_unregister_view(debug_info_t * id, struct debug_view *view)
+int debug_unregister_view(debug_info_t *id, struct debug_view *view)
 {
 	struct dentry *dentry = NULL;
 	unsigned long flags;
···
 		if (id->views[i] == view)
 			break;
 	}
-	if (i == DEBUG_MAX_VIEWS)
+	if (i == DEBUG_MAX_VIEWS) {
 		rc = -1;
-	else {
+	} else {
 		dentry = id->debugfs_entries[i];
 		id->views[i] = NULL;
 		id->debugfs_entries[i] = NULL;
···
 }
 EXPORT_SYMBOL(debug_unregister_view);
 
-static inline char *
-debug_get_user_string(const char __user *user_buf, size_t user_len)
+static inline char *debug_get_user_string(const char __user *user_buf,
+					  size_t user_len)
 {
-	char* buffer;
+	char *buffer;
 
 	buffer = kmalloc(user_len + 1, GFP_KERNEL);
 	if (!buffer)
···
 		buffer[user_len - 1] = 0;
 	else
 		buffer[user_len] = 0;
-	return buffer;
+	return buffer;
 }
 
-static inline int
-debug_get_uint(char *buf)
+static inline int debug_get_uint(char *buf)
 {
 	int rc;
 
 	buf = skip_spaces(buf);
 	rc = simple_strtoul(buf, &buf, 10);
-	if(*buf){
+	if (*buf)
 		rc = -EINVAL;
-	}
 	return rc;
 }
 
···
 * prints out actual debug level
 */
 
-static int
-debug_prolog_pages_fn(debug_info_t * id,
-		      struct debug_view *view, char *out_buf)
+static int debug_prolog_pages_fn(debug_info_t *id, struct debug_view *view,
+				 char *out_buf)
 {
 	return sprintf(out_buf, "%i\n", id->pages_per_area);
 }
···
 * reads new size (number of pages per debug area)
 */
 
-static int
-debug_input_pages_fn(debug_info_t * id, struct debug_view *view,
-		     struct file *file, const char __user *user_buf,
-		     size_t user_len, loff_t * offset)
+static int debug_input_pages_fn(debug_info_t *id, struct debug_view *view,
+				struct file *file, const char __user *user_buf,
+				size_t user_len, loff_t *offset)
 {
+	int rc, new_pages;
 	char *str;
-	int rc,new_pages;
 
 	if (user_len > 0x10000)
-		user_len = 0x10000;
-	if (*offset != 0){
+		user_len = 0x10000;
+	if (*offset != 0) {
 		rc = -EPIPE;
 		goto out;
 	}
-	str = debug_get_user_string(user_buf,user_len);
-	if(IS_ERR(str)){
+	str = debug_get_user_string(user_buf, user_len);
+	if (IS_ERR(str)) {
 		rc = PTR_ERR(str);
 		goto out;
 	}
 	new_pages = debug_get_uint(str);
-	if(new_pages < 0){
+	if (new_pages < 0) {
 		rc = -EINVAL;
 		goto free_str;
 	}
-	rc = debug_set_size(id,id->nr_areas, new_pages);
-	if(rc != 0){
+	rc = debug_set_size(id, id->nr_areas, new_pages);
+	if (rc != 0) {
 		rc = -EINVAL;
 		goto free_str;
 	}
···
 /*
 * prints out actual debug level
 */
-
-static int
-debug_prolog_level_fn(debug_info_t * id, struct debug_view *view, char *out_buf)
+static int debug_prolog_level_fn(debug_info_t *id, struct debug_view *view,
+				 char *out_buf)
 {
 	int rc = 0;
 
-	if(id->level == DEBUG_OFF_LEVEL) {
-		rc = sprintf(out_buf,"-\n");
-	}
-	else {
+	if (id->level == DEBUG_OFF_LEVEL)
+		rc = sprintf(out_buf, "-\n");
+	else
 		rc = sprintf(out_buf, "%i\n", id->level);
-	}
 	return rc;
 }
 
 /*
 * reads new debug level
 */
-
-static int
-debug_input_level_fn(debug_info_t * id, struct debug_view *view,
-		     struct file *file, const char __user *user_buf,
-		     size_t user_len, loff_t * offset)
+static int debug_input_level_fn(debug_info_t *id, struct debug_view *view,
+				struct file *file, const char __user *user_buf,
+				size_t user_len, loff_t *offset)
 {
+	int rc, new_level;
 	char *str;
-	int rc,new_level;
 
 	if (user_len > 0x10000)
-		user_len = 0x10000;
-	if (*offset != 0){
+		user_len = 0x10000;
+	if (*offset != 0) {
 		rc = -EPIPE;
 		goto out;
 	}
-	str = debug_get_user_string(user_buf,user_len);
-	if(IS_ERR(str)){
+	str = debug_get_user_string(user_buf, user_len);
+	if (IS_ERR(str)) {
 		rc = PTR_ERR(str);
 		goto out;
 	}
-	if(str[0] == '-'){
+	if (str[0] == '-') {
 		debug_set_level(id, DEBUG_OFF_LEVEL);
 		rc = user_len;
 		goto free_str;
 	} else {
 		new_level = debug_get_uint(str);
 	}
-	if(new_level < 0) {
+	if (new_level < 0) {
 		pr_warn("%s is not a valid level for a debug feature\n", str);
 		rc = -EINVAL;
 	} else {
···
 	return rc;		/* number of input characters */
 }
 
-
 /*
 * flushes debug areas
 */
-
-static void debug_flush(debug_info_t* id, int area)
+static void debug_flush(debug_info_t *id, int area)
 {
-	unsigned long flags;
-	int i,j;
+	unsigned long flags;
+	int i, j;
 
-	if(!id || !id->areas)
-		return;
-	spin_lock_irqsave(&id->lock,flags);
-	if(area == DEBUG_FLUSH_ALL){
-		id->active_area = 0;
-		memset(id->active_entries, 0, id->nr_areas * sizeof(int));
-		for (i = 0; i < id->nr_areas; i++) {
+	if (!id || !id->areas)
+		return;
+	spin_lock_irqsave(&id->lock, flags);
+	if (area == DEBUG_FLUSH_ALL) {
+		id->active_area = 0;
+		memset(id->active_entries, 0, id->nr_areas * sizeof(int));
+		for (i = 0; i < id->nr_areas; i++) {
 			id->active_pages[i] = 0;
-			for(j = 0; j < id->pages_per_area; j++) {
-				memset(id->areas[i][j], 0, PAGE_SIZE);
-			}
+			for (j = 0; j < id->pages_per_area; j++)
+				memset(id->areas[i][j], 0, PAGE_SIZE);
 		}
-	} else if(area >= 0 && area < id->nr_areas) {
-		id->active_entries[area] = 0;
+	} else if (area >= 0 && area < id->nr_areas) {
+		id->active_entries[area] = 0;
 		id->active_pages[area] = 0;
-		for(i = 0; i < id->pages_per_area; i++) {
-			memset(id->areas[area][i],0,PAGE_SIZE);
-		}
-	}
-	spin_unlock_irqrestore(&id->lock,flags);
+		for (i = 0; i < id->pages_per_area; i++)
+			memset(id->areas[area][i], 0, PAGE_SIZE);
+	}
+	spin_unlock_irqrestore(&id->lock, flags);
 }
 
 /*
- * view function: flushes debug areas
+ * view function: flushes debug areas
 */
-
-static int
-debug_input_flush_fn(debug_info_t * id, struct debug_view *view,
-		     struct file *file, const char __user *user_buf,
-		     size_t user_len, loff_t * offset)
+static int debug_input_flush_fn(debug_info_t *id, struct debug_view *view,
+				struct file *file, const char __user *user_buf,
+				size_t user_len, loff_t *offset)
 {
-	char input_buf[1];
-	int rc = user_len;
+	char input_buf[1];
+	int rc = user_len;
 
 	if (user_len > 0x10000)
-		user_len = 0x10000;
-	if (*offset != 0){
+		user_len = 0x10000;
+	if (*offset != 0) {
 		rc = -EPIPE;
-		goto out;
+		goto out;
 	}
if (copy_from_user(input_buf, user_buf, 1)){ 1308 - rc = -EFAULT; 1309 - goto out; 1310 - } 1311 - if(input_buf[0] == '-') { 1312 - debug_flush(id, DEBUG_FLUSH_ALL); 1313 - goto out; 1314 - } 1315 - if (isdigit(input_buf[0])) { 1316 - int area = ((int) input_buf[0] - (int) '0'); 1317 - debug_flush(id, area); 1318 - goto out; 1319 - } 1368 + if (copy_from_user(input_buf, user_buf, 1)) { 1369 + rc = -EFAULT; 1370 + goto out; 1371 + } 1372 + if (input_buf[0] == '-') { 1373 + debug_flush(id, DEBUG_FLUSH_ALL); 1374 + goto out; 1375 + } 1376 + if (isdigit(input_buf[0])) { 1377 + int area = ((int) input_buf[0] - (int) '0'); 1378 + 1379 + debug_flush(id, area); 1380 + goto out; 1381 + } 1320 1382 1321 1383 pr_info("Flushing debug data failed because %c is not a valid " 1322 1384 "area\n", input_buf[0]); 1323 1385 1324 1386 out: 1325 - *offset += user_len; 1326 - return rc; /* number of input characters */ 1387 + *offset += user_len; 1388 + return rc; /* number of input characters */ 1327 1389 } 1328 1390 1329 1391 /* 1330 1392 * prints debug header in raw format 1331 1393 */ 1332 - 1333 - static int 1334 - debug_raw_header_fn(debug_info_t * id, struct debug_view *view, 1335 - int area, debug_entry_t * entry, char *out_buf) 1394 + static int debug_raw_header_fn(debug_info_t *id, struct debug_view *view, 1395 + int area, debug_entry_t *entry, char *out_buf) 1336 1396 { 1337 - int rc; 1397 + int rc; 1338 1398 1339 1399 rc = sizeof(debug_entry_t); 1340 - memcpy(out_buf,entry,sizeof(debug_entry_t)); 1341 - return rc; 1400 + memcpy(out_buf, entry, sizeof(debug_entry_t)); 1401 + return rc; 1342 1402 } 1343 1403 1344 1404 /* 1345 1405 * prints debug data in raw format 1346 1406 */ 1347 - 1348 - static int 1349 - debug_raw_format_fn(debug_info_t * id, struct debug_view *view, 1407 + static int debug_raw_format_fn(debug_info_t *id, struct debug_view *view, 1350 1408 char *out_buf, const char *in_buf) 1351 1409 { 1352 1410 int rc; ··· 1350 1426 /* 1351 1427 * prints debug data in 
hex/ascii format 1352 1428 */ 1353 - 1354 - static int 1355 - debug_hex_ascii_format_fn(debug_info_t * id, struct debug_view *view, 1356 - char *out_buf, const char *in_buf) 1429 + static int debug_hex_ascii_format_fn(debug_info_t *id, struct debug_view *view, 1430 + char *out_buf, const char *in_buf) 1357 1431 { 1358 1432 int i, rc = 0; 1359 1433 1360 - for (i = 0; i < id->buf_size; i++) { 1361 - rc += sprintf(out_buf + rc, "%02x ", 1362 - ((unsigned char *) in_buf)[i]); 1363 - } 1434 + for (i = 0; i < id->buf_size; i++) 1435 + rc += sprintf(out_buf + rc, "%02x ", ((unsigned char *) in_buf)[i]); 1364 1436 rc += sprintf(out_buf + rc, "| "); 1365 1437 for (i = 0; i < id->buf_size; i++) { 1366 1438 unsigned char c = in_buf[i]; 1439 + 1367 1440 if (isascii(c) && isprint(c)) 1368 1441 rc += sprintf(out_buf + rc, "%c", c); 1369 1442 else ··· 1373 1452 /* 1374 1453 * prints header for debug entry 1375 1454 */ 1376 - 1377 - int 1378 - debug_dflt_header_fn(debug_info_t * id, struct debug_view *view, 1379 - int area, debug_entry_t * entry, char *out_buf) 1455 + int debug_dflt_header_fn(debug_info_t *id, struct debug_view *view, 1456 + int area, debug_entry_t *entry, char *out_buf) 1380 1457 { 1381 1458 unsigned long base, sec, usec; 1382 - char *except_str; 1383 1459 unsigned long caller; 1384 - int rc = 0; 1385 1460 unsigned int level; 1461 + char *except_str; 1462 + int rc = 0; 1386 1463 1387 1464 level = entry->id.fields.level; 1388 1465 base = (*(unsigned long *) &tod_clock_base[0]) >> 4; ··· 1406 1487 1407 1488 #define DEBUG_SPRINTF_MAX_ARGS 10 1408 1489 1409 - static int 1410 - debug_sprintf_format_fn(debug_info_t * id, struct debug_view *view, 1411 - char *out_buf, debug_sprintf_entry_t *curr_event) 1490 + static int debug_sprintf_format_fn(debug_info_t *id, struct debug_view *view, 1491 + char *out_buf, debug_sprintf_entry_t *curr_event) 1412 1492 { 1413 - int num_longs, num_used_args = 0,i, rc = 0; 1493 + int num_longs, num_used_args = 0, i, rc = 0; 1414 1494 int 
index[DEBUG_SPRINTF_MAX_ARGS]; 1415 1495 1416 1496 /* count of longs fit into one entry */ 1417 - num_longs = id->buf_size / sizeof(long); 1497 + num_longs = id->buf_size / sizeof(long); 1418 1498 1419 - if(num_longs < 1) 1499 + if (num_longs < 1) 1420 1500 goto out; /* bufsize of entry too small */ 1421 - if(num_longs == 1) { 1501 + if (num_longs == 1) { 1422 1502 /* no args, we use only the string */ 1423 1503 strcpy(out_buf, curr_event->string); 1424 1504 rc = strlen(curr_event->string); ··· 1425 1507 } 1426 1508 1427 1509 /* number of arguments used for sprintf (without the format string) */ 1428 - num_used_args = min(DEBUG_SPRINTF_MAX_ARGS, (num_longs - 1)); 1510 + num_used_args = min(DEBUG_SPRINTF_MAX_ARGS, (num_longs - 1)); 1429 1511 1430 - memset(index,0, DEBUG_SPRINTF_MAX_ARGS * sizeof(int)); 1512 + memset(index, 0, DEBUG_SPRINTF_MAX_ARGS * sizeof(int)); 1431 1513 1432 - for(i = 0; i < num_used_args; i++) 1514 + for (i = 0; i < num_used_args; i++) 1433 1515 index[i] = i; 1434 1516 1435 - rc = sprintf(out_buf, curr_event->string, curr_event->args[index[0]], 1436 - curr_event->args[index[1]], curr_event->args[index[2]], 1437 - curr_event->args[index[3]], curr_event->args[index[4]], 1438 - curr_event->args[index[5]], curr_event->args[index[6]], 1439 - curr_event->args[index[7]], curr_event->args[index[8]], 1440 - curr_event->args[index[9]]); 1441 - 1517 + rc = sprintf(out_buf, curr_event->string, curr_event->args[index[0]], 1518 + curr_event->args[index[1]], curr_event->args[index[2]], 1519 + curr_event->args[index[3]], curr_event->args[index[4]], 1520 + curr_event->args[index[5]], curr_event->args[index[6]], 1521 + curr_event->args[index[7]], curr_event->args[index[8]], 1522 + curr_event->args[index[9]]); 1442 1523 out: 1443 - 1444 1524 return rc; 1445 1525 } 1446 1526
arch/s390/kernel/dis.c  (+287 -1760)
···
 #include <linux/reboot.h>
 #include <linux/kprobes.h>
 #include <linux/kdebug.h>
-
 #include <linux/uaccess.h>
+#include <linux/atomic.h>
 #include <asm/dis.h>
 #include <asm/io.h>
-#include <linux/atomic.h>
 #include <asm/cpcmd.h>
 #include <asm/lowcore.h>
 #include <asm/debug.h>
 #include <asm/irq.h>

+/* Type of operand */
+#define OPERAND_GPR	0x1	/* Operand printed as %rx */
+#define OPERAND_FPR	0x2	/* Operand printed as %fx */
+#define OPERAND_AR	0x4	/* Operand printed as %ax */
+#define OPERAND_CR	0x8	/* Operand printed as %cx */
+#define OPERAND_VR	0x10	/* Operand printed as %vx */
+#define OPERAND_DISP	0x20	/* Operand printed as displacement */
+#define OPERAND_BASE	0x40	/* Operand printed as base register */
+#define OPERAND_INDEX	0x80	/* Operand printed as index register */
+#define OPERAND_PCREL	0x100	/* Operand printed as pc-relative symbol */
+#define OPERAND_SIGNED	0x200	/* Operand printed as signed value */
+#define OPERAND_LENGTH	0x400	/* Operand printed as length (+1) */
+
+struct s390_operand {
+	unsigned char bits;	/* The number of bits in the operand. */
+	unsigned char shift;	/* The number of bits to shift. */
+	unsigned short flags;	/* One bit syntax flags. */
+};
+
+struct s390_insn {
+	union {
+		const char name[5];
+		struct {
+			unsigned char zero;
+			unsigned int offset;
+		} __packed;
+	};
+	unsigned char opfrag;
+	unsigned char format;
+};
+
+struct s390_opcode_offset {
+	unsigned char opcode;
+	unsigned char mask;
+	unsigned char byte;
+	unsigned short offset;
+	unsigned short count;
+} __packed;
+
 enum {
-	UNUSED,	/* Indicates the end of the operand list */
-	R_8,	/* GPR starting at position 8 */
-	R_12,	/* GPR starting at position 12 */
-	R_16,	/* GPR starting at position 16 */
-	R_20,	/* GPR starting at position 20 */
-	R_24,	/* GPR starting at position 24 */
-	R_28,	/* GPR starting at position 28 */
-	R_32,	/* GPR starting at position 32 */
-	F_8,	/* FPR starting at position 8 */
-	F_12,	/* FPR starting at position 12 */
-	F_16,	/* FPR starting at position 16 */
-	F_20,	/* FPR starting at position 16 */
-	F_24,	/* FPR starting at position 24 */
-	F_28,	/* FPR starting at position 28 */
-	F_32,	/* FPR starting at position 32 */
+	UNUSED,
 	A_8,	/* Access reg. starting at position 8 */
 	A_12,	/* Access reg. starting at position 12 */
 	A_24,	/* Access reg. starting at position 24 */
 	A_28,	/* Access reg. starting at position 28 */
-	C_8,	/* Control reg. starting at position 8 */
-	C_12,	/* Control reg. starting at position 12 */
-	V_8,	/* Vector reg. starting at position 8, extension bit at 36 */
-	V_12,	/* Vector reg. starting at position 12, extension bit at 37 */
-	V_16,	/* Vector reg. starting at position 16, extension bit at 38 */
-	V_32,	/* Vector reg. starting at position 32, extension bit at 39 */
-	W_12,	/* Vector reg. at bit 12, extension at bit 37, used as index */
 	B_16,	/* Base register starting at position 16 */
 	B_32,	/* Base register starting at position 32 */
-	X_12,	/* Index register starting at position 12 */
+	C_8,	/* Control reg. starting at position 8 */
+	C_12,	/* Control reg. starting at position 12 */
+	D20_20,	/* 20 bit displacement starting at 20 */
 	D_20,	/* Displacement starting at position 20 */
 	D_36,	/* Displacement starting at position 36 */
-	D20_20,	/* 20 bit displacement starting at 20 */
+	F_8,	/* FPR starting at position 8 */
+	F_12,	/* FPR starting at position 12 */
+	F_16,	/* FPR starting at position 16 */
+	F_24,	/* FPR starting at position 24 */
+	F_28,	/* FPR starting at position 28 */
+	F_32,	/* FPR starting at position 32 */
+	I8_8,	/* 8 bit signed value starting at 8 */
+	I8_32,	/* 8 bit signed value starting at 32 */
+	I16_16,	/* 16 bit signed value starting at 16 */
+	I16_32,	/* 16 bit signed value starting at 32 */
+	I32_16,	/* 32 bit signed value starting at 16 */
+	J12_12,	/* 12 bit PC relative offset at 12 */
+	J16_16,	/* 16 bit PC relative offset at 16 */
+	J16_32,	/* 16 bit PC relative offset at 32 */
+	J24_24,	/* 24 bit PC relative offset at 24 */
+	J32_16,	/* 32 bit PC relative offset at 16 */
 	L4_8,	/* 4 bit length starting at position 8 */
 	L4_12,	/* 4 bit length starting at position 12 */
 	L8_8,	/* 8 bit length starting at position 8 */
+	R_8,	/* GPR starting at position 8 */
+	R_12,	/* GPR starting at position 12 */
+	R_16,	/* GPR starting at position 16 */
+	R_24,	/* GPR starting at position 24 */
+	R_28,	/* GPR starting at position 28 */
 	U4_8,	/* 4 bit unsigned value starting at 8 */
 	U4_12,	/* 4 bit unsigned value starting at 12 */
 	U4_16,	/* 4 bit unsigned value starting at 16 */
···
 	U8_8,	/* 8 bit unsigned value starting at 8 */
 	U8_16,	/* 8 bit unsigned value starting at 16 */
 	U8_24,	/* 8 bit unsigned value starting at 24 */
+	U8_28,	/* 8 bit unsigned value starting at 28 */
 	U8_32,	/* 8 bit unsigned value starting at 32 */
-	I8_8,	/* 8 bit signed value starting at 8 */
-	I8_16,	/* 8 bit signed value starting at 16 */
-	I8_24,	/* 8 bit signed value starting at 24 */
-	I8_32,	/* 8 bit signed value starting at 32 */
-	J12_12,	/* PC relative offset at 12 */
-	I16_16,	/* 16 bit signed value starting at 16 */
-	I16_32,	/* 32 bit signed value starting at 16 */
-	U16_16,	/* 16 bit unsigned value starting at 16 */
-	U16_32,	/* 32 bit unsigned value starting at 16 */
-	J16_16,	/* PC relative jump offset at 16 */
-	J16_32,	/* PC relative offset at 16 */
-	I24_24,	/* 24 bit signed value starting at 24 */
-	J32_16,	/* PC relative long offset at 16 */
-	I32_16,	/* 32 bit signed value starting at 16 */
-	U32_16,	/* 32 bit unsigned value starting at 16 */
-	M_16,	/* 4 bit optional mask starting at 16 */
-	M_20,	/* 4 bit optional mask starting at 20 */
-	M_24,	/* 4 bit optional mask starting at 24 */
-	M_28,	/* 4 bit optional mask starting at 28 */
-	M_32,	/* 4 bit optional mask starting at 32 */
-	RO_28,	/* optional GPR starting at position 28 */
+	U12_16,	/* 12 bit unsigned value starting at 16 */
+	U16_16,	/* 16 bit unsigned value starting at 16 */
+	U16_32,	/* 16 bit unsigned value starting at 32 */
+	U32_16,	/* 32 bit unsigned value starting at 16 */
+	VX_12,	/* Vector index register starting at position 12 */
+	V_8,	/* Vector reg. starting at position 8 */
+	V_12,	/* Vector reg. starting at position 12 */
+	V_16,	/* Vector reg. starting at position 16 */
+	V_32,	/* Vector reg. starting at position 32 */
+	X_12,	/* Index register starting at position 12 */
 };

-/*
- * Enumeration of the different instruction formats.
- * For details consult the principles of operation.
- */
-enum {
-	INSTR_INVALID,
-	INSTR_E,
-	INSTR_IE_UU,
-	INSTR_MII_UPI,
-	INSTR_RIE_R0IU, INSTR_RIE_R0UU, INSTR_RIE_RRP, INSTR_RIE_RRPU,
-	INSTR_RIE_RRUUU, INSTR_RIE_RUPI, INSTR_RIE_RUPU, INSTR_RIE_RRI0,
-	INSTR_RIL_RI, INSTR_RIL_RP, INSTR_RIL_RU, INSTR_RIL_UP,
-	INSTR_RIS_R0RDU, INSTR_RIS_R0UU, INSTR_RIS_RURDI, INSTR_RIS_RURDU,
-	INSTR_RI_RI, INSTR_RI_RP, INSTR_RI_RU, INSTR_RI_UP,
-	INSTR_RRE_00, INSTR_RRE_0R, INSTR_RRE_AA, INSTR_RRE_AR, INSTR_RRE_F0,
-	INSTR_RRE_FF, INSTR_RRE_FR, INSTR_RRE_R0, INSTR_RRE_RA, INSTR_RRE_RF,
-	INSTR_RRE_RR, INSTR_RRE_RR_OPT,
-	INSTR_RRF_0UFF, INSTR_RRF_F0FF, INSTR_RRF_F0FF2, INSTR_RRF_F0FR,
-	INSTR_RRF_FFRU, INSTR_RRF_FUFF, INSTR_RRF_FUFF2, INSTR_RRF_M0RR,
-	INSTR_RRF_R0RR, INSTR_RRF_R0RR2, INSTR_RRF_RMRR, INSTR_RRF_RURR,
-	INSTR_RRF_U0FF, INSTR_RRF_U0RF, INSTR_RRF_U0RR, INSTR_RRF_UUFF,
-	INSTR_RRF_UUFR, INSTR_RRF_UURF,
-	INSTR_RRR_F0FF, INSTR_RRS_RRRDU,
-	INSTR_RR_FF, INSTR_RR_R0, INSTR_RR_RR, INSTR_RR_U0, INSTR_RR_UR,
-	INSTR_RSE_CCRD, INSTR_RSE_RRRD, INSTR_RSE_RURD,
-	INSTR_RSI_RRP,
-	INSTR_RSL_LRDFU, INSTR_RSL_R0RD,
-	INSTR_RSY_AARD, INSTR_RSY_CCRD, INSTR_RSY_RRRD, INSTR_RSY_RURD,
-	INSTR_RSY_RDRM, INSTR_RSY_RMRD,
-	INSTR_RS_AARD, INSTR_RS_CCRD, INSTR_RS_R0RD, INSTR_RS_RRRD,
-	INSTR_RS_RURD,
-	INSTR_RXE_FRRD, INSTR_RXE_RRRD, INSTR_RXE_RRRDM,
-	INSTR_RXF_FRRDF,
-	INSTR_RXY_FRRD, INSTR_RXY_RRRD, INSTR_RXY_URRD,
-	INSTR_RX_FRRD, INSTR_RX_RRRD, INSTR_RX_URRD,
-	INSTR_SIL_RDI, INSTR_SIL_RDU,
-	INSTR_SIY_IRD, INSTR_SIY_URD,
-	INSTR_SI_URD,
-	INSTR_SMI_U0RDP,
-	INSTR_SSE_RDRD,
-	INSTR_SSF_RRDRD, INSTR_SSF_RRDRD2,
-	INSTR_SS_L0RDRD, INSTR_SS_LIRDRD, INSTR_SS_LLRDRD, INSTR_SS_RRRDRD,
-	INSTR_SS_RRRDRD2, INSTR_SS_RRRDRD3,
-	INSTR_S_00, INSTR_S_RD,
-	INSTR_VRI_V0IM, INSTR_VRI_V0I0, INSTR_VRI_V0IIM, INSTR_VRI_VVIM,
-	INSTR_VRI_VVV0IM, INSTR_VRI_VVV0I0, INSTR_VRI_VVIMM,
-	INSTR_VRR_VV00MMM, INSTR_VRR_VV000MM, INSTR_VRR_VV0000M,
-	INSTR_VRR_VV00000, INSTR_VRR_VVV0M0M, INSTR_VRR_VV00M0M,
-	INSTR_VRR_VVV000M, INSTR_VRR_VVV000V, INSTR_VRR_VVV0000,
-	INSTR_VRR_VVV0MMM, INSTR_VRR_VVV00MM, INSTR_VRR_VVVMM0V,
-	INSTR_VRR_VVVM0MV, INSTR_VRR_VVVM00V, INSTR_VRR_VRR0000,
-	INSTR_VRS_VVRDM, INSTR_VRS_VVRD0, INSTR_VRS_VRRDM, INSTR_VRS_VRRD0,
-	INSTR_VRS_RVRDM,
-	INSTR_VRV_VVRDM, INSTR_VRV_VWRDM,
-	INSTR_VRX_VRRDM, INSTR_VRX_VRRD0,
-};
-
-static const struct s390_operand operands[] =
-{
-	[UNUSED]  = { 0, 0, 0 },
-	[R_8]	 = { 4, 8, OPERAND_GPR },
-	[R_12]	 = { 4, 12, OPERAND_GPR },
-	[R_16]	 = { 4, 16, OPERAND_GPR },
-	[R_20]	 = { 4, 20, OPERAND_GPR },
-	[R_24]	 = { 4, 24, OPERAND_GPR },
-	[R_28]	 = { 4, 28, OPERAND_GPR },
-	[R_32]	 = { 4, 32, OPERAND_GPR },
-	[F_8]	 = { 4, 8, OPERAND_FPR },
-	[F_12]	 = { 4, 12, OPERAND_FPR },
-	[F_16]	 = { 4, 16, OPERAND_FPR },
-	[F_20]	 = { 4, 16, OPERAND_FPR },
-	[F_24]	 = { 4, 24, OPERAND_FPR },
-	[F_28]	 = { 4, 28, OPERAND_FPR },
-	[F_32]	 = { 4, 32, OPERAND_FPR },
+static const struct s390_operand operands[] = {
+	[UNUSED] = { 0, 0, 0 },
 	[A_8]	 = { 4, 8, OPERAND_AR },
 	[A_12]	 = { 4, 12, OPERAND_AR },
 	[A_24]	 = { 4, 24, OPERAND_AR },
 	[A_28]	 = { 4, 28, OPERAND_AR },
+	[B_16]	 = { 4, 16, OPERAND_BASE | OPERAND_GPR },
+	[B_32]	 = { 4, 32, OPERAND_BASE | OPERAND_GPR },
 	[C_8]	 = { 4, 8, OPERAND_CR },
 	[C_12]	 = { 4, 12, OPERAND_CR },
+	[D20_20] = { 20, 20, OPERAND_DISP | OPERAND_SIGNED },
+	[D_20]	 = { 12, 20, OPERAND_DISP },
+	[D_36]	 = { 12, 36, OPERAND_DISP },
+	[F_8]	 = { 4, 8, OPERAND_FPR },
+	[F_12]	 = { 4, 12, OPERAND_FPR },
+	[F_16]	 = { 4, 16, OPERAND_FPR },
+	[F_24]	 = { 4, 24, OPERAND_FPR },
+	[F_28]	 = { 4, 28, OPERAND_FPR },
+	[F_32]	 = { 4, 32, OPERAND_FPR },
+	[I8_8]	 = { 8, 8, OPERAND_SIGNED },
+	[I8_32]	 = { 8, 32, OPERAND_SIGNED },
+	[I16_16] = { 16, 16, OPERAND_SIGNED },
+	[I16_32] = { 16, 32, OPERAND_SIGNED },
+	[I32_16] = { 32, 16, OPERAND_SIGNED },
+	[J12_12] = { 12, 12, OPERAND_PCREL },
+	[J16_16] = { 16, 16, OPERAND_PCREL },
+	[J16_32] = { 16, 32, OPERAND_PCREL },
+	[J24_24] = { 24, 24, OPERAND_PCREL },
+	[J32_16] = { 32, 16, OPERAND_PCREL },
+	[L4_8]	 = { 4, 8, OPERAND_LENGTH },
+	[L4_12]	 = { 4, 12, OPERAND_LENGTH },
+	[L8_8]	 = { 8, 8, OPERAND_LENGTH },
+	[R_8]	 = { 4, 8, OPERAND_GPR },
+	[R_12]	 = { 4, 12, OPERAND_GPR },
+	[R_16]	 = { 4, 16, OPERAND_GPR },
+	[R_24]	 = { 4, 24, OPERAND_GPR },
+	[R_28]	 = { 4, 28, OPERAND_GPR },
+	[U4_8]	 = { 4, 8, 0 },
+	[U4_12]	 = { 4, 12, 0 },
+	[U4_16]	 = { 4, 16, 0 },
+	[U4_20]	 = { 4, 20, 0 },
+	[U4_24]	 = { 4, 24, 0 },
+	[U4_28]	 = { 4, 28, 0 },
+	[U4_32]	 = { 4, 32, 0 },
+	[U4_36]	 = { 4, 36, 0 },
+	[U8_8]	 = { 8, 8, 0 },
+	[U8_16]	 = { 8, 16, 0 },
+	[U8_24]	 = { 8, 24, 0 },
+	[U8_28]	 = { 8, 28, 0 },
+	[U8_32]	 = { 8, 32, 0 },
+	[U12_16] = { 12, 16, 0 },
+	[U16_16] = { 16, 16, 0 },
+	[U16_32] = { 16, 32, 0 },
+	[U32_16] = { 32, 16, 0 },
+	[VX_12]	 = { 4, 12, OPERAND_INDEX | OPERAND_VR },
 	[V_8]	 = { 4, 8, OPERAND_VR },
 	[V_12]	 = { 4, 12, OPERAND_VR },
 	[V_16]	 = { 4, 16, OPERAND_VR },
 	[V_32]	 = { 4, 32, OPERAND_VR },
-	[W_12]	 = { 4, 12, OPERAND_INDEX | OPERAND_VR },
-	[B_16]	 = { 4, 16, OPERAND_BASE | OPERAND_GPR },
-	[B_32]	 = { 4, 32, OPERAND_BASE | OPERAND_GPR },
 	[X_12]	 = { 4, 12, OPERAND_INDEX | OPERAND_GPR },
-	[D_20]	 = { 12, 20, OPERAND_DISP },
-	[D_36]	 = { 12, 36, OPERAND_DISP },
-	[D20_20] = { 20, 20, OPERAND_DISP | OPERAND_SIGNED },
-	[L4_8]	 = { 4, 8, OPERAND_LENGTH },
-	[L4_12]	 = { 4, 12, OPERAND_LENGTH },
-	[L8_8]	 = { 8, 8, OPERAND_LENGTH },
-	[U4_8]	 = { 4, 8, 0 },
-	[U4_12]	 = { 4, 12, 0 },
-	[U4_16]	 = { 4, 16, 0 },
-	[U4_20]	 = { 4, 20, 0 },
-	[U4_24]	 = { 4, 24, 0 },
-	[U4_28]	 = { 4, 28, 0 },
-	[U4_32]	 = { 4, 32, 0 },
-	[U4_36]	 = { 4, 36, 0 },
-	[U8_8]	 = { 8, 8, 0 },
-	[U8_16]	 = { 8, 16, 0 },
-	[U8_24]	 = { 8, 24, 0 },
-	[U8_32]	 = { 8, 32, 0 },
-	[J12_12] = { 12, 12, OPERAND_PCREL },
-	[I8_8]	 = { 8, 8, OPERAND_SIGNED },
-	[I8_16]	 = { 8, 16, OPERAND_SIGNED },
-	[I8_24]	 = { 8, 24, OPERAND_SIGNED },
-	[I8_32]	 = { 8, 32, OPERAND_SIGNED },
-	[I16_32] = { 16, 32, OPERAND_SIGNED },
-	[I16_16] = { 16, 16, OPERAND_SIGNED },
-	[U16_16] = { 16, 16, 0 },
-	[U16_32] = { 16, 32, 0 },
-	[J16_16] = { 16, 16, OPERAND_PCREL },
-	[J16_32] = { 16, 32, OPERAND_PCREL },
-	[I24_24] = { 24, 24, OPERAND_SIGNED },
-	[J32_16] = { 32, 16, OPERAND_PCREL },
-	[I32_16] = { 32, 16, OPERAND_SIGNED },
-	[U32_16] = { 32, 16, 0 },
-	[M_16]	 = { 4, 16, 0 },
-	[M_20]	 = { 4, 20, 0 },
-	[M_24]	 = { 4, 24, 0 },
-	[M_28]	 = { 4, 28, 0 },
-	[M_32]	 = { 4, 32, 0 },
-	[RO_28]	 = { 4, 28, OPERAND_GPR }
 };

-static const unsigned char formats[][7] = {
-	[INSTR_E]	  = { 0xff, 0,0,0,0,0,0 },
-	[INSTR_IE_UU]	  = { 0xff, U4_24,U4_28,0,0,0,0 },
-	[INSTR_MII_UPI]	  = { 0xff, U4_8,J12_12,I24_24 },
-	[INSTR_RIE_R0IU]  = { 0xff, R_8,I16_16,U4_32,0,0,0 },
-	[INSTR_RIE_R0UU]  = { 0xff, R_8,U16_16,U4_32,0,0,0 },
-	[INSTR_RIE_RRI0]  = { 0xff, R_8,R_12,I16_16,0,0,0 },
-	[INSTR_RIE_RRPU]  = { 0xff, R_8,R_12,U4_32,J16_16,0,0 },
-	[INSTR_RIE_RRP]	  = { 0xff, R_8,R_12,J16_16,0,0,0 },
-	[INSTR_RIE_RRUUU] = { 0xff, R_8,R_12,U8_16,U8_24,U8_32,0 },
-	[INSTR_RIE_RUPI]  = { 0xff, R_8,I8_32,U4_12,J16_16,0,0 },
-	[INSTR_RIE_RUPU]  = { 0xff, R_8,U8_32,U4_12,J16_16,0,0 },
-	[INSTR_RIL_RI]	  = { 0x0f, R_8,I32_16,0,0,0,0 },
-	[INSTR_RIL_RP]	  = { 0x0f, R_8,J32_16,0,0,0,0 },
-	[INSTR_RIL_RU]	  = { 0x0f, R_8,U32_16,0,0,0,0 },
-	[INSTR_RIL_UP]	  = { 0x0f, U4_8,J32_16,0,0,0,0 },
-	[INSTR_RIS_R0RDU] = { 0xff, R_8,U8_32,D_20,B_16,0,0 },
-	[INSTR_RIS_RURDI] = { 0xff, R_8,I8_32,U4_12,D_20,B_16,0 },
-	[INSTR_RIS_RURDU] = { 0xff, R_8,U8_32,U4_12,D_20,B_16,0 },
-	[INSTR_RI_RI]	  = { 0x0f, R_8,I16_16,0,0,0,0 },
-	[INSTR_RI_RP]	  = { 0x0f, R_8,J16_16,0,0,0,0 },
-	[INSTR_RI_RU]	  = { 0x0f, R_8,U16_16,0,0,0,0 },
-	[INSTR_RI_UP]	  = { 0x0f, U4_8,J16_16,0,0,0,0 },
-	[INSTR_RRE_00]	  = { 0xff, 0,0,0,0,0,0 },
-	[INSTR_RRE_0R]	  = { 0xff, R_28,0,0,0,0,0 },
-	[INSTR_RRE_AA]	  = { 0xff, A_24,A_28,0,0,0,0 },
-	[INSTR_RRE_AR]	  = { 0xff, A_24,R_28,0,0,0,0 },
-	[INSTR_RRE_F0]	  = { 0xff, F_24,0,0,0,0,0 },
-	[INSTR_RRE_FF]	  = { 0xff, F_24,F_28,0,0,0,0 },
-	[INSTR_RRE_FR]	  = { 0xff, F_24,R_28,0,0,0,0 },
-	[INSTR_RRE_R0]	  = { 0xff, R_24,0,0,0,0,0 },
-	[INSTR_RRE_RA]	  = { 0xff, R_24,A_28,0,0,0,0 },
-	[INSTR_RRE_RF]	  = { 0xff, R_24,F_28,0,0,0,0 },
-	[INSTR_RRE_RR]	  = { 0xff, R_24,R_28,0,0,0,0 },
-	[INSTR_RRE_RR_OPT]= { 0xff, R_24,RO_28,0,0,0,0 },
-	[INSTR_RRF_0UFF]  = { 0xff, F_24,F_28,U4_20,0,0,0 },
-	[INSTR_RRF_F0FF2] = { 0xff, F_24,F_16,F_28,0,0,0 },
-	[INSTR_RRF_F0FF]  = { 0xff, F_16,F_24,F_28,0,0,0 },
-	[INSTR_RRF_F0FR]  = { 0xff, F_24,F_16,R_28,0,0,0 },
-	[INSTR_RRF_FFRU]  = { 0xff, F_24,F_16,R_28,U4_20,0,0 },
-	[INSTR_RRF_FUFF]  = { 0xff, F_24,F_16,F_28,U4_20,0,0 },
-	[INSTR_RRF_FUFF2] = { 0xff, F_24,F_28,F_16,U4_20,0,0 },
-	[INSTR_RRF_M0RR]  = { 0xff, R_24,R_28,M_16,0,0,0 },
-	[INSTR_RRF_R0RR]  = { 0xff, R_24,R_16,R_28,0,0,0 },
-	[INSTR_RRF_R0RR2] = { 0xff, R_24,R_28,R_16,0,0,0 },
-	[INSTR_RRF_RMRR]  = { 0xff, R_24,R_16,R_28,M_20,0,0 },
-	[INSTR_RRF_RURR]  = { 0xff, R_24,R_28,R_16,U4_20,0,0 },
-	[INSTR_RRF_U0FF]  = { 0xff, F_24,U4_16,F_28,0,0,0 },
-	[INSTR_RRF_U0RF]  = { 0xff, R_24,U4_16,F_28,0,0,0 },
-	[INSTR_RRF_U0RR]  = { 0xff, R_24,R_28,U4_16,0,0,0 },
-	[INSTR_RRF_UUFF]  = { 0xff, F_24,U4_16,F_28,U4_20,0,0 },
-	[INSTR_RRF_UUFR]  = { 0xff, F_24,U4_16,R_28,U4_20,0,0 },
-	[INSTR_RRF_UURF]  = { 0xff, R_24,U4_16,F_28,U4_20,0,0 },
-	[INSTR_RRR_F0FF]  = { 0xff, F_24,F_28,F_16,0,0,0 },
-	[INSTR_RRS_RRRDU] = { 0xff, R_8,R_12,U4_32,D_20,B_16,0 },
-	[INSTR_RR_FF]	  = { 0xff, F_8,F_12,0,0,0,0 },
-	[INSTR_RR_R0]	  = { 0xff, R_8, 0,0,0,0,0 },
-	[INSTR_RR_RR]	  = { 0xff, R_8,R_12,0,0,0,0 },
-	[INSTR_RR_U0]	  = { 0xff, U8_8, 0,0,0,0,0 },
-	[INSTR_RR_UR]	  = { 0xff, U4_8,R_12,0,0,0,0 },
-	[INSTR_RSE_CCRD]  = { 0xff, C_8,C_12,D_20,B_16,0,0 },
-	[INSTR_RSE_RRRD]  = { 0xff, R_8,R_12,D_20,B_16,0,0 },
-	[INSTR_RSE_RURD]  = { 0xff, R_8,U4_12,D_20,B_16,0,0 },
-	[INSTR_RSI_RRP]	  = { 0xff, R_8,R_12,J16_16,0,0,0 },
-	[INSTR_RSL_LRDFU] = { 0xff, F_32,D_20,L4_8,B_16,U4_36,0 },
-	[INSTR_RSL_R0RD]  = { 0xff, D_20,L4_8,B_16,0,0,0 },
-	[INSTR_RSY_AARD]  = { 0xff, A_8,A_12,D20_20,B_16,0,0 },
-	[INSTR_RSY_CCRD]  = { 0xff, C_8,C_12,D20_20,B_16,0,0 },
-	[INSTR_RSY_RDRM]  = { 0xff, R_8,D20_20,B_16,U4_12,0,0 },
-	[INSTR_RSY_RMRD]  = { 0xff, R_8,U4_12,D20_20,B_16,0,0 },
-	[INSTR_RSY_RRRD]  = { 0xff, R_8,R_12,D20_20,B_16,0,0 },
-	[INSTR_RSY_RURD]  = { 0xff, R_8,U4_12,D20_20,B_16,0,0 },
-	[INSTR_RS_AARD]	  = { 0xff, A_8,A_12,D_20,B_16,0,0 },
-	[INSTR_RS_CCRD]	  = { 0xff, C_8,C_12,D_20,B_16,0,0 },
-	[INSTR_RS_R0RD]	  = { 0xff, R_8,D_20,B_16,0,0,0 },
-	[INSTR_RS_RRRD]	  = { 0xff, R_8,R_12,D_20,B_16,0,0 },
-	[INSTR_RS_RURD]	  = { 0xff, R_8,U4_12,D_20,B_16,0,0 },
-	[INSTR_RXE_FRRD]  = { 0xff, F_8,D_20,X_12,B_16,0,0 },
-	[INSTR_RXE_RRRD]  = { 0xff, R_8,D_20,X_12,B_16,0,0 },
-	[INSTR_RXE_RRRDM] = { 0xff, R_8,D_20,X_12,B_16,M_32,0 },
-	[INSTR_RXF_FRRDF] = { 0xff, F_32,F_8,D_20,X_12,B_16,0 },
-	[INSTR_RXY_FRRD]  = { 0xff, F_8,D20_20,X_12,B_16,0,0 },
-	[INSTR_RXY_RRRD]  = { 0xff, R_8,D20_20,X_12,B_16,0,0 },
-	[INSTR_RXY_URRD]  = { 0xff, U4_8,D20_20,X_12,B_16,0,0 },
-	[INSTR_RX_FRRD]	  = { 0xff, F_8,D_20,X_12,B_16,0,0 },
-	[INSTR_RX_RRRD]	  = { 0xff, R_8,D_20,X_12,B_16,0,0 },
-	[INSTR_RX_URRD]	  = { 0xff, U4_8,D_20,X_12,B_16,0,0 },
-	[INSTR_SIL_RDI]	  = { 0xff, D_20,B_16,I16_32,0,0,0 },
-	[INSTR_SIL_RDU]	  = { 0xff, D_20,B_16,U16_32,0,0,0 },
-	[INSTR_SIY_IRD]	  = { 0xff, D20_20,B_16,I8_8,0,0,0 },
-	[INSTR_SIY_URD]	  = { 0xff, D20_20,B_16,U8_8,0,0,0 },
-	[INSTR_SI_URD]	  = { 0xff, D_20,B_16,U8_8,0,0,0 },
-	[INSTR_SMI_U0RDP] = { 0xff, U4_8,J16_32,D_20,B_16,0,0 },
-	[INSTR_SSE_RDRD]  = { 0xff, D_20,B_16,D_36,B_32,0,0 },
-	[INSTR_SSF_RRDRD] = { 0x0f, D_20,B_16,D_36,B_32,R_8,0 },
-	[INSTR_SSF_RRDRD2]= { 0x0f, R_8,D_20,B_16,D_36,B_32,0 },
-	[INSTR_SS_L0RDRD] = { 0xff, D_20,L8_8,B_16,D_36,B_32,0 },
-	[INSTR_SS_LIRDRD] = { 0xff, D_20,L4_8,B_16,D_36,B_32,U4_12 },
-	[INSTR_SS_LLRDRD] = { 0xff, D_20,L4_8,B_16,D_36,L4_12,B_32 },
-	[INSTR_SS_RRRDRD2]= { 0xff, R_8,D_20,B_16,R_12,D_36,B_32 },
-	[INSTR_SS_RRRDRD3]= { 0xff, R_8,R_12,D_20,B_16,D_36,B_32 },
-	[INSTR_SS_RRRDRD] = { 0xff, D_20,R_8,B_16,D_36,B_32,R_12 },
-	[INSTR_S_00]	  = { 0xff, 0,0,0,0,0,0 },
-	[INSTR_S_RD]	  = { 0xff, D_20,B_16,0,0,0,0 },
-	[INSTR_VRI_V0IM]  = { 0xff, V_8,I16_16,M_32,0,0,0 },
-	[INSTR_VRI_V0I0]  = { 0xff, V_8,I16_16,0,0,0,0 },
-	[INSTR_VRI_V0IIM] = { 0xff, V_8,I8_16,I8_24,M_32,0,0 },
-	[INSTR_VRI_VVIM]  = { 0xff, V_8,I16_16,V_12,M_32,0,0 },
-	[INSTR_VRI_VVV0IM]= { 0xff, V_8,V_12,V_16,I8_24,M_32,0 },
-	[INSTR_VRI_VVV0I0]= { 0xff, V_8,V_12,V_16,I8_24,0,0 },
-	[INSTR_VRI_VVIMM] = { 0xff, V_8,V_12,I16_16,M_32,M_28,0 },
-	[INSTR_VRR_VV00MMM]={ 0xff, V_8,V_12,M_32,M_28,M_24,0 },
-	[INSTR_VRR_VV000MM]={ 0xff, V_8,V_12,M_32,M_28,0,0 },
-	[INSTR_VRR_VV0000M]={ 0xff, V_8,V_12,M_32,0,0,0 },
-	[INSTR_VRR_VV00000]={ 0xff, V_8,V_12,0,0,0,0 },
-	[INSTR_VRR_VVV0M0M]={ 0xff, V_8,V_12,V_16,M_32,M_24,0 },
-	[INSTR_VRR_VV00M0M]={ 0xff, V_8,V_12,M_32,M_24,0,0 },
-	[INSTR_VRR_VVV000M]={ 0xff, V_8,V_12,V_16,M_32,0,0 },
-	[INSTR_VRR_VVV000V]={ 0xff, V_8,V_12,V_16,V_32,0,0 },
-	[INSTR_VRR_VVV0000]={ 0xff, V_8,V_12,V_16,0,0,0 },
-	[INSTR_VRR_VVV0MMM]={ 0xff, V_8,V_12,V_16,M_32,M_28,M_24 },
-	[INSTR_VRR_VVV00MM]={ 0xff, V_8,V_12,V_16,M_32,M_28,0 },
-	[INSTR_VRR_VVVMM0V]={ 0xff, V_8,V_12,V_16,V_32,M_20,M_24 },
-	[INSTR_VRR_VVVM0MV]={ 0xff, V_8,V_12,V_16,V_32,M_28,M_20 },
-	[INSTR_VRR_VVVM00V]={ 0xff, V_8,V_12,V_16,V_32,M_20,0 },
-	[INSTR_VRR_VRR0000]={ 0xff, V_8,R_12,R_16,0,0,0 },
-	[INSTR_VRS_VVRDM] = { 0xff, V_8,V_12,D_20,B_16,M_32,0 },
-	[INSTR_VRS_VVRD0] = { 0xff, V_8,V_12,D_20,B_16,0,0 },
-	[INSTR_VRS_VRRDM] = { 0xff, V_8,R_12,D_20,B_16,M_32,0 },
-	[INSTR_VRS_VRRD0] = { 0xff, V_8,R_12,D_20,B_16,0,0 },
-	[INSTR_VRS_RVRDM] = { 0xff, R_8,V_12,D_20,B_16,M_32,0 },
-	[INSTR_VRV_VVRDM] = { 0xff, V_8,V_12,D_20,B_16,M_32,0 },
-	[INSTR_VRV_VWRDM] = { 0xff, V_8,D_20,W_12,B_16,M_32,0 },
-	[INSTR_VRX_VRRDM] = { 0xff, V_8,D_20,X_12,B_16,M_32,0 },
-	[INSTR_VRX_VRRD0] = { 0xff, V_8,D_20,X_12,B_16,0,0 },
+static const unsigned char formats[][6] = {
+	[INSTR_E]	  = { 0, 0, 0, 0, 0, 0 },
+	[INSTR_IE_UU]	  = { U4_24, U4_28, 0, 0, 0, 0 },
+	[INSTR_MII_UPP]	  = { U4_8, J12_12, J24_24 },
+	[INSTR_RIE_R0IU]  = { R_8, I16_16, U4_32, 0, 0, 0 },
+	[INSTR_RIE_R0UU]  = { R_8, U16_16, U4_32, 0, 0, 0 },
+	[INSTR_RIE_RRI0]  = { R_8, R_12, I16_16, 0, 0, 0 },
+	[INSTR_RIE_RRP]	  = { R_8, R_12, J16_16, 0, 0, 0 },
+	[INSTR_RIE_RRPU]  = { R_8, R_12, U4_32, J16_16, 0, 0 },
+	[INSTR_RIE_RRUUU] = { R_8, R_12, U8_16, U8_24, U8_32, 0 },
+	[INSTR_RIE_RUI0]  = { R_8, I16_16, U4_12, 0, 0, 0 },
+	[INSTR_RIE_RUPI]  = { R_8, I8_32, U4_12, J16_16, 0, 0 },
+	[INSTR_RIE_RUPU]  = { R_8, U8_32, U4_12, J16_16, 0, 0 },
+	[INSTR_RIL_RI]	  = { R_8, I32_16, 0, 0, 0, 0 },
+	[INSTR_RIL_RP]	  = { R_8, J32_16, 0, 0, 0, 0 },
+	[INSTR_RIL_RU]	  = { R_8, U32_16, 0, 0, 0, 0 },
+	[INSTR_RIL_UP]	  = { U4_8, J32_16, 0, 0, 0, 0 },
+	[INSTR_RIS_RURDI] = { R_8, I8_32, U4_12, D_20, B_16, 0 },
+	[INSTR_RIS_RURDU] = { R_8, U8_32, U4_12, D_20, B_16, 0 },
+	[INSTR_RI_RI]	  = { R_8, I16_16, 0, 0, 0, 0 },
+	[INSTR_RI_RP]	  = { R_8, J16_16, 0, 0, 0, 0 },
+	[INSTR_RI_RU]	  = { R_8, U16_16, 0, 0, 0, 0 },
+	[INSTR_RI_UP]	  = { U4_8, J16_16, 0, 0, 0, 0 },
+	[INSTR_RRE_00]	  = { 0, 0, 0, 0, 0, 0 },
+	[INSTR_RRE_AA]	  = { A_24, A_28, 0, 0, 0, 0 },
+	[INSTR_RRE_AR]	  = { A_24, R_28, 0, 0, 0, 0 },
+	[INSTR_RRE_F0]	  = { F_24, 0, 0, 0, 0, 0 },
+	[INSTR_RRE_FF]	  = { F_24, F_28, 0, 0, 0, 0 },
+	[INSTR_RRE_FR]	  = { F_24, R_28, 0, 0, 0, 0 },
+	[INSTR_RRE_R0]	  = { R_24, 0, 0, 0, 0, 0 },
+	[INSTR_RRE_RA]	  = { R_24, A_28, 0, 0, 0, 0 },
+	[INSTR_RRE_RF]	  = { R_24, F_28, 0, 0, 0, 0 },
+	[INSTR_RRE_RR]	  = { R_24, R_28, 0, 0, 0, 0 },
+	[INSTR_RRF_0UFF]  = { F_24, F_28, U4_20, 0, 0, 0 },
+	[INSTR_RRF_0URF]  = { R_24, F_28, U4_20, 0, 0, 0 },
+	[INSTR_RRF_F0FF]  = { F_16, F_24, F_28, 0, 0, 0 },
+	[INSTR_RRF_F0FF2] = { F_24, F_16, F_28, 0, 0, 0 },
+	[INSTR_RRF_F0FR]  = { F_24, F_16, R_28, 0, 0, 0 },
+	[INSTR_RRF_FFRU]  = { F_24, F_16, R_28, U4_20, 0, 0 },
+	[INSTR_RRF_FUFF]  = { F_24, F_16, F_28, U4_20, 0, 0 },
+	[INSTR_RRF_FUFF2] = { F_24, F_28, F_16, U4_20, 0, 0 },
+	[INSTR_RRF_R0RR]  = { R_24, R_16, R_28, 0, 0, 0 },
+	[INSTR_RRF_R0RR2] = { R_24, R_28, R_16, 0, 0, 0 },
+	[INSTR_RRF_RURR]  = { R_24, R_28, R_16, U4_20, 0, 0 },
+	[INSTR_RRF_RURR2] = { R_24, R_16, R_28, U4_20, 0, 0 },
+	[INSTR_RRF_U0FF]  = { F_24, U4_16, F_28, 0, 0, 0 },
+	[INSTR_RRF_U0RF]  = { R_24, U4_16, F_28, 0, 0, 0 },
+	[INSTR_RRF_U0RR]  = { R_24, R_28, U4_16, 0, 0, 0 },
+	[INSTR_RRF_UUFF]  = { F_24, U4_16, F_28, U4_20, 0, 0 },
+	[INSTR_RRF_UUFR]  = { F_24, U4_16, R_28, U4_20, 0, 0 },
+	[INSTR_RRF_UURF]  = { R_24, U4_16, F_28, U4_20, 0, 0 },
+	[INSTR_RRS_RRRDU] = { R_8, R_12, U4_32, D_20, B_16 },
+	[INSTR_RR_FF]	  = { F_8, F_12, 0, 0, 0, 0 },
+	[INSTR_RR_R0]	  = { R_8, 0, 0, 0, 0, 0 },
+	[INSTR_RR_RR]	  = { R_8, R_12, 0, 0, 0, 0 },
+	[INSTR_RR_U0]	  = { U8_8, 0, 0, 0, 0, 0 },
+	[INSTR_RR_UR]	  = { U4_8, R_12, 0, 0, 0, 0 },
+	[INSTR_RSI_RRP]	  = { R_8, R_12, J16_16, 0, 0, 0 },
+	[INSTR_RSL_LRDFU] = { F_32, D_20, L8_8, B_16, U4_36, 0 },
+	[INSTR_RSL_R0RD]  = { D_20, L4_8, B_16, 0, 0, 0 },
+	[INSTR_RSY_AARD]  = { A_8, A_12, D20_20, B_16, 0, 0 },
+	[INSTR_RSY_CCRD]  = { C_8, C_12, D20_20, B_16, 0, 0 },
+	[INSTR_RSY_RDRU]  = { R_8, D20_20, B_16, U4_12, 0, 0 },
+	[INSTR_RSY_RRRD]  = { R_8, R_12, D20_20, B_16, 0, 0 },
+	[INSTR_RSY_RURD]  = { R_8, U4_12, D20_20, B_16, 0, 0 },
+	[INSTR_RSY_RURD2] = { R_8, D20_20, B_16, U4_12, 0, 0 },
+	[INSTR_RS_AARD]	  = { A_8, A_12, D_20, B_16, 0, 0 },
+	[INSTR_RS_CCRD]	  = { C_8, C_12, D_20, B_16, 0, 0 },
+	[INSTR_RS_R0RD]	  = { R_8, D_20, B_16, 0, 0, 0 },
+	[INSTR_RS_RRRD]	  = { R_8, R_12, D_20, B_16, 0, 0 },
+	[INSTR_RS_RURD]	  = { R_8, U4_12, D_20, B_16, 0, 0 },
+	[INSTR_RXE_FRRD]  = { F_8, D_20, X_12, B_16, 0, 0 },
+	[INSTR_RXE_RRRDU] = { R_8, D_20, X_12, B_16, U4_32, 0 },
+	[INSTR_RXF_FRRDF] = { F_32, F_8, D_20, X_12, B_16, 0 },
+	[INSTR_RXY_FRRD]  = { F_8, D20_20, X_12, B_16, 0, 0 },
+	[INSTR_RXY_RRRD]  = { R_8, D20_20, X_12, B_16, 0, 0 },
+	[INSTR_RXY_URRD]  = { U4_8, D20_20, X_12, B_16, 0, 0 },
+	[INSTR_RX_FRRD]	  = { F_8, D_20, X_12, B_16, 0, 0 },
+	[INSTR_RX_RRRD]	  = { R_8, D_20, X_12, B_16, 0, 0 },
+	[INSTR_RX_URRD]	  = { U4_8, D_20, X_12, B_16, 0, 0 },
+	[INSTR_SIL_RDI]	  = { D_20, B_16, I16_32, 0, 0, 0 },
+	[INSTR_SIL_RDU]	  = { D_20, B_16, U16_32, 0, 0, 0 },
+	[INSTR_SIY_IRD]	  = { D20_20, B_16, I8_8, 0, 0, 0 },
+	[INSTR_SIY_URD]	  = { D20_20, B_16, U8_8, 0, 0, 0 },
+	[INSTR_SI_RD]	  = { D_20, B_16, 0, 0, 0, 0 },
+	[INSTR_SI_URD]	  = { D_20, B_16, U8_8, 0, 0, 0 },
+	[INSTR_SMI_U0RDP] = { U4_8, J16_32, D_20, B_16, 0, 0 },
+	[INSTR_SSE_RDRD]  = { D_20, B_16, D_36, B_32, 0, 0 },
+	[INSTR_SSF_RRDRD] = { D_20, B_16,
D_36, B_32, R_8, 0 }, 246 + [INSTR_SSF_RRDRD2] = { R_8, D_20, B_16, D_36, B_32, 0 }, 247 + [INSTR_SS_L0RDRD] = { D_20, L8_8, B_16, D_36, B_32, 0 }, 248 + [INSTR_SS_L2RDRD] = { D_20, B_16, D_36, L8_8, B_32, 0 }, 249 + [INSTR_SS_LIRDRD] = { D_20, L4_8, B_16, D_36, B_32, U4_12 }, 250 + [INSTR_SS_LLRDRD] = { D_20, L4_8, B_16, D_36, L4_12, B_32 }, 251 + [INSTR_SS_RRRDRD] = { D_20, R_8, B_16, D_36, B_32, R_12 }, 252 + [INSTR_SS_RRRDRD2] = { R_8, D_20, B_16, R_12, D_36, B_32 }, 253 + [INSTR_SS_RRRDRD3] = { R_8, R_12, D_20, B_16, D_36, B_32 }, 254 + [INSTR_S_00] = { 0, 0, 0, 0, 0, 0 }, 255 + [INSTR_S_RD] = { D_20, B_16, 0, 0, 0, 0 }, 256 + [INSTR_VRI_V0IU] = { V_8, I16_16, U4_32, 0, 0, 0 }, 257 + [INSTR_VRI_V0U] = { V_8, U16_16, 0, 0, 0, 0 }, 258 + [INSTR_VRI_V0UU2] = { V_8, U16_16, U4_32, 0, 0, 0 }, 259 + [INSTR_VRI_V0UUU] = { V_8, U8_16, U8_24, U4_32, 0, 0 }, 260 + [INSTR_VRI_VR0UU] = { V_8, R_12, U8_28, U4_24, 0, 0 }, 261 + [INSTR_VRI_VVUU] = { V_8, V_12, U16_16, U4_32, 0, 0 }, 262 + [INSTR_VRI_VVUUU] = { V_8, V_12, U12_16, U4_32, U4_28, 0 }, 263 + [INSTR_VRI_VVUUU2] = { V_8, V_12, U8_28, U8_16, U4_24, 0 }, 264 + [INSTR_VRI_VVV0U] = { V_8, V_12, V_16, U8_24, 0, 0 }, 265 + [INSTR_VRI_VVV0UU] = { V_8, V_12, V_16, U8_24, U4_32, 0 }, 266 + [INSTR_VRI_VVV0UU2] = { V_8, V_12, V_16, U8_28, U4_24, 0 }, 267 + [INSTR_VRR_0V] = { V_12, 0, 0, 0, 0, 0 }, 268 + [INSTR_VRR_0VV0U] = { V_12, V_16, U4_24, 0, 0, 0 }, 269 + [INSTR_VRR_RV0U] = { R_8, V_12, U4_24, 0, 0, 0 }, 270 + [INSTR_VRR_VRR] = { V_8, R_12, R_16, 0, 0, 0 }, 271 + [INSTR_VRR_VV] = { V_8, V_12, 0, 0, 0, 0 }, 272 + [INSTR_VRR_VV0U] = { V_8, V_12, U4_32, 0, 0, 0 }, 273 + [INSTR_VRR_VV0U0U] = { V_8, V_12, U4_32, U4_24, 0, 0 }, 274 + [INSTR_VRR_VV0UU2] = { V_8, V_12, U4_32, U4_28, 0, 0 }, 275 + [INSTR_VRR_VV0UUU] = { V_8, V_12, U4_32, U4_28, U4_24, 0 }, 276 + [INSTR_VRR_VVV] = { V_8, V_12, V_16, 0, 0, 0 }, 277 + [INSTR_VRR_VVV0U] = { V_8, V_12, V_16, U4_32, 0, 0 }, 278 + [INSTR_VRR_VVV0U0U] = { V_8, V_12, V_16, U4_32, U4_24, 0 
}, 279 + [INSTR_VRR_VVV0UU] = { V_8, V_12, V_16, U4_32, U4_28, 0 }, 280 + [INSTR_VRR_VVV0UUU] = { V_8, V_12, V_16, U4_32, U4_28, U4_24 }, 281 + [INSTR_VRR_VVV0V] = { V_8, V_12, V_16, V_32, 0, 0 }, 282 + [INSTR_VRR_VVVU0UV] = { V_8, V_12, V_16, V_32, U4_28, U4_20 }, 283 + [INSTR_VRR_VVVU0V] = { V_8, V_12, V_16, V_32, U4_20, 0 }, 284 + [INSTR_VRR_VVVUU0V] = { V_8, V_12, V_16, V_32, U4_20, U4_24 }, 285 + [INSTR_VRS_RRDV] = { V_32, R_12, D_20, B_16, 0, 0 }, 286 + [INSTR_VRS_RVRDU] = { R_8, V_12, D_20, B_16, U4_32, 0 }, 287 + [INSTR_VRS_VRRD] = { V_8, R_12, D_20, B_16, 0, 0 }, 288 + [INSTR_VRS_VRRDU] = { V_8, R_12, D_20, B_16, U4_32, 0 }, 289 + [INSTR_VRS_VVRD] = { V_8, V_12, D_20, B_16, 0, 0 }, 290 + [INSTR_VRS_VVRDU] = { V_8, V_12, D_20, B_16, U4_32, 0 }, 291 + [INSTR_VRV_VVXRDU] = { V_8, D_20, VX_12, B_16, U4_32, 0 }, 292 + [INSTR_VRX_VRRD] = { V_8, D_20, X_12, B_16, 0, 0 }, 293 + [INSTR_VRX_VRRDU] = { V_8, D_20, X_12, B_16, U4_32, 0 }, 294 + [INSTR_VRX_VV] = { V_8, V_12, 0, 0, 0, 0 }, 295 + [INSTR_VSI_URDV] = { V_32, D_20, B_16, U8_8, 0, 0 }, 408 296 }; 409 297 410 - enum { 411 - LONG_INSN_ALGHSIK, 412 - LONG_INSN_ALHHHR, 413 - LONG_INSN_ALHHLR, 414 - LONG_INSN_ALHSIK, 415 - LONG_INSN_ALSIHN, 416 - LONG_INSN_CDFBRA, 417 - LONG_INSN_CDGBRA, 418 - LONG_INSN_CDGTRA, 419 - LONG_INSN_CDLFBR, 420 - LONG_INSN_CDLFTR, 421 - LONG_INSN_CDLGBR, 422 - LONG_INSN_CDLGTR, 423 - LONG_INSN_CEFBRA, 424 - LONG_INSN_CEGBRA, 425 - LONG_INSN_CELFBR, 426 - LONG_INSN_CELGBR, 427 - LONG_INSN_CFDBRA, 428 - LONG_INSN_CFEBRA, 429 - LONG_INSN_CFXBRA, 430 - LONG_INSN_CGDBRA, 431 - LONG_INSN_CGDTRA, 432 - LONG_INSN_CGEBRA, 433 - LONG_INSN_CGXBRA, 434 - LONG_INSN_CGXTRA, 435 - LONG_INSN_CLFDBR, 436 - LONG_INSN_CLFDTR, 437 - LONG_INSN_CLFEBR, 438 - LONG_INSN_CLFHSI, 439 - LONG_INSN_CLFXBR, 440 - LONG_INSN_CLFXTR, 441 - LONG_INSN_CLGDBR, 442 - LONG_INSN_CLGDTR, 443 - LONG_INSN_CLGEBR, 444 - LONG_INSN_CLGFRL, 445 - LONG_INSN_CLGHRL, 446 - LONG_INSN_CLGHSI, 447 - LONG_INSN_CLGXBR, 448 - 
LONG_INSN_CLGXTR, 449 - LONG_INSN_CLHHSI, 450 - LONG_INSN_CXFBRA, 451 - LONG_INSN_CXGBRA, 452 - LONG_INSN_CXGTRA, 453 - LONG_INSN_CXLFBR, 454 - LONG_INSN_CXLFTR, 455 - LONG_INSN_CXLGBR, 456 - LONG_INSN_CXLGTR, 457 - LONG_INSN_FIDBRA, 458 - LONG_INSN_FIEBRA, 459 - LONG_INSN_FIXBRA, 460 - LONG_INSN_LDXBRA, 461 - LONG_INSN_LEDBRA, 462 - LONG_INSN_LEXBRA, 463 - LONG_INSN_LLGFAT, 464 - LONG_INSN_LLGFRL, 465 - LONG_INSN_LLGHRL, 466 - LONG_INSN_LLGTAT, 467 - LONG_INSN_POPCNT, 468 - LONG_INSN_RIEMIT, 469 - LONG_INSN_RINEXT, 470 - LONG_INSN_RISBGN, 471 - LONG_INSN_RISBHG, 472 - LONG_INSN_RISBLG, 473 - LONG_INSN_SLHHHR, 474 - LONG_INSN_SLHHLR, 475 - LONG_INSN_TABORT, 476 - LONG_INSN_TBEGIN, 477 - LONG_INSN_TBEGINC, 478 - LONG_INSN_PCISTG, 479 - LONG_INSN_MPCIFC, 480 - LONG_INSN_STPCIFC, 481 - LONG_INSN_PCISTB, 482 - LONG_INSN_VPOPCT, 483 - LONG_INSN_VERLLV, 484 - LONG_INSN_VESRAV, 485 - LONG_INSN_VESRLV, 486 - LONG_INSN_VSBCBI, 487 - LONG_INSN_STCCTM 488 - }; 489 - 490 - static char *long_insn_name[] = { 491 - [LONG_INSN_ALGHSIK] = "alghsik", 492 - [LONG_INSN_ALHHHR] = "alhhhr", 493 - [LONG_INSN_ALHHLR] = "alhhlr", 494 - [LONG_INSN_ALHSIK] = "alhsik", 495 - [LONG_INSN_ALSIHN] = "alsihn", 496 - [LONG_INSN_CDFBRA] = "cdfbra", 497 - [LONG_INSN_CDGBRA] = "cdgbra", 498 - [LONG_INSN_CDGTRA] = "cdgtra", 499 - [LONG_INSN_CDLFBR] = "cdlfbr", 500 - [LONG_INSN_CDLFTR] = "cdlftr", 501 - [LONG_INSN_CDLGBR] = "cdlgbr", 502 - [LONG_INSN_CDLGTR] = "cdlgtr", 503 - [LONG_INSN_CEFBRA] = "cefbra", 504 - [LONG_INSN_CEGBRA] = "cegbra", 505 - [LONG_INSN_CELFBR] = "celfbr", 506 - [LONG_INSN_CELGBR] = "celgbr", 507 - [LONG_INSN_CFDBRA] = "cfdbra", 508 - [LONG_INSN_CFEBRA] = "cfebra", 509 - [LONG_INSN_CFXBRA] = "cfxbra", 510 - [LONG_INSN_CGDBRA] = "cgdbra", 511 - [LONG_INSN_CGDTRA] = "cgdtra", 512 - [LONG_INSN_CGEBRA] = "cgebra", 513 - [LONG_INSN_CGXBRA] = "cgxbra", 514 - [LONG_INSN_CGXTRA] = "cgxtra", 515 - [LONG_INSN_CLFDBR] = "clfdbr", 516 - [LONG_INSN_CLFDTR] = "clfdtr", 517 - [LONG_INSN_CLFEBR] 
= "clfebr", 518 - [LONG_INSN_CLFHSI] = "clfhsi", 519 - [LONG_INSN_CLFXBR] = "clfxbr", 520 - [LONG_INSN_CLFXTR] = "clfxtr", 521 - [LONG_INSN_CLGDBR] = "clgdbr", 522 - [LONG_INSN_CLGDTR] = "clgdtr", 523 - [LONG_INSN_CLGEBR] = "clgebr", 524 - [LONG_INSN_CLGFRL] = "clgfrl", 525 - [LONG_INSN_CLGHRL] = "clghrl", 526 - [LONG_INSN_CLGHSI] = "clghsi", 527 - [LONG_INSN_CLGXBR] = "clgxbr", 528 - [LONG_INSN_CLGXTR] = "clgxtr", 529 - [LONG_INSN_CLHHSI] = "clhhsi", 530 - [LONG_INSN_CXFBRA] = "cxfbra", 531 - [LONG_INSN_CXGBRA] = "cxgbra", 532 - [LONG_INSN_CXGTRA] = "cxgtra", 533 - [LONG_INSN_CXLFBR] = "cxlfbr", 534 - [LONG_INSN_CXLFTR] = "cxlftr", 535 - [LONG_INSN_CXLGBR] = "cxlgbr", 536 - [LONG_INSN_CXLGTR] = "cxlgtr", 537 - [LONG_INSN_FIDBRA] = "fidbra", 538 - [LONG_INSN_FIEBRA] = "fiebra", 539 - [LONG_INSN_FIXBRA] = "fixbra", 540 - [LONG_INSN_LDXBRA] = "ldxbra", 541 - [LONG_INSN_LEDBRA] = "ledbra", 542 - [LONG_INSN_LEXBRA] = "lexbra", 543 - [LONG_INSN_LLGFAT] = "llgfat", 544 - [LONG_INSN_LLGFRL] = "llgfrl", 545 - [LONG_INSN_LLGHRL] = "llghrl", 546 - [LONG_INSN_LLGTAT] = "llgtat", 547 - [LONG_INSN_POPCNT] = "popcnt", 548 - [LONG_INSN_RIEMIT] = "riemit", 549 - [LONG_INSN_RINEXT] = "rinext", 550 - [LONG_INSN_RISBGN] = "risbgn", 551 - [LONG_INSN_RISBHG] = "risbhg", 552 - [LONG_INSN_RISBLG] = "risblg", 553 - [LONG_INSN_SLHHHR] = "slhhhr", 554 - [LONG_INSN_SLHHLR] = "slhhlr", 555 - [LONG_INSN_TABORT] = "tabort", 556 - [LONG_INSN_TBEGIN] = "tbegin", 557 - [LONG_INSN_TBEGINC] = "tbeginc", 558 - [LONG_INSN_PCISTG] = "pcistg", 559 - [LONG_INSN_MPCIFC] = "mpcifc", 560 - [LONG_INSN_STPCIFC] = "stpcifc", 561 - [LONG_INSN_PCISTB] = "pcistb", 562 - [LONG_INSN_VPOPCT] = "vpopct", 563 - [LONG_INSN_VERLLV] = "verllv", 564 - [LONG_INSN_VESRAV] = "vesrav", 565 - [LONG_INSN_VESRLV] = "vesrlv", 566 - [LONG_INSN_VSBCBI] = "vsbcbi", 567 - [LONG_INSN_STCCTM] = "stcctm", 568 - }; 569 - 570 - static struct s390_insn opcode[] = { 571 - { "bprp", 0xc5, INSTR_MII_UPI }, 572 - { "bpp", 0xc7, INSTR_SMI_U0RDP 
}, 573 - { "trtr", 0xd0, INSTR_SS_L0RDRD }, 574 - { "lmd", 0xef, INSTR_SS_RRRDRD3 }, 575 - { "spm", 0x04, INSTR_RR_R0 }, 576 - { "balr", 0x05, INSTR_RR_RR }, 577 - { "bctr", 0x06, INSTR_RR_RR }, 578 - { "bcr", 0x07, INSTR_RR_UR }, 579 - { "svc", 0x0a, INSTR_RR_U0 }, 580 - { "bsm", 0x0b, INSTR_RR_RR }, 581 - { "bassm", 0x0c, INSTR_RR_RR }, 582 - { "basr", 0x0d, INSTR_RR_RR }, 583 - { "mvcl", 0x0e, INSTR_RR_RR }, 584 - { "clcl", 0x0f, INSTR_RR_RR }, 585 - { "lpr", 0x10, INSTR_RR_RR }, 586 - { "lnr", 0x11, INSTR_RR_RR }, 587 - { "ltr", 0x12, INSTR_RR_RR }, 588 - { "lcr", 0x13, INSTR_RR_RR }, 589 - { "nr", 0x14, INSTR_RR_RR }, 590 - { "clr", 0x15, INSTR_RR_RR }, 591 - { "or", 0x16, INSTR_RR_RR }, 592 - { "xr", 0x17, INSTR_RR_RR }, 593 - { "lr", 0x18, INSTR_RR_RR }, 594 - { "cr", 0x19, INSTR_RR_RR }, 595 - { "ar", 0x1a, INSTR_RR_RR }, 596 - { "sr", 0x1b, INSTR_RR_RR }, 597 - { "mr", 0x1c, INSTR_RR_RR }, 598 - { "dr", 0x1d, INSTR_RR_RR }, 599 - { "alr", 0x1e, INSTR_RR_RR }, 600 - { "slr", 0x1f, INSTR_RR_RR }, 601 - { "lpdr", 0x20, INSTR_RR_FF }, 602 - { "lndr", 0x21, INSTR_RR_FF }, 603 - { "ltdr", 0x22, INSTR_RR_FF }, 604 - { "lcdr", 0x23, INSTR_RR_FF }, 605 - { "hdr", 0x24, INSTR_RR_FF }, 606 - { "ldxr", 0x25, INSTR_RR_FF }, 607 - { "mxr", 0x26, INSTR_RR_FF }, 608 - { "mxdr", 0x27, INSTR_RR_FF }, 609 - { "ldr", 0x28, INSTR_RR_FF }, 610 - { "cdr", 0x29, INSTR_RR_FF }, 611 - { "adr", 0x2a, INSTR_RR_FF }, 612 - { "sdr", 0x2b, INSTR_RR_FF }, 613 - { "mdr", 0x2c, INSTR_RR_FF }, 614 - { "ddr", 0x2d, INSTR_RR_FF }, 615 - { "awr", 0x2e, INSTR_RR_FF }, 616 - { "swr", 0x2f, INSTR_RR_FF }, 617 - { "lper", 0x30, INSTR_RR_FF }, 618 - { "lner", 0x31, INSTR_RR_FF }, 619 - { "lter", 0x32, INSTR_RR_FF }, 620 - { "lcer", 0x33, INSTR_RR_FF }, 621 - { "her", 0x34, INSTR_RR_FF }, 622 - { "ledr", 0x35, INSTR_RR_FF }, 623 - { "axr", 0x36, INSTR_RR_FF }, 624 - { "sxr", 0x37, INSTR_RR_FF }, 625 - { "ler", 0x38, INSTR_RR_FF }, 626 - { "cer", 0x39, INSTR_RR_FF }, 627 - { "aer", 0x3a, INSTR_RR_FF 
}, 628 - { "ser", 0x3b, INSTR_RR_FF }, 629 - { "mder", 0x3c, INSTR_RR_FF }, 630 - { "der", 0x3d, INSTR_RR_FF }, 631 - { "aur", 0x3e, INSTR_RR_FF }, 632 - { "sur", 0x3f, INSTR_RR_FF }, 633 - { "sth", 0x40, INSTR_RX_RRRD }, 634 - { "la", 0x41, INSTR_RX_RRRD }, 635 - { "stc", 0x42, INSTR_RX_RRRD }, 636 - { "ic", 0x43, INSTR_RX_RRRD }, 637 - { "ex", 0x44, INSTR_RX_RRRD }, 638 - { "bal", 0x45, INSTR_RX_RRRD }, 639 - { "bct", 0x46, INSTR_RX_RRRD }, 640 - { "bc", 0x47, INSTR_RX_URRD }, 641 - { "lh", 0x48, INSTR_RX_RRRD }, 642 - { "ch", 0x49, INSTR_RX_RRRD }, 643 - { "ah", 0x4a, INSTR_RX_RRRD }, 644 - { "sh", 0x4b, INSTR_RX_RRRD }, 645 - { "mh", 0x4c, INSTR_RX_RRRD }, 646 - { "bas", 0x4d, INSTR_RX_RRRD }, 647 - { "cvd", 0x4e, INSTR_RX_RRRD }, 648 - { "cvb", 0x4f, INSTR_RX_RRRD }, 649 - { "st", 0x50, INSTR_RX_RRRD }, 650 - { "lae", 0x51, INSTR_RX_RRRD }, 651 - { "n", 0x54, INSTR_RX_RRRD }, 652 - { "cl", 0x55, INSTR_RX_RRRD }, 653 - { "o", 0x56, INSTR_RX_RRRD }, 654 - { "x", 0x57, INSTR_RX_RRRD }, 655 - { "l", 0x58, INSTR_RX_RRRD }, 656 - { "c", 0x59, INSTR_RX_RRRD }, 657 - { "a", 0x5a, INSTR_RX_RRRD }, 658 - { "s", 0x5b, INSTR_RX_RRRD }, 659 - { "m", 0x5c, INSTR_RX_RRRD }, 660 - { "d", 0x5d, INSTR_RX_RRRD }, 661 - { "al", 0x5e, INSTR_RX_RRRD }, 662 - { "sl", 0x5f, INSTR_RX_RRRD }, 663 - { "std", 0x60, INSTR_RX_FRRD }, 664 - { "mxd", 0x67, INSTR_RX_FRRD }, 665 - { "ld", 0x68, INSTR_RX_FRRD }, 666 - { "cd", 0x69, INSTR_RX_FRRD }, 667 - { "ad", 0x6a, INSTR_RX_FRRD }, 668 - { "sd", 0x6b, INSTR_RX_FRRD }, 669 - { "md", 0x6c, INSTR_RX_FRRD }, 670 - { "dd", 0x6d, INSTR_RX_FRRD }, 671 - { "aw", 0x6e, INSTR_RX_FRRD }, 672 - { "sw", 0x6f, INSTR_RX_FRRD }, 673 - { "ste", 0x70, INSTR_RX_FRRD }, 674 - { "ms", 0x71, INSTR_RX_RRRD }, 675 - { "le", 0x78, INSTR_RX_FRRD }, 676 - { "ce", 0x79, INSTR_RX_FRRD }, 677 - { "ae", 0x7a, INSTR_RX_FRRD }, 678 - { "se", 0x7b, INSTR_RX_FRRD }, 679 - { "mde", 0x7c, INSTR_RX_FRRD }, 680 - { "de", 0x7d, INSTR_RX_FRRD }, 681 - { "au", 0x7e, INSTR_RX_FRRD }, 
682 - { "su", 0x7f, INSTR_RX_FRRD }, 683 - { "ssm", 0x80, INSTR_S_RD }, 684 - { "lpsw", 0x82, INSTR_S_RD }, 685 - { "diag", 0x83, INSTR_RS_RRRD }, 686 - { "brxh", 0x84, INSTR_RSI_RRP }, 687 - { "brxle", 0x85, INSTR_RSI_RRP }, 688 - { "bxh", 0x86, INSTR_RS_RRRD }, 689 - { "bxle", 0x87, INSTR_RS_RRRD }, 690 - { "srl", 0x88, INSTR_RS_R0RD }, 691 - { "sll", 0x89, INSTR_RS_R0RD }, 692 - { "sra", 0x8a, INSTR_RS_R0RD }, 693 - { "sla", 0x8b, INSTR_RS_R0RD }, 694 - { "srdl", 0x8c, INSTR_RS_R0RD }, 695 - { "sldl", 0x8d, INSTR_RS_R0RD }, 696 - { "srda", 0x8e, INSTR_RS_R0RD }, 697 - { "slda", 0x8f, INSTR_RS_R0RD }, 698 - { "stm", 0x90, INSTR_RS_RRRD }, 699 - { "tm", 0x91, INSTR_SI_URD }, 700 - { "mvi", 0x92, INSTR_SI_URD }, 701 - { "ts", 0x93, INSTR_S_RD }, 702 - { "ni", 0x94, INSTR_SI_URD }, 703 - { "cli", 0x95, INSTR_SI_URD }, 704 - { "oi", 0x96, INSTR_SI_URD }, 705 - { "xi", 0x97, INSTR_SI_URD }, 706 - { "lm", 0x98, INSTR_RS_RRRD }, 707 - { "trace", 0x99, INSTR_RS_RRRD }, 708 - { "lam", 0x9a, INSTR_RS_AARD }, 709 - { "stam", 0x9b, INSTR_RS_AARD }, 710 - { "mvcle", 0xa8, INSTR_RS_RRRD }, 711 - { "clcle", 0xa9, INSTR_RS_RRRD }, 712 - { "stnsm", 0xac, INSTR_SI_URD }, 713 - { "stosm", 0xad, INSTR_SI_URD }, 714 - { "sigp", 0xae, INSTR_RS_RRRD }, 715 - { "mc", 0xaf, INSTR_SI_URD }, 716 - { "lra", 0xb1, INSTR_RX_RRRD }, 717 - { "stctl", 0xb6, INSTR_RS_CCRD }, 718 - { "lctl", 0xb7, INSTR_RS_CCRD }, 719 - { "cs", 0xba, INSTR_RS_RRRD }, 720 - { "cds", 0xbb, INSTR_RS_RRRD }, 721 - { "clm", 0xbd, INSTR_RS_RURD }, 722 - { "stcm", 0xbe, INSTR_RS_RURD }, 723 - { "icm", 0xbf, INSTR_RS_RURD }, 724 - { "mvn", 0xd1, INSTR_SS_L0RDRD }, 725 - { "mvc", 0xd2, INSTR_SS_L0RDRD }, 726 - { "mvz", 0xd3, INSTR_SS_L0RDRD }, 727 - { "nc", 0xd4, INSTR_SS_L0RDRD }, 728 - { "clc", 0xd5, INSTR_SS_L0RDRD }, 729 - { "oc", 0xd6, INSTR_SS_L0RDRD }, 730 - { "xc", 0xd7, INSTR_SS_L0RDRD }, 731 - { "mvck", 0xd9, INSTR_SS_RRRDRD }, 732 - { "mvcp", 0xda, INSTR_SS_RRRDRD }, 733 - { "mvcs", 0xdb, INSTR_SS_RRRDRD }, 734 
- { "tr", 0xdc, INSTR_SS_L0RDRD }, 735 - { "trt", 0xdd, INSTR_SS_L0RDRD }, 736 - { "ed", 0xde, INSTR_SS_L0RDRD }, 737 - { "edmk", 0xdf, INSTR_SS_L0RDRD }, 738 - { "pku", 0xe1, INSTR_SS_L0RDRD }, 739 - { "unpku", 0xe2, INSTR_SS_L0RDRD }, 740 - { "mvcin", 0xe8, INSTR_SS_L0RDRD }, 741 - { "pka", 0xe9, INSTR_SS_L0RDRD }, 742 - { "unpka", 0xea, INSTR_SS_L0RDRD }, 743 - { "plo", 0xee, INSTR_SS_RRRDRD2 }, 744 - { "srp", 0xf0, INSTR_SS_LIRDRD }, 745 - { "mvo", 0xf1, INSTR_SS_LLRDRD }, 746 - { "pack", 0xf2, INSTR_SS_LLRDRD }, 747 - { "unpk", 0xf3, INSTR_SS_LLRDRD }, 748 - { "zap", 0xf8, INSTR_SS_LLRDRD }, 749 - { "cp", 0xf9, INSTR_SS_LLRDRD }, 750 - { "ap", 0xfa, INSTR_SS_LLRDRD }, 751 - { "sp", 0xfb, INSTR_SS_LLRDRD }, 752 - { "mp", 0xfc, INSTR_SS_LLRDRD }, 753 - { "dp", 0xfd, INSTR_SS_LLRDRD }, 754 - { "", 0, INSTR_INVALID } 755 - }; 756 - 757 - static struct s390_insn opcode_01[] = { 758 - { "ptff", 0x04, INSTR_E }, 759 - { "pfpo", 0x0a, INSTR_E }, 760 - { "sam64", 0x0e, INSTR_E }, 761 - { "pr", 0x01, INSTR_E }, 762 - { "upt", 0x02, INSTR_E }, 763 - { "sckpf", 0x07, INSTR_E }, 764 - { "tam", 0x0b, INSTR_E }, 765 - { "sam24", 0x0c, INSTR_E }, 766 - { "sam31", 0x0d, INSTR_E }, 767 - { "trap2", 0xff, INSTR_E }, 768 - { "", 0, INSTR_INVALID } 769 - }; 770 - 771 - static struct s390_insn opcode_a5[] = { 772 - { "iihh", 0x00, INSTR_RI_RU }, 773 - { "iihl", 0x01, INSTR_RI_RU }, 774 - { "iilh", 0x02, INSTR_RI_RU }, 775 - { "iill", 0x03, INSTR_RI_RU }, 776 - { "nihh", 0x04, INSTR_RI_RU }, 777 - { "nihl", 0x05, INSTR_RI_RU }, 778 - { "nilh", 0x06, INSTR_RI_RU }, 779 - { "nill", 0x07, INSTR_RI_RU }, 780 - { "oihh", 0x08, INSTR_RI_RU }, 781 - { "oihl", 0x09, INSTR_RI_RU }, 782 - { "oilh", 0x0a, INSTR_RI_RU }, 783 - { "oill", 0x0b, INSTR_RI_RU }, 784 - { "llihh", 0x0c, INSTR_RI_RU }, 785 - { "llihl", 0x0d, INSTR_RI_RU }, 786 - { "llilh", 0x0e, INSTR_RI_RU }, 787 - { "llill", 0x0f, INSTR_RI_RU }, 788 - { "", 0, INSTR_INVALID } 789 - }; 790 - 791 - static struct s390_insn opcode_a7[] = 
{ 792 - { "tmhh", 0x02, INSTR_RI_RU }, 793 - { "tmhl", 0x03, INSTR_RI_RU }, 794 - { "brctg", 0x07, INSTR_RI_RP }, 795 - { "lghi", 0x09, INSTR_RI_RI }, 796 - { "aghi", 0x0b, INSTR_RI_RI }, 797 - { "mghi", 0x0d, INSTR_RI_RI }, 798 - { "cghi", 0x0f, INSTR_RI_RI }, 799 - { "tmlh", 0x00, INSTR_RI_RU }, 800 - { "tmll", 0x01, INSTR_RI_RU }, 801 - { "brc", 0x04, INSTR_RI_UP }, 802 - { "bras", 0x05, INSTR_RI_RP }, 803 - { "brct", 0x06, INSTR_RI_RP }, 804 - { "lhi", 0x08, INSTR_RI_RI }, 805 - { "ahi", 0x0a, INSTR_RI_RI }, 806 - { "mhi", 0x0c, INSTR_RI_RI }, 807 - { "chi", 0x0e, INSTR_RI_RI }, 808 - { "", 0, INSTR_INVALID } 809 - }; 810 - 811 - static struct s390_insn opcode_aa[] = { 812 - { { 0, LONG_INSN_RINEXT }, 0x00, INSTR_RI_RI }, 813 - { "rion", 0x01, INSTR_RI_RI }, 814 - { "tric", 0x02, INSTR_RI_RI }, 815 - { "rioff", 0x03, INSTR_RI_RI }, 816 - { { 0, LONG_INSN_RIEMIT }, 0x04, INSTR_RI_RI }, 817 - { "", 0, INSTR_INVALID } 818 - }; 819 - 820 - static struct s390_insn opcode_b2[] = { 821 - { "stckf", 0x7c, INSTR_S_RD }, 822 - { "lpp", 0x80, INSTR_S_RD }, 823 - { "lcctl", 0x84, INSTR_S_RD }, 824 - { "lpctl", 0x85, INSTR_S_RD }, 825 - { "qsi", 0x86, INSTR_S_RD }, 826 - { "lsctl", 0x87, INSTR_S_RD }, 827 - { "qctri", 0x8e, INSTR_S_RD }, 828 - { "stfle", 0xb0, INSTR_S_RD }, 829 - { "lpswe", 0xb2, INSTR_S_RD }, 830 - { "srnmb", 0xb8, INSTR_S_RD }, 831 - { "srnmt", 0xb9, INSTR_S_RD }, 832 - { "lfas", 0xbd, INSTR_S_RD }, 833 - { "scctr", 0xe0, INSTR_RRE_RR }, 834 - { "spctr", 0xe1, INSTR_RRE_RR }, 835 - { "ecctr", 0xe4, INSTR_RRE_RR }, 836 - { "epctr", 0xe5, INSTR_RRE_RR }, 837 - { "ppa", 0xe8, INSTR_RRF_U0RR }, 838 - { "etnd", 0xec, INSTR_RRE_R0 }, 839 - { "ecpga", 0xed, INSTR_RRE_RR }, 840 - { "tend", 0xf8, INSTR_S_00 }, 841 - { "niai", 0xfa, INSTR_IE_UU }, 842 - { { 0, LONG_INSN_TABORT }, 0xfc, INSTR_S_RD }, 843 - { "stidp", 0x02, INSTR_S_RD }, 844 - { "sck", 0x04, INSTR_S_RD }, 845 - { "stck", 0x05, INSTR_S_RD }, 846 - { "sckc", 0x06, INSTR_S_RD }, 847 - { "stckc", 0x07, 
INSTR_S_RD }, 848 - { "spt", 0x08, INSTR_S_RD }, 849 - { "stpt", 0x09, INSTR_S_RD }, 850 - { "spka", 0x0a, INSTR_S_RD }, 851 - { "ipk", 0x0b, INSTR_S_00 }, 852 - { "ptlb", 0x0d, INSTR_S_00 }, 853 - { "spx", 0x10, INSTR_S_RD }, 854 - { "stpx", 0x11, INSTR_S_RD }, 855 - { "stap", 0x12, INSTR_S_RD }, 856 - { "sie", 0x14, INSTR_S_RD }, 857 - { "pc", 0x18, INSTR_S_RD }, 858 - { "sac", 0x19, INSTR_S_RD }, 859 - { "cfc", 0x1a, INSTR_S_RD }, 860 - { "servc", 0x20, INSTR_RRE_RR }, 861 - { "ipte", 0x21, INSTR_RRE_RR }, 862 - { "ipm", 0x22, INSTR_RRE_R0 }, 863 - { "ivsk", 0x23, INSTR_RRE_RR }, 864 - { "iac", 0x24, INSTR_RRE_R0 }, 865 - { "ssar", 0x25, INSTR_RRE_R0 }, 866 - { "epar", 0x26, INSTR_RRE_R0 }, 867 - { "esar", 0x27, INSTR_RRE_R0 }, 868 - { "pt", 0x28, INSTR_RRE_RR }, 869 - { "iske", 0x29, INSTR_RRE_RR }, 870 - { "rrbe", 0x2a, INSTR_RRE_RR }, 871 - { "sske", 0x2b, INSTR_RRF_M0RR }, 872 - { "tb", 0x2c, INSTR_RRE_0R }, 873 - { "dxr", 0x2d, INSTR_RRE_FF }, 874 - { "pgin", 0x2e, INSTR_RRE_RR }, 875 - { "pgout", 0x2f, INSTR_RRE_RR }, 876 - { "csch", 0x30, INSTR_S_00 }, 877 - { "hsch", 0x31, INSTR_S_00 }, 878 - { "msch", 0x32, INSTR_S_RD }, 879 - { "ssch", 0x33, INSTR_S_RD }, 880 - { "stsch", 0x34, INSTR_S_RD }, 881 - { "tsch", 0x35, INSTR_S_RD }, 882 - { "tpi", 0x36, INSTR_S_RD }, 883 - { "sal", 0x37, INSTR_S_00 }, 884 - { "rsch", 0x38, INSTR_S_00 }, 885 - { "stcrw", 0x39, INSTR_S_RD }, 886 - { "stcps", 0x3a, INSTR_S_RD }, 887 - { "rchp", 0x3b, INSTR_S_00 }, 888 - { "schm", 0x3c, INSTR_S_00 }, 889 - { "bakr", 0x40, INSTR_RRE_RR }, 890 - { "cksm", 0x41, INSTR_RRE_RR }, 891 - { "sqdr", 0x44, INSTR_RRE_FF }, 892 - { "sqer", 0x45, INSTR_RRE_FF }, 893 - { "stura", 0x46, INSTR_RRE_RR }, 894 - { "msta", 0x47, INSTR_RRE_R0 }, 895 - { "palb", 0x48, INSTR_RRE_00 }, 896 - { "ereg", 0x49, INSTR_RRE_RR }, 897 - { "esta", 0x4a, INSTR_RRE_RR }, 898 - { "lura", 0x4b, INSTR_RRE_RR }, 899 - { "tar", 0x4c, INSTR_RRE_AR }, 900 - { "cpya", 0x4d, INSTR_RRE_AA }, 901 - { "sar", 0x4e, 
INSTR_RRE_AR }, 902 - { "ear", 0x4f, INSTR_RRE_RA }, 903 - { "csp", 0x50, INSTR_RRE_RR }, 904 - { "msr", 0x52, INSTR_RRE_RR }, 905 - { "mvpg", 0x54, INSTR_RRE_RR }, 906 - { "mvst", 0x55, INSTR_RRE_RR }, 907 - { "cuse", 0x57, INSTR_RRE_RR }, 908 - { "bsg", 0x58, INSTR_RRE_RR }, 909 - { "bsa", 0x5a, INSTR_RRE_RR }, 910 - { "clst", 0x5d, INSTR_RRE_RR }, 911 - { "srst", 0x5e, INSTR_RRE_RR }, 912 - { "cmpsc", 0x63, INSTR_RRE_RR }, 913 - { "siga", 0x74, INSTR_S_RD }, 914 - { "xsch", 0x76, INSTR_S_00 }, 915 - { "rp", 0x77, INSTR_S_RD }, 916 - { "stcke", 0x78, INSTR_S_RD }, 917 - { "sacf", 0x79, INSTR_S_RD }, 918 - { "stsi", 0x7d, INSTR_S_RD }, 919 - { "srnm", 0x99, INSTR_S_RD }, 920 - { "stfpc", 0x9c, INSTR_S_RD }, 921 - { "lfpc", 0x9d, INSTR_S_RD }, 922 - { "tre", 0xa5, INSTR_RRE_RR }, 923 - { "cuutf", 0xa6, INSTR_RRF_M0RR }, 924 - { "cutfu", 0xa7, INSTR_RRF_M0RR }, 925 - { "stfl", 0xb1, INSTR_S_RD }, 926 - { "trap4", 0xff, INSTR_S_RD }, 927 - { "", 0, INSTR_INVALID } 928 - }; 929 - 930 - static struct s390_insn opcode_b3[] = { 931 - { "maylr", 0x38, INSTR_RRF_F0FF }, 932 - { "mylr", 0x39, INSTR_RRF_F0FF }, 933 - { "mayr", 0x3a, INSTR_RRF_F0FF }, 934 - { "myr", 0x3b, INSTR_RRF_F0FF }, 935 - { "mayhr", 0x3c, INSTR_RRF_F0FF }, 936 - { "myhr", 0x3d, INSTR_RRF_F0FF }, 937 - { "lpdfr", 0x70, INSTR_RRE_FF }, 938 - { "lndfr", 0x71, INSTR_RRE_FF }, 939 - { "cpsdr", 0x72, INSTR_RRF_F0FF2 }, 940 - { "lcdfr", 0x73, INSTR_RRE_FF }, 941 - { "sfasr", 0x85, INSTR_RRE_R0 }, 942 - { { 0, LONG_INSN_CELFBR }, 0x90, INSTR_RRF_UUFR }, 943 - { { 0, LONG_INSN_CDLFBR }, 0x91, INSTR_RRF_UUFR }, 944 - { { 0, LONG_INSN_CXLFBR }, 0x92, INSTR_RRF_UURF }, 945 - { { 0, LONG_INSN_CEFBRA }, 0x94, INSTR_RRF_UUFR }, 946 - { { 0, LONG_INSN_CDFBRA }, 0x95, INSTR_RRF_UUFR }, 947 - { { 0, LONG_INSN_CXFBRA }, 0x96, INSTR_RRF_UURF }, 948 - { { 0, LONG_INSN_CFEBRA }, 0x98, INSTR_RRF_UURF }, 949 - { { 0, LONG_INSN_CFDBRA }, 0x99, INSTR_RRF_UURF }, 950 - { { 0, LONG_INSN_CFXBRA }, 0x9a, INSTR_RRF_UUFR }, 951 - { { 
0, LONG_INSN_CLFEBR }, 0x9c, INSTR_RRF_UURF }, 952 - { { 0, LONG_INSN_CLFDBR }, 0x9d, INSTR_RRF_UURF }, 953 - { { 0, LONG_INSN_CLFXBR }, 0x9e, INSTR_RRF_UUFR }, 954 - { { 0, LONG_INSN_CELGBR }, 0xa0, INSTR_RRF_UUFR }, 955 - { { 0, LONG_INSN_CDLGBR }, 0xa1, INSTR_RRF_UUFR }, 956 - { { 0, LONG_INSN_CXLGBR }, 0xa2, INSTR_RRF_UURF }, 957 - { { 0, LONG_INSN_CEGBRA }, 0xa4, INSTR_RRF_UUFR }, 958 - { { 0, LONG_INSN_CDGBRA }, 0xa5, INSTR_RRF_UUFR }, 959 - { { 0, LONG_INSN_CXGBRA }, 0xa6, INSTR_RRF_UURF }, 960 - { { 0, LONG_INSN_CGEBRA }, 0xa8, INSTR_RRF_UURF }, 961 - { { 0, LONG_INSN_CGDBRA }, 0xa9, INSTR_RRF_UURF }, 962 - { { 0, LONG_INSN_CGXBRA }, 0xaa, INSTR_RRF_UUFR }, 963 - { { 0, LONG_INSN_CLGEBR }, 0xac, INSTR_RRF_UURF }, 964 - { { 0, LONG_INSN_CLGDBR }, 0xad, INSTR_RRF_UURF }, 965 - { { 0, LONG_INSN_CLGXBR }, 0xae, INSTR_RRF_UUFR }, 966 - { "ldgr", 0xc1, INSTR_RRE_FR }, 967 - { "cegr", 0xc4, INSTR_RRE_FR }, 968 - { "cdgr", 0xc5, INSTR_RRE_FR }, 969 - { "cxgr", 0xc6, INSTR_RRE_FR }, 970 - { "cger", 0xc8, INSTR_RRF_U0RF }, 971 - { "cgdr", 0xc9, INSTR_RRF_U0RF }, 972 - { "cgxr", 0xca, INSTR_RRF_U0RF }, 973 - { "lgdr", 0xcd, INSTR_RRE_RF }, 974 - { "mdtra", 0xd0, INSTR_RRF_FUFF2 }, 975 - { "ddtra", 0xd1, INSTR_RRF_FUFF2 }, 976 - { "adtra", 0xd2, INSTR_RRF_FUFF2 }, 977 - { "sdtra", 0xd3, INSTR_RRF_FUFF2 }, 978 - { "ldetr", 0xd4, INSTR_RRF_0UFF }, 979 - { "ledtr", 0xd5, INSTR_RRF_UUFF }, 980 - { "ltdtr", 0xd6, INSTR_RRE_FF }, 981 - { "fidtr", 0xd7, INSTR_RRF_UUFF }, 982 - { "mxtra", 0xd8, INSTR_RRF_FUFF2 }, 983 - { "dxtra", 0xd9, INSTR_RRF_FUFF2 }, 984 - { "axtra", 0xda, INSTR_RRF_FUFF2 }, 985 - { "sxtra", 0xdb, INSTR_RRF_FUFF2 }, 986 - { "lxdtr", 0xdc, INSTR_RRF_0UFF }, 987 - { "ldxtr", 0xdd, INSTR_RRF_UUFF }, 988 - { "ltxtr", 0xde, INSTR_RRE_FF }, 989 - { "fixtr", 0xdf, INSTR_RRF_UUFF }, 990 - { "kdtr", 0xe0, INSTR_RRE_FF }, 991 - { { 0, LONG_INSN_CGDTRA }, 0xe1, INSTR_RRF_UURF }, 992 - { "cudtr", 0xe2, INSTR_RRE_RF }, 993 - { "csdtr", 0xe3, INSTR_RRE_RF }, 994 - { 
"cdtr", 0xe4, INSTR_RRE_FF }, 995 - { "eedtr", 0xe5, INSTR_RRE_RF }, 996 - { "esdtr", 0xe7, INSTR_RRE_RF }, 997 - { "kxtr", 0xe8, INSTR_RRE_FF }, 998 - { { 0, LONG_INSN_CGXTRA }, 0xe9, INSTR_RRF_UUFR }, 999 - { "cuxtr", 0xea, INSTR_RRE_RF }, 1000 - { "csxtr", 0xeb, INSTR_RRE_RF }, 1001 - { "cxtr", 0xec, INSTR_RRE_FF }, 1002 - { "eextr", 0xed, INSTR_RRE_RF }, 1003 - { "esxtr", 0xef, INSTR_RRE_RF }, 1004 - { { 0, LONG_INSN_CDGTRA }, 0xf1, INSTR_RRF_UUFR }, 1005 - { "cdutr", 0xf2, INSTR_RRE_FR }, 1006 - { "cdstr", 0xf3, INSTR_RRE_FR }, 1007 - { "cedtr", 0xf4, INSTR_RRE_FF }, 1008 - { "qadtr", 0xf5, INSTR_RRF_FUFF }, 1009 - { "iedtr", 0xf6, INSTR_RRF_F0FR }, 1010 - { "rrdtr", 0xf7, INSTR_RRF_FFRU }, 1011 - { { 0, LONG_INSN_CXGTRA }, 0xf9, INSTR_RRF_UURF }, 1012 - { "cxutr", 0xfa, INSTR_RRE_FR }, 1013 - { "cxstr", 0xfb, INSTR_RRE_FR }, 1014 - { "cextr", 0xfc, INSTR_RRE_FF }, 1015 - { "qaxtr", 0xfd, INSTR_RRF_FUFF }, 1016 - { "iextr", 0xfe, INSTR_RRF_F0FR }, 1017 - { "rrxtr", 0xff, INSTR_RRF_FFRU }, 1018 - { "lpebr", 0x00, INSTR_RRE_FF }, 1019 - { "lnebr", 0x01, INSTR_RRE_FF }, 1020 - { "ltebr", 0x02, INSTR_RRE_FF }, 1021 - { "lcebr", 0x03, INSTR_RRE_FF }, 1022 - { "ldebr", 0x04, INSTR_RRE_FF }, 1023 - { "lxdbr", 0x05, INSTR_RRE_FF }, 1024 - { "lxebr", 0x06, INSTR_RRE_FF }, 1025 - { "mxdbr", 0x07, INSTR_RRE_FF }, 1026 - { "kebr", 0x08, INSTR_RRE_FF }, 1027 - { "cebr", 0x09, INSTR_RRE_FF }, 1028 - { "aebr", 0x0a, INSTR_RRE_FF }, 1029 - { "sebr", 0x0b, INSTR_RRE_FF }, 1030 - { "mdebr", 0x0c, INSTR_RRE_FF }, 1031 - { "debr", 0x0d, INSTR_RRE_FF }, 1032 - { "maebr", 0x0e, INSTR_RRF_F0FF }, 1033 - { "msebr", 0x0f, INSTR_RRF_F0FF }, 1034 - { "lpdbr", 0x10, INSTR_RRE_FF }, 1035 - { "lndbr", 0x11, INSTR_RRE_FF }, 1036 - { "ltdbr", 0x12, INSTR_RRE_FF }, 1037 - { "lcdbr", 0x13, INSTR_RRE_FF }, 1038 - { "sqebr", 0x14, INSTR_RRE_FF }, 1039 - { "sqdbr", 0x15, INSTR_RRE_FF }, 1040 - { "sqxbr", 0x16, INSTR_RRE_FF }, 1041 - { "meebr", 0x17, INSTR_RRE_FF }, 1042 - { "kdbr", 0x18, 
INSTR_RRE_FF }, 1043 - { "cdbr", 0x19, INSTR_RRE_FF }, 1044 - { "adbr", 0x1a, INSTR_RRE_FF }, 1045 - { "sdbr", 0x1b, INSTR_RRE_FF }, 1046 - { "mdbr", 0x1c, INSTR_RRE_FF }, 1047 - { "ddbr", 0x1d, INSTR_RRE_FF }, 1048 - { "madbr", 0x1e, INSTR_RRF_F0FF }, 1049 - { "msdbr", 0x1f, INSTR_RRF_F0FF }, 1050 - { "lder", 0x24, INSTR_RRE_FF }, 1051 - { "lxdr", 0x25, INSTR_RRE_FF }, 1052 - { "lxer", 0x26, INSTR_RRE_FF }, 1053 - { "maer", 0x2e, INSTR_RRF_F0FF }, 1054 - { "mser", 0x2f, INSTR_RRF_F0FF }, 1055 - { "sqxr", 0x36, INSTR_RRE_FF }, 1056 - { "meer", 0x37, INSTR_RRE_FF }, 1057 - { "madr", 0x3e, INSTR_RRF_F0FF }, 1058 - { "msdr", 0x3f, INSTR_RRF_F0FF }, 1059 - { "lpxbr", 0x40, INSTR_RRE_FF }, 1060 - { "lnxbr", 0x41, INSTR_RRE_FF }, 1061 - { "ltxbr", 0x42, INSTR_RRE_FF }, 1062 - { "lcxbr", 0x43, INSTR_RRE_FF }, 1063 - { { 0, LONG_INSN_LEDBRA }, 0x44, INSTR_RRF_UUFF }, 1064 - { { 0, LONG_INSN_LDXBRA }, 0x45, INSTR_RRF_UUFF }, 1065 - { { 0, LONG_INSN_LEXBRA }, 0x46, INSTR_RRF_UUFF }, 1066 - { { 0, LONG_INSN_FIXBRA }, 0x47, INSTR_RRF_UUFF }, 1067 - { "kxbr", 0x48, INSTR_RRE_FF }, 1068 - { "cxbr", 0x49, INSTR_RRE_FF }, 1069 - { "axbr", 0x4a, INSTR_RRE_FF }, 1070 - { "sxbr", 0x4b, INSTR_RRE_FF }, 1071 - { "mxbr", 0x4c, INSTR_RRE_FF }, 1072 - { "dxbr", 0x4d, INSTR_RRE_FF }, 1073 - { "tbedr", 0x50, INSTR_RRF_U0FF }, 1074 - { "tbdr", 0x51, INSTR_RRF_U0FF }, 1075 - { "diebr", 0x53, INSTR_RRF_FUFF }, 1076 - { { 0, LONG_INSN_FIEBRA }, 0x57, INSTR_RRF_UUFF }, 1077 - { "thder", 0x58, INSTR_RRE_FF }, 1078 - { "thdr", 0x59, INSTR_RRE_FF }, 1079 - { "didbr", 0x5b, INSTR_RRF_FUFF }, 1080 - { { 0, LONG_INSN_FIDBRA }, 0x5f, INSTR_RRF_UUFF }, 1081 - { "lpxr", 0x60, INSTR_RRE_FF }, 1082 - { "lnxr", 0x61, INSTR_RRE_FF }, 1083 - { "ltxr", 0x62, INSTR_RRE_FF }, 1084 - { "lcxr", 0x63, INSTR_RRE_FF }, 1085 - { "lxr", 0x65, INSTR_RRE_FF }, 1086 - { "lexr", 0x66, INSTR_RRE_FF }, 1087 - { "fixr", 0x67, INSTR_RRE_FF }, 1088 - { "cxr", 0x69, INSTR_RRE_FF }, 1089 - { "lzer", 0x74, INSTR_RRE_F0 }, 1090 - { 
"lzdr", 0x75, INSTR_RRE_F0 }, 1091 - { "lzxr", 0x76, INSTR_RRE_F0 }, 1092 - { "fier", 0x77, INSTR_RRE_FF }, 1093 - { "fidr", 0x7f, INSTR_RRE_FF }, 1094 - { "sfpc", 0x84, INSTR_RRE_RR_OPT }, 1095 - { "efpc", 0x8c, INSTR_RRE_RR_OPT }, 1096 - { "cefbr", 0x94, INSTR_RRE_RF }, 1097 - { "cdfbr", 0x95, INSTR_RRE_RF }, 1098 - { "cxfbr", 0x96, INSTR_RRE_RF }, 1099 - { "cfebr", 0x98, INSTR_RRF_U0RF }, 1100 - { "cfdbr", 0x99, INSTR_RRF_U0RF }, 1101 - { "cfxbr", 0x9a, INSTR_RRF_U0RF }, 1102 - { "cefr", 0xb4, INSTR_RRE_FR }, 1103 - { "cdfr", 0xb5, INSTR_RRE_FR }, 1104 - { "cxfr", 0xb6, INSTR_RRE_FR }, 1105 - { "cfer", 0xb8, INSTR_RRF_U0RF }, 1106 - { "cfdr", 0xb9, INSTR_RRF_U0RF }, 1107 - { "cfxr", 0xba, INSTR_RRF_U0RF }, 1108 - { "", 0, INSTR_INVALID } 1109 - }; 1110 - 1111 - static struct s390_insn opcode_b9[] = { 1112 - { "lpgr", 0x00, INSTR_RRE_RR }, 1113 - { "lngr", 0x01, INSTR_RRE_RR }, 1114 - { "ltgr", 0x02, INSTR_RRE_RR }, 1115 - { "lcgr", 0x03, INSTR_RRE_RR }, 1116 - { "lgr", 0x04, INSTR_RRE_RR }, 1117 - { "lurag", 0x05, INSTR_RRE_RR }, 1118 - { "lgbr", 0x06, INSTR_RRE_RR }, 1119 - { "lghr", 0x07, INSTR_RRE_RR }, 1120 - { "agr", 0x08, INSTR_RRE_RR }, 1121 - { "sgr", 0x09, INSTR_RRE_RR }, 1122 - { "algr", 0x0a, INSTR_RRE_RR }, 1123 - { "slgr", 0x0b, INSTR_RRE_RR }, 1124 - { "msgr", 0x0c, INSTR_RRE_RR }, 1125 - { "dsgr", 0x0d, INSTR_RRE_RR }, 1126 - { "eregg", 0x0e, INSTR_RRE_RR }, 1127 - { "lrvgr", 0x0f, INSTR_RRE_RR }, 1128 - { "lpgfr", 0x10, INSTR_RRE_RR }, 1129 - { "lngfr", 0x11, INSTR_RRE_RR }, 1130 - { "ltgfr", 0x12, INSTR_RRE_RR }, 1131 - { "lcgfr", 0x13, INSTR_RRE_RR }, 1132 - { "lgfr", 0x14, INSTR_RRE_RR }, 1133 - { "llgfr", 0x16, INSTR_RRE_RR }, 1134 - { "llgtr", 0x17, INSTR_RRE_RR }, 1135 - { "agfr", 0x18, INSTR_RRE_RR }, 1136 - { "sgfr", 0x19, INSTR_RRE_RR }, 1137 - { "algfr", 0x1a, INSTR_RRE_RR }, 1138 - { "slgfr", 0x1b, INSTR_RRE_RR }, 1139 - { "msgfr", 0x1c, INSTR_RRE_RR }, 1140 - { "dsgfr", 0x1d, INSTR_RRE_RR }, 1141 - { "cgr", 0x20, INSTR_RRE_RR }, 1142 
- { "clgr", 0x21, INSTR_RRE_RR }, 1143 - { "sturg", 0x25, INSTR_RRE_RR }, 1144 - { "lbr", 0x26, INSTR_RRE_RR }, 1145 - { "lhr", 0x27, INSTR_RRE_RR }, 1146 - { "cgfr", 0x30, INSTR_RRE_RR }, 1147 - { "clgfr", 0x31, INSTR_RRE_RR }, 1148 - { "cfdtr", 0x41, INSTR_RRF_UURF }, 1149 - { { 0, LONG_INSN_CLGDTR }, 0x42, INSTR_RRF_UURF }, 1150 - { { 0, LONG_INSN_CLFDTR }, 0x43, INSTR_RRF_UURF }, 1151 - { "bctgr", 0x46, INSTR_RRE_RR }, 1152 - { "cfxtr", 0x49, INSTR_RRF_UURF }, 1153 - { { 0, LONG_INSN_CLGXTR }, 0x4a, INSTR_RRF_UUFR }, 1154 - { { 0, LONG_INSN_CLFXTR }, 0x4b, INSTR_RRF_UUFR }, 1155 - { "cdftr", 0x51, INSTR_RRF_UUFR }, 1156 - { { 0, LONG_INSN_CDLGTR }, 0x52, INSTR_RRF_UUFR }, 1157 - { { 0, LONG_INSN_CDLFTR }, 0x53, INSTR_RRF_UUFR }, 1158 - { "cxftr", 0x59, INSTR_RRF_UURF }, 1159 - { { 0, LONG_INSN_CXLGTR }, 0x5a, INSTR_RRF_UURF }, 1160 - { { 0, LONG_INSN_CXLFTR }, 0x5b, INSTR_RRF_UUFR }, 1161 - { "cgrt", 0x60, INSTR_RRF_U0RR }, 1162 - { "clgrt", 0x61, INSTR_RRF_U0RR }, 1163 - { "crt", 0x72, INSTR_RRF_U0RR }, 1164 - { "clrt", 0x73, INSTR_RRF_U0RR }, 1165 - { "ngr", 0x80, INSTR_RRE_RR }, 1166 - { "ogr", 0x81, INSTR_RRE_RR }, 1167 - { "xgr", 0x82, INSTR_RRE_RR }, 1168 - { "flogr", 0x83, INSTR_RRE_RR }, 1169 - { "llgcr", 0x84, INSTR_RRE_RR }, 1170 - { "llghr", 0x85, INSTR_RRE_RR }, 1171 - { "mlgr", 0x86, INSTR_RRE_RR }, 1172 - { "dlgr", 0x87, INSTR_RRE_RR }, 1173 - { "alcgr", 0x88, INSTR_RRE_RR }, 1174 - { "slbgr", 0x89, INSTR_RRE_RR }, 1175 - { "cspg", 0x8a, INSTR_RRE_RR }, 1176 - { "idte", 0x8e, INSTR_RRF_R0RR }, 1177 - { "crdte", 0x8f, INSTR_RRF_RMRR }, 1178 - { "llcr", 0x94, INSTR_RRE_RR }, 1179 - { "llhr", 0x95, INSTR_RRE_RR }, 1180 - { "esea", 0x9d, INSTR_RRE_R0 }, 1181 - { "ptf", 0xa2, INSTR_RRE_R0 }, 1182 - { "lptea", 0xaa, INSTR_RRF_RURR }, 1183 - { "rrbm", 0xae, INSTR_RRE_RR }, 1184 - { "pfmf", 0xaf, INSTR_RRE_RR }, 1185 - { "cu14", 0xb0, INSTR_RRF_M0RR }, 1186 - { "cu24", 0xb1, INSTR_RRF_M0RR }, 1187 - { "cu41", 0xb2, INSTR_RRE_RR }, 1188 - { "cu42", 0xb3, 
INSTR_RRE_RR }, 1189 - { "trtre", 0xbd, INSTR_RRF_M0RR }, 1190 - { "srstu", 0xbe, INSTR_RRE_RR }, 1191 - { "trte", 0xbf, INSTR_RRF_M0RR }, 1192 - { "ahhhr", 0xc8, INSTR_RRF_R0RR2 }, 1193 - { "shhhr", 0xc9, INSTR_RRF_R0RR2 }, 1194 - { { 0, LONG_INSN_ALHHHR }, 0xca, INSTR_RRF_R0RR2 }, 1195 - { { 0, LONG_INSN_SLHHHR }, 0xcb, INSTR_RRF_R0RR2 }, 1196 - { "chhr", 0xcd, INSTR_RRE_RR }, 1197 - { "clhhr", 0xcf, INSTR_RRE_RR }, 1198 - { { 0, LONG_INSN_PCISTG }, 0xd0, INSTR_RRE_RR }, 1199 - { "pcilg", 0xd2, INSTR_RRE_RR }, 1200 - { "rpcit", 0xd3, INSTR_RRE_RR }, 1201 - { "ahhlr", 0xd8, INSTR_RRF_R0RR2 }, 1202 - { "shhlr", 0xd9, INSTR_RRF_R0RR2 }, 1203 - { { 0, LONG_INSN_ALHHLR }, 0xda, INSTR_RRF_R0RR2 }, 1204 - { { 0, LONG_INSN_SLHHLR }, 0xdb, INSTR_RRF_R0RR2 }, 1205 - { "chlr", 0xdd, INSTR_RRE_RR }, 1206 - { "clhlr", 0xdf, INSTR_RRE_RR }, 1207 - { { 0, LONG_INSN_POPCNT }, 0xe1, INSTR_RRE_RR }, 1208 - { "locgr", 0xe2, INSTR_RRF_M0RR }, 1209 - { "ngrk", 0xe4, INSTR_RRF_R0RR2 }, 1210 - { "ogrk", 0xe6, INSTR_RRF_R0RR2 }, 1211 - { "xgrk", 0xe7, INSTR_RRF_R0RR2 }, 1212 - { "agrk", 0xe8, INSTR_RRF_R0RR2 }, 1213 - { "sgrk", 0xe9, INSTR_RRF_R0RR2 }, 1214 - { "algrk", 0xea, INSTR_RRF_R0RR2 }, 1215 - { "slgrk", 0xeb, INSTR_RRF_R0RR2 }, 1216 - { "locr", 0xf2, INSTR_RRF_M0RR }, 1217 - { "nrk", 0xf4, INSTR_RRF_R0RR2 }, 1218 - { "ork", 0xf6, INSTR_RRF_R0RR2 }, 1219 - { "xrk", 0xf7, INSTR_RRF_R0RR2 }, 1220 - { "ark", 0xf8, INSTR_RRF_R0RR2 }, 1221 - { "srk", 0xf9, INSTR_RRF_R0RR2 }, 1222 - { "alrk", 0xfa, INSTR_RRF_R0RR2 }, 1223 - { "slrk", 0xfb, INSTR_RRF_R0RR2 }, 1224 - { "kmac", 0x1e, INSTR_RRE_RR }, 1225 - { "lrvr", 0x1f, INSTR_RRE_RR }, 1226 - { "km", 0x2e, INSTR_RRE_RR }, 1227 - { "kmc", 0x2f, INSTR_RRE_RR }, 1228 - { "kimd", 0x3e, INSTR_RRE_RR }, 1229 - { "klmd", 0x3f, INSTR_RRE_RR }, 1230 - { "epsw", 0x8d, INSTR_RRE_RR }, 1231 - { "trtt", 0x90, INSTR_RRF_M0RR }, 1232 - { "trto", 0x91, INSTR_RRF_M0RR }, 1233 - { "trot", 0x92, INSTR_RRF_M0RR }, 1234 - { "troo", 0x93, INSTR_RRF_M0RR }, 
1235 - { "mlr", 0x96, INSTR_RRE_RR }, 1236 - { "dlr", 0x97, INSTR_RRE_RR }, 1237 - { "alcr", 0x98, INSTR_RRE_RR }, 1238 - { "slbr", 0x99, INSTR_RRE_RR }, 1239 - { "", 0, INSTR_INVALID } 1240 - }; 1241 - 1242 - static struct s390_insn opcode_c0[] = { 1243 - { "lgfi", 0x01, INSTR_RIL_RI }, 1244 - { "xihf", 0x06, INSTR_RIL_RU }, 1245 - { "xilf", 0x07, INSTR_RIL_RU }, 1246 - { "iihf", 0x08, INSTR_RIL_RU }, 1247 - { "iilf", 0x09, INSTR_RIL_RU }, 1248 - { "nihf", 0x0a, INSTR_RIL_RU }, 1249 - { "nilf", 0x0b, INSTR_RIL_RU }, 1250 - { "oihf", 0x0c, INSTR_RIL_RU }, 1251 - { "oilf", 0x0d, INSTR_RIL_RU }, 1252 - { "llihf", 0x0e, INSTR_RIL_RU }, 1253 - { "llilf", 0x0f, INSTR_RIL_RU }, 1254 - { "larl", 0x00, INSTR_RIL_RP }, 1255 - { "brcl", 0x04, INSTR_RIL_UP }, 1256 - { "brasl", 0x05, INSTR_RIL_RP }, 1257 - { "", 0, INSTR_INVALID } 1258 - }; 1259 - 1260 - static struct s390_insn opcode_c2[] = { 1261 - { "msgfi", 0x00, INSTR_RIL_RI }, 1262 - { "msfi", 0x01, INSTR_RIL_RI }, 1263 - { "slgfi", 0x04, INSTR_RIL_RU }, 1264 - { "slfi", 0x05, INSTR_RIL_RU }, 1265 - { "agfi", 0x08, INSTR_RIL_RI }, 1266 - { "afi", 0x09, INSTR_RIL_RI }, 1267 - { "algfi", 0x0a, INSTR_RIL_RU }, 1268 - { "alfi", 0x0b, INSTR_RIL_RU }, 1269 - { "cgfi", 0x0c, INSTR_RIL_RI }, 1270 - { "cfi", 0x0d, INSTR_RIL_RI }, 1271 - { "clgfi", 0x0e, INSTR_RIL_RU }, 1272 - { "clfi", 0x0f, INSTR_RIL_RU }, 1273 - { "", 0, INSTR_INVALID } 1274 - }; 1275 - 1276 - static struct s390_insn opcode_c4[] = { 1277 - { "llhrl", 0x02, INSTR_RIL_RP }, 1278 - { "lghrl", 0x04, INSTR_RIL_RP }, 1279 - { "lhrl", 0x05, INSTR_RIL_RP }, 1280 - { { 0, LONG_INSN_LLGHRL }, 0x06, INSTR_RIL_RP }, 1281 - { "sthrl", 0x07, INSTR_RIL_RP }, 1282 - { "lgrl", 0x08, INSTR_RIL_RP }, 1283 - { "stgrl", 0x0b, INSTR_RIL_RP }, 1284 - { "lgfrl", 0x0c, INSTR_RIL_RP }, 1285 - { "lrl", 0x0d, INSTR_RIL_RP }, 1286 - { { 0, LONG_INSN_LLGFRL }, 0x0e, INSTR_RIL_RP }, 1287 - { "strl", 0x0f, INSTR_RIL_RP }, 1288 - { "", 0, INSTR_INVALID } 1289 - }; 1290 - 1291 - static struct 
s390_insn opcode_c6[] = { 1292 - { "exrl", 0x00, INSTR_RIL_RP }, 1293 - { "pfdrl", 0x02, INSTR_RIL_UP }, 1294 - { "cghrl", 0x04, INSTR_RIL_RP }, 1295 - { "chrl", 0x05, INSTR_RIL_RP }, 1296 - { { 0, LONG_INSN_CLGHRL }, 0x06, INSTR_RIL_RP }, 1297 - { "clhrl", 0x07, INSTR_RIL_RP }, 1298 - { "cgrl", 0x08, INSTR_RIL_RP }, 1299 - { "clgrl", 0x0a, INSTR_RIL_RP }, 1300 - { "cgfrl", 0x0c, INSTR_RIL_RP }, 1301 - { "crl", 0x0d, INSTR_RIL_RP }, 1302 - { { 0, LONG_INSN_CLGFRL }, 0x0e, INSTR_RIL_RP }, 1303 - { "clrl", 0x0f, INSTR_RIL_RP }, 1304 - { "", 0, INSTR_INVALID } 1305 - }; 1306 - 1307 - static struct s390_insn opcode_c8[] = { 1308 - { "mvcos", 0x00, INSTR_SSF_RRDRD }, 1309 - { "ectg", 0x01, INSTR_SSF_RRDRD }, 1310 - { "csst", 0x02, INSTR_SSF_RRDRD }, 1311 - { "lpd", 0x04, INSTR_SSF_RRDRD2 }, 1312 - { "lpdg", 0x05, INSTR_SSF_RRDRD2 }, 1313 - { "", 0, INSTR_INVALID } 1314 - }; 1315 - 1316 - static struct s390_insn opcode_cc[] = { 1317 - { "brcth", 0x06, INSTR_RIL_RP }, 1318 - { "aih", 0x08, INSTR_RIL_RI }, 1319 - { "alsih", 0x0a, INSTR_RIL_RI }, 1320 - { { 0, LONG_INSN_ALSIHN }, 0x0b, INSTR_RIL_RI }, 1321 - { "cih", 0x0d, INSTR_RIL_RI }, 1322 - { "clih", 0x0f, INSTR_RIL_RI }, 1323 - { "", 0, INSTR_INVALID } 1324 - }; 1325 - 1326 - static struct s390_insn opcode_e3[] = { 1327 - { "ltg", 0x02, INSTR_RXY_RRRD }, 1328 - { "lrag", 0x03, INSTR_RXY_RRRD }, 1329 - { "lg", 0x04, INSTR_RXY_RRRD }, 1330 - { "cvby", 0x06, INSTR_RXY_RRRD }, 1331 - { "ag", 0x08, INSTR_RXY_RRRD }, 1332 - { "sg", 0x09, INSTR_RXY_RRRD }, 1333 - { "alg", 0x0a, INSTR_RXY_RRRD }, 1334 - { "slg", 0x0b, INSTR_RXY_RRRD }, 1335 - { "msg", 0x0c, INSTR_RXY_RRRD }, 1336 - { "dsg", 0x0d, INSTR_RXY_RRRD }, 1337 - { "cvbg", 0x0e, INSTR_RXY_RRRD }, 1338 - { "lrvg", 0x0f, INSTR_RXY_RRRD }, 1339 - { "lt", 0x12, INSTR_RXY_RRRD }, 1340 - { "lray", 0x13, INSTR_RXY_RRRD }, 1341 - { "lgf", 0x14, INSTR_RXY_RRRD }, 1342 - { "lgh", 0x15, INSTR_RXY_RRRD }, 1343 - { "llgf", 0x16, INSTR_RXY_RRRD }, 1344 - { "llgt", 0x17, 
INSTR_RXY_RRRD }, 1345 - { "agf", 0x18, INSTR_RXY_RRRD }, 1346 - { "sgf", 0x19, INSTR_RXY_RRRD }, 1347 - { "algf", 0x1a, INSTR_RXY_RRRD }, 1348 - { "slgf", 0x1b, INSTR_RXY_RRRD }, 1349 - { "msgf", 0x1c, INSTR_RXY_RRRD }, 1350 - { "dsgf", 0x1d, INSTR_RXY_RRRD }, 1351 - { "cg", 0x20, INSTR_RXY_RRRD }, 1352 - { "clg", 0x21, INSTR_RXY_RRRD }, 1353 - { "stg", 0x24, INSTR_RXY_RRRD }, 1354 - { "ntstg", 0x25, INSTR_RXY_RRRD }, 1355 - { "cvdy", 0x26, INSTR_RXY_RRRD }, 1356 - { "cvdg", 0x2e, INSTR_RXY_RRRD }, 1357 - { "strvg", 0x2f, INSTR_RXY_RRRD }, 1358 - { "cgf", 0x30, INSTR_RXY_RRRD }, 1359 - { "clgf", 0x31, INSTR_RXY_RRRD }, 1360 - { "ltgf", 0x32, INSTR_RXY_RRRD }, 1361 - { "cgh", 0x34, INSTR_RXY_RRRD }, 1362 - { "pfd", 0x36, INSTR_RXY_URRD }, 1363 - { "strvh", 0x3f, INSTR_RXY_RRRD }, 1364 - { "bctg", 0x46, INSTR_RXY_RRRD }, 1365 - { "sty", 0x50, INSTR_RXY_RRRD }, 1366 - { "msy", 0x51, INSTR_RXY_RRRD }, 1367 - { "ny", 0x54, INSTR_RXY_RRRD }, 1368 - { "cly", 0x55, INSTR_RXY_RRRD }, 1369 - { "oy", 0x56, INSTR_RXY_RRRD }, 1370 - { "xy", 0x57, INSTR_RXY_RRRD }, 1371 - { "ly", 0x58, INSTR_RXY_RRRD }, 1372 - { "cy", 0x59, INSTR_RXY_RRRD }, 1373 - { "ay", 0x5a, INSTR_RXY_RRRD }, 1374 - { "sy", 0x5b, INSTR_RXY_RRRD }, 1375 - { "mfy", 0x5c, INSTR_RXY_RRRD }, 1376 - { "aly", 0x5e, INSTR_RXY_RRRD }, 1377 - { "sly", 0x5f, INSTR_RXY_RRRD }, 1378 - { "sthy", 0x70, INSTR_RXY_RRRD }, 1379 - { "lay", 0x71, INSTR_RXY_RRRD }, 1380 - { "stcy", 0x72, INSTR_RXY_RRRD }, 1381 - { "icy", 0x73, INSTR_RXY_RRRD }, 1382 - { "laey", 0x75, INSTR_RXY_RRRD }, 1383 - { "lb", 0x76, INSTR_RXY_RRRD }, 1384 - { "lgb", 0x77, INSTR_RXY_RRRD }, 1385 - { "lhy", 0x78, INSTR_RXY_RRRD }, 1386 - { "chy", 0x79, INSTR_RXY_RRRD }, 1387 - { "ahy", 0x7a, INSTR_RXY_RRRD }, 1388 - { "shy", 0x7b, INSTR_RXY_RRRD }, 1389 - { "mhy", 0x7c, INSTR_RXY_RRRD }, 1390 - { "ng", 0x80, INSTR_RXY_RRRD }, 1391 - { "og", 0x81, INSTR_RXY_RRRD }, 1392 - { "xg", 0x82, INSTR_RXY_RRRD }, 1393 - { "lgat", 0x85, INSTR_RXY_RRRD }, 1394 - { 
"mlg", 0x86, INSTR_RXY_RRRD }, 1395 - { "dlg", 0x87, INSTR_RXY_RRRD }, 1396 - { "alcg", 0x88, INSTR_RXY_RRRD }, 1397 - { "slbg", 0x89, INSTR_RXY_RRRD }, 1398 - { "stpq", 0x8e, INSTR_RXY_RRRD }, 1399 - { "lpq", 0x8f, INSTR_RXY_RRRD }, 1400 - { "llgc", 0x90, INSTR_RXY_RRRD }, 1401 - { "llgh", 0x91, INSTR_RXY_RRRD }, 1402 - { "llc", 0x94, INSTR_RXY_RRRD }, 1403 - { "llh", 0x95, INSTR_RXY_RRRD }, 1404 - { { 0, LONG_INSN_LLGTAT }, 0x9c, INSTR_RXY_RRRD }, 1405 - { { 0, LONG_INSN_LLGFAT }, 0x9d, INSTR_RXY_RRRD }, 1406 - { "lat", 0x9f, INSTR_RXY_RRRD }, 1407 - { "lbh", 0xc0, INSTR_RXY_RRRD }, 1408 - { "llch", 0xc2, INSTR_RXY_RRRD }, 1409 - { "stch", 0xc3, INSTR_RXY_RRRD }, 1410 - { "lhh", 0xc4, INSTR_RXY_RRRD }, 1411 - { "llhh", 0xc6, INSTR_RXY_RRRD }, 1412 - { "sthh", 0xc7, INSTR_RXY_RRRD }, 1413 - { "lfhat", 0xc8, INSTR_RXY_RRRD }, 1414 - { "lfh", 0xca, INSTR_RXY_RRRD }, 1415 - { "stfh", 0xcb, INSTR_RXY_RRRD }, 1416 - { "chf", 0xcd, INSTR_RXY_RRRD }, 1417 - { "clhf", 0xcf, INSTR_RXY_RRRD }, 1418 - { { 0, LONG_INSN_MPCIFC }, 0xd0, INSTR_RXY_RRRD }, 1419 - { { 0, LONG_INSN_STPCIFC }, 0xd4, INSTR_RXY_RRRD }, 1420 - { "lrv", 0x1e, INSTR_RXY_RRRD }, 1421 - { "lrvh", 0x1f, INSTR_RXY_RRRD }, 1422 - { "strv", 0x3e, INSTR_RXY_RRRD }, 1423 - { "ml", 0x96, INSTR_RXY_RRRD }, 1424 - { "dl", 0x97, INSTR_RXY_RRRD }, 1425 - { "alc", 0x98, INSTR_RXY_RRRD }, 1426 - { "slb", 0x99, INSTR_RXY_RRRD }, 1427 - { "", 0, INSTR_INVALID } 1428 - }; 1429 - 1430 - static struct s390_insn opcode_e5[] = { 1431 - { "strag", 0x02, INSTR_SSE_RDRD }, 1432 - { "mvhhi", 0x44, INSTR_SIL_RDI }, 1433 - { "mvghi", 0x48, INSTR_SIL_RDI }, 1434 - { "mvhi", 0x4c, INSTR_SIL_RDI }, 1435 - { "chhsi", 0x54, INSTR_SIL_RDI }, 1436 - { { 0, LONG_INSN_CLHHSI }, 0x55, INSTR_SIL_RDU }, 1437 - { "cghsi", 0x58, INSTR_SIL_RDI }, 1438 - { { 0, LONG_INSN_CLGHSI }, 0x59, INSTR_SIL_RDU }, 1439 - { "chsi", 0x5c, INSTR_SIL_RDI }, 1440 - { { 0, LONG_INSN_CLFHSI }, 0x5d, INSTR_SIL_RDU }, 1441 - { { 0, LONG_INSN_TBEGIN }, 0x60, 
INSTR_SIL_RDU }, 1442 - { { 0, LONG_INSN_TBEGINC }, 0x61, INSTR_SIL_RDU }, 1443 - { "lasp", 0x00, INSTR_SSE_RDRD }, 1444 - { "tprot", 0x01, INSTR_SSE_RDRD }, 1445 - { "mvcsk", 0x0e, INSTR_SSE_RDRD }, 1446 - { "mvcdk", 0x0f, INSTR_SSE_RDRD }, 1447 - { "", 0, INSTR_INVALID } 1448 - }; 1449 - 1450 - static struct s390_insn opcode_e7[] = { 1451 - { "lcbb", 0x27, INSTR_RXE_RRRDM }, 1452 - { "vgef", 0x13, INSTR_VRV_VVRDM }, 1453 - { "vgeg", 0x12, INSTR_VRV_VVRDM }, 1454 - { "vgbm", 0x44, INSTR_VRI_V0I0 }, 1455 - { "vgm", 0x46, INSTR_VRI_V0IIM }, 1456 - { "vl", 0x06, INSTR_VRX_VRRD0 }, 1457 - { "vlr", 0x56, INSTR_VRR_VV00000 }, 1458 - { "vlrp", 0x05, INSTR_VRX_VRRDM }, 1459 - { "vleb", 0x00, INSTR_VRX_VRRDM }, 1460 - { "vleh", 0x01, INSTR_VRX_VRRDM }, 1461 - { "vlef", 0x03, INSTR_VRX_VRRDM }, 1462 - { "vleg", 0x02, INSTR_VRX_VRRDM }, 1463 - { "vleib", 0x40, INSTR_VRI_V0IM }, 1464 - { "vleih", 0x41, INSTR_VRI_V0IM }, 1465 - { "vleif", 0x43, INSTR_VRI_V0IM }, 1466 - { "vleig", 0x42, INSTR_VRI_V0IM }, 1467 - { "vlgv", 0x21, INSTR_VRS_RVRDM }, 1468 - { "vllez", 0x04, INSTR_VRX_VRRDM }, 1469 - { "vlm", 0x36, INSTR_VRS_VVRD0 }, 1470 - { "vlbb", 0x07, INSTR_VRX_VRRDM }, 1471 - { "vlvg", 0x22, INSTR_VRS_VRRDM }, 1472 - { "vlvgp", 0x62, INSTR_VRR_VRR0000 }, 1473 - { "vll", 0x37, INSTR_VRS_VRRD0 }, 1474 - { "vmrh", 0x61, INSTR_VRR_VVV000M }, 1475 - { "vmrl", 0x60, INSTR_VRR_VVV000M }, 1476 - { "vpk", 0x94, INSTR_VRR_VVV000M }, 1477 - { "vpks", 0x97, INSTR_VRR_VVV0M0M }, 1478 - { "vpkls", 0x95, INSTR_VRR_VVV0M0M }, 1479 - { "vperm", 0x8c, INSTR_VRR_VVV000V }, 1480 - { "vpdi", 0x84, INSTR_VRR_VVV000M }, 1481 - { "vrep", 0x4d, INSTR_VRI_VVIM }, 1482 - { "vrepi", 0x45, INSTR_VRI_V0IM }, 1483 - { "vscef", 0x1b, INSTR_VRV_VWRDM }, 1484 - { "vsceg", 0x1a, INSTR_VRV_VWRDM }, 1485 - { "vsel", 0x8d, INSTR_VRR_VVV000V }, 1486 - { "vseg", 0x5f, INSTR_VRR_VV0000M }, 1487 - { "vst", 0x0e, INSTR_VRX_VRRD0 }, 1488 - { "vsteb", 0x08, INSTR_VRX_VRRDM }, 1489 - { "vsteh", 0x09, INSTR_VRX_VRRDM }, 
1490 - { "vstef", 0x0b, INSTR_VRX_VRRDM }, 1491 - { "vsteg", 0x0a, INSTR_VRX_VRRDM }, 1492 - { "vstm", 0x3e, INSTR_VRS_VVRD0 }, 1493 - { "vstl", 0x3f, INSTR_VRS_VRRD0 }, 1494 - { "vuph", 0xd7, INSTR_VRR_VV0000M }, 1495 - { "vuplh", 0xd5, INSTR_VRR_VV0000M }, 1496 - { "vupl", 0xd6, INSTR_VRR_VV0000M }, 1497 - { "vupll", 0xd4, INSTR_VRR_VV0000M }, 1498 - { "va", 0xf3, INSTR_VRR_VVV000M }, 1499 - { "vacc", 0xf1, INSTR_VRR_VVV000M }, 1500 - { "vac", 0xbb, INSTR_VRR_VVVM00V }, 1501 - { "vaccc", 0xb9, INSTR_VRR_VVVM00V }, 1502 - { "vn", 0x68, INSTR_VRR_VVV0000 }, 1503 - { "vnc", 0x69, INSTR_VRR_VVV0000 }, 1504 - { "vavg", 0xf2, INSTR_VRR_VVV000M }, 1505 - { "vavgl", 0xf0, INSTR_VRR_VVV000M }, 1506 - { "vcksm", 0x66, INSTR_VRR_VVV0000 }, 1507 - { "vec", 0xdb, INSTR_VRR_VV0000M }, 1508 - { "vecl", 0xd9, INSTR_VRR_VV0000M }, 1509 - { "vceq", 0xf8, INSTR_VRR_VVV0M0M }, 1510 - { "vch", 0xfb, INSTR_VRR_VVV0M0M }, 1511 - { "vchl", 0xf9, INSTR_VRR_VVV0M0M }, 1512 - { "vclz", 0x53, INSTR_VRR_VV0000M }, 1513 - { "vctz", 0x52, INSTR_VRR_VV0000M }, 1514 - { "vx", 0x6d, INSTR_VRR_VVV0000 }, 1515 - { "vgfm", 0xb4, INSTR_VRR_VVV000M }, 1516 - { "vgfma", 0xbc, INSTR_VRR_VVVM00V }, 1517 - { "vlc", 0xde, INSTR_VRR_VV0000M }, 1518 - { "vlp", 0xdf, INSTR_VRR_VV0000M }, 1519 - { "vmx", 0xff, INSTR_VRR_VVV000M }, 1520 - { "vmxl", 0xfd, INSTR_VRR_VVV000M }, 1521 - { "vmn", 0xfe, INSTR_VRR_VVV000M }, 1522 - { "vmnl", 0xfc, INSTR_VRR_VVV000M }, 1523 - { "vmal", 0xaa, INSTR_VRR_VVVM00V }, 1524 - { "vmae", 0xae, INSTR_VRR_VVVM00V }, 1525 - { "vmale", 0xac, INSTR_VRR_VVVM00V }, 1526 - { "vmah", 0xab, INSTR_VRR_VVVM00V }, 1527 - { "vmalh", 0xa9, INSTR_VRR_VVVM00V }, 1528 - { "vmao", 0xaf, INSTR_VRR_VVVM00V }, 1529 - { "vmalo", 0xad, INSTR_VRR_VVVM00V }, 1530 - { "vmh", 0xa3, INSTR_VRR_VVV000M }, 1531 - { "vmlh", 0xa1, INSTR_VRR_VVV000M }, 1532 - { "vml", 0xa2, INSTR_VRR_VVV000M }, 1533 - { "vme", 0xa6, INSTR_VRR_VVV000M }, 1534 - { "vmle", 0xa4, INSTR_VRR_VVV000M }, 1535 - { "vmo", 0xa7, 
INSTR_VRR_VVV000M }, 1536 - { "vmlo", 0xa5, INSTR_VRR_VVV000M }, 1537 - { "vno", 0x6b, INSTR_VRR_VVV0000 }, 1538 - { "vo", 0x6a, INSTR_VRR_VVV0000 }, 1539 - { { 0, LONG_INSN_VPOPCT }, 0x50, INSTR_VRR_VV0000M }, 1540 - { { 0, LONG_INSN_VERLLV }, 0x73, INSTR_VRR_VVV000M }, 1541 - { "verll", 0x33, INSTR_VRS_VVRDM }, 1542 - { "verim", 0x72, INSTR_VRI_VVV0IM }, 1543 - { "veslv", 0x70, INSTR_VRR_VVV000M }, 1544 - { "vesl", 0x30, INSTR_VRS_VVRDM }, 1545 - { { 0, LONG_INSN_VESRAV }, 0x7a, INSTR_VRR_VVV000M }, 1546 - { "vesra", 0x3a, INSTR_VRS_VVRDM }, 1547 - { { 0, LONG_INSN_VESRLV }, 0x78, INSTR_VRR_VVV000M }, 1548 - { "vesrl", 0x38, INSTR_VRS_VVRDM }, 1549 - { "vsl", 0x74, INSTR_VRR_VVV0000 }, 1550 - { "vslb", 0x75, INSTR_VRR_VVV0000 }, 1551 - { "vsldb", 0x77, INSTR_VRI_VVV0I0 }, 1552 - { "vsra", 0x7e, INSTR_VRR_VVV0000 }, 1553 - { "vsrab", 0x7f, INSTR_VRR_VVV0000 }, 1554 - { "vsrl", 0x7c, INSTR_VRR_VVV0000 }, 1555 - { "vsrlb", 0x7d, INSTR_VRR_VVV0000 }, 1556 - { "vs", 0xf7, INSTR_VRR_VVV000M }, 1557 - { "vscb", 0xf5, INSTR_VRR_VVV000M }, 1558 - { "vsb", 0xbf, INSTR_VRR_VVVM00V }, 1559 - { { 0, LONG_INSN_VSBCBI }, 0xbd, INSTR_VRR_VVVM00V }, 1560 - { "vsumg", 0x65, INSTR_VRR_VVV000M }, 1561 - { "vsumq", 0x67, INSTR_VRR_VVV000M }, 1562 - { "vsum", 0x64, INSTR_VRR_VVV000M }, 1563 - { "vtm", 0xd8, INSTR_VRR_VV00000 }, 1564 - { "vfae", 0x82, INSTR_VRR_VVV0M0M }, 1565 - { "vfee", 0x80, INSTR_VRR_VVV0M0M }, 1566 - { "vfene", 0x81, INSTR_VRR_VVV0M0M }, 1567 - { "vistr", 0x5c, INSTR_VRR_VV00M0M }, 1568 - { "vstrc", 0x8a, INSTR_VRR_VVVMM0V }, 1569 - { "vfa", 0xe3, INSTR_VRR_VVV00MM }, 1570 - { "wfc", 0xcb, INSTR_VRR_VV000MM }, 1571 - { "wfk", 0xca, INSTR_VRR_VV000MM }, 1572 - { "vfce", 0xe8, INSTR_VRR_VVV0MMM }, 1573 - { "vfch", 0xeb, INSTR_VRR_VVV0MMM }, 1574 - { "vfche", 0xea, INSTR_VRR_VVV0MMM }, 1575 - { "vcdg", 0xc3, INSTR_VRR_VV00MMM }, 1576 - { "vcdlg", 0xc1, INSTR_VRR_VV00MMM }, 1577 - { "vcgd", 0xc2, INSTR_VRR_VV00MMM }, 1578 - { "vclgd", 0xc0, INSTR_VRR_VV00MMM }, 1579 - 
{ "vfd", 0xe5, INSTR_VRR_VVV00MM }, 1580 - { "vfi", 0xc7, INSTR_VRR_VV00MMM }, 1581 - { "vlde", 0xc4, INSTR_VRR_VV000MM }, 1582 - { "vled", 0xc5, INSTR_VRR_VV00MMM }, 1583 - { "vfm", 0xe7, INSTR_VRR_VVV00MM }, 1584 - { "vfma", 0x8f, INSTR_VRR_VVVM0MV }, 1585 - { "vfms", 0x8e, INSTR_VRR_VVVM0MV }, 1586 - { "vfpso", 0xcc, INSTR_VRR_VV00MMM }, 1587 - { "vfsq", 0xce, INSTR_VRR_VV000MM }, 1588 - { "vfs", 0xe2, INSTR_VRR_VVV00MM }, 1589 - { "vftci", 0x4a, INSTR_VRI_VVIMM }, 1590 - }; 1591 - 1592 - static struct s390_insn opcode_eb[] = { 1593 - { "lmg", 0x04, INSTR_RSY_RRRD }, 1594 - { "srag", 0x0a, INSTR_RSY_RRRD }, 1595 - { "slag", 0x0b, INSTR_RSY_RRRD }, 1596 - { "srlg", 0x0c, INSTR_RSY_RRRD }, 1597 - { "sllg", 0x0d, INSTR_RSY_RRRD }, 1598 - { "tracg", 0x0f, INSTR_RSY_RRRD }, 1599 - { "csy", 0x14, INSTR_RSY_RRRD }, 1600 - { "rllg", 0x1c, INSTR_RSY_RRRD }, 1601 - { "clmh", 0x20, INSTR_RSY_RURD }, 1602 - { "clmy", 0x21, INSTR_RSY_RURD }, 1603 - { "clt", 0x23, INSTR_RSY_RURD }, 1604 - { "stmg", 0x24, INSTR_RSY_RRRD }, 1605 - { "stctg", 0x25, INSTR_RSY_CCRD }, 1606 - { "stmh", 0x26, INSTR_RSY_RRRD }, 1607 - { "clgt", 0x2b, INSTR_RSY_RURD }, 1608 - { "stcmh", 0x2c, INSTR_RSY_RURD }, 1609 - { "stcmy", 0x2d, INSTR_RSY_RURD }, 1610 - { "lctlg", 0x2f, INSTR_RSY_CCRD }, 1611 - { "csg", 0x30, INSTR_RSY_RRRD }, 1612 - { "cdsy", 0x31, INSTR_RSY_RRRD }, 1613 - { "cdsg", 0x3e, INSTR_RSY_RRRD }, 1614 - { "bxhg", 0x44, INSTR_RSY_RRRD }, 1615 - { "bxleg", 0x45, INSTR_RSY_RRRD }, 1616 - { "ecag", 0x4c, INSTR_RSY_RRRD }, 1617 - { "tmy", 0x51, INSTR_SIY_URD }, 1618 - { "mviy", 0x52, INSTR_SIY_URD }, 1619 - { "niy", 0x54, INSTR_SIY_URD }, 1620 - { "cliy", 0x55, INSTR_SIY_URD }, 1621 - { "oiy", 0x56, INSTR_SIY_URD }, 1622 - { "xiy", 0x57, INSTR_SIY_URD }, 1623 - { "asi", 0x6a, INSTR_SIY_IRD }, 1624 - { "alsi", 0x6e, INSTR_SIY_IRD }, 1625 - { "agsi", 0x7a, INSTR_SIY_IRD }, 1626 - { "algsi", 0x7e, INSTR_SIY_IRD }, 1627 - { "icmh", 0x80, INSTR_RSY_RURD }, 1628 - { "icmy", 0x81, INSTR_RSY_RURD 
}, 1629 - { "clclu", 0x8f, INSTR_RSY_RRRD }, 1630 - { "stmy", 0x90, INSTR_RSY_RRRD }, 1631 - { "lmh", 0x96, INSTR_RSY_RRRD }, 1632 - { "lmy", 0x98, INSTR_RSY_RRRD }, 1633 - { "lamy", 0x9a, INSTR_RSY_AARD }, 1634 - { "stamy", 0x9b, INSTR_RSY_AARD }, 1635 - { { 0, LONG_INSN_PCISTB }, 0xd0, INSTR_RSY_RRRD }, 1636 - { "sic", 0xd1, INSTR_RSY_RRRD }, 1637 - { "srak", 0xdc, INSTR_RSY_RRRD }, 1638 - { "slak", 0xdd, INSTR_RSY_RRRD }, 1639 - { "srlk", 0xde, INSTR_RSY_RRRD }, 1640 - { "sllk", 0xdf, INSTR_RSY_RRRD }, 1641 - { "locg", 0xe2, INSTR_RSY_RDRM }, 1642 - { "stocg", 0xe3, INSTR_RSY_RDRM }, 1643 - { "lang", 0xe4, INSTR_RSY_RRRD }, 1644 - { "laog", 0xe6, INSTR_RSY_RRRD }, 1645 - { "laxg", 0xe7, INSTR_RSY_RRRD }, 1646 - { "laag", 0xe8, INSTR_RSY_RRRD }, 1647 - { "laalg", 0xea, INSTR_RSY_RRRD }, 1648 - { "loc", 0xf2, INSTR_RSY_RDRM }, 1649 - { "stoc", 0xf3, INSTR_RSY_RDRM }, 1650 - { "lan", 0xf4, INSTR_RSY_RRRD }, 1651 - { "lao", 0xf6, INSTR_RSY_RRRD }, 1652 - { "lax", 0xf7, INSTR_RSY_RRRD }, 1653 - { "laa", 0xf8, INSTR_RSY_RRRD }, 1654 - { "laal", 0xfa, INSTR_RSY_RRRD }, 1655 - { "lric", 0x60, INSTR_RSY_RDRM }, 1656 - { "stric", 0x61, INSTR_RSY_RDRM }, 1657 - { "mric", 0x62, INSTR_RSY_RDRM }, 1658 - { { 0, LONG_INSN_STCCTM }, 0x17, INSTR_RSY_RMRD }, 1659 - { "rll", 0x1d, INSTR_RSY_RRRD }, 1660 - { "mvclu", 0x8e, INSTR_RSY_RRRD }, 1661 - { "tp", 0xc0, INSTR_RSL_R0RD }, 1662 - { "", 0, INSTR_INVALID } 1663 - }; 1664 - 1665 - static struct s390_insn opcode_ec[] = { 1666 - { "brxhg", 0x44, INSTR_RIE_RRP }, 1667 - { "brxlg", 0x45, INSTR_RIE_RRP }, 1668 - { { 0, LONG_INSN_RISBLG }, 0x51, INSTR_RIE_RRUUU }, 1669 - { "rnsbg", 0x54, INSTR_RIE_RRUUU }, 1670 - { "risbg", 0x55, INSTR_RIE_RRUUU }, 1671 - { "rosbg", 0x56, INSTR_RIE_RRUUU }, 1672 - { "rxsbg", 0x57, INSTR_RIE_RRUUU }, 1673 - { { 0, LONG_INSN_RISBGN }, 0x59, INSTR_RIE_RRUUU }, 1674 - { { 0, LONG_INSN_RISBHG }, 0x5D, INSTR_RIE_RRUUU }, 1675 - { "cgrj", 0x64, INSTR_RIE_RRPU }, 1676 - { "clgrj", 0x65, INSTR_RIE_RRPU }, 1677 
- { "cgit", 0x70, INSTR_RIE_R0IU }, 1678 - { "clgit", 0x71, INSTR_RIE_R0UU }, 1679 - { "cit", 0x72, INSTR_RIE_R0IU }, 1680 - { "clfit", 0x73, INSTR_RIE_R0UU }, 1681 - { "crj", 0x76, INSTR_RIE_RRPU }, 1682 - { "clrj", 0x77, INSTR_RIE_RRPU }, 1683 - { "cgij", 0x7c, INSTR_RIE_RUPI }, 1684 - { "clgij", 0x7d, INSTR_RIE_RUPU }, 1685 - { "cij", 0x7e, INSTR_RIE_RUPI }, 1686 - { "clij", 0x7f, INSTR_RIE_RUPU }, 1687 - { "ahik", 0xd8, INSTR_RIE_RRI0 }, 1688 - { "aghik", 0xd9, INSTR_RIE_RRI0 }, 1689 - { { 0, LONG_INSN_ALHSIK }, 0xda, INSTR_RIE_RRI0 }, 1690 - { { 0, LONG_INSN_ALGHSIK }, 0xdb, INSTR_RIE_RRI0 }, 1691 - { "cgrb", 0xe4, INSTR_RRS_RRRDU }, 1692 - { "clgrb", 0xe5, INSTR_RRS_RRRDU }, 1693 - { "crb", 0xf6, INSTR_RRS_RRRDU }, 1694 - { "clrb", 0xf7, INSTR_RRS_RRRDU }, 1695 - { "cgib", 0xfc, INSTR_RIS_RURDI }, 1696 - { "clgib", 0xfd, INSTR_RIS_RURDU }, 1697 - { "cib", 0xfe, INSTR_RIS_RURDI }, 1698 - { "clib", 0xff, INSTR_RIS_RURDU }, 1699 - { "", 0, INSTR_INVALID } 1700 - }; 1701 - 1702 - static struct s390_insn opcode_ed[] = { 1703 - { "mayl", 0x38, INSTR_RXF_FRRDF }, 1704 - { "myl", 0x39, INSTR_RXF_FRRDF }, 1705 - { "may", 0x3a, INSTR_RXF_FRRDF }, 1706 - { "my", 0x3b, INSTR_RXF_FRRDF }, 1707 - { "mayh", 0x3c, INSTR_RXF_FRRDF }, 1708 - { "myh", 0x3d, INSTR_RXF_FRRDF }, 1709 - { "sldt", 0x40, INSTR_RXF_FRRDF }, 1710 - { "srdt", 0x41, INSTR_RXF_FRRDF }, 1711 - { "slxt", 0x48, INSTR_RXF_FRRDF }, 1712 - { "srxt", 0x49, INSTR_RXF_FRRDF }, 1713 - { "tdcet", 0x50, INSTR_RXE_FRRD }, 1714 - { "tdget", 0x51, INSTR_RXE_FRRD }, 1715 - { "tdcdt", 0x54, INSTR_RXE_FRRD }, 1716 - { "tdgdt", 0x55, INSTR_RXE_FRRD }, 1717 - { "tdcxt", 0x58, INSTR_RXE_FRRD }, 1718 - { "tdgxt", 0x59, INSTR_RXE_FRRD }, 1719 - { "ley", 0x64, INSTR_RXY_FRRD }, 1720 - { "ldy", 0x65, INSTR_RXY_FRRD }, 1721 - { "stey", 0x66, INSTR_RXY_FRRD }, 1722 - { "stdy", 0x67, INSTR_RXY_FRRD }, 1723 - { "czdt", 0xa8, INSTR_RSL_LRDFU }, 1724 - { "czxt", 0xa9, INSTR_RSL_LRDFU }, 1725 - { "cdzt", 0xaa, INSTR_RSL_LRDFU }, 1726 - 
-	{ "cxzt", 0xab, INSTR_RSL_LRDFU },
-	{ "ldeb", 0x04, INSTR_RXE_FRRD },
-	{ "lxdb", 0x05, INSTR_RXE_FRRD },
-	{ "lxeb", 0x06, INSTR_RXE_FRRD },
-	{ "mxdb", 0x07, INSTR_RXE_FRRD },
-	{ "keb", 0x08, INSTR_RXE_FRRD },
-	{ "ceb", 0x09, INSTR_RXE_FRRD },
-	{ "aeb", 0x0a, INSTR_RXE_FRRD },
-	{ "seb", 0x0b, INSTR_RXE_FRRD },
-	{ "mdeb", 0x0c, INSTR_RXE_FRRD },
-	{ "deb", 0x0d, INSTR_RXE_FRRD },
-	{ "maeb", 0x0e, INSTR_RXF_FRRDF },
-	{ "mseb", 0x0f, INSTR_RXF_FRRDF },
-	{ "tceb", 0x10, INSTR_RXE_FRRD },
-	{ "tcdb", 0x11, INSTR_RXE_FRRD },
-	{ "tcxb", 0x12, INSTR_RXE_FRRD },
-	{ "sqeb", 0x14, INSTR_RXE_FRRD },
-	{ "sqdb", 0x15, INSTR_RXE_FRRD },
-	{ "meeb", 0x17, INSTR_RXE_FRRD },
-	{ "kdb", 0x18, INSTR_RXE_FRRD },
-	{ "cdb", 0x19, INSTR_RXE_FRRD },
-	{ "adb", 0x1a, INSTR_RXE_FRRD },
-	{ "sdb", 0x1b, INSTR_RXE_FRRD },
-	{ "mdb", 0x1c, INSTR_RXE_FRRD },
-	{ "ddb", 0x1d, INSTR_RXE_FRRD },
-	{ "madb", 0x1e, INSTR_RXF_FRRDF },
-	{ "msdb", 0x1f, INSTR_RXF_FRRDF },
-	{ "lde", 0x24, INSTR_RXE_FRRD },
-	{ "lxd", 0x25, INSTR_RXE_FRRD },
-	{ "lxe", 0x26, INSTR_RXE_FRRD },
-	{ "mae", 0x2e, INSTR_RXF_FRRDF },
-	{ "mse", 0x2f, INSTR_RXF_FRRDF },
-	{ "sqe", 0x34, INSTR_RXE_FRRD },
-	{ "sqd", 0x35, INSTR_RXE_FRRD },
-	{ "mee", 0x37, INSTR_RXE_FRRD },
-	{ "mad", 0x3e, INSTR_RXF_FRRDF },
-	{ "msd", 0x3f, INSTR_RXF_FRRDF },
-	{ "", 0, INSTR_INVALID }
-};
+static char long_insn_name[][7] = LONG_INSN_INITIALIZER;
+static struct s390_insn opcode[] = OPCODE_TABLE_INITIALIZER;
+static struct s390_opcode_offset opcode_offset[] = OPCODE_OFFSET_INITIALIZER;

 /* Extracts an operand value from an instruction.
  */
 static unsigned int extract_operand(unsigned char *code,
···

 struct s390_insn *find_insn(unsigned char *code)
 {
-	unsigned char opfrag = code[1];
-	unsigned char opmask;
-	struct s390_insn *table;
+	struct s390_opcode_offset *entry;
+	struct s390_insn *insn;
+	unsigned char opfrag;
+	int i;

-	switch (code[0]) {
-	case 0x01:
-		table = opcode_01;
-		break;
-	case 0xa5:
-		table = opcode_a5;
-		break;
-	case 0xa7:
-		table = opcode_a7;
-		break;
-	case 0xaa:
-		table = opcode_aa;
-		break;
-	case 0xb2:
-		table = opcode_b2;
-		break;
-	case 0xb3:
-		table = opcode_b3;
-		break;
-	case 0xb9:
-		table = opcode_b9;
-		break;
-	case 0xc0:
-		table = opcode_c0;
-		break;
-	case 0xc2:
-		table = opcode_c2;
-		break;
-	case 0xc4:
-		table = opcode_c4;
-		break;
-	case 0xc6:
-		table = opcode_c6;
-		break;
-	case 0xc8:
-		table = opcode_c8;
-		break;
-	case 0xcc:
-		table = opcode_cc;
-		break;
-	case 0xe3:
-		table = opcode_e3;
-		opfrag = code[5];
-		break;
-	case 0xe5:
-		table = opcode_e5;
-		break;
-	case 0xe7:
-		table = opcode_e7;
-		opfrag = code[5];
-		break;
-	case 0xeb:
-		table = opcode_eb;
-		opfrag = code[5];
-		break;
-	case 0xec:
-		table = opcode_ec;
-		opfrag = code[5];
-		break;
-	case 0xed:
-		table = opcode_ed;
-		opfrag = code[5];
-		break;
-	default:
-		table = opcode;
-		opfrag = code[0];
-		break;
+	for (i = 0; i < ARRAY_SIZE(opcode_offset); i++) {
+		entry = &opcode_offset[i];
+		if (entry->opcode == code[0] || entry->opcode == 0)
+			break;
 	}
-	while (table->format != INSTR_INVALID) {
-		opmask = formats[table->format][0];
-		if (table->opfrag == (opfrag & opmask))
-			return table;
-		table++;
+
+	opfrag = *(code + entry->byte) & entry->mask;
+
+	insn = &opcode[entry->offset];
+	for (i = 0; i < entry->count; i++) {
+		if (insn->opfrag == opfrag)
+			return insn;
+		insn++;
 	}
 	return NULL;
 }
-
-/**
- * insn_to_mnemonic - decode an s390 instruction
- * @instruction: instruction to decode
- * @buf: buffer to fill with mnemonic
- * @len: length of buffer
- *
- * Decode the instruction at @instruction and store the corresponding
- * mnemonic into @buf of length @len.
- * @buf is left unchanged if the instruction could not be decoded.
- * Returns:
- *	%0 on success, %-ENOENT if the instruction was not found.
- */
-int insn_to_mnemonic(unsigned char *instruction, char *buf, unsigned int len)
-{
-	struct s390_insn *insn;
-
-	insn = find_insn(instruction);
-	if (!insn)
-		return -ENOENT;
-	if (insn->name[0] == '\0')
-		snprintf(buf, len, "%s",
-			 long_insn_name[(int) insn->name[1]]);
-	else
-		snprintf(buf, len, "%.5s", insn->name);
-	return 0;
-}
-EXPORT_SYMBOL_GPL(insn_to_mnemonic);

 static int print_insn(char *buffer, unsigned char *code, unsigned long addr)
 {
···
 	ptr = buffer;
 	insn = find_insn(code);
 	if (insn) {
-		if (insn->name[0] == '\0')
-			ptr += sprintf(ptr, "%s\t",
-				       long_insn_name[(int) insn->name[1]]);
+		if (insn->zero == 0)
+			ptr += sprintf(ptr, "%.7s\t",
+				       long_insn_name[insn->offset]);
 		else
 			ptr += sprintf(ptr, "%.5s\t", insn->name);
 		/* Extract the operands. */
 		separator = 0;
-		for (ops = formats[insn->format] + 1, i = 0;
+		for (ops = formats[insn->format], i = 0;
 		     *ops != 0 && i < 6; ops++, i++) {
 			operand = operands + *ops;
 			value = extract_operand(code, operand);
+3 -142
arch/s390/kernel/early.c
···
 #include <asm/facility.h>
 #include "entry.h"

-/*
- * Create a Kernel NSS if the SAVESYS= parameter is defined
- */
-#define DEFSYS_CMD_SIZE		128
-#define SAVESYS_CMD_SIZE	32
-
-char kernel_nss_name[NSS_NAME_SIZE + 1];
-
 static void __init setup_boot_command_line(void);

 /*
···
 	*(__u64 *) &tod_clock_base[1] = TOD_UNIX_EPOCH;
 	S390_lowcore.last_update_clock = TOD_UNIX_EPOCH;
 }
-
-#ifdef CONFIG_SHARED_KERNEL
-int __init savesys_ipl_nss(char *cmd, const int cmdlen);
-
-asm(
-	"	.section .init.text,\"ax\",@progbits\n"
-	"	.align 4\n"
-	"	.type savesys_ipl_nss, @function\n"
-	"savesys_ipl_nss:\n"
-	"	stmg	6,15,48(15)\n"
-	"	lgr	14,3\n"
-	"	sam31\n"
-	"	diag	2,14,0x8\n"
-	"	sam64\n"
-	"	lgr	2,14\n"
-	"	lmg	6,15,48(15)\n"
-	"	br	14\n"
-	"	.size savesys_ipl_nss, .-savesys_ipl_nss\n"
-	"	.previous\n");
-
-static __initdata char upper_command_line[COMMAND_LINE_SIZE];
-
-static noinline __init void create_kernel_nss(void)
-{
-	unsigned int i, stext_pfn, eshared_pfn, end_pfn, min_size;
-#ifdef CONFIG_BLK_DEV_INITRD
-	unsigned int sinitrd_pfn, einitrd_pfn;
-#endif
-	int response;
-	int hlen;
-	size_t len;
-	char *savesys_ptr;
-	char defsys_cmd[DEFSYS_CMD_SIZE];
-	char savesys_cmd[SAVESYS_CMD_SIZE];
-
-	/* Do nothing if we are not running under VM */
-	if (!MACHINE_IS_VM)
-		return;
-
-	/* Convert COMMAND_LINE to upper case */
-	for (i = 0; i < strlen(boot_command_line); i++)
-		upper_command_line[i] = toupper(boot_command_line[i]);
-
-	savesys_ptr = strstr(upper_command_line, "SAVESYS=");
-
-	if (!savesys_ptr)
-		return;
-
-	savesys_ptr += 8;	/* Point to the beginning of the NSS name */
-	for (i = 0; i < NSS_NAME_SIZE; i++) {
-		if (savesys_ptr[i] == ' ' || savesys_ptr[i] == '\0')
-			break;
-		kernel_nss_name[i] = savesys_ptr[i];
-	}
-
-	stext_pfn = PFN_DOWN(__pa(&_stext));
-	eshared_pfn = PFN_DOWN(__pa(&_eshared));
-	end_pfn = PFN_UP(__pa(&_end));
-	min_size = end_pfn << 2;
-
-	hlen = snprintf(defsys_cmd, DEFSYS_CMD_SIZE,
-			"DEFSYS %s 00000-%.5X EW %.5X-%.5X SR %.5X-%.5X",
-			kernel_nss_name, stext_pfn - 1, stext_pfn,
-			eshared_pfn - 1, eshared_pfn, end_pfn);
-
-#ifdef CONFIG_BLK_DEV_INITRD
-	if (INITRD_START && INITRD_SIZE) {
-		sinitrd_pfn = PFN_DOWN(__pa(INITRD_START));
-		einitrd_pfn = PFN_UP(__pa(INITRD_START + INITRD_SIZE));
-		min_size = einitrd_pfn << 2;
-		hlen += snprintf(defsys_cmd + hlen, DEFSYS_CMD_SIZE - hlen,
-				 " EW %.5X-%.5X", sinitrd_pfn, einitrd_pfn);
-	}
-#endif
-
-	snprintf(defsys_cmd + hlen, DEFSYS_CMD_SIZE - hlen,
-		 " EW MINSIZE=%.7iK PARMREGS=0-13", min_size);
-	defsys_cmd[DEFSYS_CMD_SIZE - 1] = '\0';
-	snprintf(savesys_cmd, SAVESYS_CMD_SIZE, "SAVESYS %s \n IPL %s",
-		 kernel_nss_name, kernel_nss_name);
-	savesys_cmd[SAVESYS_CMD_SIZE - 1] = '\0';
-
-	__cpcmd(defsys_cmd, NULL, 0, &response);
-
-	if (response != 0) {
-		pr_err("Defining the Linux kernel NSS failed with rc=%d\n",
-			response);
-		kernel_nss_name[0] = '\0';
-		return;
-	}
-
-	len = strlen(savesys_cmd);
-	ASCEBC(savesys_cmd, len);
-	response = savesys_ipl_nss(savesys_cmd, len);
-
-	/* On success: response is equal to the command size,
-	 *	       max SAVESYS_CMD_SIZE
-	 * On error: response contains the numeric portion of cp error message.
-	 *	     for SAVESYS it will be >= 263
-	 *	     for missing privilege class, it will be 1
-	 */
-	if (response > SAVESYS_CMD_SIZE || response == 1) {
-		pr_err("Saving the Linux kernel NSS failed with rc=%d\n",
-			response);
-		kernel_nss_name[0] = '\0';
-		return;
-	}
-
-	/* re-initialize cputime accounting. */
-	get_tod_clock_ext(tod_clock_base);
-	S390_lowcore.last_update_clock = *(__u64 *) &tod_clock_base[1];
-	S390_lowcore.last_update_timer = 0x7fffffffffffffffULL;
-	S390_lowcore.user_timer = 0;
-	S390_lowcore.system_timer = 0;
-	asm volatile("SPT 0(%0)" : : "a" (&S390_lowcore.last_update_timer));
-
-	/* re-setup boot command line with new ipl vm parms */
-	ipl_update_parameters();
-	setup_boot_command_line();
-
-	ipl_flags = IPL_NSS_VALID;
-}
-
-#else /* CONFIG_SHARED_KERNEL */
-
-static inline void create_kernel_nss(void) { }
-
-#endif /* CONFIG_SHARED_KERNEL */

 /*
  * Clear bss memory
···
 	S390_lowcore.machine_flags |= MACHINE_FLAG_IDTE;
 	if (test_facility(40))
 		S390_lowcore.machine_flags |= MACHINE_FLAG_LPP;
-	if (test_facility(50) && test_facility(73))
+	if (test_facility(50) && test_facility(73)) {
 		S390_lowcore.machine_flags |= MACHINE_FLAG_TE;
+		__ctl_set_bit(0, 55);
+	}
 	if (test_facility(51))
 		S390_lowcore.machine_flags |= MACHINE_FLAG_TLB_LC;
 	if (test_facility(129)) {
···
 	append_to_cmdline(append_ipl_scpdata);
 }

-/*
- * Save ipl parameters, clear bss memory, initialize storage keys
- * and create a kernel NSS at startup if the SAVESYS= parm is defined
- */
 void __init startup_init(void)
 {
 	reset_tod_clock();
···
 	setup_arch_string();
 	ipl_update_parameters();
 	setup_boot_command_line();
-	create_kernel_nss();
 	detect_diag9c();
 	detect_diag44();
 	detect_machine_facilities();
+53 -7
arch/s390/kernel/entry.S
··· 13 13 #include <linux/linkage.h> 14 14 #include <asm/processor.h> 15 15 #include <asm/cache.h> 16 + #include <asm/ctl_reg.h> 16 17 #include <asm/errno.h> 17 18 #include <asm/ptrace.h> 18 19 #include <asm/thread_info.h> ··· 953 952 */ 954 953 ENTRY(mcck_int_handler) 955 954 STCK __LC_MCCK_CLOCK 956 - la %r1,4095 # revalidate r1 957 - spt __LC_CPU_TIMER_SAVE_AREA-4095(%r1) # revalidate cpu timer 958 - lmg %r0,%r15,__LC_GPREGS_SAVE_AREA-4095(%r1)# revalidate gprs 955 + la %r1,4095 # validate r1 956 + spt __LC_CPU_TIMER_SAVE_AREA-4095(%r1) # validate cpu timer 957 + sckc __LC_CLOCK_COMPARATOR # validate comparator 958 + lam %a0,%a15,__LC_AREGS_SAVE_AREA-4095(%r1) # validate acrs 959 + lmg %r0,%r15,__LC_GPREGS_SAVE_AREA-4095(%r1)# validate gprs 959 960 lg %r12,__LC_CURRENT 960 961 larl %r13,cleanup_critical 961 962 lmg %r8,%r9,__LC_MCK_OLD_PSW 962 963 TSTMSK __LC_MCCK_CODE,MCCK_CODE_SYSTEM_DAMAGE 963 964 jo .Lmcck_panic # yes -> rest of mcck code invalid 964 - lghi %r14,__LC_CPU_TIMER_SAVE_AREA 965 + TSTMSK __LC_MCCK_CODE,MCCK_CODE_CR_VALID 966 + jno .Lmcck_panic # control registers invalid -> panic 967 + la %r14,4095 968 + lctlg %c0,%c15,__LC_CREGS_SAVE_AREA-4095(%r14) # validate ctl regs 969 + ptlb 970 + lg %r11,__LC_MCESAD-4095(%r14) # extended machine check save area 971 + nill %r11,0xfc00 # MCESA_ORIGIN_MASK 972 + TSTMSK __LC_CREGS_SAVE_AREA+16-4095(%r14),CR2_GUARDED_STORAGE 973 + jno 0f 974 + TSTMSK __LC_MCCK_CODE,MCCK_CODE_GS_VALID 975 + jno 0f 976 + .insn rxy,0xe3000000004d,0,__MCESA_GS_SAVE_AREA(%r11) # LGSC 977 + 0: l %r14,__LC_FP_CREG_SAVE_AREA-4095(%r14) 978 + TSTMSK __LC_MCCK_CODE,MCCK_CODE_FC_VALID 979 + jo 0f 980 + sr %r14,%r14 981 + 0: sfpc %r14 982 + TSTMSK __LC_MACHINE_FLAGS,MACHINE_FLAG_VX 983 + jo 0f 984 + lghi %r14,__LC_FPREGS_SAVE_AREA 985 + ld %f0,0(%r14) 986 + ld %f1,8(%r14) 987 + ld %f2,16(%r14) 988 + ld %f3,24(%r14) 989 + ld %f4,32(%r14) 990 + ld %f5,40(%r14) 991 + ld %f6,48(%r14) 992 + ld %f7,56(%r14) 993 + ld %f8,64(%r14) 994 + ld 
%f9,72(%r14) 995 + ld %f10,80(%r14) 996 + ld %f11,88(%r14) 997 + ld %f12,96(%r14) 998 + ld %f13,104(%r14) 999 + ld %f14,112(%r14) 1000 + ld %f15,120(%r14) 1001 + j 1f 1002 + 0: VLM %v0,%v15,0,%r11 1003 + VLM %v16,%v31,256,%r11 1004 + 1: lghi %r14,__LC_CPU_TIMER_SAVE_AREA 965 1005 mvc __LC_MCCK_ENTER_TIMER(8),0(%r14) 966 1006 TSTMSK __LC_MCCK_CODE,MCCK_CODE_CPU_TIMER_VALID 967 1007 jo 3f ··· 1018 976 la %r14,__LC_LAST_UPDATE_TIMER 1019 977 2: spt 0(%r14) 1020 978 mvc __LC_MCCK_ENTER_TIMER(8),0(%r14) 1021 - 3: TSTMSK __LC_MCCK_CODE,(MCCK_CODE_PSW_MWP_VALID|MCCK_CODE_PSW_IA_VALID) 1022 - jno .Lmcck_panic # no -> skip cleanup critical 1023 - SWITCH_ASYNC __LC_GPREGS_SAVE_AREA+64,__LC_MCCK_ENTER_TIMER 979 + 3: TSTMSK __LC_MCCK_CODE,MCCK_CODE_PSW_MWP_VALID 980 + jno .Lmcck_panic 981 + tmhh %r8,0x0001 # interrupting from user ? 982 + jnz 4f 983 + TSTMSK __LC_MCCK_CODE,MCCK_CODE_PSW_IA_VALID 984 + jno .Lmcck_panic 985 + 4: SWITCH_ASYNC __LC_GPREGS_SAVE_AREA+64,__LC_MCCK_ENTER_TIMER 1024 986 .Lmcck_skip: 1025 987 lghi %r14,__LC_GPREGS_SAVE_AREA+64 1026 988 stmg %r0,%r7,__PT_R0(%r11)
+1
arch/s390/kernel/entry.h
···
 78  78  long sys_s390_guarded_storage(int command, struct gs_cb __user *);
 79  79  long sys_s390_pci_mmio_write(unsigned long, const void __user *, size_t);
 80  80  long sys_s390_pci_mmio_read(unsigned long, void __user *, size_t);
 81      + long sys_s390_sthyi(unsigned long function_code, void __user *buffer, u64 __user *return_code, unsigned long flags);
 81  82  
 82  83  DECLARE_PER_CPU(u64, mt_cycles[8]);
 83  84  
+3 -4
arch/s390/kernel/guarded_storage.c
···
 12  12  #include <asm/guarded_storage.h>
 13  13  #include "entry.h"
 14  14  
 15      - void exit_thread_gs(void)
 15      + void guarded_storage_release(struct task_struct *tsk)
 16  16  {
 17      - 	kfree(current->thread.gs_cb);
 18      - 	kfree(current->thread.gs_bc_cb);
 19      - 	current->thread.gs_cb = current->thread.gs_bc_cb = NULL;
 17      + 	kfree(tsk->thread.gs_cb);
 18      + 	kfree(tsk->thread.gs_bc_cb);
 20  19  }
 21  20  
 22  21  static int gs_enable(void)
-36
arch/s390/kernel/ipl.c
··· 279 279 { 280 280 struct ipl_parameter_block *ipl = IPL_PARMBLOCK_START; 281 281 282 - if (ipl_flags & IPL_NSS_VALID) 283 - return IPL_TYPE_NSS; 284 282 if (!(ipl_flags & IPL_DEVNO_VALID)) 285 283 return IPL_TYPE_UNKNOWN; 286 284 if (!(ipl_flags & IPL_PARMBLOCK_VALID)) ··· 531 533 .attrs = ipl_ccw_attrs_lpar 532 534 }; 533 535 534 - /* NSS ipl device attributes */ 535 - 536 - DEFINE_IPL_ATTR_RO(ipl_nss, name, "%s\n", kernel_nss_name); 537 - 538 - static struct attribute *ipl_nss_attrs[] = { 539 - &sys_ipl_type_attr.attr, 540 - &sys_ipl_nss_name_attr.attr, 541 - &sys_ipl_ccw_loadparm_attr.attr, 542 - &sys_ipl_vm_parm_attr.attr, 543 - NULL, 544 - }; 545 - 546 - static struct attribute_group ipl_nss_attr_group = { 547 - .attrs = ipl_nss_attrs, 548 - }; 549 - 550 536 /* UNKNOWN ipl device attributes */ 551 537 552 538 static struct attribute *ipl_unknown_attrs[] = { ··· 579 597 case IPL_TYPE_FCP: 580 598 case IPL_TYPE_FCP_DUMP: 581 599 rc = sysfs_create_group(&ipl_kset->kobj, &ipl_fcp_attr_group); 582 - break; 583 - case IPL_TYPE_NSS: 584 - rc = sysfs_create_group(&ipl_kset->kobj, &ipl_nss_attr_group); 585 600 break; 586 601 default: 587 602 rc = sysfs_create_group(&ipl_kset->kobj, ··· 1151 1172 return rc; 1152 1173 1153 1174 reipl_block_ccw_init(reipl_block_nss); 1154 - if (ipl_info.type == IPL_TYPE_NSS) { 1155 - memset(reipl_block_nss->ipl_info.ccw.nss_name, 1156 - ' ', NSS_NAME_SIZE); 1157 - memcpy(reipl_block_nss->ipl_info.ccw.nss_name, 1158 - kernel_nss_name, strlen(kernel_nss_name)); 1159 - ASCEBC(reipl_block_nss->ipl_info.ccw.nss_name, NSS_NAME_SIZE); 1160 - reipl_block_nss->ipl_info.ccw.vm_flags |= 1161 - DIAG308_VM_FLAGS_NSS_VALID; 1162 - 1163 - reipl_block_ccw_fill_parms(reipl_block_nss); 1164 - } 1165 - 1166 1175 reipl_capabilities |= IPL_TYPE_NSS; 1167 1176 return 0; 1168 1177 } ··· 1938 1971 ipl_info.data.fcp.lun = IPL_PARMBLOCK_START->ipl_info.fcp.lun; 1939 1972 break; 1940 1973 case IPL_TYPE_NSS: 1941 - strncpy(ipl_info.data.nss.name, 
kernel_nss_name, 1942 - sizeof(ipl_info.data.nss.name)); 1943 - break; 1944 1974 case IPL_TYPE_UNKNOWN: 1945 1975 /* We have no info to copy */ 1946 1976 break;
-7
arch/s390/kernel/kprobes.c
···
 161 161  
 162 162  static int swap_instruction(void *data)
 163 163  {
 164     - 	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
 165     - 	unsigned long status = kcb->kprobe_status;
 166 164  	struct swap_insn_args *args = data;
 167 165  	struct ftrace_insn new_insn, *insn;
 168 166  	struct kprobe *p = args->p;
···
 183 185  		ftrace_generate_nop_insn(&new_insn);
 184 186  	}
 185 187  skip_ftrace:
 186     - 	kcb->kprobe_status = KPROBE_SWAP_INST;
 187 188  	s390_kernel_write(p->addr, &new_insn, len);
 188     - 	kcb->kprobe_status = status;
 189 189  	return 0;
 190 190  }
 191 191  NOKPROBE_SYMBOL(swap_instruction);
···
 570 574  	const struct exception_table_entry *entry;
 571 575  
 572 576  	switch(kcb->kprobe_status) {
 573     - 	case KPROBE_SWAP_INST:
 574     - 		/* We are here because the instruction replacement failed */
 575     - 		return 0;
 576 577  	case KPROBE_HIT_SS:
 577 578  	case KPROBE_REENTER:
 578 579  		/*
+10 -12
arch/s390/kernel/machine_kexec.c
···
 106 106  static noinline void __machine_kdump(void *image)
 107 107  {
 108 108  	struct mcesa *mcesa;
 109     - 	unsigned long cr2_old, cr2_new;
 109     + 	union ctlreg2 cr2_old, cr2_new;
 110 110  	int this_cpu, cpu;
 111 111  
 112 112  	lgr_info_log();
···
 123 123  	if (MACHINE_HAS_VX)
 124 124  		save_vx_regs((__vector128 *) mcesa->vector_save_area);
 125 125  	if (MACHINE_HAS_GS) {
 126     - 		__ctl_store(cr2_old, 2, 2);
 127     - 		cr2_new = cr2_old | (1UL << 4);
 128     - 		__ctl_load(cr2_new, 2, 2);
 126     + 		__ctl_store(cr2_old.val, 2, 2);
 127     + 		cr2_new = cr2_old;
 128     + 		cr2_new.gse = 1;
 129     + 		__ctl_load(cr2_new.val, 2, 2);
 129 130  		save_gs_cb((struct gs_cb *) mcesa->guarded_storage_save_area);
 130     - 		__ctl_load(cr2_old, 2, 2);
 131     + 		__ctl_load(cr2_old.val, 2, 2);
 131 132  	}
 132 133  	/*
 133 134  	 * To create a good backchain for this CPU in the dump store_status
···
 146 145  /*
 147 146   * Check if kdump checksums are valid: We call purgatory with parameter "0"
 148 147   */
 149     - static int kdump_csum_valid(struct kimage *image)
 148     + static bool kdump_csum_valid(struct kimage *image)
 150 149  {
 151 150  #ifdef CONFIG_CRASH_DUMP
 152 151  	int (*start_kdump)(int) = (void *)image->start;
···
 155 154  	__arch_local_irq_stnsm(0xfb);	/* disable DAT */
 156 155  	rc = start_kdump(0);
 157 156  	__arch_local_irq_stosm(0x04);	/* enable DAT */
 158     - 	return rc ? 0 : -EINVAL;
 157     + 	return rc == 0;
 159 158  #else
 160     - 	return -EINVAL;
 159     + 	return false;
 161 160  #endif
 162 161  }
···
 220 219  {
 221 220  	void *reboot_code_buffer;
 222 221  
 223     - 	/* Can't replace kernel image since it is read-only. */
 224     - 	if (ipl_flags & IPL_NSS_VALID)
 225     - 		return -EOPNOTSUPP;
 226     - 
 227 222  	if (image->type == KEXEC_TYPE_CRASH)
 228 223  		return machine_kexec_prepare_kdump();
 229 224  
···
 266 269  	s390_reset_system();
 267 270  	data_mover = (relocate_kernel_t) page_to_phys(image->control_code_page);
 268 271  
 272     + 	__arch_local_irq_stnsm(0xfb);	/* disable DAT - avoid no-execute */
 269 273  	/* Call the moving routine */
 270 274  	(*data_mover)(&image->head, image->start);
 271 275  
+17
arch/s390/kernel/module.c
···
  31  31  #include <linux/kernel.h>
  32  32  #include <linux/moduleloader.h>
  33  33  #include <linux/bug.h>
  34      + #include <asm/alternative.h>
  34  35  
  35  36  #if 0
  36  37  #define DEBUGP printk
···
 430 429  		    const Elf_Shdr *sechdrs,
 431 430  		    struct module *me)
 432 431  {
 432     + 	const Elf_Shdr *s;
 433     + 	char *secstrings;
 434     + 
 435     + 	if (IS_ENABLED(CONFIG_ALTERNATIVES)) {
 436     + 		secstrings = (void *)hdr + sechdrs[hdr->e_shstrndx].sh_offset;
 437     + 		for (s = sechdrs; s < sechdrs + hdr->e_shnum; s++) {
 438     + 			if (!strcmp(".altinstructions",
 439     + 				    secstrings + s->sh_name)) {
 440     + 				/* patch .altinstructions */
 441     + 				void *aseg = (void *)s->sh_addr;
 442     + 
 443     + 				apply_alternatives(aseg, aseg + s->sh_size);
 444     + 			}
 445     + 		}
 446     + 	}
 447     + 
 433 448  	jump_label_apply_nops(me);
 434 449  	return 0;
 435 450  }
+113 -90
arch/s390/kernel/nmi.c
··· 12 12 #include <linux/init.h> 13 13 #include <linux/errno.h> 14 14 #include <linux/hardirq.h> 15 + #include <linux/log2.h> 16 + #include <linux/kprobes.h> 17 + #include <linux/slab.h> 15 18 #include <linux/time.h> 16 19 #include <linux/module.h> 17 20 #include <linux/sched/signal.h> ··· 40 37 }; 41 38 42 39 static DEFINE_PER_CPU(struct mcck_struct, cpu_mcck); 40 + static struct kmem_cache *mcesa_cache; 41 + static unsigned long mcesa_origin_lc; 43 42 44 - static void s390_handle_damage(void) 43 + static inline int nmi_needs_mcesa(void) 45 44 { 46 - smp_send_stop(); 45 + return MACHINE_HAS_VX || MACHINE_HAS_GS; 46 + } 47 + 48 + static inline unsigned long nmi_get_mcesa_size(void) 49 + { 50 + if (MACHINE_HAS_GS) 51 + return MCESA_MAX_SIZE; 52 + return MCESA_MIN_SIZE; 53 + } 54 + 55 + /* 56 + * The initial machine check extended save area for the boot CPU. 57 + * It will be replaced by nmi_init() with an allocated structure. 58 + * The structure is required for machine check happening early in 59 + * the boot process. 
60 + */ 61 + static struct mcesa boot_mcesa __initdata __aligned(MCESA_MAX_SIZE); 62 + 63 + void __init nmi_alloc_boot_cpu(struct lowcore *lc) 64 + { 65 + if (!nmi_needs_mcesa()) 66 + return; 67 + lc->mcesad = (unsigned long) &boot_mcesa; 68 + if (MACHINE_HAS_GS) 69 + lc->mcesad |= ilog2(MCESA_MAX_SIZE); 70 + } 71 + 72 + static int __init nmi_init(void) 73 + { 74 + unsigned long origin, cr0, size; 75 + 76 + if (!nmi_needs_mcesa()) 77 + return 0; 78 + size = nmi_get_mcesa_size(); 79 + if (size > MCESA_MIN_SIZE) 80 + mcesa_origin_lc = ilog2(size); 81 + /* create slab cache for the machine-check-extended-save-areas */ 82 + mcesa_cache = kmem_cache_create("nmi_save_areas", size, size, 0, NULL); 83 + if (!mcesa_cache) 84 + panic("Couldn't create nmi save area cache"); 85 + origin = (unsigned long) kmem_cache_alloc(mcesa_cache, GFP_KERNEL); 86 + if (!origin) 87 + panic("Couldn't allocate nmi save area"); 88 + /* The pointer is stored with mcesa_bits ORed in */ 89 + kmemleak_not_leak((void *) origin); 90 + __ctl_store(cr0, 0, 0); 91 + __ctl_clear_bit(0, 28); /* disable lowcore protection */ 92 + /* Replace boot_mcesa on the boot CPU */ 93 + S390_lowcore.mcesad = origin | mcesa_origin_lc; 94 + __ctl_load(cr0, 0, 0); 95 + return 0; 96 + } 97 + early_initcall(nmi_init); 98 + 99 + int nmi_alloc_per_cpu(struct lowcore *lc) 100 + { 101 + unsigned long origin; 102 + 103 + if (!nmi_needs_mcesa()) 104 + return 0; 105 + origin = (unsigned long) kmem_cache_alloc(mcesa_cache, GFP_KERNEL); 106 + if (!origin) 107 + return -ENOMEM; 108 + /* The pointer is stored with mcesa_bits ORed in */ 109 + kmemleak_not_leak((void *) origin); 110 + lc->mcesad = origin | mcesa_origin_lc; 111 + return 0; 112 + } 113 + 114 + void nmi_free_per_cpu(struct lowcore *lc) 115 + { 116 + if (!nmi_needs_mcesa()) 117 + return; 118 + kmem_cache_free(mcesa_cache, (void *)(lc->mcesad & MCESA_ORIGIN_MASK)); 119 + } 120 + 121 + static notrace void s390_handle_damage(void) 122 + { 123 + smp_emergency_stop(); 47 124 
disabled_wait((unsigned long) __builtin_return_address(0)); 48 125 while (1); 49 126 } 127 + NOKPROBE_SYMBOL(s390_handle_damage); 50 128 51 129 /* 52 130 * Main machine check handler function. Will be called with interrupts enabled ··· 184 100 EXPORT_SYMBOL_GPL(s390_handle_mcck); 185 101 186 102 /* 187 - * returns 0 if all registers could be validated 103 + * returns 0 if all required registers are available 188 104 * returns 1 otherwise 189 105 */ 190 - static int notrace s390_validate_registers(union mci mci, int umode) 106 + static int notrace s390_check_registers(union mci mci, int umode) 191 107 { 108 + union ctlreg2 cr2; 192 109 int kill_task; 193 - u64 zero; 194 110 void *fpt_save_area; 195 - struct mcesa *mcesa; 196 111 197 112 kill_task = 0; 198 - zero = 0; 199 113 200 114 if (!mci.gr) { 201 115 /* ··· 204 122 s390_handle_damage(); 205 123 kill_task = 1; 206 124 } 207 - /* Validate control registers */ 125 + /* Check control registers */ 208 126 if (!mci.cr) { 209 127 /* 210 128 * Control registers have unknown contents. 211 129 * Can't recover and therefore stopping machine. 212 130 */ 213 131 s390_handle_damage(); 214 - } else { 215 - asm volatile( 216 - " lctlg 0,15,0(%0)\n" 217 - " ptlb\n" 218 - : : "a" (&S390_lowcore.cregs_save_area) : "memory"); 219 132 } 220 133 if (!mci.fp) { 221 134 /* ··· 218 141 * kernel currently uses floating point registers the 219 142 * system is stopped. If the process has its floating 220 143 * pointer registers loaded it is terminated. 221 - * Otherwise just revalidate the registers. 222 144 */ 223 145 if (S390_lowcore.fpu_flags & KERNEL_VXR_V0V7) 224 146 s390_handle_damage(); ··· 231 155 * If the kernel currently uses the floating pointer 232 156 * registers and needs the FPC register the system is 233 157 * stopped. If the process has its floating pointer 234 - * registers loaded it is terminated. Otherwiese the 235 - * FPC is just revalidated. 158 + * registers loaded it is terminated. 
236 159 */ 237 160 if (S390_lowcore.fpu_flags & KERNEL_FPC) 238 161 s390_handle_damage(); 239 - asm volatile("lfpc %0" : : "Q" (zero)); 240 162 if (!test_cpu_flag(CIF_FPU)) 241 163 kill_task = 1; 242 - } else { 243 - asm volatile("lfpc %0" 244 - : : "Q" (S390_lowcore.fpt_creg_save_area)); 245 164 } 246 165 247 - mcesa = (struct mcesa *)(S390_lowcore.mcesad & MCESA_ORIGIN_MASK); 248 - if (!MACHINE_HAS_VX) { 249 - /* Validate floating point registers */ 250 - asm volatile( 251 - " ld 0,0(%0)\n" 252 - " ld 1,8(%0)\n" 253 - " ld 2,16(%0)\n" 254 - " ld 3,24(%0)\n" 255 - " ld 4,32(%0)\n" 256 - " ld 5,40(%0)\n" 257 - " ld 6,48(%0)\n" 258 - " ld 7,56(%0)\n" 259 - " ld 8,64(%0)\n" 260 - " ld 9,72(%0)\n" 261 - " ld 10,80(%0)\n" 262 - " ld 11,88(%0)\n" 263 - " ld 12,96(%0)\n" 264 - " ld 13,104(%0)\n" 265 - " ld 14,112(%0)\n" 266 - " ld 15,120(%0)\n" 267 - : : "a" (fpt_save_area) : "memory"); 268 - } else { 269 - /* Validate vector registers */ 270 - union ctlreg0 cr0; 271 - 166 + if (MACHINE_HAS_VX) { 272 167 if (!mci.vr) { 273 168 /* 274 169 * Vector registers can't be restored. If the kernel 275 170 * currently uses vector registers the system is 276 171 * stopped. If the process has its vector registers 277 - * loaded it is terminated. Otherwise just revalidate 278 - * the registers. 172 + * loaded it is terminated. 
279 173 */ 280 174 if (S390_lowcore.fpu_flags & KERNEL_VXR) 281 175 s390_handle_damage(); 282 176 if (!test_cpu_flag(CIF_FPU)) 283 177 kill_task = 1; 284 178 } 285 - cr0.val = S390_lowcore.cregs_save_area[0]; 286 - cr0.afp = cr0.vx = 1; 287 - __ctl_load(cr0.val, 0, 0); 288 - asm volatile( 289 - " la 1,%0\n" 290 - " .word 0xe70f,0x1000,0x0036\n" /* vlm 0,15,0(1) */ 291 - " .word 0xe70f,0x1100,0x0c36\n" /* vlm 16,31,256(1) */ 292 - : : "Q" (*(struct vx_array *) mcesa->vector_save_area) 293 - : "1"); 294 - __ctl_load(S390_lowcore.cregs_save_area[0], 0, 0); 295 179 } 296 - /* Validate access registers */ 297 - asm volatile( 298 - " lam 0,15,0(%0)" 299 - : : "a" (&S390_lowcore.access_regs_save_area)); 180 + /* Check if access registers are valid */ 300 181 if (!mci.ar) { 301 182 /* 302 183 * Access registers have unknown contents. ··· 261 228 */ 262 229 kill_task = 1; 263 230 } 264 - /* Validate guarded storage registers */ 265 - if (MACHINE_HAS_GS && (S390_lowcore.cregs_save_area[2] & (1UL << 4))) { 266 - if (!mci.gs) 231 + /* Check guarded storage registers */ 232 + cr2.val = S390_lowcore.cregs_save_area[2]; 233 + if (cr2.gse) { 234 + if (!mci.gs) { 267 235 /* 268 236 * Guarded storage register can't be restored and 269 237 * the current processes uses guarded storage. 270 238 * It has to be terminated. 271 239 */ 272 240 kill_task = 1; 273 - else 274 - load_gs_cb((struct gs_cb *) 275 - mcesa->guarded_storage_save_area); 241 + } 276 242 } 277 - /* 278 - * We don't even try to validate the TOD register, since we simply 279 - * can't write something sensible into that register. 280 - */ 281 - /* 282 - * See if we can validate the TOD programmable register with its 283 - * old contents (should be zero) otherwise set it to zero. 
284 - */ 285 - if (!mci.pr) 286 - asm volatile( 287 - " sr 0,0\n" 288 - " sckpf" 289 - : : : "0", "cc"); 290 - else 291 - asm volatile( 292 - " l 0,%0\n" 293 - " sckpf" 294 - : : "Q" (S390_lowcore.tod_progreg_save_area) 295 - : "0", "cc"); 296 - /* Validate clock comparator register */ 297 - set_clock_comparator(S390_lowcore.clock_comparator); 298 243 /* Check if old PSW is valid */ 299 - if (!mci.wp) 244 + if (!mci.wp) { 300 245 /* 301 246 * Can't tell if we come from user or kernel mode 302 247 * -> stopping machine. 303 248 */ 304 249 s390_handle_damage(); 250 + } 251 + /* Check for invalid kernel instruction address */ 252 + if (!mci.ia && !umode) { 253 + /* 254 + * The instruction address got lost while running 255 + * in the kernel -> stopping machine. 256 + */ 257 + s390_handle_damage(); 258 + } 305 259 306 260 if (!mci.ms || !mci.pm || !mci.ia) 307 261 kill_task = 1; 308 262 309 263 return kill_task; 310 264 } 265 + NOKPROBE_SYMBOL(s390_check_registers); 311 266 312 267 /* 313 268 * Backup the guest's machine check info to its description block ··· 321 300 mcck_backup->failing_storage_address 322 301 = S390_lowcore.failing_storage_address; 323 302 } 303 + NOKPROBE_SYMBOL(s390_backup_mcck_info); 324 304 325 305 #define MAX_IPD_COUNT 29 326 306 #define MAX_IPD_TIME (5 * 60 * USEC_PER_SEC) /* 5 minutes */ ··· 394 372 s390_handle_damage(); 395 373 } 396 374 } 397 - if (s390_validate_registers(mci, user_mode(regs))) { 375 + if (s390_check_registers(mci, user_mode(regs))) { 398 376 /* 399 377 * Couldn't restore all register contents for the 400 378 * user space process -> mark task for termination. ··· 465 443 clear_cpu_flag(CIF_MCCK_GUEST); 466 444 nmi_exit(); 467 445 } 446 + NOKPROBE_SYMBOL(s390_do_machine_check); 468 447 469 448 static int __init machine_check_init(void) 470 449 {
+218 -60
arch/s390/kernel/perf_cpum_cf_events.c
··· 10 10 11 11 /* BEGIN: CPUM_CF COUNTER DEFINITIONS =================================== */ 12 12 13 - CPUMF_EVENT_ATTR(cf, CPU_CYCLES, 0x0000); 14 - CPUMF_EVENT_ATTR(cf, INSTRUCTIONS, 0x0001); 15 - CPUMF_EVENT_ATTR(cf, L1I_DIR_WRITES, 0x0002); 16 - CPUMF_EVENT_ATTR(cf, L1I_PENALTY_CYCLES, 0x0003); 17 - CPUMF_EVENT_ATTR(cf, PROBLEM_STATE_CPU_CYCLES, 0x0020); 18 - CPUMF_EVENT_ATTR(cf, PROBLEM_STATE_INSTRUCTIONS, 0x0021); 19 - CPUMF_EVENT_ATTR(cf, PROBLEM_STATE_L1I_DIR_WRITES, 0x0022); 20 - CPUMF_EVENT_ATTR(cf, PROBLEM_STATE_L1I_PENALTY_CYCLES, 0x0023); 21 - CPUMF_EVENT_ATTR(cf, PROBLEM_STATE_L1D_DIR_WRITES, 0x0024); 22 - CPUMF_EVENT_ATTR(cf, PROBLEM_STATE_L1D_PENALTY_CYCLES, 0x0025); 23 - CPUMF_EVENT_ATTR(cf, L1D_DIR_WRITES, 0x0004); 24 - CPUMF_EVENT_ATTR(cf, L1D_PENALTY_CYCLES, 0x0005); 25 - CPUMF_EVENT_ATTR(cf, PRNG_FUNCTIONS, 0x0040); 26 - CPUMF_EVENT_ATTR(cf, PRNG_CYCLES, 0x0041); 27 - CPUMF_EVENT_ATTR(cf, PRNG_BLOCKED_FUNCTIONS, 0x0042); 28 - CPUMF_EVENT_ATTR(cf, PRNG_BLOCKED_CYCLES, 0x0043); 29 - CPUMF_EVENT_ATTR(cf, SHA_FUNCTIONS, 0x0044); 30 - CPUMF_EVENT_ATTR(cf, SHA_CYCLES, 0x0045); 31 - CPUMF_EVENT_ATTR(cf, SHA_BLOCKED_FUNCTIONS, 0x0046); 32 - CPUMF_EVENT_ATTR(cf, SHA_BLOCKED_CYCLES, 0x0047); 33 - CPUMF_EVENT_ATTR(cf, DEA_FUNCTIONS, 0x0048); 34 - CPUMF_EVENT_ATTR(cf, DEA_CYCLES, 0x0049); 35 - CPUMF_EVENT_ATTR(cf, DEA_BLOCKED_FUNCTIONS, 0x004a); 36 - CPUMF_EVENT_ATTR(cf, DEA_BLOCKED_CYCLES, 0x004b); 37 - CPUMF_EVENT_ATTR(cf, AES_FUNCTIONS, 0x004c); 38 - CPUMF_EVENT_ATTR(cf, AES_CYCLES, 0x004d); 39 - CPUMF_EVENT_ATTR(cf, AES_BLOCKED_FUNCTIONS, 0x004e); 40 - CPUMF_EVENT_ATTR(cf, AES_BLOCKED_CYCLES, 0x004f); 13 + CPUMF_EVENT_ATTR(cf_fvn1, CPU_CYCLES, 0x0000); 14 + CPUMF_EVENT_ATTR(cf_fvn1, INSTRUCTIONS, 0x0001); 15 + CPUMF_EVENT_ATTR(cf_fvn1, L1I_DIR_WRITES, 0x0002); 16 + CPUMF_EVENT_ATTR(cf_fvn1, L1I_PENALTY_CYCLES, 0x0003); 17 + CPUMF_EVENT_ATTR(cf_fvn1, PROBLEM_STATE_CPU_CYCLES, 0x0020); 18 + CPUMF_EVENT_ATTR(cf_fvn1, PROBLEM_STATE_INSTRUCTIONS, 0x0021); 
19 + CPUMF_EVENT_ATTR(cf_fvn1, PROBLEM_STATE_L1I_DIR_WRITES, 0x0022); 20 + CPUMF_EVENT_ATTR(cf_fvn1, PROBLEM_STATE_L1I_PENALTY_CYCLES, 0x0023); 21 + CPUMF_EVENT_ATTR(cf_fvn1, PROBLEM_STATE_L1D_DIR_WRITES, 0x0024); 22 + CPUMF_EVENT_ATTR(cf_fvn1, PROBLEM_STATE_L1D_PENALTY_CYCLES, 0x0025); 23 + CPUMF_EVENT_ATTR(cf_fvn1, L1D_DIR_WRITES, 0x0004); 24 + CPUMF_EVENT_ATTR(cf_fvn1, L1D_PENALTY_CYCLES, 0x0005); 25 + CPUMF_EVENT_ATTR(cf_fvn3, CPU_CYCLES, 0x0000); 26 + CPUMF_EVENT_ATTR(cf_fvn3, INSTRUCTIONS, 0x0001); 27 + CPUMF_EVENT_ATTR(cf_fvn3, L1I_DIR_WRITES, 0x0002); 28 + CPUMF_EVENT_ATTR(cf_fvn3, L1I_PENALTY_CYCLES, 0x0003); 29 + CPUMF_EVENT_ATTR(cf_fvn3, PROBLEM_STATE_CPU_CYCLES, 0x0020); 30 + CPUMF_EVENT_ATTR(cf_fvn3, PROBLEM_STATE_INSTRUCTIONS, 0x0021); 31 + CPUMF_EVENT_ATTR(cf_fvn3, L1D_DIR_WRITES, 0x0004); 32 + CPUMF_EVENT_ATTR(cf_fvn3, L1D_PENALTY_CYCLES, 0x0005); 33 + CPUMF_EVENT_ATTR(cf_svn_generic, PRNG_FUNCTIONS, 0x0040); 34 + CPUMF_EVENT_ATTR(cf_svn_generic, PRNG_CYCLES, 0x0041); 35 + CPUMF_EVENT_ATTR(cf_svn_generic, PRNG_BLOCKED_FUNCTIONS, 0x0042); 36 + CPUMF_EVENT_ATTR(cf_svn_generic, PRNG_BLOCKED_CYCLES, 0x0043); 37 + CPUMF_EVENT_ATTR(cf_svn_generic, SHA_FUNCTIONS, 0x0044); 38 + CPUMF_EVENT_ATTR(cf_svn_generic, SHA_CYCLES, 0x0045); 39 + CPUMF_EVENT_ATTR(cf_svn_generic, SHA_BLOCKED_FUNCTIONS, 0x0046); 40 + CPUMF_EVENT_ATTR(cf_svn_generic, SHA_BLOCKED_CYCLES, 0x0047); 41 + CPUMF_EVENT_ATTR(cf_svn_generic, DEA_FUNCTIONS, 0x0048); 42 + CPUMF_EVENT_ATTR(cf_svn_generic, DEA_CYCLES, 0x0049); 43 + CPUMF_EVENT_ATTR(cf_svn_generic, DEA_BLOCKED_FUNCTIONS, 0x004a); 44 + CPUMF_EVENT_ATTR(cf_svn_generic, DEA_BLOCKED_CYCLES, 0x004b); 45 + CPUMF_EVENT_ATTR(cf_svn_generic, AES_FUNCTIONS, 0x004c); 46 + CPUMF_EVENT_ATTR(cf_svn_generic, AES_CYCLES, 0x004d); 47 + CPUMF_EVENT_ATTR(cf_svn_generic, AES_BLOCKED_FUNCTIONS, 0x004e); 48 + CPUMF_EVENT_ATTR(cf_svn_generic, AES_BLOCKED_CYCLES, 0x004f); 41 49 CPUMF_EVENT_ATTR(cf_z10, L1I_L2_SOURCED_WRITES, 0x0080); 42 50 
CPUMF_EVENT_ATTR(cf_z10, L1D_L2_SOURCED_WRITES, 0x0081); 43 51 CPUMF_EVENT_ATTR(cf_z10, L1I_L3_LOCAL_WRITES, 0x0082); ··· 179 171 CPUMF_EVENT_ATTR(cf_z13, TX_C_TABORT_SPECIAL, 0x00dc); 180 172 CPUMF_EVENT_ATTR(cf_z13, MT_DIAG_CYCLES_ONE_THR_ACTIVE, 0x01c0); 181 173 CPUMF_EVENT_ATTR(cf_z13, MT_DIAG_CYCLES_TWO_THR_ACTIVE, 0x01c1); 174 + CPUMF_EVENT_ATTR(cf_z14, L1D_WRITES_RO_EXCL, 0x0080); 175 + CPUMF_EVENT_ATTR(cf_z14, DTLB2_WRITES, 0x0081); 176 + CPUMF_EVENT_ATTR(cf_z14, DTLB2_MISSES, 0x0082); 177 + CPUMF_EVENT_ATTR(cf_z14, DTLB2_HPAGE_WRITES, 0x0083); 178 + CPUMF_EVENT_ATTR(cf_z14, DTLB2_GPAGE_WRITES, 0x0084); 179 + CPUMF_EVENT_ATTR(cf_z14, L1D_L2D_SOURCED_WRITES, 0x0085); 180 + CPUMF_EVENT_ATTR(cf_z14, ITLB2_WRITES, 0x0086); 181 + CPUMF_EVENT_ATTR(cf_z14, ITLB2_MISSES, 0x0087); 182 + CPUMF_EVENT_ATTR(cf_z14, L1I_L2I_SOURCED_WRITES, 0x0088); 183 + CPUMF_EVENT_ATTR(cf_z14, TLB2_PTE_WRITES, 0x0089); 184 + CPUMF_EVENT_ATTR(cf_z14, TLB2_CRSTE_WRITES, 0x008a); 185 + CPUMF_EVENT_ATTR(cf_z14, TLB2_ENGINES_BUSY, 0x008b); 186 + CPUMF_EVENT_ATTR(cf_z14, TX_C_TEND, 0x008c); 187 + CPUMF_EVENT_ATTR(cf_z14, TX_NC_TEND, 0x008d); 188 + CPUMF_EVENT_ATTR(cf_z14, L1C_TLB2_MISSES, 0x008f); 189 + CPUMF_EVENT_ATTR(cf_z14, L1D_ONCHIP_L3_SOURCED_WRITES, 0x0090); 190 + CPUMF_EVENT_ATTR(cf_z14, L1D_ONCHIP_MEMORY_SOURCED_WRITES, 0x0091); 191 + CPUMF_EVENT_ATTR(cf_z14, L1D_ONCHIP_L3_SOURCED_WRITES_IV, 0x0092); 192 + CPUMF_EVENT_ATTR(cf_z14, L1D_ONCLUSTER_L3_SOURCED_WRITES, 0x0093); 193 + CPUMF_EVENT_ATTR(cf_z14, L1D_ONCLUSTER_MEMORY_SOURCED_WRITES, 0x0094); 194 + CPUMF_EVENT_ATTR(cf_z14, L1D_ONCLUSTER_L3_SOURCED_WRITES_IV, 0x0095); 195 + CPUMF_EVENT_ATTR(cf_z14, L1D_OFFCLUSTER_L3_SOURCED_WRITES, 0x0096); 196 + CPUMF_EVENT_ATTR(cf_z14, L1D_OFFCLUSTER_MEMORY_SOURCED_WRITES, 0x0097); 197 + CPUMF_EVENT_ATTR(cf_z14, L1D_OFFCLUSTER_L3_SOURCED_WRITES_IV, 0x0098); 198 + CPUMF_EVENT_ATTR(cf_z14, L1D_OFFDRAWER_L3_SOURCED_WRITES, 0x0099); 199 + CPUMF_EVENT_ATTR(cf_z14, 
L1D_OFFDRAWER_MEMORY_SOURCED_WRITES, 0x009a); 200 + CPUMF_EVENT_ATTR(cf_z14, L1D_OFFDRAWER_L3_SOURCED_WRITES_IV, 0x009b); 201 + CPUMF_EVENT_ATTR(cf_z14, L1D_ONDRAWER_L4_SOURCED_WRITES, 0x009c); 202 + CPUMF_EVENT_ATTR(cf_z14, L1D_OFFDRAWER_L4_SOURCED_WRITES, 0x009d); 203 + CPUMF_EVENT_ATTR(cf_z14, L1D_ONCHIP_L3_SOURCED_WRITES_RO, 0x009e); 204 + CPUMF_EVENT_ATTR(cf_z14, L1I_ONCHIP_L3_SOURCED_WRITES, 0x00a2); 205 + CPUMF_EVENT_ATTR(cf_z14, L1I_ONCHIP_MEMORY_SOURCED_WRITES, 0x00a3); 206 + CPUMF_EVENT_ATTR(cf_z14, L1I_ONCHIP_L3_SOURCED_WRITES_IV, 0x00a4); 207 + CPUMF_EVENT_ATTR(cf_z14, L1I_ONCLUSTER_L3_SOURCED_WRITES, 0x00a5); 208 + CPUMF_EVENT_ATTR(cf_z14, L1I_ONCLUSTER_MEMORY_SOURCED_WRITES, 0x00a6); 209 + CPUMF_EVENT_ATTR(cf_z14, L1I_ONCLUSTER_L3_SOURCED_WRITES_IV, 0x00a7); 210 + CPUMF_EVENT_ATTR(cf_z14, L1I_OFFCLUSTER_L3_SOURCED_WRITES, 0x00a8); 211 + CPUMF_EVENT_ATTR(cf_z14, L1I_OFFCLUSTER_MEMORY_SOURCED_WRITES, 0x00a9); 212 + CPUMF_EVENT_ATTR(cf_z14, L1I_OFFCLUSTER_L3_SOURCED_WRITES_IV, 0x00aa); 213 + CPUMF_EVENT_ATTR(cf_z14, L1I_OFFDRAWER_L3_SOURCED_WRITES, 0x00ab); 214 + CPUMF_EVENT_ATTR(cf_z14, L1I_OFFDRAWER_MEMORY_SOURCED_WRITES, 0x00ac); 215 + CPUMF_EVENT_ATTR(cf_z14, L1I_OFFDRAWER_L3_SOURCED_WRITES_IV, 0x00ad); 216 + CPUMF_EVENT_ATTR(cf_z14, L1I_ONDRAWER_L4_SOURCED_WRITES, 0x00ae); 217 + CPUMF_EVENT_ATTR(cf_z14, L1I_OFFDRAWER_L4_SOURCED_WRITES, 0x00af); 218 + CPUMF_EVENT_ATTR(cf_z14, BCD_DFP_EXECUTION_SLOTS, 0x00e0); 219 + CPUMF_EVENT_ATTR(cf_z14, VX_BCD_EXECUTION_SLOTS, 0x00e1); 220 + CPUMF_EVENT_ATTR(cf_z14, DECIMAL_INSTRUCTIONS, 0x00e2); 221 + CPUMF_EVENT_ATTR(cf_z14, LAST_HOST_TRANSLATIONS, 0x00e9); 222 + CPUMF_EVENT_ATTR(cf_z14, TX_NC_TABORT, 0x00f3); 223 + CPUMF_EVENT_ATTR(cf_z14, TX_C_TABORT_NO_SPECIAL, 0x00f4); 224 + CPUMF_EVENT_ATTR(cf_z14, TX_C_TABORT_SPECIAL, 0x00f5); 225 + CPUMF_EVENT_ATTR(cf_z14, MT_DIAG_CYCLES_ONE_THR_ACTIVE, 0x01c0); 226 + CPUMF_EVENT_ATTR(cf_z14, MT_DIAG_CYCLES_TWO_THR_ACTIVE, 0x01c1); 182 227 183 - static struct attribute 
- *cpumcf_pmu_event_attr[] __initdata = {
-	CPUMF_EVENT_PTR(cf, CPU_CYCLES),
-	CPUMF_EVENT_PTR(cf, INSTRUCTIONS),
-	CPUMF_EVENT_PTR(cf, L1I_DIR_WRITES),
-	CPUMF_EVENT_PTR(cf, L1I_PENALTY_CYCLES),
-	CPUMF_EVENT_PTR(cf, PROBLEM_STATE_CPU_CYCLES),
-	CPUMF_EVENT_PTR(cf, PROBLEM_STATE_INSTRUCTIONS),
-	CPUMF_EVENT_PTR(cf, PROBLEM_STATE_L1I_DIR_WRITES),
-	CPUMF_EVENT_PTR(cf, PROBLEM_STATE_L1I_PENALTY_CYCLES),
-	CPUMF_EVENT_PTR(cf, PROBLEM_STATE_L1D_DIR_WRITES),
-	CPUMF_EVENT_PTR(cf, PROBLEM_STATE_L1D_PENALTY_CYCLES),
-	CPUMF_EVENT_PTR(cf, L1D_DIR_WRITES),
-	CPUMF_EVENT_PTR(cf, L1D_PENALTY_CYCLES),
-	CPUMF_EVENT_PTR(cf, PRNG_FUNCTIONS),
-	CPUMF_EVENT_PTR(cf, PRNG_CYCLES),
-	CPUMF_EVENT_PTR(cf, PRNG_BLOCKED_FUNCTIONS),
-	CPUMF_EVENT_PTR(cf, PRNG_BLOCKED_CYCLES),
-	CPUMF_EVENT_PTR(cf, SHA_FUNCTIONS),
-	CPUMF_EVENT_PTR(cf, SHA_CYCLES),
-	CPUMF_EVENT_PTR(cf, SHA_BLOCKED_FUNCTIONS),
-	CPUMF_EVENT_PTR(cf, SHA_BLOCKED_CYCLES),
-	CPUMF_EVENT_PTR(cf, DEA_FUNCTIONS),
-	CPUMF_EVENT_PTR(cf, DEA_CYCLES),
-	CPUMF_EVENT_PTR(cf, DEA_BLOCKED_FUNCTIONS),
-	CPUMF_EVENT_PTR(cf, DEA_BLOCKED_CYCLES),
-	CPUMF_EVENT_PTR(cf, AES_FUNCTIONS),
-	CPUMF_EVENT_PTR(cf, AES_CYCLES),
-	CPUMF_EVENT_PTR(cf, AES_BLOCKED_FUNCTIONS),
-	CPUMF_EVENT_PTR(cf, AES_BLOCKED_CYCLES),
+ static struct attribute *cpumcf_fvn1_pmu_event_attr[] __initdata = {
+	CPUMF_EVENT_PTR(cf_fvn1, CPU_CYCLES),
+	CPUMF_EVENT_PTR(cf_fvn1, INSTRUCTIONS),
+	CPUMF_EVENT_PTR(cf_fvn1, L1I_DIR_WRITES),
+	CPUMF_EVENT_PTR(cf_fvn1, L1I_PENALTY_CYCLES),
+	CPUMF_EVENT_PTR(cf_fvn1, PROBLEM_STATE_CPU_CYCLES),
+	CPUMF_EVENT_PTR(cf_fvn1, PROBLEM_STATE_INSTRUCTIONS),
+	CPUMF_EVENT_PTR(cf_fvn1, PROBLEM_STATE_L1I_DIR_WRITES),
+	CPUMF_EVENT_PTR(cf_fvn1, PROBLEM_STATE_L1I_PENALTY_CYCLES),
+	CPUMF_EVENT_PTR(cf_fvn1, PROBLEM_STATE_L1D_DIR_WRITES),
+	CPUMF_EVENT_PTR(cf_fvn1, PROBLEM_STATE_L1D_PENALTY_CYCLES),
+	CPUMF_EVENT_PTR(cf_fvn1, L1D_DIR_WRITES),
+	CPUMF_EVENT_PTR(cf_fvn1, L1D_PENALTY_CYCLES),
+	NULL,
+ };
+
+ static struct attribute *cpumcf_fvn3_pmu_event_attr[] __initdata = {
+	CPUMF_EVENT_PTR(cf_fvn3, CPU_CYCLES),
+	CPUMF_EVENT_PTR(cf_fvn3, INSTRUCTIONS),
+	CPUMF_EVENT_PTR(cf_fvn3, L1I_DIR_WRITES),
+	CPUMF_EVENT_PTR(cf_fvn3, L1I_PENALTY_CYCLES),
+	CPUMF_EVENT_PTR(cf_fvn3, PROBLEM_STATE_CPU_CYCLES),
+	CPUMF_EVENT_PTR(cf_fvn3, PROBLEM_STATE_INSTRUCTIONS),
+	CPUMF_EVENT_PTR(cf_fvn3, L1D_DIR_WRITES),
+	CPUMF_EVENT_PTR(cf_fvn3, L1D_PENALTY_CYCLES),
+	NULL,
+ };
+
+ static struct attribute *cpumcf_svn_generic_pmu_event_attr[] __initdata = {
+	CPUMF_EVENT_PTR(cf_svn_generic, PRNG_FUNCTIONS),
+	CPUMF_EVENT_PTR(cf_svn_generic, PRNG_CYCLES),
+	CPUMF_EVENT_PTR(cf_svn_generic, PRNG_BLOCKED_FUNCTIONS),
+	CPUMF_EVENT_PTR(cf_svn_generic, PRNG_BLOCKED_CYCLES),
+	CPUMF_EVENT_PTR(cf_svn_generic, SHA_FUNCTIONS),
+	CPUMF_EVENT_PTR(cf_svn_generic, SHA_CYCLES),
+	CPUMF_EVENT_PTR(cf_svn_generic, SHA_BLOCKED_FUNCTIONS),
+	CPUMF_EVENT_PTR(cf_svn_generic, SHA_BLOCKED_CYCLES),
+	CPUMF_EVENT_PTR(cf_svn_generic, DEA_FUNCTIONS),
+	CPUMF_EVENT_PTR(cf_svn_generic, DEA_CYCLES),
+	CPUMF_EVENT_PTR(cf_svn_generic, DEA_BLOCKED_FUNCTIONS),
+	CPUMF_EVENT_PTR(cf_svn_generic, DEA_BLOCKED_CYCLES),
+	CPUMF_EVENT_PTR(cf_svn_generic, AES_FUNCTIONS),
+	CPUMF_EVENT_PTR(cf_svn_generic, AES_CYCLES),
+	CPUMF_EVENT_PTR(cf_svn_generic, AES_BLOCKED_FUNCTIONS),
+	CPUMF_EVENT_PTR(cf_svn_generic, AES_BLOCKED_CYCLES),
 	NULL,
 };
···
 	NULL,
 };
 
+ static struct attribute *cpumcf_z14_pmu_event_attr[] __initdata = {
+	CPUMF_EVENT_PTR(cf_z14, L1D_WRITES_RO_EXCL),
+	CPUMF_EVENT_PTR(cf_z14, DTLB2_WRITES),
+	CPUMF_EVENT_PTR(cf_z14, DTLB2_MISSES),
+	CPUMF_EVENT_PTR(cf_z14, DTLB2_HPAGE_WRITES),
+	CPUMF_EVENT_PTR(cf_z14, DTLB2_GPAGE_WRITES),
+	CPUMF_EVENT_PTR(cf_z14, L1D_L2D_SOURCED_WRITES),
+	CPUMF_EVENT_PTR(cf_z14, ITLB2_WRITES),
+	CPUMF_EVENT_PTR(cf_z14, ITLB2_MISSES),
+	CPUMF_EVENT_PTR(cf_z14, L1I_L2I_SOURCED_WRITES),
+	CPUMF_EVENT_PTR(cf_z14, TLB2_PTE_WRITES),
+	CPUMF_EVENT_PTR(cf_z14, TLB2_CRSTE_WRITES),
+	CPUMF_EVENT_PTR(cf_z14, TLB2_ENGINES_BUSY),
+	CPUMF_EVENT_PTR(cf_z14, TX_C_TEND),
+	CPUMF_EVENT_PTR(cf_z14, TX_NC_TEND),
+	CPUMF_EVENT_PTR(cf_z14, L1C_TLB2_MISSES),
+	CPUMF_EVENT_PTR(cf_z14, L1D_ONCHIP_L3_SOURCED_WRITES),
+	CPUMF_EVENT_PTR(cf_z14, L1D_ONCHIP_MEMORY_SOURCED_WRITES),
+	CPUMF_EVENT_PTR(cf_z14, L1D_ONCHIP_L3_SOURCED_WRITES_IV),
+	CPUMF_EVENT_PTR(cf_z14, L1D_ONCLUSTER_L3_SOURCED_WRITES),
+	CPUMF_EVENT_PTR(cf_z14, L1D_ONCLUSTER_MEMORY_SOURCED_WRITES),
+	CPUMF_EVENT_PTR(cf_z14, L1D_ONCLUSTER_L3_SOURCED_WRITES_IV),
+	CPUMF_EVENT_PTR(cf_z14, L1D_OFFCLUSTER_L3_SOURCED_WRITES),
+	CPUMF_EVENT_PTR(cf_z14, L1D_OFFCLUSTER_MEMORY_SOURCED_WRITES),
+	CPUMF_EVENT_PTR(cf_z14, L1D_OFFCLUSTER_L3_SOURCED_WRITES_IV),
+	CPUMF_EVENT_PTR(cf_z14, L1D_OFFDRAWER_L3_SOURCED_WRITES),
+	CPUMF_EVENT_PTR(cf_z14, L1D_OFFDRAWER_MEMORY_SOURCED_WRITES),
+	CPUMF_EVENT_PTR(cf_z14, L1D_OFFDRAWER_L3_SOURCED_WRITES_IV),
+	CPUMF_EVENT_PTR(cf_z14, L1D_ONDRAWER_L4_SOURCED_WRITES),
+	CPUMF_EVENT_PTR(cf_z14, L1D_OFFDRAWER_L4_SOURCED_WRITES),
+	CPUMF_EVENT_PTR(cf_z14, L1D_ONCHIP_L3_SOURCED_WRITES_RO),
+	CPUMF_EVENT_PTR(cf_z14, L1I_ONCHIP_L3_SOURCED_WRITES),
+	CPUMF_EVENT_PTR(cf_z14, L1I_ONCHIP_MEMORY_SOURCED_WRITES),
+	CPUMF_EVENT_PTR(cf_z14, L1I_ONCHIP_L3_SOURCED_WRITES_IV),
+	CPUMF_EVENT_PTR(cf_z14, L1I_ONCLUSTER_L3_SOURCED_WRITES),
+	CPUMF_EVENT_PTR(cf_z14, L1I_ONCLUSTER_MEMORY_SOURCED_WRITES),
+	CPUMF_EVENT_PTR(cf_z14, L1I_ONCLUSTER_L3_SOURCED_WRITES_IV),
+	CPUMF_EVENT_PTR(cf_z14, L1I_OFFCLUSTER_L3_SOURCED_WRITES),
+	CPUMF_EVENT_PTR(cf_z14, L1I_OFFCLUSTER_MEMORY_SOURCED_WRITES),
+	CPUMF_EVENT_PTR(cf_z14, L1I_OFFCLUSTER_L3_SOURCED_WRITES_IV),
+	CPUMF_EVENT_PTR(cf_z14, L1I_OFFDRAWER_L3_SOURCED_WRITES),
+	CPUMF_EVENT_PTR(cf_z14, L1I_OFFDRAWER_MEMORY_SOURCED_WRITES),
+	CPUMF_EVENT_PTR(cf_z14, L1I_OFFDRAWER_L3_SOURCED_WRITES_IV),
+	CPUMF_EVENT_PTR(cf_z14, L1I_ONDRAWER_L4_SOURCED_WRITES),
+	CPUMF_EVENT_PTR(cf_z14, L1I_OFFDRAWER_L4_SOURCED_WRITES),
+	CPUMF_EVENT_PTR(cf_z14, BCD_DFP_EXECUTION_SLOTS),
+	CPUMF_EVENT_PTR(cf_z14, VX_BCD_EXECUTION_SLOTS),
+	CPUMF_EVENT_PTR(cf_z14, DECIMAL_INSTRUCTIONS),
+	CPUMF_EVENT_PTR(cf_z14, LAST_HOST_TRANSLATIONS),
+	CPUMF_EVENT_PTR(cf_z14, TX_NC_TABORT),
+	CPUMF_EVENT_PTR(cf_z14, TX_C_TABORT_NO_SPECIAL),
+	CPUMF_EVENT_PTR(cf_z14, TX_C_TABORT_SPECIAL),
+	CPUMF_EVENT_PTR(cf_z14, MT_DIAG_CYCLES_ONE_THR_ACTIVE),
+	CPUMF_EVENT_PTR(cf_z14, MT_DIAG_CYCLES_TWO_THR_ACTIVE),
+	NULL,
+ };
+
 /* END: CPUM_CF COUNTER DEFINITIONS ===================================== */
 
 static struct attribute_group cpumcf_pmu_events_group = {
···
 
 static __init struct attribute **merge_attr(struct attribute **a,
-					    struct attribute **b)
+					    struct attribute **b,
+					    struct attribute **c)
 {
 	struct attribute **new;
 	int j, i;
···
 	for (j = 0; a[j]; j++)
 		;
 	for (i = 0; b[i]; i++)
+		j++;
+	for (i = 0; c[i]; i++)
 		j++;
 	j++;
···
 		new[j++] = a[i];
 	for (i = 0; b[i]; i++)
 		new[j++] = b[i];
+	for (i = 0; c[i]; i++)
+		new[j++] = c[i];
 	new[j] = NULL;
 
 	return new;
···
 
 __init const struct attribute_group **cpumf_cf_event_group(void)
 {
-	struct attribute **combined, **model;
+	struct attribute **combined, **model, **cfvn, **csvn;
 	struct attribute *none[] = { NULL };
+	struct cpumf_ctr_info ci;
 	struct cpuid cpu_id;
 
+	/* Determine generic counters set(s) */
+	qctri(&ci);
+	switch (ci.cfvn) {
+	case 1:
+		cfvn = cpumcf_fvn1_pmu_event_attr;
+		break;
+	case 3:
+		cfvn = cpumcf_fvn3_pmu_event_attr;
+		break;
+	default:
+		cfvn = none;
+	}
+	csvn = cpumcf_svn_generic_pmu_event_attr;
+
+	/* Determine model-specific counter set(s) */
 	get_cpu_id(&cpu_id);
 	switch (cpu_id.machine) {
 	case 0x2097:
···
 	case 0x2965:
 		model = cpumcf_z13_pmu_event_attr;
 		break;
+	case 0x3906:
+		model = cpumcf_z14_pmu_event_attr;
+		break;
 	default:
 		model = none;
 		break;
 	}
 
-	combined = merge_attr(cpumcf_pmu_event_attr, model);
+	combined = merge_attr(cfvn, csvn, model);
 	if (combined)
 		cpumcf_pmu_events_group.attrs = combined;
 	return cpumcf_pmu_attr_groups;
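The three-way `merge_attr()` above just concatenates NULL-terminated pointer arrays. A minimal userspace sketch of the same pattern, with illustrative `const char *` entries instead of the kernel's `struct attribute *` and `malloc()` instead of `kmalloc()`:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Sketch of the three-way merge_attr() pattern: count the entries of
 * three NULL-terminated arrays, allocate room for all of them plus the
 * terminator, and copy them over in order. Names and types here are
 * illustrative, not the kernel's. */
static const char **merge_three(const char **a, const char **b, const char **c)
{
	const char **new;
	size_t i, j = 0;

	for (i = 0; a[i]; i++)
		j++;
	for (i = 0; b[i]; i++)
		j++;
	for (i = 0; c[i]; i++)
		j++;
	j++;				/* room for the trailing NULL */

	new = malloc(j * sizeof(*new));
	if (!new)
		return NULL;
	j = 0;
	for (i = 0; a[i]; i++)
		new[j++] = a[i];
	for (i = 0; b[i]; i++)
		new[j++] = b[i];
	for (i = 0; c[i]; i++)
		new[j++] = c[i];
	new[j] = NULL;			/* result is NULL-terminated too */
	return new;
}
```

Keeping every array NULL-terminated is what lets `cpumf_cf_event_group()` pass the empty `none` array for an unknown counter-facility version without any special casing.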
+1 -5
arch/s390/kernel/perf_cpum_sf.c
···
 	}
 
 	/* Check online status of the CPU to which the event is pinned */
-	if (event->cpu >= 0) {
-		if ((unsigned int)event->cpu >= nr_cpumask_bits)
+	if (event->cpu >= 0 && !cpu_online(event->cpu))
 		return -ENODEV;
-		if (!cpu_online(event->cpu))
-			return -ENODEV;
-	}
 
 	/* Force reset of idle/hv excludes regardless of what the
 	 * user requested.
+3 -15
arch/s390/kernel/process.c
···
 
 extern void kernel_thread_starter(void);
 
-/*
- * Free current thread data structures etc..
- */
-void exit_thread(struct task_struct *tsk)
-{
-	if (tsk == current) {
-		exit_thread_runtime_instr();
-		exit_thread_gs();
-	}
-}
-
 void flush_thread(void)
-{
-}
-
-void release_thread(struct task_struct *dead_task)
 {
 }
 
 void arch_release_task_struct(struct task_struct *tsk)
 {
+	runtime_instr_release(tsk);
+	guarded_storage_release(tsk);
 }
 
 int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
···
 	memset(&p->thread.per_user, 0, sizeof(p->thread.per_user));
 	memset(&p->thread.per_event, 0, sizeof(p->thread.per_event));
 	clear_tsk_thread_flag(p, TIF_SINGLE_STEP);
+	p->thread.per_flags = 0;
 	/* Initialize per thread user and system timer values */
 	p->thread.user_timer = 0;
 	p->thread.guest_timer = 0;
+146 -26
arch/s390/kernel/ptrace.c
···
 #include <linux/uaccess.h>
 #include <asm/unistd.h>
 #include <asm/switch_to.h>
+#include <asm/runtime_instr.h>
+#include <asm/facility.h>
+
 #include "entry.h"
 
 #ifdef CONFIG_COMPAT
···
 	struct pt_regs *regs = task_pt_regs(task);
 	struct thread_struct *thread = &task->thread;
 	struct per_regs old, new;
-	unsigned long cr0_old, cr0_new;
-	unsigned long cr2_old, cr2_new;
+	union ctlreg0 cr0_old, cr0_new;
+	union ctlreg2 cr2_old, cr2_new;
 	int cr0_changed, cr2_changed;
 
-	__ctl_store(cr0_old, 0, 0);
-	__ctl_store(cr2_old, 2, 2);
+	__ctl_store(cr0_old.val, 0, 0);
+	__ctl_store(cr2_old.val, 2, 2);
 	cr0_new = cr0_old;
 	cr2_new = cr2_old;
 	/* Take care of the enable/disable of transactional execution. */
 	if (MACHINE_HAS_TE) {
 		/* Set or clear transaction execution TXC bit 8. */
-		cr0_new |= (1UL << 55);
+		cr0_new.tcx = 1;
 		if (task->thread.per_flags & PER_FLAG_NO_TE)
-			cr0_new &= ~(1UL << 55);
+			cr0_new.tcx = 0;
 		/* Set or clear transaction execution TDC bits 62 and 63. */
-		cr2_new &= ~3UL;
+		cr2_new.tdc = 0;
 		if (task->thread.per_flags & PER_FLAG_TE_ABORT_RAND) {
 			if (task->thread.per_flags & PER_FLAG_TE_ABORT_RAND_TEND)
-				cr2_new |= 1UL;
+				cr2_new.tdc = 1;
 			else
-				cr2_new |= 2UL;
+				cr2_new.tdc = 2;
 		}
 	}
 	/* Take care of enable/disable of guarded storage. */
 	if (MACHINE_HAS_GS) {
-		cr2_new &= ~(1UL << 4);
+		cr2_new.gse = 0;
 		if (task->thread.gs_cb)
-			cr2_new |= (1UL << 4);
+			cr2_new.gse = 1;
 	}
 	/* Load control register 0/2 iff changed */
-	cr0_changed = cr0_new != cr0_old;
-	cr2_changed = cr2_new != cr2_old;
+	cr0_changed = cr0_new.val != cr0_old.val;
+	cr2_changed = cr2_new.val != cr2_old.val;
 	if (cr0_changed)
-		__ctl_load(cr0_new, 0, 0);
+		__ctl_load(cr0_new.val, 0, 0);
 	if (cr2_changed)
-		__ctl_load(cr2_new, 2, 2);
+		__ctl_load(cr2_new.val, 2, 2);
 	/* Copy user specified PER registers */
 	new.control = thread->per_user.control;
 	new.start = thread->per_user.start;
···
 			unsigned int pos, unsigned int count,
 			const void *kbuf, const void __user *ubuf)
 {
-	struct gs_cb *data = target->thread.gs_cb;
+	struct gs_cb gs_cb = { }, *data = NULL;
 	int rc;
 
 	if (!MACHINE_HAS_GS)
 		return -ENODEV;
-	if (!data) {
+	if (!target->thread.gs_cb) {
 		data = kzalloc(sizeof(*data), GFP_KERNEL);
 		if (!data)
 			return -ENOMEM;
-		data->gsd = 25;
-		target->thread.gs_cb = data;
-		if (target == current)
-			__ctl_set_bit(2, 4);
-	} else if (target == current) {
-		save_gs_cb(data);
 	}
+	if (!target->thread.gs_cb)
+		gs_cb.gsd = 25;
+	else if (target == current)
+		save_gs_cb(&gs_cb);
+	else
+		gs_cb = *target->thread.gs_cb;
 	rc = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
-				data, 0, sizeof(struct gs_cb));
-	if (target == current)
-		restore_gs_cb(data);
+				&gs_cb, 0, sizeof(gs_cb));
+	if (rc) {
+		kfree(data);
+		return -EFAULT;
+	}
+	preempt_disable();
+	if (!target->thread.gs_cb)
+		target->thread.gs_cb = data;
+	*target->thread.gs_cb = gs_cb;
+	if (target == current) {
+		__ctl_set_bit(2, 4);
+		restore_gs_cb(target->thread.gs_cb);
+	}
+	preempt_enable();
 	return rc;
 }
 
···
 	}
 	return user_regset_copyin(&pos, &count, &kbuf, &ubuf,
 				  data, 0, sizeof(struct gs_cb));
+}
+
+static bool is_ri_cb_valid(struct runtime_instr_cb *cb)
+{
+	return (cb->rca & 0x1f) == 0 &&
+		(cb->roa & 0xfff) == 0 &&
+		(cb->rla & 0xfff) == 0xfff &&
+		cb->s == 1 &&
+		cb->k == 1 &&
+		cb->h == 0 &&
+		cb->reserved1 == 0 &&
+		cb->ps == 1 &&
+		cb->qs == 0 &&
+		cb->pc == 1 &&
+		cb->qc == 0 &&
+		cb->reserved2 == 0 &&
+		cb->key == PAGE_DEFAULT_KEY &&
+		cb->reserved3 == 0 &&
+		cb->reserved4 == 0 &&
+		cb->reserved5 == 0 &&
+		cb->reserved6 == 0 &&
+		cb->reserved7 == 0 &&
+		cb->reserved8 == 0 &&
+		cb->rla >= cb->roa &&
+		cb->rca >= cb->roa &&
+		cb->rca <= cb->rla+1 &&
+		cb->m < 3;
+}
+
+static int s390_runtime_instr_get(struct task_struct *target,
+				const struct user_regset *regset,
+				unsigned int pos, unsigned int count,
+				void *kbuf, void __user *ubuf)
+{
+	struct runtime_instr_cb *data = target->thread.ri_cb;
+
+	if (!test_facility(64))
+		return -ENODEV;
+	if (!data)
+		return -ENODATA;
+
+	return user_regset_copyout(&pos, &count, &kbuf, &ubuf,
+				   data, 0, sizeof(struct runtime_instr_cb));
+}
+
+static int s390_runtime_instr_set(struct task_struct *target,
+				  const struct user_regset *regset,
+				  unsigned int pos, unsigned int count,
+				  const void *kbuf, const void __user *ubuf)
+{
+	struct runtime_instr_cb ri_cb = { }, *data = NULL;
+	int rc;
+
+	if (!test_facility(64))
+		return -ENODEV;
+
+	if (!target->thread.ri_cb) {
+		data = kzalloc(sizeof(*data), GFP_KERNEL);
+		if (!data)
+			return -ENOMEM;
+	}
+
+	if (target->thread.ri_cb) {
+		if (target == current)
+			store_runtime_instr_cb(&ri_cb);
+		else
+			ri_cb = *target->thread.ri_cb;
+	}
+
+	rc = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
+				&ri_cb, 0, sizeof(struct runtime_instr_cb));
+	if (rc) {
+		kfree(data);
+		return -EFAULT;
+	}
+
+	if (!is_ri_cb_valid(&ri_cb)) {
+		kfree(data);
+		return -EINVAL;
+	}
+
+	preempt_disable();
+	if (!target->thread.ri_cb)
+		target->thread.ri_cb = data;
+	*target->thread.ri_cb = ri_cb;
+	if (target == current)
+		load_runtime_instr_cb(target->thread.ri_cb);
+	preempt_enable();
+
+	return 0;
 }
 
 static const struct user_regset s390_regsets[] = {
···
 		.align = sizeof(__u64),
 		.get = s390_gs_bc_get,
 		.set = s390_gs_bc_set,
+	},
+	{
+		.core_note_type = NT_S390_RI_CB,
+		.n = sizeof(struct runtime_instr_cb) / sizeof(__u64),
+		.size = sizeof(__u64),
+		.align = sizeof(__u64),
+		.get = s390_runtime_instr_get,
+		.set = s390_runtime_instr_set,
 	},
 };
 
···
 		.align = sizeof(__u64),
 		.get = s390_gs_cb_get,
 		.set = s390_gs_cb_set,
+	},
+	{
+		.core_note_type = NT_S390_RI_CB,
+		.n = sizeof(struct runtime_instr_cb) / sizeof(__u64),
+		.size = sizeof(__u64),
+		.align = sizeof(__u64),
+		.get = s390_runtime_instr_get,
+		.set = s390_runtime_instr_set,
 	},
 };
 
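The heart of `is_ri_cb_valid()` is a set of alignment and range checks on the runtime-instrumentation buffer addresses. A simplified userspace model of just those address checks (the struct below is a plain model with the diff's field names, not the hardware control-block layout, and it omits the mode/key/reserved-bit checks):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Model of the address checks in is_ri_cb_valid(): the origin (roa)
 * must start on a 4K boundary, the limit (rla) must end on one, and
 * the current address (rca) must be 32-byte aligned and lie within
 * [roa, rla + 1]. */
struct ri_cb_model {
	uint64_t rca;	/* current address */
	uint64_t roa;	/* origin address  */
	uint64_t rla;	/* limit address   */
};

static bool ri_range_valid(const struct ri_cb_model *cb)
{
	return (cb->rca & 0x1f) == 0 &&
	       (cb->roa & 0xfff) == 0 &&
	       (cb->rla & 0xfff) == 0xfff &&
	       cb->rla >= cb->roa &&
	       cb->rca >= cb->roa &&
	       cb->rca <= cb->rla + 1;
}
```

Validating the whole block before it is ever loaded is what makes it safe to accept an attacker-controlled control block through the new `NT_S390_RI_CB` regset.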
-3
arch/s390/kernel/relocate_kernel.S
···
 ENTRY(relocate_kernel)
 	basr	%r13,0		# base address
 .base:
-	stnsm	sys_msk-.base(%r13),0xfb	# disable DAT
 	stctg	%c0,%c15,ctlregs-.base(%r13)
 	stmg	%r0,%r15,gprregs-.base(%r13)
 	lghi	%r0,3
···
 	.align	8
 load_psw:
 	.long	0x00080000,0x80000000
-sys_msk:
-	.quad	0
 ctlregs:
 	.rept	16
 	.quad	0
+21 -21
arch/s390/kernel/runtime_instr.c
···
 /* empty control block to disable RI by loading it */
 struct runtime_instr_cb runtime_instr_empty_cb;
 
+void runtime_instr_release(struct task_struct *tsk)
+{
+	kfree(tsk->thread.ri_cb);
+}
+
 static void disable_runtime_instr(void)
 {
-	struct pt_regs *regs = task_pt_regs(current);
+	struct task_struct *task = current;
+	struct pt_regs *regs;
 
+	if (!task->thread.ri_cb)
+		return;
+	regs = task_pt_regs(task);
+	preempt_disable();
 	load_runtime_instr_cb(&runtime_instr_empty_cb);
+	kfree(task->thread.ri_cb);
+	task->thread.ri_cb = NULL;
+	preempt_enable();
 
 	/*
 	 * Make sure the RI bit is deleted from the PSW. If the user did not
···
 
 static void init_runtime_instr_cb(struct runtime_instr_cb *cb)
 {
-	cb->buf_limit = 0xfff;
-	cb->pstate = 1;
-	cb->pstate_set_buf = 1;
-	cb->pstate_sample = 1;
-	cb->pstate_collect = 1;
+	cb->rla = 0xfff;
+	cb->s = 1;
+	cb->k = 1;
+	cb->ps = 1;
+	cb->pc = 1;
 	cb->key = PAGE_DEFAULT_KEY;
-	cb->valid = 1;
-}
-
-void exit_thread_runtime_instr(void)
-{
-	struct task_struct *task = current;
-
-	if (!task->thread.ri_cb)
-		return;
-	disable_runtime_instr();
-	kfree(task->thread.ri_cb);
-	task->thread.ri_cb = NULL;
+	cb->v = 1;
 }
 
 SYSCALL_DEFINE1(s390_runtime_instr, int, command)
···
 		return -EOPNOTSUPP;
 
 	if (command == S390_RUNTIME_INSTR_STOP) {
-		preempt_disable();
-		exit_thread_runtime_instr();
-		preempt_enable();
+		disable_runtime_instr();
 		return 0;
 	}
 
+9 -12
arch/s390/kernel/setup.c
···
 #include <asm/mmu_context.h>
 #include <asm/cpcmd.h>
 #include <asm/lowcore.h>
+#include <asm/nmi.h>
 #include <asm/irq.h>
 #include <asm/page.h>
 #include <asm/ptrace.h>
 #include <asm/sections.h>
 #include <asm/ebcdic.h>
-#include <asm/kvm_virtio.h>
 #include <asm/diag.h>
 #include <asm/os_info.h>
 #include <asm/sclp.h>
 #include <asm/sysinfo.h>
 #include <asm/numa.h>
+#include <asm/alternative.h>
 #include "entry.h"
 
 /*
···
 	lc->stfl_fac_list = S390_lowcore.stfl_fac_list;
 	memcpy(lc->stfle_fac_list, S390_lowcore.stfle_fac_list,
 	       MAX_FACILITY_BIT/8);
-	if (MACHINE_HAS_VX || MACHINE_HAS_GS) {
-		unsigned long bits, size;
-
-		bits = MACHINE_HAS_GS ? 11 : 10;
-		size = 1UL << bits;
-		lc->mcesad = (__u64) memblock_virt_alloc(size, size);
-		if (MACHINE_HAS_GS)
-			lc->mcesad |= bits;
-	}
-	lc->vdso_per_cpu_data = (unsigned long) &lc->paste[0];
+	nmi_alloc_boot_cpu(lc);
+	vdso_alloc_boot_cpu(lc);
 	lc->sync_enter_timer = S390_lowcore.sync_enter_timer;
 	lc->async_enter_timer = S390_lowcore.async_enter_timer;
 	lc->exit_timer = S390_lowcore.exit_timer;
···
 
 #ifdef CONFIG_SMP
 	lc->spinlock_lockval = arch_spin_lockval(0);
+	lc->spinlock_index = 0;
+	arch_spin_lock_setup(0);
 #endif
 
 	set_prefix((u32)(unsigned long) lc);
···
 	/*
 	 * Transactional execution support HWCAP_S390_TE is bit 10.
 	 */
-	if (test_facility(50) && test_facility(73))
+	if (MACHINE_HAS_TE)
 		elf_hwcap |= HWCAP_S390_TE;
 
 	/*
···
 	/* Setup default console */
 	conmode_default();
 	set_preferred_console();
+
+	apply_alternative_instructions();
 
 	/* Setup zfcpdump support */
 	setup_zfcpdump();
+36 -51
arch/s390/kernel/smp.c
···
 #include <linux/sched/task_stack.h>
 #include <linux/crash_dump.h>
 #include <linux/memblock.h>
+#include <linux/kprobes.h>
 #include <asm/asm-offsets.h>
 #include <asm/diag.h>
 #include <asm/switch_to.h>
···
 
 static u8 boot_core_type;
 static struct pcpu pcpu_devices[NR_CPUS];
-
-static struct kmem_cache *pcpu_mcesa_cache;
 
 unsigned int smp_cpu_mt_shift;
 EXPORT_SYMBOL(smp_cpu_mt_shift);
···
 static int pcpu_alloc_lowcore(struct pcpu *pcpu, int cpu)
 {
 	unsigned long async_stack, panic_stack;
-	unsigned long mcesa_origin, mcesa_bits;
 	struct lowcore *lc;
 
-	mcesa_origin = mcesa_bits = 0;
 	if (pcpu != &pcpu_devices[0]) {
 		pcpu->lowcore =	(struct lowcore *)
 			__get_free_pages(GFP_KERNEL | GFP_DMA, LC_ORDER);
···
 		panic_stack = __get_free_page(GFP_KERNEL);
 		if (!pcpu->lowcore || !panic_stack || !async_stack)
 			goto out;
-		if (MACHINE_HAS_VX || MACHINE_HAS_GS) {
-			mcesa_origin = (unsigned long)
-				kmem_cache_alloc(pcpu_mcesa_cache, GFP_KERNEL);
-			if (!mcesa_origin)
-				goto out;
-			/* The pointer is stored with mcesa_bits ORed in */
-			kmemleak_not_leak((void *) mcesa_origin);
-			mcesa_bits = MACHINE_HAS_GS ? 11 : 0;
-		}
 	} else {
 		async_stack = pcpu->lowcore->async_stack - ASYNC_FRAME_OFFSET;
 		panic_stack = pcpu->lowcore->panic_stack - PANIC_FRAME_OFFSET;
-		mcesa_origin = pcpu->lowcore->mcesad & MCESA_ORIGIN_MASK;
-		mcesa_bits = pcpu->lowcore->mcesad & MCESA_LC_MASK;
 	}
 	lc = pcpu->lowcore;
 	memcpy(lc, &S390_lowcore, 512);
 	memset((char *) lc + 512, 0, sizeof(*lc) - 512);
 	lc->async_stack = async_stack + ASYNC_FRAME_OFFSET;
 	lc->panic_stack = panic_stack + PANIC_FRAME_OFFSET;
-	lc->mcesad = mcesa_origin | mcesa_bits;
 	lc->cpu_nr = cpu;
 	lc->spinlock_lockval = arch_spin_lockval(cpu);
-	if (vdso_alloc_per_cpu(lc))
+	lc->spinlock_index = 0;
+	if (nmi_alloc_per_cpu(lc))
 		goto out;
+	if (vdso_alloc_per_cpu(lc))
+		goto out_mcesa;
 	lowcore_ptr[cpu] = lc;
 	pcpu_sigp_retry(pcpu, SIGP_SET_PREFIX, (u32)(unsigned long) lc);
 	return 0;
+
+out_mcesa:
+	nmi_free_per_cpu(lc);
 out:
 	if (pcpu != &pcpu_devices[0]) {
-		if (mcesa_origin)
-			kmem_cache_free(pcpu_mcesa_cache,
-					(void *) mcesa_origin);
 		free_page(panic_stack);
 		free_pages(async_stack, ASYNC_ORDER);
 		free_pages((unsigned long) pcpu->lowcore, LC_ORDER);
···
 
 static void pcpu_free_lowcore(struct pcpu *pcpu)
 {
-	unsigned long mcesa_origin;
-
 	pcpu_sigp_retry(pcpu, SIGP_SET_PREFIX, 0);
 	lowcore_ptr[pcpu - pcpu_devices] = NULL;
 	vdso_free_per_cpu(pcpu->lowcore);
+	nmi_free_per_cpu(pcpu->lowcore);
 	if (pcpu == &pcpu_devices[0])
 		return;
-	if (MACHINE_HAS_VX || MACHINE_HAS_GS) {
-		mcesa_origin = pcpu->lowcore->mcesad & MCESA_ORIGIN_MASK;
-		kmem_cache_free(pcpu_mcesa_cache, (void *) mcesa_origin);
-	}
 	free_page(pcpu->lowcore->panic_stack-PANIC_FRAME_OFFSET);
 	free_pages(pcpu->lowcore->async_stack-ASYNC_FRAME_OFFSET, ASYNC_ORDER);
 	free_pages((unsigned long) pcpu->lowcore, LC_ORDER);
···
 	cpumask_set_cpu(cpu, mm_cpumask(&init_mm));
 	lc->cpu_nr = cpu;
 	lc->spinlock_lockval = arch_spin_lockval(cpu);
+	lc->spinlock_index = 0;
 	lc->percpu_offset = __per_cpu_offset[cpu];
 	lc->kernel_asce = S390_lowcore.kernel_asce;
 	lc->machine_flags = S390_lowcore.machine_flags;
···
 	save_access_regs((unsigned int *) lc->access_regs_save_area);
 	memcpy(lc->stfle_fac_list, S390_lowcore.stfle_fac_list,
 	       MAX_FACILITY_BIT/8);
+	arch_spin_lock_setup(cpu);
 }
 
 static void pcpu_attach_task(struct pcpu *pcpu, struct task_struct *tsk)
···
 * Send cpus emergency shutdown signal. This gives the cpus the
 * opportunity to complete outstanding interrupts.
 */
-static void smp_emergency_stop(cpumask_t *cpumask)
+void notrace smp_emergency_stop(void)
 {
+	cpumask_t cpumask;
 	u64 end;
 	int cpu;
 
+	cpumask_copy(&cpumask, cpu_online_mask);
+	cpumask_clear_cpu(smp_processor_id(), &cpumask);
+
 	end = get_tod_clock() + (1000000UL << 12);
-	for_each_cpu(cpu, cpumask) {
+	for_each_cpu(cpu, &cpumask) {
 		struct pcpu *pcpu = pcpu_devices + cpu;
 		set_bit(ec_stop_cpu, &pcpu->ec_mask);
 		while (__pcpu_sigp(pcpu->address, SIGP_EMERGENCY_SIGNAL,
···
 			cpu_relax();
 	}
 	while (get_tod_clock() < end) {
-		for_each_cpu(cpu, cpumask)
+		for_each_cpu(cpu, &cpumask)
 			if (pcpu_stopped(pcpu_devices + cpu))
-				cpumask_clear_cpu(cpu, cpumask);
-		if (cpumask_empty(cpumask))
+				cpumask_clear_cpu(cpu, &cpumask);
+		if (cpumask_empty(&cpumask))
 			break;
 		cpu_relax();
 	}
 }
+NOKPROBE_SYMBOL(smp_emergency_stop);
 
 /*
 * Stop all cpus but the current one.
 */
 void smp_send_stop(void)
 {
-	cpumask_t cpumask;
 	int cpu;
 
 	/* Disable all interrupts/machine checks */
···
 	trace_hardirqs_off();
 
 	debug_set_critical();
-	cpumask_copy(&cpumask, cpu_online_mask);
-	cpumask_clear_cpu(smp_processor_id(), &cpumask);
 
 	if (oops_in_progress)
-		smp_emergency_stop(&cpumask);
+		smp_emergency_stop();
 
 	/* stop all processors */
-	for_each_cpu(cpu, &cpumask) {
-		struct pcpu *pcpu = pcpu_devices + cpu;
-		pcpu_sigp_retry(pcpu, SIGP_STOP, 0);
-		while (!pcpu_stopped(pcpu))
+	for_each_online_cpu(cpu) {
+		if (cpu == smp_processor_id())
+			continue;
+		pcpu_sigp_retry(pcpu_devices + cpu, SIGP_STOP, 0);
+		while (!pcpu_stopped(pcpu_devices + cpu))
 			cpu_relax();
 	}
 }
···
 */
 static void smp_start_secondary(void *cpuvoid)
 {
+	int cpu = smp_processor_id();
+
 	S390_lowcore.last_update_clock = get_tod_clock();
 	S390_lowcore.restart_stack = (unsigned long) restart_stack;
 	S390_lowcore.restart_fn = (unsigned long) do_restart;
···
 	init_cpu_timer();
 	vtime_init();
 	pfault_init();
-	notify_cpu_starting(smp_processor_id());
-	set_cpu_online(smp_processor_id(), true);
+	notify_cpu_starting(cpu);
+	if (topology_cpu_dedicated(cpu))
+		set_cpu_flag(CIF_DEDICATED_CPU);
+	else
+		clear_cpu_flag(CIF_DEDICATED_CPU);
+	set_cpu_online(cpu, true);
 	inc_irq_stat(CPU_RST);
 	local_irq_enable();
 	cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
···
 
 void __init smp_prepare_cpus(unsigned int max_cpus)
 {
-	unsigned long size;
-
 	/* request the 0x1201 emergency signal external interrupt */
 	if (register_external_irq(EXT_IRQ_EMERGENCY_SIG, do_ext_call_interrupt))
 		panic("Couldn't request external interrupt 0x1201");
 	/* request the 0x1202 external call external interrupt */
 	if (register_external_irq(EXT_IRQ_EXTERNAL_CALL, do_ext_call_interrupt))
 		panic("Couldn't request external interrupt 0x1202");
-	/* create slab cache for the machine-check-extended-save-areas */
-	if (MACHINE_HAS_VX || MACHINE_HAS_GS) {
-		size = 1UL << (MACHINE_HAS_GS ? 11 : 10);
-		pcpu_mcesa_cache = kmem_cache_create("nmi_save_areas",
-						     size, size, 0, NULL);
-		if (!pcpu_mcesa_cache)
-			panic("Couldn't create nmi save area cache");
-	}
 }
 
 void __init smp_prepare_boot_cpu(void)
···
 	pcpu_devices[0].address = stap();
 	S390_lowcore.cpu_nr = 0;
 	S390_lowcore.spinlock_lockval = arch_spin_lockval(0);
+	S390_lowcore.spinlock_index = 0;
 }
 
 /*
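The `smp_emergency_stop()` loop above is a deadline-bounded poll: after signalling, it rescans the target mask until every CPU reports stopped or the TOD-clock deadline passes. A toy userspace model of that pattern (the simulated clock, `cpu_stopped()`, and all names here are hypothetical stand-ins for the real TOD clock and SIGP sense):

```c
#include <assert.h>
#include <stdbool.h>

#define NCPUS 4

static unsigned long fake_tod;		/* stand-in for get_tod_clock() */

static bool cpu_stopped(int cpu)
{
	/* simulate CPU n acknowledging the stop at tick n + 1 */
	return fake_tod >= (unsigned long)(cpu + 1);
}

/* Poll until all CPUs stop or the deadline hits; returns the number
 * of CPUs still running, so 0 means every CPU acknowledged in time. */
static int wait_for_stop(unsigned long timeout_ticks)
{
	unsigned long end = fake_tod + timeout_ticks;
	int cpu, pending = NCPUS;

	while (fake_tod < end && pending > 0) {
		pending = 0;
		for (cpu = 0; cpu < NCPUS; cpu++)
			if (!cpu_stopped(cpu))
				pending++;
		fake_tod++;		/* time passes while we poll */
	}
	return pending;
}
```

Bounding the wait matters in the real code because it runs on the panic path, where a wedged CPU must not hang the shutdown forever.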
+4 -4
arch/s390/kernel/suspend.c
···
 {
 	unsigned long nosave_begin_pfn = PFN_DOWN(__pa(&__nosave_begin));
 	unsigned long nosave_end_pfn = PFN_DOWN(__pa(&__nosave_end));
-	unsigned long eshared_pfn = PFN_DOWN(__pa(&_eshared)) - 1;
+	unsigned long end_rodata_pfn = PFN_DOWN(__pa(&__end_rodata)) - 1;
 	unsigned long stext_pfn = PFN_DOWN(__pa(&_stext));
 
 	/* Always save lowcore pages (LC protection might be enabled). */
···
 		return 0;
 	if (pfn >= nosave_begin_pfn && pfn < nosave_end_pfn)
 		return 1;
-	/* Skip memory holes and read-only pages (NSS, DCSS, ...). */
-	if (pfn >= stext_pfn && pfn <= eshared_pfn)
-		return ipl_info.type == IPL_TYPE_NSS ? 1 : 0;
+	/* Skip memory holes and read-only pages (DCSS, ...). */
+	if (pfn >= stext_pfn && pfn <= end_rodata_pfn)
+		return 0;
 	if (tprot(PFN_PHYS(pfn)))
 		return 1;
 	return 0;
+1
arch/s390/kernel/syscalls.S
···
 SYSCALL(sys_pwritev2,compat_sys_pwritev2)
 SYSCALL(sys_s390_guarded_storage,compat_sys_s390_guarded_storage) /* 378 */
 SYSCALL(sys_statx,compat_sys_statx)
+SYSCALL(sys_s390_sthyi,compat_sys_s390_sthyi)
+42 -1
arch/s390/kernel/topology.c
···
 			topo->socket_id = socket->id;
 			topo->core_id = rcore;
 			topo->thread_id = lcpu + i;
+			topo->dedicated = tl_core->d;
 			cpumask_set_cpu(lcpu + i, &drawer->mask);
 			cpumask_set_cpu(lcpu + i, &book->mask);
 			cpumask_set_cpu(lcpu + i, &socket->mask);
···
 		stsi(info, 15, 1, topology_mnest_limit());
 }
 
+static void __arch_update_dedicated_flag(void *arg)
+{
+	if (topology_cpu_dedicated(smp_processor_id()))
+		set_cpu_flag(CIF_DEDICATED_CPU);
+	else
+		clear_cpu_flag(CIF_DEDICATED_CPU);
+}
+
 static int __arch_update_cpu_topology(void)
 {
 	struct sysinfo_15_1_x *info = tl_info;
···
 	int cpu, rc;
 
 	rc = __arch_update_cpu_topology();
+	on_each_cpu(__arch_update_dedicated_flag, NULL, 0);
 	for_each_online_cpu(cpu) {
 		dev = get_cpu_device(cpu);
 		kobject_uevent(&dev->kobj, KOBJ_CHANGE);
···
 	.attrs = topology_cpu_attrs,
 };
 
+static ssize_t cpu_dedicated_show(struct device *dev,
+				  struct device_attribute *attr, char *buf)
+{
+	int cpu = dev->id;
+	ssize_t count;
+
+	mutex_lock(&smp_cpu_state_mutex);
+	count = sprintf(buf, "%d\n", topology_cpu_dedicated(cpu));
+	mutex_unlock(&smp_cpu_state_mutex);
+	return count;
+}
+static DEVICE_ATTR(dedicated, 0444, cpu_dedicated_show, NULL);
+
+static struct attribute *topology_extra_cpu_attrs[] = {
+	&dev_attr_dedicated.attr,
+	NULL,
+};
+
+static struct attribute_group topology_extra_cpu_attr_group = {
+	.attrs = topology_extra_cpu_attrs,
+};
+
 int topology_cpu_init(struct cpu *cpu)
 {
-	return sysfs_create_group(&cpu->dev.kobj, &topology_cpu_attr_group);
+	int rc;
+
+	rc = sysfs_create_group(&cpu->dev.kobj, &topology_cpu_attr_group);
+	if (rc || !MACHINE_HAS_TOPOLOGY)
+		return rc;
+	rc = sysfs_create_group(&cpu->dev.kobj, &topology_extra_cpu_attr_group);
+	if (rc)
+		sysfs_remove_group(&cpu->dev.kobj, &topology_cpu_attr_group);
+	return rc;
 }
 
 static const struct cpumask *cpu_thread_mask(int cpu)
···
 		alloc_masks(info, &drawer_info, 3);
 out:
 	__arch_update_cpu_topology();
+	__arch_update_dedicated_flag(NULL);
 }
 
 static inline int topology_get_mode(int enabled)
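`cpu_dedicated_show()` follows the standard sysfs `show()` contract: format the value plus a newline into the caller-supplied buffer and return the byte count. A minimal userspace sketch of that contract (`fake_dedicated()` is a hypothetical stand-in for `topology_cpu_dedicated()`, and the mutex is omitted):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Pretend only CPU 0 is dedicated; illustrative stand-in. */
static int fake_dedicated(int cpu)
{
	return cpu == 0;
}

/* A sysfs show() callback returns the number of bytes it wrote into
 * buf, which sysfs then hands back to the reading process. */
static long dedicated_show(int cpu, char *buf)
{
	return sprintf(buf, "%d\n", fake_dedicated(cpu));
}
```

Userspace then sees the value as `cat /sys/devices/system/cpu/cpuN/topology/../dedicated`-style reads returning `0` or `1` plus a newline.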
+16 -4
arch/s390/kernel/vdso.c
···
 */
 #define SEGMENT_ORDER	2
 
+/*
+ * The initial vdso_data structure for the boot CPU. Eventually
+ * it is replaced with a properly allocated structure in vdso_init.
+ * This is necessary because a valid S390_lowcore.vdso_per_cpu_data
+ * pointer is required to be able to return from an interrupt or
+ * program check. See the exit paths in entry.S.
+ */
+struct vdso_data boot_vdso_data __initdata;
+
+void __init vdso_alloc_boot_cpu(struct lowcore *lowcore)
+{
+	lowcore->vdso_per_cpu_data = (unsigned long) &boot_vdso_data;
+}
+
 int vdso_alloc_per_cpu(struct lowcore *lowcore)
 {
 	unsigned long segment_table, page_table, page_frame;
···
 	vd->node_id = cpu_to_node(vd->cpu_nr);
 
 	/* Set up access register mode page table */
-	clear_table((unsigned long *) segment_table, _SEGMENT_ENTRY_EMPTY,
-		    PAGE_SIZE << SEGMENT_ORDER);
-	clear_table((unsigned long *) page_table, _PAGE_INVALID,
-		    256*sizeof(unsigned long));
+	memset64((u64 *)segment_table, _SEGMENT_ENTRY_EMPTY, _CRST_ENTRIES);
+	memset64((u64 *)page_table, _PAGE_INVALID, PTRS_PER_PTE);
 
 	*(unsigned long *) segment_table = _SEGMENT_ENTRY + page_table;
 	*(unsigned long *) page_table = _PAGE_PROTECT + page_frame;
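The `clear_table()` calls above are replaced by `memset64()`, one of the fast fill primitives this merge adds: store one 64-bit pattern into every entry of a table, counted in entries rather than bytes. A portable model of its semantics (the pattern value below is illustrative, not the actual s390 page-table encoding):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Portable model of memset64(): fill count 64-bit slots with v.
 * On s390 the real implementation is backed by a fast assembler
 * routine; generic kernels fall back to a loop like this one. */
static void memset64_model(uint64_t *s, uint64_t v, size_t count)
{
	size_t i;

	for (i = 0; i < count; i++)
		s[i] = v;
}
```

Counting in entries is why the call sites shrink: `_CRST_ENTRIES` and `PTRS_PER_PTE` replace the hand-computed byte sizes of the old `clear_table()` calls.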
+23 -5
arch/s390/kernel/vmlinux.lds.S
···
 
 	RO_DATA_SECTION(PAGE_SIZE)
 
-#ifdef CONFIG_SHARED_KERNEL
-	. = ALIGN(0x100000);	/* VM shared segments are 1MB aligned */
-#endif
-
 	. = ALIGN(PAGE_SIZE);
-	_eshared = .;		/* End of shareable data */
 	_sdata = .;		/* Start of data section */
 
 	. = ALIGN(PAGE_SIZE);
···
 
 	.exit.data : {
 		EXIT_DATA
+	}
+
+	/*
+	 * struct alt_inst entries. From the header (alternative.h):
+	 * "Alternative instructions for different CPU types or capabilities"
+	 * Think locking instructions on spinlocks.
+	 * Note, that it is a part of __init region.
+	 */
+	. = ALIGN(8);
+	.altinstructions : {
+		__alt_instructions = .;
+		*(.altinstructions)
+		__alt_instructions_end = .;
+	}
+
+	/*
+	 * And here are the replacement instructions. The linker sticks
+	 * them as binary blobs. The .altinstructions has enough data to
+	 * get the address and the length of them to patch the kernel safely.
+	 * Note, that it is a part of __init region.
+	 */
+	.altinstr_replacement : {
+		*(.altinstr_replacement)
 	}
 
 	/* early.c uses stsi, which requires page aligned data.
+1 -1
arch/s390/kvm/Makefile
···
 ccflags-y := -Ivirt/kvm -Iarch/s390/kvm
 
 kvm-objs := $(common-objs) kvm-s390.o intercept.o interrupt.o priv.o sigp.o
-kvm-objs += diag.o gaccess.o guestdbg.o sthyi.o vsie.o
+kvm-objs += diag.o gaccess.o guestdbg.o vsie.o
 
 obj-$(CONFIG_KVM) += kvm.o
+56
arch/s390/kvm/intercept.c
··· 18 18 #include <asm/kvm_host.h> 19 19 #include <asm/asm-offsets.h> 20 20 #include <asm/irq.h> 21 + #include <asm/sysinfo.h> 21 22 22 23 #include "kvm-s390.h" 23 24 #include "gaccess.h" ··· 359 358 return kvm_s390_handle_sigp_pei(vcpu); 360 359 361 360 return -EOPNOTSUPP; 361 + } 362 + 363 + /* 364 + * Handle the sthyi instruction that provides the guest with system 365 + * information, like current CPU resources available at each level of 366 + * the machine. 367 + */ 368 + int handle_sthyi(struct kvm_vcpu *vcpu) 369 + { 370 + int reg1, reg2, r = 0; 371 + u64 code, addr, cc = 0, rc = 0; 372 + struct sthyi_sctns *sctns = NULL; 373 + 374 + if (!test_kvm_facility(vcpu->kvm, 74)) 375 + return kvm_s390_inject_program_int(vcpu, PGM_OPERATION); 376 + 377 + kvm_s390_get_regs_rre(vcpu, &reg1, &reg2); 378 + code = vcpu->run->s.regs.gprs[reg1]; 379 + addr = vcpu->run->s.regs.gprs[reg2]; 380 + 381 + vcpu->stat.instruction_sthyi++; 382 + VCPU_EVENT(vcpu, 3, "STHYI: fc: %llu addr: 0x%016llx", code, addr); 383 + trace_kvm_s390_handle_sthyi(vcpu, code, addr); 384 + 385 + if (reg1 == reg2 || reg1 & 1 || reg2 & 1) 386 + return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION); 387 + 388 + if (code & 0xffff) { 389 + cc = 3; 390 + rc = 4; 391 + goto out; 392 + } 393 + 394 + if (addr & ~PAGE_MASK) 395 + return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION); 396 + 397 + sctns = (void *)get_zeroed_page(GFP_KERNEL); 398 + if (!sctns) 399 + return -ENOMEM; 400 + 401 + cc = sthyi_fill(sctns, &rc); 402 + 403 + out: 404 + if (!cc) { 405 + r = write_guest(vcpu, addr, reg2, sctns, PAGE_SIZE); 406 + if (r) { 407 + free_page((unsigned long)sctns); 408 + return kvm_s390_inject_prog_cond(vcpu, r); 409 + } 410 + } 411 + 412 + free_page((unsigned long)sctns); 413 + vcpu->run->s.regs.gprs[reg2 + 1] = rc; 414 + kvm_s390_set_psw_cc(vcpu, cc); 415 + return r; 362 416 } 363 417 364 418 static int handle_operexc(struct kvm_vcpu *vcpu)
+3 -3
arch/s390/kvm/interrupt.c
··· 2483 2483 2484 2484 mci.val = mcck_info->mcic; 2485 2485 if (mci.sr) 2486 - cr14 |= MCCK_CR14_RECOVERY_SUB_MASK; 2486 + cr14 |= CR14_RECOVERY_SUBMASK; 2487 2487 if (mci.dg) 2488 - cr14 |= MCCK_CR14_DEGRAD_SUB_MASK; 2488 + cr14 |= CR14_DEGRADATION_SUBMASK; 2489 2489 if (mci.w) 2490 - cr14 |= MCCK_CR14_WARN_SUB_MASK; 2490 + cr14 |= CR14_WARNING_SUBMASK; 2491 2491 2492 2492 mchk = mci.ck ? &inti.mchk : &irq.u.mchk; 2493 2493 mchk->cr14 = cr14;
+1 -3
arch/s390/kvm/kvm-s390.c
··· 1884 1884 1885 1885 rc = -ENOMEM; 1886 1886 1887 - ratelimit_state_init(&kvm->arch.sthyi_limit, 5 * HZ, 500); 1888 - 1889 1887 kvm->arch.use_esca = 0; /* start with basic SCA */ 1890 1888 if (!sclp.has_64bscao) 1891 1889 alloc_flags |= GFP_DMA; ··· 3281 3283 */ 3282 3284 if ((kvm_run->kvm_dirty_regs & KVM_SYNC_RICCB) && 3283 3285 test_kvm_facility(vcpu->kvm, 64) && 3284 - riccb->valid && 3286 + riccb->v && 3285 3287 !(vcpu->arch.sie_block->ecb3 & ECB3_RI)) { 3286 3288 VCPU_EVENT(vcpu, 3, "%s", "ENABLE: RI (sync_regs)"); 3287 3289 vcpu->arch.sie_block->ecb3 |= ECB3_RI;
+2 -3
arch/s390/kvm/kvm-s390.h
··· 242 242 kvm_s390_rewind_psw(vcpu, kvm_s390_get_ilen(vcpu)); 243 243 } 244 244 245 + int handle_sthyi(struct kvm_vcpu *vcpu); 246 + 245 247 /* implemented in priv.c */ 246 248 int is_valid_psw(psw_t *psw); 247 249 int kvm_s390_handle_aa(struct kvm_vcpu *vcpu); ··· 269 267 /* implemented in sigp.c */ 270 268 int kvm_s390_handle_sigp(struct kvm_vcpu *vcpu); 271 269 int kvm_s390_handle_sigp_pei(struct kvm_vcpu *vcpu); 272 - 273 - /* implemented in sthyi.c */ 274 - int handle_sthyi(struct kvm_vcpu *vcpu); 275 270 276 271 /* implemented in kvm-s390.c */ 277 272 void kvm_s390_set_tod_clock_ext(struct kvm *kvm,
+115 -67
arch/s390/kvm/sthyi.c → arch/s390/kernel/sthyi.c
··· 8 8 * Copyright IBM Corp. 2016 9 9 * Author(s): Janosch Frank <frankja@linux.vnet.ibm.com> 10 10 */ 11 - #include <linux/kvm_host.h> 12 11 #include <linux/errno.h> 13 12 #include <linux/pagemap.h> 14 13 #include <linux/vmalloc.h> 15 - #include <linux/ratelimit.h> 16 - 17 - #include <asm/kvm_host.h> 14 + #include <linux/syscalls.h> 15 + #include <linux/mutex.h> 18 16 #include <asm/asm-offsets.h> 19 17 #include <asm/sclp.h> 20 18 #include <asm/diag.h> 21 19 #include <asm/sysinfo.h> 22 20 #include <asm/ebcdic.h> 23 - 24 - #include "kvm-s390.h" 25 - #include "gaccess.h" 26 - #include "trace.h" 21 + #include <asm/facility.h> 22 + #include <asm/sthyi.h> 23 + #include "entry.h" 27 24 28 25 #define DED_WEIGHT 0xffff 29 26 /* ··· 140 143 struct cpu_inf cp; 141 144 struct cpu_inf ifl; 142 145 }; 146 + 147 + /* 148 + * STHYI requires extensive locking in the higher hypervisors 149 + * and is very computational/memory expensive. Therefore we 150 + * cache the retrieved data whose valid period is 1s. 
151 + */ 152 + #define CACHE_VALID_JIFFIES HZ 153 + 154 + struct sthyi_info { 155 + void *info; 156 + unsigned long end; 157 + }; 158 + 159 + static DEFINE_MUTEX(sthyi_mutex); 160 + static struct sthyi_info sthyi_cache; 143 161 144 162 static inline u64 cpu_id(u8 ctidx, void *diag224_buf) 145 163 { ··· 394 382 vfree(diag204_buf); 395 383 } 396 384 397 - static int sthyi(u64 vaddr) 385 + static int sthyi(u64 vaddr, u64 *rc) 398 386 { 399 387 register u64 code asm("0") = 0; 400 388 register u64 addr asm("2") = vaddr; 389 + register u64 rcode asm("3"); 401 390 int cc; 402 391 403 392 asm volatile( 404 393 ".insn rre,0xB2560000,%[code],%[addr]\n" 405 394 "ipm %[cc]\n" 406 395 "srl %[cc],28\n" 407 - : [cc] "=d" (cc) 396 + : [cc] "=d" (cc), "=d" (rcode) 408 397 : [code] "d" (code), [addr] "a" (addr) 409 - : "3", "memory", "cc"); 398 + : "memory", "cc"); 399 + *rc = rcode; 410 400 return cc; 411 401 } 412 402 413 - int handle_sthyi(struct kvm_vcpu *vcpu) 403 + static int fill_dst(void *dst, u64 *rc) 414 404 { 415 - int reg1, reg2, r = 0; 416 - u64 code, addr, cc = 0; 417 - struct sthyi_sctns *sctns = NULL; 418 - 419 - if (!test_kvm_facility(vcpu->kvm, 74)) 420 - return kvm_s390_inject_program_int(vcpu, PGM_OPERATION); 405 + struct sthyi_sctns *sctns = (struct sthyi_sctns *)dst; 421 406 422 407 /* 423 - * STHYI requires extensive locking in the higher hypervisors 424 - * and is very computational/memory expensive. Therefore we 425 - * ratelimit the executions per VM. 408 + * If the facility is on, we don't want to emulate the instruction. 409 + * We ask the hypervisor to provide the data. 
426 410 */ 427 - if (!__ratelimit(&vcpu->kvm->arch.sthyi_limit)) { 428 - kvm_s390_retry_instr(vcpu); 429 - return 0; 430 - } 431 - 432 - kvm_s390_get_regs_rre(vcpu, &reg1, &reg2); 433 - code = vcpu->run->s.regs.gprs[reg1]; 434 - addr = vcpu->run->s.regs.gprs[reg2]; 435 - 436 - vcpu->stat.instruction_sthyi++; 437 - VCPU_EVENT(vcpu, 3, "STHYI: fc: %llu addr: 0x%016llx", code, addr); 438 - trace_kvm_s390_handle_sthyi(vcpu, code, addr); 439 - 440 - if (reg1 == reg2 || reg1 & 1 || reg2 & 1) 441 - return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION); 442 - 443 - if (code & 0xffff) { 444 - cc = 3; 445 - goto out; 446 - } 447 - 448 - if (addr & ~PAGE_MASK) 449 - return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION); 450 - 451 - sctns = (void *)get_zeroed_page(GFP_KERNEL); 452 - if (!sctns) 453 - return -ENOMEM; 454 - 455 - /* 456 - * If we are a guest, we don't want to emulate an emulated 457 - * instruction. We ask the hypervisor to provide the data. 458 - */ 459 - if (test_facility(74)) { 460 - cc = sthyi((u64)sctns); 461 - goto out; 462 - } 411 + if (test_facility(74)) 412 + return sthyi((u64)dst, rc); 463 413 464 414 fill_hdr(sctns); 465 415 fill_stsi(sctns); 466 416 fill_diag(sctns); 417 + *rc = 0; 418 + return 0; 419 + } 467 420 468 - out: 469 - if (!cc) { 470 - r = write_guest(vcpu, addr, reg2, sctns, PAGE_SIZE); 471 - if (r) { 472 - free_page((unsigned long)sctns); 473 - return kvm_s390_inject_prog_cond(vcpu, r); 474 - } 421 + static int sthyi_init_cache(void) 422 + { 423 + if (sthyi_cache.info) 424 + return 0; 425 + sthyi_cache.info = (void *)get_zeroed_page(GFP_KERNEL); 426 + if (!sthyi_cache.info) 427 + return -ENOMEM; 428 + sthyi_cache.end = jiffies - 1; /* expired */ 429 + return 0; 430 + } 431 + 432 + static int sthyi_update_cache(u64 *rc) 433 + { 434 + int r; 435 + 436 + memset(sthyi_cache.info, 0, PAGE_SIZE); 437 + r = fill_dst(sthyi_cache.info, rc); 438 + if (r) 439 + return r; 440 + sthyi_cache.end = jiffies + CACHE_VALID_JIFFIES; 441 + 
return r; 442 + } 443 + 444 + /* 445 + * sthyi_fill - Fill page with data returned by the STHYI instruction 446 + * 447 + * @dst: Pointer to zeroed page 448 + * @rc: Pointer for storing the return code of the instruction 449 + * 450 + * Fills the destination with system information returned by the STHYI 451 + * instruction. The data is generated by emulation or execution of STHYI, 452 + * if available. The return value is the condition code that would be 453 + * returned, the rc parameter is the return code which is passed in 454 + * register R2 + 1. 455 + */ 456 + int sthyi_fill(void *dst, u64 *rc) 457 + { 458 + int r; 459 + 460 + mutex_lock(&sthyi_mutex); 461 + r = sthyi_init_cache(); 462 + if (r) 463 + goto out; 464 + 465 + if (time_is_before_jiffies(sthyi_cache.end)) { 466 + /* cache expired */ 467 + r = sthyi_update_cache(rc); 468 + if (r) 469 + goto out; 475 470 } 471 + *rc = 0; 472 + memcpy(dst, sthyi_cache.info, PAGE_SIZE); 473 + out: 474 + mutex_unlock(&sthyi_mutex); 475 + return r; 476 + } 477 + EXPORT_SYMBOL_GPL(sthyi_fill); 476 478 477 - free_page((unsigned long)sctns); 478 - vcpu->run->s.regs.gprs[reg2 + 1] = cc ? 4 : 0; 479 - kvm_s390_set_psw_cc(vcpu, cc); 479 + SYSCALL_DEFINE4(s390_sthyi, unsigned long, function_code, void __user *, buffer, 480 + u64 __user *, return_code, unsigned long, flags) 481 + { 482 + u64 sthyi_rc; 483 + void *info; 484 + int r; 485 + 486 + if (flags) 487 + return -EINVAL; 488 + if (function_code != STHYI_FC_CP_IFL_CAP) 489 + return -EOPNOTSUPP; 490 + info = (void *)get_zeroed_page(GFP_KERNEL); 491 + if (!info) 492 + return -ENOMEM; 493 + r = sthyi_fill(info, &sthyi_rc); 494 + if (r < 0) 495 + goto out; 496 + if (return_code && put_user(sthyi_rc, return_code)) { 497 + r = -EFAULT; 498 + goto out; 499 + } 500 + if (copy_to_user(buffer, info, PAGE_SIZE)) 501 + r = -EFAULT; 502 + out: 503 + free_page((unsigned long)info); 480 504 return r; 481 505 }
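The mutex-protected cache above replaces the per-VM ratelimit: sthyi_fill() regenerates the page only when `sthyi_cache.end` has passed. The expiry logic can be sketched in plain C with a fake jiffies counter standing in for the kernel's (all names and the CACHE_VALID_JIFFIES value here are illustrative):

```c
/* Fake jiffies counter for the sketch; the kernel uses the real one */
static unsigned long fake_jiffies;

#define CACHE_VALID_JIFFIES 100	/* stand-in for HZ (1 second) */

struct cache_state {
	int filled;		/* how often the data was regenerated */
	unsigned long end;	/* jiffy at which the copy expires */
};

/* Returns 1 if the cache had to be refreshed, 0 if it was reused */
static int cache_get(struct cache_state *c)
{
	/* signed difference == time_is_before_jiffies(c->end) */
	if ((long)(c->end - fake_jiffies) < 0) {
		c->filled++;
		c->end = fake_jiffies + CACHE_VALID_JIFFIES;
		return 1;
	}
	return 0;
}
```

Initializing `end` to `jiffies - 1` in sthyi_init_cache() guarantees the very first lookup takes the refresh path.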
+56 -8
arch/s390/lib/mem.S
··· 79 79 ex %r4,0(%r3) 80 80 br %r14 81 81 .Lmemset_fill: 82 - stc %r3,0(%r2) 83 82 cghi %r4,1 84 83 lgr %r1,%r2 85 - ber %r14 84 + je .Lmemset_fill_exit 86 85 aghi %r4,-2 87 - srlg %r3,%r4,8 88 - ltgr %r3,%r3 86 + srlg %r5,%r4,8 87 + ltgr %r5,%r5 89 88 jz .Lmemset_fill_remainder 90 89 .Lmemset_fill_loop: 91 - mvc 1(256,%r1),0(%r1) 90 + stc %r3,0(%r1) 91 + mvc 1(255,%r1),0(%r1) 92 92 la %r1,256(%r1) 93 - brctg %r3,.Lmemset_fill_loop 93 + brctg %r5,.Lmemset_fill_loop 94 94 .Lmemset_fill_remainder: 95 - larl %r3,.Lmemset_mvc 96 - ex %r4,0(%r3) 95 + stc %r3,0(%r1) 96 + larl %r5,.Lmemset_mvc 97 + ex %r4,0(%r5) 98 + br %r14 99 + .Lmemset_fill_exit: 100 + stc %r3,0(%r1) 97 101 br %r14 98 102 .Lmemset_xc: 99 103 xc 0(1,%r1),0(%r1) ··· 131 127 .Lmemcpy_mvc: 132 128 mvc 0(1,%r1),0(%r3) 133 129 EXPORT_SYMBOL(memcpy) 130 + 131 + /* 132 + * __memset16/32/64 133 + * 134 + * void *__memset16(uint16_t *s, uint16_t v, size_t count) 135 + * void *__memset32(uint32_t *s, uint32_t v, size_t count) 136 + * void *__memset64(uint64_t *s, uint64_t v, size_t count) 137 + */ 138 + .macro __MEMSET bits,bytes,insn 139 + ENTRY(__memset\bits) 140 + ltgr %r4,%r4 141 + bzr %r14 142 + cghi %r4,\bytes 143 + je .L__memset_exit\bits 144 + aghi %r4,-(\bytes+1) 145 + srlg %r5,%r4,8 146 + ltgr %r5,%r5 147 + lgr %r1,%r2 148 + jz .L__memset_remainder\bits 149 + .L__memset_loop\bits: 150 + \insn %r3,0(%r1) 151 + mvc \bytes(256-\bytes,%r1),0(%r1) 152 + la %r1,256(%r1) 153 + brctg %r5,.L__memset_loop\bits 154 + .L__memset_remainder\bits: 155 + \insn %r3,0(%r1) 156 + larl %r5,.L__memset_mvc\bits 157 + ex %r4,0(%r5) 158 + br %r14 159 + .L__memset_exit\bits: 160 + \insn %r3,0(%r2) 161 + br %r14 162 + .L__memset_mvc\bits: 163 + mvc \bytes(1,%r1),0(%r1) 164 + .endm 165 + 166 + __MEMSET 16,2,sth 167 + EXPORT_SYMBOL(__memset16) 168 + 169 + __MEMSET 32,4,st 170 + EXPORT_SYMBOL(__memset32) 171 + 172 + __MEMSET 64,8,stg 173 + EXPORT_SYMBOL(__memset64)
+226 -175
arch/s390/lib/spinlock.c
··· 9 9 #include <linux/types.h> 10 10 #include <linux/export.h> 11 11 #include <linux/spinlock.h> 12 + #include <linux/jiffies.h> 12 13 #include <linux/init.h> 13 14 #include <linux/smp.h> 15 + #include <linux/percpu.h> 16 + #include <asm/alternative.h> 14 17 #include <asm/io.h> 15 18 16 19 int spin_retry = -1; ··· 36 33 } 37 34 __setup("spin_retry=", spin_retry_setup); 38 35 36 + struct spin_wait { 37 + struct spin_wait *next, *prev; 38 + int node_id; 39 + } __aligned(32); 40 + 41 + static DEFINE_PER_CPU_ALIGNED(struct spin_wait, spin_wait[4]); 42 + 43 + #define _Q_LOCK_CPU_OFFSET 0 44 + #define _Q_LOCK_STEAL_OFFSET 16 45 + #define _Q_TAIL_IDX_OFFSET 18 46 + #define _Q_TAIL_CPU_OFFSET 20 47 + 48 + #define _Q_LOCK_CPU_MASK 0x0000ffff 49 + #define _Q_LOCK_STEAL_ADD 0x00010000 50 + #define _Q_LOCK_STEAL_MASK 0x00030000 51 + #define _Q_TAIL_IDX_MASK 0x000c0000 52 + #define _Q_TAIL_CPU_MASK 0xfff00000 53 + 54 + #define _Q_LOCK_MASK (_Q_LOCK_CPU_MASK | _Q_LOCK_STEAL_MASK) 55 + #define _Q_TAIL_MASK (_Q_TAIL_IDX_MASK | _Q_TAIL_CPU_MASK) 56 + 57 + void arch_spin_lock_setup(int cpu) 58 + { 59 + struct spin_wait *node; 60 + int ix; 61 + 62 + node = per_cpu_ptr(&spin_wait[0], cpu); 63 + for (ix = 0; ix < 4; ix++, node++) { 64 + memset(node, 0, sizeof(*node)); 65 + node->node_id = ((cpu + 1) << _Q_TAIL_CPU_OFFSET) + 66 + (ix << _Q_TAIL_IDX_OFFSET); 67 + } 68 + } 69 + 39 70 static inline int arch_load_niai4(int *lock) 40 71 { 41 72 int owner; 42 73 43 74 asm volatile( 44 - #ifdef CONFIG_HAVE_MARCH_ZEC12_FEATURES 45 - " .long 0xb2fa0040\n" /* NIAI 4 */ 46 - #endif 75 + ALTERNATIVE("", ".long 0xb2fa0040", 49) /* NIAI 4 */ 47 76 " l %0,%1\n" 48 77 : "=d" (owner) : "Q" (*lock) : "memory"); 49 78 return owner; ··· 86 51 int expected = old; 87 52 88 53 asm volatile( 89 - #ifdef CONFIG_HAVE_MARCH_ZEC12_FEATURES 90 - " .long 0xb2fa0080\n" /* NIAI 8 */ 91 - #endif 54 + ALTERNATIVE("", ".long 0xb2fa0080", 49) /* NIAI 8 */ 92 55 " cs %0,%3,%1\n" 93 56 : "=d" (old), "=Q" (*lock) 94 57 : 
"0" (old), "d" (new), "Q" (*lock) ··· 94 61 return expected == old; 95 62 } 96 63 64 + static inline struct spin_wait *arch_spin_decode_tail(int lock) 65 + { 66 + int ix, cpu; 67 + 68 + ix = (lock & _Q_TAIL_IDX_MASK) >> _Q_TAIL_IDX_OFFSET; 69 + cpu = (lock & _Q_TAIL_CPU_MASK) >> _Q_TAIL_CPU_OFFSET; 70 + return per_cpu_ptr(&spin_wait[ix], cpu - 1); 71 + } 72 + 73 + static inline int arch_spin_yield_target(int lock, struct spin_wait *node) 74 + { 75 + if (lock & _Q_LOCK_CPU_MASK) 76 + return lock & _Q_LOCK_CPU_MASK; 77 + if (node == NULL || node->prev == NULL) 78 + return 0; /* 0 -> no target cpu */ 79 + while (node->prev) 80 + node = node->prev; 81 + return node->node_id >> _Q_TAIL_CPU_OFFSET; 82 + } 83 + 84 + static inline void arch_spin_lock_queued(arch_spinlock_t *lp) 85 + { 86 + struct spin_wait *node, *next; 87 + int lockval, ix, node_id, tail_id, old, new, owner, count; 88 + 89 + ix = S390_lowcore.spinlock_index++; 90 + barrier(); 91 + lockval = SPINLOCK_LOCKVAL; /* cpu + 1 */ 92 + node = this_cpu_ptr(&spin_wait[ix]); 93 + node->prev = node->next = NULL; 94 + node_id = node->node_id; 95 + 96 + /* Enqueue the node for this CPU in the spinlock wait queue */ 97 + while (1) { 98 + old = READ_ONCE(lp->lock); 99 + if ((old & _Q_LOCK_CPU_MASK) == 0 && 100 + (old & _Q_LOCK_STEAL_MASK) != _Q_LOCK_STEAL_MASK) { 101 + /* 102 + * The lock is free but there may be waiters. 103 + * With no waiters simply take the lock, if there 104 + * are waiters try to steal the lock. The lock may 105 + * be stolen three times before the next queued 106 + * waiter will get the lock. 107 + */ 108 + new = (old ? (old + _Q_LOCK_STEAL_ADD) : 0) | lockval; 109 + if (__atomic_cmpxchg_bool(&lp->lock, old, new)) 110 + /* Got the lock */ 111 + goto out; 112 + /* lock passing in progress */ 113 + continue; 114 + } 115 + /* Make the node of this CPU the new tail. 
*/ 116 + new = node_id | (old & _Q_LOCK_MASK); 117 + if (__atomic_cmpxchg_bool(&lp->lock, old, new)) 118 + break; 119 + } 120 + /* Set the 'next' pointer of the tail node in the queue */ 121 + tail_id = old & _Q_TAIL_MASK; 122 + if (tail_id != 0) { 123 + node->prev = arch_spin_decode_tail(tail_id); 124 + WRITE_ONCE(node->prev->next, node); 125 + } 126 + 127 + /* Pass the virtual CPU to the lock holder if it is not running */ 128 + owner = arch_spin_yield_target(old, node); 129 + if (owner && arch_vcpu_is_preempted(owner - 1)) 130 + smp_yield_cpu(owner - 1); 131 + 132 + /* Spin on the CPU local node->prev pointer */ 133 + if (tail_id != 0) { 134 + count = spin_retry; 135 + while (READ_ONCE(node->prev) != NULL) { 136 + if (count-- >= 0) 137 + continue; 138 + count = spin_retry; 139 + /* Query running state of lock holder again. */ 140 + owner = arch_spin_yield_target(old, node); 141 + if (owner && arch_vcpu_is_preempted(owner - 1)) 142 + smp_yield_cpu(owner - 1); 143 + } 144 + } 145 + 146 + /* Spin on the lock value in the spinlock_t */ 147 + count = spin_retry; 148 + while (1) { 149 + old = READ_ONCE(lp->lock); 150 + owner = old & _Q_LOCK_CPU_MASK; 151 + if (!owner) { 152 + tail_id = old & _Q_TAIL_MASK; 153 + new = ((tail_id != node_id) ? 
tail_id : 0) | lockval; 154 + if (__atomic_cmpxchg_bool(&lp->lock, old, new)) 155 + /* Got the lock */ 156 + break; 157 + continue; 158 + } 159 + if (count-- >= 0) 160 + continue; 161 + count = spin_retry; 162 + if (!MACHINE_IS_LPAR || arch_vcpu_is_preempted(owner - 1)) 163 + smp_yield_cpu(owner - 1); 164 + } 165 + 166 + /* Pass lock_spin job to next CPU in the queue */ 167 + if (node_id && tail_id != node_id) { 168 + /* Wait until the next CPU has set up the 'next' pointer */ 169 + while ((next = READ_ONCE(node->next)) == NULL) 170 + ; 171 + next->prev = NULL; 172 + } 173 + 174 + out: 175 + S390_lowcore.spinlock_index--; 176 + } 177 + 178 + static inline void arch_spin_lock_classic(arch_spinlock_t *lp) 179 + { 180 + int lockval, old, new, owner, count; 181 + 182 + lockval = SPINLOCK_LOCKVAL; /* cpu + 1 */ 183 + 184 + /* Pass the virtual CPU to the lock holder if it is not running */ 185 + owner = arch_spin_yield_target(ACCESS_ONCE(lp->lock), NULL); 186 + if (owner && arch_vcpu_is_preempted(owner - 1)) 187 + smp_yield_cpu(owner - 1); 188 + 189 + count = spin_retry; 190 + while (1) { 191 + old = arch_load_niai4(&lp->lock); 192 + owner = old & _Q_LOCK_CPU_MASK; 193 + /* Try to get the lock if it is free. 
*/ 194 + if (!owner) { 195 + new = (old & _Q_TAIL_MASK) | lockval; 196 + if (arch_cmpxchg_niai8(&lp->lock, old, new)) 197 + /* Got the lock */ 198 + return; 199 + continue; 200 + } 201 + if (count-- >= 0) 202 + continue; 203 + count = spin_retry; 204 + if (!MACHINE_IS_LPAR || arch_vcpu_is_preempted(owner - 1)) 205 + smp_yield_cpu(owner - 1); 206 + } 207 + } 208 + 97 209 void arch_spin_lock_wait(arch_spinlock_t *lp) 98 210 { 99 - int cpu = SPINLOCK_LOCKVAL; 100 - int owner, count; 101 - 102 - /* Pass the virtual CPU to the lock holder if it is not running */ 103 - owner = arch_load_niai4(&lp->lock); 104 - if (owner && arch_vcpu_is_preempted(~owner)) 105 - smp_yield_cpu(~owner); 106 - 107 - count = spin_retry; 108 - while (1) { 109 - owner = arch_load_niai4(&lp->lock); 110 - /* Try to get the lock if it is free. */ 111 - if (!owner) { 112 - if (arch_cmpxchg_niai8(&lp->lock, 0, cpu)) 113 - return; 114 - continue; 115 - } 116 - if (count-- >= 0) 117 - continue; 118 - count = spin_retry; 119 - /* 120 - * For multiple layers of hypervisors, e.g. z/VM + LPAR 121 - * yield the CPU unconditionally. For LPAR rely on the 122 - * sense running status. 
123 - */ 124 - if (!MACHINE_IS_LPAR || arch_vcpu_is_preempted(~owner)) 125 - smp_yield_cpu(~owner); 126 - } 211 + /* Use classic spinlocks + niai if the steal time is >= 10% */ 212 + if (test_cpu_flag(CIF_DEDICATED_CPU)) 213 + arch_spin_lock_queued(lp); 214 + else 215 + arch_spin_lock_classic(lp); 127 216 } 128 217 EXPORT_SYMBOL(arch_spin_lock_wait); 129 - 130 - void arch_spin_lock_wait_flags(arch_spinlock_t *lp, unsigned long flags) 131 - { 132 - int cpu = SPINLOCK_LOCKVAL; 133 - int owner, count; 134 - 135 - local_irq_restore(flags); 136 - 137 - /* Pass the virtual CPU to the lock holder if it is not running */ 138 - owner = arch_load_niai4(&lp->lock); 139 - if (owner && arch_vcpu_is_preempted(~owner)) 140 - smp_yield_cpu(~owner); 141 - 142 - count = spin_retry; 143 - while (1) { 144 - owner = arch_load_niai4(&lp->lock); 145 - /* Try to get the lock if it is free. */ 146 - if (!owner) { 147 - local_irq_disable(); 148 - if (arch_cmpxchg_niai8(&lp->lock, 0, cpu)) 149 - return; 150 - local_irq_restore(flags); 151 - continue; 152 - } 153 - if (count-- >= 0) 154 - continue; 155 - count = spin_retry; 156 - /* 157 - * For multiple layers of hypervisors, e.g. z/VM + LPAR 158 - * yield the CPU unconditionally. For LPAR rely on the 159 - * sense running status. 
160 - */ 161 - if (!MACHINE_IS_LPAR || arch_vcpu_is_preempted(~owner)) 162 - smp_yield_cpu(~owner); 163 - } 164 - } 165 - EXPORT_SYMBOL(arch_spin_lock_wait_flags); 166 218 167 219 int arch_spin_trylock_retry(arch_spinlock_t *lp) 168 220 { ··· 266 148 } 267 149 EXPORT_SYMBOL(arch_spin_trylock_retry); 268 150 269 - void _raw_read_lock_wait(arch_rwlock_t *rw) 151 + void arch_read_lock_wait(arch_rwlock_t *rw) 270 152 { 271 - int count = spin_retry; 272 - int owner, old; 273 - 274 - #ifdef CONFIG_HAVE_MARCH_Z196_FEATURES 275 - __RAW_LOCK(&rw->lock, -1, __RAW_OP_ADD); 276 - #endif 277 - owner = 0; 278 - while (1) { 279 - if (count-- <= 0) { 280 - if (owner && arch_vcpu_is_preempted(~owner)) 281 - smp_yield_cpu(~owner); 282 - count = spin_retry; 283 - } 284 - old = ACCESS_ONCE(rw->lock); 285 - owner = ACCESS_ONCE(rw->owner); 286 - if (old < 0) 287 - continue; 288 - if (__atomic_cmpxchg_bool(&rw->lock, old, old + 1)) 289 - return; 153 + if (unlikely(in_interrupt())) { 154 + while (READ_ONCE(rw->cnts) & 0x10000) 155 + barrier(); 156 + return; 290 157 } 291 - } 292 - EXPORT_SYMBOL(_raw_read_lock_wait); 293 158 294 - int _raw_read_trylock_retry(arch_rwlock_t *rw) 159 + /* Remove this reader again to allow recursive read locking */ 160 + __atomic_add_const(-1, &rw->cnts); 161 + /* Put the reader into the wait queue */ 162 + arch_spin_lock(&rw->wait); 163 + /* Now add this reader to the count value again */ 164 + __atomic_add_const(1, &rw->cnts); 165 + /* Loop until the writer is done */ 166 + while (READ_ONCE(rw->cnts) & 0x10000) 167 + barrier(); 168 + arch_spin_unlock(&rw->wait); 169 + } 170 + EXPORT_SYMBOL(arch_read_lock_wait); 171 + 172 + void arch_write_lock_wait(arch_rwlock_t *rw) 295 173 { 296 - int count = spin_retry; 297 174 int old; 298 175 299 - while (count-- > 0) { 300 - old = ACCESS_ONCE(rw->lock); 301 - if (old < 0) 302 - continue; 303 - if (__atomic_cmpxchg_bool(&rw->lock, old, old + 1)) 304 - return 1; 305 - } 306 - return 0; 307 - } 308 - 
EXPORT_SYMBOL(_raw_read_trylock_retry); 176 + /* Add this CPU to the write waiters */ 177 + __atomic_add(0x20000, &rw->cnts); 309 178 310 - #ifdef CONFIG_HAVE_MARCH_Z196_FEATURES 179 + /* Put the writer into the wait queue */ 180 + arch_spin_lock(&rw->wait); 311 181 312 - void _raw_write_lock_wait(arch_rwlock_t *rw, int prev) 313 - { 314 - int count = spin_retry; 315 - int owner, old; 316 - 317 - owner = 0; 318 182 while (1) { 319 - if (count-- <= 0) { 320 - if (owner && arch_vcpu_is_preempted(~owner)) 321 - smp_yield_cpu(~owner); 322 - count = spin_retry; 323 - } 324 - old = ACCESS_ONCE(rw->lock); 325 - owner = ACCESS_ONCE(rw->owner); 326 - smp_mb(); 327 - if (old >= 0) { 328 - prev = __RAW_LOCK(&rw->lock, 0x80000000, __RAW_OP_OR); 329 - old = prev; 330 - } 331 - if ((old & 0x7fffffff) == 0 && prev >= 0) 183 + old = READ_ONCE(rw->cnts); 184 + if ((old & 0x1ffff) == 0 && 185 + __atomic_cmpxchg_bool(&rw->cnts, old, old | 0x10000)) 186 + /* Got the lock */ 332 187 break; 188 + barrier(); 333 189 } 190 + 191 + arch_spin_unlock(&rw->wait); 334 192 } 335 - EXPORT_SYMBOL(_raw_write_lock_wait); 193 + EXPORT_SYMBOL(arch_write_lock_wait); 336 194 337 - #else /* CONFIG_HAVE_MARCH_Z196_FEATURES */ 338 - 339 - void _raw_write_lock_wait(arch_rwlock_t *rw) 195 + void arch_spin_relax(arch_spinlock_t *lp) 340 196 { 341 - int count = spin_retry; 342 - int owner, old, prev; 197 + int cpu; 343 198 344 - prev = 0x80000000; 345 - owner = 0; 346 - while (1) { 347 - if (count-- <= 0) { 348 - if (owner && arch_vcpu_is_preempted(~owner)) 349 - smp_yield_cpu(~owner); 350 - count = spin_retry; 351 - } 352 - old = ACCESS_ONCE(rw->lock); 353 - owner = ACCESS_ONCE(rw->owner); 354 - if (old >= 0 && 355 - __atomic_cmpxchg_bool(&rw->lock, old, old | 0x80000000)) 356 - prev = old; 357 - else 358 - smp_mb(); 359 - if ((old & 0x7fffffff) == 0 && prev >= 0) 360 - break; 361 - } 362 - } 363 - EXPORT_SYMBOL(_raw_write_lock_wait); 364 - 365 - #endif /* CONFIG_HAVE_MARCH_Z196_FEATURES */ 366 - 367 - int 
_raw_write_trylock_retry(arch_rwlock_t *rw) 368 - { 369 - int count = spin_retry; 370 - int old; 371 - 372 - while (count-- > 0) { 373 - old = ACCESS_ONCE(rw->lock); 374 - if (old) 375 - continue; 376 - if (__atomic_cmpxchg_bool(&rw->lock, 0, 0x80000000)) 377 - return 1; 378 - } 379 - return 0; 380 - } 381 - EXPORT_SYMBOL(_raw_write_trylock_retry); 382 - 383 - void arch_lock_relax(int cpu) 384 - { 199 + cpu = READ_ONCE(lp->lock) & _Q_LOCK_CPU_MASK; 385 200 if (!cpu) 386 201 return; 387 - if (MACHINE_IS_LPAR && !arch_vcpu_is_preempted(~cpu)) 202 + if (MACHINE_IS_LPAR && !arch_vcpu_is_preempted(cpu - 1)) 388 203 return; 389 - smp_yield_cpu(~cpu); 204 + smp_yield_cpu(cpu - 1); 390 205 } 391 - EXPORT_SYMBOL(arch_lock_relax); 206 + EXPORT_SYMBOL(arch_spin_relax);
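The queued spinlock packs everything into one 32-bit word using the _Q_* constants above: bits 0-15 hold owner CPU + 1, bits 16-17 a steal counter, bits 18-19 the per-CPU wait-node index, and bits 20-31 the tail CPU + 1. A sketch of the encode/decode arithmetic, using the constants exactly as defined in the patch:

```c
/* Lock word layout constants, copied from the patch */
#define _Q_TAIL_IDX_OFFSET	18
#define _Q_TAIL_CPU_OFFSET	20

#define _Q_LOCK_CPU_MASK	0x0000ffff
#define _Q_TAIL_IDX_MASK	0x000c0000
#define _Q_TAIL_CPU_MASK	0xfff00000

/* node_id as arch_spin_lock_setup() precomputes it per cpu/index */
static int make_node_id(int cpu, int ix)
{
	return ((cpu + 1) << _Q_TAIL_CPU_OFFSET) +
	       (ix << _Q_TAIL_IDX_OFFSET);
}

/* Inverse mapping, as in arch_spin_decode_tail() */
static int tail_cpu(int lock)
{
	return ((lock & _Q_TAIL_CPU_MASK) >> _Q_TAIL_CPU_OFFSET) - 1;
}

static int tail_ix(int lock)
{
	return (lock & _Q_TAIL_IDX_MASK) >> _Q_TAIL_IDX_OFFSET;
}
```

Storing CPU + 1 rather than CPU keeps the value nonzero, so a zero lock/tail field can mean "free" or "no waiters"; the two-bit steal counter is what limits lock stealing to three passes before a queued waiter must win.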
+14 -14
arch/s390/lib/string.c
··· 56 56 * 57 57 * returns the minimum of the length of @s and @n 58 58 */ 59 - size_t strnlen(const char * s, size_t n) 59 + size_t strnlen(const char *s, size_t n) 60 60 { 61 61 return __strnend(s, n) - s; 62 62 } ··· 195 195 196 196 /** 197 197 * strcmp - Compare two strings 198 - * @cs: One string 199 - * @ct: Another string 198 + * @s1: One string 199 + * @s2: Another string 200 200 * 201 - * returns 0 if @cs and @ct are equal, 202 - * < 0 if @cs is less than @ct 203 - * > 0 if @cs is greater than @ct 201 + * returns 0 if @s1 and @s2 are equal, 202 + * < 0 if @s1 is less than @s2 203 + * > 0 if @s1 is greater than @s2 204 204 */ 205 - int strcmp(const char *cs, const char *ct) 205 + int strcmp(const char *s1, const char *s2) 206 206 { 207 207 register int r0 asm("0") = 0; 208 208 int ret = 0; ··· 214 214 " ic %1,0(%3)\n" 215 215 " sr %0,%1\n" 216 216 "1:" 217 - : "+d" (ret), "+d" (r0), "+a" (cs), "+a" (ct) 217 + : "+d" (ret), "+d" (r0), "+a" (s1), "+a" (s2) 218 218 : : "cc", "memory"); 219 219 return ret; 220 220 } ··· 225 225 * @s: The string to be searched 226 226 * @c: The character to search for 227 227 */ 228 - char * strrchr(const char * s, int c) 228 + char *strrchr(const char *s, int c) 229 229 { 230 230 size_t len = __strend(s) - s; 231 231 ··· 261 261 * @s1: The string to be searched 262 262 * @s2: The string to search for 263 263 */ 264 - char * strstr(const char * s1,const char * s2) 264 + char *strstr(const char *s1, const char *s2) 265 265 { 266 266 int l1, l2; 267 267 ··· 307 307 308 308 /** 309 309 * memcmp - Compare two areas of memory 310 - * @cs: One area of memory 311 - * @ct: Another area of memory 310 + * @s1: One area of memory 311 + * @s2: Another area of memory 312 312 * @count: The size of the area. 
313 313 */ 314 - int memcmp(const void *cs, const void *ct, size_t n) 314 + int memcmp(const void *s1, const void *s2, size_t n) 315 315 { 316 316 int ret; 317 317 318 - ret = clcle(cs, n, ct, n); 318 + ret = clcle(s1, n, s2, n); 319 319 if (ret) 320 320 ret = ret == 1 ? -1 : 1; 321 321 return ret;
+2 -2
arch/s390/mm/init.c
··· 145 145 146 146 void free_initmem(void) 147 147 { 148 - __set_memory((unsigned long) _sinittext, 149 - (_einittext - _sinittext) >> PAGE_SHIFT, 148 + __set_memory((unsigned long)_sinittext, 149 + (unsigned long)(_einittext - _sinittext) >> PAGE_SHIFT, 150 150 SET_MEMORY_RW | SET_MEMORY_NX); 151 151 free_initmem_default(POISON_FREE_INITMEM); 152 152 }
+7 -7
arch/s390/mm/pgalloc.c
··· 159 159 struct page *page_table_alloc_pgste(struct mm_struct *mm) 160 160 { 161 161 struct page *page; 162 - unsigned long *table; 162 + u64 *table; 163 163 164 164 page = alloc_page(GFP_KERNEL); 165 165 if (page) { 166 - table = (unsigned long *) page_to_phys(page); 167 - clear_table(table, _PAGE_INVALID, PAGE_SIZE/2); 168 - clear_table(table + PTRS_PER_PTE, 0, PAGE_SIZE/2); 166 + table = (u64 *)page_to_phys(page); 167 + memset64(table, _PAGE_INVALID, PTRS_PER_PTE); 168 + memset64(table + PTRS_PER_PTE, 0, PTRS_PER_PTE); 169 169 } 170 170 return page; 171 171 } ··· 222 222 if (mm_alloc_pgste(mm)) { 223 223 /* Return 4K page table with PGSTEs */ 224 224 atomic_set(&page->_mapcount, 3); 225 - clear_table(table, _PAGE_INVALID, PAGE_SIZE/2); 226 - clear_table(table + PTRS_PER_PTE, 0, PAGE_SIZE/2); 225 + memset64((u64 *)table, _PAGE_INVALID, PTRS_PER_PTE); 226 + memset64((u64 *)table + PTRS_PER_PTE, 0, PTRS_PER_PTE); 227 227 } else { 228 228 /* Return the first 2K fragment of the page */ 229 229 atomic_set(&page->_mapcount, 1); 230 - clear_table(table, _PAGE_INVALID, PAGE_SIZE); 230 + memset64((u64 *)table, _PAGE_INVALID, 2 * PTRS_PER_PTE); 231 231 spin_lock_bh(&mm->context.lock); 232 232 list_add(&page->lru, &mm->context.pgtable_list); 233 233 spin_unlock_bh(&mm->context.lock);
+8 -8
arch/s390/mm/vmem.c
··· 60 60 pte = (pte_t *) memblock_alloc(size, size); 61 61 if (!pte) 62 62 return NULL; 63 - clear_table((unsigned long *) pte, _PAGE_INVALID, size); 63 + memset64((u64 *)pte, _PAGE_INVALID, PTRS_PER_PTE); 64 64 return pte; 65 65 } 66 66 ··· 403 403 404 404 for_each_memblock(memory, reg) 405 405 vmem_add_mem(reg->base, reg->size); 406 - __set_memory((unsigned long) _stext, 407 - (_etext - _stext) >> PAGE_SHIFT, 406 + __set_memory((unsigned long)_stext, 407 + (unsigned long)(_etext - _stext) >> PAGE_SHIFT, 408 408 SET_MEMORY_RO | SET_MEMORY_X); 409 - __set_memory((unsigned long) _etext, 410 - (_eshared - _etext) >> PAGE_SHIFT, 409 + __set_memory((unsigned long)_etext, 410 + (unsigned long)(__end_rodata - _etext) >> PAGE_SHIFT, 411 411 SET_MEMORY_RO); 412 - __set_memory((unsigned long) _sinittext, 413 - (_einittext - _sinittext) >> PAGE_SHIFT, 412 + __set_memory((unsigned long)_sinittext, 413 + (unsigned long)(_einittext - _sinittext) >> PAGE_SHIFT, 414 414 SET_MEMORY_RO | SET_MEMORY_X); 415 415 pr_info("Write protected kernel read-only data: %luk\n", 416 - (_eshared - _stext) >> 10); 416 + (unsigned long)(__end_rodata - _stext) >> 10); 417 417 } 418 418 419 419 /*
+5 -2
arch/s390/net/bpf_jit.h
··· 53 53 * 54 54 * We get 160 bytes stack space from calling function, but only use 55 55 * 12 * 8 byte for old backchain, r15..r6, and tail_call_cnt. 56 + * 57 + * The stack size used by the BPF program ("BPF stack" above) is passed 58 + * via "aux->stack_depth". 56 59 */ 57 - #define STK_SPACE (MAX_BPF_STACK + 8 + 8 + 4 + 4 + 160) 60 + #define STK_SPACE_ADD (8 + 8 + 4 + 4 + 160) 58 61 #define STK_160_UNUSED (160 - 12 * 8) 59 - #define STK_OFF (STK_SPACE - STK_160_UNUSED) 62 + #define STK_OFF (STK_SPACE_ADD - STK_160_UNUSED) 60 63 #define STK_OFF_TMP 160 /* Offset of tmp buffer on stack */ 61 64 #define STK_OFF_HLEN 168 /* Offset of SKB header length on stack */ 62 65 #define STK_OFF_SKBP 176 /* Offset of SKB pointer on stack */
+13 -13
arch/s390/net/bpf_jit_comp.c
··· 320 320 /* 321 321 * Restore registers from "rs" (register start) to "re" (register end) on stack 322 322 */ 323 - static void restore_regs(struct bpf_jit *jit, u32 rs, u32 re) 323 + static void restore_regs(struct bpf_jit *jit, u32 rs, u32 re, u32 stack_depth) 324 324 { 325 325 u32 off = STK_OFF_R6 + (rs - 6) * 8; 326 326 327 327 if (jit->seen & SEEN_STACK) 328 - off += STK_OFF; 328 + off += STK_OFF + stack_depth; 329 329 330 330 if (rs == re) 331 331 /* lg %rs,off(%r15) */ ··· 369 369 * Save and restore clobbered registers (6-15) on stack. 370 370 * We save/restore registers in chunks with gap >= 2 registers. 371 371 */ 372 - static void save_restore_regs(struct bpf_jit *jit, int op) 372 + static void save_restore_regs(struct bpf_jit *jit, int op, u32 stack_depth) 373 373 { 374 374 375 375 int re = 6, rs; ··· 382 382 if (op == REGS_SAVE) 383 383 save_regs(jit, rs, re); 384 384 else 385 - restore_regs(jit, rs, re); 385 + restore_regs(jit, rs, re, stack_depth); 386 386 re++; 387 387 } while (re <= 15); 388 388 } ··· 414 414 * Save registers and create stack frame if necessary. 415 415 * See stack frame layout description in "bpf_jit.h"! 
416 416 */ 417 - static void bpf_jit_prologue(struct bpf_jit *jit) 417 + static void bpf_jit_prologue(struct bpf_jit *jit, u32 stack_depth) 418 418 { 419 419 if (jit->seen & SEEN_TAIL_CALL) { 420 420 /* xc STK_OFF_TCCNT(4,%r15),STK_OFF_TCCNT(%r15) */ ··· 427 427 /* Tail calls have to skip above initialization */ 428 428 jit->tail_call_start = jit->prg; 429 429 /* Save registers */ 430 - save_restore_regs(jit, REGS_SAVE); 430 + save_restore_regs(jit, REGS_SAVE, stack_depth); 431 431 /* Setup literal pool */ 432 432 if (jit->seen & SEEN_LITERAL) { 433 433 /* basr %r13,0 */ ··· 442 442 /* la %bfp,STK_160_UNUSED(%r15) (BPF frame pointer) */ 443 443 EMIT4_DISP(0x41000000, BPF_REG_FP, REG_15, STK_160_UNUSED); 444 444 /* aghi %r15,-STK_OFF */ 445 - EMIT4_IMM(0xa70b0000, REG_15, -STK_OFF); 445 + EMIT4_IMM(0xa70b0000, REG_15, -(STK_OFF + stack_depth)); 446 446 if (jit->seen & SEEN_FUNC) 447 447 /* stg %w1,152(%r15) (backchain) */ 448 448 EMIT6_DISP_LH(0xe3000000, 0x0024, REG_W1, REG_0, ··· 459 459 /* 460 460 * Function epilogue 461 461 */ 462 - static void bpf_jit_epilogue(struct bpf_jit *jit) 462 + static void bpf_jit_epilogue(struct bpf_jit *jit, u32 stack_depth) 463 463 { 464 464 /* Return 0 */ 465 465 if (jit->seen & SEEN_RET0) { ··· 471 471 /* Load exit code: lgr %r2,%b0 */ 472 472 EMIT4(0xb9040000, REG_2, BPF_REG_0); 473 473 /* Restore registers */ 474 - save_restore_regs(jit, REGS_RESTORE); 474 + save_restore_regs(jit, REGS_RESTORE, stack_depth); 475 475 /* br %r14 */ 476 476 _EMIT2(0x07fe); 477 477 } ··· 1019 1019 */ 1020 1020 1021 1021 if (jit->seen & SEEN_STACK) 1022 - off = STK_OFF_TCCNT + STK_OFF; 1022 + off = STK_OFF_TCCNT + STK_OFF + fp->aux->stack_depth; 1023 1023 else 1024 1024 off = STK_OFF_TCCNT; 1025 1025 /* lhi %w0,1 */ ··· 1047 1047 /* 1048 1048 * Restore registers before calling function 1049 1049 */ 1050 - save_restore_regs(jit, REGS_RESTORE); 1050 + save_restore_regs(jit, REGS_RESTORE, fp->aux->stack_depth); 1051 1051 1052 1052 /* 1053 1053 * goto 
*(prog->bpf_func + tail_call_start); ··· 1273 1273 jit->lit = jit->lit_start; 1274 1274 jit->prg = 0; 1275 1275 1276 - bpf_jit_prologue(jit); 1276 + bpf_jit_prologue(jit, fp->aux->stack_depth); 1277 1277 for (i = 0; i < fp->len; i += insn_count) { 1278 1278 insn_count = bpf_jit_insn(jit, fp, i); 1279 1279 if (insn_count < 0) ··· 1281 1281 /* Next instruction address */ 1282 1282 jit->addrs[i + insn_count] = jit->prg; 1283 1283 } 1284 - bpf_jit_epilogue(jit); 1284 + bpf_jit_epilogue(jit, fp->aux->stack_depth); 1285 1285 1286 1286 jit->lit_start = jit->prg; 1287 1287 jit->size = jit->lit;
+3 -2
arch/s390/pci/pci.c
···
 			/* End of second scan with interrupts on. */
 			break;
 		/* First scan complete, reenable interrupts. */
-		zpci_set_irq_ctrl(SIC_IRQ_MODE_SINGLE, NULL, PCI_ISC);
+		if (zpci_set_irq_ctrl(SIC_IRQ_MODE_SINGLE, NULL, PCI_ISC))
+			break;
 		si = 0;
 		continue;
 	}
···
 	if (!s390_pci_probe)
 		return 0;
 
-	if (!test_facility(69) || !test_facility(71) || !test_facility(72))
+	if (!test_facility(69) || !test_facility(71))
 		return 0;
 
 	rc = zpci_debug_init();
+5 -1
arch/s390/pci/pci_insn.c
···
 #include <linux/export.h>
 #include <linux/errno.h>
 #include <linux/delay.h>
+#include <asm/facility.h>
 #include <asm/pci_insn.h>
 #include <asm/pci_debug.h>
 #include <asm/processor.h>
···
 }
 
 /* Set Interruption Controls */
-void zpci_set_irq_ctrl(u16 ctl, char *unused, u8 isc)
+int zpci_set_irq_ctrl(u16 ctl, char *unused, u8 isc)
 {
+	if (!test_facility(72))
+		return -EIO;
 	asm volatile (
 		"	.insn	rsy,0xeb00000000d1,%[ctl],%[isc],%[u]\n"
 		: : [ctl] "d" (ctl), [isc] "d" (isc << 27), [u] "Q" (*unused));
+	return 0;
 }
 
 /* PCI Load */
+10
arch/s390/tools/Makefile
···
 #
 
 hostprogs-y		+= gen_facilities
+hostprogs-y		+= gen_opcode_table
+
 HOSTCFLAGS_gen_facilities.o += -Wall $(LINUXINCLUDE)
+HOSTCFLAGS_gen_opcode_table.o += -Wall $(LINUXINCLUDE)
 
 define filechk_facilities.h
 	$(obj)/gen_facilities
 endef
 
+define filechk_dis.h
+	( $(obj)/gen_opcode_table < $(srctree)/arch/$(ARCH)/tools/opcodes.txt )
+endef
+
 include/generated/facilities.h: $(obj)/gen_facilities FORCE
 	$(call filechk,facilities.h)
+
+include/generated/dis.h: $(obj)/gen_opcode_table FORCE
+	$(call filechk,dis.h,__FUN)
+336
arch/s390/tools/gen_opcode_table.c
···
+/*
+ * Generate opcode table initializers for the in-kernel disassembler.
+ *
+ * Copyright IBM Corp. 2017
+ *
+ */
+
+#include <stdlib.h>
+#include <string.h>
+#include <ctype.h>
+#include <stdio.h>
+
+#define STRING_SIZE_MAX 20
+
+struct insn_type {
+	unsigned char byte;
+	unsigned char mask;
+	char **format;
+};
+
+struct insn {
+	struct insn_type *type;
+	char opcode[STRING_SIZE_MAX];
+	char name[STRING_SIZE_MAX];
+	char upper[STRING_SIZE_MAX];
+	char format[STRING_SIZE_MAX];
+	unsigned int name_len;
+};
+
+struct insn_group {
+	struct insn_type *type;
+	int offset;
+	int count;
+	char opcode[2];
+};
+
+struct insn_format {
+	char *format;
+	int type;
+};
+
+struct gen_opcode {
+	struct insn *insn;
+	int nr;
+	struct insn_group *group;
+	int nr_groups;
+};
+
+/*
+ * Table of instruction format types. Each opcode is defined with at
+ * least one byte (two nibbles), three nibbles, or two bytes (four
+ * nibbles).
+ * The byte member of each instruction format type entry defines
+ * within which byte of an instruction the third (and fourth) nibble
+ * of an opcode can be found. The mask member is the and-mask that
+ * needs to be applied on this byte in order to get the third (and
+ * fourth) nibble of the opcode.
+ * The format array defines all instruction formats (as defined in the
+ * Principles of Operation) which have the same position of the opcode
+ * nibbles.
+ * A special case are instruction formats with 1-byte opcodes. In this
+ * case the byte member always is zero, so that the mask is applied on
+ * the (only) byte that contains the opcode.
+ */
+static struct insn_type insn_type_table[] = {
+	{
+		.byte = 0,
+		.mask = 0xff,
+		.format = (char *[]) {
+			"MII",
+			"RR",
+			"RS",
+			"RSI",
+			"RX",
+			"SI",
+			"SMI",
+			"SS",
+			NULL,
+		},
+	},
+	{
+		.byte = 1,
+		.mask = 0x0f,
+		.format = (char *[]) {
+			"RI",
+			"RIL",
+			"SSF",
+			NULL,
+		},
+	},
+	{
+		.byte = 1,
+		.mask = 0xff,
+		.format = (char *[]) {
+			"E",
+			"IE",
+			"RRE",
+			"RRF",
+			"RRR",
+			"S",
+			"SIL",
+			"SSE",
+			NULL,
+		},
+	},
+	{
+		.byte = 5,
+		.mask = 0xff,
+		.format = (char *[]) {
+			"RIE",
+			"RIS",
+			"RRS",
+			"RSE",
+			"RSL",
+			"RSY",
+			"RXE",
+			"RXF",
+			"RXY",
+			"SIY",
+			"VRI",
+			"VRR",
+			"VRS",
+			"VRV",
+			"VRX",
+			"VSI",
+			NULL,
+		},
+	},
+};
+
+static struct insn_type *insn_format_to_type(char *format)
+{
+	char tmp[STRING_SIZE_MAX];
+	char *base_format, **ptr;
+	int i;
+
+	strcpy(tmp, format);
+	base_format = tmp;
+	base_format = strsep(&base_format, "_");
+	for (i = 0; i < sizeof(insn_type_table) / sizeof(insn_type_table[0]); i++) {
+		ptr = insn_type_table[i].format;
+		while (*ptr) {
+			if (!strcmp(base_format, *ptr))
+				return &insn_type_table[i];
+			ptr++;
+		}
+	}
+	exit(EXIT_FAILURE);
+}
+
+static void read_instructions(struct gen_opcode *desc)
+{
+	struct insn insn;
+	int rc, i;
+
+	while (1) {
+		rc = scanf("%s %s %s", insn.opcode, insn.name, insn.format);
+		if (rc == EOF)
+			break;
+		if (rc != 3)
+			exit(EXIT_FAILURE);
+		insn.type = insn_format_to_type(insn.format);
+		insn.name_len = strlen(insn.name);
+		for (i = 0; i <= insn.name_len; i++)
+			insn.upper[i] = toupper((unsigned char)insn.name[i]);
+		desc->nr++;
+		desc->insn = realloc(desc->insn, desc->nr * sizeof(*desc->insn));
+		if (!desc->insn)
+			exit(EXIT_FAILURE);
+		desc->insn[desc->nr - 1] = insn;
+	}
+}
+
+static int cmpformat(const void *a, const void *b)
+{
+	return strcmp(((struct insn *)a)->format, ((struct insn *)b)->format);
+}
+
+static void print_formats(struct gen_opcode *desc)
+{
+	char *format;
+	int i, count;
+
+	qsort(desc->insn, desc->nr, sizeof(*desc->insn), cmpformat);
+	format = "";
+	count = 0;
+	printf("enum {\n");
+	for (i = 0; i < desc->nr; i++) {
+		if (!strcmp(format, desc->insn[i].format))
+			continue;
+		count++;
+		format = desc->insn[i].format;
+		printf("\tINSTR_%s,\n", format);
+	}
+	printf("}; /* %d */\n\n", count);
+}
+
+static int cmp_long_insn(const void *a, const void *b)
+{
+	return strcmp(((struct insn *)a)->name, ((struct insn *)b)->name);
+}
+
+static void print_long_insn(struct gen_opcode *desc)
+{
+	struct insn *insn;
+	int i, count;
+
+	qsort(desc->insn, desc->nr, sizeof(*desc->insn), cmp_long_insn);
+	count = 0;
+	printf("enum {\n");
+	for (i = 0; i < desc->nr; i++) {
+		insn = &desc->insn[i];
+		if (insn->name_len < 6)
+			continue;
+		printf("\tLONG_INSN_%s,\n", insn->upper);
+		count++;
+	}
+	printf("}; /* %d */\n\n", count);
+
+	printf("#define LONG_INSN_INITIALIZER { \\\n");
+	for (i = 0; i < desc->nr; i++) {
+		insn = &desc->insn[i];
+		if (insn->name_len < 6)
+			continue;
+		printf("\t[LONG_INSN_%s] = \"%s\", \\\n", insn->upper, insn->name);
+	}
+	printf("}\n\n");
+}
+
+static void print_opcode(struct insn *insn, int nr)
+{
+	char *opcode;
+
+	opcode = insn->opcode;
+	if (insn->type->byte != 0)
+		opcode += 2;
+	printf("\t[%4d] = { .opfrag = 0x%s, .format = INSTR_%s, ", nr, opcode, insn->format);
+	if (insn->name_len < 6)
+		printf(".name = \"%s\" ", insn->name);
+	else
+		printf(".offset = LONG_INSN_%s ", insn->upper);
+	printf("}, \\\n");
+}
+
+static void add_to_group(struct gen_opcode *desc, struct insn *insn, int offset)
+{
+	struct insn_group *group;
+
+	group = desc->group ? &desc->group[desc->nr_groups - 1] : NULL;
+	if (group && (!strncmp(group->opcode, insn->opcode, 2) || group->type->byte == 0)) {
+		group->count++;
+		return;
+	}
+	desc->nr_groups++;
+	desc->group = realloc(desc->group, desc->nr_groups * sizeof(*desc->group));
+	if (!desc->group)
+		exit(EXIT_FAILURE);
+	group = &desc->group[desc->nr_groups - 1];
+	strncpy(group->opcode, insn->opcode, 2);
+	group->type = insn->type;
+	group->offset = offset;
+	group->count = 1;
+}
+
+static int cmpopcode(const void *a, const void *b)
+{
+	return strcmp(((struct insn *)a)->opcode, ((struct insn *)b)->opcode);
+}
+
+static void print_opcode_table(struct gen_opcode *desc)
+{
+	char opcode[2] = "";
+	struct insn *insn;
+	int i, offset;
+
+	qsort(desc->insn, desc->nr, sizeof(*desc->insn), cmpopcode);
+	printf("#define OPCODE_TABLE_INITIALIZER { \\\n");
+	offset = 0;
+	for (i = 0; i < desc->nr; i++) {
+		insn = &desc->insn[i];
+		if (insn->type->byte == 0)
+			continue;
+		add_to_group(desc, insn, offset);
+		if (strncmp(opcode, insn->opcode, 2)) {
+			strncpy(opcode, insn->opcode, 2);
+			printf("\t/* %.2s */ \\\n", opcode);
+		}
+		print_opcode(insn, offset);
+		offset++;
+	}
+	printf("\t/* 1-byte opcode instructions */ \\\n");
+	for (i = 0; i < desc->nr; i++) {
+		insn = &desc->insn[i];
+		if (insn->type->byte != 0)
+			continue;
+		add_to_group(desc, insn, offset);
+		print_opcode(insn, offset);
+		offset++;
+	}
+	printf("}\n\n");
+}
+
+static void print_opcode_table_offsets(struct gen_opcode *desc)
+{
+	struct insn_group *group;
+	int i;
+
+	printf("#define OPCODE_OFFSET_INITIALIZER { \\\n");
+	for (i = 0; i < desc->nr_groups; i++) {
+		group = &desc->group[i];
+		printf("\t{ .opcode = 0x%.2s, .mask = 0x%02x, .byte = %d, .offset = %d, .count = %d }, \\\n",
+		       group->opcode, group->type->mask, group->type->byte, group->offset, group->count);
+	}
+	printf("}\n\n");
+}
+
+int main(int argc, char **argv)
+{
+	struct gen_opcode _desc = { 0 };
+	struct gen_opcode *desc = &_desc;
+
+	read_instructions(desc);
+	printf("#ifndef __S390_GENERATED_DIS_H__\n");
+	printf("#define __S390_GENERATED_DIS_H__\n");
+	printf("/*\n");
+	printf(" * DO NOT MODIFY.\n");
+	printf(" *\n");
+	printf(" * This file was generated by %s\n", __FILE__);
+	printf(" */\n\n");
+	print_formats(desc);
+	print_long_insn(desc);
+	print_opcode_table(desc);
+	print_opcode_table_offsets(desc);
+	printf("#endif\n");
+	exit(EXIT_SUCCESS);
+}
+1183
arch/s390/tools/opcodes.txt
···
+0101 pr E
+0102 upt E
+0104 ptff E
+0107 sckpf E
+010a pfpo E
+010b tam E
+010c sam24 E
+010d sam31 E
+010e sam64 E
+01ff trap2 E
+04 spm RR_R0
+05 balr RR_RR
+06 bctr RR_RR
+07 bcr RR_UR
+0a svc RR_U0
+0b bsm RR_RR
+0c bassm RR_RR
+0d basr RR_RR
+0e mvcl RR_RR
+0f clcl RR_RR
+10 lpr RR_RR
+11 lnr RR_RR
+12 ltr RR_RR
+13 lcr RR_RR
+14 nr RR_RR
+15 clr RR_RR
+16 or RR_RR
+17 xr RR_RR
+18 lr RR_RR
+19 cr RR_RR
+1a ar RR_RR
+1b sr RR_RR
+1c mr RR_RR
+1d dr RR_RR
+1e alr RR_RR
+1f slr RR_RR
+20 lpdr RR_FF
+21 lndr RR_FF
+22 ltdr RR_FF
+23 lcdr RR_FF
+24 hdr RR_FF
+25 ldxr RR_FF
+26 mxr RR_FF
+27 mxdr RR_FF
+28 ldr RR_FF
+29 cdr RR_FF
+2a adr RR_FF
+2b sdr RR_FF
+2c mdr RR_FF
+2d ddr RR_FF
+2e awr RR_FF
+2f swr RR_FF
+30 lper RR_FF
+31 lner RR_FF
+32 lter RR_FF
+33 lcer RR_FF
+34 her RR_FF
+35 ledr RR_FF
+36 axr RR_FF
+37 sxr RR_FF
+38 ler RR_FF
+39 cer RR_FF
+3a aer RR_FF
+3b ser RR_FF
+3c mder RR_FF
+3d der RR_FF
+3e aur RR_FF
+3f sur RR_FF
+40 sth RX_RRRD
+41 la RX_RRRD
+42 stc RX_RRRD
+43 ic RX_RRRD
+44 ex RX_RRRD
+45 bal RX_RRRD
+46 bct RX_RRRD
+47 bc RX_URRD
+48 lh RX_RRRD
+49 ch RX_RRRD
+4a ah RX_RRRD
+4b sh RX_RRRD
+4c mh RX_RRRD
+4d bas RX_RRRD
+4e cvd RX_RRRD
+4f cvb RX_RRRD
+50 st RX_RRRD
+51 lae RX_RRRD
+54 n RX_RRRD
+55 cl RX_RRRD
+56 o RX_RRRD
+57 x RX_RRRD
+58 l RX_RRRD
+59 c RX_RRRD
+5a a RX_RRRD
+5b s RX_RRRD
+5c m RX_RRRD
+5d d RX_RRRD
+5e al RX_RRRD
+5f sl RX_RRRD
+60 std RX_FRRD
+67 mxd RX_FRRD
+68 ld RX_FRRD
+69 cd RX_FRRD
+6a ad RX_FRRD
+6b sd RX_FRRD
+6c md RX_FRRD
+6d dd RX_FRRD
+6e aw RX_FRRD
+6f sw RX_FRRD
+70 ste RX_FRRD
+71 ms RX_RRRD
+78 le RX_FRRD
+79 ce RX_FRRD
+7a ae RX_FRRD
+7b se RX_FRRD
+7c mde RX_FRRD
+7d de RX_FRRD
+7e au RX_FRRD
+7f su RX_FRRD
+80 ssm SI_RD
+82 lpsw SI_RD
+83 diag RS_RRRD
+84 brxh RSI_RRP
+85 brxle RSI_RRP
+86 bxh RS_RRRD
+87 bxle RS_RRRD
+88 srl RS_R0RD
+89 sll RS_R0RD
+8a sra RS_R0RD
+8b sla RS_R0RD
+8c srdl RS_R0RD
+8d sldl RS_R0RD
+8e srda RS_R0RD
+8f slda RS_R0RD
+90 stm RS_RRRD
+91 tm SI_URD
+92 mvi SI_URD
+93 ts SI_RD
+94 ni SI_URD
+95 cli SI_URD
+96 oi SI_URD
+97 xi SI_URD
+98 lm RS_RRRD
+99 trace RS_RRRD
+9a lam RS_AARD
+9b stam RS_AARD
+a50 iihh RI_RU
+a51 iihl RI_RU
+a52 iilh RI_RU
+a53 iill RI_RU
+a54 nihh RI_RU
+a55 nihl RI_RU
+a56 nilh RI_RU
+a57 nill RI_RU
+a58 oihh RI_RU
+a59 oihl RI_RU
+a5a oilh RI_RU
+a5b oill RI_RU
+a5c llihh RI_RU
+a5d llihl RI_RU
+a5e llilh RI_RU
+a5f llill RI_RU
+a70 tmlh RI_RU
+a71 tmll RI_RU
+a72 tmhh RI_RU
+a73 tmhl RI_RU
+a74 brc RI_UP
+a75 bras RI_RP
+a76 brct RI_RP
+a77 brctg RI_RP
+a78 lhi RI_RI
+a79 lghi RI_RI
+a7a ahi RI_RI
+a7b aghi RI_RI
+a7c mhi RI_RI
+a7d mghi RI_RI
+a7e chi RI_RI
+a7f cghi RI_RI
+a8 mvcle RS_RRRD
+a9 clcle RS_RRRD
+aa0 rinext RI_RI
+aa1 rion RI_RI
+aa2 tric RI_RI
+aa3 rioff RI_RI
+aa4 riemit RI_RI
+ac stnsm SI_URD
+ad stosm SI_URD
+ae sigp RS_RRRD
+af mc SI_URD
+b1 lra RX_RRRD
+b202 stidp S_RD
+b204 sck S_RD
+b205 stck S_RD
+b206 sckc S_RD
+b207 stckc S_RD
+b208 spt S_RD
+b209 stpt S_RD
+b20a spka S_RD
+b20b ipk S_00
+b20d ptlb S_00
+b210 spx S_RD
+b211 stpx S_RD
+b212 stap S_RD
+b214 sie S_RD
+b218 pc S_RD
+b219 sac S_RD
+b21a cfc S_RD
+b220 servc RRE_RR
+b221 ipte RRF_RURR
+b222 ipm RRE_R0
+b223 ivsk RRE_RR
+b224 iac RRE_R0
+b225 ssar RRE_R0
+b226 epar RRE_R0
+b227 esar RRE_R0
+b228 pt RRE_RR
+b229 iske RRE_RR
+b22a rrbe RRE_RR
+b22b sske RRF_U0RR
+b22c tb RRE_RR
+b22d dxr RRE_FF
+b22e pgin RRE_RR
+b22f pgout RRE_RR
+b230 csch S_00
+b231 hsch S_00
+b232 msch S_RD
+b233 ssch S_RD
+b234 stsch S_RD
+b235 tsch S_RD
+b236 tpi S_RD
+b237 sal S_00
+b238 rsch S_00
+b239 stcrw S_RD
+b23a stcps S_RD
+b23b rchp S_00
+b23c schm S_00
+b240 bakr RRE_RR
+b241 cksm RRE_RR
+b244 sqdr RRE_FF
+b245 sqer RRE_FF
+b246 stura RRE_RR
+b247 msta RRE_R0
+b248 palb RRE_00
+b249 ereg RRE_RR
+b24a esta RRE_RR
+b24b lura RRE_RR
+b24c tar RRE_AR
+b24d cpya RRE_AA
+b24e sar RRE_AR
+b24f ear RRE_RA
+b250 csp RRE_RR
+b252 msr RRE_RR
+b254 mvpg RRE_RR
+b255 mvst RRE_RR
+b256 sthyi RRE_RR
+b257 cuse RRE_RR
+b258 bsg RRE_RR
+b25a bsa RRE_RR
+b25d clst RRE_RR
+b25e srst RRE_RR
+b263 cmpsc RRE_RR
+b274 siga S_RD
+b276 xsch S_00
+b277 rp S_RD
+b278 stcke S_RD
+b279 sacf S_RD
+b27c stckf S_RD
+b27d stsi S_RD
+b280 lpp S_RD
+b284 lcctl S_RD
+b285 lpctl S_RD
+b286 qsi S_RD
+b287 lsctl S_RD
+b28e qctri S_RD
+b299 srnm S_RD
+b29c stfpc S_RD
+b29d lfpc S_RD
+b2a5 tre RRE_RR
+b2a6 cu21 RRF_U0RR
+b2a7 cu12 RRF_U0RR
+b2b0 stfle S_RD
+b2b1 stfl S_RD
+b2b2 lpswe S_RD
+b2b8 srnmb S_RD
+b2b9 srnmt S_RD
+b2bd lfas S_RD
+b2e0 scctr RRE_RR
+b2e1 spctr RRE_RR
+b2e4 ecctr RRE_RR
+b2e5 epctr RRE_RR
+b2e8 ppa RRF_U0RR
+b2ec etnd RRE_R0
+b2ed ecpga RRE_RR
+b2f8 tend S_00
+b2fa niai IE_UU
+b2fc tabort S_RD
+b2ff trap4 S_RD
+b300 lpebr RRE_FF
+b301 lnebr RRE_FF
+b302 ltebr RRE_FF
+b303 lcebr RRE_FF
+b304 ldebr RRE_FF
+b305 lxdbr RRE_FF
+b306 lxebr RRE_FF
+b307 mxdbr RRE_FF
+b308 kebr RRE_FF
+b309 cebr RRE_FF
+b30a aebr RRE_FF
+b30b sebr RRE_FF
+b30c mdebr RRE_FF
+b30d debr RRE_FF
+b30e maebr RRF_F0FF
+b30f msebr RRF_F0FF
+b310 lpdbr RRE_FF
+b311 lndbr RRE_FF
+b312 ltdbr RRE_FF
+b313 lcdbr RRE_FF
+b314 sqebr RRE_FF
+b315 sqdbr RRE_FF
+b316 sqxbr RRE_FF
+b317 meebr RRE_FF
+b318 kdbr RRE_FF
+b319 cdbr RRE_FF
+b31a adbr RRE_FF
+b31b sdbr RRE_FF
+b31c mdbr RRE_FF
+b31d ddbr RRE_FF
+b31e madbr RRF_F0FF
+b31f msdbr RRF_F0FF
+b324 lder RRE_FF
+b325 lxdr RRE_FF
+b326 lxer RRE_FF
+b32e maer RRF_F0FF
+b32f mser RRF_F0FF
+b336 sqxr RRE_FF
+b337 meer RRE_FF
+b338 maylr RRF_F0FF
+b339 mylr RRF_F0FF
+b33a mayr RRF_F0FF
+b33b myr RRF_F0FF
+b33c mayhr RRF_F0FF
+b33d myhr RRF_F0FF
+b33e madr RRF_F0FF
+b33f msdr RRF_F0FF
+b340 lpxbr RRE_FF
+b341 lnxbr RRE_FF
+b342 ltxbr RRE_FF
+b343 lcxbr RRE_FF
+b344 ledbra RRF_UUFF
+b345 ldxbra RRF_UUFF
+b346 lexbra RRF_UUFF
+b347 fixbra RRF_UUFF
+b348 kxbr RRE_FF
+b349 cxbr RRE_FF
+b34a axbr RRE_FF
+b34b sxbr RRE_FF
+b34c mxbr RRE_FF
+b34d dxbr RRE_FF
+b350 tbedr RRF_U0FF
+b351 tbdr RRF_U0FF
+b353 diebr RRF_FUFF
+b357 fiebra RRF_UUFF
+b358 thder RRE_FF
+b359 thdr RRE_FF
+b35b didbr RRF_FUFF
+b35f fidbra RRF_UUFF
+b360 lpxr RRE_FF
+b361 lnxr RRE_FF
+b362 ltxr RRE_FF
+b363 lcxr RRE_FF
+b365 lxr RRE_FF
+b366 lexr RRE_FF
+b367 fixr RRE_FF
+b369 cxr RRE_FF
+b370 lpdfr RRE_FF
+b371 lndfr RRE_FF
+b372 cpsdr RRF_F0FF2
+b373 lcdfr RRE_FF
+b374 lzer RRE_F0
+b375 lzdr RRE_F0
+b376 lzxr RRE_F0
+b377 fier RRE_FF
+b37f fidr RRE_FF
+b384 sfpc RRE_RR
+b385 sfasr RRE_R0
+b38c efpc RRE_RR
+b390 celfbr RRF_UUFR
+b391 cdlfbr RRF_UUFR
+b392 cxlfbr RRF_UUFR
+b394 cefbra RRF_UUFR
+b395 cdfbra RRF_UUFR
+b396 cxfbra RRF_UUFR
+b398 cfebra RRF_UURF
+b399 cfdbra RRF_UURF
+b39a cfxbra RRF_UURF
+b39c clfebr RRF_UURF
+b39d clfdbr RRF_UURF
+b39e clfxbr RRF_UURF
+b3a0 celgbr RRF_UUFR
+b3a1 cdlgbr RRF_UUFR
+b3a2 cxlgbr RRF_UUFR
+b3a4 cegbra RRF_UUFR
+b3a5 cdgbra RRF_UUFR
+b3a6 cxgbra RRF_UUFR
+b3a8 cgebra RRF_UURF
+b3a9 cgdbra RRF_UURF
+b3aa cgxbra RRF_UURF
+b3ac clgebr RRF_UURF
+b3ad clgdbr RRF_UURF
+b3ae clgxbr RRF_UURF
+b3b4 cefr RRE_FR
+b3b5 cdfr RRE_FR
+b3b6 cxfr RRE_FR
+b3b8 cfer RRF_U0RF
+b3b9 cfdr RRF_U0RF
+b3ba cfxr RRF_U0RF
+b3c1 ldgr RRE_FR
+b3c4 cegr RRE_FR
+b3c5 cdgr RRE_FR
+b3c6 cxgr RRE_FR
+b3c8 cger RRF_U0RF
+b3c9 cgdr RRF_U0RF
+b3ca cgxr RRF_U0RF
+b3cd lgdr RRE_RF
+b3d0 mdtra RRF_FUFF2
+b3d1 ddtra RRF_FUFF2
+b3d2 adtra RRF_FUFF2
+b3d3 sdtra RRF_FUFF2
+b3d4 ldetr RRF_0UFF
+b3d5 ledtr RRF_UUFF
+b3d6 ltdtr RRE_FF
+b3d7 fidtr RRF_UUFF
+b3d8 mxtra RRF_FUFF2
+b3d9 dxtra RRF_FUFF2
+b3da axtra RRF_FUFF2
+b3db sxtra RRF_FUFF2
+b3dc lxdtr RRF_0UFF
+b3dd ldxtr RRF_UUFF
+b3de ltxtr RRE_FF
+b3df fixtr RRF_UUFF
+b3e0 kdtr RRE_FF
+b3e1 cgdtra RRF_UURF
+b3e2 cudtr RRE_RF
+b3e3 csdtr RRF_0URF
+b3e4 cdtr RRE_FF
+b3e5 eedtr RRE_RF
+b3e7 esdtr RRE_RF
+b3e8 kxtr RRE_FF
+b3e9 cgxtra RRF_UURF
+b3ea cuxtr RRE_RF
+b3eb csxtr RRF_0URF
+b3ec cxtr RRE_FF
+b3ed eextr RRE_RF
+b3ef esxtr RRE_RF
+b3f1 cdgtra RRF_UUFR
+b3f2 cdutr RRE_FR
+b3f3 cdstr RRE_FR
+b3f4 cedtr RRE_FF
+b3f5 qadtr RRF_FUFF
+b3f6 iedtr RRF_F0FR
+b3f7 rrdtr RRF_FFRU
+b3f9 cxgtra RRF_UUFR
+b3fa cxutr RRE_FR
+b3fb cxstr RRE_FR
+b3fc cextr RRE_FF
+b3fd qaxtr RRF_FUFF
+b3fe iextr RRF_F0FR
+b3ff rrxtr RRF_FFRU
+b6 stctl RS_CCRD
+b7 lctl RS_CCRD
+b900 lpgr RRE_RR
+b901 lngr RRE_RR
+b902 ltgr RRE_RR
+b903 lcgr RRE_RR
+b904 lgr RRE_RR
+b905 lurag RRE_RR
+b906 lgbr RRE_RR
+b907 lghr RRE_RR
+b908 agr RRE_RR
+b909 sgr RRE_RR
+b90a algr RRE_RR
+b90b slgr RRE_RR
+b90c msgr RRE_RR
+b90d dsgr RRE_RR
+b90e eregg RRE_RR
+b90f lrvgr RRE_RR
+b910 lpgfr RRE_RR
+b911 lngfr RRE_RR
+b912 ltgfr RRE_RR
+b913 lcgfr RRE_RR
+b914 lgfr RRE_RR
+b916 llgfr RRE_RR
+b917 llgtr RRE_RR
+b918 agfr RRE_RR
+b919 sgfr RRE_RR
+b91a algfr RRE_RR
+b91b slgfr RRE_RR
+b91c msgfr RRE_RR
+b91d dsgfr RRE_RR
+b91e kmac RRE_RR
+b91f lrvr RRE_RR
+b920 cgr RRE_RR
+b921 clgr RRE_RR
+b925 sturg RRE_RR
+b926 lbr RRE_RR
+b927 lhr RRE_RR
+b928 pckmo RRE_00
+b929 kma RRF_R0RR
+b92a kmf RRE_RR
+b92b kmo RRE_RR
+b92c pcc RRE_00
+b92d kmctr RRF_R0RR
+b92e km RRE_RR
+b92f kmc RRE_RR
+b930 cgfr RRE_RR
+b931 clgfr RRE_RR
+b93c ppno RRE_RR
+b93e kimd RRE_RR
+b93f klmd RRE_RR
+b941 cfdtr RRF_UURF
+b942 clgdtr RRF_UURF
+b943 clfdtr RRF_UURF
+b946 bctgr RRE_RR
+b949 cfxtr RRF_UURF
+b94a clgxtr RRF_UURF
+b94b clfxtr RRF_UURF
+b951 cdftr RRF_UUFR
+b952 cdlgtr RRF_UUFR
+b953 cdlftr RRF_UUFR
+b959 cxftr RRF_UUFR
+b95a cxlgtr RRF_UUFR
+b95b cxlftr RRF_UUFR
+b960 cgrt RRF_U0RR
+b961 clgrt RRF_U0RR
+b972 crt RRF_U0RR
+b973 clrt RRF_U0RR
+b980 ngr RRE_RR
+b981 ogr RRE_RR
+b982 xgr RRE_RR
+b983 flogr RRE_RR
+b984 llgcr RRE_RR
+b985 llghr RRE_RR
+b986 mlgr RRE_RR
+b987 dlgr RRE_RR
+b988 alcgr RRE_RR
+b989 slbgr RRE_RR
+b98a cspg RRE_RR
+b98d epsw RRE_RR
+b98e idte RRF_RURR2
+b98f crdte RRF_RURR2
+b990 trtt RRF_U0RR
+b991 trto RRF_U0RR
+b992 trot RRF_U0RR
+b993 troo RRF_U0RR
+b994 llcr RRE_RR
+b995 llhr RRE_RR
+b996 mlr RRE_RR
+b997 dlr RRE_RR
+b998 alcr RRE_RR
+b999 slbr RRE_RR
+b99a epair RRE_R0
+b99b esair RRE_R0
+b99d esea RRE_R0
+b99e pti RRE_RR
+b99f ssair RRE_R0
+b9a1 tpei RRE_RR
+b9a2 ptf RRE_R0
+b9aa lptea RRF_RURR2
+b9ac irbm RRE_RR
+b9ae rrbm RRE_RR
+b9af pfmf RRE_RR
+b9b0 cu14 RRF_U0RR
+b9b1 cu24 RRF_U0RR
+b9b2 cu41 RRE_RR
+b9b3 cu42 RRE_RR
+b9bd trtre RRF_U0RR
+b9be srstu RRE_RR
+b9bf trte RRF_U0RR
+b9c8 ahhhr RRF_R0RR2
+b9c9 shhhr RRF_R0RR2
+b9ca alhhhr RRF_R0RR2
+b9cb slhhhr RRF_R0RR2
+b9cd chhr RRE_RR
+b9cf clhhr RRE_RR
+b9d0 pcistg RRE_RR
+b9d2 pcilg RRE_RR
+b9d3 rpcit RRE_RR
+b9d8 ahhlr RRF_R0RR2
+b9d9 shhlr RRF_R0RR2
+b9da alhhlr RRF_R0RR2
+b9db slhhlr RRF_R0RR2
+b9dd chlr RRE_RR
+b9df clhlr RRE_RR
+b9e0 locfhr RRF_U0RR
+b9e1 popcnt RRE_RR
+b9e2 locgr RRF_U0RR
+b9e4 ngrk RRF_R0RR2
+b9e6 ogrk RRF_R0RR2
+b9e7 xgrk RRF_R0RR2
+b9e8 agrk RRF_R0RR2
+b9e9 sgrk RRF_R0RR2
+b9ea algrk RRF_R0RR2
+b9eb slgrk RRF_R0RR2
+b9ec mgrk RRF_R0RR2
+b9ed msgrkc RRF_R0RR2
+b9f2 locr RRF_U0RR
+b9f4 nrk RRF_R0RR2
+b9f6 ork RRF_R0RR2
+b9f7 xrk RRF_R0RR2
+b9f8 ark RRF_R0RR2
+b9f9 srk RRF_R0RR2
+b9fa alrk RRF_R0RR2
+b9fb slrk RRF_R0RR2
+b9fd msrkc RRF_R0RR2
+ba cs RS_RRRD
+bb cds RS_RRRD
+bd clm RS_RURD
+be stcm RS_RURD
+bf icm RS_RURD
+c00 larl RIL_RP
+c01 lgfi RIL_RI
+c04 brcl RIL_UP
+c05 brasl RIL_RP
+c06 xihf RIL_RU
+c07 xilf RIL_RU
+c08 iihf RIL_RU
+c09 iilf RIL_RU
+c0a nihf RIL_RU
+c0b nilf RIL_RU
+c0c oihf RIL_RU
+c0d oilf RIL_RU
+c0e llihf RIL_RU
+c0f llilf RIL_RU
+c20 msgfi RIL_RI
+c21 msfi RIL_RI
+c24 slgfi RIL_RU
+c25 slfi RIL_RU
+c28 agfi RIL_RI
+c29 afi RIL_RI
+c2a algfi RIL_RU
+c2b alfi RIL_RU
+c2c cgfi RIL_RI
+c2d cfi RIL_RI
+c2e clgfi RIL_RU
+c2f clfi RIL_RU
+c42 llhrl RIL_RP
+c44 lghrl RIL_RP
+c45 lhrl RIL_RP
+c46 llghrl RIL_RP
+c47 sthrl RIL_RP
+c48 lgrl RIL_RP
+c4b stgrl RIL_RP
+c4c lgfrl RIL_RP
+c4d lrl RIL_RP
+c4e llgfrl RIL_RP
+c4f strl RIL_RP
+c5 bprp MII_UPP
+c60 exrl RIL_RP
+c62 pfdrl RIL_UP
+c64 cghrl RIL_RP
+c65 chrl RIL_RP
+c66 clghrl RIL_RP
+c67 clhrl RIL_RP
+c68 cgrl RIL_RP
+c6a clgrl RIL_RP
+c6c cgfrl RIL_RP
+c6d crl RIL_RP
+c6e clgfrl RIL_RP
+c6f clrl RIL_RP
+c7 bpp SMI_U0RDP
+c80 mvcos SSF_RRDRD
+c81 ectg SSF_RRDRD
+c82 csst SSF_RRDRD
+c84 lpd SSF_RRDRD2
+c85 lpdg SSF_RRDRD2
+cc6 brcth RIL_RP
+cc8 aih RIL_RI
+cca alsih RIL_RI
+ccb alsihn RIL_RI
+ccd cih RIL_RI
+ccf clih RIL_RU
+d0 trtr SS_L0RDRD
+d1 mvn SS_L0RDRD
+d2 mvc SS_L0RDRD
+d3 mvz SS_L0RDRD
+d4 nc SS_L0RDRD
+d5 clc SS_L0RDRD
+d6 oc SS_L0RDRD
+d7 xc SS_L0RDRD
+d9 mvck SS_RRRDRD
+da mvcp SS_RRRDRD
+db mvcs SS_RRRDRD
+dc tr SS_L0RDRD
+dd trt SS_L0RDRD
+de ed SS_L0RDRD
+df edmk SS_L0RDRD
+e1 pku SS_L2RDRD
+e2 unpku SS_L0RDRD
+e302 ltg RXY_RRRD
+e303 lrag RXY_RRRD
+e304 lg RXY_RRRD
+e306 cvby RXY_RRRD
+e308 ag RXY_RRRD
+e309 sg RXY_RRRD
+e30a alg RXY_RRRD
+e30b slg RXY_RRRD
+e30c msg RXY_RRRD
+e30d dsg RXY_RRRD
+e30e cvbg RXY_RRRD
+e30f lrvg RXY_RRRD
+e312 lt RXY_RRRD
+e313 lray RXY_RRRD
+e314 lgf RXY_RRRD
+e315 lgh RXY_RRRD
+e316 llgf RXY_RRRD
+e317 llgt RXY_RRRD
+e318 agf RXY_RRRD
+e319 sgf RXY_RRRD
+e31a algf RXY_RRRD
+e31b slgf RXY_RRRD
+e31c msgf RXY_RRRD
+e31d dsgf RXY_RRRD
+e31e lrv RXY_RRRD
+e31f lrvh RXY_RRRD
+e320 cg RXY_RRRD
+e321 clg RXY_RRRD
+e324 stg RXY_RRRD
+e325 ntstg RXY_RRRD
+e326 cvdy RXY_RRRD
+e32a lzrg RXY_RRRD
+e32e cvdg RXY_RRRD
+e32f strvg RXY_RRRD
+e330 cgf RXY_RRRD
+e331 clgf RXY_RRRD
+e332 ltgf RXY_RRRD
+e334 cgh RXY_RRRD
+e336 pfd RXY_URRD
+e338 agh RXY_RRRD
+e339 sgh RXY_RRRD
+e33a llzrgf RXY_RRRD
+e33b lzrf RXY_RRRD
+e33c mgh RXY_RRRD
+e33e strv RXY_RRRD
+e33f strvh RXY_RRRD
+e346 bctg RXY_RRRD
+e347 bic RXY_URRD
+e348 llgfsg RXY_RRRD
+e349 stgsc RXY_RRRD
+e34c lgg RXY_RRRD
+e34d lgsc RXY_RRRD
+e350 sty RXY_RRRD
+e351 msy RXY_RRRD
+e353 msc RXY_RRRD
+e354 ny RXY_RRRD
+e355 cly RXY_RRRD
+e356 oy RXY_RRRD
+e357 xy RXY_RRRD
+e358 ly RXY_RRRD
+e359 cy RXY_RRRD
+e35a ay RXY_RRRD
+e35b sy RXY_RRRD
+e35c mfy RXY_RRRD
+e35e aly RXY_RRRD
+e35f sly RXY_RRRD
+e370 sthy RXY_RRRD
+e371 lay RXY_RRRD
+e372 stcy RXY_RRRD
+e373 icy RXY_RRRD
+e375 laey RXY_RRRD
+e376 lb RXY_RRRD
+e377 lgb RXY_RRRD
+e378 lhy RXY_RRRD
+e379 chy RXY_RRRD
+e37a ahy RXY_RRRD
+e37b shy RXY_RRRD
+e37c mhy RXY_RRRD
+e380 ng RXY_RRRD
+e381 og RXY_RRRD
+e382 xg RXY_RRRD
+e383 msgc RXY_RRRD
+e384 mg RXY_RRRD
+e385 lgat RXY_RRRD
+e386 mlg RXY_RRRD
+e387 dlg RXY_RRRD
+e388 alcg RXY_RRRD
+e389 slbg RXY_RRRD
+e38e stpq RXY_RRRD
+e38f lpq RXY_RRRD
+e390 llgc RXY_RRRD
+e391 llgh RXY_RRRD
+e394 llc RXY_RRRD
+e395 llh RXY_RRRD
+e396 ml RXY_RRRD
+e397 dl RXY_RRRD
+e398 alc RXY_RRRD
+e399 slb RXY_RRRD
+e39c llgtat RXY_RRRD
+e39d llgfat RXY_RRRD
+e39f lat RXY_RRRD
+e3c0 lbh RXY_RRRD
+e3c2 llch RXY_RRRD
+e3c3 stch RXY_RRRD
+e3c4 lhh RXY_RRRD
+e3c6 llhh RXY_RRRD
+e3c7 sthh RXY_RRRD
+e3c8 lfhat RXY_RRRD
+e3ca lfh RXY_RRRD
+e3cb stfh RXY_RRRD
+e3cd chf RXY_RRRD
+e3cf clhf RXY_RRRD
+e3d0 mpcifc RXY_RRRD
+e3d4 stpcifc RXY_RRRD
+e500 lasp SSE_RDRD
+e501 tprot SSE_RDRD
+e502 strag SSE_RDRD
+e50e mvcsk SSE_RDRD
+e50f mvcdk SSE_RDRD
+e544 mvhhi SIL_RDI
+e548 mvghi SIL_RDI
+e54c mvhi SIL_RDI
+e554 chhsi SIL_RDI
+e555 clhhsi SIL_RDU
+e558 cghsi SIL_RDI
+e559 clghsi SIL_RDU
+e55c chsi SIL_RDI
+e55d clfhsi SIL_RDU
+e560 tbegin SIL_RDU
+e561 tbeginc SIL_RDU
+e634 vpkz VSI_URDV
+e635 vlrl VSI_URDV
+e637 vlrlr VRS_RRDV
+e63c vupkz VSI_URDV
+e63d vstrl VSI_URDV
+e63f vstrlr VRS_RRDV
+e649 vlip VRI_V0UU2
+e650 vcvb VRR_RV0U
+e652 vcvbg VRR_RV0U
+e658 vcvd VRI_VR0UU
+e659 vsrp VRI_VVUUU2
+e65a vcvdg VRI_VR0UU
+e65b vpsop VRI_VVUUU2
+e65f vtp VRR_0V
+e671 vap VRI_VVV0UU2
+e673 vsp VRI_VVV0UU2
+e677 vcp VRR_0VV0U
+e678 vmp VRI_VVV0UU2
+e679 vmsp VRI_VVV0UU2
+e67a vdp VRI_VVV0UU2
+e67b vrp VRI_VVV0UU2
+e67e vsdp VRI_VVV0UU2
+e700 vleb VRX_VRRDU
+e701 vleh VRX_VRRDU
+e702 vleg VRX_VRRDU
+e703 vlef VRX_VRRDU
+e704 vllez VRX_VRRDU
+e705 vlrep VRX_VRRDU
+e706 vl VRX_VRRD
+e707 vlbb VRX_VRRDU
+e708 vsteb VRX_VRRDU
+e709 vsteh VRX_VRRDU
+e70a vsteg VRX_VRRDU
+e70b vstef VRX_VRRDU
+e70e vst VRX_VRRD
+e712 vgeg VRV_VVXRDU
+e713 vgef VRV_VVXRDU
+e71a vsceg VRV_VVXRDU
+e71b vscef VRV_VVXRDU
+e721 vlgv VRS_RVRDU
+e722 vlvg VRS_VRRDU
+e727 lcbb RXE_RRRDU
+e730 vesl VRS_VVRDU
+e733 verll VRS_VVRDU
+e736 vlm VRS_VVRD
+e737 vll VRS_VRRD
+e738 vesrl VRS_VVRDU
+e73a vesra VRS_VVRDU
+e73e vstm VRS_VVRD
+e73f vstl VRS_VRRD
+e740 vleib VRI_V0IU
+e741 vleih VRI_V0IU
+e742 vleig VRI_V0IU
+e743 vleif VRI_V0IU
+e744 vgbm VRI_V0U
+e745 vrepi VRI_V0IU
+e746 vgm VRI_V0UUU
+e74a vftci VRI_VVUUU
+e74d vrep VRI_VVUU
+e750 vpopct VRR_VV0U
+e752 vctz VRR_VV0U
+e753 vclz VRR_VV0U
+e756 vlr VRX_VV
+e75c vistr VRR_VV0U0U
+e75f vseg VRR_VV0U
+e760 vmrl VRR_VVV0U
+e761 vmrh VRR_VVV0U
+e762 vlvgp VRR_VRR
+e764 vsum VRR_VVV0U
+e765 vsumg VRR_VVV0U
+e766 vcksm VRR_VVV
+e767 vsumq VRR_VVV0U
+e768 vn VRR_VVV
+e769 vnc VRR_VVV
+e76a vo VRR_VVV
+e76b vno VRR_VVV
+e76c vnx VRR_VVV
+e76d vx VRR_VVV
+e76e vnn VRR_VVV
+e76f voc VRR_VVV
+e770 veslv VRR_VVV0U
+e772 verim VRI_VVV0UU
+e773 verllv VRR_VVV0U
+e774 vsl VRR_VVV
+e775 vslb VRR_VVV
+e777 vsldb VRI_VVV0U
+e778 vesrlv VRR_VVV0U
+e77a vesrav VRR_VVV0U
+e77c vsrl VRR_VVV
+e77d vsrlb VRR_VVV
+e77e vsra VRR_VVV
+e77f vsrab VRR_VVV
+e780 vfee VRR_VVV0U0U
+e781 vfene VRR_VVV0U0U
+e782 vfae VRR_VVV0U0U
+e784 vpdi VRR_VVV0U
+e785 vbperm VRR_VVV
+e78a vstrc VRR_VVVUU0V
+e78c vperm VRR_VVV0V
+e78d vsel VRR_VVV0V
+e78e vfms VRR_VVVU0UV
+e78f vfma VRR_VVVU0UV
+e794 vpk VRR_VVV0U
+e795 vpkls VRR_VVV0U0U
+e797 vpks VRR_VVV0U0U
+e79e vfnms VRR_VVVU0UV
+e79f vfnma VRR_VVVU0UV
+e7a1 vmlh VRR_VVV0U
+e7a2 vml VRR_VVV0U
+e7a3 vmh VRR_VVV0U
+e7a4 vmle VRR_VVV0U
+e7a5 vmlo VRR_VVV0U
+e7a6 vme VRR_VVV0U
+e7a7 vmo VRR_VVV0U
+e7a9 vmalh VRR_VVVU0V
+e7aa vmal VRR_VVVU0V
+e7ab vmah VRR_VVVU0V
945 + e7ac vmale VRR_VVVU0V 946 + e7ad vmalo VRR_VVVU0V 947 + e7ae vmae VRR_VVVU0V 948 + e7af vmao VRR_VVVU0V 949 + e7b4 vgfm VRR_VVV0U 950 + e7b8 vmsl VRR_VVVUU0V 951 + e7b9 vaccc VRR_VVVU0V 952 + e7bb vac VRR_VVVU0V 953 + e7bc vgfma VRR_VVVU0V 954 + e7bd vsbcbi VRR_VVVU0V 955 + e7bf vsbi VRR_VVVU0V 956 + e7c0 vclgd VRR_VV0UUU 957 + e7c1 vcdlg VRR_VV0UUU 958 + e7c2 vcgd VRR_VV0UUU 959 + e7c3 vcdg VRR_VV0UUU 960 + e7c4 vlde VRR_VV0UU2 961 + e7c5 vled VRR_VV0UUU 962 + e7c7 vfi VRR_VV0UUU 963 + e7ca wfk VRR_VV0UU2 964 + e7cb wfc VRR_VV0UU2 965 + e7cc vfpso VRR_VV0UUU 966 + e7ce vfsq VRR_VV0UU2 967 + e7d4 vupll VRR_VV0U 968 + e7d5 vuplh VRR_VV0U 969 + e7d6 vupl VRR_VV0U 970 + e7d7 vuph VRR_VV0U 971 + e7d8 vtm VRR_VV 972 + e7d9 vecl VRR_VV0U 973 + e7db vec VRR_VV0U 974 + e7de vlc VRR_VV0U 975 + e7df vlp VRR_VV0U 976 + e7e2 vfs VRR_VVV0UU 977 + e7e3 vfa VRR_VVV0UU 978 + e7e5 vfd VRR_VVV0UU 979 + e7e7 vfm VRR_VVV0UU 980 + e7e8 vfce VRR_VVV0UUU 981 + e7ea vfche VRR_VVV0UUU 982 + e7eb vfch VRR_VVV0UUU 983 + e7ee vfmin VRR_VVV0UUU 984 + e7ef vfmax VRR_VVV0UUU 985 + e7f0 vavgl VRR_VVV0U 986 + e7f1 vacc VRR_VVV0U 987 + e7f2 vavg VRR_VVV0U 988 + e7f3 va VRR_VVV0U 989 + e7f5 vscbi VRR_VVV0U 990 + e7f7 vs VRR_VVV0U 991 + e7f8 vceq VRR_VVV0U0U 992 + e7f9 vchl VRR_VVV0U0U 993 + e7fb vch VRR_VVV0U0U 994 + e7fc vmnl VRR_VVV0U 995 + e7fd vmxl VRR_VVV0U 996 + e7fe vmn VRR_VVV0U 997 + e7ff vmx VRR_VVV0U 998 + e8 mvcin SS_L0RDRD 999 + e9 pka SS_L2RDRD 1000 + ea unpka SS_L0RDRD 1001 + eb04 lmg RSY_RRRD 1002 + eb0a srag RSY_RRRD 1003 + eb0b slag RSY_RRRD 1004 + eb0c srlg RSY_RRRD 1005 + eb0d sllg RSY_RRRD 1006 + eb0f tracg RSY_RRRD 1007 + eb14 csy RSY_RRRD 1008 + eb17 stcctm RSY_RURD 1009 + eb1c rllg RSY_RRRD 1010 + eb1d rll RSY_RRRD 1011 + eb20 clmh RSY_RURD 1012 + eb21 clmy RSY_RURD 1013 + eb23 clt RSY_RURD 1014 + eb24 stmg RSY_RRRD 1015 + eb25 stctg RSY_CCRD 1016 + eb26 stmh RSY_RRRD 1017 + eb2b clgt RSY_RURD 1018 + eb2c stcmh RSY_RURD 1019 + eb2d stcmy RSY_RURD 1020 + eb2f lctlg 
RSY_CCRD 1021 + eb30 csg RSY_RRRD 1022 + eb31 cdsy RSY_RRRD 1023 + eb3e cdsg RSY_RRRD 1024 + eb44 bxhg RSY_RRRD 1025 + eb45 bxleg RSY_RRRD 1026 + eb4c ecag RSY_RRRD 1027 + eb51 tmy SIY_URD 1028 + eb52 mviy SIY_URD 1029 + eb54 niy SIY_URD 1030 + eb55 cliy SIY_URD 1031 + eb56 oiy SIY_URD 1032 + eb57 xiy SIY_URD 1033 + eb60 lric RSY_RDRU 1034 + eb61 stric RSY_RDRU 1035 + eb62 mric RSY_RDRU 1036 + eb6a asi SIY_IRD 1037 + eb6e alsi SIY_IRD 1038 + eb7a agsi SIY_IRD 1039 + eb7e algsi SIY_IRD 1040 + eb80 icmh RSY_RURD 1041 + eb81 icmy RSY_RURD 1042 + eb8e mvclu RSY_RRRD 1043 + eb8f clclu RSY_RRRD 1044 + eb90 stmy RSY_RRRD 1045 + eb96 lmh RSY_RRRD 1046 + eb98 lmy RSY_RRRD 1047 + eb9a lamy RSY_AARD 1048 + eb9b stamy RSY_AARD 1049 + ebc0 tp RSL_R0RD 1050 + ebd0 pcistb RSY_RRRD 1051 + ebd1 sic RSY_RRRD 1052 + ebdc srak RSY_RRRD 1053 + ebdd slak RSY_RRRD 1054 + ebde srlk RSY_RRRD 1055 + ebdf sllk RSY_RRRD 1056 + ebe0 locfh RSY_RURD2 1057 + ebe1 stocfh RSY_RURD2 1058 + ebe2 locg RSY_RURD2 1059 + ebe3 stocg RSY_RURD2 1060 + ebe4 lang RSY_RRRD 1061 + ebe6 laog RSY_RRRD 1062 + ebe7 laxg RSY_RRRD 1063 + ebe8 laag RSY_RRRD 1064 + ebea laalg RSY_RRRD 1065 + ebf2 loc RSY_RURD2 1066 + ebf3 stoc RSY_RURD2 1067 + ebf4 lan RSY_RRRD 1068 + ebf6 lao RSY_RRRD 1069 + ebf7 lax RSY_RRRD 1070 + ebf8 laa RSY_RRRD 1071 + ebfa laal RSY_RRRD 1072 + ec42 lochi RIE_RUI0 1073 + ec44 brxhg RIE_RRP 1074 + ec45 brxlg RIE_RRP 1075 + ec46 locghi RIE_RUI0 1076 + ec4e lochhi RIE_RUI0 1077 + ec51 risblg RIE_RRUUU 1078 + ec54 rnsbg RIE_RRUUU 1079 + ec55 risbg RIE_RRUUU 1080 + ec56 rosbg RIE_RRUUU 1081 + ec57 rxsbg RIE_RRUUU 1082 + ec59 risbgn RIE_RRUUU 1083 + ec5d risbhg RIE_RRUUU 1084 + ec64 cgrj RIE_RRPU 1085 + ec65 clgrj RIE_RRPU 1086 + ec70 cgit RIE_R0IU 1087 + ec71 clgit RIE_R0UU 1088 + ec72 cit RIE_R0IU 1089 + ec73 clfit RIE_R0UU 1090 + ec76 crj RIE_RRPU 1091 + ec77 clrj RIE_RRPU 1092 + ec7c cgij RIE_RUPI 1093 + ec7d clgij RIE_RUPU 1094 + ec7e cij RIE_RUPI 1095 + ec7f clij RIE_RUPU 1096 + ecd8 ahik 
RIE_RRI0 1097 + ecd9 aghik RIE_RRI0 1098 + ecda alhsik RIE_RRI0 1099 + ecdb alghsik RIE_RRI0 1100 + ece4 cgrb RRS_RRRDU 1101 + ece5 clgrb RRS_RRRDU 1102 + ecf6 crb RRS_RRRDU 1103 + ecf7 clrb RRS_RRRDU 1104 + ecfc cgib RIS_RURDI 1105 + ecfd clgib RIS_RURDU 1106 + ecfe cib RIS_RURDI 1107 + ecff clib RIS_RURDU 1108 + ed04 ldeb RXE_FRRD 1109 + ed05 lxdb RXE_FRRD 1110 + ed06 lxeb RXE_FRRD 1111 + ed07 mxdb RXE_FRRD 1112 + ed08 keb RXE_FRRD 1113 + ed09 ceb RXE_FRRD 1114 + ed0a aeb RXE_FRRD 1115 + ed0b seb RXE_FRRD 1116 + ed0c mdeb RXE_FRRD 1117 + ed0d deb RXE_FRRD 1118 + ed0e maeb RXF_FRRDF 1119 + ed0f mseb RXF_FRRDF 1120 + ed10 tceb RXE_FRRD 1121 + ed11 tcdb RXE_FRRD 1122 + ed12 tcxb RXE_FRRD 1123 + ed14 sqeb RXE_FRRD 1124 + ed15 sqdb RXE_FRRD 1125 + ed17 meeb RXE_FRRD 1126 + ed18 kdb RXE_FRRD 1127 + ed19 cdb RXE_FRRD 1128 + ed1a adb RXE_FRRD 1129 + ed1b sdb RXE_FRRD 1130 + ed1c mdb RXE_FRRD 1131 + ed1d ddb RXE_FRRD 1132 + ed1e madb RXF_FRRDF 1133 + ed1f msdb RXF_FRRDF 1134 + ed24 lde RXE_FRRD 1135 + ed25 lxd RXE_FRRD 1136 + ed26 lxe RXE_FRRD 1137 + ed2e mae RXF_FRRDF 1138 + ed2f mse RXF_FRRDF 1139 + ed34 sqe RXE_FRRD 1140 + ed35 sqd RXE_FRRD 1141 + ed37 mee RXE_FRRD 1142 + ed38 mayl RXF_FRRDF 1143 + ed39 myl RXF_FRRDF 1144 + ed3a may RXF_FRRDF 1145 + ed3b my RXF_FRRDF 1146 + ed3c mayh RXF_FRRDF 1147 + ed3d myh RXF_FRRDF 1148 + ed3e mad RXF_FRRDF 1149 + ed3f msd RXF_FRRDF 1150 + ed40 sldt RXF_FRRDF 1151 + ed41 srdt RXF_FRRDF 1152 + ed48 slxt RXF_FRRDF 1153 + ed49 srxt RXF_FRRDF 1154 + ed50 tdcet RXE_FRRD 1155 + ed51 tdget RXE_FRRD 1156 + ed54 tdcdt RXE_FRRD 1157 + ed55 tdgdt RXE_FRRD 1158 + ed58 tdcxt RXE_FRRD 1159 + ed59 tdgxt RXE_FRRD 1160 + ed64 ley RXY_FRRD 1161 + ed65 ldy RXY_FRRD 1162 + ed66 stey RXY_FRRD 1163 + ed67 stdy RXY_FRRD 1164 + eda8 czdt RSL_LRDFU 1165 + eda9 czxt RSL_LRDFU 1166 + edaa cdzt RSL_LRDFU 1167 + edab cxzt RSL_LRDFU 1168 + edac cpdt RSL_LRDFU 1169 + edad cpxt RSL_LRDFU 1170 + edae cdpt RSL_LRDFU 1171 + edaf cxpt RSL_LRDFU 1172 + ee plo 
SS_RRRDRD2 1173 + ef lmd SS_RRRDRD3 1174 + f0 srp SS_LIRDRD 1175 + f1 mvo SS_LLRDRD 1176 + f2 pack SS_LLRDRD 1177 + f3 unpk SS_LLRDRD 1178 + f8 zap SS_LLRDRD 1179 + f9 cp SS_LLRDRD 1180 + fa ap SS_LLRDRD 1181 + fb sp SS_LLRDRD 1182 + fc mp SS_LLRDRD 1183 + fd dp SS_LLRDRD
+8 -8
drivers/s390/block/dasd_eer.c
··· 296 296 { 297 297 struct dasd_ccw_req *temp_cqr; 298 298 int data_size; 299 - struct timeval tv; 299 + struct timespec64 ts; 300 300 struct dasd_eer_header header; 301 301 unsigned long flags; 302 302 struct eerbuffer *eerb; ··· 310 310 311 311 header.total_size = sizeof(header) + data_size + 4; /* "EOR" */ 312 312 header.trigger = trigger; 313 - do_gettimeofday(&tv); 314 - header.tv_sec = tv.tv_sec; 315 - header.tv_usec = tv.tv_usec; 313 + ktime_get_real_ts64(&ts); 314 + header.tv_sec = ts.tv_sec; 315 + header.tv_usec = ts.tv_nsec / NSEC_PER_USEC; 316 316 strncpy(header.busid, dev_name(&device->cdev->dev), 317 317 DASD_EER_BUSID_SIZE); 318 318 ··· 340 340 { 341 341 int data_size; 342 342 int snss_rc; 343 - struct timeval tv; 343 + struct timespec64 ts; 344 344 struct dasd_eer_header header; 345 345 unsigned long flags; 346 346 struct eerbuffer *eerb; ··· 353 353 354 354 header.total_size = sizeof(header) + data_size + 4; /* "EOR" */ 355 355 header.trigger = DASD_EER_STATECHANGE; 356 - do_gettimeofday(&tv); 357 - header.tv_sec = tv.tv_sec; 358 - header.tv_usec = tv.tv_usec; 356 + ktime_get_real_ts64(&ts); 357 + header.tv_sec = ts.tv_sec; 358 + header.tv_usec = ts.tv_nsec / NSEC_PER_USEC; 359 359 strncpy(header.busid, dev_name(&device->cdev->dev), 360 360 DASD_EER_BUSID_SIZE); 361 361
-16
drivers/s390/block/dasd_int.h
··· 96 96 d_data); \ 97 97 } while(0) 98 98 99 - #define DBF_DEV_EXC(d_level, d_device, d_str, d_data...) \ 100 - do { \ 101 - debug_sprintf_exception(d_device->debug_area, \ 102 - d_level, \ 103 - d_str "\n", \ 104 - d_data); \ 105 - } while(0) 106 - 107 99 #define DBF_EVENT(d_level, d_str, d_data...)\ 108 100 do { \ 109 101 debug_sprintf_event(dasd_debug_area, \ ··· 113 121 "0.%x.%04x " d_str "\n", \ 114 122 __dev_id.ssid, __dev_id.devno, d_data); \ 115 123 } while (0) 116 - 117 - #define DBF_EXC(d_level, d_str, d_data...)\ 118 - do { \ 119 - debug_sprintf_exception(dasd_debug_area, \ 120 - d_level,\ 121 - d_str "\n", \ 122 - d_data); \ 123 - } while(0) 124 124 125 125 /* limit size for an errorstring */ 126 126 #define ERRORLENGTH 30
+1 -7
drivers/s390/block/scm_blk.h
··· 56 56 57 57 static inline void SCM_LOG_HEX(int level, void *data, int length) 58 58 { 59 - if (!debug_level_enabled(scm_debug, level)) 60 - return; 61 - while (length > 0) { 62 - debug_event(scm_debug, level, data, length); 63 - length -= scm_debug->buf_size; 64 - data += scm_debug->buf_size; 65 - } 59 + debug_event(scm_debug, level, data, length); 66 60 } 67 61 68 62 static inline void SCM_LOG_STATE(int level, struct scm_device *scmdev)
+2 -5
drivers/s390/char/sclp_con.c
··· 211 211 /* Setup timer to output current console buffer after 1/10 second */ 212 212 if (sclp_conbuf != NULL && sclp_chars_in_buffer(sclp_conbuf) != 0 && 213 213 !timer_pending(&sclp_con_timer)) { 214 - init_timer(&sclp_con_timer); 215 - sclp_con_timer.function = sclp_console_timeout; 216 - sclp_con_timer.data = 0UL; 217 - sclp_con_timer.expires = jiffies + HZ/10; 218 - add_timer(&sclp_con_timer); 214 + setup_timer(&sclp_con_timer, sclp_console_timeout, 0UL); 215 + mod_timer(&sclp_con_timer, jiffies + HZ / 10); 219 216 } 220 217 out: 221 218 spin_unlock_irqrestore(&sclp_con_lock, flags);
+2 -5
drivers/s390/char/sclp_tty.c
··· 218 218 /* Setup timer to output current console buffer after 1/10 second */ 219 219 if (sclp_ttybuf && sclp_chars_in_buffer(sclp_ttybuf) && 220 220 !timer_pending(&sclp_tty_timer)) { 221 - init_timer(&sclp_tty_timer); 222 - sclp_tty_timer.function = sclp_tty_timeout; 223 - sclp_tty_timer.data = 0UL; 224 - sclp_tty_timer.expires = jiffies + HZ/10; 225 - add_timer(&sclp_tty_timer); 221 + setup_timer(&sclp_tty_timer, sclp_tty_timeout, 0UL); 222 + mod_timer(&sclp_tty_timer, jiffies + HZ / 10); 226 223 } 227 224 spin_unlock_irqrestore(&sclp_tty_lock, flags); 228 225 out:
+1 -2
drivers/s390/char/tape_class.c
··· 68 68 69 69 tcd->char_device->owner = fops->owner; 70 70 tcd->char_device->ops = fops; 71 - tcd->char_device->dev = dev; 72 71 73 - rc = cdev_add(tcd->char_device, tcd->char_device->dev, 1); 72 + rc = cdev_add(tcd->char_device, dev, 1); 74 73 if (rc) 75 74 goto fail_with_cdev; 76 75
+1 -2
drivers/s390/char/vmlogrdr.c
··· 812 812 } 813 813 vmlogrdr_cdev->owner = THIS_MODULE; 814 814 vmlogrdr_cdev->ops = &vmlogrdr_fops; 815 - vmlogrdr_cdev->dev = dev; 816 - rc = cdev_add(vmlogrdr_cdev, vmlogrdr_cdev->dev, MAXMINOR); 815 + rc = cdev_add(vmlogrdr_cdev, dev, MAXMINOR); 817 816 if (!rc) 818 817 return 0; 819 818
+5 -6
drivers/s390/char/vmur.c
··· 110 110 mutex_init(&urd->io_mutex); 111 111 init_waitqueue_head(&urd->wait); 112 112 spin_lock_init(&urd->open_lock); 113 - atomic_set(&urd->ref_count, 1); 113 + refcount_set(&urd->ref_count, 1); 114 114 urd->cdev = cdev; 115 115 get_device(&cdev->dev); 116 116 return urd; ··· 126 126 127 127 static void urdev_get(struct urdev *urd) 128 128 { 129 - atomic_inc(&urd->ref_count); 129 + refcount_inc(&urd->ref_count); 130 130 } 131 131 132 132 static struct urdev *urdev_get_from_cdev(struct ccw_device *cdev) ··· 159 159 160 160 static void urdev_put(struct urdev *urd) 161 161 { 162 - if (atomic_dec_and_test(&urd->ref_count)) 162 + if (refcount_dec_and_test(&urd->ref_count)) 163 163 urdev_free(urd); 164 164 } 165 165 ··· 892 892 } 893 893 894 894 urd->char_device->ops = &ur_fops; 895 - urd->char_device->dev = MKDEV(major, minor); 896 895 urd->char_device->owner = ur_fops.owner; 897 896 898 - rc = cdev_add(urd->char_device, urd->char_device->dev, 1); 897 + rc = cdev_add(urd->char_device, MKDEV(major, minor), 1); 899 898 if (rc) 900 899 goto fail_free_cdev; 901 900 if (urd->cdev->id.cu_type == READER_PUNCH_DEVTYPE) { ··· 945 946 rc = -EBUSY; 946 947 goto fail_urdev_put; 947 948 } 948 - if (!force && (atomic_read(&urd->ref_count) > 2)) { 949 + if (!force && (refcount_read(&urd->ref_count) > 2)) { 949 950 /* There is still a user of urd (e.g. ur_open) */ 950 951 TRACE("ur_set_offline: BUSY\n"); 951 952 rc = -EBUSY;
+3 -1
drivers/s390/char/vmur.h
··· 12 12 #ifndef _VMUR_H_ 13 13 #define _VMUR_H_ 14 14 15 + #include <linux/refcount.h> 16 + 15 17 #define DEV_CLASS_UR_I 0x20 /* diag210 unit record input device class */ 16 18 #define DEV_CLASS_UR_O 0x10 /* diag210 unit record output device class */ 17 19 /* ··· 72 70 size_t reclen; /* Record length for *write* CCWs */ 73 71 int class; /* VM device class */ 74 72 int io_request_rc; /* return code from I/O request */ 75 - atomic_t ref_count; /* reference counter */ 73 + refcount_t ref_count; /* reference counter */ 76 74 wait_queue_head_t wait; /* wait queue to serialize open */ 77 75 int open_flag; /* "urdev is open" flag */ 78 76 spinlock_t open_lock; /* serialize critical sections */
+6
drivers/s390/cio/ccwgroup.c
··· 373 373 rc = -EINVAL; 374 374 goto error; 375 375 } 376 + /* Check if the devices are bound to the required ccw driver. */ 377 + if (gdev->count && gdrv && gdrv->ccw_driver && 378 + gdev->cdev[0]->drv != gdrv->ccw_driver) { 379 + rc = -EINVAL; 380 + goto error; 381 + } 376 382 377 383 dev_set_name(&gdev->dev, "%s", dev_name(&gdev->cdev[0]->dev)); 378 384 gdev->dev.groups = ccwgroup_attr_groups;
+1 -5
drivers/s390/cio/chsc_sch.c
··· 43 43 44 44 static void CHSC_LOG_HEX(int level, void *data, int length) 45 45 { 46 - while (length > 0) { 47 - debug_event(chsc_debug_log_id, level, data, length); 48 - length -= chsc_debug_log_id->buf_size; 49 - data += chsc_debug_log_id->buf_size; 50 - } 46 + debug_event(chsc_debug_log_id, level, data, length); 51 47 } 52 48 53 49 MODULE_AUTHOR("IBM Corporation");
+1 -7
drivers/s390/cio/cio_debug.h
··· 23 23 24 24 static inline void CIO_HEX_EVENT(int level, void *data, int length) 25 25 { 26 - if (unlikely(!cio_debug_trace_id)) 27 - return; 28 - while (length > 0) { 29 - debug_event(cio_debug_trace_id, level, data, length); 30 - length -= cio_debug_trace_id->buf_size; 31 - data += cio_debug_trace_id->buf_size; 32 - } 26 + debug_event(cio_debug_trace_id, level, data, length); 33 27 } 34 28 35 29 #endif
+105 -175
drivers/s390/cio/cmf.c
··· 58 58 59 59 /* indices for READCMB */ 60 60 enum cmb_index { 61 + avg_utilization = -1, 61 62 /* basic and exended format: */ 62 - cmb_ssch_rsch_count, 63 + cmb_ssch_rsch_count = 0, 63 64 cmb_sample_count, 64 65 cmb_device_connect_time, 65 66 cmb_function_pending_time, ··· 216 215 unsigned long address; 217 216 wait_queue_head_t wait; 218 217 int ret; 219 - struct kref kref; 220 218 }; 221 219 222 - static void cmf_set_schib_release(struct kref *kref) 223 - { 224 - struct set_schib_struct *set_data; 225 - 226 - set_data = container_of(kref, struct set_schib_struct, kref); 227 - kfree(set_data); 228 - } 229 - 230 220 #define CMF_PENDING 1 221 + #define SET_SCHIB_TIMEOUT (10 * HZ) 231 222 232 223 static int set_schib_wait(struct ccw_device *cdev, u32 mme, 233 - int mbfc, unsigned long address) 224 + int mbfc, unsigned long address) 234 225 { 235 - struct set_schib_struct *set_data; 236 - int ret; 226 + struct set_schib_struct set_data; 227 + int ret = -ENODEV; 237 228 238 229 spin_lock_irq(cdev->ccwlock); 239 - if (!cdev->private->cmb) { 240 - ret = -ENODEV; 230 + if (!cdev->private->cmb) 241 231 goto out; 242 - } 243 - set_data = kzalloc(sizeof(struct set_schib_struct), GFP_ATOMIC); 244 - if (!set_data) { 245 - ret = -ENOMEM; 246 - goto out; 247 - } 248 - init_waitqueue_head(&set_data->wait); 249 - kref_init(&set_data->kref); 250 - set_data->mme = mme; 251 - set_data->mbfc = mbfc; 252 - set_data->address = address; 253 232 254 233 ret = set_schib(cdev, mme, mbfc, address); 255 234 if (ret != -EBUSY) 256 - goto out_put; 235 + goto out; 257 236 258 - if (cdev->private->state != DEV_STATE_ONLINE) { 259 - /* if the device is not online, don't even try again */ 260 - ret = -EBUSY; 261 - goto out_put; 262 - } 237 + /* if the device is not online, don't even try again */ 238 + if (cdev->private->state != DEV_STATE_ONLINE) 239 + goto out; 240 + 241 + init_waitqueue_head(&set_data.wait); 242 + set_data.mme = mme; 243 + set_data.mbfc = mbfc; 244 + set_data.address = 
address; 245 + set_data.ret = CMF_PENDING; 263 246 264 247 cdev->private->state = DEV_STATE_CMFCHANGE; 265 - set_data->ret = CMF_PENDING; 266 - cdev->private->cmb_wait = set_data; 267 - 248 + cdev->private->cmb_wait = &set_data; 268 249 spin_unlock_irq(cdev->ccwlock); 269 - if (wait_event_interruptible(set_data->wait, 270 - set_data->ret != CMF_PENDING)) { 271 - spin_lock_irq(cdev->ccwlock); 272 - if (set_data->ret == CMF_PENDING) { 273 - set_data->ret = -ERESTARTSYS; 250 + 251 + ret = wait_event_interruptible_timeout(set_data.wait, 252 + set_data.ret != CMF_PENDING, 253 + SET_SCHIB_TIMEOUT); 254 + spin_lock_irq(cdev->ccwlock); 255 + if (ret <= 0) { 256 + if (set_data.ret == CMF_PENDING) { 257 + set_data.ret = (ret == 0) ? -ETIME : ret; 274 258 if (cdev->private->state == DEV_STATE_CMFCHANGE) 275 259 cdev->private->state = DEV_STATE_ONLINE; 276 260 } 277 - spin_unlock_irq(cdev->ccwlock); 278 261 } 279 - spin_lock_irq(cdev->ccwlock); 280 262 cdev->private->cmb_wait = NULL; 281 - ret = set_data->ret; 282 - out_put: 283 - kref_put(&set_data->kref, cmf_set_schib_release); 263 + ret = set_data.ret; 284 264 out: 285 265 spin_unlock_irq(cdev->ccwlock); 286 266 return ret; ··· 269 287 270 288 void retry_set_schib(struct ccw_device *cdev) 271 289 { 272 - struct set_schib_struct *set_data; 290 + struct set_schib_struct *set_data = cdev->private->cmb_wait; 273 291 274 - set_data = cdev->private->cmb_wait; 275 - if (!set_data) { 276 - WARN_ON(1); 292 + if (!set_data) 277 293 return; 278 - } 279 - kref_get(&set_data->kref); 294 + 280 295 set_data->ret = set_schib(cdev, set_data->mme, set_data->mbfc, 281 296 set_data->address); 282 297 wake_up(&set_data->wait); 283 - kref_put(&set_data->kref, cmf_set_schib_release); 284 298 } 285 299 286 300 static int cmf_copy_block(struct ccw_device *cdev) 287 301 { 288 - struct subchannel *sch; 289 - void *reference_buf; 290 - void *hw_block; 302 + struct subchannel *sch = to_subchannel(cdev->dev.parent); 291 303 struct cmb_data *cmb_data; 
292 - 293 - sch = to_subchannel(cdev->dev.parent); 304 + void *hw_block; 294 305 295 306 if (cio_update_schib(sch)) 296 307 return -ENODEV; ··· 298 323 } 299 324 cmb_data = cdev->private->cmb; 300 325 hw_block = cmb_data->hw_block; 301 - if (!memcmp(cmb_data->last_block, hw_block, cmb_data->size)) 302 - /* No need to copy. */ 303 - return 0; 304 - reference_buf = kzalloc(cmb_data->size, GFP_ATOMIC); 305 - if (!reference_buf) 306 - return -ENOMEM; 307 - /* Ensure consistency of block copied from hardware. */ 308 - do { 309 - memcpy(cmb_data->last_block, hw_block, cmb_data->size); 310 - memcpy(reference_buf, hw_block, cmb_data->size); 311 - } while (memcmp(cmb_data->last_block, reference_buf, cmb_data->size)); 326 + memcpy(cmb_data->last_block, hw_block, cmb_data->size); 312 327 cmb_data->last_update = get_tod_clock(); 313 - kfree(reference_buf); 314 328 return 0; 315 329 } 316 330 317 331 struct copy_block_struct { 318 332 wait_queue_head_t wait; 319 333 int ret; 320 - struct kref kref; 321 334 }; 322 - 323 - static void cmf_copy_block_release(struct kref *kref) 324 - { 325 - struct copy_block_struct *copy_block; 326 - 327 - copy_block = container_of(kref, struct copy_block_struct, kref); 328 - kfree(copy_block); 329 - } 330 335 331 336 static int cmf_cmb_copy_wait(struct ccw_device *cdev) 332 337 { 333 - struct copy_block_struct *copy_block; 334 - int ret; 335 - unsigned long flags; 338 + struct copy_block_struct copy_block; 339 + int ret = -ENODEV; 336 340 337 - spin_lock_irqsave(cdev->ccwlock, flags); 338 - if (!cdev->private->cmb) { 339 - ret = -ENODEV; 341 + spin_lock_irq(cdev->ccwlock); 342 + if (!cdev->private->cmb) 340 343 goto out; 341 - } 342 - copy_block = kzalloc(sizeof(struct copy_block_struct), GFP_ATOMIC); 343 - if (!copy_block) { 344 - ret = -ENOMEM; 345 - goto out; 346 - } 347 - init_waitqueue_head(&copy_block->wait); 348 - kref_init(&copy_block->kref); 349 344 350 345 ret = cmf_copy_block(cdev); 351 346 if (ret != -EBUSY) 352 - goto out_put; 347 + 
goto out; 353 348 354 - if (cdev->private->state != DEV_STATE_ONLINE) { 355 - ret = -EBUSY; 356 - goto out_put; 357 - } 349 + if (cdev->private->state != DEV_STATE_ONLINE) 350 + goto out; 351 + 352 + init_waitqueue_head(&copy_block.wait); 353 + copy_block.ret = CMF_PENDING; 358 354 359 355 cdev->private->state = DEV_STATE_CMFUPDATE; 360 - copy_block->ret = CMF_PENDING; 361 - cdev->private->cmb_wait = copy_block; 356 + cdev->private->cmb_wait = &copy_block; 357 + spin_unlock_irq(cdev->ccwlock); 362 358 363 - spin_unlock_irqrestore(cdev->ccwlock, flags); 364 - if (wait_event_interruptible(copy_block->wait, 365 - copy_block->ret != CMF_PENDING)) { 366 - spin_lock_irqsave(cdev->ccwlock, flags); 367 - if (copy_block->ret == CMF_PENDING) { 368 - copy_block->ret = -ERESTARTSYS; 359 + ret = wait_event_interruptible(copy_block.wait, 360 + copy_block.ret != CMF_PENDING); 361 + spin_lock_irq(cdev->ccwlock); 362 + if (ret) { 363 + if (copy_block.ret == CMF_PENDING) { 364 + copy_block.ret = -ERESTARTSYS; 369 365 if (cdev->private->state == DEV_STATE_CMFUPDATE) 370 366 cdev->private->state = DEV_STATE_ONLINE; 371 367 } 372 - spin_unlock_irqrestore(cdev->ccwlock, flags); 373 368 } 374 - spin_lock_irqsave(cdev->ccwlock, flags); 375 369 cdev->private->cmb_wait = NULL; 376 - ret = copy_block->ret; 377 - out_put: 378 - kref_put(&copy_block->kref, cmf_copy_block_release); 370 + ret = copy_block.ret; 379 371 out: 380 - spin_unlock_irqrestore(cdev->ccwlock, flags); 372 + spin_unlock_irq(cdev->ccwlock); 381 373 return ret; 382 374 } 383 375 384 376 void cmf_retry_copy_block(struct ccw_device *cdev) 385 377 { 386 - struct copy_block_struct *copy_block; 378 + struct copy_block_struct *copy_block = cdev->private->cmb_wait; 387 379 388 - copy_block = cdev->private->cmb_wait; 389 - if (!copy_block) { 390 - WARN_ON(1); 380 + if (!copy_block) 391 381 return; 392 - } 393 - kref_get(&copy_block->kref); 382 + 394 383 copy_block->ret = cmf_copy_block(cdev); 395 384 wake_up(&copy_block->wait); 396 - 
kref_put(&copy_block->kref, cmf_copy_block_release); 397 385 } 398 386 399 387 static void cmf_generic_reset(struct ccw_device *cdev) ··· 588 650 return set_schib_wait(cdev, mme, 0, offset); 589 651 } 590 652 653 + /* calculate utilization in 0.1 percent units */ 654 + static u64 __cmb_utilization(u64 device_connect_time, u64 function_pending_time, 655 + u64 device_disconnect_time, u64 start_time) 656 + { 657 + u64 utilization, elapsed_time; 658 + 659 + utilization = time_to_nsec(device_connect_time + 660 + function_pending_time + 661 + device_disconnect_time); 662 + 663 + elapsed_time = get_tod_clock() - start_time; 664 + elapsed_time = tod_to_ns(elapsed_time); 665 + elapsed_time /= 1000; 666 + 667 + return elapsed_time ? (utilization / elapsed_time) : 0; 668 + } 669 + 591 670 static u64 read_cmb(struct ccw_device *cdev, int index) 592 671 { 593 - struct cmb *cmb; 594 - u32 val; 595 - int ret; 672 + struct cmb_data *cmb_data; 596 673 unsigned long flags; 597 - 598 - ret = cmf_cmb_copy_wait(cdev); 599 - if (ret < 0) 600 - return 0; 674 + struct cmb *cmb; 675 + u64 ret = 0; 676 + u32 val; 601 677 602 678 spin_lock_irqsave(cdev->ccwlock, flags); 603 - if (!cdev->private->cmb) { 604 - ret = 0; 679 + cmb_data = cdev->private->cmb; 680 + if (!cmb_data) 605 681 goto out; 606 - } 607 - cmb = ((struct cmb_data *)cdev->private->cmb)->last_block; 608 682 683 + cmb = cmb_data->hw_block; 609 684 switch (index) { 685 + case avg_utilization: 686 + ret = __cmb_utilization(cmb->device_connect_time, 687 + cmb->function_pending_time, 688 + cmb->device_disconnect_time, 689 + cdev->private->cmb_start_time); 690 + goto out; 610 691 case cmb_ssch_rsch_count: 611 692 ret = cmb->ssch_rsch_count; 612 693 goto out; ··· 648 691 val = cmb->device_active_only_time; 649 692 break; 650 693 default: 651 - ret = 0; 652 694 goto out; 653 695 } 654 696 ret = time_to_avg_nsec(val, cmb->sample_count); ··· 685 729 /* we only know values before device_busy_time */ 686 730 data->size = offsetof(struct 
cmbdata, device_busy_time); 687 731 688 - /* convert to nanoseconds */ 689 - data->elapsed_time = (time * 1000) >> 12; 732 + data->elapsed_time = tod_to_ns(time); 690 733 691 734 /* copy data to new structure */ 692 735 data->ssch_rsch_count = cmb->ssch_rsch_count; ··· 859 904 return set_schib_wait(cdev, mme, 1, mba); 860 905 } 861 906 862 - 863 907 static u64 read_cmbe(struct ccw_device *cdev, int index) 864 908 { 865 - struct cmbe *cmb; 866 909 struct cmb_data *cmb_data; 867 - u32 val; 868 - int ret; 869 910 unsigned long flags; 870 - 871 - ret = cmf_cmb_copy_wait(cdev); 872 - if (ret < 0) 873 - return 0; 911 + struct cmbe *cmb; 912 + u64 ret = 0; 913 + u32 val; 874 914 875 915 spin_lock_irqsave(cdev->ccwlock, flags); 876 916 cmb_data = cdev->private->cmb; 877 - if (!cmb_data) { 878 - ret = 0; 917 + if (!cmb_data) 879 918 goto out; 880 - } 881 - cmb = cmb_data->last_block; 882 919 920 + cmb = cmb_data->hw_block; 883 921 switch (index) { 922 + case avg_utilization: 923 + ret = __cmb_utilization(cmb->device_connect_time, 924 + cmb->function_pending_time, 925 + cmb->device_disconnect_time, 926 + cdev->private->cmb_start_time); 927 + goto out; 884 928 case cmb_ssch_rsch_count: 885 929 ret = cmb->ssch_rsch_count; 886 930 goto out; ··· 908 954 val = cmb->initial_command_response_time; 909 955 break; 910 956 default: 911 - ret = 0; 912 957 goto out; 913 958 } 914 959 ret = time_to_avg_nsec(val, cmb->sample_count); ··· 944 991 /* we only know values before device_busy_time */ 945 992 data->size = offsetof(struct cmbdata, device_busy_time); 946 993 947 - /* conver to nanoseconds */ 948 - data->elapsed_time = (time * 1000) >> 12; 994 + data->elapsed_time = tod_to_ns(time); 949 995 950 996 cmb = cmb_data->last_block; 951 997 /* copy data to new structure */ ··· 997 1045 struct device_attribute *attr, 998 1046 char *buf) 999 1047 { 1000 - struct ccw_device *cdev; 1001 - long interval; 1048 + struct ccw_device *cdev = to_ccwdev(dev); 1002 1049 unsigned long count; 1003 - 
struct cmb_data *cmb_data; 1050 + long interval; 1004 1051 1005 - cdev = to_ccwdev(dev); 1006 1052 count = cmf_read(cdev, cmb_sample_count); 1007 1053 spin_lock_irq(cdev->ccwlock); 1008 - cmb_data = cdev->private->cmb; 1009 1054 if (count) { 1010 - interval = cmb_data->last_update - 1011 - cdev->private->cmb_start_time; 1012 - interval = (interval * 1000) >> 12; 1055 + interval = get_tod_clock() - cdev->private->cmb_start_time; 1056 + interval = tod_to_ns(interval); 1013 1057 interval /= count; 1014 1058 } else 1015 1059 interval = -1; ··· 1017 1069 struct device_attribute *attr, 1018 1070 char *buf) 1019 1071 { 1020 - struct cmbdata data; 1021 - u64 utilization; 1022 - unsigned long t, u; 1023 - int ret; 1072 + unsigned long u = cmf_read(to_ccwdev(dev), avg_utilization); 1024 1073 1025 - ret = cmf_readall(to_ccwdev(dev), &data); 1026 - if (ret == -EAGAIN || ret == -ENODEV) 1027 - /* No data (yet/currently) available to use for calculation. */ 1028 - return sprintf(buf, "n/a\n"); 1029 - else if (ret) 1030 - return ret; 1031 - 1032 - utilization = data.device_connect_time + 1033 - data.function_pending_time + 1034 - data.device_disconnect_time; 1035 - 1036 - /* calculate value in 0.1 percent units */ 1037 - t = data.elapsed_time / 1000; 1038 - u = utilization / t; 1039 - 1040 - return sprintf(buf, "%02ld.%01ld%%\n", u/ 10, u - (u/ 10) * 10); 1074 + return sprintf(buf, "%02lu.%01lu%%\n", u / 10, u % 10); 1041 1075 } 1042 1076 1043 1077 #define cmf_attr(name) \
+1 -7
drivers/s390/cio/eadm_sch.c
··· 43 43 44 44 static void EADM_LOG_HEX(int level, void *data, int length) 45 45 { 46 - if (!debug_level_enabled(eadm_debug, level)) 47 - return; 48 - while (length > 0) { 49 - debug_event(eadm_debug, level, data, length); 50 - length -= eadm_debug->buf_size; 51 - data += eadm_debug->buf_size; 52 - } 46 + debug_event(eadm_debug, level, data, length); 53 47 } 54 48 55 49 static void orb_init(union orb *orb)
+3 -15
drivers/s390/cio/qdio_debug.h
··· 34 34 35 35 static inline void DBF_HEX(void *addr, int len) 36 36 { 37 - while (len > 0) { 38 - debug_event(qdio_dbf_setup, DBF_ERR, addr, len); 39 - len -= qdio_dbf_setup->buf_size; 40 - addr += qdio_dbf_setup->buf_size; 41 - } 37 + debug_event(qdio_dbf_setup, DBF_ERR, addr, len); 42 38 } 43 39 44 40 #define DBF_ERROR(text...) \ ··· 46 50 47 51 static inline void DBF_ERROR_HEX(void *addr, int len) 48 52 { 49 - while (len > 0) { 50 - debug_event(qdio_dbf_error, DBF_ERR, addr, len); 51 - len -= qdio_dbf_error->buf_size; 52 - addr += qdio_dbf_error->buf_size; 53 - } 53 + debug_event(qdio_dbf_error, DBF_ERR, addr, len); 54 54 } 55 55 56 56 #define DBF_DEV_EVENT(level, device, text...) \ ··· 61 69 static inline void DBF_DEV_HEX(struct qdio_irq *dev, void *addr, 62 70 int len, int level) 63 71 { 64 - while (len > 0) { 65 - debug_event(dev->debug_area, level, addr, len); 66 - len -= dev->debug_area->buf_size; 67 - addr += dev->debug_area->buf_size; 68 - } 72 + debug_event(dev->debug_area, level, addr, len); 69 73 } 70 74 71 75 int qdio_allocate_dbf(struct qdio_initialize *init_data,
+3 -7
drivers/s390/cio/qdio_thinint.c
··· 57 57 int i; 58 58 59 59 for (i = 0; i < TIQDIO_NR_NONSHARED_IND; i++) 60 - if (!atomic_read(&q_indicators[i].count)) { 61 - atomic_set(&q_indicators[i].count, 1); 60 + if (!atomic_cmpxchg(&q_indicators[i].count, 0, 1)) 62 61 return &q_indicators[i].ind; 63 - } 64 62 65 63 /* use the shared indicator */ 66 64 atomic_inc(&q_indicators[TIQDIO_SHARED_IND].count); ··· 67 69 68 70 static void put_indicator(u32 *addr) 69 71 { 70 - int i; 72 + struct indicator_t *ind = container_of(addr, struct indicator_t, ind); 71 73 72 74 if (!addr) 73 75 return; 74 - i = ((unsigned long)addr - (unsigned long)q_indicators) / 75 - sizeof(struct indicator_t); 76 - atomic_dec(&q_indicators[i].count); 76 + atomic_dec(&ind->count); 77 77 } 78 78 79 79 void tiqdio_add_input_queues(struct qdio_irq *irq_ptr)
+19 -5
drivers/s390/cio/vfio_ccw_cp.c
··· 106 106 { 107 107 int ret = 0; 108 108 109 - if (!len || pa->pa_nr) 109 + if (!len) 110 + return 0; 111 + 112 + if (pa->pa_nr) 110 113 return -EINVAL; 111 114 112 115 pa->pa_iova = iova; ··· 333 330 { 334 331 struct ccw1 *ccw = chain->ch_ccw + idx; 335 332 333 + if (ccw_is_test(ccw) || ccw_is_noop(ccw) || ccw_is_tic(ccw)) 334 + return; 336 335 if (!ccw->count) 337 336 return; 338 337 ··· 507 502 508 503 ccw = chain->ch_ccw + idx; 509 504 505 + if (!ccw->count) { 506 + /* 507 + * We just want the translation result of any direct ccw 508 + * to be an IDA ccw, so let's add the IDA flag for it. 509 + * Although the flag will be ignored by firmware. 510 + */ 511 + ccw->flags |= CCW_FLAG_IDA; 512 + return 0; 513 + } 514 + 510 515 /* 511 516 * Pin data page(s) in memory. 512 517 * The number of pages actually is the count of the idaws which will be ··· 557 542 558 543 ccw = chain->ch_ccw + idx; 559 544 545 + if (!ccw->count) 546 + return 0; 547 + 560 548 /* Calculate size of idaws. */ 561 549 ret = copy_from_iova(cp->mdev, &idaw_iova, ccw->cda, sizeof(idaw_iova)); 562 550 if (ret) ··· 588 570 589 571 for (i = 0; i < idaw_nr; i++) { 590 572 idaw_iova = *(idaws + i); 591 - if (IS_ERR_VALUE(idaw_iova)) { 592 - ret = -EFAULT; 593 - goto out_free_idaws; 594 - } 595 573 596 574 ret = pfn_array_alloc_pin(pat->pat_pa + i, cp->mdev, 597 575 idaw_iova, 1);
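The vfio-ccw hunk above skips cda handling for no-op, test and tic CCWs and treats zero-count direct CCWs specially. A sketch of the guard logic; the struct layout follows the format-1 CCW, but treat the exact command-code values here as assumptions for illustration, not as taken from this diff:

```c
#include <stdint.h>

/* Minimal model of a channel command word (format-1-style layout). */
struct ccw1 {
    uint8_t  cmd_code;
    uint8_t  flags;
    uint16_t count;
    uint32_t cda;
};

/* Assumed encodings: low nibble 0 = test, 0x03 = no-op, 0x08 = tic. */
#define CCW_CMD_NOOP 0x03
#define CCW_CMD_TIC  0x08

static int ccw_is_test(const struct ccw1 *ccw) { return (ccw->cmd_code & 0x0f) == 0; }
static int ccw_is_noop(const struct ccw1 *ccw) { return ccw->cmd_code == CCW_CMD_NOOP; }
static int ccw_is_tic(const struct ccw1 *ccw)  { return ccw->cmd_code == CCW_CMD_TIC; }

/* Control CCWs carry no pinned data, so freeing their cda must be
 * skipped even when the cda field happens to be non-zero. */
static int ccw_needs_cda_free(const struct ccw1 *ccw)
{
    if (ccw_is_test(ccw) || ccw_is_noop(ccw) || ccw_is_tic(ccw))
        return 0;
    return ccw->count != 0;
}
```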
+43
drivers/s390/crypto/ap_asm.h
··· 117 117 return reg1; 118 118 } 119 119 120 + /* 121 + * union ap_qact_ap_info - used together with the 122 + * ap_qact() function to provide a convenient way 123 + * to handle the ap info needed by the qact function. 124 + */ 125 + union ap_qact_ap_info { 126 + unsigned long val; 127 + struct { 128 + unsigned int : 3; 129 + unsigned int mode : 3; 130 + unsigned int : 26; 131 + unsigned int cat : 8; 132 + unsigned int : 8; 133 + unsigned char ver[2]; 134 + }; 135 + }; 136 + 137 + /** 138 + * ap_qact(): Query AP compatibility type. 139 + * @qid: The AP queue number 140 + * @apinfo: On input the info about the AP queue. On output the 141 + * alternate AP queue info provided by the qact function 142 + * in GR2 is stored. 143 + * 144 + * Returns AP queue status. Check response_code field for failures. 145 + */ 146 + static inline struct ap_queue_status ap_qact(ap_qid_t qid, int ifbit, 147 + union ap_qact_ap_info *apinfo) 148 + { 149 + register unsigned long reg0 asm ("0") = qid | (5UL << 24) 150 + | ((ifbit & 0x01) << 22); 151 + register unsigned long reg1_in asm ("1") = apinfo->val; 152 + register struct ap_queue_status reg1_out asm ("1"); 153 + register unsigned long reg2 asm ("2") = 0; 154 + 155 + asm volatile( 156 + ".long 0xb2af0000" /* PQAP(QACT) */ 157 + : "+d" (reg0), "+d" (reg1_in), "=d" (reg1_out), "+d" (reg2) 158 + : : "cc"); 159 + apinfo->val = reg2; 160 + return reg1_out; 161 + } 162 + 120 163 /** 121 164 * ap_nqap(): Send message to adjunct processor queue. 122 165 * @qid: The AP queue number
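The new `ap_qact()` wrapper packs the queue id, the function code (5 for QACT) and the `ifbit` into general register 0 before issuing PQAP. The packing itself is plain arithmetic and can be sketched and checked off-host (function name invented; only the bit layout comes from the hunk):

```c
typedef unsigned short ap_qid_t;

/* Build the GR0 value for PQAP(QACT): queue id in the low half,
 * function code 5 starting at bit 24, ifbit at bit 22. */
static unsigned long pqap_qact_reg0(ap_qid_t qid, int ifbit)
{
    return qid | (5UL << 24) | ((unsigned long)(ifbit & 0x01) << 22);
}
```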
+65 -9
drivers/s390/crypto/ap_bus.c
··· 176 176 return test_facility(15); 177 177 } 178 178 179 + /* 180 + * ap_qact_available(): Test if the PQAP(QACT) subfunction is available. 181 + * 182 + * Returns 1 if the QACT subfunction is available. 183 + */ 184 + static inline int ap_qact_available(void) 185 + { 186 + if (ap_configuration) 187 + return ap_configuration->qact; 188 + return 0; 189 + } 190 + 179 191 /** 180 192 * ap_test_queue(): Test adjunct processor queue. 181 193 * @qid: The AP queue number ··· 1000 988 } 1001 989 1002 990 /* 991 + * This function checks the type and returns either 0 for not 992 + * supported or the highest compatible type value (which may 993 + * include the input type value). 994 + */ 995 + static int ap_get_compatible_type(ap_qid_t qid, int rawtype, unsigned int func) 996 + { 997 + int comp_type = 0; 998 + 999 + /* < CEX2A is not supported */ 1000 + if (rawtype < AP_DEVICE_TYPE_CEX2A) 1001 + return 0; 1002 + /* up to CEX6 known and fully supported */ 1003 + if (rawtype <= AP_DEVICE_TYPE_CEX6) 1004 + return rawtype; 1005 + /* 1006 + * unknown new type > CEX6, check for compatibility 1007 + * to the highest known and supported type which is 1008 + * currently CEX6 with the help of the QACT function. 
1009 + */ 1010 + if (ap_qact_available()) { 1011 + struct ap_queue_status status; 1012 + union ap_qact_ap_info apinfo = {0}; 1013 + 1014 + apinfo.mode = (func >> 26) & 0x07; 1015 + apinfo.cat = AP_DEVICE_TYPE_CEX6; 1016 + status = ap_qact(qid, 0, &apinfo); 1017 + if (status.response_code == AP_RESPONSE_NORMAL 1018 + && apinfo.cat >= AP_DEVICE_TYPE_CEX2A 1019 + && apinfo.cat <= AP_DEVICE_TYPE_CEX6) 1020 + comp_type = apinfo.cat; 1021 + } 1022 + if (!comp_type) 1023 + AP_DBF(DBF_WARN, "queue=%02x.%04x unable to map type %d\n", 1024 + AP_QID_CARD(qid), AP_QID_QUEUE(qid), rawtype); 1025 + else if (comp_type != rawtype) 1026 + AP_DBF(DBF_INFO, "queue=%02x.%04x map type %d to %d\n", 1027 + AP_QID_CARD(qid), AP_QID_QUEUE(qid), rawtype, comp_type); 1028 + return comp_type; 1029 + } 1030 + 1031 + /* 1003 1032 * helper function to be used with bus_find_dev 1004 1033 * matches for the card device with the given id 1005 1034 */ ··· 1067 1014 struct ap_card *ac; 1068 1015 struct device *dev; 1069 1016 ap_qid_t qid; 1070 - int depth = 0, type = 0; 1071 - unsigned int functions = 0; 1017 + int comp_type, depth = 0, type = 0; 1018 + unsigned int func = 0; 1072 1019 int rc, id, dom, borked, domains, defdomdevs = 0; 1073 1020 1074 1021 AP_DBF(DBF_DEBUG, "ap_scan_bus running\n"); ··· 1119 1066 } 1120 1067 continue; 1121 1068 } 1122 - rc = ap_query_queue(qid, &depth, &type, &functions); 1069 + rc = ap_query_queue(qid, &depth, &type, &func); 1123 1070 if (dev) { 1124 1071 spin_lock_bh(&aq->lock); 1125 1072 if (rc == -ENODEV || 1126 1073 /* adapter reconfiguration */ 1127 - (ac && ac->functions != functions)) 1074 + (ac && ac->functions != func)) 1128 1075 aq->state = AP_STATE_BORKED; 1129 1076 borked = aq->state == AP_STATE_BORKED; 1130 1077 spin_unlock_bh(&aq->lock); ··· 1140 1087 } 1141 1088 if (rc) 1142 1089 continue; 1143 - /* new queue device needed */ 1090 + /* a new queue device is needed, check out comp type */ 1091 + comp_type = ap_get_compatible_type(qid, type, func); 1092 + 
if (!comp_type) 1093 + continue; 1094 + /* maybe a card device needs to be created first */ 1144 1095 if (!ac) { 1145 - /* but first create the card device */ 1146 - ac = ap_card_create(id, depth, 1147 - type, functions); 1096 + ac = ap_card_create(id, depth, type, 1097 + comp_type, func); 1148 1098 if (!ac) 1149 1099 continue; 1150 1100 ac->ap_dev.device.bus = &ap_bus_type; ··· 1165 1109 get_device(&ac->ap_dev.device); 1166 1110 } 1167 1111 /* now create the new queue device */ 1168 - aq = ap_queue_create(qid, type); 1112 + aq = ap_queue_create(qid, comp_type); 1169 1113 if (!aq) 1170 1114 continue; 1171 1115 aq->card = ac;
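The decision logic of `ap_get_compatible_type()` above separates cleanly from the hardware query: types below CEX2A are rejected, known types pass through, and an unknown future type is mapped down via QACT. A pure-logic sketch with the QACT call replaced by a parameter; the `AP_DEVICE_TYPE_*` numeric values are assumptions mirroring the kernel's constants:

```c
#define AP_DEVICE_TYPE_CEX2A 6    /* assumed value */
#define AP_DEVICE_TYPE_CEX6  12   /* assumed value */

/* qact_cat: compatible category reported by QACT, or 0 when QACT is
 * unavailable or failed (stand-in for the ap_qact() call). */
static int ap_get_compatible_type(int rawtype, int qact_cat)
{
    if (rawtype < AP_DEVICE_TYPE_CEX2A)
        return 0;                   /* older than CEX2A: unsupported */
    if (rawtype <= AP_DEVICE_TYPE_CEX6)
        return rawtype;             /* known type: use as-is */
    if (qact_cat >= AP_DEVICE_TYPE_CEX2A && qact_cat <= AP_DEVICE_TYPE_CEX6)
        return qact_cat;            /* map unknown future type down */
    return 0;                       /* no compatible type found */
}
```

This replaces the hard-coded "CEX6 maps to CEX5" toleration removed from ap_card.c and ap_queue.c below.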
+2 -2
drivers/s390/crypto/ap_bus.h
··· 250 250 void ap_queue_suspend(struct ap_device *ap_dev); 251 251 void ap_queue_resume(struct ap_device *ap_dev); 252 252 253 - struct ap_card *ap_card_create(int id, int queue_depth, int device_type, 254 - unsigned int device_functions); 253 + struct ap_card *ap_card_create(int id, int queue_depth, int raw_device_type, 254 + int comp_device_type, unsigned int functions); 255 255 256 256 int ap_module_init(void); 257 257 void ap_module_exit(void);
+5 -7
drivers/s390/crypto/ap_card.c
··· 171 171 kfree(ac); 172 172 } 173 173 174 - struct ap_card *ap_card_create(int id, int queue_depth, int device_type, 175 - unsigned int functions) 174 + struct ap_card *ap_card_create(int id, int queue_depth, int raw_type, 175 + int comp_type, unsigned int functions) 176 176 { 177 177 struct ap_card *ac; 178 178 179 179 ac = kzalloc(sizeof(*ac), GFP_KERNEL); 180 180 if (!ac) 181 181 return NULL; 182 + INIT_LIST_HEAD(&ac->list); 182 183 INIT_LIST_HEAD(&ac->queues); 183 184 ac->ap_dev.device.release = ap_card_device_release; 184 185 ac->ap_dev.device.type = &ap_card_type; 185 - ac->ap_dev.device_type = device_type; 186 - /* CEX6 toleration: map to CEX5 */ 187 - if (device_type == AP_DEVICE_TYPE_CEX6) 188 - ac->ap_dev.device_type = AP_DEVICE_TYPE_CEX5; 189 - ac->raw_hwtype = device_type; 186 + ac->ap_dev.device_type = comp_type; 187 + ac->raw_hwtype = raw_type; 190 188 ac->queue_depth = queue_depth; 191 189 ac->functions = functions; 192 190 ac->id = id;
+1 -3
drivers/s390/crypto/ap_queue.c
··· 627 627 aq->ap_dev.device.release = ap_queue_device_release; 628 628 aq->ap_dev.device.type = &ap_queue_type; 629 629 aq->ap_dev.device_type = device_type; 630 - /* CEX6 toleration: map to CEX5 */ 631 - if (device_type == AP_DEVICE_TYPE_CEX6) 632 - aq->ap_dev.device_type = AP_DEVICE_TYPE_CEX5; 633 630 aq->qid = qid; 634 631 aq->state = AP_STATE_RESET_START; 635 632 aq->interrupt = AP_INTR_DISABLED; 636 633 spin_lock_init(&aq->lock); 634 + INIT_LIST_HEAD(&aq->list); 637 635 INIT_LIST_HEAD(&aq->pendingq); 638 636 INIT_LIST_HEAD(&aq->requestq); 639 637 setup_timer(&aq->timeout, ap_request_timeout, (unsigned long) aq);
+1 -2
drivers/s390/crypto/pkey_api.c
··· 125 125 * allocate consecutive memory for request CPRB, request param 126 126 * block, reply CPRB and reply param block 127 127 */ 128 - cprbmem = kmalloc(2 * cprbplusparamblen, GFP_KERNEL); 128 + cprbmem = kzalloc(2 * cprbplusparamblen, GFP_KERNEL); 129 129 if (!cprbmem) 130 130 return -ENOMEM; 131 - memset(cprbmem, 0, 2 * cprbplusparamblen); 132 131 133 132 preqcblk = (struct CPRBX *) cprbmem; 134 133 prepcblk = (struct CPRBX *) (cprbmem + cprbplusparamblen);
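The pkey_api.c hunk replaces `kmalloc()` plus `memset()` with a single `kzalloc()` for the combined request/reply block. A userspace sketch of that allocation pattern with `calloc()` as the zeroing analogue (function name invented):

```c
#include <stdlib.h>

/* One zeroed allocation carved into request and reply halves,
 * as the pkey code does for its CPRB plus parameter blocks. */
static int alloc_cprbs(size_t cprbplusparamblen, char **req, char **rep)
{
    char *mem = calloc(2, cprbplusparamblen);   /* zeroed, like kzalloc */

    if (!mem)
        return -1;
    *req = mem;
    *rep = mem + cprbplusparamblen;
    return 0;
}
```

Besides saving a call, the zeroing allocator cannot be forgotten on a new early-return path the way a separate `memset()` can.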
+1
drivers/s390/crypto/zcrypt_api.h
··· 76 76 #define ZCRYPT_CEX3A 8 77 77 #define ZCRYPT_CEX4 10 78 78 #define ZCRYPT_CEX5 11 79 + #define ZCRYPT_CEX6 12 79 80 80 81 /** 81 82 * Large random numbers are pulled in 4096 byte chunks from the crypto cards
+39 -9
drivers/s390/crypto/zcrypt_cex4.c
··· 45 45 .match_flags = AP_DEVICE_ID_MATCH_CARD_TYPE }, 46 46 { .dev_type = AP_DEVICE_TYPE_CEX5, 47 47 .match_flags = AP_DEVICE_ID_MATCH_CARD_TYPE }, 48 + { .dev_type = AP_DEVICE_TYPE_CEX6, 49 + .match_flags = AP_DEVICE_ID_MATCH_CARD_TYPE }, 48 50 { /* end of list */ }, 49 51 }; 50 52 ··· 56 54 { .dev_type = AP_DEVICE_TYPE_CEX4, 57 55 .match_flags = AP_DEVICE_ID_MATCH_QUEUE_TYPE }, 58 56 { .dev_type = AP_DEVICE_TYPE_CEX5, 57 + .match_flags = AP_DEVICE_ID_MATCH_QUEUE_TYPE }, 58 + { .dev_type = AP_DEVICE_TYPE_CEX6, 59 59 .match_flags = AP_DEVICE_ID_MATCH_QUEUE_TYPE }, 60 60 { /* end of list */ }, 61 61 }; ··· 76 72 * MEX_1k, MEX_2k, MEX_4k, CRT_1k, CRT_2k, CRT_4k, RNG, SECKEY 77 73 */ 78 74 static const int CEX4A_SPEED_IDX[] = { 79 - 5, 6, 59, 20, 115, 581, 0, 0}; 75 + 14, 19, 249, 42, 228, 1458, 0, 0}; 80 76 static const int CEX5A_SPEED_IDX[] = { 81 - 3, 3, 6, 8, 32, 218, 0, 0}; 77 + 8, 9, 20, 18, 66, 458, 0, 0}; 78 + static const int CEX6A_SPEED_IDX[] = { 79 + 6, 9, 20, 17, 65, 438, 0, 0}; 80 + 82 81 static const int CEX4C_SPEED_IDX[] = { 83 - 24, 25, 82, 41, 138, 1111, 79, 8}; 82 + 59, 69, 308, 83, 278, 2204, 209, 40}; 84 83 static const int CEX5C_SPEED_IDX[] = { 85 - 10, 14, 23, 17, 45, 242, 63, 4}; 84 + 24, 31, 50, 37, 90, 479, 27, 10}; 85 + static const int CEX6C_SPEED_IDX[] = { 86 + 16, 20, 32, 27, 77, 455, 23, 9}; 87 + 86 88 static const int CEX4P_SPEED_IDX[] = { 87 - 142, 198, 1852, 203, 331, 1563, 0, 8}; 89 + 224, 313, 3560, 359, 605, 2827, 0, 50}; 88 90 static const int CEX5P_SPEED_IDX[] = { 89 - 49, 67, 131, 52, 85, 287, 0, 4}; 91 + 63, 84, 156, 83, 142, 533, 0, 10}; 92 + static const int CEX6P_SPEED_IDX[] = { 93 + 55, 70, 121, 73, 129, 522, 0, 9}; 90 94 91 95 struct ap_card *ac = to_ap_card(&ap_dev->device); 92 96 struct zcrypt_card *zc; ··· 111 99 zc->user_space_type = ZCRYPT_CEX4; 112 100 memcpy(zc->speed_rating, CEX4A_SPEED_IDX, 113 101 sizeof(CEX4A_SPEED_IDX)); 114 - } else { 102 + } else if (ac->ap_dev.device_type == AP_DEVICE_TYPE_CEX5) { 115 103 
zc->type_string = "CEX5A"; 116 104 zc->user_space_type = ZCRYPT_CEX5; 117 105 memcpy(zc->speed_rating, CEX5A_SPEED_IDX, 118 106 sizeof(CEX5A_SPEED_IDX)); 107 + } else { 108 + zc->type_string = "CEX6A"; 109 + zc->user_space_type = ZCRYPT_CEX6; 110 + memcpy(zc->speed_rating, CEX6A_SPEED_IDX, 111 + sizeof(CEX6A_SPEED_IDX)); 119 112 } 120 113 zc->min_mod_size = CEX4A_MIN_MOD_SIZE; 121 114 if (ap_test_bit(&ac->functions, AP_FUNC_MEX4K) && ··· 142 125 zc->user_space_type = ZCRYPT_CEX3C; 143 126 memcpy(zc->speed_rating, CEX4C_SPEED_IDX, 144 127 sizeof(CEX4C_SPEED_IDX)); 145 - } else { 128 + } else if (ac->ap_dev.device_type == AP_DEVICE_TYPE_CEX5) { 146 129 zc->type_string = "CEX5C"; 147 130 /* wrong user space type, must be CEX5 148 131 * just keep it for cca compatibility ··· 150 133 zc->user_space_type = ZCRYPT_CEX3C; 151 134 memcpy(zc->speed_rating, CEX5C_SPEED_IDX, 152 135 sizeof(CEX5C_SPEED_IDX)); 136 + } else { 137 + zc->type_string = "CEX6C"; 138 + /* wrong user space type, must be CEX6 139 + * just keep it for cca compatibility 140 + */ 141 + zc->user_space_type = ZCRYPT_CEX3C; 142 + memcpy(zc->speed_rating, CEX6C_SPEED_IDX, 143 + sizeof(CEX6C_SPEED_IDX)); 153 144 } 154 145 zc->min_mod_size = CEX4C_MIN_MOD_SIZE; 155 146 zc->max_mod_size = CEX4C_MAX_MOD_SIZE; ··· 168 143 zc->user_space_type = ZCRYPT_CEX4; 169 144 memcpy(zc->speed_rating, CEX4P_SPEED_IDX, 170 145 sizeof(CEX4P_SPEED_IDX)); 171 - } else { 146 + } else if (ac->ap_dev.device_type == AP_DEVICE_TYPE_CEX5) { 172 147 zc->type_string = "CEX5P"; 173 148 zc->user_space_type = ZCRYPT_CEX5; 174 149 memcpy(zc->speed_rating, CEX5P_SPEED_IDX, 175 150 sizeof(CEX5P_SPEED_IDX)); 151 + } else { 152 + zc->type_string = "CEX6P"; 153 + zc->user_space_type = ZCRYPT_CEX6; 154 + memcpy(zc->speed_rating, CEX6P_SPEED_IDX, 155 + sizeof(CEX6P_SPEED_IDX)); 176 156 } 177 157 zc->min_mod_size = CEX4C_MIN_MOD_SIZE; 178 158 zc->max_mod_size = CEX4C_MAX_MOD_SIZE;
+3 -3
drivers/s390/crypto/zcrypt_msgtype50.c
··· 240 240 mod = meb2->modulus + sizeof(meb2->modulus) - mod_len; 241 241 exp = meb2->exponent + sizeof(meb2->exponent) - mod_len; 242 242 inp = meb2->message + sizeof(meb2->message) - mod_len; 243 - } else { 244 - /* mod_len > 256 = 4096 bit RSA Key */ 243 + } else if (mod_len <= 512) { 245 244 struct type50_meb3_msg *meb3 = ap_msg->message; 246 245 memset(meb3, 0, sizeof(*meb3)); 247 246 ap_msg->length = sizeof(*meb3); ··· 250 251 mod = meb3->modulus + sizeof(meb3->modulus) - mod_len; 251 252 exp = meb3->exponent + sizeof(meb3->exponent) - mod_len; 252 253 inp = meb3->message + sizeof(meb3->message) - mod_len; 253 - } 254 + } else 255 + return -EINVAL; 254 256 255 257 if (copy_from_user(mod, mex->n_modulus, mod_len) || 256 258 copy_from_user(exp, mex->b_key, mod_len) ||
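The msgtype50 fix bounds the message-block selection: a modulus longer than 512 bytes (4096-bit RSA) now returns -EINVAL instead of silently falling into the largest block. A sketch of the tightened dispatch; the thresholds for the smaller blocks are assumptions inferred from context, only the 512-byte cap is in the hunk:

```c
#include <errno.h>

/* Pick the type50 message block by modulus size; anything above a
 * 4096-bit key no longer fits and is rejected. */
static int type50_msg_block(int mod_len)
{
    if (mod_len <= 128)
        return 1;          /* meb1: up to 1024-bit RSA (assumed) */
    if (mod_len <= 256)
        return 2;          /* meb2: up to 2048-bit RSA */
    if (mod_len <= 512)
        return 3;          /* meb3: up to 4096-bit RSA */
    return -EINVAL;        /* no message block fits */
}
```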
+2 -1
drivers/s390/crypto/zcrypt_msgtype6.c
··· 474 474 *fcode = (msg->hdr.function_code[0] << 8) | msg->hdr.function_code[1]; 475 475 *dom = (unsigned short *)&msg->cprbx.domain; 476 476 477 - if (memcmp(function_code, "US", 2) == 0) 477 + if (memcmp(function_code, "US", 2) == 0 478 + || memcmp(function_code, "AU", 2) == 0) 478 479 ap_msg->special = 1; 479 480 else 480 481 ap_msg->special = 0;
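The msgtype6 hunk widens the "special" check to accept a second two-character CPRB function code, "AU", alongside "US". The predicate itself is trivial to lift out and test (function name invented):

```c
#include <string.h>

/* A request is flagged 'special' when its CPRB function code is
 * either "US" or (new in this series) "AU". */
static int ap_msg_is_special(const unsigned char fc[2])
{
    return memcmp(fc, "US", 2) == 0 || memcmp(fc, "AU", 2) == 0;
}
```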
+1
drivers/s390/net/ctcm_main.c
··· 1761 1761 .owner = THIS_MODULE, 1762 1762 .name = CTC_DRIVER_NAME, 1763 1763 }, 1764 + .ccw_driver = &ctcm_ccw_driver, 1764 1765 .setup = ctcm_probe_device, 1765 1766 .remove = ctcm_remove_device, 1766 1767 .set_online = ctcm_new_device,
+1
drivers/s390/net/lcs.c
··· 2396 2396 .owner = THIS_MODULE, 2397 2397 .name = "lcs", 2398 2398 }, 2399 + .ccw_driver = &lcs_ccw_driver, 2399 2400 .setup = lcs_probe_device, 2400 2401 .remove = lcs_remove_device, 2401 2402 .set_online = lcs_new_device,
+1
drivers/s390/net/qeth_core_main.c
··· 5875 5875 .owner = THIS_MODULE, 5876 5876 .name = "qeth", 5877 5877 }, 5878 + .ccw_driver = &qeth_ccw_driver, 5878 5879 .setup = qeth_core_probe_device, 5879 5880 .remove = qeth_core_remove_device, 5880 5881 .set_online = qeth_core_set_online,
+1 -5
drivers/s390/virtio/Makefile
··· 6 6 # it under the terms of the GNU General Public License (version 2 only) 7 7 # as published by the Free Software Foundation. 8 8 9 - s390-virtio-objs := virtio_ccw.o 10 - ifdef CONFIG_S390_GUEST_OLD_TRANSPORT 11 - s390-virtio-objs += kvm_virtio.o 12 - endif 13 - obj-$(CONFIG_S390_GUEST) += $(s390-virtio-objs) 9 + obj-$(CONFIG_S390_GUEST) += virtio_ccw.o
-515
drivers/s390/virtio/kvm_virtio.c
··· 1 - /* 2 - * virtio for kvm on s390 3 - * 4 - * Copyright IBM Corp. 2008 5 - * 6 - * This program is free software; you can redistribute it and/or modify 7 - * it under the terms of the GNU General Public License (version 2 only) 8 - * as published by the Free Software Foundation. 9 - * 10 - * Author(s): Christian Borntraeger <borntraeger@de.ibm.com> 11 - */ 12 - 13 - #include <linux/kernel_stat.h> 14 - #include <linux/init.h> 15 - #include <linux/bootmem.h> 16 - #include <linux/err.h> 17 - #include <linux/virtio.h> 18 - #include <linux/virtio_config.h> 19 - #include <linux/slab.h> 20 - #include <linux/virtio_console.h> 21 - #include <linux/interrupt.h> 22 - #include <linux/virtio_ring.h> 23 - #include <linux/export.h> 24 - #include <linux/pfn.h> 25 - #include <asm/io.h> 26 - #include <asm/kvm_para.h> 27 - #include <asm/kvm_virtio.h> 28 - #include <asm/sclp.h> 29 - #include <asm/setup.h> 30 - #include <asm/irq.h> 31 - 32 - #define VIRTIO_SUBCODE_64 0x0D00 33 - 34 - /* 35 - * The pointer to our (page) of device descriptions. 
36 - */ 37 - static void *kvm_devices; 38 - static struct work_struct hotplug_work; 39 - 40 - struct kvm_device { 41 - struct virtio_device vdev; 42 - struct kvm_device_desc *desc; 43 - }; 44 - 45 - #define to_kvmdev(vd) container_of(vd, struct kvm_device, vdev) 46 - 47 - /* 48 - * memory layout: 49 - * - kvm_device_descriptor 50 - * struct kvm_device_desc 51 - * - configuration 52 - * struct kvm_vqconfig 53 - * - feature bits 54 - * - config space 55 - */ 56 - static struct kvm_vqconfig *kvm_vq_config(const struct kvm_device_desc *desc) 57 - { 58 - return (struct kvm_vqconfig *)(desc + 1); 59 - } 60 - 61 - static u8 *kvm_vq_features(const struct kvm_device_desc *desc) 62 - { 63 - return (u8 *)(kvm_vq_config(desc) + desc->num_vq); 64 - } 65 - 66 - static u8 *kvm_vq_configspace(const struct kvm_device_desc *desc) 67 - { 68 - return kvm_vq_features(desc) + desc->feature_len * 2; 69 - } 70 - 71 - /* 72 - * The total size of the config page used by this device (incl. desc) 73 - */ 74 - static unsigned desc_size(const struct kvm_device_desc *desc) 75 - { 76 - return sizeof(*desc) 77 - + desc->num_vq * sizeof(struct kvm_vqconfig) 78 - + desc->feature_len * 2 79 - + desc->config_len; 80 - } 81 - 82 - /* This gets the device's feature bits. */ 83 - static u64 kvm_get_features(struct virtio_device *vdev) 84 - { 85 - unsigned int i; 86 - u32 features = 0; 87 - struct kvm_device_desc *desc = to_kvmdev(vdev)->desc; 88 - u8 *in_features = kvm_vq_features(desc); 89 - 90 - for (i = 0; i < min(desc->feature_len * 8, 32); i++) 91 - if (in_features[i / 8] & (1 << (i % 8))) 92 - features |= (1 << i); 93 - return features; 94 - } 95 - 96 - static int kvm_finalize_features(struct virtio_device *vdev) 97 - { 98 - unsigned int i, bits; 99 - struct kvm_device_desc *desc = to_kvmdev(vdev)->desc; 100 - /* Second half of bitmap is features we accept. */ 101 - u8 *out_features = kvm_vq_features(desc) + desc->feature_len; 102 - 103 - /* Give virtio_ring a chance to accept features. 
*/ 104 - vring_transport_features(vdev); 105 - 106 - /* Make sure we don't have any features > 32 bits! */ 107 - BUG_ON((u32)vdev->features != vdev->features); 108 - 109 - memset(out_features, 0, desc->feature_len); 110 - bits = min_t(unsigned, desc->feature_len, sizeof(vdev->features)) * 8; 111 - for (i = 0; i < bits; i++) { 112 - if (__virtio_test_bit(vdev, i)) 113 - out_features[i / 8] |= (1 << (i % 8)); 114 - } 115 - 116 - return 0; 117 - } 118 - 119 - /* 120 - * Reading and writing elements in config space 121 - */ 122 - static void kvm_get(struct virtio_device *vdev, unsigned int offset, 123 - void *buf, unsigned len) 124 - { 125 - struct kvm_device_desc *desc = to_kvmdev(vdev)->desc; 126 - 127 - BUG_ON(offset + len > desc->config_len); 128 - memcpy(buf, kvm_vq_configspace(desc) + offset, len); 129 - } 130 - 131 - static void kvm_set(struct virtio_device *vdev, unsigned int offset, 132 - const void *buf, unsigned len) 133 - { 134 - struct kvm_device_desc *desc = to_kvmdev(vdev)->desc; 135 - 136 - BUG_ON(offset + len > desc->config_len); 137 - memcpy(kvm_vq_configspace(desc) + offset, buf, len); 138 - } 139 - 140 - /* 141 - * The operations to get and set the status word just access 142 - * the status field of the device descriptor. set_status will also 143 - * make a hypercall to the host, to tell about status changes 144 - */ 145 - static u8 kvm_get_status(struct virtio_device *vdev) 146 - { 147 - return to_kvmdev(vdev)->desc->status; 148 - } 149 - 150 - static void kvm_set_status(struct virtio_device *vdev, u8 status) 151 - { 152 - BUG_ON(!status); 153 - to_kvmdev(vdev)->desc->status = status; 154 - kvm_hypercall1(KVM_S390_VIRTIO_SET_STATUS, 155 - (unsigned long) to_kvmdev(vdev)->desc); 156 - } 157 - 158 - /* 159 - * To reset the device, we use the KVM_VIRTIO_RESET hypercall, using the 160 - * descriptor address. The Host will zero the status and all the 161 - * features. 
162 - */ 163 - static void kvm_reset(struct virtio_device *vdev) 164 - { 165 - kvm_hypercall1(KVM_S390_VIRTIO_RESET, 166 - (unsigned long) to_kvmdev(vdev)->desc); 167 - } 168 - 169 - /* 170 - * When the virtio_ring code wants to notify the Host, it calls us here and we 171 - * make a hypercall. We hand the address of the virtqueue so the Host 172 - * knows which virtqueue we're talking about. 173 - */ 174 - static bool kvm_notify(struct virtqueue *vq) 175 - { 176 - long rc; 177 - struct kvm_vqconfig *config = vq->priv; 178 - 179 - rc = kvm_hypercall1(KVM_S390_VIRTIO_NOTIFY, config->address); 180 - if (rc < 0) 181 - return false; 182 - return true; 183 - } 184 - 185 - /* 186 - * This routine finds the first virtqueue described in the configuration of 187 - * this device and sets it up. 188 - */ 189 - static struct virtqueue *kvm_find_vq(struct virtio_device *vdev, 190 - unsigned index, 191 - void (*callback)(struct virtqueue *vq), 192 - const char *name, bool ctx) 193 - { 194 - struct kvm_device *kdev = to_kvmdev(vdev); 195 - struct kvm_vqconfig *config; 196 - struct virtqueue *vq; 197 - int err; 198 - 199 - if (index >= kdev->desc->num_vq) 200 - return ERR_PTR(-ENOENT); 201 - 202 - if (!name) 203 - return NULL; 204 - 205 - config = kvm_vq_config(kdev->desc)+index; 206 - 207 - err = vmem_add_mapping(config->address, 208 - vring_size(config->num, 209 - KVM_S390_VIRTIO_RING_ALIGN)); 210 - if (err) 211 - goto out; 212 - 213 - vq = vring_new_virtqueue(index, config->num, KVM_S390_VIRTIO_RING_ALIGN, 214 - vdev, true, ctx, (void *) config->address, 215 - kvm_notify, callback, name); 216 - if (!vq) { 217 - err = -ENOMEM; 218 - goto unmap; 219 - } 220 - 221 - /* 222 - * register a callback token 223 - * The host will sent this via the external interrupt parameter 224 - */ 225 - config->token = (u64) vq; 226 - 227 - vq->priv = config; 228 - return vq; 229 - unmap: 230 - vmem_remove_mapping(config->address, 231 - vring_size(config->num, 232 - KVM_S390_VIRTIO_RING_ALIGN)); 233 
- out: 234 - return ERR_PTR(err); 235 - } 236 - 237 - static void kvm_del_vq(struct virtqueue *vq) 238 - { 239 - struct kvm_vqconfig *config = vq->priv; 240 - 241 - vring_del_virtqueue(vq); 242 - vmem_remove_mapping(config->address, 243 - vring_size(config->num, 244 - KVM_S390_VIRTIO_RING_ALIGN)); 245 - } 246 - 247 - static void kvm_del_vqs(struct virtio_device *vdev) 248 - { 249 - struct virtqueue *vq, *n; 250 - 251 - list_for_each_entry_safe(vq, n, &vdev->vqs, list) 252 - kvm_del_vq(vq); 253 - } 254 - 255 - static int kvm_find_vqs(struct virtio_device *vdev, unsigned nvqs, 256 - struct virtqueue *vqs[], 257 - vq_callback_t *callbacks[], 258 - const char * const names[], 259 - const bool *ctx, 260 - struct irq_affinity *desc) 261 - { 262 - struct kvm_device *kdev = to_kvmdev(vdev); 263 - int i; 264 - 265 - /* We must have this many virtqueues. */ 266 - if (nvqs > kdev->desc->num_vq) 267 - return -ENOENT; 268 - 269 - for (i = 0; i < nvqs; ++i) { 270 - vqs[i] = kvm_find_vq(vdev, i, callbacks[i], names[i], 271 - ctx ? ctx[i] : false); 272 - if (IS_ERR(vqs[i])) 273 - goto error; 274 - } 275 - return 0; 276 - 277 - error: 278 - kvm_del_vqs(vdev); 279 - return PTR_ERR(vqs[i]); 280 - } 281 - 282 - static const char *kvm_bus_name(struct virtio_device *vdev) 283 - { 284 - return ""; 285 - } 286 - 287 - /* 288 - * The config ops structure as defined by virtio config 289 - */ 290 - static const struct virtio_config_ops kvm_vq_configspace_ops = { 291 - .get_features = kvm_get_features, 292 - .finalize_features = kvm_finalize_features, 293 - .get = kvm_get, 294 - .set = kvm_set, 295 - .get_status = kvm_get_status, 296 - .set_status = kvm_set_status, 297 - .reset = kvm_reset, 298 - .find_vqs = kvm_find_vqs, 299 - .del_vqs = kvm_del_vqs, 300 - .bus_name = kvm_bus_name, 301 - }; 302 - 303 - /* 304 - * The root device for the kvm virtio devices. 305 - * This makes them appear as /sys/devices/kvm_s390/0,1,2 not /sys/devices/0,1,2. 
306 - */ 307 - static struct device *kvm_root; 308 - 309 - /* 310 - * adds a new device and register it with virtio 311 - * appropriate drivers are loaded by the device model 312 - */ 313 - static void add_kvm_device(struct kvm_device_desc *d, unsigned int offset) 314 - { 315 - struct kvm_device *kdev; 316 - 317 - kdev = kzalloc(sizeof(*kdev), GFP_KERNEL); 318 - if (!kdev) { 319 - printk(KERN_EMERG "Cannot allocate kvm dev %u type %u\n", 320 - offset, d->type); 321 - return; 322 - } 323 - 324 - kdev->vdev.dev.parent = kvm_root; 325 - kdev->vdev.id.device = d->type; 326 - kdev->vdev.config = &kvm_vq_configspace_ops; 327 - kdev->desc = d; 328 - 329 - if (register_virtio_device(&kdev->vdev) != 0) { 330 - printk(KERN_ERR "Failed to register kvm device %u type %u\n", 331 - offset, d->type); 332 - kfree(kdev); 333 - } 334 - } 335 - 336 - /* 337 - * scan_devices() simply iterates through the device page. 338 - * The type 0 is reserved to mean "end of devices". 339 - */ 340 - static void scan_devices(void) 341 - { 342 - unsigned int i; 343 - struct kvm_device_desc *d; 344 - 345 - for (i = 0; i < PAGE_SIZE; i += desc_size(d)) { 346 - d = kvm_devices + i; 347 - 348 - if (d->type == 0) 349 - break; 350 - 351 - add_kvm_device(d, i); 352 - } 353 - } 354 - 355 - /* 356 - * match for a kvm device with a specific desc pointer 357 - */ 358 - static int match_desc(struct device *dev, void *data) 359 - { 360 - struct virtio_device *vdev = dev_to_virtio(dev); 361 - struct kvm_device *kdev = to_kvmdev(vdev); 362 - 363 - return kdev->desc == data; 364 - } 365 - 366 - /* 367 - * hotplug_device tries to find changes in the device page. 
368 - */ 369 - static void hotplug_devices(struct work_struct *dummy) 370 - { 371 - unsigned int i; 372 - struct kvm_device_desc *d; 373 - struct device *dev; 374 - 375 - for (i = 0; i < PAGE_SIZE; i += desc_size(d)) { 376 - d = kvm_devices + i; 377 - 378 - /* end of list */ 379 - if (d->type == 0) 380 - break; 381 - 382 - /* device already exists */ 383 - dev = device_find_child(kvm_root, d, match_desc); 384 - if (dev) { 385 - /* XXX check for hotplug remove */ 386 - put_device(dev); 387 - continue; 388 - } 389 - 390 - /* new device */ 391 - printk(KERN_INFO "Adding new virtio device %p\n", d); 392 - add_kvm_device(d, i); 393 - } 394 - } 395 - 396 - /* 397 - * we emulate the request_irq behaviour on top of s390 extints 398 - */ 399 - static void kvm_extint_handler(struct ext_code ext_code, 400 - unsigned int param32, unsigned long param64) 401 - { 402 - struct virtqueue *vq; 403 - u32 param; 404 - 405 - if ((ext_code.subcode & 0xff00) != VIRTIO_SUBCODE_64) 406 - return; 407 - inc_irq_stat(IRQEXT_VRT); 408 - 409 - /* The LSB might be overloaded, we have to mask it */ 410 - vq = (struct virtqueue *)(param64 & ~1UL); 411 - 412 - /* We use ext_params to decide what this interrupt means */ 413 - param = param32 & VIRTIO_PARAM_MASK; 414 - 415 - switch (param) { 416 - case VIRTIO_PARAM_CONFIG_CHANGED: 417 - virtio_config_changed(vq->vdev); 418 - break; 419 - case VIRTIO_PARAM_DEV_ADD: 420 - schedule_work(&hotplug_work); 421 - break; 422 - case VIRTIO_PARAM_VRING_INTERRUPT: 423 - default: 424 - vring_interrupt(0, vq); 425 - break; 426 - } 427 - } 428 - 429 - /* 430 - * For s390-virtio, we expect a page above main storage containing 431 - * the virtio configuration. Try to actually load from this area 432 - * in order to figure out if the host provides this page. 
433 - */ 434 - static int __init test_devices_support(unsigned long addr) 435 - { 436 - int ret = -EIO; 437 - 438 - asm volatile( 439 - "0: lura 0,%1\n" 440 - "1: xgr %0,%0\n" 441 - "2:\n" 442 - EX_TABLE(0b,2b) 443 - EX_TABLE(1b,2b) 444 - : "+d" (ret) 445 - : "a" (addr) 446 - : "0", "cc"); 447 - return ret; 448 - } 449 - /* 450 - * Init function for virtio 451 - * devices are in a single page above top of "normal" + standby mem 452 - */ 453 - static int __init kvm_devices_init(void) 454 - { 455 - int rc; 456 - unsigned long total_memory_size = sclp.rzm * sclp.rnmax; 457 - 458 - if (!MACHINE_IS_KVM) 459 - return -ENODEV; 460 - 461 - if (test_devices_support(total_memory_size) < 0) 462 - return -ENODEV; 463 - 464 - pr_warn("The s390-virtio transport is deprecated. Please switch to a modern host providing virtio-ccw.\n"); 465 - 466 - rc = vmem_add_mapping(total_memory_size, PAGE_SIZE); 467 - if (rc) 468 - return rc; 469 - 470 - kvm_devices = (void *) total_memory_size; 471 - 472 - kvm_root = root_device_register("kvm_s390"); 473 - if (IS_ERR(kvm_root)) { 474 - rc = PTR_ERR(kvm_root); 475 - printk(KERN_ERR "Could not register kvm_s390 root device"); 476 - vmem_remove_mapping(total_memory_size, PAGE_SIZE); 477 - return rc; 478 - } 479 - 480 - INIT_WORK(&hotplug_work, hotplug_devices); 481 - 482 - irq_subclass_register(IRQ_SUBCLASS_SERVICE_SIGNAL); 483 - register_external_irq(EXT_IRQ_CP_SERVICE, kvm_extint_handler); 484 - 485 - scan_devices(); 486 - return 0; 487 - } 488 - 489 - /* code for early console output with virtio_console */ 490 - static int early_put_chars(u32 vtermno, const char *buf, int count) 491 - { 492 - char scratch[17]; 493 - unsigned int len = count; 494 - 495 - if (len > sizeof(scratch) - 1) 496 - len = sizeof(scratch) - 1; 497 - scratch[len] = '\0'; 498 - memcpy(scratch, buf, len); 499 - kvm_hypercall1(KVM_S390_VIRTIO_NOTIFY, __pa(scratch)); 500 - return len; 501 - } 502 - 503 - static int __init s390_virtio_console_init(void) 504 - { 505 - if 
(sclp.has_vt220 || sclp.has_linemode) 506 - return -ENODEV; 507 - return virtio_cons_early_init(early_put_chars); 508 - } 509 - console_initcall(s390_virtio_console_init); 510 - 511 - 512 - /* 513 - * We do this after core stuff, but before the drivers. 514 - */ 515 - postcore_initcall(kvm_devices_init);
+1
include/uapi/linux/elf.h
··· 412 412 #define NT_S390_VXRS_HIGH 0x30a /* s390 vector registers 16-31 */ 413 413 #define NT_S390_GS_CB 0x30b /* s390 guarded storage registers */ 414 414 #define NT_S390_GS_BC 0x30c /* s390 guarded storage broadcast control block */ 415 + #define NT_S390_RI_CB 0x30d /* s390 runtime instrumentation */ 415 416 #define NT_ARM_VFP 0x400 /* ARM VFP/NEON registers */ 416 417 #define NT_ARM_TLS 0x401 /* ARM TLS register */ 417 418 #define NT_ARM_HW_BREAK 0x402 /* ARM hardware breakpoint registers */
+8
samples/kprobes/kprobe_example.c
··· 47 47 " pstate = 0x%lx\n", 48 48 p->symbol_name, p->addr, (long)regs->pc, (long)regs->pstate); 49 49 #endif 50 + #ifdef CONFIG_S390 51 + pr_info("<%s> pre_handler: p->addr = 0x%p, ip = 0x%lx, flags = 0x%lx\n", 52 + p->symbol_name, p->addr, regs->psw.addr, regs->flags); 53 + #endif 50 54 51 55 /* A dump_stack() here will give a stack backtrace */ 52 56 return 0; ··· 79 75 #ifdef CONFIG_ARM64 80 76 pr_info("<%s> post_handler: p->addr = 0x%p, pstate = 0x%lx\n", 81 77 p->symbol_name, p->addr, (long)regs->pstate); 78 + #endif 79 + #ifdef CONFIG_S390 80 + pr_info("<%s> post_handler: p->addr = 0x%p, flags = 0x%lx\n", 81 + p->symbol_name, p->addr, regs->flags); 82 82 #endif 83 83 }