Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux

Pull s390 updates from Martin Schwidefsky:
"The most notable change for this pull request is the ftrace rework
from Heiko. It brings a small performance improvement and the
groundwork to support a new gcc option to replace the mcount blocks
with a single nop.

Two new s390 specific system calls are added to emulate user space
mmio for PCI, an artifact of how PCI memory is accessed.

Two patches for memory management with changes to common code. For
KVM, mm_forbids_zeropage is added, which disables the empty zero
page for an mm that is used by a KVM process. And an optimization,
pmdp_get_and_clear_full, is added analogous to ptep_get_and_clear_full.

Some micro optimizations for the cmpxchg and the spinlock code.

And as usual bug fixes and cleanups"

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux: (46 commits)
s390/cputime: fix 31-bit compile
s390/scm_block: make the number of reqs per HW req configurable
s390/scm_block: handle multiple requests in one HW request
s390/scm_block: allocate aidaw pages only when necessary
s390/scm_block: use mempool to manage aidaw requests
s390/eadm: change timeout value
s390/mm: fix memory leak of ptlock in pmd_free_tlb
s390: use local symbol names in entry[64].S
s390/ptrace: always include vector registers in core files
s390/simd: clear vector register pointer on fork/clone
s390: translate cputime magic constants to macros
s390/idle: convert open coded idle time seqcount
s390/idle: add missing irq off lockdep annotation
s390/debug: avoid function call for debug_sprintf_*
s390/kprobes: fix instruction copy for out of line execution
s390: remove diag 44 calls from cpu_relax()
s390/dasd: retry partition detection
s390/dasd: fix list corruption for sleep_on requests
s390/dasd: fix infinite term I/O loop
s390/dasd: remove unused code
...

+1529 -1508
+58 -398
Documentation/s390/Debugging390.txt
··· 26 26 Register Usage & Stackframes on Linux for s/390 & z/Architecture 27 27 A sample program with comments 28 28 Compiling programs for debugging on Linux for s/390 & z/Architecture 29 - Figuring out gcc compile errors 30 - Debugging Tools 31 - objdump 32 - strace 33 - Performance Debugging 34 29 Debugging under VM 35 30 s/390 & z/Architecture IO Overview 36 31 Debugging IO on s/390 & z/Architecture under VM ··· 109 114 110 115 16-17 16-17 Address Space Control 111 116 112 - 00 Primary Space Mode when DAT on 113 - The linux kernel currently runs in this mode, CR1 is affiliated with 114 - this mode & points to the primary segment table origin etc. 117 + 00 Primary Space Mode: 118 + The register CR1 contains the primary address-space control ele- 119 + ment (PASCE), which points to the primary space region/segment 120 + table origin. 115 121 116 - 01 Access register mode this mode is used in functions to 117 - copy data between kernel & user space. 122 + 01 Access register mode 118 123 119 - 10 Secondary space mode not used in linux however CR7 the 120 - register affiliated with this mode is & this & normally 121 - CR13=CR7 to allow us to copy data between kernel & user space. 122 - We do this as follows: 123 - We set ar2 to 0 to designate its 124 - affiliated gpr ( gpr2 )to point to primary=kernel space. 125 - We set ar4 to 1 to designate its 126 - affiliated gpr ( gpr4 ) to point to secondary=home=user space 127 - & then essentially do a memcopy(gpr2,gpr4,size) to 128 - copy data between the address spaces, the reason we use home space for the 129 - kernel & don't keep secondary space free is that code will not run in 130 - secondary space. 124 + 10 Secondary Space Mode: 125 + The register CR7 contains the secondary address-space control 126 + element (SASCE), which points to the secondary space region or 127 + segment table origin. 131 128 132 - 11 Home Space Mode all user programs run in this mode. 133 - it is affiliated with CR13. 
129 + 11 Home Space Mode: 130 + The register CR13 contains the home space address-space control 131 + element (HASCE), which points to the home space region/segment 132 + table origin. 133 + 134 + See "Address Spaces on Linux for s/390 & z/Architecture" below 135 + for more information about address space usage in Linux. 134 136 135 137 18-19 18-19 Condition codes (CC) 136 138 ··· 241 249 Address Spaces on Linux for s/390 & z/Architecture 242 250 ================================================== 243 251 244 - Our addressing scheme is as follows 252 + Our addressing scheme is basically as follows: 245 253 246 - 254 + Primary Space Home Space 247 255 Himem 0x7fffffff 2GB on s/390 ***************** **************** 248 256 currently 0x3ffffffffff (2^42)-1 * User Stack * * * 249 257 on z/Architecture. ***************** * * ··· 256 264 * Sections * * * 257 265 0x00000000 ***************** **************** 258 266 259 - This also means that we need to look at the PSW problem state bit 260 - or the addressing mode to decide whether we are looking at 261 - user or kernel space. 267 + This also means that we need to look at the PSW problem state bit and the 268 + addressing mode to decide whether we are looking at user or kernel space. 269 + 270 + User space runs in primary address mode (or access register mode within 271 + the vdso code). 272 + 273 + The kernel usually also runs in home space mode, however when accessing 274 + user space the kernel switches to primary or secondary address mode if 275 + the mvcos instruction is not available or if a compare-and-swap (futex) 276 + instruction on a user space address is performed. 
277 + 278 + When also looking at the ASCE control registers, this means: 279 + 280 + User space: 281 + - runs in primary or access register mode 282 + - cr1 contains the user asce 283 + - cr7 contains the user asce 284 + - cr13 contains the kernel asce 285 + 286 + Kernel space: 287 + - runs in home space mode 288 + - cr1 contains the user or kernel asce 289 + -> the kernel asce is loaded when a uaccess requires primary or 290 + secondary address mode 291 + - cr7 contains the user or kernel asce, (changed with set_fs()) 292 + - cr13 contains the kernel asce 293 + 294 + In case of uaccess the kernel changes to: 295 + - primary space mode in case of a uaccess (copy_to_user) and uses 296 + e.g. the mvcp instruction to access user space. However the kernel 297 + will stay in home space mode if the mvcos instruction is available 298 + - secondary space mode in case of futex atomic operations, so that the 299 + instructions come from primary address space and data from secondary 300 + space 301 + 302 + In case of KVM, the kernel runs in home space mode, but cr1 gets switched 303 + to contain the gmap asce before the SIE instruction gets executed. When 304 + the SIE instruction is finished, cr1 will be switched back to contain the 305 + user asce. 306 + 262 307 263 308 Virtual Addresses on s/390 & z/Architecture 264 309 =========================================== ··· 735 706 some bugs, please make sure you are using gdb-5.0 or later developed 736 707 after Nov'2000. 737 708 738 - Figuring out gcc compile errors 739 - =============================== 740 - If you are getting a lot of syntax errors compiling a program & the problem 741 - isn't blatantly obvious from the source. 742 - It often helps to just preprocess the file, this is done with the -E 743 - option in gcc. 
744 - What this does is that it runs through the very first phase of compilation 745 - ( compilation in gcc is done in several stages & gcc calls many programs to 746 - achieve its end result ) with the -E option gcc just calls the gcc preprocessor (cpp). 747 - The c preprocessor does the following, it joins all the files #included together 748 - recursively ( #include files can #include other files ) & also the c file you wish to compile. 749 - It puts a fully qualified path of the #included files in a comment & it 750 - does macro expansion. 751 - This is useful for debugging because 752 - 1) You can double check whether the files you expect to be included are the ones 753 - that are being included ( e.g. double check that you aren't going to the i386 asm directory ). 754 - 2) Check that macro definitions aren't clashing with typedefs, 755 - 3) Check that definitions aren't being used before they are being included. 756 - 4) Helps put the line emitting the error under the microscope if it contains macros. 757 709 758 - For convenience the Linux kernel's makefile will do preprocessing automatically for you 759 - by suffixing the file you want built with .i ( instead of .o ) 760 - 761 - e.g. 762 - from the linux directory type 763 - make arch/s390/kernel/signal.i 764 - this will build 765 - 766 - s390-gcc -D__KERNEL__ -I/home1/barrow/linux/include -Wall -Wstrict-prototypes -O2 -fomit-frame-pointer 767 - -fno-strict-aliasing -D__SMP__ -pipe -fno-strength-reduce -E arch/s390/kernel/signal.c 768 - > arch/s390/kernel/signal.i 769 - 770 - Now look at signal.i you should see something like. 771 - 772 - 773 - # 1 "/home1/barrow/linux/include/asm/types.h" 1 774 - typedef unsigned short umode_t; 775 - typedef __signed__ char __s8; 776 - typedef unsigned char __u8; 777 - typedef __signed__ short __s16; 778 - typedef unsigned short __u16; 779 - 780 - If instead you are getting errors further down e.g. 
781 - unknown instruction:2515 "move.l" or better still unknown instruction:2515 782 - "Fixme not implemented yet, call Martin" you are probably are attempting to compile some code 783 - meant for another architecture or code that is simply not implemented, with a fixme statement 784 - stuck into the inline assembly code so that the author of the file now knows he has work to do. 785 - To look at the assembly emitted by gcc just before it is about to call gas ( the gnu assembler ) 786 - use the -S option. 787 - Again for your convenience the Linux kernel's Makefile will hold your hand & 788 - do all this donkey work for you also by building the file with the .s suffix. 789 - e.g. 790 - from the Linux directory type 791 - make arch/s390/kernel/signal.s 792 - 793 - s390-gcc -D__KERNEL__ -I/home1/barrow/linux/include -Wall -Wstrict-prototypes -O2 -fomit-frame-pointer 794 - -fno-strict-aliasing -D__SMP__ -pipe -fno-strength-reduce -S arch/s390/kernel/signal.c 795 - -o arch/s390/kernel/signal.s 796 - 797 - 798 - This will output something like, ( please note the constant pool & the useful comments 799 - in the prologue to give you a hand at interpreting it ). 
800 - 801 - .LC54: 802 - .string "misaligned (__u16 *) in __xchg\n" 803 - .LC57: 804 - .string "misaligned (__u32 *) in __xchg\n" 805 - .L$PG1: # Pool sys_sigsuspend 806 - .LC192: 807 - .long -262401 808 - .LC193: 809 - .long -1 810 - .LC194: 811 - .long schedule-.L$PG1 812 - .LC195: 813 - .long do_signal-.L$PG1 814 - .align 4 815 - .globl sys_sigsuspend 816 - .type sys_sigsuspend,@function 817 - sys_sigsuspend: 818 - # leaf function 0 819 - # automatics 16 820 - # outgoing args 0 821 - # need frame pointer 0 822 - # call alloca 0 823 - # has varargs 0 824 - # incoming args (stack) 0 825 - # function length 168 826 - STM 8,15,32(15) 827 - LR 0,15 828 - AHI 15,-112 829 - BASR 13,0 830 - .L$CO1: AHI 13,.L$PG1-.L$CO1 831 - ST 0,0(15) 832 - LR 8,2 833 - N 5,.LC192-.L$PG1(13) 834 - 835 - Adding -g to the above output makes the output even more useful 836 - e.g. typing 837 - make CC:="s390-gcc -g" kernel/sched.s 838 - 839 - which compiles. 840 - s390-gcc -g -D__KERNEL__ -I/home/barrow/linux-2.3/include -Wall -Wstrict-prototypes -O2 -fomit-frame-pointer -fno-strict-aliasing -pipe -fno-strength-reduce -S kernel/sched.c -o kernel/sched.s 841 - 842 - also outputs stabs ( debugger ) info, from this info you can find out the 843 - offsets & sizes of various elements in structures. 844 - e.g. the stab for the structure 845 - struct rlimit { 846 - unsigned long rlim_cur; 847 - unsigned long rlim_max; 848 - }; 849 - is 850 - .stabs "rlimit:T(151,2)=s8rlim_cur:(0,5),0,32;rlim_max:(0,5),32,32;;",128,0,0,0 851 - from this stab you can see that 852 - rlimit_cur starts at bit offset 0 & is 32 bits in size 853 - rlimit_max starts at bit offset 32 & is 32 bits in size. 854 - 855 - 856 - Debugging Tools: 857 - ================ 858 - 859 - objdump 860 - ======= 861 - This is a tool with many options the most useful being ( if compiled with -g). 
862 - objdump --source <victim program or object file> > <victims debug listing > 863 - 864 - 865 - The whole kernel can be compiled like this ( Doing this will make a 17MB kernel 866 - & a 200 MB listing ) however you have to strip it before building the image 867 - using the strip command to make it a more reasonable size to boot it. 868 - 869 - A source/assembly mixed dump of the kernel can be done with the line 870 - objdump --source vmlinux > vmlinux.lst 871 - Also, if the file isn't compiled -g, this will output as much debugging information 872 - as it can (e.g. function names). This is very slow as it spends lots 873 - of time searching for debugging info. The following self explanatory line should be used 874 - instead if the code isn't compiled -g, as it is much faster: 875 - objdump --disassemble-all --syms vmlinux > vmlinux.lst 876 - 877 - As hard drive space is valuable most of us use the following approach. 878 - 1) Look at the emitted psw on the console to find the crash address in the kernel. 879 - 2) Look at the file System.map ( in the linux directory ) produced when building 880 - the kernel to find the closest address less than the current PSW to find the 881 - offending function. 882 - 3) use grep or similar to search the source tree looking for the source file 883 - with this function if you don't know where it is. 884 - 4) rebuild this object file with -g on, as an example suppose the file was 885 - ( /arch/s390/kernel/signal.o ) 886 - 5) Assuming the file with the erroneous function is signal.c Move to the base of the 887 - Linux source tree. 888 - 6) rm /arch/s390/kernel/signal.o 889 - 7) make /arch/s390/kernel/signal.o 890 - 8) watch the gcc command line emitted 891 - 9) type it in again or alternatively cut & paste it on the console adding the -g option. 
892 - 10) objdump --source arch/s390/kernel/signal.o > signal.lst 893 - This will output the source & the assembly intermixed, as the snippet below shows 894 - This will unfortunately output addresses which aren't the same 895 - as the kernel ones you should be able to get around the mental arithmetic 896 - by playing with the --adjust-vma parameter to objdump. 897 - 898 - 899 - 900 - 901 - static inline void spin_lock(spinlock_t *lp) 902 - { 903 - a0: 18 34 lr %r3,%r4 904 - a2: a7 3a 03 bc ahi %r3,956 905 - __asm__ __volatile(" lhi 1,-1\n" 906 - a6: a7 18 ff ff lhi %r1,-1 907 - aa: 1f 00 slr %r0,%r0 908 - ac: ba 01 30 00 cs %r0,%r1,0(%r3) 909 - b0: a7 44 ff fd jm aa <sys_sigsuspend+0x2e> 910 - saveset = current->blocked; 911 - b4: d2 07 f0 68 mvc 104(8,%r15),972(%r4) 912 - b8: 43 cc 913 - return (set->sig[0] & mask) != 0; 914 - } 915 - 916 - 6) If debugging under VM go down to that section in the document for more info. 917 - 918 - 919 - I now have a tool which takes the pain out of --adjust-vma 920 - & you are able to do something like 921 - make /arch/s390/kernel/traps.lst 922 - & it automatically generates the correctly relocated entries for 923 - the text segment in traps.lst. 924 - This tool is now standard in linux distro's in scripts/makelst 925 - 926 - strace: 927 - ------- 928 - Q. What is it ? 929 - A. It is a tool for intercepting calls to the kernel & logging them 930 - to a file & on the screen. 931 - 932 - Q. What use is it ? 933 - A. You can use it to find out what files a particular program opens. 934 - 935 - 936 - 937 - Example 1 938 - --------- 939 - If you wanted to know does ping work but didn't have the source 940 - strace ping -c 1 127.0.0.1 941 - & then look at the man pages for each of the syscalls below, 942 - ( In fact this is sometimes easier than looking at some spaghetti 943 - source which conditionally compiles for several architectures ). 944 - Not everything that it throws out needs to make sense immediately. 
945 - 946 - Just looking quickly you can see that it is making up a RAW socket 947 - for the ICMP protocol. 948 - Doing an alarm(10) for a 10 second timeout 949 - & doing a gettimeofday call before & after each read to see 950 - how long the replies took, & writing some text to stdout so the user 951 - has an idea what is going on. 952 - 953 - socket(PF_INET, SOCK_RAW, IPPROTO_ICMP) = 3 954 - getuid() = 0 955 - setuid(0) = 0 956 - stat("/usr/share/locale/C/libc.cat", 0xbffff134) = -1 ENOENT (No such file or directory) 957 - stat("/usr/share/locale/libc/C", 0xbffff134) = -1 ENOENT (No such file or directory) 958 - stat("/usr/local/share/locale/C/libc.cat", 0xbffff134) = -1 ENOENT (No such file or directory) 959 - getpid() = 353 960 - setsockopt(3, SOL_SOCKET, SO_BROADCAST, [1], 4) = 0 961 - setsockopt(3, SOL_SOCKET, SO_RCVBUF, [49152], 4) = 0 962 - fstat(1, {st_mode=S_IFCHR|0620, st_rdev=makedev(3, 1), ...}) = 0 963 - mmap(0, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40008000 964 - ioctl(1, TCGETS, {B9600 opost isig icanon echo ...}) = 0 965 - write(1, "PING 127.0.0.1 (127.0.0.1): 56 d"..., 42PING 127.0.0.1 (127.0.0.1): 56 data bytes 966 - ) = 42 967 - sigaction(SIGINT, {0x8049ba0, [], SA_RESTART}, {SIG_DFL}) = 0 968 - sigaction(SIGALRM, {0x8049600, [], SA_RESTART}, {SIG_DFL}) = 0 969 - gettimeofday({948904719, 138951}, NULL) = 0 970 - sendto(3, "\10\0D\201a\1\0\0\17#\2178\307\36"..., 64, 0, {sin_family=AF_INET, 971 - sin_port=htons(0), sin_addr=inet_addr("127.0.0.1")}, 16) = 64 972 - sigaction(SIGALRM, {0x8049600, [], SA_RESTART}, {0x8049600, [], SA_RESTART}) = 0 973 - sigaction(SIGALRM, {0x8049ba0, [], SA_RESTART}, {0x8049600, [], SA_RESTART}) = 0 974 - alarm(10) = 0 975 - recvfrom(3, "E\0\0T\0005\0\0@\1|r\177\0\0\1\177"..., 192, 0, 976 - {sin_family=AF_INET, sin_port=htons(50882), sin_addr=inet_addr("127.0.0.1")}, [16]) = 84 977 - gettimeofday({948904719, 160224}, NULL) = 0 978 - recvfrom(3, "E\0\0T\0006\0\0\377\1\275p\177\0"..., 192, 0, 
979 - {sin_family=AF_INET, sin_port=htons(50882), sin_addr=inet_addr("127.0.0.1")}, [16]) = 84 980 - gettimeofday({948904719, 166952}, NULL) = 0 981 - write(1, "64 bytes from 127.0.0.1: icmp_se"..., 982 - 5764 bytes from 127.0.0.1: icmp_seq=0 ttl=255 time=28.0 ms 983 - 984 - Example 2 985 - --------- 986 - strace passwd 2>&1 | grep open 987 - produces the following output 988 - open("/etc/ld.so.cache", O_RDONLY) = 3 989 - open("/opt/kde/lib/libc.so.5", O_RDONLY) = -1 ENOENT (No such file or directory) 990 - open("/lib/libc.so.5", O_RDONLY) = 3 991 - open("/dev", O_RDONLY) = 3 992 - open("/var/run/utmp", O_RDONLY) = 3 993 - open("/etc/passwd", O_RDONLY) = 3 994 - open("/etc/shadow", O_RDONLY) = 3 995 - open("/etc/login.defs", O_RDONLY) = 4 996 - open("/dev/tty", O_RDONLY) = 4 997 - 998 - The 2>&1 is done to redirect stderr to stdout & grep is then filtering this input 999 - through the pipe for each line containing the string open. 1000 - 1001 - 1002 - Example 3 1003 - --------- 1004 - Getting sophisticated 1005 - telnetd crashes & I don't know why 1006 - 1007 - Steps 1008 - ----- 1009 - 1) Replace the following line in /etc/inetd.conf 1010 - telnet stream tcp nowait root /usr/sbin/in.telnetd -h 1011 - with 1012 - telnet stream tcp nowait root /blah 1013 - 1014 - 2) Create the file /blah with the following contents to start tracing telnetd 1015 - #!/bin/bash 1016 - /usr/bin/strace -o/t1 -f /usr/sbin/in.telnetd -h 1017 - 3) chmod 700 /blah to make it executable only to root 1018 - 4) 1019 - killall -HUP inetd 1020 - or ps aux | grep inetd 1021 - get inetd's process id 1022 - & kill -HUP inetd to restart it. 1023 - 1024 - Important options 1025 - ----------------- 1026 - -o is used to tell strace to output to a file in our case t1 in the root directory 1027 - -f is to follow children i.e. 1028 - e.g in our case above telnetd will start the login process & subsequently a shell like bash. 
1029 - You will be able to tell which is which from the process ID's listed on the left hand side 1030 - of the strace output. 1031 - -p<pid> will tell strace to attach to a running process, yup this can be done provided 1032 - it isn't being traced or debugged already & you have enough privileges, 1033 - the reason 2 processes cannot trace or debug the same program is that strace 1034 - becomes the parent process of the one being debugged & processes ( unlike people ) 1035 - can have only one parent. 1036 - 1037 - 1038 - However the file /t1 will get big quite quickly 1039 - to test it telnet 127.0.0.1 1040 - 1041 - now look at what files in.telnetd execve'd 1042 - 413 execve("/usr/sbin/in.telnetd", ["/usr/sbin/in.telnetd", "-h"], [/* 17 vars */]) = 0 1043 - 414 execve("/bin/login", ["/bin/login", "-h", "localhost", "-p"], [/* 2 vars */]) = 0 1044 - 1045 - Whey it worked!. 1046 - 1047 - 1048 - Other hints: 1049 - ------------ 1050 - If the program is not very interactive ( i.e. not much keyboard input ) 1051 - & is crashing in one architecture but not in another you can do 1052 - an strace of both programs under as identical a scenario as you can 1053 - on both architectures outputting to a file then. 1054 - do a diff of the two traces using the diff program 1055 - i.e. 1056 - diff output1 output2 1057 - & maybe you'll be able to see where the call paths differed, this 1058 - is possibly near the cause of the crash. 1059 - 1060 - More info 1061 - --------- 1062 - Look at man pages for strace & the various syscalls 1063 - e.g. man strace, man alarm, man socket. 1064 - 1065 - 1066 - Performance Debugging 1067 - ===================== 1068 - gcc is capable of compiling in profiling code just add the -p option 1069 - to the CFLAGS, this obviously affects program size & performance. 
1070 - This can be used by the gprof gnu profiling tool or the 1071 - gcov the gnu code coverage tool ( code coverage is a means of testing 1072 - code quality by checking if all the code in an executable in exercised by 1073 - a tester ). 1074 - 1075 - 1076 - Using top to find out where processes are sleeping in the kernel 1077 - ---------------------------------------------------------------- 1078 - To do this copy the System.map from the root directory where 1079 - the linux kernel was built to the /boot directory on your 1080 - linux machine. 1081 - Start top 1082 - Now type fU<return> 1083 - You should see a new field called WCHAN which 1084 - tells you where each process is sleeping here is a typical output. 1085 - 1086 - 6:59pm up 41 min, 1 user, load average: 0.00, 0.00, 0.00 1087 - 28 processes: 27 sleeping, 1 running, 0 zombie, 0 stopped 1088 - CPU states: 0.0% user, 0.1% system, 0.0% nice, 99.8% idle 1089 - Mem: 254900K av, 45976K used, 208924K free, 0K shrd, 28636K buff 1090 - Swap: 0K av, 0K used, 0K free 8620K cached 1091 - 1092 - PID USER PRI NI SIZE RSS SHARE WCHAN STAT LIB %CPU %MEM TIME COMMAND 1093 - 750 root 12 0 848 848 700 do_select S 0 0.1 0.3 0:00 in.telnetd 1094 - 767 root 16 0 1140 1140 964 R 0 0.1 0.4 0:00 top 1095 - 1 root 8 0 212 212 180 do_select S 0 0.0 0.0 0:00 init 1096 - 2 root 9 0 0 0 0 down_inte SW 0 0.0 0.0 0:00 kmcheck 1097 - 1098 - The time command 1099 - ---------------- 1100 - Another related command is the time command which gives you an indication 1101 - of where a process is spending the majority of its time. 1102 - e.g. 1103 - time ping -c 5 nc 1104 - outputs 1105 - real 0m4.054s 1106 - user 0m0.010s 1107 - sys 0m0.010s 1108 710 1109 711 Debugging under VM 1110 712 ==================
+18 -226
arch/s390/include/asm/cmpxchg.h
··· 11 11 #include <linux/types.h> 12 12 #include <linux/bug.h> 13 13 14 - extern void __xchg_called_with_bad_pointer(void); 15 - 16 - static inline unsigned long __xchg(unsigned long x, void *ptr, int size) 17 - { 18 - unsigned long addr, old; 19 - int shift; 20 - 21 - switch (size) { 22 - case 1: 23 - addr = (unsigned long) ptr; 24 - shift = (3 ^ (addr & 3)) << 3; 25 - addr ^= addr & 3; 26 - asm volatile( 27 - " l %0,%4\n" 28 - "0: lr 0,%0\n" 29 - " nr 0,%3\n" 30 - " or 0,%2\n" 31 - " cs %0,0,%4\n" 32 - " jl 0b\n" 33 - : "=&d" (old), "=Q" (*(int *) addr) 34 - : "d" ((x & 0xff) << shift), "d" (~(0xff << shift)), 35 - "Q" (*(int *) addr) : "memory", "cc", "0"); 36 - return old >> shift; 37 - case 2: 38 - addr = (unsigned long) ptr; 39 - shift = (2 ^ (addr & 2)) << 3; 40 - addr ^= addr & 2; 41 - asm volatile( 42 - " l %0,%4\n" 43 - "0: lr 0,%0\n" 44 - " nr 0,%3\n" 45 - " or 0,%2\n" 46 - " cs %0,0,%4\n" 47 - " jl 0b\n" 48 - : "=&d" (old), "=Q" (*(int *) addr) 49 - : "d" ((x & 0xffff) << shift), "d" (~(0xffff << shift)), 50 - "Q" (*(int *) addr) : "memory", "cc", "0"); 51 - return old >> shift; 52 - case 4: 53 - asm volatile( 54 - " l %0,%3\n" 55 - "0: cs %0,%2,%3\n" 56 - " jl 0b\n" 57 - : "=&d" (old), "=Q" (*(int *) ptr) 58 - : "d" (x), "Q" (*(int *) ptr) 59 - : "memory", "cc"); 60 - return old; 61 - #ifdef CONFIG_64BIT 62 - case 8: 63 - asm volatile( 64 - " lg %0,%3\n" 65 - "0: csg %0,%2,%3\n" 66 - " jl 0b\n" 67 - : "=&d" (old), "=m" (*(long *) ptr) 68 - : "d" (x), "Q" (*(long *) ptr) 69 - : "memory", "cc"); 70 - return old; 71 - #endif /* CONFIG_64BIT */ 72 - } 73 - __xchg_called_with_bad_pointer(); 74 - return x; 75 - } 76 - 77 - #define xchg(ptr, x) \ 78 - ({ \ 79 - __typeof__(*(ptr)) __ret; \ 80 - __ret = (__typeof__(*(ptr))) \ 81 - __xchg((unsigned long)(x), (void *)(ptr), sizeof(*(ptr)));\ 82 - __ret; \ 14 + #define cmpxchg(ptr, o, n) \ 15 + ({ \ 16 + __typeof__(*(ptr)) __o = (o); \ 17 + __typeof__(*(ptr)) __n = (n); \ 18 + (__typeof__(*(ptr))) 
__sync_val_compare_and_swap((ptr),__o,__n);\ 83 19 }) 84 20 85 - /* 86 - * Atomic compare and exchange. Compare OLD with MEM, if identical, 87 - * store NEW in MEM. Return the initial value in MEM. Success is 88 - * indicated by comparing RETURN with OLD. 89 - */ 21 + #define cmpxchg64 cmpxchg 22 + #define cmpxchg_local cmpxchg 23 + #define cmpxchg64_local cmpxchg 24 + 25 + #define xchg(ptr, x) \ 26 + ({ \ 27 + __typeof__(ptr) __ptr = (ptr); \ 28 + __typeof__(*(ptr)) __old; \ 29 + do { \ 30 + __old = *__ptr; \ 31 + } while (!__sync_bool_compare_and_swap(__ptr, __old, x)); \ 32 + __old; \ 33 + }) 90 34 91 35 #define __HAVE_ARCH_CMPXCHG 92 - 93 - extern void __cmpxchg_called_with_bad_pointer(void); 94 - 95 - static inline unsigned long __cmpxchg(void *ptr, unsigned long old, 96 - unsigned long new, int size) 97 - { 98 - unsigned long addr, prev, tmp; 99 - int shift; 100 - 101 - switch (size) { 102 - case 1: 103 - addr = (unsigned long) ptr; 104 - shift = (3 ^ (addr & 3)) << 3; 105 - addr ^= addr & 3; 106 - asm volatile( 107 - " l %0,%2\n" 108 - "0: nr %0,%5\n" 109 - " lr %1,%0\n" 110 - " or %0,%3\n" 111 - " or %1,%4\n" 112 - " cs %0,%1,%2\n" 113 - " jnl 1f\n" 114 - " xr %1,%0\n" 115 - " nr %1,%5\n" 116 - " jnz 0b\n" 117 - "1:" 118 - : "=&d" (prev), "=&d" (tmp), "+Q" (*(int *) addr) 119 - : "d" ((old & 0xff) << shift), 120 - "d" ((new & 0xff) << shift), 121 - "d" (~(0xff << shift)) 122 - : "memory", "cc"); 123 - return prev >> shift; 124 - case 2: 125 - addr = (unsigned long) ptr; 126 - shift = (2 ^ (addr & 2)) << 3; 127 - addr ^= addr & 2; 128 - asm volatile( 129 - " l %0,%2\n" 130 - "0: nr %0,%5\n" 131 - " lr %1,%0\n" 132 - " or %0,%3\n" 133 - " or %1,%4\n" 134 - " cs %0,%1,%2\n" 135 - " jnl 1f\n" 136 - " xr %1,%0\n" 137 - " nr %1,%5\n" 138 - " jnz 0b\n" 139 - "1:" 140 - : "=&d" (prev), "=&d" (tmp), "+Q" (*(int *) addr) 141 - : "d" ((old & 0xffff) << shift), 142 - "d" ((new & 0xffff) << shift), 143 - "d" (~(0xffff << shift)) 144 - : "memory", "cc"); 145 - return 
prev >> shift; 146 - case 4: 147 - asm volatile( 148 - " cs %0,%3,%1\n" 149 - : "=&d" (prev), "=Q" (*(int *) ptr) 150 - : "0" (old), "d" (new), "Q" (*(int *) ptr) 151 - : "memory", "cc"); 152 - return prev; 153 - #ifdef CONFIG_64BIT 154 - case 8: 155 - asm volatile( 156 - " csg %0,%3,%1\n" 157 - : "=&d" (prev), "=Q" (*(long *) ptr) 158 - : "0" (old), "d" (new), "Q" (*(long *) ptr) 159 - : "memory", "cc"); 160 - return prev; 161 - #endif /* CONFIG_64BIT */ 162 - } 163 - __cmpxchg_called_with_bad_pointer(); 164 - return old; 165 - } 166 - 167 - #define cmpxchg(ptr, o, n) \ 168 - ({ \ 169 - __typeof__(*(ptr)) __ret; \ 170 - __ret = (__typeof__(*(ptr))) \ 171 - __cmpxchg((ptr), (unsigned long)(o), (unsigned long)(n), \ 172 - sizeof(*(ptr))); \ 173 - __ret; \ 174 - }) 175 - 176 - #ifdef CONFIG_64BIT 177 - #define cmpxchg64(ptr, o, n) \ 178 - ({ \ 179 - cmpxchg((ptr), (o), (n)); \ 180 - }) 181 - #else /* CONFIG_64BIT */ 182 - static inline unsigned long long __cmpxchg64(void *ptr, 183 - unsigned long long old, 184 - unsigned long long new) 185 - { 186 - register_pair rp_old = {.pair = old}; 187 - register_pair rp_new = {.pair = new}; 188 - unsigned long long *ullptr = ptr; 189 - 190 - asm volatile( 191 - " cds %0,%2,%1" 192 - : "+d" (rp_old), "+Q" (*ullptr) 193 - : "d" (rp_new) 194 - : "memory", "cc"); 195 - return rp_old.pair; 196 - } 197 - 198 - #define cmpxchg64(ptr, o, n) \ 199 - ({ \ 200 - __typeof__(*(ptr)) __ret; \ 201 - __ret = (__typeof__(*(ptr))) \ 202 - __cmpxchg64((ptr), \ 203 - (unsigned long long)(o), \ 204 - (unsigned long long)(n)); \ 205 - __ret; \ 206 - }) 207 - #endif /* CONFIG_64BIT */ 208 36 209 37 #define __cmpxchg_double_op(p1, p2, o1, o2, n1, n2, insn) \ 210 38 ({ \ ··· 92 264 }) 93 265 94 266 #define system_has_cmpxchg_double() 1 95 - 96 - #include <asm-generic/cmpxchg-local.h> 97 - 98 - static inline unsigned long __cmpxchg_local(void *ptr, 99 - unsigned long old, 100 - unsigned long new, int size) 101 - { 102 - switch (size) { 103 - case 1: 104 
- case 2: 105 - case 4: 106 - #ifdef CONFIG_64BIT 107 - case 8: 108 - #endif 109 - return __cmpxchg(ptr, old, new, size); 110 - default: 111 - return __cmpxchg_local_generic(ptr, old, new, size); 112 - } 113 - 114 - return old; 115 - } 116 - 117 - /* 118 - * cmpxchg_local and cmpxchg64_local are atomic wrt current CPU. Always make 119 - * them available. 120 - */ 121 - #define cmpxchg_local(ptr, o, n) \ 122 - ({ \ 123 - __typeof__(*(ptr)) __ret; \ 124 - __ret = (__typeof__(*(ptr))) \ 125 - __cmpxchg_local((ptr), (unsigned long)(o), \ 126 - (unsigned long)(n), sizeof(*(ptr))); \ 127 - __ret; \ 128 - }) 129 - 130 - #define cmpxchg64_local(ptr, o, n) cmpxchg64((ptr), (o), (n)) 131 267 132 268 #endif /* __ASM_CMPXCHG_H */
+24 -22
arch/s390/include/asm/cputime.h
··· 10 10 #include <linux/types.h> 11 11 #include <asm/div64.h> 12 12 13 + #define CPUTIME_PER_USEC 4096ULL 14 + #define CPUTIME_PER_SEC (CPUTIME_PER_USEC * USEC_PER_SEC) 13 15 14 16 /* We want to use full resolution of the CPU timer: 2**-12 micro-seconds. */ 15 17 ··· 40 38 */ 41 39 static inline unsigned long cputime_to_jiffies(const cputime_t cputime) 42 40 { 43 - return __div((__force unsigned long long) cputime, 4096000000ULL / HZ); 41 + return __div((__force unsigned long long) cputime, CPUTIME_PER_SEC / HZ); 44 42 } 45 43 46 44 static inline cputime_t jiffies_to_cputime(const unsigned int jif) 47 45 { 48 - return (__force cputime_t)(jif * (4096000000ULL / HZ)); 46 + return (__force cputime_t)(jif * (CPUTIME_PER_SEC / HZ)); 49 47 } 50 48 51 49 static inline u64 cputime64_to_jiffies64(cputime64_t cputime) 52 50 { 53 51 unsigned long long jif = (__force unsigned long long) cputime; 54 - do_div(jif, 4096000000ULL / HZ); 52 + do_div(jif, CPUTIME_PER_SEC / HZ); 55 53 return jif; 56 54 } 57 55 58 56 static inline cputime64_t jiffies64_to_cputime64(const u64 jif) 59 57 { 60 - return (__force cputime64_t)(jif * (4096000000ULL / HZ)); 58 + return (__force cputime64_t)(jif * (CPUTIME_PER_SEC / HZ)); 61 59 } 62 60 63 61 /* ··· 70 68 71 69 static inline cputime_t usecs_to_cputime(const unsigned int m) 72 70 { 73 - return (__force cputime_t)(m * 4096ULL); 71 + return (__force cputime_t)(m * CPUTIME_PER_USEC); 74 72 } 75 73 76 74 #define usecs_to_cputime64(m) usecs_to_cputime(m) ··· 80 78 */ 81 79 static inline unsigned int cputime_to_secs(const cputime_t cputime) 82 80 { 83 - return __div((__force unsigned long long) cputime, 2048000000) >> 1; 81 + return __div((__force unsigned long long) cputime, CPUTIME_PER_SEC / 2) >> 1; 84 82 } 85 83 86 84 static inline cputime_t secs_to_cputime(const unsigned int s) 87 85 { 88 - return (__force cputime_t)(s * 4096000000ULL); 86 + return (__force cputime_t)(s * CPUTIME_PER_SEC); 89 87 } 90 88 91 89 /* ··· 93 91 */ 94 92 static inline 
cputime_t timespec_to_cputime(const struct timespec *value) 95 93 { 96 - unsigned long long ret = value->tv_sec * 4096000000ULL; 97 - return (__force cputime_t)(ret + value->tv_nsec * 4096 / 1000); 94 + unsigned long long ret = value->tv_sec * CPUTIME_PER_SEC; 95 + return (__force cputime_t)(ret + __div(value->tv_nsec * CPUTIME_PER_USEC, NSEC_PER_USEC)); 98 96 } 99 97 100 98 static inline void cputime_to_timespec(const cputime_t cputime, ··· 105 103 register_pair rp; 106 104 107 105 rp.pair = __cputime >> 1; 108 - asm ("dr %0,%1" : "+d" (rp) : "d" (2048000000UL)); 109 - value->tv_nsec = rp.subreg.even * 1000 / 4096; 106 + asm ("dr %0,%1" : "+d" (rp) : "d" (CPUTIME_PER_SEC / 2)); 107 + value->tv_nsec = rp.subreg.even * NSEC_PER_USEC / CPUTIME_PER_USEC; 110 108 value->tv_sec = rp.subreg.odd; 111 109 #else 112 - value->tv_nsec = (__cputime % 4096000000ULL) * 1000 / 4096; 113 - value->tv_sec = __cputime / 4096000000ULL; 110 + value->tv_nsec = (__cputime % CPUTIME_PER_SEC) * NSEC_PER_USEC / CPUTIME_PER_USEC; 111 + value->tv_sec = __cputime / CPUTIME_PER_SEC; 114 112 #endif 115 113 } 116 114 ··· 121 119 */ 122 120 static inline cputime_t timeval_to_cputime(const struct timeval *value) 123 121 { 124 - unsigned long long ret = value->tv_sec * 4096000000ULL; 125 - return (__force cputime_t)(ret + value->tv_usec * 4096ULL); 122 + unsigned long long ret = value->tv_sec * CPUTIME_PER_SEC; 123 + return (__force cputime_t)(ret + value->tv_usec * CPUTIME_PER_USEC); 126 124 } 127 125 128 126 static inline void cputime_to_timeval(const cputime_t cputime, ··· 133 131 register_pair rp; 134 132 135 133 rp.pair = __cputime >> 1; 136 - asm ("dr %0,%1" : "+d" (rp) : "d" (2048000000UL)); 137 - value->tv_usec = rp.subreg.even / 4096; 134 + asm ("dr %0,%1" : "+d" (rp) : "d" (CPUTIME_PER_USEC / 2)); 135 + value->tv_usec = rp.subreg.even / CPUTIME_PER_USEC; 138 136 value->tv_sec = rp.subreg.odd; 139 137 #else 140 - value->tv_usec = (__cputime % 4096000000ULL) / 4096; 141 - value->tv_sec = 
__cputime / 4096000000ULL; 138 + value->tv_usec = (__cputime % CPUTIME_PER_SEC) / CPUTIME_PER_USEC; 139 + value->tv_sec = __cputime / CPUTIME_PER_SEC; 142 140 #endif 143 141 } 144 142 ··· 148 146 static inline clock_t cputime_to_clock_t(cputime_t cputime) 149 147 { 150 148 unsigned long long clock = (__force unsigned long long) cputime; 151 - do_div(clock, 4096000000ULL / USER_HZ); 149 + do_div(clock, CPUTIME_PER_SEC / USER_HZ); 152 150 return clock; 153 151 } 154 152 155 153 static inline cputime_t clock_t_to_cputime(unsigned long x) 156 154 { 157 - return (__force cputime_t)(x * (4096000000ULL / USER_HZ)); 155 + return (__force cputime_t)(x * (CPUTIME_PER_SEC / USER_HZ)); 158 156 } 159 157 160 158 /* ··· 163 161 static inline clock_t cputime64_to_clock_t(cputime64_t cputime) 164 162 { 165 163 unsigned long long clock = (__force unsigned long long) cputime; 166 - do_div(clock, 4096000000ULL / USER_HZ); 164 + do_div(clock, CPUTIME_PER_SEC / USER_HZ); 167 165 return clock; 168 166 } 169 167
+27 -2
arch/s390/include/asm/debug.h
··· 151 151 * stored in the s390dbf. See Documentation/s390/s390dbf.txt for more details! 152 152 */ 153 153 extern debug_entry_t * 154 - debug_sprintf_event(debug_info_t* id,int level,char *string,...) 154 + __debug_sprintf_event(debug_info_t *id, int level, char *string, ...) 155 155 __attribute__ ((format(printf, 3, 4))); 156 156 157 + #define debug_sprintf_event(_id, _level, _fmt, ...) \ 158 + ({ \ 159 + debug_entry_t *__ret; \ 160 + debug_info_t *__id = _id; \ 161 + int __level = _level; \ 162 + if ((!__id) || (__level > __id->level)) \ 163 + __ret = NULL; \ 164 + else \ 165 + __ret = __debug_sprintf_event(__id, __level, \ 166 + _fmt, ## __VA_ARGS__); \ 167 + __ret; \ 168 + }) 157 169 158 170 static inline debug_entry_t* 159 171 debug_exception(debug_info_t* id, int level, void* data, int length) ··· 206 194 * stored in the s390dbf. See Documentation/s390/s390dbf.txt for more details! 207 195 */ 208 196 extern debug_entry_t * 209 - debug_sprintf_exception(debug_info_t* id,int level,char *string,...) 197 + __debug_sprintf_exception(debug_info_t *id, int level, char *string, ...) 210 198 __attribute__ ((format(printf, 3, 4))); 199 + 200 + #define debug_sprintf_exception(_id, _level, _fmt, ...) \ 201 + ({ \ 202 + debug_entry_t *__ret; \ 203 + debug_info_t *__id = _id; \ 204 + int __level = _level; \ 205 + if ((!__id) || (__level > __id->level)) \ 206 + __ret = NULL; \ 207 + else \ 208 + __ret = __debug_sprintf_exception(__id, __level, \ 209 + _fmt, ## __VA_ARGS__);\ 210 + __ret; \ 211 + }) 211 212 212 213 int debug_register_view(debug_info_t* id, struct debug_view* view); 213 214 int debug_unregister_view(debug_info_t* id, struct debug_view* view);
+51 -7
arch/s390/include/asm/ftrace.h
··· 1 1 #ifndef _ASM_S390_FTRACE_H 2 2 #define _ASM_S390_FTRACE_H 3 3 4 + #define ARCH_SUPPORTS_FTRACE_OPS 1 5 + 6 + #define MCOUNT_INSN_SIZE 24 7 + #define MCOUNT_RETURN_FIXUP 18 8 + 4 9 #ifndef __ASSEMBLY__ 5 10 6 - extern void _mcount(void); 11 + #define ftrace_return_address(n) __builtin_return_address(n) 12 + 13 + void _mcount(void); 14 + void ftrace_caller(void); 15 + 7 16 extern char ftrace_graph_caller_end; 17 + extern unsigned long ftrace_plt; 8 18 9 19 struct dyn_arch_ftrace { }; 10 20 11 - #define MCOUNT_ADDR ((long)_mcount) 21 + #define MCOUNT_ADDR ((unsigned long)_mcount) 22 + #define FTRACE_ADDR ((unsigned long)ftrace_caller) 12 23 24 + #define KPROBE_ON_FTRACE_NOP 0 25 + #define KPROBE_ON_FTRACE_CALL 1 13 26 14 27 static inline unsigned long ftrace_call_adjust(unsigned long addr) 15 28 { 16 29 return addr; 17 30 } 18 31 32 + struct ftrace_insn { 33 + u16 opc; 34 + s32 disp; 35 + } __packed; 36 + 37 + static inline void ftrace_generate_nop_insn(struct ftrace_insn *insn) 38 + { 39 + #ifdef CONFIG_FUNCTION_TRACER 40 + /* jg .+24 */ 41 + insn->opc = 0xc0f4; 42 + insn->disp = MCOUNT_INSN_SIZE / 2; 43 + #endif 44 + } 45 + 46 + static inline int is_ftrace_nop(struct ftrace_insn *insn) 47 + { 48 + #ifdef CONFIG_FUNCTION_TRACER 49 + if (insn->disp == MCOUNT_INSN_SIZE / 2) 50 + return 1; 51 + #endif 52 + return 0; 53 + } 54 + 55 + static inline void ftrace_generate_call_insn(struct ftrace_insn *insn, 56 + unsigned long ip) 57 + { 58 + #ifdef CONFIG_FUNCTION_TRACER 59 + unsigned long target; 60 + 61 + /* brasl r0,ftrace_caller */ 62 + target = is_module_addr((void *) ip) ? ftrace_plt : FTRACE_ADDR; 63 + insn->opc = 0xc005; 64 + insn->disp = (target - ip) / 2; 65 + #endif 66 + } 67 + 19 68 #endif /* __ASSEMBLY__ */ 20 - 21 - #define MCOUNT_INSN_SIZE 18 22 - 23 - #define ARCH_SUPPORTS_FTRACE_OPS 1 24 - 25 69 #endif /* _ASM_S390_FTRACE_H */
+2 -1
arch/s390/include/asm/idle.h
··· 9 9 10 10 #include <linux/types.h> 11 11 #include <linux/device.h> 12 + #include <linux/seqlock.h> 12 13 13 14 struct s390_idle_data { 14 - unsigned int sequence; 15 + seqcount_t seqcount; 15 16 unsigned long long idle_count; 16 17 unsigned long long idle_time; 17 18 unsigned long long clock_idle_enter;
+9
arch/s390/include/asm/io.h
··· 39 39 { 40 40 } 41 41 42 + static inline void __iomem *ioport_map(unsigned long port, unsigned int nr) 43 + { 44 + return NULL; 45 + } 46 + 47 + static inline void ioport_unmap(void __iomem *p) 48 + { 49 + } 50 + 42 51 /* 43 52 * s390 needs a private implementation of pci_iomap since ioremap with its 44 53 * offset parameter isn't sufficient. That's because BAR spaces are not
+4 -7
arch/s390/include/asm/irq.h
··· 1 1 #ifndef _ASM_IRQ_H 2 2 #define _ASM_IRQ_H 3 3 4 - #define EXT_INTERRUPT 1 5 - #define IO_INTERRUPT 2 6 - #define THIN_INTERRUPT 3 4 + #define EXT_INTERRUPT 0 5 + #define IO_INTERRUPT 1 6 + #define THIN_INTERRUPT 2 7 7 8 - #define NR_IRQS_BASE 4 8 + #define NR_IRQS_BASE 3 9 9 10 10 #ifdef CONFIG_PCI_NR_MSI 11 11 # define NR_IRQS (NR_IRQS_BASE + CONFIG_PCI_NR_MSI) 12 12 #else 13 13 # define NR_IRQS NR_IRQS_BASE 14 14 #endif 15 - 16 - /* This number is used when no interrupt has been assigned */ 17 - #define NO_IRQ 0 18 15 19 16 /* External interruption codes */ 20 17 #define EXT_IRQ_INTERRUPT_KEY 0x0040
+1
arch/s390/include/asm/kprobes.h
··· 60 60 struct arch_specific_insn { 61 61 /* copy of original instruction */ 62 62 kprobe_opcode_t *insn; 63 + unsigned int is_ftrace_insn : 1; 63 64 }; 64 65 65 66 struct prev_kprobe {
+2 -2
arch/s390/include/asm/lowcore.h
··· 147 147 __u32 softirq_pending; /* 0x02ec */ 148 148 __u32 percpu_offset; /* 0x02f0 */ 149 149 __u32 machine_flags; /* 0x02f4 */ 150 - __u32 ftrace_func; /* 0x02f8 */ 150 + __u8 pad_0x02f8[0x02fc-0x02f8]; /* 0x02f8 */ 151 151 __u32 spinlock_lockval; /* 0x02fc */ 152 152 153 153 __u8 pad_0x0300[0x0e00-0x0300]; /* 0x0300 */ ··· 297 297 __u64 percpu_offset; /* 0x0378 */ 298 298 __u64 vdso_per_cpu_data; /* 0x0380 */ 299 299 __u64 machine_flags; /* 0x0388 */ 300 - __u64 ftrace_func; /* 0x0390 */ 300 + __u8 pad_0x0390[0x0398-0x0390]; /* 0x0390 */ 301 301 __u64 gmap; /* 0x0398 */ 302 302 __u32 spinlock_lockval; /* 0x03a0 */ 303 303 __u8 pad_0x03a0[0x0400-0x03a4]; /* 0x03a4 */
+1 -4
arch/s390/include/asm/pci.h
··· 50 50 atomic64_t unmapped_pages; 51 51 } __packed __aligned(16); 52 52 53 - #define ZPCI_MSI_VEC_BITS 11 54 - #define ZPCI_MSI_VEC_MAX (1 << ZPCI_MSI_VEC_BITS) 55 - #define ZPCI_MSI_VEC_MASK (ZPCI_MSI_VEC_MAX - 1) 56 - 57 53 enum zpci_state { 58 54 ZPCI_FN_STATE_RESERVED, 59 55 ZPCI_FN_STATE_STANDBY, ··· 86 90 87 91 /* IRQ stuff */ 88 92 u64 msi_addr; /* MSI address */ 93 + unsigned int max_msi; /* maximum number of MSI's */ 89 94 struct airq_iv *aibv; /* adapter interrupt bit vector */ 90 95 unsigned int aisb; /* number of the summary bit */ 91 96
+4 -2
arch/s390/include/asm/pci_io.h
··· 139 139 int size, rc = 0; 140 140 141 141 while (n > 0) { 142 - size = zpci_get_max_write_size((u64) src, (u64) dst, n, 8); 142 + size = zpci_get_max_write_size((u64 __force) src, 143 + (u64) dst, n, 8); 143 144 req = ZPCI_CREATE_REQ(entry->fh, entry->bar, size); 144 145 rc = zpci_read_single(req, dst, offset, size); 145 146 if (rc) ··· 163 162 return -EINVAL; 164 163 165 164 while (n > 0) { 166 - size = zpci_get_max_write_size((u64) dst, (u64) src, n, 128); 165 + size = zpci_get_max_write_size((u64 __force) dst, 166 + (u64) src, n, 128); 167 167 req = ZPCI_CREATE_REQ(entry->fh, entry->bar, size); 168 168 169 169 if (size > 8) /* main path */
-2
arch/s390/include/asm/pgalloc.h
··· 22 22 void page_table_free(struct mm_struct *, unsigned long *); 23 23 void page_table_free_rcu(struct mmu_gather *, unsigned long *, unsigned long); 24 24 25 - void page_table_reset_pgste(struct mm_struct *, unsigned long, unsigned long, 26 - bool init_skey); 27 25 int set_guest_storage_key(struct mm_struct *mm, unsigned long addr, 28 26 unsigned long key, bool nq); 29 27
+32 -1
arch/s390/include/asm/pgtable.h
··· 133 133 #define MODULES_LEN (1UL << 31) 134 134 #endif 135 135 136 + static inline int is_module_addr(void *addr) 137 + { 138 + #ifdef CONFIG_64BIT 139 + BUILD_BUG_ON(MODULES_LEN > (1UL << 31)); 140 + if (addr < (void *)MODULES_VADDR) 141 + return 0; 142 + if (addr > (void *)MODULES_END) 143 + return 0; 144 + #endif 145 + return 1; 146 + } 147 + 136 148 /* 137 149 * A 31 bit pagetable entry of S390 has following format: 138 150 * | PFRA | | OS | ··· 491 479 return 0; 492 480 } 493 481 482 + /* 483 + * In the case that a guest uses storage keys 484 + * faults should no longer be backed by zero pages 485 + */ 486 + #define mm_forbids_zeropage mm_use_skey 494 487 static inline int mm_use_skey(struct mm_struct *mm) 495 488 { 496 489 #ifdef CONFIG_PGSTE ··· 1651 1634 return pmd; 1652 1635 } 1653 1636 1637 + #define __HAVE_ARCH_PMDP_GET_AND_CLEAR_FULL 1638 + static inline pmd_t pmdp_get_and_clear_full(struct mm_struct *mm, 1639 + unsigned long address, 1640 + pmd_t *pmdp, int full) 1641 + { 1642 + pmd_t pmd = *pmdp; 1643 + 1644 + if (!full) 1645 + pmdp_flush_lazy(mm, address, pmdp); 1646 + pmd_clear(pmdp); 1647 + return pmd; 1648 + } 1649 + 1654 1650 #define __HAVE_ARCH_PMDP_CLEAR_FLUSH 1655 1651 static inline pmd_t pmdp_clear_flush(struct vm_area_struct *vma, 1656 1652 unsigned long address, pmd_t *pmdp) ··· 1776 1746 extern int vmem_add_mapping(unsigned long start, unsigned long size); 1777 1747 extern int vmem_remove_mapping(unsigned long start, unsigned long size); 1778 1748 extern int s390_enable_sie(void); 1779 - extern void s390_enable_skey(void); 1749 + extern int s390_enable_skey(void); 1750 + extern void s390_reset_cmma(struct mm_struct *mm); 1780 1751 1781 1752 /* 1782 1753 * No page table caches to initialise
-2
arch/s390/include/asm/processor.h
··· 217 217 */ 218 218 static inline void cpu_relax(void) 219 219 { 220 - if (MACHINE_HAS_DIAG44) 221 - asm volatile("diag 0,0,68"); 222 220 barrier(); 223 221 } 224 222
+1 -8
arch/s390/include/asm/spinlock.h
··· 18 18 static inline int 19 19 _raw_compare_and_swap(unsigned int *lock, unsigned int old, unsigned int new) 20 20 { 21 - unsigned int old_expected = old; 22 - 23 - asm volatile( 24 - " cs %0,%3,%1" 25 - : "=d" (old), "=Q" (*lock) 26 - : "0" (old), "d" (new), "Q" (*lock) 27 - : "cc", "memory" ); 28 - return old == old_expected; 21 + return __sync_bool_compare_and_swap(lock, old, new); 29 22 } 30 23 31 24 /*
+1
arch/s390/include/asm/tlb.h
··· 121 121 #ifdef CONFIG_64BIT 122 122 if (tlb->mm->context.asce_limit <= (1UL << 31)) 123 123 return; 124 + pgtable_pmd_page_dtor(virt_to_page(pmd)); 124 125 tlb_remove_table(tlb, pmd); 125 126 #endif 126 127 }
+3 -1
arch/s390/include/uapi/asm/unistd.h
··· 287 287 #define __NR_getrandom 349 288 288 #define __NR_memfd_create 350 289 289 #define __NR_bpf 351 290 - #define NR_syscalls 352 290 + #define __NR_s390_pci_mmio_write 352 291 + #define __NR_s390_pci_mmio_read 353 292 + #define NR_syscalls 354 291 293 292 294 /* 293 295 * There are some system calls that are not present on 64 bit, some
+2 -3
arch/s390/kernel/asm-offsets.c
··· 17 17 * Make sure that the compiler is new enough. We want a compiler that 18 18 * is known to work with the "Q" assembler constraint. 19 19 */ 20 - #if __GNUC__ < 3 || (__GNUC__ == 3 && __GNUC_MINOR__ < 3) 21 - #error Your compiler is too old; please use version 3.3.3 or newer 20 + #if __GNUC__ < 4 || (__GNUC__ == 4 && __GNUC_MINOR__ < 3) 21 + #error Your compiler is too old; please use version 4.3 or newer 22 22 #endif 23 23 24 24 int main(void) ··· 156 156 DEFINE(__LC_INT_CLOCK, offsetof(struct _lowcore, int_clock)); 157 157 DEFINE(__LC_MCCK_CLOCK, offsetof(struct _lowcore, mcck_clock)); 158 158 DEFINE(__LC_MACHINE_FLAGS, offsetof(struct _lowcore, machine_flags)); 159 - DEFINE(__LC_FTRACE_FUNC, offsetof(struct _lowcore, ftrace_func)); 160 159 DEFINE(__LC_DUMP_REIPL, offsetof(struct _lowcore, ipib)); 161 160 BLANK(); 162 161 DEFINE(__LC_CPU_TIMER_SAVE_AREA, offsetof(struct _lowcore, cpu_timer_save_area));
+1 -1
arch/s390/kernel/compat_signal.c
··· 434 434 ksig->ka.sa.sa_restorer | PSW32_ADDR_AMODE; 435 435 } else { 436 436 /* Signal frames without vectors registers are short ! */ 437 - __u16 __user *svc = (void *) frame + frame_size - 2; 437 + __u16 __user *svc = (void __user *) frame + frame_size - 2; 438 438 if (__put_user(S390_SYSCALL_OPCODE | __NR_sigreturn, svc)) 439 439 return -EFAULT; 440 440 restorer = (unsigned long __force) svc | PSW32_ADDR_AMODE;
+2
arch/s390/kernel/compat_wrapper.c
··· 218 218 COMPAT_SYSCALL_WRAP3(getrandom, char __user *, buf, size_t, count, unsigned int, flags) 219 219 COMPAT_SYSCALL_WRAP2(memfd_create, const char __user *, uname, unsigned int, flags) 220 220 COMPAT_SYSCALL_WRAP3(bpf, int, cmd, union bpf_attr *, attr, unsigned int, size); 221 + COMPAT_SYSCALL_WRAP3(s390_pci_mmio_write, const unsigned long, mmio_addr, const void __user *, user_buffer, const size_t, length); 222 + COMPAT_SYSCALL_WRAP3(s390_pci_mmio_read, const unsigned long, mmio_addr, void __user *, user_buffer, const size_t, length);
+4 -8
arch/s390/kernel/debug.c
··· 1019 1019 */ 1020 1020 1021 1021 debug_entry_t* 1022 - debug_sprintf_event(debug_info_t* id, int level,char *string,...) 1022 + __debug_sprintf_event(debug_info_t *id, int level, char *string, ...) 1023 1023 { 1024 1024 va_list ap; 1025 1025 int numargs,idx; ··· 1027 1027 debug_sprintf_entry_t *curr_event; 1028 1028 debug_entry_t *active; 1029 1029 1030 - if((!id) || (level > id->level)) 1031 - return NULL; 1032 1030 if (!debug_active || !id->areas) 1033 1031 return NULL; 1034 1032 numargs=debug_count_numargs(string); ··· 1048 1050 1049 1051 return active; 1050 1052 } 1051 - EXPORT_SYMBOL(debug_sprintf_event); 1053 + EXPORT_SYMBOL(__debug_sprintf_event); 1052 1054 1053 1055 /* 1054 1056 * debug_sprintf_exception: 1055 1057 */ 1056 1058 1057 1059 debug_entry_t* 1058 - debug_sprintf_exception(debug_info_t* id, int level,char *string,...) 1060 + __debug_sprintf_exception(debug_info_t *id, int level, char *string, ...) 1059 1061 { 1060 1062 va_list ap; 1061 1063 int numargs,idx; ··· 1063 1065 debug_sprintf_entry_t *curr_event; 1064 1066 debug_entry_t *active; 1065 1067 1066 - if((!id) || (level > id->level)) 1067 - return NULL; 1068 1068 if (!debug_active || !id->areas) 1069 1069 return NULL; 1070 1070 ··· 1085 1089 1086 1090 return active; 1087 1091 } 1088 - EXPORT_SYMBOL(debug_sprintf_exception); 1092 + EXPORT_SYMBOL(__debug_sprintf_exception); 1089 1093 1090 1094 /* 1091 1095 * debug_register_view:
+2 -1
arch/s390/kernel/dumpstack.c
··· 191 191 console_verbose(); 192 192 spin_lock_irq(&die_lock); 193 193 bust_spinlocks(1); 194 - printk("%s: %04x [#%d] ", str, regs->int_code & 0xffff, ++die_counter); 194 + printk("%s: %04x ilc:%d [#%d] ", str, regs->int_code & 0xffff, 195 + regs->int_code >> 17, ++die_counter); 195 196 #ifdef CONFIG_PREEMPT 196 197 printk("PREEMPT "); 197 198 #endif
-4
arch/s390/kernel/early.c
··· 12 12 #include <linux/errno.h> 13 13 #include <linux/string.h> 14 14 #include <linux/ctype.h> 15 - #include <linux/ftrace.h> 16 15 #include <linux/lockdep.h> 17 16 #include <linux/module.h> 18 17 #include <linux/pfn.h> ··· 489 490 detect_machine_facilities(); 490 491 setup_topology(); 491 492 sclp_early_detect(); 492 - #ifdef CONFIG_DYNAMIC_FTRACE 493 - S390_lowcore.ftrace_func = (unsigned long)ftrace_caller; 494 - #endif 495 493 lockdep_on(); 496 494 }
+212 -212
arch/s390/kernel/entry.S
··· 53 53 .macro TRACE_IRQS_ON 54 54 #ifdef CONFIG_TRACE_IRQFLAGS 55 55 basr %r2,%r0 56 - l %r1,BASED(.Lhardirqs_on) 56 + l %r1,BASED(.Lc_hardirqs_on) 57 57 basr %r14,%r1 # call trace_hardirqs_on_caller 58 58 #endif 59 59 .endm ··· 61 61 .macro TRACE_IRQS_OFF 62 62 #ifdef CONFIG_TRACE_IRQFLAGS 63 63 basr %r2,%r0 64 - l %r1,BASED(.Lhardirqs_off) 64 + l %r1,BASED(.Lc_hardirqs_off) 65 65 basr %r14,%r1 # call trace_hardirqs_off_caller 66 66 #endif 67 67 .endm ··· 70 70 #ifdef CONFIG_LOCKDEP 71 71 tm __PT_PSW+1(%r11),0x01 # returning to user ? 72 72 jz .+10 73 - l %r1,BASED(.Llockdep_sys_exit) 73 + l %r1,BASED(.Lc_lockdep_sys_exit) 74 74 basr %r14,%r1 # call lockdep_sys_exit 75 75 #endif 76 76 .endm ··· 87 87 tmh %r8,0x0001 # interrupting from user ? 88 88 jnz 1f 89 89 lr %r14,%r9 90 - sl %r14,BASED(.Lcritical_start) 91 - cl %r14,BASED(.Lcritical_length) 90 + sl %r14,BASED(.Lc_critical_start) 91 + cl %r14,BASED(.Lc_critical_length) 92 92 jhe 0f 93 93 la %r11,\savearea # inside critical section, do cleanup 94 94 bras %r14,cleanup_critical ··· 162 162 lm %r6,%r15,__SF_GPRS(%r15) # load gprs of next task 163 163 br %r14 164 164 165 - __critical_start: 165 + .L__critical_start: 166 166 /* 167 167 * SVC interrupt handler routine. System calls are synchronous events and 168 168 * are executed with interrupts enabled. 
··· 170 170 171 171 ENTRY(system_call) 172 172 stpt __LC_SYNC_ENTER_TIMER 173 - sysc_stm: 173 + .Lsysc_stm: 174 174 stm %r8,%r15,__LC_SAVE_AREA_SYNC 175 175 l %r12,__LC_THREAD_INFO 176 176 l %r13,__LC_SVC_NEW_PSW+4 177 177 lhi %r14,_PIF_SYSCALL 178 - sysc_per: 178 + .Lsysc_per: 179 179 l %r15,__LC_KERNEL_STACK 180 180 la %r11,STACK_FRAME_OVERHEAD(%r15) # pointer to pt_regs 181 - sysc_vtime: 181 + .Lsysc_vtime: 182 182 UPDATE_VTIME %r8,%r9,__LC_SYNC_ENTER_TIMER 183 183 stm %r0,%r7,__PT_R0(%r11) 184 184 mvc __PT_R8(32,%r11),__LC_SAVE_AREA_SYNC 185 185 mvc __PT_PSW(8,%r11),__LC_SVC_OLD_PSW 186 186 mvc __PT_INT_CODE(4,%r11),__LC_SVC_ILC 187 187 st %r14,__PT_FLAGS(%r11) 188 - sysc_do_svc: 188 + .Lsysc_do_svc: 189 189 l %r10,__TI_sysc_table(%r12) # 31 bit system call table 190 190 lh %r8,__PT_INT_CODE+2(%r11) 191 191 sla %r8,2 # shift and test for svc0 192 - jnz sysc_nr_ok 192 + jnz .Lsysc_nr_ok 193 193 # svc 0: system call number in %r1 194 194 cl %r1,BASED(.Lnr_syscalls) 195 - jnl sysc_nr_ok 195 + jnl .Lsysc_nr_ok 196 196 sth %r1,__PT_INT_CODE+2(%r11) 197 197 lr %r8,%r1 198 198 sla %r8,2 199 - sysc_nr_ok: 199 + .Lsysc_nr_ok: 200 200 xc __SF_BACKCHAIN(4,%r15),__SF_BACKCHAIN(%r15) 201 201 st %r2,__PT_ORIG_GPR2(%r11) 202 202 st %r7,STACK_FRAME_OVERHEAD(%r15) 203 203 l %r9,0(%r8,%r10) # get system call addr. 204 204 tm __TI_flags+3(%r12),_TIF_TRACE 205 - jnz sysc_tracesys 205 + jnz .Lsysc_tracesys 206 206 basr %r14,%r9 # call sys_xxxx 207 207 st %r2,__PT_R2(%r11) # store return value 208 208 209 - sysc_return: 209 + .Lsysc_return: 210 210 LOCKDEP_SYS_EXIT 211 - sysc_tif: 211 + .Lsysc_tif: 212 212 tm __PT_PSW+1(%r11),0x01 # returning to user ? 
213 - jno sysc_restore 213 + jno .Lsysc_restore 214 214 tm __PT_FLAGS+3(%r11),_PIF_WORK 215 - jnz sysc_work 215 + jnz .Lsysc_work 216 216 tm __TI_flags+3(%r12),_TIF_WORK 217 - jnz sysc_work # check for thread work 217 + jnz .Lsysc_work # check for thread work 218 218 tm __LC_CPU_FLAGS+3,_CIF_WORK 219 - jnz sysc_work 220 - sysc_restore: 219 + jnz .Lsysc_work 220 + .Lsysc_restore: 221 221 mvc __LC_RETURN_PSW(8),__PT_PSW(%r11) 222 222 stpt __LC_EXIT_TIMER 223 223 lm %r0,%r15,__PT_R0(%r11) 224 224 lpsw __LC_RETURN_PSW 225 - sysc_done: 225 + .Lsysc_done: 226 226 227 227 # 228 228 # One of the work bits is on. Find out which one. 229 229 # 230 - sysc_work: 230 + .Lsysc_work: 231 231 tm __LC_CPU_FLAGS+3,_CIF_MCCK_PENDING 232 - jo sysc_mcck_pending 232 + jo .Lsysc_mcck_pending 233 233 tm __TI_flags+3(%r12),_TIF_NEED_RESCHED 234 - jo sysc_reschedule 234 + jo .Lsysc_reschedule 235 235 tm __PT_FLAGS+3(%r11),_PIF_PER_TRAP 236 - jo sysc_singlestep 236 + jo .Lsysc_singlestep 237 237 tm __TI_flags+3(%r12),_TIF_SIGPENDING 238 - jo sysc_sigpending 238 + jo .Lsysc_sigpending 239 239 tm __TI_flags+3(%r12),_TIF_NOTIFY_RESUME 240 - jo sysc_notify_resume 240 + jo .Lsysc_notify_resume 241 241 tm __LC_CPU_FLAGS+3,_CIF_ASCE 242 - jo sysc_uaccess 243 - j sysc_return # beware of critical section cleanup 242 + jo .Lsysc_uaccess 243 + j .Lsysc_return # beware of critical section cleanup 244 244 245 245 # 246 246 # _TIF_NEED_RESCHED is set, call schedule 247 247 # 248 - sysc_reschedule: 249 - l %r1,BASED(.Lschedule) 250 - la %r14,BASED(sysc_return) 248 + .Lsysc_reschedule: 249 + l %r1,BASED(.Lc_schedule) 250 + la %r14,BASED(.Lsysc_return) 251 251 br %r1 # call schedule 252 252 253 253 # 254 254 # _CIF_MCCK_PENDING is set, call handler 255 255 # 256 - sysc_mcck_pending: 257 - l %r1,BASED(.Lhandle_mcck) 258 - la %r14,BASED(sysc_return) 256 + .Lsysc_mcck_pending: 257 + l %r1,BASED(.Lc_handle_mcck) 258 + la %r14,BASED(.Lsysc_return) 259 259 br %r1 # TIF bit will be cleared by handler 260 260 261 
261 # 262 262 # _CIF_ASCE is set, load user space asce 263 263 # 264 - sysc_uaccess: 264 + .Lsysc_uaccess: 265 265 ni __LC_CPU_FLAGS+3,255-_CIF_ASCE 266 266 lctl %c1,%c1,__LC_USER_ASCE # load primary asce 267 - j sysc_return 267 + j .Lsysc_return 268 268 269 269 # 270 270 # _TIF_SIGPENDING is set, call do_signal 271 271 # 272 - sysc_sigpending: 272 + .Lsysc_sigpending: 273 273 lr %r2,%r11 # pass pointer to pt_regs 274 - l %r1,BASED(.Ldo_signal) 274 + l %r1,BASED(.Lc_do_signal) 275 275 basr %r14,%r1 # call do_signal 276 276 tm __PT_FLAGS+3(%r11),_PIF_SYSCALL 277 - jno sysc_return 277 + jno .Lsysc_return 278 278 lm %r2,%r7,__PT_R2(%r11) # load svc arguments 279 279 l %r10,__TI_sysc_table(%r12) # 31 bit system call table 280 280 xr %r8,%r8 # svc 0 returns -ENOSYS 281 281 clc __PT_INT_CODE+2(2,%r11),BASED(.Lnr_syscalls+2) 282 - jnl sysc_nr_ok # invalid svc number -> do svc 0 282 + jnl .Lsysc_nr_ok # invalid svc number -> do svc 0 283 283 lh %r8,__PT_INT_CODE+2(%r11) # load new svc number 284 284 sla %r8,2 285 - j sysc_nr_ok # restart svc 285 + j .Lsysc_nr_ok # restart svc 286 286 287 287 # 288 288 # _TIF_NOTIFY_RESUME is set, call do_notify_resume 289 289 # 290 - sysc_notify_resume: 290 + .Lsysc_notify_resume: 291 291 lr %r2,%r11 # pass pointer to pt_regs 292 - l %r1,BASED(.Ldo_notify_resume) 293 - la %r14,BASED(sysc_return) 292 + l %r1,BASED(.Lc_do_notify_resume) 293 + la %r14,BASED(.Lsysc_return) 294 294 br %r1 # call do_notify_resume 295 295 296 296 # 297 297 # _PIF_PER_TRAP is set, call do_per_trap 298 298 # 299 - sysc_singlestep: 299 + .Lsysc_singlestep: 300 300 ni __PT_FLAGS+3(%r11),255-_PIF_PER_TRAP 301 301 lr %r2,%r11 # pass pointer to pt_regs 302 - l %r1,BASED(.Ldo_per_trap) 303 - la %r14,BASED(sysc_return) 302 + l %r1,BASED(.Lc_do_per_trap) 303 + la %r14,BASED(.Lsysc_return) 304 304 br %r1 # call do_per_trap 305 305 306 306 # 307 307 # call tracehook_report_syscall_entry/tracehook_report_syscall_exit before 308 308 # and after the system call 309 309 # 310 - 
sysc_tracesys: 311 - l %r1,BASED(.Ltrace_enter) 310 + .Lsysc_tracesys: 311 + l %r1,BASED(.Lc_trace_enter) 312 312 lr %r2,%r11 # pass pointer to pt_regs 313 313 la %r3,0 314 314 xr %r0,%r0 ··· 316 316 st %r0,__PT_R2(%r11) 317 317 basr %r14,%r1 # call do_syscall_trace_enter 318 318 cl %r2,BASED(.Lnr_syscalls) 319 - jnl sysc_tracenogo 319 + jnl .Lsysc_tracenogo 320 320 lr %r8,%r2 321 321 sll %r8,2 322 322 l %r9,0(%r8,%r10) 323 - sysc_tracego: 323 + .Lsysc_tracego: 324 324 lm %r3,%r7,__PT_R3(%r11) 325 325 st %r7,STACK_FRAME_OVERHEAD(%r15) 326 326 l %r2,__PT_ORIG_GPR2(%r11) 327 327 basr %r14,%r9 # call sys_xxx 328 328 st %r2,__PT_R2(%r11) # store return value 329 - sysc_tracenogo: 329 + .Lsysc_tracenogo: 330 330 tm __TI_flags+3(%r12),_TIF_TRACE 331 - jz sysc_return 332 - l %r1,BASED(.Ltrace_exit) 331 + jz .Lsysc_return 332 + l %r1,BASED(.Lc_trace_exit) 333 333 lr %r2,%r11 # pass pointer to pt_regs 334 - la %r14,BASED(sysc_return) 334 + la %r14,BASED(.Lsysc_return) 335 335 br %r1 # call do_syscall_trace_exit 336 336 337 337 # ··· 341 341 la %r11,STACK_FRAME_OVERHEAD(%r15) 342 342 l %r12,__LC_THREAD_INFO 343 343 l %r13,__LC_SVC_NEW_PSW+4 344 - l %r1,BASED(.Lschedule_tail) 344 + l %r1,BASED(.Lc_schedule_tail) 345 345 basr %r14,%r1 # call schedule_tail 346 346 TRACE_IRQS_ON 347 347 ssm __LC_SVC_NEW_PSW # reenable interrupts 348 348 tm __PT_PSW+1(%r11),0x01 # forking a kernel thread ? 349 - jne sysc_tracenogo 349 + jne .Lsysc_tracenogo 350 350 # it's a kernel thread 351 351 lm %r9,%r10,__PT_R9(%r11) # load gprs 352 352 ENTRY(kernel_thread_starter) 353 353 la %r2,0(%r10) 354 354 basr %r14,%r9 355 - j sysc_tracenogo 355 + j .Lsysc_tracenogo 356 356 357 357 /* 358 358 * Program check handler routine ··· 369 369 tmh %r8,0x4000 # PER bit set in old PSW ? 
370 370 jnz 0f # -> enabled, can't be a double fault 371 371 tm __LC_PGM_ILC+3,0x80 # check for per exception 372 - jnz pgm_svcper # -> single stepped svc 372 + jnz .Lpgm_svcper # -> single stepped svc 373 373 0: CHECK_STACK STACK_SIZE,__LC_SAVE_AREA_SYNC 374 374 ahi %r15,-(STACK_FRAME_OVERHEAD + __PT_SIZE) 375 375 j 2f ··· 386 386 jz 0f 387 387 l %r1,__TI_task(%r12) 388 388 tmh %r8,0x0001 # kernel per event ? 389 - jz pgm_kprobe 389 + jz .Lpgm_kprobe 390 390 oi __PT_FLAGS+3(%r11),_PIF_PER_TRAP 391 391 mvc __THREAD_per_address(4,%r1),__LC_PER_ADDRESS 392 392 mvc __THREAD_per_cause(2,%r1),__LC_PER_CODE 393 393 mvc __THREAD_per_paid(1,%r1),__LC_PER_ACCESS_ID 394 394 0: REENABLE_IRQS 395 395 xc __SF_BACKCHAIN(4,%r15),__SF_BACKCHAIN(%r15) 396 - l %r1,BASED(.Ljump_table) 396 + l %r1,BASED(.Lc_jump_table) 397 397 la %r10,0x7f 398 398 n %r10,__PT_INT_CODE(%r11) 399 - je sysc_return 399 + je .Lsysc_return 400 400 sll %r10,2 401 401 l %r1,0(%r10,%r1) # load address of handler routine 402 402 lr %r2,%r11 # pass pointer to pt_regs 403 403 basr %r14,%r1 # branch to interrupt-handler 404 - j sysc_return 404 + j .Lsysc_return 405 405 406 406 # 407 407 # PER event in supervisor state, must be kprobes 408 408 # 409 - pgm_kprobe: 409 + .Lpgm_kprobe: 410 410 REENABLE_IRQS 411 411 xc __SF_BACKCHAIN(4,%r15),__SF_BACKCHAIN(%r15) 412 - l %r1,BASED(.Ldo_per_trap) 412 + l %r1,BASED(.Lc_do_per_trap) 413 413 lr %r2,%r11 # pass pointer to pt_regs 414 414 basr %r14,%r1 # call do_per_trap 415 - j sysc_return 415 + j .Lsysc_return 416 416 417 417 # 418 418 # single stepped system call 419 419 # 420 - pgm_svcper: 420 + .Lpgm_svcper: 421 421 mvc __LC_RETURN_PSW(4),__LC_SVC_NEW_PSW 422 - mvc __LC_RETURN_PSW+4(4),BASED(.Lsysc_per) 422 + mvc __LC_RETURN_PSW+4(4),BASED(.Lc_sysc_per) 423 423 lhi %r14,_PIF_SYSCALL | _PIF_PER_TRAP 424 - lpsw __LC_RETURN_PSW # branch to sysc_per and enable irqs 424 + lpsw __LC_RETURN_PSW # branch to .Lsysc_per and enable irqs 425 425 426 426 /* 427 427 * IO interrupt 
handler routine ··· 435 435 l %r13,__LC_SVC_NEW_PSW+4 436 436 lm %r8,%r9,__LC_IO_OLD_PSW 437 437 tmh %r8,0x0001 # interrupting from user ? 438 - jz io_skip 438 + jz .Lio_skip 439 439 UPDATE_VTIME %r14,%r15,__LC_ASYNC_ENTER_TIMER 440 - io_skip: 440 + .Lio_skip: 441 441 SWITCH_ASYNC __LC_SAVE_AREA_ASYNC,__LC_ASYNC_STACK,STACK_SHIFT 442 442 stm %r0,%r7,__PT_R0(%r11) 443 443 mvc __PT_R8(32,%r11),__LC_SAVE_AREA_ASYNC ··· 446 446 xc __PT_FLAGS(4,%r11),__PT_FLAGS(%r11) 447 447 TRACE_IRQS_OFF 448 448 xc __SF_BACKCHAIN(4,%r15),__SF_BACKCHAIN(%r15) 449 - io_loop: 450 - l %r1,BASED(.Ldo_IRQ) 449 + .Lio_loop: 450 + l %r1,BASED(.Lc_do_IRQ) 451 451 lr %r2,%r11 # pass pointer to pt_regs 452 452 lhi %r3,IO_INTERRUPT 453 453 tm __PT_INT_CODE+8(%r11),0x80 # adapter interrupt ? 454 - jz io_call 454 + jz .Lio_call 455 455 lhi %r3,THIN_INTERRUPT 456 - io_call: 456 + .Lio_call: 457 457 basr %r14,%r1 # call do_IRQ 458 458 tm __LC_MACHINE_FLAGS+2,0x10 # MACHINE_FLAG_LPAR 459 - jz io_return 459 + jz .Lio_return 460 460 tpi 0 461 - jz io_return 461 + jz .Lio_return 462 462 mvc __PT_INT_CODE(12,%r11),__LC_SUBCHANNEL_ID 463 - j io_loop 464 - io_return: 463 + j .Lio_loop 464 + .Lio_return: 465 465 LOCKDEP_SYS_EXIT 466 466 TRACE_IRQS_ON 467 - io_tif: 467 + .Lio_tif: 468 468 tm __TI_flags+3(%r12),_TIF_WORK 469 - jnz io_work # there is work to do (signals etc.) 469 + jnz .Lio_work # there is work to do (signals etc.) 470 470 tm __LC_CPU_FLAGS+3,_CIF_WORK 471 - jnz io_work 472 - io_restore: 471 + jnz .Lio_work 472 + .Lio_restore: 473 473 mvc __LC_RETURN_PSW(8),__PT_PSW(%r11) 474 474 stpt __LC_EXIT_TIMER 475 475 lm %r0,%r15,__PT_R0(%r11) 476 476 lpsw __LC_RETURN_PSW 477 - io_done: 477 + .Lio_done: 478 478 479 479 # 480 480 # There is work todo, find out in which context we have been interrupted: ··· 483 483 # the preemption counter and if it is zero call preempt_schedule_irq 484 484 # Before any work can be done, a switch to the kernel stack is required. 
485 485 # 486 - io_work: 486 + .Lio_work: 487 487 tm __PT_PSW+1(%r11),0x01 # returning to user ? 488 - jo io_work_user # yes -> do resched & signal 488 + jo .Lio_work_user # yes -> do resched & signal 489 489 #ifdef CONFIG_PREEMPT 490 490 # check for preemptive scheduling 491 491 icm %r0,15,__TI_precount(%r12) 492 - jnz io_restore # preemption disabled 492 + jnz .Lio_restore # preemption disabled 493 493 tm __TI_flags+3(%r12),_TIF_NEED_RESCHED 494 - jno io_restore 494 + jno .Lio_restore 495 495 # switch to kernel stack 496 496 l %r1,__PT_R15(%r11) 497 497 ahi %r1,-(STACK_FRAME_OVERHEAD + __PT_SIZE) ··· 499 499 xc __SF_BACKCHAIN(4,%r1),__SF_BACKCHAIN(%r1) 500 500 la %r11,STACK_FRAME_OVERHEAD(%r1) 501 501 lr %r15,%r1 502 - # TRACE_IRQS_ON already done at io_return, call 502 + # TRACE_IRQS_ON already done at .Lio_return, call 503 503 # TRACE_IRQS_OFF to keep things symmetrical 504 504 TRACE_IRQS_OFF 505 - l %r1,BASED(.Lpreempt_irq) 505 + l %r1,BASED(.Lc_preempt_irq) 506 506 basr %r14,%r1 # call preempt_schedule_irq 507 - j io_return 507 + j .Lio_return 508 508 #else 509 - j io_restore 509 + j .Lio_restore 510 510 #endif 511 511 512 512 # 513 513 # Need to do work before returning to userspace, switch to kernel stack 514 514 # 515 - io_work_user: 515 + .Lio_work_user: 516 516 l %r1,__LC_KERNEL_STACK 517 517 mvc STACK_FRAME_OVERHEAD(__PT_SIZE,%r1),0(%r11) 518 518 xc __SF_BACKCHAIN(4,%r1),__SF_BACKCHAIN(%r1) ··· 522 522 # 523 523 # One of the work bits is on. Find out which one. 
524 524 # 525 - io_work_tif: 525 + .Lio_work_tif: 526 526 tm __LC_CPU_FLAGS+3(%r12),_CIF_MCCK_PENDING 527 - jo io_mcck_pending 527 + jo .Lio_mcck_pending 528 528 tm __TI_flags+3(%r12),_TIF_NEED_RESCHED 529 - jo io_reschedule 529 + jo .Lio_reschedule 530 530 tm __TI_flags+3(%r12),_TIF_SIGPENDING 531 - jo io_sigpending 531 + jo .Lio_sigpending 532 532 tm __TI_flags+3(%r12),_TIF_NOTIFY_RESUME 533 - jo io_notify_resume 533 + jo .Lio_notify_resume 534 534 tm __LC_CPU_FLAGS+3,_CIF_ASCE 535 - jo io_uaccess 536 - j io_return # beware of critical section cleanup 535 + jo .Lio_uaccess 536 + j .Lio_return # beware of critical section cleanup 537 537 538 538 # 539 539 # _CIF_MCCK_PENDING is set, call handler 540 540 # 541 - io_mcck_pending: 542 - # TRACE_IRQS_ON already done at io_return 543 - l %r1,BASED(.Lhandle_mcck) 541 + .Lio_mcck_pending: 542 + # TRACE_IRQS_ON already done at .Lio_return 543 + l %r1,BASED(.Lc_handle_mcck) 544 544 basr %r14,%r1 # TIF bit will be cleared by handler 545 545 TRACE_IRQS_OFF 546 - j io_return 546 + j .Lio_return 547 547 548 548 # 549 549 # _CIF_ASCE is set, load user space asce 550 550 # 551 - io_uaccess: 551 + .Lio_uaccess: 552 552 ni __LC_CPU_FLAGS+3,255-_CIF_ASCE 553 553 lctl %c1,%c1,__LC_USER_ASCE # load primary asce 554 - j io_return 554 + j .Lio_return 555 555 556 556 # 557 557 # _TIF_NEED_RESCHED is set, call schedule 558 558 # 559 - io_reschedule: 560 - # TRACE_IRQS_ON already done at io_return 561 - l %r1,BASED(.Lschedule) 559 + .Lio_reschedule: 560 + # TRACE_IRQS_ON already done at .Lio_return 561 + l %r1,BASED(.Lc_schedule) 562 562 ssm __LC_SVC_NEW_PSW # reenable interrupts 563 563 basr %r14,%r1 # call scheduler 564 564 ssm __LC_PGM_NEW_PSW # disable I/O and ext. 
interrupts 565 565 TRACE_IRQS_OFF 566 - j io_return 566 + j .Lio_return 567 567 568 568 # 569 569 # _TIF_SIGPENDING is set, call do_signal 570 570 # 571 - io_sigpending: 572 - # TRACE_IRQS_ON already done at io_return 573 - l %r1,BASED(.Ldo_signal) 571 + .Lio_sigpending: 572 + # TRACE_IRQS_ON already done at .Lio_return 573 + l %r1,BASED(.Lc_do_signal) 574 574 ssm __LC_SVC_NEW_PSW # reenable interrupts 575 575 lr %r2,%r11 # pass pointer to pt_regs 576 576 basr %r14,%r1 # call do_signal 577 577 ssm __LC_PGM_NEW_PSW # disable I/O and ext. interrupts 578 578 TRACE_IRQS_OFF 579 - j io_return 579 + j .Lio_return 580 580 581 581 # 582 582 # _TIF_SIGPENDING is set, call do_signal 583 583 # 584 - io_notify_resume: 585 - # TRACE_IRQS_ON already done at io_return 586 - l %r1,BASED(.Ldo_notify_resume) 584 + .Lio_notify_resume: 585 + # TRACE_IRQS_ON already done at .Lio_return 586 + l %r1,BASED(.Lc_do_notify_resume) 587 587 ssm __LC_SVC_NEW_PSW # reenable interrupts 588 588 lr %r2,%r11 # pass pointer to pt_regs 589 589 basr %r14,%r1 # call do_notify_resume 590 590 ssm __LC_PGM_NEW_PSW # disable I/O and ext. interrupts 591 591 TRACE_IRQS_OFF 592 - j io_return 592 + j .Lio_return 593 593 594 594 /* 595 595 * External interrupt handler routine ··· 603 603 l %r13,__LC_SVC_NEW_PSW+4 604 604 lm %r8,%r9,__LC_EXT_OLD_PSW 605 605 tmh %r8,0x0001 # interrupting from user ? 
606 - jz ext_skip 606 + jz .Lext_skip 607 607 UPDATE_VTIME %r14,%r15,__LC_ASYNC_ENTER_TIMER 608 - ext_skip: 608 + .Lext_skip: 609 609 SWITCH_ASYNC __LC_SAVE_AREA_ASYNC,__LC_ASYNC_STACK,STACK_SHIFT 610 610 stm %r0,%r7,__PT_R0(%r11) 611 611 mvc __PT_R8(32,%r11),__LC_SAVE_AREA_ASYNC ··· 614 614 mvc __PT_INT_PARM(4,%r11),__LC_EXT_PARAMS 615 615 xc __PT_FLAGS(4,%r11),__PT_FLAGS(%r11) 616 616 TRACE_IRQS_OFF 617 - l %r1,BASED(.Ldo_IRQ) 617 + l %r1,BASED(.Lc_do_IRQ) 618 618 lr %r2,%r11 # pass pointer to pt_regs 619 619 lhi %r3,EXT_INTERRUPT 620 620 basr %r14,%r1 # call do_IRQ 621 - j io_return 621 + j .Lio_return 622 622 623 623 /* 624 - * Load idle PSW. The second "half" of this function is in cleanup_idle. 624 + * Load idle PSW. The second "half" of this function is in .Lcleanup_idle. 625 625 */ 626 626 ENTRY(psw_idle) 627 627 st %r3,__SF_EMPTY(%r15) 628 628 basr %r1,0 629 - la %r1,psw_idle_lpsw+4-.(%r1) 629 + la %r1,.Lpsw_idle_lpsw+4-.(%r1) 630 630 st %r1,__SF_EMPTY+4(%r15) 631 631 oi __SF_EMPTY+4(%r15),0x80 632 632 stck __CLOCK_IDLE_ENTER(%r2) 633 633 stpt __TIMER_IDLE_ENTER(%r2) 634 - psw_idle_lpsw: 634 + .Lpsw_idle_lpsw: 635 635 lpsw __SF_EMPTY(%r15) 636 636 br %r14 637 - psw_idle_end: 637 + .Lpsw_idle_end: 638 638 639 - __critical_end: 639 + .L__critical_end: 640 640 641 641 /* 642 642 * Machine check handler routines ··· 650 650 l %r13,__LC_SVC_NEW_PSW+4 651 651 lm %r8,%r9,__LC_MCK_OLD_PSW 652 652 tm __LC_MCCK_CODE,0x80 # system damage? 653 - jo mcck_panic # yes -> rest of mcck code invalid 653 + jo .Lmcck_panic # yes -> rest of mcck code invalid 654 654 la %r14,__LC_CPU_TIMER_SAVE_AREA 655 655 mvc __LC_MCCK_ENTER_TIMER(8),0(%r14) 656 656 tm __LC_MCCK_CODE+5,0x02 # stored cpu timer value valid? ··· 668 668 2: spt 0(%r14) 669 669 mvc __LC_MCCK_ENTER_TIMER(8),0(%r14) 670 670 3: tm __LC_MCCK_CODE+2,0x09 # mwp + ia of old psw valid? 
671 - jno mcck_panic # no -> skip cleanup critical 671 + jno .Lmcck_panic # no -> skip cleanup critical 672 672 tm %r8,0x0001 # interrupting from user ? 673 - jz mcck_skip 673 + jz .Lmcck_skip 674 674 UPDATE_VTIME %r14,%r15,__LC_MCCK_ENTER_TIMER 675 - mcck_skip: 675 + .Lmcck_skip: 676 676 SWITCH_ASYNC __LC_GPREGS_SAVE_AREA+32,__LC_PANIC_STACK,PAGE_SHIFT 677 677 stm %r0,%r7,__PT_R0(%r11) 678 678 mvc __PT_R8(32,%r11),__LC_GPREGS_SAVE_AREA+32 679 679 stm %r8,%r9,__PT_PSW(%r11) 680 680 xc __PT_FLAGS(4,%r11),__PT_FLAGS(%r11) 681 681 xc __SF_BACKCHAIN(4,%r15),__SF_BACKCHAIN(%r15) 682 - l %r1,BASED(.Ldo_machine_check) 682 + l %r1,BASED(.Lc_do_machine_check) 683 683 lr %r2,%r11 # pass pointer to pt_regs 684 684 basr %r14,%r1 # call s390_do_machine_check 685 685 tm __PT_PSW+1(%r11),0x01 # returning to user ? 686 - jno mcck_return 686 + jno .Lmcck_return 687 687 l %r1,__LC_KERNEL_STACK # switch to kernel stack 688 688 mvc STACK_FRAME_OVERHEAD(__PT_SIZE,%r1),0(%r11) 689 689 xc __SF_BACKCHAIN(4,%r1),__SF_BACKCHAIN(%r1) ··· 691 691 lr %r15,%r1 692 692 ssm __LC_PGM_NEW_PSW # turn dat on, keep irqs off 693 693 tm __LC_CPU_FLAGS+3,_CIF_MCCK_PENDING 694 - jno mcck_return 694 + jno .Lmcck_return 695 695 TRACE_IRQS_OFF 696 - l %r1,BASED(.Lhandle_mcck) 696 + l %r1,BASED(.Lc_handle_mcck) 697 697 basr %r14,%r1 # call s390_handle_mcck 698 698 TRACE_IRQS_ON 699 - mcck_return: 699 + .Lmcck_return: 700 700 mvc __LC_RETURN_MCCK_PSW(8),__PT_PSW(%r11) # move return PSW 701 701 tm __LC_RETURN_MCCK_PSW+1,0x01 # returning to user ? 
702 702 jno 0f ··· 706 706 0: lm %r0,%r15,__PT_R0(%r11) 707 707 lpsw __LC_RETURN_MCCK_PSW 708 708 709 - mcck_panic: 709 + .Lmcck_panic: 710 710 l %r14,__LC_PANIC_STACK 711 711 slr %r14,%r15 712 712 sra %r14,PAGE_SHIFT 713 713 jz 0f 714 714 l %r15,__LC_PANIC_STACK 715 - j mcck_skip 715 + j .Lmcck_skip 716 716 0: ahi %r15,-(STACK_FRAME_OVERHEAD + __PT_SIZE) 717 - j mcck_skip 717 + j .Lmcck_skip 718 718 719 719 # 720 720 # PSW restart interrupt handler ··· 764 764 1: .long kernel_stack_overflow 765 765 #endif 766 766 767 - cleanup_table: 767 + .Lcleanup_table: 768 768 .long system_call + 0x80000000 769 - .long sysc_do_svc + 0x80000000 770 - .long sysc_tif + 0x80000000 771 - .long sysc_restore + 0x80000000 772 - .long sysc_done + 0x80000000 773 - .long io_tif + 0x80000000 774 - .long io_restore + 0x80000000 775 - .long io_done + 0x80000000 769 + .long .Lsysc_do_svc + 0x80000000 770 + .long .Lsysc_tif + 0x80000000 771 + .long .Lsysc_restore + 0x80000000 772 + .long .Lsysc_done + 0x80000000 773 + .long .Lio_tif + 0x80000000 774 + .long .Lio_restore + 0x80000000 775 + .long .Lio_done + 0x80000000 776 776 .long psw_idle + 0x80000000 777 - .long psw_idle_end + 0x80000000 777 + .long .Lpsw_idle_end + 0x80000000 778 778 779 779 cleanup_critical: 780 - cl %r9,BASED(cleanup_table) # system_call 780 + cl %r9,BASED(.Lcleanup_table) # system_call 781 781 jl 0f 782 - cl %r9,BASED(cleanup_table+4) # sysc_do_svc 783 - jl cleanup_system_call 784 - cl %r9,BASED(cleanup_table+8) # sysc_tif 782 + cl %r9,BASED(.Lcleanup_table+4) # .Lsysc_do_svc 783 + jl .Lcleanup_system_call 784 + cl %r9,BASED(.Lcleanup_table+8) # .Lsysc_tif 785 785 jl 0f 786 - cl %r9,BASED(cleanup_table+12) # sysc_restore 787 - jl cleanup_sysc_tif 788 - cl %r9,BASED(cleanup_table+16) # sysc_done 789 - jl cleanup_sysc_restore 790 - cl %r9,BASED(cleanup_table+20) # io_tif 786 + cl %r9,BASED(.Lcleanup_table+12) # .Lsysc_restore 787 + jl .Lcleanup_sysc_tif 788 + cl %r9,BASED(.Lcleanup_table+16) # .Lsysc_done 789 + jl 
.Lcleanup_sysc_restore 790 + cl %r9,BASED(.Lcleanup_table+20) # .Lio_tif 791 791 jl 0f 792 - cl %r9,BASED(cleanup_table+24) # io_restore 793 - jl cleanup_io_tif 794 - cl %r9,BASED(cleanup_table+28) # io_done 795 - jl cleanup_io_restore 796 - cl %r9,BASED(cleanup_table+32) # psw_idle 792 + cl %r9,BASED(.Lcleanup_table+24) # .Lio_restore 793 + jl .Lcleanup_io_tif 794 + cl %r9,BASED(.Lcleanup_table+28) # .Lio_done 795 + jl .Lcleanup_io_restore 796 + cl %r9,BASED(.Lcleanup_table+32) # psw_idle 797 797 jl 0f 798 - cl %r9,BASED(cleanup_table+36) # psw_idle_end 799 - jl cleanup_idle 798 + cl %r9,BASED(.Lcleanup_table+36) # .Lpsw_idle_end 799 + jl .Lcleanup_idle 800 800 0: br %r14 801 801 802 - cleanup_system_call: 802 + .Lcleanup_system_call: 803 803 # check if stpt has been executed 804 - cl %r9,BASED(cleanup_system_call_insn) 804 + cl %r9,BASED(.Lcleanup_system_call_insn) 805 805 jh 0f 806 806 mvc __LC_SYNC_ENTER_TIMER(8),__LC_ASYNC_ENTER_TIMER 807 807 chi %r11,__LC_SAVE_AREA_ASYNC 808 808 je 0f 809 809 mvc __LC_SYNC_ENTER_TIMER(8),__LC_MCCK_ENTER_TIMER 810 810 0: # check if stm has been executed 811 - cl %r9,BASED(cleanup_system_call_insn+4) 811 + cl %r9,BASED(.Lcleanup_system_call_insn+4) 812 812 jh 0f 813 813 mvc __LC_SAVE_AREA_SYNC(32),0(%r11) 814 814 0: # set up saved registers r12, and r13 815 815 st %r12,16(%r11) # r12 thread-info pointer 816 816 st %r13,20(%r11) # r13 literal-pool pointer 817 817 # check if the user time calculation has been done 818 - cl %r9,BASED(cleanup_system_call_insn+8) 818 + cl %r9,BASED(.Lcleanup_system_call_insn+8) 819 819 jh 0f 820 820 l %r10,__LC_EXIT_TIMER 821 821 l %r15,__LC_EXIT_TIMER+4 ··· 824 824 st %r10,__LC_USER_TIMER 825 825 st %r15,__LC_USER_TIMER+4 826 826 0: # check if the system time calculation has been done 827 - cl %r9,BASED(cleanup_system_call_insn+12) 827 + cl %r9,BASED(.Lcleanup_system_call_insn+12) 828 828 jh 0f 829 829 l %r10,__LC_LAST_UPDATE_TIMER 830 830 l %r15,__LC_LAST_UPDATE_TIMER+4 ··· 848 848 # setup saved 
register 15 849 849 st %r15,28(%r11) # r15 stack pointer 850 850 # set new psw address and exit 851 - l %r9,BASED(cleanup_table+4) # sysc_do_svc + 0x80000000 851 + l %r9,BASED(.Lcleanup_table+4) # .Lsysc_do_svc + 0x80000000 852 852 br %r14 853 - cleanup_system_call_insn: 853 + .Lcleanup_system_call_insn: 854 854 .long system_call + 0x80000000 855 - .long sysc_stm + 0x80000000 856 - .long sysc_vtime + 0x80000000 + 36 857 - .long sysc_vtime + 0x80000000 + 76 855 + .long .Lsysc_stm + 0x80000000 856 + .long .Lsysc_vtime + 0x80000000 + 36 857 + .long .Lsysc_vtime + 0x80000000 + 76 858 858 859 - cleanup_sysc_tif: 860 - l %r9,BASED(cleanup_table+8) # sysc_tif + 0x80000000 859 + .Lcleanup_sysc_tif: 860 + l %r9,BASED(.Lcleanup_table+8) # .Lsysc_tif + 0x80000000 861 861 br %r14 862 862 863 - cleanup_sysc_restore: 864 - cl %r9,BASED(cleanup_sysc_restore_insn) 863 + .Lcleanup_sysc_restore: 864 + cl %r9,BASED(.Lcleanup_sysc_restore_insn) 865 865 jhe 0f 866 866 l %r9,12(%r11) # get saved pointer to pt_regs 867 867 mvc __LC_RETURN_PSW(8),__PT_PSW(%r9) ··· 869 869 lm %r0,%r7,__PT_R0(%r9) 870 870 0: lm %r8,%r9,__LC_RETURN_PSW 871 871 br %r14 872 - cleanup_sysc_restore_insn: 873 - .long sysc_done - 4 + 0x80000000 872 + .Lcleanup_sysc_restore_insn: 873 + .long .Lsysc_done - 4 + 0x80000000 874 874 875 - cleanup_io_tif: 876 - l %r9,BASED(cleanup_table+20) # io_tif + 0x80000000 875 + .Lcleanup_io_tif: 876 + l %r9,BASED(.Lcleanup_table+20) # .Lio_tif + 0x80000000 877 877 br %r14 878 878 879 - cleanup_io_restore: 880 - cl %r9,BASED(cleanup_io_restore_insn) 879 + .Lcleanup_io_restore: 880 + cl %r9,BASED(.Lcleanup_io_restore_insn) 881 881 jhe 0f 882 882 l %r9,12(%r11) # get saved r11 pointer to pt_regs 883 883 mvc __LC_RETURN_PSW(8),__PT_PSW(%r9) ··· 885 885 lm %r0,%r7,__PT_R0(%r9) 886 886 0: lm %r8,%r9,__LC_RETURN_PSW 887 887 br %r14 888 - cleanup_io_restore_insn: 889 - .long io_done - 4 + 0x80000000 888 + .Lcleanup_io_restore_insn: 889 + .long .Lio_done - 4 + 0x80000000 890 890 891 - 
cleanup_idle: 891 + .Lcleanup_idle: 892 892 # copy interrupt clock & cpu timer 893 893 mvc __CLOCK_IDLE_EXIT(8,%r2),__LC_INT_CLOCK 894 894 mvc __TIMER_IDLE_EXIT(8,%r2),__LC_ASYNC_ENTER_TIMER ··· 897 897 mvc __CLOCK_IDLE_EXIT(8,%r2),__LC_MCCK_CLOCK 898 898 mvc __TIMER_IDLE_EXIT(8,%r2),__LC_MCCK_ENTER_TIMER 899 899 0: # check if stck has been executed 900 - cl %r9,BASED(cleanup_idle_insn) 900 + cl %r9,BASED(.Lcleanup_idle_insn) 901 901 jhe 1f 902 902 mvc __CLOCK_IDLE_ENTER(8,%r2),__CLOCK_IDLE_EXIT(%r2) 903 903 mvc __TIMER_IDLE_ENTER(8,%r2),__TIMER_IDLE_EXIT(%r3) ··· 913 913 stm %r9,%r10,__LC_SYSTEM_TIMER 914 914 mvc __LC_LAST_UPDATE_TIMER(8),__TIMER_IDLE_EXIT(%r2) 915 915 # prepare return psw 916 - n %r8,BASED(cleanup_idle_wait) # clear irq & wait state bits 916 + n %r8,BASED(.Lcleanup_idle_wait) # clear irq & wait state bits 917 917 l %r9,24(%r11) # return from psw_idle 918 918 br %r14 919 - cleanup_idle_insn: 920 - .long psw_idle_lpsw + 0x80000000 921 - cleanup_idle_wait: 919 + .Lcleanup_idle_insn: 920 + .long .Lpsw_idle_lpsw + 0x80000000 921 + .Lcleanup_idle_wait: 922 922 .long 0xfcfdffff 923 923 924 924 /* ··· 933 933 /* 934 934 * Symbol constants 935 935 */ 936 - .Ldo_machine_check: .long s390_do_machine_check 937 - .Lhandle_mcck: .long s390_handle_mcck 938 - .Ldo_IRQ: .long do_IRQ 939 - .Ldo_signal: .long do_signal 940 - .Ldo_notify_resume: .long do_notify_resume 941 - .Ldo_per_trap: .long do_per_trap 942 - .Ljump_table: .long pgm_check_table 943 - .Lschedule: .long schedule 936 + .Lc_do_machine_check: .long s390_do_machine_check 937 + .Lc_handle_mcck: .long s390_handle_mcck 938 + .Lc_do_IRQ: .long do_IRQ 939 + .Lc_do_signal: .long do_signal 940 + .Lc_do_notify_resume: .long do_notify_resume 941 + .Lc_do_per_trap: .long do_per_trap 942 + .Lc_jump_table: .long pgm_check_table 943 + .Lc_schedule: .long schedule 944 944 #ifdef CONFIG_PREEMPT 945 - .Lpreempt_irq: .long preempt_schedule_irq 945 + .Lc_preempt_irq: .long preempt_schedule_irq 946 946 #endif 947 - 
.Ltrace_enter: .long do_syscall_trace_enter 948 - .Ltrace_exit: .long do_syscall_trace_exit 949 - .Lschedule_tail: .long schedule_tail 950 - .Lsysc_per: .long sysc_per + 0x80000000 947 + .Lc_trace_enter: .long do_syscall_trace_enter 948 + .Lc_trace_exit: .long do_syscall_trace_exit 949 + .Lc_schedule_tail: .long schedule_tail 950 + .Lc_sysc_per: .long .Lsysc_per + 0x80000000 951 951 #ifdef CONFIG_TRACE_IRQFLAGS 952 - .Lhardirqs_on: .long trace_hardirqs_on_caller 953 - .Lhardirqs_off: .long trace_hardirqs_off_caller 952 + .Lc_hardirqs_on: .long trace_hardirqs_on_caller 953 + .Lc_hardirqs_off: .long trace_hardirqs_off_caller 954 954 #endif 955 955 #ifdef CONFIG_LOCKDEP 956 - .Llockdep_sys_exit: .long lockdep_sys_exit 956 + .Lc_lockdep_sys_exit: .long lockdep_sys_exit 957 957 #endif 958 - .Lcritical_start: .long __critical_start + 0x80000000 959 - .Lcritical_length: .long __critical_end - __critical_start 958 + .Lc_critical_start: .long .L__critical_start + 0x80000000 959 + .Lc_critical_length: .long .L__critical_end - .L__critical_start 960 960 961 961 .section .rodata, "a" 962 962 #define SYSCALL(esa,esame,emu) .long esa
+2
arch/s390/kernel/entry.h
··· 74 74 long sys_s390_personality(unsigned int personality); 75 75 long sys_s390_runtime_instr(int command, int signum); 76 76 77 + long sys_s390_pci_mmio_write(unsigned long, const void __user *, size_t); 78 + long sys_s390_pci_mmio_read(unsigned long, void __user *, size_t); 77 79 #endif /* _ENTRY_H */
+186 -186
arch/s390/kernel/entry64.S
··· 91 91 .if \reason==1 92 92 # Some program interrupts are suppressing (e.g. protection). 93 93 # We must also check the instruction after SIE in that case. 94 - # do_protection_exception will rewind to rewind_pad 94 + # do_protection_exception will rewind to .Lrewind_pad 95 95 jh .+42 96 96 .else 97 97 jhe .+42 ··· 192 192 lmg %r6,%r15,__SF_GPRS(%r15) # load gprs of next task 193 193 br %r14 194 194 195 - __critical_start: 195 + .L__critical_start: 196 196 /* 197 197 * SVC interrupt handler routine. System calls are synchronous events and 198 198 * are executed with interrupts enabled. ··· 200 200 201 201 ENTRY(system_call) 202 202 stpt __LC_SYNC_ENTER_TIMER 203 - sysc_stmg: 203 + .Lsysc_stmg: 204 204 stmg %r8,%r15,__LC_SAVE_AREA_SYNC 205 205 lg %r10,__LC_LAST_BREAK 206 206 lg %r12,__LC_THREAD_INFO 207 207 lghi %r14,_PIF_SYSCALL 208 - sysc_per: 208 + .Lsysc_per: 209 209 lg %r15,__LC_KERNEL_STACK 210 210 la %r11,STACK_FRAME_OVERHEAD(%r15) # pointer to pt_regs 211 - sysc_vtime: 211 + .Lsysc_vtime: 212 212 UPDATE_VTIME %r13,__LC_SYNC_ENTER_TIMER 213 213 LAST_BREAK %r13 214 214 stmg %r0,%r7,__PT_R0(%r11) ··· 216 216 mvc __PT_PSW(16,%r11),__LC_SVC_OLD_PSW 217 217 mvc __PT_INT_CODE(4,%r11),__LC_SVC_ILC 218 218 stg %r14,__PT_FLAGS(%r11) 219 - sysc_do_svc: 219 + .Lsysc_do_svc: 220 220 lg %r10,__TI_sysc_table(%r12) # address of system call table 221 221 llgh %r8,__PT_INT_CODE+2(%r11) 222 222 slag %r8,%r8,2 # shift and test for svc 0 223 - jnz sysc_nr_ok 223 + jnz .Lsysc_nr_ok 224 224 # svc 0: system call number in %r1 225 225 llgfr %r1,%r1 # clear high word in r1 226 226 cghi %r1,NR_syscalls 227 - jnl sysc_nr_ok 227 + jnl .Lsysc_nr_ok 228 228 sth %r1,__PT_INT_CODE+2(%r11) 229 229 slag %r8,%r1,2 230 - sysc_nr_ok: 230 + .Lsysc_nr_ok: 231 231 xc __SF_BACKCHAIN(8,%r15),__SF_BACKCHAIN(%r15) 232 232 stg %r2,__PT_ORIG_GPR2(%r11) 233 233 stg %r7,STACK_FRAME_OVERHEAD(%r15) 234 234 lgf %r9,0(%r8,%r10) # get system call add. 
235 235 tm __TI_flags+7(%r12),_TIF_TRACE 236 - jnz sysc_tracesys 236 + jnz .Lsysc_tracesys 237 237 basr %r14,%r9 # call sys_xxxx 238 238 stg %r2,__PT_R2(%r11) # store return value 239 239 240 - sysc_return: 240 + .Lsysc_return: 241 241 LOCKDEP_SYS_EXIT 242 - sysc_tif: 242 + .Lsysc_tif: 243 243 tm __PT_PSW+1(%r11),0x01 # returning to user ? 244 - jno sysc_restore 244 + jno .Lsysc_restore 245 245 tm __PT_FLAGS+7(%r11),_PIF_WORK 246 - jnz sysc_work 246 + jnz .Lsysc_work 247 247 tm __TI_flags+7(%r12),_TIF_WORK 248 - jnz sysc_work # check for work 248 + jnz .Lsysc_work # check for work 249 249 tm __LC_CPU_FLAGS+7,_CIF_WORK 250 - jnz sysc_work 251 - sysc_restore: 250 + jnz .Lsysc_work 251 + .Lsysc_restore: 252 252 lg %r14,__LC_VDSO_PER_CPU 253 253 lmg %r0,%r10,__PT_R0(%r11) 254 254 mvc __LC_RETURN_PSW(16),__PT_PSW(%r11) ··· 256 256 mvc __VDSO_ECTG_BASE(16,%r14),__LC_EXIT_TIMER 257 257 lmg %r11,%r15,__PT_R11(%r11) 258 258 lpswe __LC_RETURN_PSW 259 - sysc_done: 259 + .Lsysc_done: 260 260 261 261 # 262 262 # One of the work bits is on. Find out which one. 
263 263 # 264 - sysc_work: 264 + .Lsysc_work: 265 265 tm __LC_CPU_FLAGS+7,_CIF_MCCK_PENDING 266 - jo sysc_mcck_pending 266 + jo .Lsysc_mcck_pending 267 267 tm __TI_flags+7(%r12),_TIF_NEED_RESCHED 268 - jo sysc_reschedule 268 + jo .Lsysc_reschedule 269 269 #ifdef CONFIG_UPROBES 270 270 tm __TI_flags+7(%r12),_TIF_UPROBE 271 - jo sysc_uprobe_notify 271 + jo .Lsysc_uprobe_notify 272 272 #endif 273 273 tm __PT_FLAGS+7(%r11),_PIF_PER_TRAP 274 - jo sysc_singlestep 274 + jo .Lsysc_singlestep 275 275 tm __TI_flags+7(%r12),_TIF_SIGPENDING 276 - jo sysc_sigpending 276 + jo .Lsysc_sigpending 277 277 tm __TI_flags+7(%r12),_TIF_NOTIFY_RESUME 278 - jo sysc_notify_resume 278 + jo .Lsysc_notify_resume 279 279 tm __LC_CPU_FLAGS+7,_CIF_ASCE 280 - jo sysc_uaccess 281 - j sysc_return # beware of critical section cleanup 280 + jo .Lsysc_uaccess 281 + j .Lsysc_return # beware of critical section cleanup 282 282 283 283 # 284 284 # _TIF_NEED_RESCHED is set, call schedule 285 285 # 286 - sysc_reschedule: 287 - larl %r14,sysc_return 286 + .Lsysc_reschedule: 287 + larl %r14,.Lsysc_return 288 288 jg schedule 289 289 290 290 # 291 291 # _CIF_MCCK_PENDING is set, call handler 292 292 # 293 - sysc_mcck_pending: 294 - larl %r14,sysc_return 293 + .Lsysc_mcck_pending: 294 + larl %r14,.Lsysc_return 295 295 jg s390_handle_mcck # TIF bit will be cleared by handler 296 296 297 297 # 298 298 # _CIF_ASCE is set, load user space asce 299 299 # 300 - sysc_uaccess: 300 + .Lsysc_uaccess: 301 301 ni __LC_CPU_FLAGS+7,255-_CIF_ASCE 302 302 lctlg %c1,%c1,__LC_USER_ASCE # load primary asce 303 - j sysc_return 303 + j .Lsysc_return 304 304 305 305 # 306 306 # _TIF_SIGPENDING is set, call do_signal 307 307 # 308 - sysc_sigpending: 308 + .Lsysc_sigpending: 309 309 lgr %r2,%r11 # pass pointer to pt_regs 310 310 brasl %r14,do_signal 311 311 tm __PT_FLAGS+7(%r11),_PIF_SYSCALL 312 - jno sysc_return 312 + jno .Lsysc_return 313 313 lmg %r2,%r7,__PT_R2(%r11) # load svc arguments 314 314 lg %r10,__TI_sysc_table(%r12) # 
address of system call table 315 315 lghi %r8,0 # svc 0 returns -ENOSYS 316 316 llgh %r1,__PT_INT_CODE+2(%r11) # load new svc number 317 317 cghi %r1,NR_syscalls 318 - jnl sysc_nr_ok # invalid svc number -> do svc 0 318 + jnl .Lsysc_nr_ok # invalid svc number -> do svc 0 319 319 slag %r8,%r1,2 320 - j sysc_nr_ok # restart svc 320 + j .Lsysc_nr_ok # restart svc 321 321 322 322 # 323 323 # _TIF_NOTIFY_RESUME is set, call do_notify_resume 324 324 # 325 - sysc_notify_resume: 325 + .Lsysc_notify_resume: 326 326 lgr %r2,%r11 # pass pointer to pt_regs 327 - larl %r14,sysc_return 327 + larl %r14,.Lsysc_return 328 328 jg do_notify_resume 329 329 330 330 # 331 331 # _TIF_UPROBE is set, call uprobe_notify_resume 332 332 # 333 333 #ifdef CONFIG_UPROBES 334 - sysc_uprobe_notify: 334 + .Lsysc_uprobe_notify: 335 335 lgr %r2,%r11 # pass pointer to pt_regs 336 - larl %r14,sysc_return 336 + larl %r14,.Lsysc_return 337 337 jg uprobe_notify_resume 338 338 #endif 339 339 340 340 # 341 341 # _PIF_PER_TRAP is set, call do_per_trap 342 342 # 343 - sysc_singlestep: 343 + .Lsysc_singlestep: 344 344 ni __PT_FLAGS+7(%r11),255-_PIF_PER_TRAP 345 345 lgr %r2,%r11 # pass pointer to pt_regs 346 - larl %r14,sysc_return 346 + larl %r14,.Lsysc_return 347 347 jg do_per_trap 348 348 349 349 # 350 350 # call tracehook_report_syscall_entry/tracehook_report_syscall_exit before 351 351 # and after the system call 352 352 # 353 - sysc_tracesys: 353 + .Lsysc_tracesys: 354 354 lgr %r2,%r11 # pass pointer to pt_regs 355 355 la %r3,0 356 356 llgh %r0,__PT_INT_CODE+2(%r11) ··· 358 358 brasl %r14,do_syscall_trace_enter 359 359 lghi %r0,NR_syscalls 360 360 clgr %r0,%r2 361 - jnh sysc_tracenogo 361 + jnh .Lsysc_tracenogo 362 362 sllg %r8,%r2,2 363 363 lgf %r9,0(%r8,%r10) 364 - sysc_tracego: 364 + .Lsysc_tracego: 365 365 lmg %r3,%r7,__PT_R3(%r11) 366 366 stg %r7,STACK_FRAME_OVERHEAD(%r15) 367 367 lg %r2,__PT_ORIG_GPR2(%r11) 368 368 basr %r14,%r9 # call sys_xxx 369 369 stg %r2,__PT_R2(%r11) # store return value 370 - 
sysc_tracenogo: 370 + .Lsysc_tracenogo: 371 371 tm __TI_flags+7(%r12),_TIF_TRACE 372 - jz sysc_return 372 + jz .Lsysc_return 373 373 lgr %r2,%r11 # pass pointer to pt_regs 374 - larl %r14,sysc_return 374 + larl %r14,.Lsysc_return 375 375 jg do_syscall_trace_exit 376 376 377 377 # ··· 384 384 TRACE_IRQS_ON 385 385 ssm __LC_SVC_NEW_PSW # reenable interrupts 386 386 tm __PT_PSW+1(%r11),0x01 # forking a kernel thread ? 387 - jne sysc_tracenogo 387 + jne .Lsysc_tracenogo 388 388 # it's a kernel thread 389 389 lmg %r9,%r10,__PT_R9(%r11) # load gprs 390 390 ENTRY(kernel_thread_starter) 391 391 la %r2,0(%r10) 392 392 basr %r14,%r9 393 - j sysc_tracenogo 393 + j .Lsysc_tracenogo 394 394 395 395 /* 396 396 * Program check handler routine ··· 409 409 tmhh %r8,0x4000 # PER bit set in old PSW ? 410 410 jnz 0f # -> enabled, can't be a double fault 411 411 tm __LC_PGM_ILC+3,0x80 # check for per exception 412 - jnz pgm_svcper # -> single stepped svc 412 + jnz .Lpgm_svcper # -> single stepped svc 413 413 0: CHECK_STACK STACK_SIZE,__LC_SAVE_AREA_SYNC 414 414 aghi %r15,-(STACK_FRAME_OVERHEAD + __PT_SIZE) 415 415 j 2f ··· 432 432 tm __LC_PGM_ILC+3,0x80 # check for per exception 433 433 jz 0f 434 434 tmhh %r8,0x0001 # kernel per event ? 
435 - jz pgm_kprobe 435 + jz .Lpgm_kprobe 436 436 oi __PT_FLAGS+7(%r11),_PIF_PER_TRAP 437 437 mvc __THREAD_per_address(8,%r14),__LC_PER_ADDRESS 438 438 mvc __THREAD_per_cause(2,%r14),__LC_PER_CODE ··· 443 443 llgh %r10,__PT_INT_CODE+2(%r11) 444 444 nill %r10,0x007f 445 445 sll %r10,2 446 - je sysc_return 446 + je .Lsysc_return 447 447 lgf %r1,0(%r10,%r1) # load address of handler routine 448 448 lgr %r2,%r11 # pass pointer to pt_regs 449 449 basr %r14,%r1 # branch to interrupt-handler 450 - j sysc_return 450 + j .Lsysc_return 451 451 452 452 # 453 453 # PER event in supervisor state, must be kprobes 454 454 # 455 - pgm_kprobe: 455 + .Lpgm_kprobe: 456 456 REENABLE_IRQS 457 457 xc __SF_BACKCHAIN(8,%r15),__SF_BACKCHAIN(%r15) 458 458 lgr %r2,%r11 # pass pointer to pt_regs 459 459 brasl %r14,do_per_trap 460 - j sysc_return 460 + j .Lsysc_return 461 461 462 462 # 463 463 # single stepped system call 464 464 # 465 - pgm_svcper: 465 + .Lpgm_svcper: 466 466 mvc __LC_RETURN_PSW(8),__LC_SVC_NEW_PSW 467 - larl %r14,sysc_per 467 + larl %r14,.Lsysc_per 468 468 stg %r14,__LC_RETURN_PSW+8 469 469 lghi %r14,_PIF_SYSCALL | _PIF_PER_TRAP 470 - lpswe __LC_RETURN_PSW # branch to sysc_per and enable irqs 470 + lpswe __LC_RETURN_PSW # branch to .Lsysc_per and enable irqs 471 471 472 472 /* 473 473 * IO interrupt handler routine ··· 483 483 HANDLE_SIE_INTERCEPT %r14,2 484 484 SWITCH_ASYNC __LC_SAVE_AREA_ASYNC,__LC_ASYNC_STACK,STACK_SHIFT 485 485 tmhh %r8,0x0001 # interrupting from user? 
486 - jz io_skip 486 + jz .Lio_skip 487 487 UPDATE_VTIME %r14,__LC_ASYNC_ENTER_TIMER 488 488 LAST_BREAK %r14 489 - io_skip: 489 + .Lio_skip: 490 490 stmg %r0,%r7,__PT_R0(%r11) 491 491 mvc __PT_R8(64,%r11),__LC_SAVE_AREA_ASYNC 492 492 stmg %r8,%r9,__PT_PSW(%r11) ··· 494 494 xc __PT_FLAGS(8,%r11),__PT_FLAGS(%r11) 495 495 TRACE_IRQS_OFF 496 496 xc __SF_BACKCHAIN(8,%r15),__SF_BACKCHAIN(%r15) 497 - io_loop: 497 + .Lio_loop: 498 498 lgr %r2,%r11 # pass pointer to pt_regs 499 499 lghi %r3,IO_INTERRUPT 500 500 tm __PT_INT_CODE+8(%r11),0x80 # adapter interrupt ? 501 - jz io_call 501 + jz .Lio_call 502 502 lghi %r3,THIN_INTERRUPT 503 - io_call: 503 + .Lio_call: 504 504 brasl %r14,do_IRQ 505 505 tm __LC_MACHINE_FLAGS+6,0x10 # MACHINE_FLAG_LPAR 506 - jz io_return 506 + jz .Lio_return 507 507 tpi 0 508 - jz io_return 508 + jz .Lio_return 509 509 mvc __PT_INT_CODE(12,%r11),__LC_SUBCHANNEL_ID 510 - j io_loop 511 - io_return: 510 + j .Lio_loop 511 + .Lio_return: 512 512 LOCKDEP_SYS_EXIT 513 513 TRACE_IRQS_ON 514 - io_tif: 514 + .Lio_tif: 515 515 tm __TI_flags+7(%r12),_TIF_WORK 516 - jnz io_work # there is work to do (signals etc.) 516 + jnz .Lio_work # there is work to do (signals etc.) 517 517 tm __LC_CPU_FLAGS+7,_CIF_WORK 518 - jnz io_work 519 - io_restore: 518 + jnz .Lio_work 519 + .Lio_restore: 520 520 lg %r14,__LC_VDSO_PER_CPU 521 521 lmg %r0,%r10,__PT_R0(%r11) 522 522 mvc __LC_RETURN_PSW(16),__PT_PSW(%r11) ··· 524 524 mvc __VDSO_ECTG_BASE(16,%r14),__LC_EXIT_TIMER 525 525 lmg %r11,%r15,__PT_R11(%r11) 526 526 lpswe __LC_RETURN_PSW 527 - io_done: 527 + .Lio_done: 528 528 529 529 # 530 530 # There is work todo, find out in which context we have been interrupted: ··· 535 535 # the preemption counter and if it is zero call preempt_schedule_irq 536 536 # Before any work can be done, a switch to the kernel stack is required. 537 537 # 538 - io_work: 538 + .Lio_work: 539 539 tm __PT_PSW+1(%r11),0x01 # returning to user ? 
540 - jo io_work_user # yes -> do resched & signal 540 + jo .Lio_work_user # yes -> do resched & signal 541 541 #ifdef CONFIG_PREEMPT 542 542 # check for preemptive scheduling 543 543 icm %r0,15,__TI_precount(%r12) 544 - jnz io_restore # preemption is disabled 544 + jnz .Lio_restore # preemption is disabled 545 545 tm __TI_flags+7(%r12),_TIF_NEED_RESCHED 546 - jno io_restore 546 + jno .Lio_restore 547 547 # switch to kernel stack 548 548 lg %r1,__PT_R15(%r11) 549 549 aghi %r1,-(STACK_FRAME_OVERHEAD + __PT_SIZE) ··· 551 551 xc __SF_BACKCHAIN(8,%r1),__SF_BACKCHAIN(%r1) 552 552 la %r11,STACK_FRAME_OVERHEAD(%r1) 553 553 lgr %r15,%r1 554 - # TRACE_IRQS_ON already done at io_return, call 554 + # TRACE_IRQS_ON already done at .Lio_return, call 555 555 # TRACE_IRQS_OFF to keep things symmetrical 556 556 TRACE_IRQS_OFF 557 557 brasl %r14,preempt_schedule_irq 558 - j io_return 558 + j .Lio_return 559 559 #else 560 - j io_restore 560 + j .Lio_restore 561 561 #endif 562 562 563 563 # 564 564 # Need to do work before returning to userspace, switch to kernel stack 565 565 # 566 - io_work_user: 566 + .Lio_work_user: 567 567 lg %r1,__LC_KERNEL_STACK 568 568 mvc STACK_FRAME_OVERHEAD(__PT_SIZE,%r1),0(%r11) 569 569 xc __SF_BACKCHAIN(8,%r1),__SF_BACKCHAIN(%r1) ··· 573 573 # 574 574 # One of the work bits is on. Find out which one. 
575 575 # 576 - io_work_tif: 576 + .Lio_work_tif: 577 577 tm __LC_CPU_FLAGS+7,_CIF_MCCK_PENDING 578 - jo io_mcck_pending 578 + jo .Lio_mcck_pending 579 579 tm __TI_flags+7(%r12),_TIF_NEED_RESCHED 580 - jo io_reschedule 580 + jo .Lio_reschedule 581 581 tm __TI_flags+7(%r12),_TIF_SIGPENDING 582 - jo io_sigpending 582 + jo .Lio_sigpending 583 583 tm __TI_flags+7(%r12),_TIF_NOTIFY_RESUME 584 - jo io_notify_resume 584 + jo .Lio_notify_resume 585 585 tm __LC_CPU_FLAGS+7,_CIF_ASCE 586 - jo io_uaccess 587 - j io_return # beware of critical section cleanup 586 + jo .Lio_uaccess 587 + j .Lio_return # beware of critical section cleanup 588 588 589 589 # 590 590 # _CIF_MCCK_PENDING is set, call handler 591 591 # 592 - io_mcck_pending: 593 - # TRACE_IRQS_ON already done at io_return 592 + .Lio_mcck_pending: 593 + # TRACE_IRQS_ON already done at .Lio_return 594 594 brasl %r14,s390_handle_mcck # TIF bit will be cleared by handler 595 595 TRACE_IRQS_OFF 596 - j io_return 596 + j .Lio_return 597 597 598 598 # 599 599 # _CIF_ASCE is set, load user space asce 600 600 # 601 - io_uaccess: 601 + .Lio_uaccess: 602 602 ni __LC_CPU_FLAGS+7,255-_CIF_ASCE 603 603 lctlg %c1,%c1,__LC_USER_ASCE # load primary asce 604 - j io_return 604 + j .Lio_return 605 605 606 606 # 607 607 # _TIF_NEED_RESCHED is set, call schedule 608 608 # 609 - io_reschedule: 610 - # TRACE_IRQS_ON already done at io_return 609 + .Lio_reschedule: 610 + # TRACE_IRQS_ON already done at .Lio_return 611 611 ssm __LC_SVC_NEW_PSW # reenable interrupts 612 612 brasl %r14,schedule # call scheduler 613 613 ssm __LC_PGM_NEW_PSW # disable I/O and ext. 
interrupts 614 614 TRACE_IRQS_OFF 615 - j io_return 615 + j .Lio_return 616 616 617 617 # 618 618 # _TIF_SIGPENDING or is set, call do_signal 619 619 # 620 - io_sigpending: 621 - # TRACE_IRQS_ON already done at io_return 620 + .Lio_sigpending: 621 + # TRACE_IRQS_ON already done at .Lio_return 622 622 ssm __LC_SVC_NEW_PSW # reenable interrupts 623 623 lgr %r2,%r11 # pass pointer to pt_regs 624 624 brasl %r14,do_signal 625 625 ssm __LC_PGM_NEW_PSW # disable I/O and ext. interrupts 626 626 TRACE_IRQS_OFF 627 - j io_return 627 + j .Lio_return 628 628 629 629 # 630 630 # _TIF_NOTIFY_RESUME or is set, call do_notify_resume 631 631 # 632 - io_notify_resume: 633 - # TRACE_IRQS_ON already done at io_return 632 + .Lio_notify_resume: 633 + # TRACE_IRQS_ON already done at .Lio_return 634 634 ssm __LC_SVC_NEW_PSW # reenable interrupts 635 635 lgr %r2,%r11 # pass pointer to pt_regs 636 636 brasl %r14,do_notify_resume 637 637 ssm __LC_PGM_NEW_PSW # disable I/O and ext. interrupts 638 638 TRACE_IRQS_OFF 639 - j io_return 639 + j .Lio_return 640 640 641 641 /* 642 642 * External interrupt handler routine ··· 652 652 HANDLE_SIE_INTERCEPT %r14,3 653 653 SWITCH_ASYNC __LC_SAVE_AREA_ASYNC,__LC_ASYNC_STACK,STACK_SHIFT 654 654 tmhh %r8,0x0001 # interrupting from user ? 655 - jz ext_skip 655 + jz .Lext_skip 656 656 UPDATE_VTIME %r14,__LC_ASYNC_ENTER_TIMER 657 657 LAST_BREAK %r14 658 - ext_skip: 658 + .Lext_skip: 659 659 stmg %r0,%r7,__PT_R0(%r11) 660 660 mvc __PT_R8(64,%r11),__LC_SAVE_AREA_ASYNC 661 661 stmg %r8,%r9,__PT_PSW(%r11) ··· 669 669 lgr %r2,%r11 # pass pointer to pt_regs 670 670 lghi %r3,EXT_INTERRUPT 671 671 brasl %r14,do_IRQ 672 - j io_return 672 + j .Lio_return 673 673 674 674 /* 675 - * Load idle PSW. The second "half" of this function is in cleanup_idle. 675 + * Load idle PSW. The second "half" of this function is in .Lcleanup_idle. 
676 676 */ 677 677 ENTRY(psw_idle) 678 678 stg %r3,__SF_EMPTY(%r15) 679 - larl %r1,psw_idle_lpsw+4 679 + larl %r1,.Lpsw_idle_lpsw+4 680 680 stg %r1,__SF_EMPTY+8(%r15) 681 681 STCK __CLOCK_IDLE_ENTER(%r2) 682 682 stpt __TIMER_IDLE_ENTER(%r2) 683 - psw_idle_lpsw: 683 + .Lpsw_idle_lpsw: 684 684 lpswe __SF_EMPTY(%r15) 685 685 br %r14 686 - psw_idle_end: 686 + .Lpsw_idle_end: 687 687 688 - __critical_end: 688 + .L__critical_end: 689 689 690 690 /* 691 691 * Machine check handler routines ··· 701 701 lmg %r8,%r9,__LC_MCK_OLD_PSW 702 702 HANDLE_SIE_INTERCEPT %r14,4 703 703 tm __LC_MCCK_CODE,0x80 # system damage? 704 - jo mcck_panic # yes -> rest of mcck code invalid 704 + jo .Lmcck_panic # yes -> rest of mcck code invalid 705 705 lghi %r14,__LC_CPU_TIMER_SAVE_AREA 706 706 mvc __LC_MCCK_ENTER_TIMER(8),0(%r14) 707 707 tm __LC_MCCK_CODE+5,0x02 # stored cpu timer value valid? ··· 719 719 2: spt 0(%r14) 720 720 mvc __LC_MCCK_ENTER_TIMER(8),0(%r14) 721 721 3: tm __LC_MCCK_CODE+2,0x09 # mwp + ia of old psw valid? 722 - jno mcck_panic # no -> skip cleanup critical 722 + jno .Lmcck_panic # no -> skip cleanup critical 723 723 SWITCH_ASYNC __LC_GPREGS_SAVE_AREA+64,__LC_PANIC_STACK,PAGE_SHIFT 724 724 tm %r8,0x0001 # interrupting from user ? 725 - jz mcck_skip 725 + jz .Lmcck_skip 726 726 UPDATE_VTIME %r14,__LC_MCCK_ENTER_TIMER 727 727 LAST_BREAK %r14 728 - mcck_skip: 728 + .Lmcck_skip: 729 729 lghi %r14,__LC_GPREGS_SAVE_AREA+64 730 730 stmg %r0,%r7,__PT_R0(%r11) 731 731 mvc __PT_R8(64,%r11),0(%r14) ··· 735 735 lgr %r2,%r11 # pass pointer to pt_regs 736 736 brasl %r14,s390_do_machine_check 737 737 tm __PT_PSW+1(%r11),0x01 # returning to user ? 
738 - jno mcck_return 738 + jno .Lmcck_return 739 739 lg %r1,__LC_KERNEL_STACK # switch to kernel stack 740 740 mvc STACK_FRAME_OVERHEAD(__PT_SIZE,%r1),0(%r11) 741 741 xc __SF_BACKCHAIN(8,%r1),__SF_BACKCHAIN(%r1) ··· 743 743 lgr %r15,%r1 744 744 ssm __LC_PGM_NEW_PSW # turn dat on, keep irqs off 745 745 tm __LC_CPU_FLAGS+7,_CIF_MCCK_PENDING 746 - jno mcck_return 746 + jno .Lmcck_return 747 747 TRACE_IRQS_OFF 748 748 brasl %r14,s390_handle_mcck 749 749 TRACE_IRQS_ON 750 - mcck_return: 750 + .Lmcck_return: 751 751 lg %r14,__LC_VDSO_PER_CPU 752 752 lmg %r0,%r10,__PT_R0(%r11) 753 753 mvc __LC_RETURN_MCCK_PSW(16),__PT_PSW(%r11) # move return PSW ··· 758 758 0: lmg %r11,%r15,__PT_R11(%r11) 759 759 lpswe __LC_RETURN_MCCK_PSW 760 760 761 - mcck_panic: 761 + .Lmcck_panic: 762 762 lg %r14,__LC_PANIC_STACK 763 763 slgr %r14,%r15 764 764 srag %r14,%r14,PAGE_SHIFT 765 765 jz 0f 766 766 lg %r15,__LC_PANIC_STACK 767 767 0: aghi %r15,-(STACK_FRAME_OVERHEAD + __PT_SIZE) 768 - j mcck_skip 768 + j .Lmcck_skip 769 769 770 770 # 771 771 # PSW restart interrupt handler ··· 815 815 #endif 816 816 817 817 .align 8 818 - cleanup_table: 818 + .Lcleanup_table: 819 819 .quad system_call 820 - .quad sysc_do_svc 821 - .quad sysc_tif 822 - .quad sysc_restore 823 - .quad sysc_done 824 - .quad io_tif 825 - .quad io_restore 826 - .quad io_done 820 + .quad .Lsysc_do_svc 821 + .quad .Lsysc_tif 822 + .quad .Lsysc_restore 823 + .quad .Lsysc_done 824 + .quad .Lio_tif 825 + .quad .Lio_restore 826 + .quad .Lio_done 827 827 .quad psw_idle 828 - .quad psw_idle_end 828 + .quad .Lpsw_idle_end 829 829 830 830 cleanup_critical: 831 - clg %r9,BASED(cleanup_table) # system_call 831 + clg %r9,BASED(.Lcleanup_table) # system_call 832 832 jl 0f 833 - clg %r9,BASED(cleanup_table+8) # sysc_do_svc 834 - jl cleanup_system_call 835 - clg %r9,BASED(cleanup_table+16) # sysc_tif 833 + clg %r9,BASED(.Lcleanup_table+8) # .Lsysc_do_svc 834 + jl .Lcleanup_system_call 835 + clg %r9,BASED(.Lcleanup_table+16) # .Lsysc_tif 836 836 
jl 0f 837 - clg %r9,BASED(cleanup_table+24) # sysc_restore 838 - jl cleanup_sysc_tif 839 - clg %r9,BASED(cleanup_table+32) # sysc_done 840 - jl cleanup_sysc_restore 841 - clg %r9,BASED(cleanup_table+40) # io_tif 837 + clg %r9,BASED(.Lcleanup_table+24) # .Lsysc_restore 838 + jl .Lcleanup_sysc_tif 839 + clg %r9,BASED(.Lcleanup_table+32) # .Lsysc_done 840 + jl .Lcleanup_sysc_restore 841 + clg %r9,BASED(.Lcleanup_table+40) # .Lio_tif 842 842 jl 0f 843 - clg %r9,BASED(cleanup_table+48) # io_restore 844 - jl cleanup_io_tif 845 - clg %r9,BASED(cleanup_table+56) # io_done 846 - jl cleanup_io_restore 847 - clg %r9,BASED(cleanup_table+64) # psw_idle 843 + clg %r9,BASED(.Lcleanup_table+48) # .Lio_restore 844 + jl .Lcleanup_io_tif 845 + clg %r9,BASED(.Lcleanup_table+56) # .Lio_done 846 + jl .Lcleanup_io_restore 847 + clg %r9,BASED(.Lcleanup_table+64) # psw_idle 848 848 jl 0f 849 - clg %r9,BASED(cleanup_table+72) # psw_idle_end 850 - jl cleanup_idle 849 + clg %r9,BASED(.Lcleanup_table+72) # .Lpsw_idle_end 850 + jl .Lcleanup_idle 851 851 0: br %r14 852 852 853 853 854 - cleanup_system_call: 854 + .Lcleanup_system_call: 855 855 # check if stpt has been executed 856 - clg %r9,BASED(cleanup_system_call_insn) 856 + clg %r9,BASED(.Lcleanup_system_call_insn) 857 857 jh 0f 858 858 mvc __LC_SYNC_ENTER_TIMER(8),__LC_ASYNC_ENTER_TIMER 859 859 cghi %r11,__LC_SAVE_AREA_ASYNC 860 860 je 0f 861 861 mvc __LC_SYNC_ENTER_TIMER(8),__LC_MCCK_ENTER_TIMER 862 862 0: # check if stmg has been executed 863 - clg %r9,BASED(cleanup_system_call_insn+8) 863 + clg %r9,BASED(.Lcleanup_system_call_insn+8) 864 864 jh 0f 865 865 mvc __LC_SAVE_AREA_SYNC(64),0(%r11) 866 866 0: # check if base register setup + TIF bit load has been done 867 - clg %r9,BASED(cleanup_system_call_insn+16) 867 + clg %r9,BASED(.Lcleanup_system_call_insn+16) 868 868 jhe 0f 869 869 # set up saved registers r10 and r12 870 870 stg %r10,16(%r11) # r10 last break 871 871 stg %r12,32(%r11) # r12 thread-info pointer 872 872 0: # check if the 
user time update has been done 873 - clg %r9,BASED(cleanup_system_call_insn+24) 873 + clg %r9,BASED(.Lcleanup_system_call_insn+24) 874 874 jh 0f 875 875 lg %r15,__LC_EXIT_TIMER 876 876 slg %r15,__LC_SYNC_ENTER_TIMER 877 877 alg %r15,__LC_USER_TIMER 878 878 stg %r15,__LC_USER_TIMER 879 879 0: # check if the system time update has been done 880 - clg %r9,BASED(cleanup_system_call_insn+32) 880 + clg %r9,BASED(.Lcleanup_system_call_insn+32) 881 881 jh 0f 882 882 lg %r15,__LC_LAST_UPDATE_TIMER 883 883 slg %r15,__LC_EXIT_TIMER ··· 904 904 # setup saved register r15 905 905 stg %r15,56(%r11) # r15 stack pointer 906 906 # set new psw address and exit 907 - larl %r9,sysc_do_svc 907 + larl %r9,.Lsysc_do_svc 908 908 br %r14 909 - cleanup_system_call_insn: 909 + .Lcleanup_system_call_insn: 910 910 .quad system_call 911 - .quad sysc_stmg 912 - .quad sysc_per 913 - .quad sysc_vtime+18 914 - .quad sysc_vtime+42 911 + .quad .Lsysc_stmg 912 + .quad .Lsysc_per 913 + .quad .Lsysc_vtime+18 914 + .quad .Lsysc_vtime+42 915 915 916 - cleanup_sysc_tif: 917 - larl %r9,sysc_tif 916 + .Lcleanup_sysc_tif: 917 + larl %r9,.Lsysc_tif 918 918 br %r14 919 919 920 - cleanup_sysc_restore: 921 - clg %r9,BASED(cleanup_sysc_restore_insn) 920 + .Lcleanup_sysc_restore: 921 + clg %r9,BASED(.Lcleanup_sysc_restore_insn) 922 922 je 0f 923 923 lg %r9,24(%r11) # get saved pointer to pt_regs 924 924 mvc __LC_RETURN_PSW(16),__PT_PSW(%r9) ··· 926 926 lmg %r0,%r7,__PT_R0(%r9) 927 927 0: lmg %r8,%r9,__LC_RETURN_PSW 928 928 br %r14 929 - cleanup_sysc_restore_insn: 930 - .quad sysc_done - 4 929 + .Lcleanup_sysc_restore_insn: 930 + .quad .Lsysc_done - 4 931 931 932 - cleanup_io_tif: 933 - larl %r9,io_tif 932 + .Lcleanup_io_tif: 933 + larl %r9,.Lio_tif 934 934 br %r14 935 935 936 - cleanup_io_restore: 937 - clg %r9,BASED(cleanup_io_restore_insn) 936 + .Lcleanup_io_restore: 937 + clg %r9,BASED(.Lcleanup_io_restore_insn) 938 938 je 0f 939 939 lg %r9,24(%r11) # get saved r11 pointer to pt_regs 940 940 mvc 
__LC_RETURN_PSW(16),__PT_PSW(%r9) ··· 942 942 lmg %r0,%r7,__PT_R0(%r9) 943 943 0: lmg %r8,%r9,__LC_RETURN_PSW 944 944 br %r14 945 - cleanup_io_restore_insn: 946 - .quad io_done - 4 945 + .Lcleanup_io_restore_insn: 946 + .quad .Lio_done - 4 947 947 948 - cleanup_idle: 948 + .Lcleanup_idle: 949 949 # copy interrupt clock & cpu timer 950 950 mvc __CLOCK_IDLE_EXIT(8,%r2),__LC_INT_CLOCK 951 951 mvc __TIMER_IDLE_EXIT(8,%r2),__LC_ASYNC_ENTER_TIMER ··· 954 954 mvc __CLOCK_IDLE_EXIT(8,%r2),__LC_MCCK_CLOCK 955 955 mvc __TIMER_IDLE_EXIT(8,%r2),__LC_MCCK_ENTER_TIMER 956 956 0: # check if stck & stpt have been executed 957 - clg %r9,BASED(cleanup_idle_insn) 957 + clg %r9,BASED(.Lcleanup_idle_insn) 958 958 jhe 1f 959 959 mvc __CLOCK_IDLE_ENTER(8,%r2),__CLOCK_IDLE_EXIT(%r2) 960 960 mvc __TIMER_IDLE_ENTER(8,%r2),__TIMER_IDLE_EXIT(%r2) ··· 973 973 nihh %r8,0xfcfd # clear irq & wait state bits 974 974 lg %r9,48(%r11) # return from psw_idle 975 975 br %r14 976 - cleanup_idle_insn: 977 - .quad psw_idle_lpsw 976 + .Lcleanup_idle_insn: 977 + .quad .Lpsw_idle_lpsw 978 978 979 979 /* 980 980 * Integer constants 981 981 */ 982 982 .align 8 983 983 .Lcritical_start: 984 - .quad __critical_start 984 + .quad .L__critical_start 985 985 .Lcritical_length: 986 - .quad __critical_end - __critical_start 986 + .quad .L__critical_end - .L__critical_start 987 987 988 988 989 989 #if IS_ENABLED(CONFIG_KVM) ··· 1000 1000 lmg %r0,%r13,0(%r3) # load guest gprs 0-13 1001 1001 lg %r14,__LC_GMAP # get gmap pointer 1002 1002 ltgr %r14,%r14 1003 - jz sie_gmap 1003 + jz .Lsie_gmap 1004 1004 lctlg %c1,%c1,__GMAP_ASCE(%r14) # load primary asce 1005 - sie_gmap: 1005 + .Lsie_gmap: 1006 1006 lg %r14,__SF_EMPTY(%r15) # get control block pointer 1007 1007 oi __SIE_PROG0C+3(%r14),1 # we are going into SIE now 1008 1008 tm __SIE_PROG20+3(%r14),1 # last exit... 
1009 - jnz sie_done 1009 + jnz .Lsie_done 1010 1010 LPP __SF_EMPTY(%r15) # set guest id 1011 1011 sie 0(%r14) 1012 - sie_done: 1012 + .Lsie_done: 1013 1013 LPP __SF_EMPTY+16(%r15) # set host id 1014 1014 ni __SIE_PROG0C+3(%r14),0xfe # no longer in SIE 1015 1015 lctlg %c1,%c1,__LC_USER_ASCE # load primary asce 1016 1016 # some program checks are suppressing. C code (e.g. do_protection_exception) 1017 1017 # will rewind the PSW by the ILC, which is 4 bytes in case of SIE. Other 1018 - # instructions between sie64a and sie_done should not cause program 1018 + # instructions between sie64a and .Lsie_done should not cause program 1019 1019 # interrupts. So lets use a nop (47 00 00 00) as a landing pad. 1020 1020 # See also HANDLE_SIE_INTERCEPT 1021 - rewind_pad: 1021 + .Lrewind_pad: 1022 1022 nop 0 1023 1023 .globl sie_exit 1024 1024 sie_exit: ··· 1027 1027 lmg %r6,%r14,__SF_GPRS(%r15) # restore kernel registers 1028 1028 lg %r2,__SF_EMPTY+24(%r15) # return exit reason code 1029 1029 br %r14 1030 - sie_fault: 1030 + .Lsie_fault: 1031 1031 lghi %r14,-EFAULT 1032 1032 stg %r14,__SF_EMPTY+24(%r15) # set exit reason code 1033 1033 j sie_exit 1034 1034 1035 1035 .align 8 1036 1036 .Lsie_critical: 1037 - .quad sie_gmap 1037 + .quad .Lsie_gmap 1038 1038 .Lsie_critical_length: 1039 - .quad sie_done - sie_gmap 1039 + .quad .Lsie_done - .Lsie_gmap 1040 1040 1041 - EX_TABLE(rewind_pad,sie_fault) 1042 - EX_TABLE(sie_exit,sie_fault) 1041 + EX_TABLE(.Lrewind_pad,.Lsie_fault) 1042 + EX_TABLE(sie_exit,.Lsie_fault) 1043 1043 #endif 1044 1044 1045 1045 .section .rodata, "a"
+85 -49
arch/s390/kernel/ftrace.c
···
7 7 * Martin Schwidefsky <schwidefsky@de.ibm.com>
8 8 */
9 9
10 + #include <linux/moduleloader.h>
10 11 #include <linux/hardirq.h>
11 12 #include <linux/uaccess.h>
12 13 #include <linux/ftrace.h>
···
16 15 #include <linux/kprobes.h>
17 16 #include <trace/syscall.h>
18 17 #include <asm/asm-offsets.h>
18 + #include <asm/cacheflush.h>
19 19 #include "entry.h"
20 -
21 - void mcount_replace_code(void);
22 - void ftrace_disable_code(void);
23 - void ftrace_enable_insn(void);
24 20
25 21 /*
26 22 * The mcount code looks like this:
···
25 27 * larl %r1,<&counter> # offset 6
26 28 * brasl %r14,_mcount # offset 12
27 29 * lg %r14,8(%r15) # offset 18
28 - * Total length is 24 bytes. The complete mcount block initially gets replaced
29 - * by ftrace_make_nop. Subsequent calls to ftrace_make_call / ftrace_make_nop
30 - * only patch the jg/lg instruction within the block.
31 - * Note: we do not patch the first instruction to an unconditional branch,
32 - * since that would break kprobes/jprobes. It is easier to leave the larl
33 - * instruction in and only modify the second instruction.
30 + * Total length is 24 bytes. Only the first instruction will be patched
31 + * by ftrace_make_call / ftrace_make_nop.
34 32 * The enabled ftrace code block looks like this:
35 - * larl %r0,.+24 # offset 0
36 - * > lg %r1,__LC_FTRACE_FUNC # offset 6
37 - * br %r1 # offset 12
38 - * brcl 0,0 # offset 14
39 - * brc 0,0 # offset 20
33 + * > brasl %r0,ftrace_caller # offset 0
34 + * larl %r1,<&counter> # offset 6
35 + * brasl %r14,_mcount # offset 12
36 + * lg %r14,8(%r15) # offset 18
40 37 * The ftrace function gets called with a non-standard C function call ABI
41 38 * where r0 contains the return address. It is also expected that the called
42 39 * function only clobbers r0 and r1, but restores r2-r15.
40 + * For module code we can't directly jump to ftrace caller, but need a
41 + * trampoline (ftrace_plt), which clobbers also r1.
43 42 * The return point of the ftrace function has offset 24, so execution
44 43 * continues behind the mcount block.
45 - * larl %r0,.+24 # offset 0
46 - * > jg .+18 # offset 6
47 - * br %r1 # offset 12
48 - * brcl 0,0 # offset 14
49 - * brc 0,0 # offset 20
44 + * The disabled ftrace code block looks like this:
45 + * > jg .+24 # offset 0
46 + * larl %r1,<&counter> # offset 6
47 + * brasl %r14,_mcount # offset 12
48 + * lg %r14,8(%r15) # offset 18
50 49 * The jg instruction branches to offset 24 to skip as many instructions
51 50 * as possible.
52 51 */
53 - asm(
54 - " .align 4\n"
55 - "mcount_replace_code:\n"
56 - " larl %r0,0f\n"
57 - "ftrace_disable_code:\n"
58 - " jg 0f\n"
59 - " br %r1\n"
60 - " brcl 0,0\n"
61 - " brc 0,0\n"
62 - "0:\n"
63 - " .align 4\n"
64 - "ftrace_enable_insn:\n"
65 - " lg %r1,"__stringify(__LC_FTRACE_FUNC)"\n");
66 52
67 - #define MCOUNT_BLOCK_SIZE 24
68 - #define MCOUNT_INSN_OFFSET 6
69 - #define FTRACE_INSN_SIZE 6
53 + unsigned long ftrace_plt;
70 54
71 55 int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
72 56 unsigned long addr)
···
59 79 int ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec,
60 80 unsigned long addr)
61 81 {
62 - /* Initial replacement of the whole mcount block */
63 - if (addr == MCOUNT_ADDR) {
64 - if (probe_kernel_write((void *) rec->ip - MCOUNT_INSN_OFFSET,
65 - mcount_replace_code,
66 - MCOUNT_BLOCK_SIZE))
67 - return -EPERM;
68 - return 0;
82 + struct ftrace_insn insn;
83 + unsigned short op;
84 + void *from, *to;
85 + size_t size;
86 +
87 + ftrace_generate_nop_insn(&insn);
88 + size = sizeof(insn);
89 + from = &insn;
90 + to = (void *) rec->ip;
91 + if (probe_kernel_read(&op, (void *) rec->ip, sizeof(op)))
92 + return -EFAULT;
93 + /*
94 + * If we find a breakpoint instruction, a kprobe has been placed
95 + * at the beginning of the function. We write the constant
96 + * KPROBE_ON_FTRACE_NOP into the remaining four bytes of the original
97 + * instruction so that the kprobes handler can execute a nop, if it
98 + * reaches this breakpoint.
99 + */
100 + if (op == BREAKPOINT_INSTRUCTION) {
101 + size -= 2;
102 + from += 2;
103 + to += 2;
104 + insn.disp = KPROBE_ON_FTRACE_NOP;
69 105 }
70 - if (probe_kernel_write((void *) rec->ip, ftrace_disable_code,
71 - MCOUNT_INSN_SIZE))
106 + if (probe_kernel_write(to, from, size))
72 107 return -EPERM;
73 108 return 0;
74 109 }
75 110
76 111 int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
77 112 {
78 - if (probe_kernel_write((void *) rec->ip, ftrace_enable_insn,
79 - FTRACE_INSN_SIZE))
113 + struct ftrace_insn insn;
114 + unsigned short op;
115 + void *from, *to;
116 + size_t size;
117 +
118 + ftrace_generate_call_insn(&insn, rec->ip);
119 + size = sizeof(insn);
120 + from = &insn;
121 + to = (void *) rec->ip;
122 + if (probe_kernel_read(&op, (void *) rec->ip, sizeof(op)))
123 + return -EFAULT;
124 + /*
125 + * If we find a breakpoint instruction, a kprobe has been placed
126 + * at the beginning of the function. We write the constant
127 + * KPROBE_ON_FTRACE_CALL into the remaining four bytes of the original
128 + * instruction so that the kprobes handler can execute a brasl if it
129 + * reaches this breakpoint.
130 + */
131 + if (op == BREAKPOINT_INSTRUCTION) {
132 + size -= 2;
133 + from += 2;
134 + to += 2;
135 + insn.disp = KPROBE_ON_FTRACE_CALL;
136 + }
137 + if (probe_kernel_write(to, from, size))
80 138 return -EPERM;
81 139 return 0;
82 140 }
···
129 111 return 0;
130 112 }
131 113
114 + static int __init ftrace_plt_init(void)
115 + {
116 + unsigned int *ip;
117 +
118 + ftrace_plt = (unsigned long) module_alloc(PAGE_SIZE);
119 + if (!ftrace_plt)
120 + panic("cannot allocate ftrace plt\n");
121 + ip = (unsigned int *) ftrace_plt;
122 + ip[0] = 0x0d10e310; /* basr 1,0; lg 1,10(1); br 1 */
123 + ip[1] = 0x100a0004;
124 + ip[2] = 0x07f10000;
125 + ip[3] = FTRACE_ADDR >> 32;
126 + ip[4] = FTRACE_ADDR & 0xffffffff;
127 + set_memory_ro(ftrace_plt, 1);
128 + return 0;
129 + }
130 + device_initcall(ftrace_plt_init);
131 +
132 132 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
133 133 /*
134 134 * Hook the return address and push it in the stack of return addresses
135 135 * in current thread info.
136 136 */
137 - unsigned long __kprobes prepare_ftrace_return(unsigned long parent,
138 - unsigned long ip)
137 + unsigned long prepare_ftrace_return(unsigned long parent, unsigned long ip)
139 138 {
140 139 struct ftrace_graph_ent trace;
141 140
···
172 137 out:
173 138 return parent;
174 139 }
140 + NOKPROBE_SYMBOL(prepare_ftrace_return);
175 141
176 142 /*
177 143 * Patch the kernel code at ftrace_graph_caller location. The instruction
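The patching trick in ftrace_make_nop/ftrace_make_call above can be sketched in plain C: when a kprobe breakpoint already owns the first halfword of the 6-byte instruction, only the 4-byte displacement is rewritten so that kprobes and ftrace can coexist on the same call site. Everything below is a hypothetical user-space mirror, not kernel code — `struct insn`, the opcode values, `DISP_ON_FTRACE_NOP` and `make_nop()` are illustrative stand-ins for `struct ftrace_insn`, `BREAKPOINT_INSTRUCTION`, `KPROBE_ON_FTRACE_NOP` and `ftrace_make_nop()`.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical mirror of the 6-byte s390 call-site instruction:
 * a 16-bit opcode followed by a 32-bit displacement. */
struct insn {
	uint16_t opc;
	int32_t disp;
} __attribute__((packed));

enum {
	OPC_BRASL = 0xc005,		/* enabled: brasl %r0,... */
	OPC_JG = 0xc0f4,		/* disabled: jg ... */
	OPC_BRKPT = 0x0002,		/* stand-in for BREAKPOINT_INSTRUCTION */
	DISP_ON_FTRACE_NOP = 0		/* stand-in for KPROBE_ON_FTRACE_NOP */
};

/* Like ftrace_make_nop(): if a kprobe breakpoint sits in the first
 * halfword, keep it and rewrite only the remaining four bytes, storing
 * a marker displacement so the kprobe handler knows it covers a nop. */
static void make_nop(struct insn *site)
{
	struct insn nop = { OPC_JG, 12 };	/* jg .+24, in halfwords */

	if (site->opc == OPC_BRKPT) {
		nop.disp = DISP_ON_FTRACE_NOP;
		/* skip the breakpoint opcode, patch only the displacement */
		memcpy((char *) site + 2, (char *) &nop + 2, sizeof(nop) - 2);
	} else {
		memcpy(site, &nop, sizeof(nop));
	}
}
```

The same shape applies to the call direction with a `brasl` opcode and `KPROBE_ON_FTRACE_CALL` as the marker.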
+15 -14
arch/s390/kernel/idle.c
···
19 19
20 20 static DEFINE_PER_CPU(struct s390_idle_data, s390_idle);
21 21
22 - void __kprobes enabled_wait(void)
22 + void enabled_wait(void)
23 23 {
24 24 struct s390_idle_data *idle = this_cpu_ptr(&s390_idle);
25 25 unsigned long long idle_time;
···
35 35 /* Call the assembler magic in entry.S */
36 36 psw_idle(idle, psw_mask);
37 37
38 + trace_hardirqs_off();
39 +
38 40 /* Account time spent with enabled wait psw loaded as idle time. */
39 - idle->sequence++;
40 - smp_wmb();
41 + write_seqcount_begin(&idle->seqcount);
41 42 idle_time = idle->clock_idle_exit - idle->clock_idle_enter;
42 43 idle->clock_idle_enter = idle->clock_idle_exit = 0ULL;
43 44 idle->idle_time += idle_time;
44 45 idle->idle_count++;
45 46 account_idle_time(idle_time);
46 - smp_wmb();
47 - idle->sequence++;
47 48 write_seqcount_end(&idle->seqcount);
48 48 }
49 + NOKPROBE_SYMBOL(enabled_wait);
49 50
50 51 static ssize_t show_idle_count(struct device *dev,
51 52 struct device_attribute *attr, char *buf)
52 53 {
53 54 struct s390_idle_data *idle = &per_cpu(s390_idle, dev->id);
54 55 unsigned long long idle_count;
55 - unsigned int sequence;
56 + unsigned int seq;
56 57
57 58 do {
58 - sequence = ACCESS_ONCE(idle->sequence);
59 + seq = read_seqcount_begin(&idle->seqcount);
59 60 idle_count = ACCESS_ONCE(idle->idle_count);
60 61 if (ACCESS_ONCE(idle->clock_idle_enter))
61 62 idle_count++;
62 - } while ((sequence & 1) || (ACCESS_ONCE(idle->sequence) != sequence));
63 + } while (read_seqcount_retry(&idle->seqcount, seq));
63 64 return sprintf(buf, "%llu\n", idle_count);
64 65 }
65 66 DEVICE_ATTR(idle_count, 0444, show_idle_count, NULL);
···
70 69 {
71 70 struct s390_idle_data *idle = &per_cpu(s390_idle, dev->id);
72 71 unsigned long long now, idle_time, idle_enter, idle_exit;
73 - unsigned int sequence;
72 + unsigned int seq;
74 73
75 74 do {
76 75 now = get_tod_clock();
77 - sequence = ACCESS_ONCE(idle->sequence);
76 + seq = read_seqcount_begin(&idle->seqcount);
78 77 idle_time = ACCESS_ONCE(idle->idle_time);
79 78 idle_enter = ACCESS_ONCE(idle->clock_idle_enter);
80 79 idle_exit = ACCESS_ONCE(idle->clock_idle_exit);
81 - } while ((sequence & 1) || (ACCESS_ONCE(idle->sequence) != sequence));
80 + } while (read_seqcount_retry(&idle->seqcount, seq));
82 81 idle_time += idle_enter ? ((idle_exit ? : now) - idle_enter) : 0;
83 82 return sprintf(buf, "%llu\n", idle_time >> 12);
84 83 }
···
88 87 {
89 88 struct s390_idle_data *idle = &per_cpu(s390_idle, cpu);
90 89 unsigned long long now, idle_enter, idle_exit;
91 - unsigned int sequence;
90 + unsigned int seq;
92 91
93 92 do {
94 93 now = get_tod_clock();
95 - sequence = ACCESS_ONCE(idle->sequence);
94 + seq = read_seqcount_begin(&idle->seqcount);
96 95 idle_enter = ACCESS_ONCE(idle->clock_idle_enter);
97 96 idle_exit = ACCESS_ONCE(idle->clock_idle_exit);
98 - } while ((sequence & 1) || (ACCESS_ONCE(idle->sequence) != sequence));
97 + } while (read_seqcount_retry(&idle->seqcount, seq));
99 98 return idle_enter ? ((idle_exit ?: now) - idle_enter) : 0;
100 99 }
101 100
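The idle.c change above converts an open-coded odd/even sequence counter to the generic seqcount API: the writer makes the sequence odd while updating, and readers retry if they observed an odd sequence or the sequence changed under them. A minimal single-threaded sketch of that protocol, with local stand-in names rather than the kernel's seqlock.h API:

```c
#include <assert.h>

/* Stand-in seqcount: even = stable, odd = update in progress. */
struct seqcount { unsigned seq; };

static void write_begin(struct seqcount *s) { s->seq++; } /* now odd */
static void write_end(struct seqcount *s)   { s->seq++; } /* even again */
static unsigned read_begin(const struct seqcount *s) { return s->seq; }
static int read_retry(const struct seqcount *s, unsigned start)
{
	/* retry if the snapshot was taken mid-update or got overtaken */
	return (start & 1) || s->seq != start;
}

static struct seqcount sc;
static unsigned long long idle_time;

/* Reader loop shaped like show_idle_time() after the patch. */
static unsigned long long read_idle_time(void)
{
	unsigned long long v;
	unsigned seq;

	do {
		seq = read_begin(&sc);
		v = idle_time;
	} while (read_retry(&sc, seq));
	return v;
}
```

In the real kernel code the memory barriers that the removed `smp_wmb()` calls provided are supplied by `write_seqcount_begin()`/`write_seqcount_end()` themselves.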
+1 -4
arch/s390/kernel/irq.c
···
127 127 for_each_online_cpu(cpu)
128 128 seq_printf(p, "CPU%d ", cpu);
129 129 seq_putc(p, '\n');
130 - goto out;
131 130 }
132 131 if (index < NR_IRQS) {
133 132 if (index >= NR_IRQS_BASE)
134 133 goto out;
135 - /* Adjust index to process irqclass_main_desc array entries */
136 - index--;
137 134 seq_printf(p, "%s: ", irqclass_main_desc[index].name);
138 135 irq = irqclass_main_desc[index].irq;
139 136 for_each_online_cpu(cpu)
···
155 158
156 159 unsigned int arch_dynirq_lower_bound(unsigned int from)
157 160 {
158 - return from < THIN_INTERRUPT ? THIN_INTERRUPT : from;
161 + return from < NR_IRQS_BASE ? NR_IRQS_BASE : from;
159 162 }
160 163
161 164 /*
+126 -66
arch/s390/kernel/kprobes.c
··· 29 29 #include <linux/module.h> 30 30 #include <linux/slab.h> 31 31 #include <linux/hardirq.h> 32 + #include <linux/ftrace.h> 32 33 #include <asm/cacheflush.h> 33 34 #include <asm/sections.h> 34 35 #include <asm/dis.h> ··· 59 58 .insn_size = MAX_INSN_SIZE, 60 59 }; 61 60 62 - static void __kprobes copy_instruction(struct kprobe *p) 61 + static void copy_instruction(struct kprobe *p) 63 62 { 63 + unsigned long ip = (unsigned long) p->addr; 64 64 s64 disp, new_disp; 65 65 u64 addr, new_addr; 66 66 67 - memcpy(p->ainsn.insn, p->addr, insn_length(p->opcode >> 8)); 67 + if (ftrace_location(ip) == ip) { 68 + /* 69 + * If kprobes patches the instruction that is morphed by 70 + * ftrace make sure that kprobes always sees the branch 71 + * "jg .+24" that skips the mcount block 72 + */ 73 + ftrace_generate_nop_insn((struct ftrace_insn *)p->ainsn.insn); 74 + p->ainsn.is_ftrace_insn = 1; 75 + } else 76 + memcpy(p->ainsn.insn, p->addr, insn_length(*p->addr >> 8)); 77 + p->opcode = p->ainsn.insn[0]; 68 78 if (!probe_is_insn_relative_long(p->ainsn.insn)) 69 79 return; 70 80 /* ··· 91 79 new_disp = ((addr + (disp * 2)) - new_addr) / 2; 92 80 *(s32 *)&p->ainsn.insn[1] = new_disp; 93 81 } 82 + NOKPROBE_SYMBOL(copy_instruction); 94 83 95 84 static inline int is_kernel_addr(void *addr) 96 85 { 97 86 return addr < (void *)_end; 98 87 } 99 88 100 - static inline int is_module_addr(void *addr) 101 - { 102 - #ifdef CONFIG_64BIT 103 - BUILD_BUG_ON(MODULES_LEN > (1UL << 31)); 104 - if (addr < (void *)MODULES_VADDR) 105 - return 0; 106 - if (addr > (void *)MODULES_END) 107 - return 0; 108 - #endif 109 - return 1; 110 - } 111 - 112 - static int __kprobes s390_get_insn_slot(struct kprobe *p) 89 + static int s390_get_insn_slot(struct kprobe *p) 113 90 { 114 91 /* 115 92 * Get an insn slot that is within the same 2GB area like the original ··· 112 111 p->ainsn.insn = get_insn_slot(); 113 112 return p->ainsn.insn ? 
0 : -ENOMEM; 114 113 } 114 + NOKPROBE_SYMBOL(s390_get_insn_slot); 115 115 116 - static void __kprobes s390_free_insn_slot(struct kprobe *p) 116 + static void s390_free_insn_slot(struct kprobe *p) 117 117 { 118 118 if (!p->ainsn.insn) 119 119 return; ··· 124 122 free_insn_slot(p->ainsn.insn, 0); 125 123 p->ainsn.insn = NULL; 126 124 } 125 + NOKPROBE_SYMBOL(s390_free_insn_slot); 127 126 128 - int __kprobes arch_prepare_kprobe(struct kprobe *p) 127 + int arch_prepare_kprobe(struct kprobe *p) 129 128 { 130 129 if ((unsigned long) p->addr & 0x01) 131 130 return -EINVAL; ··· 135 132 return -EINVAL; 136 133 if (s390_get_insn_slot(p)) 137 134 return -ENOMEM; 138 - p->opcode = *p->addr; 139 135 copy_instruction(p); 140 136 return 0; 141 137 } 138 + NOKPROBE_SYMBOL(arch_prepare_kprobe); 142 139 143 - struct ins_replace_args { 144 - kprobe_opcode_t *ptr; 145 - kprobe_opcode_t opcode; 146 - }; 147 - 148 - static int __kprobes swap_instruction(void *aref) 140 + int arch_check_ftrace_location(struct kprobe *p) 149 141 { 150 - struct kprobe_ctlblk *kcb = get_kprobe_ctlblk(); 151 - unsigned long status = kcb->kprobe_status; 152 - struct ins_replace_args *args = aref; 153 - 154 - kcb->kprobe_status = KPROBE_SWAP_INST; 155 - probe_kernel_write(args->ptr, &args->opcode, sizeof(args->opcode)); 156 - kcb->kprobe_status = status; 157 142 return 0; 158 143 } 159 144 160 - void __kprobes arch_arm_kprobe(struct kprobe *p) 161 - { 162 - struct ins_replace_args args; 145 + struct swap_insn_args { 146 + struct kprobe *p; 147 + unsigned int arm_kprobe : 1; 148 + }; 163 149 164 - args.ptr = p->addr; 165 - args.opcode = BREAKPOINT_INSTRUCTION; 150 + static int swap_instruction(void *data) 151 + { 152 + struct kprobe_ctlblk *kcb = get_kprobe_ctlblk(); 153 + unsigned long status = kcb->kprobe_status; 154 + struct swap_insn_args *args = data; 155 + struct ftrace_insn new_insn, *insn; 156 + struct kprobe *p = args->p; 157 + size_t len; 158 + 159 + new_insn.opc = args->arm_kprobe ? 
BREAKPOINT_INSTRUCTION : p->opcode; 160 + len = sizeof(new_insn.opc); 161 + if (!p->ainsn.is_ftrace_insn) 162 + goto skip_ftrace; 163 + len = sizeof(new_insn); 164 + insn = (struct ftrace_insn *) p->addr; 165 + if (args->arm_kprobe) { 166 + if (is_ftrace_nop(insn)) 167 + new_insn.disp = KPROBE_ON_FTRACE_NOP; 168 + else 169 + new_insn.disp = KPROBE_ON_FTRACE_CALL; 170 + } else { 171 + ftrace_generate_call_insn(&new_insn, (unsigned long)p->addr); 172 + if (insn->disp == KPROBE_ON_FTRACE_NOP) 173 + ftrace_generate_nop_insn(&new_insn); 174 + } 175 + skip_ftrace: 176 + kcb->kprobe_status = KPROBE_SWAP_INST; 177 + probe_kernel_write(p->addr, &new_insn, len); 178 + kcb->kprobe_status = status; 179 + return 0; 180 + } 181 + NOKPROBE_SYMBOL(swap_instruction); 182 + 183 + void arch_arm_kprobe(struct kprobe *p) 184 + { 185 + struct swap_insn_args args = {.p = p, .arm_kprobe = 1}; 186 + 166 187 stop_machine(swap_instruction, &args, NULL); 167 188 } 189 + NOKPROBE_SYMBOL(arch_arm_kprobe); 168 190 169 - void __kprobes arch_disarm_kprobe(struct kprobe *p) 191 + void arch_disarm_kprobe(struct kprobe *p) 170 192 { 171 - struct ins_replace_args args; 193 + struct swap_insn_args args = {.p = p, .arm_kprobe = 0}; 172 194 173 - args.ptr = p->addr; 174 - args.opcode = p->opcode; 175 195 stop_machine(swap_instruction, &args, NULL); 176 196 } 197 + NOKPROBE_SYMBOL(arch_disarm_kprobe); 177 198 178 - void __kprobes arch_remove_kprobe(struct kprobe *p) 199 + void arch_remove_kprobe(struct kprobe *p) 179 200 { 180 201 s390_free_insn_slot(p); 181 202 } 203 + NOKPROBE_SYMBOL(arch_remove_kprobe); 182 204 183 - static void __kprobes enable_singlestep(struct kprobe_ctlblk *kcb, 184 - struct pt_regs *regs, 185 - unsigned long ip) 205 + static void enable_singlestep(struct kprobe_ctlblk *kcb, 206 + struct pt_regs *regs, 207 + unsigned long ip) 186 208 { 187 209 struct per_regs per_kprobe; 188 210 ··· 227 199 regs->psw.mask &= ~(PSW_MASK_IO | PSW_MASK_EXT); 228 200 regs->psw.addr = ip | 
PSW_ADDR_AMODE; 229 201 } 202 + NOKPROBE_SYMBOL(enable_singlestep); 230 203 231 - static void __kprobes disable_singlestep(struct kprobe_ctlblk *kcb, 232 - struct pt_regs *regs, 233 - unsigned long ip) 204 + static void disable_singlestep(struct kprobe_ctlblk *kcb, 205 + struct pt_regs *regs, 206 + unsigned long ip) 234 207 { 235 208 /* Restore control regs and psw mask, set new psw address */ 236 209 __ctl_load(kcb->kprobe_saved_ctl, 9, 11); ··· 239 210 regs->psw.mask |= kcb->kprobe_saved_imask; 240 211 regs->psw.addr = ip | PSW_ADDR_AMODE; 241 212 } 213 + NOKPROBE_SYMBOL(disable_singlestep); 242 214 243 215 /* 244 216 * Activate a kprobe by storing its pointer to current_kprobe. The 245 217 * previous kprobe is stored in kcb->prev_kprobe. A stack of up to 246 218 * two kprobes can be active, see KPROBE_REENTER. 247 219 */ 248 - static void __kprobes push_kprobe(struct kprobe_ctlblk *kcb, struct kprobe *p) 220 + static void push_kprobe(struct kprobe_ctlblk *kcb, struct kprobe *p) 249 221 { 250 222 kcb->prev_kprobe.kp = __this_cpu_read(current_kprobe); 251 223 kcb->prev_kprobe.status = kcb->kprobe_status; 252 224 __this_cpu_write(current_kprobe, p); 253 225 } 226 + NOKPROBE_SYMBOL(push_kprobe); 254 227 255 228 /* 256 229 * Deactivate a kprobe by backing up to the previous state. If the 257 230 * current state is KPROBE_REENTER prev_kprobe.kp will be non-NULL, 258 231 * for any other state prev_kprobe.kp will be NULL. 
259 232 */ 260 - static void __kprobes pop_kprobe(struct kprobe_ctlblk *kcb) 233 + static void pop_kprobe(struct kprobe_ctlblk *kcb) 261 234 { 262 235 __this_cpu_write(current_kprobe, kcb->prev_kprobe.kp); 263 236 kcb->kprobe_status = kcb->prev_kprobe.status; 264 237 } 238 + NOKPROBE_SYMBOL(pop_kprobe); 265 239 266 - void __kprobes arch_prepare_kretprobe(struct kretprobe_instance *ri, 267 - struct pt_regs *regs) 240 + void arch_prepare_kretprobe(struct kretprobe_instance *ri, struct pt_regs *regs) 268 241 { 269 242 ri->ret_addr = (kprobe_opcode_t *) regs->gprs[14]; 270 243 271 244 /* Replace the return addr with trampoline addr */ 272 245 regs->gprs[14] = (unsigned long) &kretprobe_trampoline; 273 246 } 247 + NOKPROBE_SYMBOL(arch_prepare_kretprobe); 274 248 275 - static void __kprobes kprobe_reenter_check(struct kprobe_ctlblk *kcb, 276 - struct kprobe *p) 249 + static void kprobe_reenter_check(struct kprobe_ctlblk *kcb, struct kprobe *p) 277 250 { 278 251 switch (kcb->kprobe_status) { 279 252 case KPROBE_HIT_SSDONE: ··· 295 264 BUG(); 296 265 } 297 266 } 267 + NOKPROBE_SYMBOL(kprobe_reenter_check); 298 268 299 - static int __kprobes kprobe_handler(struct pt_regs *regs) 269 + static int kprobe_handler(struct pt_regs *regs) 300 270 { 301 271 struct kprobe_ctlblk *kcb; 302 272 struct kprobe *p; ··· 371 339 preempt_enable_no_resched(); 372 340 return 0; 373 341 } 342 + NOKPROBE_SYMBOL(kprobe_handler); 374 343 375 344 /* 376 345 * Function return probe trampoline: ··· 388 355 /* 389 356 * Called when the probe at kretprobe trampoline is hit 390 357 */ 391 - static int __kprobes trampoline_probe_handler(struct kprobe *p, 392 - struct pt_regs *regs) 358 + static int trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs) 393 359 { 394 360 struct kretprobe_instance *ri; 395 361 struct hlist_head *head, empty_rp; ··· 476 444 */ 477 445 return 1; 478 446 } 447 + NOKPROBE_SYMBOL(trampoline_probe_handler); 479 448 480 449 /* 481 450 * Called after single-stepping. 
p->addr is the address of the ··· 486 453 * single-stepped a copy of the instruction. The address of this 487 454 * copy is p->ainsn.insn. 488 455 */ 489 - static void __kprobes resume_execution(struct kprobe *p, struct pt_regs *regs) 456 + static void resume_execution(struct kprobe *p, struct pt_regs *regs) 490 457 { 491 458 struct kprobe_ctlblk *kcb = get_kprobe_ctlblk(); 492 459 unsigned long ip = regs->psw.addr & PSW_ADDR_INSN; 493 460 int fixup = probe_get_fixup_type(p->ainsn.insn); 461 + 462 + /* Check if the kprobes location is an enabled ftrace caller */ 463 + if (p->ainsn.is_ftrace_insn) { 464 + struct ftrace_insn *insn = (struct ftrace_insn *) p->addr; 465 + struct ftrace_insn call_insn; 466 + 467 + ftrace_generate_call_insn(&call_insn, (unsigned long) p->addr); 468 + /* 469 + * A kprobe on an enabled ftrace call site actually single 470 + * stepped an unconditional branch (ftrace nop equivalent). 471 + * Now we need to fixup things and pretend that a brasl r0,... 472 + * was executed instead. 
473 + */ 474 + if (insn->disp == KPROBE_ON_FTRACE_CALL) { 475 + ip += call_insn.disp * 2 - MCOUNT_INSN_SIZE; 476 + regs->gprs[0] = (unsigned long)p->addr + sizeof(*insn); 477 + } 478 + } 494 479 495 480 if (fixup & FIXUP_PSW_NORMAL) 496 481 ip += (unsigned long) p->addr - (unsigned long) p->ainsn.insn; ··· 527 476 528 477 disable_singlestep(kcb, regs, ip); 529 478 } 479 + NOKPROBE_SYMBOL(resume_execution); 530 480 531 - static int __kprobes post_kprobe_handler(struct pt_regs *regs) 481 + static int post_kprobe_handler(struct pt_regs *regs) 532 482 { 533 483 struct kprobe_ctlblk *kcb = get_kprobe_ctlblk(); 534 484 struct kprobe *p = kprobe_running(); ··· 556 504 557 505 return 1; 558 506 } 507 + NOKPROBE_SYMBOL(post_kprobe_handler); 559 508 560 - static int __kprobes kprobe_trap_handler(struct pt_regs *regs, int trapnr) 509 + static int kprobe_trap_handler(struct pt_regs *regs, int trapnr) 561 510 { 562 511 struct kprobe_ctlblk *kcb = get_kprobe_ctlblk(); 563 512 struct kprobe *p = kprobe_running(); ··· 620 567 } 621 568 return 0; 622 569 } 570 + NOKPROBE_SYMBOL(kprobe_trap_handler); 623 571 624 - int __kprobes kprobe_fault_handler(struct pt_regs *regs, int trapnr) 572 + int kprobe_fault_handler(struct pt_regs *regs, int trapnr) 625 573 { 626 574 int ret; 627 575 ··· 633 579 local_irq_restore(regs->psw.mask & ~PSW_MASK_PER); 634 580 return ret; 635 581 } 582 + NOKPROBE_SYMBOL(kprobe_fault_handler); 636 583 637 584 /* 638 585 * Wrapper routine to for handling exceptions. 
639 586 */ 640 - int __kprobes kprobe_exceptions_notify(struct notifier_block *self, 641 - unsigned long val, void *data) 587 + int kprobe_exceptions_notify(struct notifier_block *self, 588 + unsigned long val, void *data) 642 589 { 643 590 struct die_args *args = (struct die_args *) data; 644 591 struct pt_regs *regs = args->regs; ··· 671 616 672 617 return ret; 673 618 } 619 + NOKPROBE_SYMBOL(kprobe_exceptions_notify); 674 620 675 - int __kprobes setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs) 621 + int setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs) 676 622 { 677 623 struct jprobe *jp = container_of(p, struct jprobe, kp); 678 624 struct kprobe_ctlblk *kcb = get_kprobe_ctlblk(); ··· 691 635 memcpy(kcb->jprobes_stack, (void *) stack, MIN_STACK_SIZE(stack)); 692 636 return 1; 693 637 } 638 + NOKPROBE_SYMBOL(setjmp_pre_handler); 694 639 695 - void __kprobes jprobe_return(void) 640 + void jprobe_return(void) 696 641 { 697 642 asm volatile(".word 0x0002"); 698 643 } 644 + NOKPROBE_SYMBOL(jprobe_return); 699 645 700 - int __kprobes longjmp_break_handler(struct kprobe *p, struct pt_regs *regs) 646 + int longjmp_break_handler(struct kprobe *p, struct pt_regs *regs) 701 647 { 702 648 struct kprobe_ctlblk *kcb = get_kprobe_ctlblk(); 703 649 unsigned long stack; ··· 713 655 preempt_enable_no_resched(); 714 656 return 1; 715 657 } 658 + NOKPROBE_SYMBOL(longjmp_break_handler); 716 659 717 660 static struct kprobe trampoline = { 718 661 .addr = (kprobe_opcode_t *) &kretprobe_trampoline, ··· 725 666 return register_kprobe(&trampoline); 726 667 } 727 668 728 - int __kprobes arch_trampoline_kprobe(struct kprobe *p) 669 + int arch_trampoline_kprobe(struct kprobe *p) 729 670 { 730 671 return p->addr == (kprobe_opcode_t *) &kretprobe_trampoline; 731 672 } 673 + NOKPROBE_SYMBOL(arch_trampoline_kprobe);
+1
arch/s390/kernel/mcount.S
··· 27 27 .globl ftrace_regs_caller 28 28 .set ftrace_regs_caller,ftrace_caller 29 29 lgr %r1,%r15 30 + aghi %r0,MCOUNT_RETURN_FIXUP 30 31 aghi %r15,-STACK_FRAME_SIZE 31 32 stg %r1,__SF_BACKCHAIN(%r15) 32 33 stg %r1,(STACK_PTREGS_GPRS+15*8)(%r15)
-1
arch/s390/kernel/perf_cpum_sf.c
··· 1383 1383 cpuhw->lsctl.ed = 1; 1384 1384 1385 1385 /* Set in_use flag and store event */ 1386 - event->hw.idx = 0; /* only one sampling event per CPU supported */ 1387 1386 cpuhw->event = event; 1388 1387 cpuhw->flags |= PMU_F_IN_USE; 1389 1388
+2 -1
arch/s390/kernel/process.c
··· 61 61 return sf->gprs[8]; 62 62 } 63 63 64 - extern void __kprobes kernel_thread_starter(void); 64 + extern void kernel_thread_starter(void); 65 65 66 66 /* 67 67 * Free current thread data structures etc.. ··· 153 153 save_fp_ctl(&p->thread.fp_regs.fpc); 154 154 save_fp_regs(p->thread.fp_regs.fprs); 155 155 p->thread.fp_regs.pad = 0; 156 + p->thread.vxrs = NULL; 156 157 /* Set a new TLS ? */ 157 158 if (clone_flags & CLONE_SETTLS) { 158 159 unsigned long tls = frame->childregs.gprs[6];
+84 -33
arch/s390/kernel/ptrace.c
··· 248 248 */ 249 249 tmp = 0; 250 250 251 - } else if (addr < (addr_t) (&dummy->regs.fp_regs + 1)) { 252 - /* 253 - * floating point regs. are stored in the thread structure 251 + } else if (addr == (addr_t) &dummy->regs.fp_regs.fpc) { 252 + /* 253 + * floating point control reg. is in the thread structure 254 254 */ 255 - offset = addr - (addr_t) &dummy->regs.fp_regs; 256 - tmp = *(addr_t *)((addr_t) &child->thread.fp_regs + offset); 257 - if (addr == (addr_t) &dummy->regs.fp_regs.fpc) 258 - tmp <<= BITS_PER_LONG - 32; 255 + tmp = child->thread.fp_regs.fpc; 256 + tmp <<= BITS_PER_LONG - 32; 257 + 258 + } else if (addr < (addr_t) (&dummy->regs.fp_regs + 1)) { 259 + /* 260 + * floating point regs. are either in child->thread.fp_regs 261 + * or the child->thread.vxrs array 262 + */ 263 + offset = addr - (addr_t) &dummy->regs.fp_regs.fprs; 264 + #ifdef CONFIG_64BIT 265 + if (child->thread.vxrs) 266 + tmp = *(addr_t *) 267 + ((addr_t) child->thread.vxrs + 2*offset); 268 + else 269 + #endif 270 + tmp = *(addr_t *) 271 + ((addr_t) &child->thread.fp_regs.fprs + offset); 259 272 260 273 } else if (addr < (addr_t) (&dummy->regs.per_info + 1)) { 261 274 /* ··· 396 383 */ 397 384 return 0; 398 385 386 + } else if (addr == (addr_t) &dummy->regs.fp_regs.fpc) { 387 + /* 388 + * floating point control reg. is in the thread structure 389 + */ 390 + if ((unsigned int) data != 0 || 391 + test_fp_ctl(data >> (BITS_PER_LONG - 32))) 392 + return -EINVAL; 393 + child->thread.fp_regs.fpc = data >> (BITS_PER_LONG - 32); 394 + 399 395 } else if (addr < (addr_t) (&dummy->regs.fp_regs + 1)) { 400 396 /* 401 - * floating point regs. are stored in the thread structure 397 + * floating point regs. 
are either in child->thread.fp_regs 398 + * or the child->thread.vxrs array 402 399 */ 403 - if (addr == (addr_t) &dummy->regs.fp_regs.fpc) 404 - if ((unsigned int) data != 0 || 405 - test_fp_ctl(data >> (BITS_PER_LONG - 32))) 406 - return -EINVAL; 407 - offset = addr - (addr_t) &dummy->regs.fp_regs; 408 - *(addr_t *)((addr_t) &child->thread.fp_regs + offset) = data; 400 + offset = addr - (addr_t) &dummy->regs.fp_regs.fprs; 401 + #ifdef CONFIG_64BIT 402 + if (child->thread.vxrs) 403 + *(addr_t *)((addr_t) 404 + child->thread.vxrs + 2*offset) = data; 405 + else 406 + #endif 407 + *(addr_t *)((addr_t) 408 + &child->thread.fp_regs.fprs + offset) = data; 409 409 410 410 } else if (addr < (addr_t) (&dummy->regs.per_info + 1)) { 411 411 /* ··· 637 611 */ 638 612 tmp = 0; 639 613 614 + } else if (addr == (addr_t) &dummy32->regs.fp_regs.fpc) { 615 + /* 616 + * floating point control reg. is in the thread structure 617 + */ 618 + tmp = child->thread.fp_regs.fpc; 619 + 640 620 } else if (addr < (addr_t) (&dummy32->regs.fp_regs + 1)) { 641 621 /* 642 - * floating point regs. are stored in the thread structure 622 + * floating point regs. are either in child->thread.fp_regs 623 + * or the child->thread.vxrs array 643 624 */ 644 - offset = addr - (addr_t) &dummy32->regs.fp_regs; 645 - tmp = *(__u32 *)((addr_t) &child->thread.fp_regs + offset); 625 + offset = addr - (addr_t) &dummy32->regs.fp_regs.fprs; 626 + #ifdef CONFIG_64BIT 627 + if (child->thread.vxrs) 628 + tmp = *(__u32 *) 629 + ((addr_t) child->thread.vxrs + 2*offset); 630 + else 631 + #endif 632 + tmp = *(__u32 *) 633 + ((addr_t) &child->thread.fp_regs.fprs + offset); 646 634 647 635 } else if (addr < (addr_t) (&dummy32->regs.per_info + 1)) { 648 636 /* ··· 762 722 */ 763 723 return 0; 764 724 725 + } else if (addr == (addr_t) &dummy32->regs.fp_regs.fpc) { 726 + /* 727 + * floating point control reg. 
is in the thread structure 728 + */ 729 + if (test_fp_ctl(tmp)) 730 + return -EINVAL; 731 + child->thread.fp_regs.fpc = data; 732 + 765 733 } else if (addr < (addr_t) (&dummy32->regs.fp_regs + 1)) { 766 734 /* 767 - * floating point regs. are stored in the thread structure 735 + * floating point regs. are either in child->thread.fp_regs 736 + * or the child->thread.vxrs array 768 737 */ 769 - if (addr == (addr_t) &dummy32->regs.fp_regs.fpc && 770 - test_fp_ctl(tmp)) 771 - return -EINVAL; 772 - offset = addr - (addr_t) &dummy32->regs.fp_regs; 773 - *(__u32 *)((addr_t) &child->thread.fp_regs + offset) = tmp; 738 + offset = addr - (addr_t) &dummy32->regs.fp_regs.fprs; 739 + #ifdef CONFIG_64BIT 740 + if (child->thread.vxrs) 741 + *(__u32 *)((addr_t) 742 + child->thread.vxrs + 2*offset) = tmp; 743 + else 744 + #endif 745 + *(__u32 *)((addr_t) 746 + &child->thread.fp_regs.fprs + offset) = tmp; 774 747 775 748 } else if (addr < (addr_t) (&dummy32->regs.per_info + 1)) { 776 749 /* ··· 1091 1038 return 0; 1092 1039 } 1093 1040 1094 - static int s390_vxrs_active(struct task_struct *target, 1095 - const struct user_regset *regset) 1096 - { 1097 - return !!target->thread.vxrs; 1098 - } 1099 - 1100 1041 static int s390_vxrs_low_get(struct task_struct *target, 1101 1042 const struct user_regset *regset, 1102 1043 unsigned int pos, unsigned int count, ··· 1099 1052 __u64 vxrs[__NUM_VXRS_LOW]; 1100 1053 int i; 1101 1054 1055 + if (!MACHINE_HAS_VX) 1056 + return -ENODEV; 1102 1057 if (target->thread.vxrs) { 1103 1058 if (target == current) 1104 1059 save_vx_regs(target->thread.vxrs); ··· 1119 1070 __u64 vxrs[__NUM_VXRS_LOW]; 1120 1071 int i, rc; 1121 1072 1073 + if (!MACHINE_HAS_VX) 1074 + return -ENODEV; 1122 1075 if (!target->thread.vxrs) { 1123 1076 rc = alloc_vector_registers(target); 1124 1077 if (rc) ··· 1146 1095 { 1147 1096 __vector128 vxrs[__NUM_VXRS_HIGH]; 1148 1097 1098 + if (!MACHINE_HAS_VX) 1099 + return -ENODEV; 1149 1100 if (target->thread.vxrs) { 1150 1101 if 
(target == current) 1151 1102 save_vx_regs(target->thread.vxrs); ··· 1165 1112 { 1166 1113 int rc; 1167 1114 1115 + if (!MACHINE_HAS_VX) 1116 + return -ENODEV; 1168 1117 if (!target->thread.vxrs) { 1169 1118 rc = alloc_vector_registers(target); 1170 1119 if (rc) ··· 1251 1196 .n = __NUM_VXRS_LOW, 1252 1197 .size = sizeof(__u64), 1253 1198 .align = sizeof(__u64), 1254 - .active = s390_vxrs_active, 1255 1199 .get = s390_vxrs_low_get, 1256 1200 .set = s390_vxrs_low_set, 1257 1201 }, ··· 1259 1205 .n = __NUM_VXRS_HIGH, 1260 1206 .size = sizeof(__vector128), 1261 1207 .align = sizeof(__vector128), 1262 - .active = s390_vxrs_active, 1263 1208 .get = s390_vxrs_high_get, 1264 1209 .set = s390_vxrs_high_set, 1265 1210 }, ··· 1472 1419 .n = __NUM_VXRS_LOW, 1473 1420 .size = sizeof(__u64), 1474 1421 .align = sizeof(__u64), 1475 - .active = s390_vxrs_active, 1476 1422 .get = s390_vxrs_low_get, 1477 1423 .set = s390_vxrs_low_set, 1478 1424 }, ··· 1480 1428 .n = __NUM_VXRS_HIGH, 1481 1429 .size = sizeof(__vector128), 1482 1430 .align = sizeof(__vector128), 1483 - .active = s390_vxrs_active, 1484 1431 .get = s390_vxrs_high_get, 1485 1432 .set = s390_vxrs_high_set, 1486 1433 },
-2
arch/s390/kernel/setup.c
··· 41 41 #include <linux/ctype.h> 42 42 #include <linux/reboot.h> 43 43 #include <linux/topology.h> 44 - #include <linux/ftrace.h> 45 44 #include <linux/kexec.h> 46 45 #include <linux/crash_dump.h> 47 46 #include <linux/memory.h> ··· 355 356 lc->steal_timer = S390_lowcore.steal_timer; 356 357 lc->last_update_timer = S390_lowcore.last_update_timer; 357 358 lc->last_update_clock = S390_lowcore.last_update_clock; 358 - lc->ftrace_func = S390_lowcore.ftrace_func; 359 359 360 360 restart_stack = __alloc_bootmem(ASYNC_SIZE, ASYNC_SIZE, 0); 361 361 restart_stack += ASYNC_SIZE;
+1 -1
arch/s390/kernel/signal.c
··· 371 371 restorer = (unsigned long) ka->sa.sa_restorer | PSW_ADDR_AMODE; 372 372 } else { 373 373 /* Signal frame without vector registers are short ! */ 374 - __u16 __user *svc = (void *) frame + frame_size - 2; 374 + __u16 __user *svc = (void __user *) frame + frame_size - 2; 375 375 if (__put_user(S390_SYSCALL_OPCODE | __NR_sigreturn, svc)) 376 376 return -EFAULT; 377 377 restorer = (unsigned long) svc | PSW_ADDR_AMODE;
-1
arch/s390/kernel/smp.c
··· 236 236 lc->percpu_offset = __per_cpu_offset[cpu]; 237 237 lc->kernel_asce = S390_lowcore.kernel_asce; 238 238 lc->machine_flags = S390_lowcore.machine_flags; 239 - lc->ftrace_func = S390_lowcore.ftrace_func; 240 239 lc->user_timer = lc->system_timer = lc->steal_timer = 0; 241 240 __ctl_store(lc->cregs_save_area, 0, 15); 242 241 save_access_regs((unsigned int *) lc->access_regs_save_area);
+2
arch/s390/kernel/syscalls.S
··· 360 360 SYSCALL(sys_getrandom,sys_getrandom,compat_sys_getrandom) 361 361 SYSCALL(sys_memfd_create,sys_memfd_create,compat_sys_memfd_create) /* 350 */ 362 362 SYSCALL(sys_bpf,sys_bpf,compat_sys_bpf) 363 + SYSCALL(sys_ni_syscall,sys_s390_pci_mmio_write,compat_sys_s390_pci_mmio_write) 364 + SYSCALL(sys_ni_syscall,sys_s390_pci_mmio_read,compat_sys_s390_pci_mmio_read)
+2 -1
arch/s390/kernel/time.c
··· 61 61 /* 62 62 * Scheduler clock - returns current time in nanosec units. 63 63 */ 64 - unsigned long long notrace __kprobes sched_clock(void) 64 + unsigned long long notrace sched_clock(void) 65 65 { 66 66 return tod_to_ns(get_tod_clock_monotonic()); 67 67 } 68 + NOKPROBE_SYMBOL(sched_clock); 68 69 69 70 /* 70 71 * Monotonic_clock - returns # of nanoseconds passed since time_init()
+16 -9
arch/s390/kernel/traps.c
··· 49 49 return; 50 50 if (!printk_ratelimit()) 51 51 return; 52 - printk("User process fault: interruption code 0x%X ", regs->int_code); 52 + printk("User process fault: interruption code %04x ilc:%d ", 53 + regs->int_code & 0xffff, regs->int_code >> 17); 53 54 print_vma_addr("in ", regs->psw.addr & PSW_ADDR_INSN); 54 55 printk("\n"); 55 56 show_regs(regs); ··· 88 87 } 89 88 } 90 89 91 - static void __kprobes do_trap(struct pt_regs *regs, int si_signo, int si_code, 92 - char *str) 90 + static void do_trap(struct pt_regs *regs, int si_signo, int si_code, char *str) 93 91 { 94 92 if (notify_die(DIE_TRAP, str, regs, 0, 95 93 regs->int_code, si_signo) == NOTIFY_STOP) 96 94 return; 97 95 do_report_trap(regs, si_signo, si_code, str); 98 96 } 97 + NOKPROBE_SYMBOL(do_trap); 99 98 100 - void __kprobes do_per_trap(struct pt_regs *regs) 99 + void do_per_trap(struct pt_regs *regs) 101 100 { 102 101 siginfo_t info; 103 102 ··· 112 111 (void __force __user *) current->thread.per_event.address; 113 112 force_sig_info(SIGTRAP, &info, current); 114 113 } 114 + NOKPROBE_SYMBOL(do_per_trap); 115 115 116 116 void default_trap_handler(struct pt_regs *regs) 117 117 { ··· 153 151 "privileged operation") 154 152 DO_ERROR_INFO(special_op_exception, SIGILL, ILL_ILLOPN, 155 153 "special operation exception") 156 - DO_ERROR_INFO(translation_exception, SIGILL, ILL_ILLOPN, 157 - "translation exception") 158 154 159 155 #ifdef CONFIG_64BIT 160 156 DO_ERROR_INFO(transaction_exception, SIGILL, ILL_ILLOPN, ··· 179 179 do_trap(regs, SIGFPE, si_code, "floating point exception"); 180 180 } 181 181 182 - void __kprobes illegal_op(struct pt_regs *regs) 182 + void translation_exception(struct pt_regs *regs) 183 + { 184 + /* May never happen. 
*/ 185 + die(regs, "Translation exception"); 186 + } 187 + 188 + void illegal_op(struct pt_regs *regs) 183 189 { 184 190 siginfo_t info; 185 191 __u8 opcode[6]; ··· 258 252 if (signal) 259 253 do_trap(regs, signal, ILL_ILLOPC, "illegal operation"); 260 254 } 261 - 255 + NOKPROBE_SYMBOL(illegal_op); 262 256 263 257 #ifdef CONFIG_MATHEMU 264 258 void specification_exception(struct pt_regs *regs) ··· 475 469 do_trap(regs, SIGILL, ILL_PRVOPC, "space switch event"); 476 470 } 477 471 478 - void __kprobes kernel_stack_overflow(struct pt_regs * regs) 472 + void kernel_stack_overflow(struct pt_regs *regs) 479 473 { 480 474 bust_spinlocks(1); 481 475 printk("Kernel stack overflow.\n"); ··· 483 477 bust_spinlocks(0); 484 478 panic("Corrupt kernel stack, can't continue."); 485 479 } 480 + NOKPROBE_SYMBOL(kernel_stack_overflow); 486 481 487 482 void __init trap_init(void) 488 483 {
+1 -1
arch/s390/kvm/kvm-s390.c
··· 271 271 case KVM_S390_VM_MEM_CLR_CMMA: 272 272 mutex_lock(&kvm->lock); 273 273 idx = srcu_read_lock(&kvm->srcu); 274 - page_table_reset_pgste(kvm->arch.gmap->mm, 0, TASK_SIZE, false); 274 + s390_reset_cmma(kvm->arch.gmap->mm); 275 275 srcu_read_unlock(&kvm->srcu, idx); 276 276 mutex_unlock(&kvm->lock); 277 277 ret = 0;
+12 -5
arch/s390/kvm/priv.c
··· 156 156 return 0; 157 157 } 158 158 159 - static void __skey_check_enable(struct kvm_vcpu *vcpu) 159 + static int __skey_check_enable(struct kvm_vcpu *vcpu) 160 160 { 161 + int rc = 0; 161 162 if (!(vcpu->arch.sie_block->ictl & (ICTL_ISKE | ICTL_SSKE | ICTL_RRBE))) 162 - return; 163 + return rc; 163 164 164 - s390_enable_skey(); 165 + rc = s390_enable_skey(); 165 166 trace_kvm_s390_skey_related_inst(vcpu); 166 167 vcpu->arch.sie_block->ictl &= ~(ICTL_ISKE | ICTL_SSKE | ICTL_RRBE); 168 + return rc; 167 169 } 168 170 169 171 170 172 static int handle_skey(struct kvm_vcpu *vcpu) 171 173 { 172 - __skey_check_enable(vcpu); 174 + int rc = __skey_check_enable(vcpu); 173 175 176 + if (rc) 177 + return rc; 174 178 vcpu->stat.instruction_storage_key++; 175 179 176 180 if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE) ··· 687 683 } 688 684 689 685 if (vcpu->run->s.regs.gprs[reg1] & PFMF_SK) { 690 - __skey_check_enable(vcpu); 686 + int rc = __skey_check_enable(vcpu); 687 + 688 + if (rc) 689 + return rc; 691 690 if (set_guest_storage_key(current->mm, useraddr, 692 691 vcpu->run->s.regs.gprs[reg1] & PFMF_KEY, 693 692 vcpu->run->s.regs.gprs[reg1] & PFMF_NQ))
+6 -4
arch/s390/mm/fault.c
··· 261 261 return; 262 262 if (!printk_ratelimit()) 263 263 return; 264 - printk(KERN_ALERT "User process fault: interruption code 0x%X ", 265 - regs->int_code); 264 + printk(KERN_ALERT "User process fault: interruption code %04x ilc:%d", 265 + regs->int_code & 0xffff, regs->int_code >> 17); 266 266 print_vma_addr(KERN_CONT "in ", regs->psw.addr & PSW_ADDR_INSN); 267 267 printk(KERN_CONT "\n"); 268 268 printk(KERN_ALERT "failing address: %016lx TEID: %016lx\n", ··· 548 548 return fault; 549 549 } 550 550 551 - void __kprobes do_protection_exception(struct pt_regs *regs) 551 + void do_protection_exception(struct pt_regs *regs) 552 552 { 553 553 unsigned long trans_exc_code; 554 554 int fault; ··· 574 574 if (unlikely(fault)) 575 575 do_fault_error(regs, fault); 576 576 } 577 + NOKPROBE_SYMBOL(do_protection_exception); 577 578 578 - void __kprobes do_dat_exception(struct pt_regs *regs) 579 + void do_dat_exception(struct pt_regs *regs) 579 580 { 580 581 int access, fault; 581 582 ··· 585 584 if (unlikely(fault)) 586 585 do_fault_error(regs, fault); 587 586 } 587 + NOKPROBE_SYMBOL(do_dat_exception); 588 588 589 589 #ifdef CONFIG_PFAULT 590 590 /*
+82 -103
arch/s390/mm/pgtable.c
··· 18 18 #include <linux/rcupdate.h> 19 19 #include <linux/slab.h> 20 20 #include <linux/swapops.h> 21 + #include <linux/ksm.h> 22 + #include <linux/mman.h> 21 23 22 24 #include <asm/pgtable.h> 23 25 #include <asm/pgalloc.h> ··· 752 750 break; 753 751 /* Walk the process page table, lock and get pte pointer */ 754 752 ptep = get_locked_pte(gmap->mm, addr, &ptl); 755 - if (unlikely(!ptep)) 756 - continue; 753 + VM_BUG_ON(!ptep); 757 754 /* Set notification bit in the pgste of the pte */ 758 755 entry = *ptep; 759 756 if ((pte_val(entry) & (_PAGE_INVALID | _PAGE_PROTECT)) == 0) { ··· 762 761 gaddr += PAGE_SIZE; 763 762 len -= PAGE_SIZE; 764 763 } 765 - spin_unlock(ptl); 764 + pte_unmap_unlock(ptep, ptl); 766 765 } 767 766 up_read(&gmap->mm->mmap_sem); 768 767 return rc; ··· 835 834 __free_page(page); 836 835 } 837 836 838 - static inline unsigned long page_table_reset_pte(struct mm_struct *mm, pmd_t *pmd, 839 - unsigned long addr, unsigned long end, bool init_skey) 840 - { 841 - pte_t *start_pte, *pte; 842 - spinlock_t *ptl; 843 - pgste_t pgste; 844 - 845 - start_pte = pte_offset_map_lock(mm, pmd, addr, &ptl); 846 - pte = start_pte; 847 - do { 848 - pgste = pgste_get_lock(pte); 849 - pgste_val(pgste) &= ~_PGSTE_GPS_USAGE_MASK; 850 - if (init_skey) { 851 - unsigned long address; 852 - 853 - pgste_val(pgste) &= ~(PGSTE_ACC_BITS | PGSTE_FP_BIT | 854 - PGSTE_GR_BIT | PGSTE_GC_BIT); 855 - 856 - /* skip invalid and not writable pages */ 857 - if (pte_val(*pte) & _PAGE_INVALID || 858 - !(pte_val(*pte) & _PAGE_WRITE)) { 859 - pgste_set_unlock(pte, pgste); 860 - continue; 861 - } 862 - 863 - address = pte_val(*pte) & PAGE_MASK; 864 - page_set_storage_key(address, PAGE_DEFAULT_KEY, 1); 865 - } 866 - pgste_set_unlock(pte, pgste); 867 - } while (pte++, addr += PAGE_SIZE, addr != end); 868 - pte_unmap_unlock(start_pte, ptl); 869 - 870 - return addr; 871 - } 872 - 873 - static inline unsigned long page_table_reset_pmd(struct mm_struct *mm, pud_t *pud, 874 - unsigned long addr, 
unsigned long end, bool init_skey) 875 - { 876 - unsigned long next; 877 - pmd_t *pmd; 878 - 879 - pmd = pmd_offset(pud, addr); 880 - do { 881 - next = pmd_addr_end(addr, end); 882 - if (pmd_none_or_clear_bad(pmd)) 883 - continue; 884 - next = page_table_reset_pte(mm, pmd, addr, next, init_skey); 885 - } while (pmd++, addr = next, addr != end); 886 - 887 - return addr; 888 - } 889 - 890 - static inline unsigned long page_table_reset_pud(struct mm_struct *mm, pgd_t *pgd, 891 - unsigned long addr, unsigned long end, bool init_skey) 892 - { 893 - unsigned long next; 894 - pud_t *pud; 895 - 896 - pud = pud_offset(pgd, addr); 897 - do { 898 - next = pud_addr_end(addr, end); 899 - if (pud_none_or_clear_bad(pud)) 900 - continue; 901 - next = page_table_reset_pmd(mm, pud, addr, next, init_skey); 902 - } while (pud++, addr = next, addr != end); 903 - 904 - return addr; 905 - } 906 - 907 - void page_table_reset_pgste(struct mm_struct *mm, unsigned long start, 908 - unsigned long end, bool init_skey) 909 - { 910 - unsigned long addr, next; 911 - pgd_t *pgd; 912 - 913 - down_write(&mm->mmap_sem); 914 - if (init_skey && mm_use_skey(mm)) 915 - goto out_up; 916 - addr = start; 917 - pgd = pgd_offset(mm, addr); 918 - do { 919 - next = pgd_addr_end(addr, end); 920 - if (pgd_none_or_clear_bad(pgd)) 921 - continue; 922 - next = page_table_reset_pud(mm, pgd, addr, next, init_skey); 923 - } while (pgd++, addr = next, addr != end); 924 - if (init_skey) 925 - current->mm->context.use_skey = 1; 926 - out_up: 927 - up_write(&mm->mmap_sem); 928 - } 929 - EXPORT_SYMBOL(page_table_reset_pgste); 930 - 931 837 int set_guest_storage_key(struct mm_struct *mm, unsigned long addr, 932 838 unsigned long key, bool nq) 933 839 { ··· 898 990 static inline unsigned long *page_table_alloc_pgste(struct mm_struct *mm) 899 991 { 900 992 return NULL; 901 - } 902 - 903 - void page_table_reset_pgste(struct mm_struct *mm, unsigned long start, 904 - unsigned long end, bool init_skey) 905 - { 906 993 } 907 994 
908 995 static inline void page_table_free_pgste(unsigned long *table) ··· 1250 1347 * Enable storage key handling from now on and initialize the storage 1251 1348 * keys with the default key. 1252 1349 */ 1253 - void s390_enable_skey(void) 1350 + static int __s390_enable_skey(pte_t *pte, unsigned long addr, 1351 + unsigned long next, struct mm_walk *walk) 1254 1352 { 1255 - page_table_reset_pgste(current->mm, 0, TASK_SIZE, true); 1353 + unsigned long ptev; 1354 + pgste_t pgste; 1355 + 1356 + pgste = pgste_get_lock(pte); 1357 + /* 1358 + * Remove all zero page mappings, 1359 + * after establishing a policy to forbid zero page mappings 1360 + * following faults for that page will get fresh anonymous pages 1361 + */ 1362 + if (is_zero_pfn(pte_pfn(*pte))) { 1363 + ptep_flush_direct(walk->mm, addr, pte); 1364 + pte_val(*pte) = _PAGE_INVALID; 1365 + } 1366 + /* Clear storage key */ 1367 + pgste_val(pgste) &= ~(PGSTE_ACC_BITS | PGSTE_FP_BIT | 1368 + PGSTE_GR_BIT | PGSTE_GC_BIT); 1369 + ptev = pte_val(*pte); 1370 + if (!(ptev & _PAGE_INVALID) && (ptev & _PAGE_WRITE)) 1371 + page_set_storage_key(ptev & PAGE_MASK, PAGE_DEFAULT_KEY, 1); 1372 + pgste_set_unlock(pte, pgste); 1373 + return 0; 1374 + } 1375 + 1376 + int s390_enable_skey(void) 1377 + { 1378 + struct mm_walk walk = { .pte_entry = __s390_enable_skey }; 1379 + struct mm_struct *mm = current->mm; 1380 + struct vm_area_struct *vma; 1381 + int rc = 0; 1382 + 1383 + down_write(&mm->mmap_sem); 1384 + if (mm_use_skey(mm)) 1385 + goto out_up; 1386 + 1387 + mm->context.use_skey = 1; 1388 + for (vma = mm->mmap; vma; vma = vma->vm_next) { 1389 + if (ksm_madvise(vma, vma->vm_start, vma->vm_end, 1390 + MADV_UNMERGEABLE, &vma->vm_flags)) { 1391 + mm->context.use_skey = 0; 1392 + rc = -ENOMEM; 1393 + goto out_up; 1394 + } 1395 + } 1396 + mm->def_flags &= ~VM_MERGEABLE; 1397 + 1398 + walk.mm = mm; 1399 + walk_page_range(0, TASK_SIZE, &walk); 1400 + 1401 + out_up: 1402 + up_write(&mm->mmap_sem); 1403 + return rc; 1256 1404 } 1257 
1405 EXPORT_SYMBOL_GPL(s390_enable_skey); 1406 + 1407 + /* 1408 + * Reset CMMA state, make all pages stable again. 1409 + */ 1410 + static int __s390_reset_cmma(pte_t *pte, unsigned long addr, 1411 + unsigned long next, struct mm_walk *walk) 1412 + { 1413 + pgste_t pgste; 1414 + 1415 + pgste = pgste_get_lock(pte); 1416 + pgste_val(pgste) &= ~_PGSTE_GPS_USAGE_MASK; 1417 + pgste_set_unlock(pte, pgste); 1418 + return 0; 1419 + } 1420 + 1421 + void s390_reset_cmma(struct mm_struct *mm) 1422 + { 1423 + struct mm_walk walk = { .pte_entry = __s390_reset_cmma }; 1424 + 1425 + down_write(&mm->mmap_sem); 1426 + walk.mm = mm; 1427 + walk_page_range(0, TASK_SIZE, &walk); 1428 + up_write(&mm->mmap_sem); 1429 + } 1430 + EXPORT_SYMBOL_GPL(s390_reset_cmma); 1258 1431 1259 1432 /* 1260 1433 * Test and reset if a guest page is dirty
+1 -1
arch/s390/pci/Makefile
··· 3 3 # 4 4 5 5 obj-$(CONFIG_PCI) += pci.o pci_dma.o pci_clp.o pci_sysfs.o \ 6 - pci_event.o pci_debug.o pci_insn.o 6 + pci_event.o pci_debug.o pci_insn.o pci_mmio.o
+5 -4
arch/s390/pci/pci.c
··· 369 369 370 370 if (type == PCI_CAP_ID_MSI && nvec > 1) 371 371 return 1; 372 - msi_vecs = min(nvec, ZPCI_MSI_VEC_MAX); 373 - msi_vecs = min_t(unsigned int, msi_vecs, CONFIG_PCI_NR_MSI); 372 + msi_vecs = min_t(unsigned int, nvec, zdev->max_msi); 374 373 375 374 /* Allocate adapter summary indicator bit */ 376 375 rc = -EIO; ··· 473 474 len = pci_resource_len(pdev, i); 474 475 if (!len) 475 476 continue; 476 - pdev->resource[i].start = (resource_size_t) pci_iomap(pdev, i, 0); 477 + pdev->resource[i].start = 478 + (resource_size_t __force) pci_iomap(pdev, i, 0); 477 479 pdev->resource[i].end = pdev->resource[i].start + len - 1; 478 480 } 479 481 } ··· 489 489 len = pci_resource_len(pdev, i); 490 490 if (!len) 491 491 continue; 492 - pci_iounmap(pdev, (void *) pdev->resource[i].start); 492 + pci_iounmap(pdev, (void __iomem __force *) 493 + pdev->resource[i].start); 493 494 } 494 495 } 495 496
+1
arch/s390/pci/pci_clp.c
··· 62 62 zdev->tlb_refresh = response->refresh; 63 63 zdev->dma_mask = response->dasm; 64 64 zdev->msi_addr = response->msia; 65 + zdev->max_msi = response->noi; 65 66 zdev->fmb_update = response->mui; 66 67 67 68 switch (response->version) {
+2 -5
arch/s390/pci/pci_debug.c
··· 158 158 159 159 void zpci_debug_exit(void) 160 160 { 161 - if (pci_debug_msg_id) 162 - debug_unregister(pci_debug_msg_id); 163 - if (pci_debug_err_id) 164 - debug_unregister(pci_debug_err_id); 165 - 161 + debug_unregister(pci_debug_msg_id); 162 + debug_unregister(pci_debug_err_id); 166 163 debugfs_remove(debugfs_root); 167 164 }
+115
arch/s390/pci/pci_mmio.c
··· 1 + /* 2 + * Access to PCI I/O memory from user space programs. 3 + * 4 + * Copyright IBM Corp. 2014 5 + * Author(s): Alexey Ishchuk <aishchuk@linux.vnet.ibm.com> 6 + */ 7 + #include <linux/kernel.h> 8 + #include <linux/syscalls.h> 9 + #include <linux/init.h> 10 + #include <linux/mm.h> 11 + #include <linux/errno.h> 12 + #include <linux/pci.h> 13 + 14 + static long get_pfn(unsigned long user_addr, unsigned long access, 15 + unsigned long *pfn) 16 + { 17 + struct vm_area_struct *vma; 18 + long ret; 19 + 20 + down_read(&current->mm->mmap_sem); 21 + ret = -EINVAL; 22 + vma = find_vma(current->mm, user_addr); 23 + if (!vma) 24 + goto out; 25 + ret = -EACCES; 26 + if (!(vma->vm_flags & access)) 27 + goto out; 28 + ret = follow_pfn(vma, user_addr, pfn); 29 + out: 30 + up_read(&current->mm->mmap_sem); 31 + return ret; 32 + } 33 + 34 + SYSCALL_DEFINE3(s390_pci_mmio_write, unsigned long, mmio_addr, 35 + const void __user *, user_buffer, size_t, length) 36 + { 37 + u8 local_buf[64]; 38 + void __iomem *io_addr; 39 + void *buf; 40 + unsigned long pfn; 41 + long ret; 42 + 43 + if (!zpci_is_enabled()) 44 + return -ENODEV; 45 + 46 + if (length <= 0 || PAGE_SIZE - (mmio_addr & ~PAGE_MASK) < length) 47 + return -EINVAL; 48 + if (length > 64) { 49 + buf = kmalloc(length, GFP_KERNEL); 50 + if (!buf) 51 + return -ENOMEM; 52 + } else 53 + buf = local_buf; 54 + 55 + ret = get_pfn(mmio_addr, VM_WRITE, &pfn); 56 + if (ret) 57 + goto out; 58 + io_addr = (void *)((pfn << PAGE_SHIFT) | (mmio_addr & ~PAGE_MASK)); 59 + 60 + ret = -EFAULT; 61 + if ((unsigned long) io_addr < ZPCI_IOMAP_ADDR_BASE) 62 + goto out; 63 + 64 + if (copy_from_user(buf, user_buffer, length)) 65 + goto out; 66 + 67 + memcpy_toio(io_addr, buf, length); 68 + ret = 0; 69 + out: 70 + if (buf != local_buf) 71 + kfree(buf); 72 + return ret; 73 + } 74 + 75 + SYSCALL_DEFINE3(s390_pci_mmio_read, unsigned long, mmio_addr, 76 + void __user *, user_buffer, size_t, length) 77 + { 78 + u8 local_buf[64]; 79 + void __iomem *io_addr; 
80 + void *buf; 81 + unsigned long pfn; 82 + long ret; 83 + 84 + if (!zpci_is_enabled()) 85 + return -ENODEV; 86 + 87 + if (length <= 0 || PAGE_SIZE - (mmio_addr & ~PAGE_MASK) < length) 88 + return -EINVAL; 89 + if (length > 64) { 90 + buf = kmalloc(length, GFP_KERNEL); 91 + if (!buf) 92 + return -ENOMEM; 93 + } else 94 + buf = local_buf; 95 + 96 + ret = get_pfn(mmio_addr, VM_READ, &pfn); 97 + if (ret) 98 + goto out; 99 + io_addr = (void *)((pfn << PAGE_SHIFT) | (mmio_addr & ~PAGE_MASK)); 100 + 101 + ret = -EFAULT; 102 + if ((unsigned long) io_addr < ZPCI_IOMAP_ADDR_BASE) 103 + goto out; 104 + 105 + memcpy_fromio(buf, io_addr, length); 106 + 107 + if (copy_to_user(user_buffer, buf, length)) 108 + goto out; 109 + 110 + ret = 0; 111 + out: 112 + if (buf != local_buf) 113 + kfree(buf); 114 + return ret; 115 + }
+16 -15
drivers/s390/block/dasd.c
··· 1377 1377 "I/O error, retry"); 1378 1378 break; 1379 1379 case -EINVAL: 1380 + /* 1381 + * device not valid so no I/O could be running 1382 + * handle CQR as termination successful 1383 + */ 1384 + cqr->status = DASD_CQR_CLEARED; 1385 + cqr->stopclk = get_tod_clock(); 1386 + cqr->starttime = 0; 1387 + /* no retries for invalid devices */ 1388 + cqr->retries = -1; 1389 + DBF_DEV_EVENT(DBF_ERR, device, "%s", 1390 + "EINVAL, handle as terminated"); 1391 + /* fake rc to success */ 1392 + rc = 0; 1393 + break; 1380 1394 case -EBUSY: 1381 1395 DBF_DEV_EVENT(DBF_ERR, device, "%s", 1382 1396 "device busy, retry later"); ··· 1697 1683 if (cqr->status == DASD_CQR_CLEAR_PENDING && 1698 1684 scsw_fctl(&irb->scsw) & SCSW_FCTL_CLEAR_FUNC) { 1699 1685 cqr->status = DASD_CQR_CLEARED; 1700 - if (cqr->callback_data == DASD_SLEEPON_START_TAG) 1701 - cqr->callback_data = DASD_SLEEPON_END_TAG; 1702 1686 dasd_device_clear_timer(device); 1703 1687 wake_up(&dasd_flush_wq); 1704 - wake_up(&generic_waitq); 1705 1688 dasd_schedule_device_bh(device); 1706 1689 return; 1707 1690 } ··· 2337 2326 return -EAGAIN; 2338 2327 2339 2328 /* normal recovery for basedev IO */ 2340 - if (__dasd_sleep_on_erp(cqr)) { 2329 + if (__dasd_sleep_on_erp(cqr)) 2330 + /* handle erp first */ 2341 2331 goto retry; 2342 - /* remember that ERP was needed */ 2343 - rc = 1; 2344 - /* skip processing for active cqr */ 2345 - if (cqr->status != DASD_CQR_TERMINATED && 2346 - cqr->status != DASD_CQR_NEED_ERP) 2347 - break; 2348 - } 2349 2332 } 2350 - 2351 - /* start ERP requests in upper loop */ 2352 - if (rc) 2353 - goto retry; 2354 2333 2355 2334 return 0; 2356 2335 }
+24 -2
drivers/s390/block/dasd_genhd.c
··· 99 99 int dasd_scan_partitions(struct dasd_block *block) 100 100 { 101 101 struct block_device *bdev; 102 + int retry, rc; 102 103 104 + retry = 5; 103 105 bdev = bdget_disk(block->gdp, 0); 104 - if (!bdev || blkdev_get(bdev, FMODE_READ, NULL) < 0) 106 + if (!bdev) { 107 + DBF_DEV_EVENT(DBF_ERR, block->base, "%s", 108 + "scan partitions error, bdget returned NULL"); 105 109 return -ENODEV; 110 + } 111 + 112 + rc = blkdev_get(bdev, FMODE_READ, NULL); 113 + if (rc < 0) { 114 + DBF_DEV_EVENT(DBF_ERR, block->base, 115 + "scan partitions error, blkdev_get returned %d", 116 + rc); 117 + return -ENODEV; 118 + } 106 119 /* 107 120 * See fs/partition/check.c:register_disk,rescan_partitions 108 121 * Can't call rescan_partitions directly. Use ioctl. 109 122 */ 110 - ioctl_by_bdev(bdev, BLKRRPART, 0); 123 + rc = ioctl_by_bdev(bdev, BLKRRPART, 0); 124 + while (rc == -EBUSY && retry > 0) { 125 + schedule(); 126 + rc = ioctl_by_bdev(bdev, BLKRRPART, 0); 127 + retry--; 128 + DBF_DEV_EVENT(DBF_ERR, block->base, 129 + "scan partitions error, retry %d rc %d", 130 + retry, rc); 131 + } 132 + 111 133 /* 112 134 * Since the matching blkdev_put call to the blkdev_get in 113 135 * this function is not called before dasd_destroy_partitions
+174 -48
drivers/s390/block/scm_blk.c
drivers/s390/block/scm_blk.c

···
 #include <linux/interrupt.h>
 #include <linux/spinlock.h>
+#include <linux/mempool.h>
 #include <linux/module.h>
 #include <linux/blkdev.h>
 #include <linux/genhd.h>
···
 debug_info_t *scm_debug;
 static int scm_major;
+static mempool_t *aidaw_pool;
 static DEFINE_SPINLOCK(list_lock);
 static LIST_HEAD(inactive_requests);
 static unsigned int nr_requests = 64;
+static unsigned int nr_requests_per_io = 8;
 static atomic_t nr_devices = ATOMIC_INIT(0);
 module_param(nr_requests, uint, S_IRUGO);
 MODULE_PARM_DESC(nr_requests, "Number of parallel requests.");
+
+module_param(nr_requests_per_io, uint, S_IRUGO);
+MODULE_PARM_DESC(nr_requests_per_io, "Number of requests per IO.");
 
 MODULE_DESCRIPTION("Block driver for s390 storage class memory.");
 MODULE_LICENSE("GPL");
···
	struct aob_rq_header *aobrq = to_aobrq(scmrq);
 
	free_page((unsigned long) scmrq->aob);
-	free_page((unsigned long) scmrq->aidaw);
	__scm_free_rq_cluster(scmrq);
+	kfree(scmrq->request);
	kfree(aobrq);
 }
···
		__scm_free_rq(scmrq);
	}
	spin_unlock_irq(&list_lock);
+
+	mempool_destroy(aidaw_pool);
 }
 
 static int __scm_alloc_rq(void)
···
		return -ENOMEM;
 
	scmrq = (void *) aobrq->data;
-	scmrq->aidaw = (void *) get_zeroed_page(GFP_DMA);
	scmrq->aob = (void *) get_zeroed_page(GFP_DMA);
-	if (!scmrq->aob || !scmrq->aidaw) {
-		__scm_free_rq(scmrq);
-		return -ENOMEM;
-	}
+	if (!scmrq->aob)
+		goto free;
 
-	if (__scm_alloc_rq_cluster(scmrq)) {
-		__scm_free_rq(scmrq);
-		return -ENOMEM;
-	}
+	scmrq->request = kcalloc(nr_requests_per_io, sizeof(scmrq->request[0]),
+				 GFP_KERNEL);
+	if (!scmrq->request)
+		goto free;
+
+	if (__scm_alloc_rq_cluster(scmrq))
+		goto free;
 
	INIT_LIST_HEAD(&scmrq->list);
	spin_lock_irq(&list_lock);
···
	spin_unlock_irq(&list_lock);
 
	return 0;
+free:
+	__scm_free_rq(scmrq);
+	return -ENOMEM;
 }
 
 static int scm_alloc_rqs(unsigned int nrqs)
 {
	int ret = 0;
+
+	aidaw_pool = mempool_create_page_pool(max(nrqs/8, 1U), 0);
+	if (!aidaw_pool)
+		return -ENOMEM;
 
	while (nrqs-- && !ret)
		ret = __scm_alloc_rq();
···
 static void scm_request_done(struct scm_request *scmrq)
 {
	unsigned long flags;
+	struct msb *msb;
+	u64 aidaw;
+	int i;
+
+	for (i = 0; i < nr_requests_per_io && scmrq->request[i]; i++) {
+		msb = &scmrq->aob->msb[i];
+		aidaw = msb->data_addr;
+
+		if ((msb->flags & MSB_FLAG_IDA) && aidaw &&
+		    IS_ALIGNED(aidaw, PAGE_SIZE))
+			mempool_free(virt_to_page(aidaw), aidaw_pool);
+	}
 
	spin_lock_irqsave(&list_lock, flags);
	list_add(&scmrq->list, &inactive_requests);
···
	return rq_data_dir(req) != WRITE || bdev->state != SCM_WR_PROHIBIT;
 }
 
-static void scm_request_prepare(struct scm_request *scmrq)
+static inline struct aidaw *scm_aidaw_alloc(void)
+{
+	struct page *page = mempool_alloc(aidaw_pool, GFP_ATOMIC);
+
+	return page ? page_address(page) : NULL;
+}
+
+static inline unsigned long scm_aidaw_bytes(struct aidaw *aidaw)
+{
+	unsigned long _aidaw = (unsigned long) aidaw;
+	unsigned long bytes = ALIGN(_aidaw, PAGE_SIZE) - _aidaw;
+
+	return (bytes / sizeof(*aidaw)) * PAGE_SIZE;
+}
+
+struct aidaw *scm_aidaw_fetch(struct scm_request *scmrq, unsigned int bytes)
+{
+	struct aidaw *aidaw;
+
+	if (scm_aidaw_bytes(scmrq->next_aidaw) >= bytes)
+		return scmrq->next_aidaw;
+
+	aidaw = scm_aidaw_alloc();
+	if (aidaw)
+		memset(aidaw, 0, PAGE_SIZE);
+	return aidaw;
+}
+
+static int scm_request_prepare(struct scm_request *scmrq)
 {
	struct scm_blk_dev *bdev = scmrq->bdev;
	struct scm_device *scmdev = bdev->gendisk->private_data;
-	struct aidaw *aidaw = scmrq->aidaw;
-	struct msb *msb = &scmrq->aob->msb[0];
+	int pos = scmrq->aob->request.msb_count;
+	struct msb *msb = &scmrq->aob->msb[pos];
+	struct request *req = scmrq->request[pos];
	struct req_iterator iter;
+	struct aidaw *aidaw;
	struct bio_vec bv;
 
+	aidaw = scm_aidaw_fetch(scmrq, blk_rq_bytes(req));
+	if (!aidaw)
+		return -ENOMEM;
+
	msb->bs = MSB_BS_4K;
-	scmrq->aob->request.msb_count = 1;
-	msb->scm_addr = scmdev->address +
-		((u64) blk_rq_pos(scmrq->request) << 9);
-	msb->oc = (rq_data_dir(scmrq->request) == READ) ?
-		MSB_OC_READ : MSB_OC_WRITE;
+	scmrq->aob->request.msb_count++;
+	msb->scm_addr = scmdev->address + ((u64) blk_rq_pos(req) << 9);
+	msb->oc = (rq_data_dir(req) == READ) ? MSB_OC_READ : MSB_OC_WRITE;
	msb->flags |= MSB_FLAG_IDA;
	msb->data_addr = (u64) aidaw;
 
-	rq_for_each_segment(bv, scmrq->request, iter) {
+	rq_for_each_segment(bv, req, iter) {
		WARN_ON(bv.bv_offset);
		msb->blk_count += bv.bv_len >> 12;
		aidaw->data_addr = (u64) page_address(bv.bv_page);
		aidaw++;
	}
+
+	scmrq->next_aidaw = aidaw;
+	return 0;
+}
+
+static inline void scm_request_set(struct scm_request *scmrq,
+				   struct request *req)
+{
+	scmrq->request[scmrq->aob->request.msb_count] = req;
 }
 
 static inline void scm_request_init(struct scm_blk_dev *bdev,
-				    struct scm_request *scmrq,
-				    struct request *req)
+				    struct scm_request *scmrq)
 {
	struct aob_rq_header *aobrq = to_aobrq(scmrq);
	struct aob *aob = scmrq->aob;
 
+	memset(scmrq->request, 0,
+	       nr_requests_per_io * sizeof(scmrq->request[0]));
	memset(aob, 0, sizeof(*aob));
-	memset(scmrq->aidaw, 0, PAGE_SIZE);
	aobrq->scmdev = bdev->scmdev;
	aob->request.cmd_code = ARQB_CMD_MOVE;
	aob->request.data = (u64) aobrq;
-	scmrq->request = req;
	scmrq->bdev = bdev;
	scmrq->retries = 4;
	scmrq->error = 0;
+	/* We don't use all msbs - place aidaws at the end of the aob page. */
+	scmrq->next_aidaw = (void *) &aob->msb[nr_requests_per_io];
	scm_request_cluster_init(scmrq);
 }
···
 void scm_request_requeue(struct scm_request *scmrq)
 {
	struct scm_blk_dev *bdev = scmrq->bdev;
+	int i;
 
	scm_release_cluster(scmrq);
-	blk_requeue_request(bdev->rq, scmrq->request);
+	for (i = 0; i < nr_requests_per_io && scmrq->request[i]; i++)
+		blk_requeue_request(bdev->rq, scmrq->request[i]);
+
	atomic_dec(&bdev->queued_reqs);
	scm_request_done(scmrq);
	scm_ensure_queue_restart(bdev);
···
 void scm_request_finish(struct scm_request *scmrq)
 {
	struct scm_blk_dev *bdev = scmrq->bdev;
+	int i;
 
	scm_release_cluster(scmrq);
-	blk_end_request_all(scmrq->request, scmrq->error);
+	for (i = 0; i < nr_requests_per_io && scmrq->request[i]; i++)
+		blk_end_request_all(scmrq->request[i], scmrq->error);
+
	atomic_dec(&bdev->queued_reqs);
	scm_request_done(scmrq);
+}
+
+static int scm_request_start(struct scm_request *scmrq)
+{
+	struct scm_blk_dev *bdev = scmrq->bdev;
+	int ret;
+
+	atomic_inc(&bdev->queued_reqs);
+	if (!scmrq->aob->request.msb_count) {
+		scm_request_requeue(scmrq);
+		return -EINVAL;
+	}
+
+	ret = eadm_start_aob(scmrq->aob);
+	if (ret) {
+		SCM_LOG(5, "no subchannel");
+		scm_request_requeue(scmrq);
+	}
+	return ret;
 }
 
 static void scm_blk_request(struct request_queue *rq)
 {
	struct scm_device *scmdev = rq->queuedata;
	struct scm_blk_dev *bdev = dev_get_drvdata(&scmdev->dev);
-	struct scm_request *scmrq;
+	struct scm_request *scmrq = NULL;
	struct request *req;
-	int ret;
 
	while ((req = blk_peek_request(rq))) {
		if (req->cmd_type != REQ_TYPE_FS) {
···
			continue;
		}
 
-		if (!scm_permit_request(bdev, req)) {
-			scm_ensure_queue_restart(bdev);
-			return;
-		}
-		scmrq = scm_request_fetch();
+		if (!scm_permit_request(bdev, req))
+			goto out;
+
		if (!scmrq) {
-			SCM_LOG(5, "no request");
-			scm_ensure_queue_restart(bdev);
-			return;
+			scmrq = scm_request_fetch();
+			if (!scmrq) {
+				SCM_LOG(5, "no request");
+				goto out;
+			}
+			scm_request_init(bdev, scmrq);
		}
-		scm_request_init(bdev, scmrq, req);
+		scm_request_set(scmrq, req);
+
		if (!scm_reserve_cluster(scmrq)) {
			SCM_LOG(5, "cluster busy");
+			scm_request_set(scmrq, NULL);
+			if (scmrq->aob->request.msb_count)
+				goto out;
+
			scm_request_done(scmrq);
			return;
		}
+
		if (scm_need_cluster_request(scmrq)) {
-			atomic_inc(&bdev->queued_reqs);
-			blk_start_request(req);
-			scm_initiate_cluster_request(scmrq);
-			return;
+			if (scmrq->aob->request.msb_count) {
+				/* Start cluster requests separately. */
+				scm_request_set(scmrq, NULL);
+				if (scm_request_start(scmrq))
+					return;
+			} else {
+				atomic_inc(&bdev->queued_reqs);
+				blk_start_request(req);
+				scm_initiate_cluster_request(scmrq);
+			}
+			scmrq = NULL;
+			continue;
		}
-		scm_request_prepare(scmrq);
-		atomic_inc(&bdev->queued_reqs);
+
+		if (scm_request_prepare(scmrq)) {
+			SCM_LOG(5, "aidaw alloc failed");
+			scm_request_set(scmrq, NULL);
+			goto out;
+		}
		blk_start_request(req);
 
-		ret = eadm_start_aob(scmrq->aob);
-		if (ret) {
-			SCM_LOG(5, "no subchannel");
-			scm_request_requeue(scmrq);
+		if (scmrq->aob->request.msb_count < nr_requests_per_io)
+			continue;
+
+		if (scm_request_start(scmrq))
			return;
-		}
+
+		scmrq = NULL;
	}
+out:
+	if (scmrq)
+		scm_request_start(scmrq);
+	else
+		scm_ensure_queue_restart(bdev);
 }
 
 static void __scmrq_log_error(struct scm_request *scmrq)
···
	spin_unlock_irqrestore(&bdev->lock, flags);
 }
 
+static bool __init scm_blk_params_valid(void)
+{
+	if (!nr_requests_per_io || nr_requests_per_io > 64)
+		return false;
+
+	return scm_cluster_size_valid();
+}
+
 static int __init scm_blk_init(void)
 {
	int ret = -EINVAL;
 
-	if (!scm_cluster_size_valid())
+	if (!scm_blk_params_valid())
		goto out;
 
	ret = register_blkdev(0, "scm");
drivers/s390/block/scm_blk.h (+4 -2)

···
 struct scm_request {
	struct scm_blk_dev *bdev;
-	struct request *request;
-	struct aidaw *aidaw;
+	struct aidaw *next_aidaw;
+	struct request **request;
	struct aob *aob;
	struct list_head list;
	u8 retries;
···
 void scm_request_finish(struct scm_request *);
 void scm_request_requeue(struct scm_request *);
+
+struct aidaw *scm_aidaw_fetch(struct scm_request *scmrq, unsigned int bytes);
 
 int scm_drv_init(void);
 void scm_drv_cleanup(void);
drivers/s390/block/scm_blk_cluster.c (+47 -22)

···
	scmrq->cluster.state = CLUSTER_NONE;
 }
 
-static bool clusters_intersect(struct scm_request *A, struct scm_request *B)
+static bool clusters_intersect(struct request *A, struct request *B)
 {
	unsigned long firstA, lastA, firstB, lastB;
 
-	firstA = ((u64) blk_rq_pos(A->request) << 9) / CLUSTER_SIZE;
-	lastA = (((u64) blk_rq_pos(A->request) << 9) +
-		blk_rq_bytes(A->request) - 1) / CLUSTER_SIZE;
+	firstA = ((u64) blk_rq_pos(A) << 9) / CLUSTER_SIZE;
+	lastA = (((u64) blk_rq_pos(A) << 9) +
+		blk_rq_bytes(A) - 1) / CLUSTER_SIZE;
 
-	firstB = ((u64) blk_rq_pos(B->request) << 9) / CLUSTER_SIZE;
-	lastB = (((u64) blk_rq_pos(B->request) << 9) +
-		blk_rq_bytes(B->request) - 1) / CLUSTER_SIZE;
+	firstB = ((u64) blk_rq_pos(B) << 9) / CLUSTER_SIZE;
+	lastB = (((u64) blk_rq_pos(B) << 9) +
+		blk_rq_bytes(B) - 1) / CLUSTER_SIZE;
 
	return (firstB <= lastA && firstA <= lastB);
 }
 
 bool scm_reserve_cluster(struct scm_request *scmrq)
 {
+	struct request *req = scmrq->request[scmrq->aob->request.msb_count];
	struct scm_blk_dev *bdev = scmrq->bdev;
	struct scm_request *iter;
+	int pos, add = 1;
 
	if (write_cluster_size == 0)
		return true;
 
	spin_lock(&bdev->lock);
	list_for_each_entry(iter, &bdev->cluster_list, cluster.list) {
-		if (clusters_intersect(scmrq, iter) &&
-		    (rq_data_dir(scmrq->request) == WRITE ||
-		     rq_data_dir(iter->request) == WRITE)) {
-			spin_unlock(&bdev->lock);
-			return false;
+		if (iter == scmrq) {
+			/*
+			 * We don't have to use clusters_intersect here, since
+			 * cluster requests are always started separately.
+			 */
+			add = 0;
+			continue;
+		}
+		for (pos = 0; pos <= iter->aob->request.msb_count; pos++) {
+			if (clusters_intersect(req, iter->request[pos]) &&
+			    (rq_data_dir(req) == WRITE ||
+			     rq_data_dir(iter->request[pos]) == WRITE)) {
+				spin_unlock(&bdev->lock);
+				return false;
+			}
		}
	}
-	list_add(&scmrq->cluster.list, &bdev->cluster_list);
+	if (add)
+		list_add(&scmrq->cluster.list, &bdev->cluster_list);
	spin_unlock(&bdev->lock);
 
	return true;
···
	blk_queue_io_opt(bdev->rq, CLUSTER_SIZE);
 }
 
-static void scm_prepare_cluster_request(struct scm_request *scmrq)
+static int scm_prepare_cluster_request(struct scm_request *scmrq)
 {
	struct scm_blk_dev *bdev = scmrq->bdev;
	struct scm_device *scmdev = bdev->gendisk->private_data;
-	struct request *req = scmrq->request;
-	struct aidaw *aidaw = scmrq->aidaw;
+	struct request *req = scmrq->request[0];
	struct msb *msb = &scmrq->aob->msb[0];
	struct req_iterator iter;
+	struct aidaw *aidaw;
	struct bio_vec bv;
	int i = 0;
	u64 addr;
···
		scmrq->cluster.state = CLUSTER_READ;
		/* fall through */
	case CLUSTER_READ:
-		scmrq->aob->request.msb_count = 1;
		msb->bs = MSB_BS_4K;
		msb->oc = MSB_OC_READ;
		msb->flags = MSB_FLAG_IDA;
-		msb->data_addr = (u64) aidaw;
		msb->blk_count = write_cluster_size;
 
		addr = scmdev->address + ((u64) blk_rq_pos(req) << 9);
···
				  CLUSTER_SIZE))
			msb->blk_count = 2 * write_cluster_size;
 
+		aidaw = scm_aidaw_fetch(scmrq, msb->blk_count * PAGE_SIZE);
+		if (!aidaw)
+			return -ENOMEM;
+
+		scmrq->aob->request.msb_count = 1;
+		msb->data_addr = (u64) aidaw;
		for (i = 0; i < msb->blk_count; i++) {
			aidaw->data_addr = (u64) scmrq->cluster.buf[i];
			aidaw++;
···
 
		break;
	case CLUSTER_WRITE:
+		aidaw = (void *) msb->data_addr;
		msb->oc = MSB_OC_WRITE;
 
		for (addr = msb->scm_addr;
···
		}
		break;
	}
+	return 0;
 }
 
 bool scm_need_cluster_request(struct scm_request *scmrq)
 {
-	if (rq_data_dir(scmrq->request) == READ)
+	int pos = scmrq->aob->request.msb_count;
+
+	if (rq_data_dir(scmrq->request[pos]) == READ)
		return false;
 
-	return blk_rq_bytes(scmrq->request) < CLUSTER_SIZE;
+	return blk_rq_bytes(scmrq->request[pos]) < CLUSTER_SIZE;
 }
 
 /* Called with queue lock held. */
 void scm_initiate_cluster_request(struct scm_request *scmrq)
 {
-	scm_prepare_cluster_request(scmrq);
+	if (scm_prepare_cluster_request(scmrq))
+		goto requeue;
	if (eadm_start_aob(scmrq->aob))
-		scm_request_requeue(scmrq);
+		goto requeue;
+	return;
+requeue:
+	scm_request_requeue(scmrq);
 }
 
 bool scm_test_cluster_request(struct scm_request *scmrq)
drivers/s390/char/Kconfig (+10)

···
	  want for inform other people about your kernel panics,
	  need this feature and intend to run your kernel in LPAR.
 
+config SCLP_ASYNC_ID
+	string "Component ID for Call Home"
+	depends on SCLP_ASYNC
+	default "000000000"
+	help
+	  The Component ID for Call Home is used to identify the correct
+	  problem reporting queue the call home records should be sent to.
+
+	  If you are unsure, please use the default value "000000000".
+
 config HMC_DRV
	def_tristate m
	prompt "Support for file transfers from HMC drive CD/DVD-ROM"
drivers/s390/char/sclp_async.c (+2 -1)

···
	 * Retain Queue
	 * e.g. 5639CC140 500 Red Hat RHEL5 Linux for zSeries (RHEL AS)
	 */
-	strncpy(sccb->evbuf.comp_id, "000000000", sizeof(sccb->evbuf.comp_id));
+	strncpy(sccb->evbuf.comp_id, CONFIG_SCLP_ASYNC_ID,
+		sizeof(sccb->evbuf.comp_id));
	sccb->evbuf.header.length = sizeof(sccb->evbuf);
	sccb->header.length = sizeof(sccb->evbuf) + sizeof(sccb->header);
	sccb->header.function_code = SCLP_NORMAL_WRITE;
drivers/s390/cio/eadm_sch.c (+1 -1)

···
 MODULE_DESCRIPTION("driver for s390 eadm subchannels");
 MODULE_LICENSE("GPL");
 
-#define EADM_TIMEOUT (5 * HZ)
+#define EADM_TIMEOUT (7 * HZ)
 static DEFINE_SPINLOCK(list_lock);
 static LIST_HEAD(eadm_list);
include/asm-generic/pgtable.h (+11)

···
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 #endif
 
+#ifndef __HAVE_ARCH_PMDP_GET_AND_CLEAR_FULL
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+static inline pmd_t pmdp_get_and_clear_full(struct mm_struct *mm,
+					    unsigned long address, pmd_t *pmdp,
+					    int full)
+{
+	return pmdp_get_and_clear(mm, address, pmdp);
+}
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+#endif
+
 #ifndef __HAVE_ARCH_PTEP_GET_AND_CLEAR_FULL
 static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm,
					    unsigned long address, pte_t *ptep,
include/linux/kprobes.h (+1)

···
 extern int arch_prepare_kprobe_ftrace(struct kprobe *p);
 #endif
 
+int arch_check_ftrace_location(struct kprobe *p);
 
 /* Get the kprobe at this addr (if any) - called with preemption disabled */
 struct kprobe *get_kprobe(void *addr);
include/linux/mm.h (+11)

···
 #define __pa_symbol(x)  __pa(RELOC_HIDE((unsigned long)(x), 0))
 #endif
 
+/*
+ * To prevent common memory management code establishing
+ * a zero page mapping on a read fault.
+ * This macro should be defined within <asm/pgtable.h>.
+ * s390 does this to prevent multiplexing of hardware bits
+ * related to the physical page in case of virtualization.
+ */
+#ifndef mm_forbids_zeropage
+#define mm_forbids_zeropage(X)	(0)
+#endif
+
 extern unsigned long sysctl_user_reserve_kbytes;
 extern unsigned long sysctl_admin_reserve_kbytes;
kernel/kprobes.c (+11 -7)

···
	return ret;
 }
 
-static int check_kprobe_address_safe(struct kprobe *p,
-				     struct module **probed_mod)
+int __weak arch_check_ftrace_location(struct kprobe *p)
 {
-	int ret = 0;
	unsigned long ftrace_addr;
 
-	/*
-	 * If the address is located on a ftrace nop, set the
-	 * breakpoint to the following instruction.
-	 */
	ftrace_addr = ftrace_location((unsigned long)p->addr);
	if (ftrace_addr) {
 #ifdef CONFIG_KPROBES_ON_FTRACE
···
		return -EINVAL;
 #endif
	}
+	return 0;
+}
 
+static int check_kprobe_address_safe(struct kprobe *p,
+				     struct module **probed_mod)
+{
+	int ret;
+
+	ret = arch_check_ftrace_location(p);
+	if (ret)
+		return ret;
	jump_label_lock();
	preempt_disable();
kernel/sys_ni.c (+2)

···
 cond_syscall(sys_spu_run);
 cond_syscall(sys_spu_create);
 cond_syscall(sys_subpage_prot);
+cond_syscall(sys_s390_pci_mmio_read);
+cond_syscall(sys_s390_pci_mmio_write);
 
 /* mmu depending weak syscall entries */
 cond_syscall(sys_mprotect);
mm/huge_memory.c (+3 -2)

···
		return VM_FAULT_OOM;
	if (unlikely(khugepaged_enter(vma, vma->vm_flags)))
		return VM_FAULT_OOM;
-	if (!(flags & FAULT_FLAG_WRITE) &&
+	if (!(flags & FAULT_FLAG_WRITE) && !mm_forbids_zeropage(mm) &&
	    transparent_hugepage_use_zero_page()) {
		spinlock_t *ptl;
		pgtable_t pgtable;
···
	 * pgtable_trans_huge_withdraw after finishing pmdp related
	 * operations.
	 */
-	orig_pmd = pmdp_get_and_clear(tlb->mm, addr, pmd);
+	orig_pmd = pmdp_get_and_clear_full(tlb->mm, addr, pmd,
+					   tlb->fullmm);
	tlb_remove_pmd_tlb_entry(tlb, pmd, addr);
	pgtable = pgtable_trans_huge_withdraw(tlb->mm, pmd);
	if (is_huge_zero_pmd(orig_pmd)) {
mm/memory.c (+1 -1)

···
		return VM_FAULT_SIGBUS;
 
	/* Use the zero-page for reads */
-	if (!(flags & FAULT_FLAG_WRITE)) {
+	if (!(flags & FAULT_FLAG_WRITE) && !mm_forbids_zeropage(mm)) {
		entry = pte_mkspecial(pfn_pte(my_zero_pfn(address),
						vma->vm_page_prot));
		page_table = pte_offset_map_lock(mm, pmd, address, &ptl);
scripts/recordmcount.c (+1 -1)

···
	}
	if (w2(ghdr->e_machine) == EM_S390) {
		reltype = R_390_64;
-		mcount_adjust_64 = -8;
+		mcount_adjust_64 = -14;
	}
	if (w2(ghdr->e_machine) == EM_MIPS) {
		reltype = R_MIPS_64;
scripts/recordmcount.pl (+1 -1)

···
 
 } elsif ($arch eq "s390" && $bits == 64) {
	$mcount_regex = "^\\s*([0-9a-fA-F]+):\\s*R_390_(PC|PLT)32DBL\\s+_mcount\\+0x2\$";
-	$mcount_adjust = -8;
+	$mcount_adjust = -14;
	$alignment = 8;
	$type = ".quad";
	$ld .= " -m elf64_s390";