
Merge tag 's390-6.3-1' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux

Pull s390 updates from Heiko Carstens:

- Large cleanup of the con3270/tty3270 driver. Among others this adds
  support for:
  - background colors
  - ASCII line characters
  - VT100
  - geometries other than 80x24

- Cleanup and improve cmpxchg() code. Also add cmpxchg_user_key() to
uaccess functions, which will be used by KVM to access KVM guest
memory with a specific storage key
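
The semantics cmpxchg_user_key() builds on can be pictured with a plain C11 userspace sketch. This is a hedged illustration, not the kernel API: cmpxchg_demo() is a made-up stand-in, and it ignores the storage-key check and the uaccess fault handling that the real s390 helper performs.

```c
#include <stdatomic.h>

/*
 * Userspace stand-in for the compare-and-exchange semantics that
 * cmpxchg_user_key() provides: replace *ptr with "new" only if it
 * still contains "old", and return the value observed beforehand.
 * The real s390 helper additionally performs the access with a
 * caller-supplied storage key and handles user-space faults.
 */
static unsigned int cmpxchg_demo(_Atomic unsigned int *ptr,
                                 unsigned int old, unsigned int new)
{
	unsigned int expected = old;

	atomic_compare_exchange_strong(ptr, &expected, new);
	return expected;	/* == old on success, current value on failure */
}
```

A caller such as KVM can retry in a loop whenever the returned value differs from the expected one, which is the usual lock-free update pattern.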

- Add support for user space events counting to CPUMF

- Cleanup the vfio/ccw code, which now also allows proper support of
2K Format-2 IDALs

- Move kernel page table allocation and initialization to the
decompressor, which finally allows entering the kernel with dynamic
address translation (DAT) enabled. This in turn allows removing
special-handling code in the kernel that had to distinguish whether
DAT is on or off

- Replace kretprobe with rethook

- Various improvements to vfio/ap queue resets:
  - Use TAPQ to verify completion of a reset in progress rather than
    multiple invocations of ZAPQ.
  - Check TAPQ response codes when verifying successful completion of
    ZAPQ.
  - Fix erroneous handling of some error response codes.
  - Increase the maximum amount of time to wait for successful
    completion of ZAPQ.
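
The reworked reset verification above amounts to a bounded polling loop: issue ZAPQ once, then repeatedly query queue status (TAPQ in the real driver) until the reset is no longer in progress or the wait budget is exhausted. Here is a hedged, self-contained sketch, not the vfio/ap code; the *busy_polls countdown merely simulates how long the hardware stays busy.

```c
enum reset_rc { RESET_DONE, RESET_TIMEOUT };

/*
 * Sketch of "verify completion with TAPQ instead of re-issuing ZAPQ":
 * poll a busy indication up to max_polls times. *busy_polls simulates
 * how many status queries still report "reset in progress"; the real
 * driver sleeps between queries and decodes actual AP response codes.
 */
static enum reset_rc wait_for_reset(int *busy_polls, unsigned int max_polls)
{
	unsigned int i;

	for (i = 0; i < max_polls; i++) {
		if (*busy_polls <= 0)	/* status query: reset completed */
			return RESET_DONE;
		(*busy_polls)--;	/* still busy, poll again */
	}
	return RESET_TIMEOUT;
}
```

Raising the maximum wait corresponds to increasing max_polls (or the sleep between polls) rather than issuing the reset again.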

- Rework system call wrappers to get rid of alias functions, which were
only left on s390

- Cleanup the diag288_wdt watchdog driver. It has been agreed with
Guenter Roeck that this goes upstream via the s390 tree

- Add missing loadparm parameter handling for list-directed ECKD
ipl/reipl

- Various improvements to memory detection code

- Remove arch_cpu_idle_time() since the current implementation is
broken: it allows user-space-observable accounted idle times to
temporarily decrease

- Add Reset DAT-Protection support: (only) allow changing PTEs from RO
to RW with the new RDP instruction. Unlike the currently used IPTE
instruction, RDP does not necessarily guarantee that the TLBs of all
CPUs are synchronously flushed, which means that remote CPUs can see
spurious protection faults. The overall gain of not requiring the
all-CPU synchronization that IPTE needs should make this beneficial
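
Why skipping the synchronous flush is safe for RO-to-RW transitions can be sketched with a toy model (hypothetical, not kernel code): a remote CPU holding a stale read-only TLB entry takes at worst a spurious protection fault, and the fault handler resolves it by re-reading the authoritative PTE.

```c
struct toy_pte { int writable; };	/* authoritative page table entry */
struct toy_tlb { int writable; };	/* possibly stale per-CPU cached copy */

/*
 * Toy model of the RDP trade-off: after a PTE changes from RO to RW
 * without a remote TLB flush, a write through a stale RO entry takes
 * a spurious protection fault. The handler re-reads the PTE, refreshes
 * the cached entry, and the write then succeeds; only a PTE that really
 * is read-only yields a genuine protection violation.
 */
static int toy_write(struct toy_tlb *tlb, const struct toy_pte *pte,
                     int *faults)
{
	if (!tlb->writable) {		/* protection fault (maybe spurious) */
		(*faults)++;
		if (!pte->writable)
			return -1;	/* genuine violation */
		tlb->writable = 1;	/* refresh from PTE, retry the write */
	}
	return 0;			/* write performed */
}
```

The key invariant: relaxing protection without a flush can only cause extra faults, never an illegitimate write, which is why RO-to-RW is the one direction where the flush may be skipped.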

- Fix KFENCE page fault reporting

- Smaller cleanups and improvements all over the place

* tag 's390-6.3-1' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux: (182 commits)
s390/irq,idle: simplify idle check
s390/processor: add test_and_set_cpu_flag() and test_and_clear_cpu_flag()
s390/processor: let cpu helper functions return boolean values
s390/kfence: fix page fault reporting
s390/zcrypt: introduce ctfm field in struct CPRBX
s390: remove confusing comment from uapi types header file
vfio/ccw: remove WARN_ON during shutdown
s390/entry: remove toolchain dependent micro-optimization
s390/mem_detect: do not truncate online memory ranges info
s390/vx: remove __uint128_t type from __vector128 struct again
s390/mm: add support for RDP (Reset DAT-Protection)
s390/mm: define private VM_FAULT_* reasons from top bits
Documentation: s390: correct spelling
s390/ap: fix status returned by ap_qact()
s390/ap: fix status returned by ap_aqic()
s390: vfio-ap: tighten the NIB validity check
Revert "s390/mem_detect: do not update output parameters on failure"
s390/idle: remove arch_cpu_idle_time() and corresponding code
s390/vx: use simple assignments to access __vector128 members
s390/vx: add 64 and 128 bit members to __vector128 struct
...

+4580 -4582
+5 -10
Documentation/ABI/testing/sysfs-bus-css
···
 What:		/sys/bus/css/devices/.../type
 Date:		March 2008
-Contact:	Cornelia Huck <cornelia.huck@de.ibm.com>
-		linux-s390@vger.kernel.org
+Contact:	linux-s390@vger.kernel.org
 Description:	Contains the subchannel type, as reported by the hardware.
 		This attribute is present for all subchannel types.
 
 What:		/sys/bus/css/devices/.../modalias
 Date:		March 2008
-Contact:	Cornelia Huck <cornelia.huck@de.ibm.com>
-		linux-s390@vger.kernel.org
+Contact:	linux-s390@vger.kernel.org
 Description:	Contains the module alias as reported with uevents.
 		It is of the format css:t<type> and present for all
 		subchannel types.
 
 What:		/sys/bus/css/drivers/io_subchannel/.../chpids
 Date:		December 2002
-Contact:	Cornelia Huck <cornelia.huck@de.ibm.com>
-		linux-s390@vger.kernel.org
+Contact:	linux-s390@vger.kernel.org
 Description:	Contains the ids of the channel paths used by this
 		subchannel, as reported by the channel subsystem
 		during subchannel recognition.
···
 What:		/sys/bus/css/drivers/io_subchannel/.../pimpampom
 Date:		December 2002
-Contact:	Cornelia Huck <cornelia.huck@de.ibm.com>
-		linux-s390@vger.kernel.org
+Contact:	linux-s390@vger.kernel.org
 Description:	Contains the PIM/PAM/POM values, as reported by the
 		channel subsystem when last queried by the common I/O
 		layer (this implies that this attribute is not necessarily
···
 What:		/sys/bus/css/devices/.../driver_override
 Date:		June 2019
-Contact:	Cornelia Huck <cohuck@redhat.com>
-		linux-s390@vger.kernel.org
+Contact:	linux-s390@vger.kernel.org
 Description:	This file allows the driver for a device to be specified. When
 		specified, only a driver with a name matching the value written
 		to driver_override will have an opportunity to bind to the
+2 -2
Documentation/s390/pci.rst
···
 The slot entries are set up using the function identifier (FID) of the
 PCI function. The format depicted as XXXXXXXX above is 8 hexadecimal digits
-with 0 padding and lower case hexadecimal digitis.
+with 0 padding and lower case hexadecimal digits.
 
 - /sys/bus/pci/slots/XXXXXXXX/power
···
 - function_handle
   Low-level identifier used for a configured PCI function.
-  It might be useful for debuging.
+  It might be useful for debugging.
 
 - pchid
   Model-dependent location of the I/O adapter.
+3 -3
Documentation/s390/vfio-ccw.rst
···
 Use the 'mdev_create' sysfs file, we need to manually create one (and
 only one for our case) mediated device.
 3. vfio_mdev.ko drives the mediated ccw device.
-   vfio_mdev is also the vfio device drvier. It will probe the mdev and
+   vfio_mdev is also the vfio device driver. It will probe the mdev and
    add it to an iommu_group and a vfio_group. Then we could pass through
    the mdev to a guest.
···
 	The operation was successful.
 
 ``-EOPNOTSUPP``
-  The orb specified transport mode or an unidentified IDAW format, or the
-  scsw specified a function other than the start function.
+  The ORB specified transport mode or the
+  SCSW specified a function other than the start function.
 
 ``-EIO``
 	A request was issued while the device was not in a state ready to accept
+8
MAINTAINERS
···
 F:	Documentation/s390/
 F:	arch/s390/
 F:	drivers/s390/
+F:	drivers/watchdog/diag288_wdt.c
 
 S390 COMMON I/O LAYER
 M:	Vineeth Vijayan <vneethv@linux.ibm.com>
···
 F:	arch/s390/pci/
 F:	drivers/pci/hotplug/s390_pci_hpc.c
 F:	Documentation/s390/pci.rst
+
+S390 SCM DRIVER
+M:	Vineeth Vijayan <vneethv@linux.ibm.com>
+L:	linux-s390@vger.kernel.org
+S:	Supported
+F:	drivers/s390/block/scm*
+F:	drivers/s390/cio/scm.c
 
 S390 VFIO AP DRIVER
 M:	Tony Krowiak <akrowiak@linux.ibm.com>
+1
arch/s390/Kconfig
···
 	select HAVE_KPROBES
 	select HAVE_KPROBES_ON_FTRACE
 	select HAVE_KRETPROBES
+	select HAVE_RETHOOK
 	select HAVE_KVM
 	select HAVE_LIVEPATCH
 	select HAVE_MEMBLOCK_PHYS_MAP
+1 -1
arch/s390/boot/Makefile
···
 
 CFLAGS_sclp_early_core.o += -I$(srctree)/drivers/s390/char
 
-obj-y	:= head.o als.o startup.o mem_detect.o ipl_parm.o ipl_report.o
+obj-y	:= head.o als.o startup.o mem_detect.o ipl_parm.o ipl_report.o vmem.o
 obj-y	+= string.o ebcdic.o sclp_early_core.o mem.o ipl_vmparm.o cmdline.o
 obj-y	+= version.o pgm_check_info.o ctype.o ipl_data.o machine_kexec_reloc.o
 obj-$(findstring y, $(CONFIG_PROTECTED_VIRTUALIZATION_GUEST) $(CONFIG_PGSTE)) += uv.o
+38 -2
arch/s390/boot/boot.h
···
 
 #ifndef __ASSEMBLY__
 
+struct machine_info {
+	unsigned char has_edat1 : 1;
+	unsigned char has_edat2 : 1;
+	unsigned char has_nx : 1;
+};
+
+struct vmlinux_info {
+	unsigned long default_lma;
+	unsigned long entry;
+	unsigned long image_size;	/* does not include .bss */
+	unsigned long bss_size;		/* uncompressed image .bss size */
+	unsigned long bootdata_off;
+	unsigned long bootdata_size;
+	unsigned long bootdata_preserved_off;
+	unsigned long bootdata_preserved_size;
+	unsigned long dynsym_start;
+	unsigned long rela_dyn_start;
+	unsigned long rela_dyn_end;
+	unsigned long amode31_size;
+	unsigned long init_mm_off;
+	unsigned long swapper_pg_dir_off;
+	unsigned long invalid_pg_dir_off;
+};
+
 void startup_kernel(void);
-unsigned long detect_memory(void);
+unsigned long detect_memory(unsigned long *safe_addr);
+void mem_detect_set_usable_limit(unsigned long limit);
 bool is_ipl_block_dump(void);
 void store_ipl_parmblock(void);
+unsigned long read_ipl_report(unsigned long safe_addr);
 void setup_boot_command_line(void);
 void parse_boot_command_line(void);
 void verify_facilities(void);
···
 void sclp_early_setup_buffer(void);
 void print_pgm_check_info(void);
 unsigned long get_random_base(unsigned long safe_addr);
+void setup_vmem(unsigned long asce_limit);
+unsigned long vmem_estimate_memory_needs(unsigned long online_mem_total);
 void __printf(1, 2) decompressor_printk(const char *fmt, ...);
+void error(char *m);
+
+extern struct machine_info machine;
 
 /* Symbols defined by linker scripts */
 extern const char kernel_version[];
···
 extern char __boot_data_preserved_start[], __boot_data_preserved_end[];
 extern char _decompressor_syms_start[], _decompressor_syms_end[];
 extern char _stack_start[], _stack_end[];
+extern char _end[];
+extern unsigned char _compressed_start[];
+extern unsigned char _compressed_end[];
+extern struct vmlinux_info _vmlinux_info;
+#define vmlinux _vmlinux_info
 
-unsigned long read_ipl_report(unsigned long safe_offset);
+#define __abs_lowcore_pa(x)	(((unsigned long)(x) - __abs_lowcore) % sizeof(struct lowcore))
 
 #endif /* __ASSEMBLY__ */
 #endif /* BOOT_BOOT_H */
+1
arch/s390/boot/decompressor.c
···
 #include <linux/string.h>
 #include <asm/page.h>
 #include "decompressor.h"
+#include "boot.h"
 
 /*
  * gzip declarations
-26
arch/s390/boot/decompressor.h
···
 #ifndef BOOT_COMPRESSED_DECOMPRESSOR_H
 #define BOOT_COMPRESSED_DECOMPRESSOR_H
 
-#include <linux/stddef.h>
-
 #ifdef CONFIG_KERNEL_UNCOMPRESSED
 static inline void *decompress_kernel(void) { return NULL; }
 #else
 void *decompress_kernel(void);
 #endif
 unsigned long mem_safe_offset(void);
-void error(char *m);
-
-struct vmlinux_info {
-	unsigned long default_lma;
-	void (*entry)(void);
-	unsigned long image_size;	/* does not include .bss */
-	unsigned long bss_size;		/* uncompressed image .bss size */
-	unsigned long bootdata_off;
-	unsigned long bootdata_size;
-	unsigned long bootdata_preserved_off;
-	unsigned long bootdata_preserved_size;
-	unsigned long dynsym_start;
-	unsigned long rela_dyn_start;
-	unsigned long rela_dyn_end;
-	unsigned long amode31_size;
-};
-
-/* Symbols defined by linker scripts */
-extern char _end[];
-extern unsigned char _compressed_start[];
-extern unsigned char _compressed_end[];
-extern char _vmlinux_info[];
-
-#define vmlinux (*(struct vmlinux_info *)_vmlinux_info)
 
 #endif /* BOOT_COMPRESSED_DECOMPRESSOR_H */
+7 -13
arch/s390/boot/kaslr.c
···
 	unsigned long start, end, pos = 0;
 	int i;
 
-	for_each_mem_detect_block(i, &start, &end) {
+	for_each_mem_detect_usable_block(i, &start, &end) {
 		if (_min >= end)
 			continue;
 		if (start >= _max)
···
 	unsigned long start, end;
 	int i;
 
-	for_each_mem_detect_block(i, &start, &end) {
+	for_each_mem_detect_usable_block(i, &start, &end) {
 		if (_min >= end)
 			continue;
 		if (start >= _max)
···
 unsigned long get_random_base(unsigned long safe_addr)
 {
+	unsigned long usable_total = get_mem_detect_usable_total();
 	unsigned long memory_limit = get_mem_detect_end();
 	unsigned long base_pos, max_pos, kernel_size;
-	unsigned long kasan_needs;
 	int i;
-
-	memory_limit = min(memory_limit, ident_map_size);
 
 	/*
 	 * Avoid putting kernel in the end of physical memory
-	 * which kasan will use for shadow memory and early pgtable
-	 * mapping allocations.
+	 * which vmem and kasan code will use for shadow memory and
+	 * pgtable mapping allocations.
 	 */
-	memory_limit -= kasan_estimate_memory_needs(memory_limit);
+	memory_limit -= kasan_estimate_memory_needs(usable_total);
+	memory_limit -= vmem_estimate_memory_needs(usable_total);
 
-	if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && initrd_data.start && initrd_data.size) {
-		if (safe_addr < initrd_data.start + initrd_data.size)
-			safe_addr = initrd_data.start + initrd_data.size;
-	}
 	safe_addr = ALIGN(safe_addr, THREAD_SIZE);
-
 	kernel_size = vmlinux.image_size + vmlinux.bss_size;
 	if (safe_addr + kernel_size > memory_limit)
 		return 0;
+36 -36
arch/s390/boot/mem_detect.c
··· 16 16 #define ENTRIES_EXTENDED_MAX \ 17 17 (256 * (1020 / 2) * sizeof(struct mem_detect_block)) 18 18 19 - /* 20 - * To avoid corrupting old kernel memory during dump, find lowest memory 21 - * chunk possible either right after the kernel end (decompressed kernel) or 22 - * after initrd (if it is present and there is no hole between the kernel end 23 - * and initrd) 24 - */ 25 - static void *mem_detect_alloc_extended(void) 26 - { 27 - unsigned long offset = ALIGN(mem_safe_offset(), sizeof(u64)); 28 - 29 - if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && initrd_data.start && initrd_data.size && 30 - initrd_data.start < offset + ENTRIES_EXTENDED_MAX) 31 - offset = ALIGN(initrd_data.start + initrd_data.size, sizeof(u64)); 32 - 33 - return (void *)offset; 34 - } 35 - 36 19 static struct mem_detect_block *__get_mem_detect_block_ptr(u32 n) 37 20 { 38 21 if (n < MEM_INLINED_ENTRIES) 39 22 return &mem_detect.entries[n]; 40 - if (unlikely(!mem_detect.entries_extended)) 41 - mem_detect.entries_extended = mem_detect_alloc_extended(); 42 23 return &mem_detect.entries_extended[n - MEM_INLINED_ENTRIES]; 43 24 } 44 25 ··· 128 147 return rc; 129 148 } 130 149 131 - static void search_mem_end(void) 150 + static unsigned long search_mem_end(void) 132 151 { 133 152 unsigned long range = 1 << (MAX_PHYSMEM_BITS - 20); /* in 1MB blocks */ 134 153 unsigned long offset = 0; ··· 140 159 if (!tprot(pivot << 20)) 141 160 offset = pivot; 142 161 } 143 - 144 - add_mem_detect_block(0, (offset + 1) << 20); 162 + return (offset + 1) << 20; 145 163 } 146 164 147 - unsigned long detect_memory(void) 165 + unsigned long detect_memory(unsigned long *safe_addr) 148 166 { 149 - unsigned long max_physmem_end; 167 + unsigned long max_physmem_end = 0; 150 168 151 169 sclp_early_get_memsize(&max_physmem_end); 170 + mem_detect.entries_extended = (struct mem_detect_block *)ALIGN(*safe_addr, sizeof(u64)); 152 171 153 172 if (!sclp_early_read_storage_info()) { 154 173 mem_detect.info_source = 
MEM_DETECT_SCLP_STOR_INFO; 155 - return max_physmem_end; 156 - } 157 - 158 - if (!diag260()) { 174 + } else if (!diag260()) { 159 175 mem_detect.info_source = MEM_DETECT_DIAG260; 160 - return max_physmem_end; 161 - } 162 - 163 - if (max_physmem_end) { 176 + max_physmem_end = max_physmem_end ?: get_mem_detect_end(); 177 + } else if (max_physmem_end) { 164 178 add_mem_detect_block(0, max_physmem_end); 165 179 mem_detect.info_source = MEM_DETECT_SCLP_READ_INFO; 166 - return max_physmem_end; 180 + } else { 181 + max_physmem_end = search_mem_end(); 182 + add_mem_detect_block(0, max_physmem_end); 183 + mem_detect.info_source = MEM_DETECT_BIN_SEARCH; 167 184 } 168 185 169 - search_mem_end(); 170 - mem_detect.info_source = MEM_DETECT_BIN_SEARCH; 171 - return get_mem_detect_end(); 186 + if (mem_detect.count > MEM_INLINED_ENTRIES) { 187 + *safe_addr += (mem_detect.count - MEM_INLINED_ENTRIES) * 188 + sizeof(struct mem_detect_block); 189 + } 190 + 191 + return max_physmem_end; 192 + } 193 + 194 + void mem_detect_set_usable_limit(unsigned long limit) 195 + { 196 + struct mem_detect_block *block; 197 + int i; 198 + 199 + /* make sure mem_detect.usable ends up within online memory block */ 200 + for (i = 0; i < mem_detect.count; i++) { 201 + block = __get_mem_detect_block_ptr(i); 202 + if (block->start >= limit) 203 + break; 204 + if (block->end >= limit) { 205 + mem_detect.usable = limit; 206 + break; 207 + } 208 + mem_detect.usable = block->end; 209 + } 172 210 }
+69 -17
arch/s390/boot/startup.c
··· 3 3 #include <linux/elf.h> 4 4 #include <asm/boot_data.h> 5 5 #include <asm/sections.h> 6 + #include <asm/maccess.h> 6 7 #include <asm/cpu_mf.h> 7 8 #include <asm/setup.h> 8 9 #include <asm/kasan.h> ··· 12 11 #include <asm/diag.h> 13 12 #include <asm/uv.h> 14 13 #include <asm/abs_lowcore.h> 14 + #include <asm/mem_detect.h> 15 15 #include "decompressor.h" 16 16 #include "boot.h" 17 17 #include "uv.h" ··· 20 18 unsigned long __bootdata_preserved(__kaslr_offset); 21 19 unsigned long __bootdata_preserved(__abs_lowcore); 22 20 unsigned long __bootdata_preserved(__memcpy_real_area); 21 + pte_t *__bootdata_preserved(memcpy_real_ptep); 23 22 unsigned long __bootdata(__amode31_base); 24 23 unsigned long __bootdata_preserved(VMALLOC_START); 25 24 unsigned long __bootdata_preserved(VMALLOC_END); ··· 36 33 u64 __bootdata_preserved(alt_stfle_fac_list[16]); 37 34 struct oldmem_data __bootdata_preserved(oldmem_data); 38 35 36 + struct machine_info machine; 37 + 39 38 void error(char *x) 40 39 { 41 40 sclp_early_printk("\n\n"); ··· 45 40 sclp_early_printk("\n\n -- System halted"); 46 41 47 42 disabled_wait(); 43 + } 44 + 45 + static void detect_facilities(void) 46 + { 47 + if (test_facility(8)) { 48 + machine.has_edat1 = 1; 49 + __ctl_set_bit(0, 23); 50 + } 51 + if (test_facility(78)) 52 + machine.has_edat2 = 1; 53 + if (!noexec_disabled && test_facility(130)) { 54 + machine.has_nx = 1; 55 + __ctl_set_bit(0, 20); 56 + } 48 57 } 49 58 50 59 static void setup_lpp(void) ··· 76 57 } 77 58 #endif 78 59 79 - static void rescue_initrd(unsigned long addr) 60 + static unsigned long rescue_initrd(unsigned long safe_addr) 80 61 { 81 62 if (!IS_ENABLED(CONFIG_BLK_DEV_INITRD)) 82 - return; 63 + return safe_addr; 83 64 if (!initrd_data.start || !initrd_data.size) 84 - return; 85 - if (addr <= initrd_data.start) 86 - return; 87 - memmove((void *)addr, (void *)initrd_data.start, initrd_data.size); 88 - initrd_data.start = addr; 65 + return safe_addr; 66 + if (initrd_data.start < safe_addr) { 
67 + memmove((void *)safe_addr, (void *)initrd_data.start, initrd_data.size); 68 + initrd_data.start = safe_addr; 69 + } 70 + return initrd_data.start + initrd_data.size; 89 71 } 90 72 91 73 static void copy_bootdata(void) ··· 170 150 #endif 171 151 } 172 152 173 - static void setup_kernel_memory_layout(void) 153 + static unsigned long setup_kernel_memory_layout(void) 174 154 { 175 155 unsigned long vmemmap_start; 156 + unsigned long asce_limit; 176 157 unsigned long rte_size; 177 158 unsigned long pages; 178 159 unsigned long vmax; ··· 188 167 vmalloc_size > _REGION2_SIZE || 189 168 vmemmap_start + vmemmap_size + vmalloc_size + MODULES_LEN > 190 169 _REGION2_SIZE) { 191 - vmax = _REGION1_SIZE; 170 + asce_limit = _REGION1_SIZE; 192 171 rte_size = _REGION2_SIZE; 193 172 } else { 194 - vmax = _REGION2_SIZE; 173 + asce_limit = _REGION2_SIZE; 195 174 rte_size = _REGION3_SIZE; 196 175 } 197 176 /* ··· 199 178 * secure storage limit, so that any vmalloc allocation 200 179 * we do could be used to back secure guest storage. 
201 180 */ 202 - vmax = adjust_to_uv_max(vmax); 181 + vmax = adjust_to_uv_max(asce_limit); 203 182 #ifdef CONFIG_KASAN 204 183 /* force vmalloc and modules below kasan shadow */ 205 184 vmax = min(vmax, KASAN_SHADOW_START); ··· 228 207 /* make sure vmemmap doesn't overlay with vmalloc area */ 229 208 VMALLOC_START = max(vmemmap_start + vmemmap_size, VMALLOC_START); 230 209 vmemmap = (struct page *)vmemmap_start; 210 + 211 + return asce_limit; 231 212 } 232 213 233 214 /* ··· 263 240 vmlinux.rela_dyn_start += offset; 264 241 vmlinux.rela_dyn_end += offset; 265 242 vmlinux.dynsym_start += offset; 243 + vmlinux.init_mm_off += offset; 244 + vmlinux.swapper_pg_dir_off += offset; 245 + vmlinux.invalid_pg_dir_off += offset; 266 246 } 267 247 268 248 static unsigned long reserve_amode31(unsigned long safe_addr) 269 249 { 270 250 __amode31_base = PAGE_ALIGN(safe_addr); 271 - return safe_addr + vmlinux.amode31_size; 251 + return __amode31_base + vmlinux.amode31_size; 272 252 } 273 253 274 254 void startup_kernel(void) 275 255 { 256 + unsigned long max_physmem_end; 276 257 unsigned long random_lma; 277 258 unsigned long safe_addr; 259 + unsigned long asce_limit; 278 260 void *img; 261 + psw_t psw; 279 262 280 263 initrd_data.start = parmarea.initrd_start; 281 264 initrd_data.size = parmarea.initrd_size; ··· 294 265 safe_addr = reserve_amode31(safe_addr); 295 266 safe_addr = read_ipl_report(safe_addr); 296 267 uv_query_info(); 297 - rescue_initrd(safe_addr); 268 + safe_addr = rescue_initrd(safe_addr); 298 269 sclp_early_read_info(); 299 270 setup_boot_command_line(); 300 271 parse_boot_command_line(); 272 + detect_facilities(); 301 273 sanitize_prot_virt_host(); 302 - setup_ident_map_size(detect_memory()); 274 + max_physmem_end = detect_memory(&safe_addr); 275 + setup_ident_map_size(max_physmem_end); 303 276 setup_vmalloc_size(); 304 - setup_kernel_memory_layout(); 277 + asce_limit = setup_kernel_memory_layout(); 278 + mem_detect_set_usable_limit(ident_map_size); 305 279 306 
280 if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) && kaslr_enabled) { 307 281 random_lma = get_random_base(safe_addr); ··· 321 289 } else if (__kaslr_offset) 322 290 memcpy((void *)vmlinux.default_lma, img, vmlinux.image_size); 323 291 292 + /* 293 + * The order of the following operations is important: 294 + * 295 + * - handle_relocs() must follow clear_bss_section() to establish static 296 + * memory references to data in .bss to be used by setup_vmem() 297 + * (i.e init_mm.pgd) 298 + * 299 + * - setup_vmem() must follow handle_relocs() to be able using 300 + * static memory references to data in .bss (i.e init_mm.pgd) 301 + * 302 + * - copy_bootdata() must follow setup_vmem() to propagate changes to 303 + * bootdata made by setup_vmem() 304 + */ 324 305 clear_bss_section(); 325 - copy_bootdata(); 326 306 handle_relocs(__kaslr_offset); 307 + setup_vmem(asce_limit); 308 + copy_bootdata(); 327 309 328 310 if (__kaslr_offset) { 329 311 /* ··· 349 303 if (IS_ENABLED(CONFIG_KERNEL_UNCOMPRESSED)) 350 304 memset(img, 0, vmlinux.image_size); 351 305 } 352 - vmlinux.entry(); 306 + 307 + /* 308 + * Jump to the decompressed kernel entry point and switch DAT mode on. 309 + */ 310 + psw.addr = vmlinux.entry; 311 + psw.mask = PSW_KERNEL_BITS; 312 + __load_psw(psw); 353 313 }
+278
arch/s390/boot/vmem.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + #include <linux/sched/task.h> 3 + #include <linux/pgtable.h> 4 + #include <asm/pgalloc.h> 5 + #include <asm/facility.h> 6 + #include <asm/sections.h> 7 + #include <asm/mem_detect.h> 8 + #include <asm/maccess.h> 9 + #include <asm/abs_lowcore.h> 10 + #include "decompressor.h" 11 + #include "boot.h" 12 + 13 + #define init_mm (*(struct mm_struct *)vmlinux.init_mm_off) 14 + #define swapper_pg_dir vmlinux.swapper_pg_dir_off 15 + #define invalid_pg_dir vmlinux.invalid_pg_dir_off 16 + 17 + /* 18 + * Mimic virt_to_kpte() in lack of init_mm symbol. Skip pmd NULL check though. 19 + */ 20 + static inline pte_t *__virt_to_kpte(unsigned long va) 21 + { 22 + return pte_offset_kernel(pmd_offset(pud_offset(p4d_offset(pgd_offset_k(va), va), va), va), va); 23 + } 24 + 25 + unsigned long __bootdata_preserved(s390_invalid_asce); 26 + unsigned long __bootdata(pgalloc_pos); 27 + unsigned long __bootdata(pgalloc_end); 28 + unsigned long __bootdata(pgalloc_low); 29 + 30 + enum populate_mode { 31 + POPULATE_NONE, 32 + POPULATE_ONE2ONE, 33 + POPULATE_ABS_LOWCORE, 34 + }; 35 + 36 + static void boot_check_oom(void) 37 + { 38 + if (pgalloc_pos < pgalloc_low) 39 + error("out of memory on boot\n"); 40 + } 41 + 42 + static void pgtable_populate_init(void) 43 + { 44 + unsigned long initrd_end; 45 + unsigned long kernel_end; 46 + 47 + kernel_end = vmlinux.default_lma + vmlinux.image_size + vmlinux.bss_size; 48 + pgalloc_low = round_up(kernel_end, PAGE_SIZE); 49 + if (IS_ENABLED(CONFIG_BLK_DEV_INITRD)) { 50 + initrd_end = round_up(initrd_data.start + initrd_data.size, _SEGMENT_SIZE); 51 + pgalloc_low = max(pgalloc_low, initrd_end); 52 + } 53 + 54 + pgalloc_end = round_down(get_mem_detect_end(), PAGE_SIZE); 55 + pgalloc_pos = pgalloc_end; 56 + 57 + boot_check_oom(); 58 + } 59 + 60 + static void *boot_alloc_pages(unsigned int order) 61 + { 62 + unsigned long size = PAGE_SIZE << order; 63 + 64 + pgalloc_pos -= size; 65 + pgalloc_pos = 
round_down(pgalloc_pos, size); 66 + 67 + boot_check_oom(); 68 + 69 + return (void *)pgalloc_pos; 70 + } 71 + 72 + static void *boot_crst_alloc(unsigned long val) 73 + { 74 + unsigned long *table; 75 + 76 + table = boot_alloc_pages(CRST_ALLOC_ORDER); 77 + if (table) 78 + crst_table_init(table, val); 79 + return table; 80 + } 81 + 82 + static pte_t *boot_pte_alloc(void) 83 + { 84 + static void *pte_leftover; 85 + pte_t *pte; 86 + 87 + BUILD_BUG_ON(_PAGE_TABLE_SIZE * 2 != PAGE_SIZE); 88 + 89 + if (!pte_leftover) { 90 + pte_leftover = boot_alloc_pages(0); 91 + pte = pte_leftover + _PAGE_TABLE_SIZE; 92 + } else { 93 + pte = pte_leftover; 94 + pte_leftover = NULL; 95 + } 96 + memset64((u64 *)pte, _PAGE_INVALID, PTRS_PER_PTE); 97 + return pte; 98 + } 99 + 100 + static unsigned long _pa(unsigned long addr, enum populate_mode mode) 101 + { 102 + switch (mode) { 103 + case POPULATE_NONE: 104 + return -1; 105 + case POPULATE_ONE2ONE: 106 + return addr; 107 + case POPULATE_ABS_LOWCORE: 108 + return __abs_lowcore_pa(addr); 109 + default: 110 + return -1; 111 + } 112 + } 113 + 114 + static bool can_large_pud(pud_t *pu_dir, unsigned long addr, unsigned long end) 115 + { 116 + return machine.has_edat2 && 117 + IS_ALIGNED(addr, PUD_SIZE) && (end - addr) >= PUD_SIZE; 118 + } 119 + 120 + static bool can_large_pmd(pmd_t *pm_dir, unsigned long addr, unsigned long end) 121 + { 122 + return machine.has_edat1 && 123 + IS_ALIGNED(addr, PMD_SIZE) && (end - addr) >= PMD_SIZE; 124 + } 125 + 126 + static void pgtable_pte_populate(pmd_t *pmd, unsigned long addr, unsigned long end, 127 + enum populate_mode mode) 128 + { 129 + unsigned long next; 130 + pte_t *pte, entry; 131 + 132 + pte = pte_offset_kernel(pmd, addr); 133 + for (; addr < end; addr += PAGE_SIZE, pte++) { 134 + if (pte_none(*pte)) { 135 + entry = __pte(_pa(addr, mode)); 136 + entry = set_pte_bit(entry, PAGE_KERNEL_EXEC); 137 + set_pte(pte, entry); 138 + } 139 + } 140 + } 141 + 142 + static void pgtable_pmd_populate(pud_t *pud, 
unsigned long addr, unsigned long end, 143 + enum populate_mode mode) 144 + { 145 + unsigned long next; 146 + pmd_t *pmd, entry; 147 + pte_t *pte; 148 + 149 + pmd = pmd_offset(pud, addr); 150 + for (; addr < end; addr = next, pmd++) { 151 + next = pmd_addr_end(addr, end); 152 + if (pmd_none(*pmd)) { 153 + if (can_large_pmd(pmd, addr, next)) { 154 + entry = __pmd(_pa(addr, mode)); 155 + entry = set_pmd_bit(entry, SEGMENT_KERNEL_EXEC); 156 + set_pmd(pmd, entry); 157 + continue; 158 + } 159 + pte = boot_pte_alloc(); 160 + pmd_populate(&init_mm, pmd, pte); 161 + } else if (pmd_large(*pmd)) { 162 + continue; 163 + } 164 + pgtable_pte_populate(pmd, addr, next, mode); 165 + } 166 + } 167 + 168 + static void pgtable_pud_populate(p4d_t *p4d, unsigned long addr, unsigned long end, 169 + enum populate_mode mode) 170 + { 171 + unsigned long next; 172 + pud_t *pud, entry; 173 + pmd_t *pmd; 174 + 175 + pud = pud_offset(p4d, addr); 176 + for (; addr < end; addr = next, pud++) { 177 + next = pud_addr_end(addr, end); 178 + if (pud_none(*pud)) { 179 + if (can_large_pud(pud, addr, next)) { 180 + entry = __pud(_pa(addr, mode)); 181 + entry = set_pud_bit(entry, REGION3_KERNEL_EXEC); 182 + set_pud(pud, entry); 183 + continue; 184 + } 185 + pmd = boot_crst_alloc(_SEGMENT_ENTRY_EMPTY); 186 + pud_populate(&init_mm, pud, pmd); 187 + } else if (pud_large(*pud)) { 188 + continue; 189 + } 190 + pgtable_pmd_populate(pud, addr, next, mode); 191 + } 192 + } 193 + 194 + static void pgtable_p4d_populate(pgd_t *pgd, unsigned long addr, unsigned long end, 195 + enum populate_mode mode) 196 + { 197 + unsigned long next; 198 + p4d_t *p4d; 199 + pud_t *pud; 200 + 201 + p4d = p4d_offset(pgd, addr); 202 + for (; addr < end; addr = next, p4d++) { 203 + next = p4d_addr_end(addr, end); 204 + if (p4d_none(*p4d)) { 205 + pud = boot_crst_alloc(_REGION3_ENTRY_EMPTY); 206 + p4d_populate(&init_mm, p4d, pud); 207 + } 208 + pgtable_pud_populate(p4d, addr, next, mode); 209 + } 210 + } 211 + 212 + static void 
pgtable_populate(unsigned long addr, unsigned long end, enum populate_mode mode) 213 + { 214 + unsigned long next; 215 + pgd_t *pgd; 216 + p4d_t *p4d; 217 + 218 + pgd = pgd_offset(&init_mm, addr); 219 + for (; addr < end; addr = next, pgd++) { 220 + next = pgd_addr_end(addr, end); 221 + if (pgd_none(*pgd)) { 222 + p4d = boot_crst_alloc(_REGION2_ENTRY_EMPTY); 223 + pgd_populate(&init_mm, pgd, p4d); 224 + } 225 + pgtable_p4d_populate(pgd, addr, next, mode); 226 + } 227 + } 228 + 229 + void setup_vmem(unsigned long asce_limit) 230 + { 231 + unsigned long start, end; 232 + unsigned long asce_type; 233 + unsigned long asce_bits; 234 + int i; 235 + 236 + if (asce_limit == _REGION1_SIZE) { 237 + asce_type = _REGION2_ENTRY_EMPTY; 238 + asce_bits = _ASCE_TYPE_REGION2 | _ASCE_TABLE_LENGTH; 239 + } else { 240 + asce_type = _REGION3_ENTRY_EMPTY; 241 + asce_bits = _ASCE_TYPE_REGION3 | _ASCE_TABLE_LENGTH; 242 + } 243 + s390_invalid_asce = invalid_pg_dir | _ASCE_TYPE_REGION3 | _ASCE_TABLE_LENGTH; 244 + 245 + crst_table_init((unsigned long *)swapper_pg_dir, asce_type); 246 + crst_table_init((unsigned long *)invalid_pg_dir, _REGION3_ENTRY_EMPTY); 247 + 248 + /* 249 + * To allow prefixing the lowcore must be mapped with 4KB pages. 250 + * To prevent creation of a large page at address 0 first map 251 + * the lowcore and create the identity mapping only afterwards. 
252 + */ 253 + pgtable_populate_init(); 254 + pgtable_populate(0, sizeof(struct lowcore), POPULATE_ONE2ONE); 255 + for_each_mem_detect_usable_block(i, &start, &end) 256 + pgtable_populate(start, end, POPULATE_ONE2ONE); 257 + pgtable_populate(__abs_lowcore, __abs_lowcore + sizeof(struct lowcore), 258 + POPULATE_ABS_LOWCORE); 259 + pgtable_populate(__memcpy_real_area, __memcpy_real_area + PAGE_SIZE, 260 + POPULATE_NONE); 261 + memcpy_real_ptep = __virt_to_kpte(__memcpy_real_area); 262 + 263 + S390_lowcore.kernel_asce = swapper_pg_dir | asce_bits; 264 + S390_lowcore.user_asce = s390_invalid_asce; 265 + 266 + __ctl_load(S390_lowcore.kernel_asce, 1, 1); 267 + __ctl_load(S390_lowcore.user_asce, 7, 7); 268 + __ctl_load(S390_lowcore.kernel_asce, 13, 13); 269 + 270 + init_mm.context.asce = S390_lowcore.kernel_asce; 271 + } 272 + 273 + unsigned long vmem_estimate_memory_needs(unsigned long online_mem_total) 274 + { 275 + unsigned long pages = DIV_ROUND_UP(online_mem_total, PAGE_SIZE); 276 + 277 + return DIV_ROUND_UP(pages, _PAGE_ENTRIES) * _PAGE_TABLE_SIZE * 2; 278 + }
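
The boot_alloc_pages() helper in the new vmem.c above implements a downward bump allocator: pgalloc_pos starts at the top of usable memory and each allocation moves it down, rounded to the allocation's own (power-of-two) size, until it would cross the low watermark. A minimal userspace sketch of that policy, with the bounds passed as plain parameters instead of the __bootdata variables:

```c
/*
 * Downward bump allocation as used by the boot-time page table code:
 * take "size" bytes off the top, align the new top down to "size"
 * (assumed to be a power of two), and fail once the cursor would
 * drop below the low watermark ("out of memory on boot").
 */
static unsigned long bump_alloc_down(unsigned long *pos, unsigned long low,
                                     unsigned long size)
{
	*pos -= size;
	*pos &= ~(size - 1);	/* round_down(*pos, size) */
	if (*pos < low)
		return 0;	/* real code calls error() here */
	return *pos;
}
```

Allocating from the top down keeps the freshly populated page tables clear of the kernel image, the initrd, and the low memory that the identity mapping must cover first.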
+1
arch/s390/crypto/arch_random.c
···
 10  10 #include <linux/atomic.h>
 11  11 #include <linux/random.h>
 12  12 #include <linux/static_key.h>
     13 + #include <asm/archrandom.h>
 13  14 #include <asm/cpacf.h>
 14  15 
 15  16 DEFINE_STATIC_KEY_FALSE(s390_arch_random_available);
+13 -3
arch/s390/include/asm/abs_lowcore.h
··· 7 7 #define ABS_LOWCORE_MAP_SIZE (NR_CPUS * sizeof(struct lowcore)) 8 8 9 9 extern unsigned long __abs_lowcore; 10 - extern bool abs_lowcore_mapped; 11 10 12 - struct lowcore *get_abs_lowcore(unsigned long *flags); 13 - void put_abs_lowcore(struct lowcore *lc, unsigned long flags); 14 11 int abs_lowcore_map(int cpu, struct lowcore *lc, bool alloc); 15 12 void abs_lowcore_unmap(int cpu); 13 + 14 + static inline struct lowcore *get_abs_lowcore(void) 15 + { 16 + int cpu; 17 + 18 + cpu = get_cpu(); 19 + return ((struct lowcore *)__abs_lowcore) + cpu; 20 + } 21 + 22 + static inline void put_abs_lowcore(struct lowcore *lc) 23 + { 24 + put_cpu(); 25 + } 16 26 17 27 #endif /* _ASM_S390_ABS_LOWCORE_H */
+9 -3
arch/s390/include/asm/ap.h
··· 239 239 union { 240 240 unsigned long value; 241 241 struct ap_qirq_ctrl qirqctrl; 242 - struct ap_queue_status status; 242 + struct { 243 + u32 _pad; 244 + struct ap_queue_status status; 245 + }; 243 246 } reg1; 244 247 unsigned long reg2 = pa_ind; 245 248 ··· 256 253 " lgr %[reg1],1\n" /* gr1 (status) into reg1 */ 257 254 : [reg1] "+&d" (reg1) 258 255 : [reg0] "d" (reg0), [reg2] "d" (reg2) 259 - : "cc", "0", "1", "2"); 256 + : "cc", "memory", "0", "1", "2"); 260 257 261 258 return reg1.status; 262 259 } ··· 293 290 unsigned long reg0 = qid | (5UL << 24) | ((ifbit & 0x01) << 22); 294 291 union { 295 292 unsigned long value; 296 - struct ap_queue_status status; 293 + struct { 294 + u32 _pad; 295 + struct ap_queue_status status; 296 + }; 297 297 } reg1; 298 298 unsigned long reg2; 299 299
+4
arch/s390/include/asm/asm-extable.h
··· 12 12 #define EX_TYPE_UA_STORE 3 13 13 #define EX_TYPE_UA_LOAD_MEM 4 14 14 #define EX_TYPE_UA_LOAD_REG 5 15 + #define EX_TYPE_UA_LOAD_REGPAIR 6 15 16 16 17 #define EX_DATA_REG_ERR_SHIFT 0 17 18 #define EX_DATA_REG_ERR GENMASK(3, 0) ··· 85 84 86 85 #define EX_TABLE_UA_LOAD_REG(_fault, _target, _regerr, _regzero) \ 87 86 __EX_TABLE_UA(__ex_table, _fault, _target, EX_TYPE_UA_LOAD_REG, _regerr, _regzero, 0) 87 + 88 + #define EX_TABLE_UA_LOAD_REGPAIR(_fault, _target, _regerr, _regzero) \ 89 + __EX_TABLE_UA(__ex_table, _fault, _target, EX_TYPE_UA_LOAD_REGPAIR, _regerr, _regzero, 0) 88 90 89 91 #endif /* __ASM_EXTABLE_H */
+2
arch/s390/include/asm/ccwdev.h
··· 15 15 #include <asm/fcx.h> 16 16 #include <asm/irq.h> 17 17 #include <asm/schid.h> 18 + #include <linux/mutex.h> 18 19 19 20 /* structs from asm/cio.h */ 20 21 struct irb; ··· 88 87 spinlock_t *ccwlock; 89 88 /* private: */ 90 89 struct ccw_device_private *private; /* cio private information */ 90 + struct mutex reg_mutex; 91 91 /* public: */ 92 92 struct ccw_device_id id; 93 93 struct ccw_driver *drv;
+66 -43
arch/s390/include/asm/cmpxchg.h
···
 88  88 	unsigned long old,
 89  89 	unsigned long new, int size)
 90  90 {
 91      - 	unsigned long prev, tmp;
 92      - 	int shift;
 93      - 
 94  91 	switch (size) {
 95      - 	case 1:
     92 + 	case 1: {
     93 + 		unsigned int prev, shift, mask;
     94 + 
 96  95 		shift = (3 ^ (address & 3)) << 3;
 97  96 		address ^= address & 3;
     97 + 		old = (old & 0xff) << shift;
     98 + 		new = (new & 0xff) << shift;
     99 + 		mask = ~(0xff << shift);
 98 100 		asm volatile(
 99      - 			"	l	%0,%2\n"
100      - 			"0:	nr	%0,%5\n"
101      - 			"	lr	%1,%0\n"
102      - 			"	or	%0,%3\n"
103      - 			"	or	%1,%4\n"
104      - 			"	cs	%0,%1,%2\n"
105      - 			"	jnl	1f\n"
106      - 			"	xr	%1,%0\n"
107      - 			"	nr	%1,%5\n"
108      - 			"	jnz	0b\n"
    101 + 			"	l	%[prev],%[address]\n"
    102 + 			"	nr	%[prev],%[mask]\n"
    103 + 			"	xilf	%[mask],0xffffffff\n"
    104 + 			"	or	%[new],%[prev]\n"
    105 + 			"	or	%[prev],%[tmp]\n"
    106 + 			"0:	lr	%[tmp],%[prev]\n"
    107 + 			"	cs	%[prev],%[new],%[address]\n"
    108 + 			"	jnl	1f\n"
    109 + 			"	xr	%[tmp],%[prev]\n"
    110 + 			"	xr	%[new],%[tmp]\n"
    111 + 			"	nr	%[tmp],%[mask]\n"
    112 + 			"	jz	0b\n"
109 113 			"1:"
110      - 			: "=&d" (prev), "=&d" (tmp), "+Q" (*(int *) address)
111      - 			: "d" ((old & 0xff) << shift),
112      - 			  "d" ((new & 0xff) << shift),
113      - 			  "d" (~(0xff << shift))
114      - 			: "memory", "cc");
    114 + 			: [prev] "=&d" (prev),
    115 + 			  [address] "+Q" (*(int *)address),
    116 + 			  [tmp] "+&d" (old),
    117 + 			  [new] "+&d" (new),
    118 + 			  [mask] "+&d" (mask)
    119 + 			:: "memory", "cc");
115 120 		return prev >> shift;
116      - 	case 2:
    121 + 	}
    122 + 	case 2: {
    123 + 		unsigned int prev, shift, mask;
    124 + 
117 125 		shift = (2 ^ (address & 2)) << 3;
118 126 		address ^= address & 2;
    127 + 		old = (old & 0xffff) << shift;
    128 + 		new = (new & 0xffff) << shift;
    129 + 		mask = ~(0xffff << shift);
119 130 		asm volatile(
120      - 			"	l	%0,%2\n"
121      - 			"0:	nr	%0,%5\n"
122      - 			"	lr	%1,%0\n"
123      - 			"	or	%0,%3\n"
124      - 			"	or	%1,%4\n"
125      - 			"	cs	%0,%1,%2\n"
126      - 			"	jnl	1f\n"
127      - 			"	xr	%1,%0\n"
128      - 			"	nr	%1,%5\n"
129      - 			"	jnz	0b\n"
    131 + 			"	l	%[prev],%[address]\n"
    132 + 			"	nr	%[prev],%[mask]\n"
    133 + 			"	xilf	%[mask],0xffffffff\n"
    134 + 			"	or	%[new],%[prev]\n"
    135 + 			"	or	%[prev],%[tmp]\n"
    136 + 			"0:	lr	%[tmp],%[prev]\n"
    137 + 			"	cs	%[prev],%[new],%[address]\n"
    138 + 			"	jnl	1f\n"
    139 + 			"	xr	%[tmp],%[prev]\n"
    140 + 			"	xr	%[new],%[tmp]\n"
    141 + 			"	nr	%[tmp],%[mask]\n"
    142 + 			"	jz	0b\n"
130 143 			"1:"
131      - 			: "=&d" (prev), "=&d" (tmp), "+Q" (*(int *) address)
132      - 			: "d" ((old & 0xffff) << shift),
133      - 			  "d" ((new & 0xffff) << shift),
134      - 			  "d" (~(0xffff << shift))
135      - 			: "memory", "cc");
    144 + 			: [prev] "=&d" (prev),
    145 + 			  [address] "+Q" (*(int *)address),
    146 + 			  [tmp] "+&d" (old),
    147 + 			  [new] "+&d" (new),
    148 + 			  [mask] "+&d" (mask)
    149 + 			:: "memory", "cc");
136 150 		return prev >> shift;
137      - 	case 4:
    151 + 	}
    152 + 	case 4: {
    153 + 		unsigned int prev = old;
    154 + 
138 155 		asm volatile(
139      - 			"	cs	%0,%3,%1\n"
140      - 			: "=&d" (prev), "+Q" (*(int *) address)
141      - 			: "0" (old), "d" (new)
    156 + 			"	cs	%[prev],%[new],%[address]\n"
    157 + 			: [prev] "+&d" (prev),
    158 + 			  [address] "+Q" (*(int *)address)
    159 + 			: [new] "d" (new)
142 160 			: "memory", "cc");
143 161 		return prev;
144      - 	case 8:
    162 + 	}
    163 + 	case 8: {
    164 + 		unsigned long prev = old;
    165 + 
145 166 		asm volatile(
146      - 			"	csg	%0,%3,%1\n"
147      - 			: "=&d" (prev), "+QS" (*(long *) address)
148      - 			: "0" (old), "d" (new)
    167 + 			"	csg	%[prev],%[new],%[address]\n"
    168 + 			: [prev] "+&d" (prev),
    169 + 			  [address] "+QS" (*(long *)address)
    170 + 			: [new] "d" (new)
149 171 			: "memory", "cc");
150 172 		return prev;
    173 + 	}
151 174 	}
152 175 	__cmpxchg_called_with_bad_pointer();
153 176 	return old;
-112
arch/s390/include/asm/cpu_mcf.h
···
  1      - /* SPDX-License-Identifier: GPL-2.0 */
  2      - /*
  3      -  * Counter facility support definitions for the Linux perf
  4      -  *
  5      -  * Copyright IBM Corp. 2019
  6      -  * Author(s): Hendrik Brueckner <brueckner@linux.ibm.com>
  7      -  */
  8      - #ifndef _ASM_S390_CPU_MCF_H
  9      - #define _ASM_S390_CPU_MCF_H
 10      - 
 11      - #include <linux/perf_event.h>
 12      - #include <asm/cpu_mf.h>
 13      - 
 14      - enum cpumf_ctr_set {
 15      - 	CPUMF_CTR_SET_BASIC   = 0,    /* Basic Counter Set */
 16      - 	CPUMF_CTR_SET_USER    = 1,    /* Problem-State Counter Set */
 17      - 	CPUMF_CTR_SET_CRYPTO  = 2,    /* Crypto-Activity Counter Set */
 18      - 	CPUMF_CTR_SET_EXT     = 3,    /* Extended Counter Set */
 19      - 	CPUMF_CTR_SET_MT_DIAG = 4,    /* MT-diagnostic Counter Set */
 20      - 
 21      - 	/* Maximum number of counter sets */
 22      - 	CPUMF_CTR_SET_MAX,
 23      - };
 24      - 
 25      - #define CPUMF_LCCTL_ENABLE_SHIFT 16
 26      - #define CPUMF_LCCTL_ACTCTL_SHIFT 0
 27      - 
 28      - static inline void ctr_set_enable(u64 *state, u64 ctrsets)
 29      - {
 30      - 	*state |= ctrsets << CPUMF_LCCTL_ENABLE_SHIFT;
 31      - }
 32      - 
 33      - static inline void ctr_set_disable(u64 *state, u64 ctrsets)
 34      - {
 35      - 	*state &= ~(ctrsets << CPUMF_LCCTL_ENABLE_SHIFT);
 36      - }
 37      - 
 38      - static inline void ctr_set_start(u64 *state, u64 ctrsets)
 39      - {
 40      - 	*state |= ctrsets << CPUMF_LCCTL_ACTCTL_SHIFT;
 41      - }
 42      - 
 43      - static inline void ctr_set_stop(u64 *state, u64 ctrsets)
 44      - {
 45      - 	*state &= ~(ctrsets << CPUMF_LCCTL_ACTCTL_SHIFT);
 46      - }
 47      - 
 48      - static inline int ctr_stcctm(enum cpumf_ctr_set set, u64 range, u64 *dest)
 49      - {
 50      - 	switch (set) {
 51      - 	case CPUMF_CTR_SET_BASIC:
 52      - 		return stcctm(BASIC, range, dest);
 53      - 	case CPUMF_CTR_SET_USER:
 54      - 		return stcctm(PROBLEM_STATE, range, dest);
 55      - 	case CPUMF_CTR_SET_CRYPTO:
 56      - 		return stcctm(CRYPTO_ACTIVITY, range, dest);
 57      - 	case CPUMF_CTR_SET_EXT:
 58      - 		return stcctm(EXTENDED, range, dest);
 59      - 	case CPUMF_CTR_SET_MT_DIAG:
 60      - 		return stcctm(MT_DIAG_CLEARING, range, dest);
 61      - 	case CPUMF_CTR_SET_MAX:
 62      - 		return 3;
 63      - 	}
 64      - 	return 3;
 65      - }
 66      - 
 67      - struct cpu_cf_events {
 68      - 	struct cpumf_ctr_info	info;
 69      - 	atomic_t		ctr_set[CPUMF_CTR_SET_MAX];
 70      - 	atomic64_t		alert;
 71      - 	u64			state;		/* For perf_event_open SVC */
 72      - 	u64			dev_state;	/* For /dev/hwctr */
 73      - 	unsigned int		flags;
 74      - 	size_t used;			/* Bytes used in data */
 75      - 	size_t usedss;			/* Bytes used in start/stop */
 76      - 	unsigned char start[PAGE_SIZE];	/* Counter set at event add */
 77      - 	unsigned char stop[PAGE_SIZE];	/* Counter set at event delete */
 78      - 	unsigned char data[PAGE_SIZE];	/* Counter set at /dev/hwctr */
 79      - 	unsigned int sets;		/* # Counter set saved in memory */
 80      - };
 81      - DECLARE_PER_CPU(struct cpu_cf_events, cpu_cf_events);
 82      - 
 83      - bool kernel_cpumcf_avail(void);
 84      - int __kernel_cpumcf_begin(void);
 85      - unsigned long kernel_cpumcf_alert(int clear);
 86      - void __kernel_cpumcf_end(void);
 87      - 
 88      - static inline int kernel_cpumcf_begin(void)
 89      - {
 90      - 	if (!cpum_cf_avail())
 91      - 		return -ENODEV;
 92      - 
 93      - 	preempt_disable();
 94      - 	return __kernel_cpumcf_begin();
 95      - }
 96      - static inline void kernel_cpumcf_end(void)
 97      - {
 98      - 	__kernel_cpumcf_end();
 99      - 	preempt_enable();
100      - }
101      - 
102      - /* Return true if store counter set multiple instruction is available */
103      - static inline int stccm_avail(void)
104      - {
105      - 	return test_facility(142);
106      - }
107      - 
108      - size_t cpum_cf_ctrset_size(enum cpumf_ctr_set ctrset,
109      - 			   struct cpumf_ctr_info *info);
110      - int cfset_online_cpu(unsigned int cpu);
111      - int cfset_offline_cpu(unsigned int cpu);
112      - #endif /* _ASM_S390_CPU_MCF_H */
-53
arch/s390/include/asm/cpu_mf.h
··· 42 42 return test_facility(40) && test_facility(68); 43 43 } 44 44 45 - 46 45 struct cpumf_ctr_info { 47 46 u16 cfvn; 48 47 u16 auth_ctl; ··· 273 274 : "cc", "memory"); 274 275 275 276 return cc ? -EINVAL : 0; 276 - } 277 - 278 - /* Sampling control helper functions */ 279 - 280 - #include <linux/time.h> 281 - 282 - static inline unsigned long freq_to_sample_rate(struct hws_qsi_info_block *qsi, 283 - unsigned long freq) 284 - { 285 - return (USEC_PER_SEC / freq) * qsi->cpu_speed; 286 - } 287 - 288 - static inline unsigned long sample_rate_to_freq(struct hws_qsi_info_block *qsi, 289 - unsigned long rate) 290 - { 291 - return USEC_PER_SEC * qsi->cpu_speed / rate; 292 - } 293 - 294 - /* Return TOD timestamp contained in an trailer entry */ 295 - static inline unsigned long long trailer_timestamp(struct hws_trailer_entry *te) 296 - { 297 - /* TOD in STCKE format */ 298 - if (te->header.t) 299 - return *((unsigned long long *) &te->timestamp[1]); 300 - 301 - /* TOD in STCK format */ 302 - return *((unsigned long long *) &te->timestamp[0]); 303 - } 304 - 305 - /* Return pointer to trailer entry of an sample data block */ 306 - static inline unsigned long *trailer_entry_ptr(unsigned long v) 307 - { 308 - void *ret; 309 - 310 - ret = (void *) v; 311 - ret += PAGE_SIZE; 312 - ret -= sizeof(struct hws_trailer_entry); 313 - 314 - return (unsigned long *) ret; 315 - } 316 - 317 - /* Return true if the entry in the sample data block table (sdbt) 318 - * is a link to the next sdbt */ 319 - static inline int is_link_entry(unsigned long *s) 320 - { 321 - return *s & 0x1ul ? 1 : 0; 322 - } 323 - 324 - /* Return pointer to the linked sdbt */ 325 - static inline unsigned long *get_next_sdbt(unsigned long *s) 326 - { 327 - return (unsigned long *) (*s & ~0x1ul); 328 277 } 329 278 #endif /* _ASM_S390_CPU_MF_H */
-19
arch/s390/include/asm/cputime.h
··· 11 11 #include <linux/types.h> 12 12 #include <asm/timex.h> 13 13 14 - #define CPUTIME_PER_USEC 4096ULL 15 - #define CPUTIME_PER_SEC (CPUTIME_PER_USEC * USEC_PER_SEC) 16 - 17 - /* We want to use full resolution of the CPU timer: 2**-12 micro-seconds. */ 18 - 19 - #define cmpxchg_cputime(ptr, old, new) cmpxchg64(ptr, old, new) 20 - 21 - /* 22 - * Convert cputime to microseconds. 23 - */ 24 - static inline u64 cputime_to_usecs(const u64 cputime) 25 - { 26 - return cputime >> 12; 27 - } 28 - 29 14 /* 30 15 * Convert cputime to nanoseconds. 31 16 */ 32 17 #define cputime_to_nsecs(cputime) tod_to_ns(cputime) 33 - 34 - u64 arch_cpu_idle_time(int cpu); 35 - 36 - #define arch_idle_time(cpu) arch_cpu_idle_time(cpu) 37 18 38 19 void account_idle_time_irq(void); 39 20
+15 -1
arch/s390/include/asm/diag.h
··· 12 12 #include <linux/if_ether.h> 13 13 #include <linux/percpu.h> 14 14 #include <asm/asm-extable.h> 15 + #include <asm/cio.h> 15 16 16 17 enum diag_stat_enum { 17 18 DIAG_STAT_X008, ··· 21 20 DIAG_STAT_X014, 22 21 DIAG_STAT_X044, 23 22 DIAG_STAT_X064, 23 + DIAG_STAT_X08C, 24 24 DIAG_STAT_X09C, 25 25 DIAG_STAT_X0DC, 26 26 DIAG_STAT_X204, ··· 81 79 u8 vrdccrty; /* real device type (output) */ 82 80 u8 vrdccrmd; /* real device model (output) */ 83 81 u8 vrdccrft; /* real device feature (output) */ 84 - } __attribute__((packed, aligned(4))); 82 + } __packed __aligned(4); 85 83 86 84 extern int diag210(struct diag210 *addr); 85 + 86 + struct diag8c { 87 + u8 flags; 88 + u8 num_partitions; 89 + u16 width; 90 + u16 height; 91 + u8 data[0]; 92 + } __packed __aligned(4); 93 + 94 + extern int diag8c(struct diag8c *out, struct ccw_dev_id *devno); 87 95 88 96 /* bit is set in flags, when physical cpu info is included in diag 204 data */ 89 97 #define DIAG204_LPAR_PHYS_FLG 0x80 ··· 330 318 int (*diag210)(struct diag210 *addr); 331 319 int (*diag26c)(void *req, void *resp, enum diag26c_sc subcode); 332 320 int (*diag14)(unsigned long rx, unsigned long ry1, unsigned long subcode); 321 + int (*diag8c)(struct diag8c *addr, struct ccw_dev_id *devno, size_t len); 333 322 void (*diag0c)(struct hypfs_diag0c_entry *entry); 334 323 void (*diag308_reset)(void); 335 324 }; ··· 343 330 int _diag14_amode31(unsigned long rx, unsigned long ry1, unsigned long subcode); 344 331 void _diag0c_amode31(struct hypfs_diag0c_entry *entry); 345 332 void _diag308_reset_amode31(void); 333 + int _diag8c_amode31(struct diag8c *addr, struct ccw_dev_id *devno, size_t len); 346 334 347 335 #endif /* _ASM_S390_DIAG_H */
+2 -2
arch/s390/include/asm/fpu/internal.h
···
27 27 	int i;
28 28 
29 29 	for (i = 0; i < __NUM_FPRS; i++)
30    - 		fprs[i] = *(freg_t *)(vxrs + i);
   30 + 		fprs[i].ui = vxrs[i].high;
31 31 }
32 32 
33 33 static inline void convert_fp_to_vx(__vector128 *vxrs, freg_t *fprs)
···
35 35 	int i;
36 36 
37 37 	for (i = 0; i < __NUM_FPRS; i++)
38    - 		*(freg_t *)(vxrs + i) = fprs[i];
   38 + 		vxrs[i].high = fprs[i].ui;
39 39 }
40 40 
41 41 static inline void fpregs_store(_s390_fp_regs *fpregs, struct fpu *fpu)
+12
arch/s390/include/asm/idals.h
···
23 23 #define IDA_SIZE_LOG 12 /* 11 for 2k , 12 for 4k */
24 24 #define IDA_BLOCK_SIZE (1L<<IDA_SIZE_LOG)
25 25 
   26 + #define IDA_2K_SIZE_LOG 11
   27 + #define IDA_2K_BLOCK_SIZE (1L << IDA_2K_SIZE_LOG)
   28 + 
26 29 /*
27 30  * Test if an address/length pair needs an idal list.
28 31  */
···
43 40 {
44 41 	return ((__pa(vaddr) & (IDA_BLOCK_SIZE-1)) + length +
45 42 		(IDA_BLOCK_SIZE-1)) >> IDA_SIZE_LOG;
   43 + }
   44 + 
   45 + /*
   46 +  * Return the number of 2K IDA words needed for an address/length pair.
   47 +  */
   48 + static inline unsigned int idal_2k_nr_words(void *vaddr, unsigned int length)
   49 + {
   50 + 	return ((__pa(vaddr) & (IDA_2K_BLOCK_SIZE - 1)) + length +
   51 + 		(IDA_2K_BLOCK_SIZE - 1)) >> IDA_2K_SIZE_LOG;
46 52 }
47 53 
48 54 /*
-5
arch/s390/include/asm/idle.h
··· 10 10 11 11 #include <linux/types.h> 12 12 #include <linux/device.h> 13 - #include <linux/seqlock.h> 14 13 15 14 struct s390_idle_data { 16 - seqcount_t seqcount; 17 15 unsigned long idle_count; 18 16 unsigned long idle_time; 19 17 unsigned long clock_idle_enter; 20 - unsigned long clock_idle_exit; 21 18 unsigned long timer_idle_enter; 22 - unsigned long timer_idle_exit; 23 19 unsigned long mt_cycles_enter[8]; 24 20 }; 25 21 ··· 23 27 extern struct device_attribute dev_attr_idle_time_us; 24 28 25 29 void psw_idle(struct s390_idle_data *data, unsigned long psw_mask); 26 - void psw_idle_exit(void); 27 30 28 31 #endif /* _S390_IDLE_H */
+4 -8
arch/s390/include/asm/kasan.h
··· 14 14 #define KASAN_SHADOW_END (KASAN_SHADOW_START + KASAN_SHADOW_SIZE) 15 15 16 16 extern void kasan_early_init(void); 17 - extern void kasan_copy_shadow_mapping(void); 18 - extern void kasan_free_early_identity(void); 19 17 20 18 /* 21 19 * Estimate kasan memory requirements, which it will reserve 22 20 * at the very end of available physical memory. To estimate 23 21 * that, we take into account that kasan would require 24 22 * 1/8 of available physical memory (for shadow memory) + 25 - * creating page tables for the whole memory + shadow memory 26 - * region (1 + 1/8). To keep page tables estimates simple take 27 - * the double of combined ptes size. 23 + * creating page tables for the shadow memory region. 24 + * To keep page tables estimates simple take the double of 25 + * combined ptes size. 28 26 * 29 27 * physmem parameter has to be already adjusted if not entire physical memory 30 28 * would be used (e.g. due to effect of "mem=" option). ··· 34 36 /* for shadow memory */ 35 37 kasan_needs = round_up(physmem / 8, PAGE_SIZE); 36 38 /* for paging structures */ 37 - pages = DIV_ROUND_UP(physmem + kasan_needs, PAGE_SIZE); 39 + pages = DIV_ROUND_UP(kasan_needs, PAGE_SIZE); 38 40 kasan_needs += DIV_ROUND_UP(pages, _PAGE_ENTRIES) * _PAGE_TABLE_SIZE * 2; 39 41 40 42 return kasan_needs; 41 43 } 42 44 #else 43 45 static inline void kasan_early_init(void) { } 44 - static inline void kasan_copy_shadow_mapping(void) { } 45 - static inline void kasan_free_early_identity(void) { } 46 46 static inline unsigned long kasan_estimate_memory_needs(unsigned long physmem) { return 0; } 47 47 #endif 48 48
-2
arch/s390/include/asm/kprobes.h
···
70 70 };
71 71 
72 72 void arch_remove_kprobe(struct kprobe *p);
73    - void __kretprobe_trampoline(void);
74    - void trampoline_probe_handler(struct pt_regs *regs);
75 73 
76 74 int kprobe_fault_handler(struct pt_regs *regs, int trapnr);
77 75 int kprobe_exceptions_notify(struct notifier_block *self,
+1 -1
arch/s390/include/asm/maccess.h
···
 7  7 struct iov_iter;
 8  8 
 9  9 extern unsigned long __memcpy_real_area;
10    - void memcpy_real_init(void);
   10 + extern pte_t *memcpy_real_ptep;
11 11 size_t memcpy_real_iter(struct iov_iter *iter, unsigned long src, size_t count);
12 12 int memcpy_real(void *dest, unsigned long src, size_t count);
13 13 #ifdef CONFIG_CRASH_DUMP
+31 -8
arch/s390/include/asm/mem_detect.h
···
 30  30 struct mem_detect_info {
 31  31 	u32 count;
 32  32 	u8 info_source;
     33 + 	unsigned long usable;
 33  34 	struct mem_detect_block entries[MEM_INLINED_ENTRIES];
 34  35 	struct mem_detect_block *entries_extended;
 35  36 };
···
 39  38 void add_mem_detect_block(u64 start, u64 end);
 40  39 
 41  40 static inline int __get_mem_detect_block(u32 n, unsigned long *start,
 42      - 					 unsigned long *end)
     41 + 					 unsigned long *end, bool respect_usable_limit)
 43  42 {
 44  43 	if (n >= mem_detect.count) {
 45  44 		*start = 0;
···
 54  53 		*start = (unsigned long)mem_detect.entries_extended[n - MEM_INLINED_ENTRIES].start;
 55  54 		*end = (unsigned long)mem_detect.entries_extended[n - MEM_INLINED_ENTRIES].end;
 56  55 	}
     56 + 
     57 + 	if (respect_usable_limit && mem_detect.usable) {
     58 + 		if (*start >= mem_detect.usable)
     59 + 			return -1;
     60 + 		if (*end > mem_detect.usable)
     61 + 			*end = mem_detect.usable;
     62 + 	}
 57  63 	return 0;
 58  64 }
 59  65 
 60  66 /**
 61      -  * for_each_mem_detect_block - early online memory range iterator
     67 +  * for_each_mem_detect_usable_block - early online memory range iterator
 62  68  * @i: an integer used as loop variable
 63  69  * @p_start: ptr to unsigned long for start address of the range
 64  70  * @p_end: ptr to unsigned long for end address of the range
 65  71  *
 66      -  * Walks over detected online memory ranges.
     72 +  * Walks over detected online memory ranges below usable limit.
 67  73  */
 68      - #define for_each_mem_detect_block(i, p_start, p_end) \
 69      - 	for (i = 0, __get_mem_detect_block(i, p_start, p_end); \
 70      - 	     i < mem_detect.count; \
 71      - 	     i++, __get_mem_detect_block(i, p_start, p_end))
     74 + #define for_each_mem_detect_usable_block(i, p_start, p_end) \
     75 + 	for (i = 0; !__get_mem_detect_block(i, p_start, p_end, true); i++)
     76 + 
     77 + /* Walks over all detected online memory ranges disregarding usable limit. */
     78 + #define for_each_mem_detect_block(i, p_start, p_end) \
     79 + 	for (i = 0; !__get_mem_detect_block(i, p_start, p_end, false); i++)
     80 + 
     81 + static inline unsigned long get_mem_detect_usable_total(void)
     82 + {
     83 + 	unsigned long start, end, total = 0;
     84 + 	int i;
     85 + 
     86 + 	for_each_mem_detect_usable_block(i, &start, &end)
     87 + 		total += end - start;
     88 + 
     89 + 	return total;
     90 + }
 72  91 
 73  92 static inline void get_mem_detect_reserved(unsigned long *start,
 74  93 					   unsigned long *size)
···
105  84 	unsigned long start;
106  85 	unsigned long end;
107  86 
     87 + 	if (mem_detect.usable)
     88 + 		return mem_detect.usable;
108  89 	if (mem_detect.count) {
109      - 		__get_mem_detect_block(mem_detect.count - 1, &start, &end);
     90 + 		__get_mem_detect_block(mem_detect.count - 1, &start, &end, false);
110  91 		return end;
111  92 	}
112  93 	return 0;
+68 -1
arch/s390/include/asm/pgtable.h
···
  23   23 #include <asm/uv.h>
  24   24 
  25   25 extern pgd_t swapper_pg_dir[];
       26 + extern pgd_t invalid_pg_dir[];
  26   27 extern void paging_init(void);
  27   28 extern unsigned long s390_invalid_asce;
  28   29 
···
 182  181 #define _PAGE_SOFT_DIRTY 0x000
 183  182 #endif
 184  183 
      184 + #define _PAGE_SW_BITS	0xffUL		/* All SW bits */
      185 + 
 185  186 #define _PAGE_SWP_EXCLUSIVE _PAGE_LARGE	/* SW pte exclusive swap bit */
 186  187 
 187  188 /* Set of bits not changed in pte_modify */
 188  189 #define _PAGE_CHG_MASK		(PAGE_MASK | _PAGE_SPECIAL | _PAGE_DIRTY | \
 189  190 				 _PAGE_YOUNG | _PAGE_SOFT_DIRTY)
      191 + 
      192 + /*
      193 +  * Mask of bits that must not be changed with RDP. Allow only _PAGE_PROTECT
      194 +  * HW bit and all SW bits.
      195 +  */
      196 + #define _PAGE_RDP_MASK		~(_PAGE_PROTECT | _PAGE_SW_BITS)
 190  197 
 191  198 /*
 192  199  * handle_pte_fault uses pte_present and pte_none to find out the pte type
···
 486  477 				 _REGION3_ENTRY_YOUNG | \
 487  478 				 _REGION_ENTRY_PROTECT | \
 488  479 				 _REGION_ENTRY_NOEXEC)
      480 + #define REGION3_KERNEL_EXEC __pgprot(_REGION_ENTRY_TYPE_R3 | \
      481 + 				 _REGION3_ENTRY_LARGE | \
      482 + 				 _REGION3_ENTRY_READ | \
      483 + 				 _REGION3_ENTRY_WRITE | \
      484 + 				 _REGION3_ENTRY_YOUNG | \
      485 + 				 _REGION3_ENTRY_DIRTY)
 489  486 
 490  487 static inline bool mm_p4d_folded(struct mm_struct *mm)
 491  488 {
···
1060 1045 #define IPTE_NODAT	0x400
1061 1046 #define IPTE_GUEST_ASCE	0x800
1062 1047 
     1048 + static __always_inline void __ptep_rdp(unsigned long addr, pte_t *ptep,
     1049 + 				       unsigned long opt, unsigned long asce,
     1050 + 				       int local)
     1051 + {
     1052 + 	unsigned long pto;
     1053 + 
     1054 + 	pto = __pa(ptep) & ~(PTRS_PER_PTE * sizeof(pte_t) - 1);
     1055 + 	asm volatile(".insn rrf,0xb98b0000,%[r1],%[r2],%[asce],%[m4]"
     1056 + 		     : "+m" (*ptep)
     1057 + 		     : [r1] "a" (pto), [r2] "a" ((addr & PAGE_MASK) | opt),
     1058 + 		       [asce] "a" (asce), [m4] "i" (local));
     1059 + }
     1060 + 
1063 1061 static __always_inline void __ptep_ipte(unsigned long address, pte_t *ptep,
1064 1062 					unsigned long opt, unsigned long asce,
1065 1063 					int local)
···
1223 1195 	ptep_xchg_lazy(mm, addr, ptep, pte_wrprotect(pte));
1224 1196 }
1225 1197 
     1198 + /*
     1199 +  * Check if PTEs only differ in _PAGE_PROTECT HW bit, but also allow SW PTE
     1200 +  * bits in the comparison. Those might change e.g. because of dirty and young
     1201 +  * tracking.
     1202 +  */
     1203 + static inline int pte_allow_rdp(pte_t old, pte_t new)
     1204 + {
     1205 + 	/*
     1206 + 	 * Only allow changes from RO to RW
     1207 + 	 */
     1208 + 	if (!(pte_val(old) & _PAGE_PROTECT) || pte_val(new) & _PAGE_PROTECT)
     1209 + 		return 0;
     1210 + 
     1211 + 	return (pte_val(old) & _PAGE_RDP_MASK) == (pte_val(new) & _PAGE_RDP_MASK);
     1212 + }
     1213 + 
     1214 + static inline void flush_tlb_fix_spurious_fault(struct vm_area_struct *vma,
     1215 + 						unsigned long address)
     1216 + {
     1217 + 	/*
     1218 + 	 * RDP might not have propagated the PTE protection reset to all CPUs,
     1219 + 	 * so there could be spurious TLB protection faults.
     1220 + 	 * NOTE: This will also be called when a racing pagetable update on
     1221 + 	 * another thread already installed the correct PTE. Both cases cannot
     1222 + 	 * really be distinguished.
     1223 + 	 * Therefore, only do the local TLB flush when RDP can be used, to avoid
     1224 + 	 * unnecessary overhead.
     1225 + 	 */
     1226 + 	if (MACHINE_HAS_RDP)
     1227 + 		asm volatile("ptlb" : : : "memory");
     1228 + }
     1229 + #define flush_tlb_fix_spurious_fault flush_tlb_fix_spurious_fault
     1230 + 
     1231 + void ptep_reset_dat_prot(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
     1232 + 			 pte_t new);
     1233 + 
1226 1234 #define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
1227 1235 static inline int ptep_set_access_flags(struct vm_area_struct *vma,
1228 1236 					unsigned long addr, pte_t *ptep,
···
1266 1202 {
1267 1203 	if (pte_same(*ptep, entry))
1268 1204 		return 0;
1269      - 	ptep_xchg_direct(vma->vm_mm, addr, ptep, entry);
     1205 + 	if (MACHINE_HAS_RDP && !mm_has_pgste(vma->vm_mm) && pte_allow_rdp(*ptep, entry))
     1206 + 		ptep_reset_dat_prot(vma->vm_mm, addr, ptep, entry);
     1207 + 	else
     1208 + 		ptep_xchg_direct(vma->vm_mm, addr, ptep, entry);
1270 1209 	return 1;
1271 1210 }
+23 -6
arch/s390/include/asm/processor.h
···
44 44 
45 45 typedef long (*sys_call_ptr_t)(struct pt_regs *regs);
46 46 
47    - static inline void set_cpu_flag(int flag)
   47 + static __always_inline void set_cpu_flag(int flag)
48 48 {
49 49 	S390_lowcore.cpu_flags |= (1UL << flag);
50 50 }
51 51 
52    - static inline void clear_cpu_flag(int flag)
   52 + static __always_inline void clear_cpu_flag(int flag)
53 53 {
54 54 	S390_lowcore.cpu_flags &= ~(1UL << flag);
55 55 }
56 56 
57    - static inline int test_cpu_flag(int flag)
   57 + static __always_inline bool test_cpu_flag(int flag)
58 58 {
59    - 	return !!(S390_lowcore.cpu_flags & (1UL << flag));
   59 + 	return S390_lowcore.cpu_flags & (1UL << flag);
   60 + }
   61 + 
   62 + static __always_inline bool test_and_set_cpu_flag(int flag)
   63 + {
   64 + 	if (test_cpu_flag(flag))
   65 + 		return true;
   66 + 	set_cpu_flag(flag);
   67 + 	return false;
   68 + }
   69 + 
   70 + static __always_inline bool test_and_clear_cpu_flag(int flag)
   71 + {
   72 + 	if (!test_cpu_flag(flag))
   73 + 		return false;
   74 + 	clear_cpu_flag(flag);
   75 + 	return true;
60 76 }
61 77 
62 78 /*
63 79  * Test CIF flag of another CPU. The caller needs to ensure that
64 80  * CPU hotplug can not happen, e.g. by disabling preemption.
65 81  */
66    - static inline int test_cpu_flag_of(int flag, int cpu)
   82 + static __always_inline bool test_cpu_flag_of(int flag, int cpu)
67 83 {
68 84 	struct lowcore *lc = lowcore_ptr[cpu];
69    - 	return !!(lc->cpu_flags & (1UL << flag));
   85 + 
   86 + 	return lc->cpu_flags & (1UL << flag);
70 87 }
71 88 
72 89 #define arch_needs_cpu() test_cpu_flag(CIF_NOHZ_DELAY)
+1 -1
arch/s390/include/asm/ptrace.h
···
26 26 #ifndef __ASSEMBLY__
27 27 
28 28 #define PSW_KERNEL_BITS	(PSW_DEFAULT_KEY | PSW_MASK_BASE | PSW_ASC_HOME | \
29    - 			 PSW_MASK_EA | PSW_MASK_BA)
   29 + 			 PSW_MASK_EA | PSW_MASK_BA | PSW_MASK_DAT)
30 30 #define PSW_USER_BITS	(PSW_MASK_DAT | PSW_MASK_IO | PSW_MASK_EXT | \
31 31 			 PSW_DEFAULT_KEY | PSW_MASK_BASE | PSW_MASK_MCHECK | \
32 32 			 PSW_MASK_PSTATE | PSW_ASC_PRIMARY)
+6
arch/s390/include/asm/setup.h
··· 34 34 #define MACHINE_FLAG_GS BIT(16) 35 35 #define MACHINE_FLAG_SCC BIT(17) 36 36 #define MACHINE_FLAG_PCI_MIO BIT(18) 37 + #define MACHINE_FLAG_RDP BIT(19) 37 38 38 39 #define LPP_MAGIC BIT(31) 39 40 #define LPP_PID_MASK _AC(0xffffffff, UL) ··· 74 73 75 74 extern int noexec_disabled; 76 75 extern unsigned long ident_map_size; 76 + extern unsigned long pgalloc_pos; 77 + extern unsigned long pgalloc_end; 78 + extern unsigned long pgalloc_low; 79 + extern unsigned long __amode31_base; 77 80 78 81 /* The Write Back bit position in the physaddr is given by the SLPC PCI */ 79 82 extern unsigned long mio_wb_bit_mask; ··· 100 95 #define MACHINE_HAS_GS (S390_lowcore.machine_flags & MACHINE_FLAG_GS) 101 96 #define MACHINE_HAS_SCC (S390_lowcore.machine_flags & MACHINE_FLAG_SCC) 102 97 #define MACHINE_HAS_PCI_MIO (S390_lowcore.machine_flags & MACHINE_FLAG_PCI_MIO) 98 + #define MACHINE_HAS_RDP (S390_lowcore.machine_flags & MACHINE_FLAG_RDP) 103 99 104 100 /* 105 101 * Console mode. Override with conmode=
+65 -79
arch/s390/include/asm/syscall_wrapper.h
···
  7   7 #ifndef _ASM_S390_SYSCALL_WRAPPER_H
  8   8 #define _ASM_S390_SYSCALL_WRAPPER_H
  9   9 
 10      - #define __SC_TYPE(t, a) t
 11      - 
 12      - #define SYSCALL_PT_ARG6(regs, m, t1, t2, t3, t4, t5, t6)\
 13      - 	SYSCALL_PT_ARG5(regs, m, t1, t2, t3, t4, t5),	\
 14      - 	m(t6, (regs->gprs[7]))
 15      - 
 16      - #define SYSCALL_PT_ARG5(regs, m, t1, t2, t3, t4, t5)	\
 17      - 	SYSCALL_PT_ARG4(regs, m, t1, t2, t3, t4),	\
 18      - 	m(t5, (regs->gprs[6]))
 19      - 
 20      - #define SYSCALL_PT_ARG4(regs, m, t1, t2, t3, t4)	\
 21      - 	SYSCALL_PT_ARG3(regs, m, t1, t2, t3),		\
 22      - 	m(t4, (regs->gprs[5]))
 23      - 
 24      - #define SYSCALL_PT_ARG3(regs, m, t1, t2, t3)		\
 25      - 	SYSCALL_PT_ARG2(regs, m, t1, t2),		\
 26      - 	m(t3, (regs->gprs[4]))
 27      - 
 28      - #define SYSCALL_PT_ARG2(regs, m, t1, t2)		\
 29      - 	SYSCALL_PT_ARG1(regs, m, t1),			\
 30      - 	m(t2, (regs->gprs[3]))
 31      - 
 32      - #define SYSCALL_PT_ARG1(regs, m, t1)			\
 33      - 	m(t1, (regs->orig_gpr2))
 34      - 
 35      - #define SYSCALL_PT_ARGS(x, ...) SYSCALL_PT_ARG##x(__VA_ARGS__)
     10 + /* Mapping of registers to parameters for syscalls */
     11 + #define SC_S390_REGS_TO_ARGS(x, ...)					\
     12 + 	__MAP(x, __SC_ARGS						\
     13 + 	      ,, regs->orig_gpr2,, regs->gprs[3],, regs->gprs[4]	\
     14 + 	      ,, regs->gprs[5],, regs->gprs[6],, regs->gprs[7])
 36  15 
 37  16 #ifdef CONFIG_COMPAT
 38      - #define __SC_COMPAT_TYPE(t, a) \
 39      - 	__typeof(__builtin_choose_expr(sizeof(t) > 4, 0L, (t)0)) a
 40  17 
 41  18 #define __SC_COMPAT_CAST(t, a)						\
 42  19 ({									\
···
 33  56 	(t)__ReS;							\
 34  57 })
 35  58 
 36      - #define __S390_SYS_STUBx(x, name, ...)					\
 37      - 	long __s390_sys##name(struct pt_regs *regs);			\
 38      - 	ALLOW_ERROR_INJECTION(__s390_sys##name, ERRNO);			\
 39      - 	long __s390_sys##name(struct pt_regs *regs)			\
 40      - 	{								\
 41      - 		long ret = __do_sys##name(SYSCALL_PT_ARGS(x, regs,	\
 42      - 			__SC_COMPAT_CAST, __MAP(x, __SC_TYPE, __VA_ARGS__))); \
 43      - 		__MAP(x,__SC_TEST,__VA_ARGS__);				\
 44      - 		return ret;						\
 45      - 	}
 46      - 
 47  59 /*
 48  60  * To keep the naming coherent, re-define SYSCALL_DEFINE0 to create an alias
 49  61  * named __s390x_sys_*()
 50  62  */
 51  63 #define COMPAT_SYSCALL_DEFINE0(sname)					\
 52      - 	SYSCALL_METADATA(_##sname, 0);					\
 53  64 	long __s390_compat_sys_##sname(void);				\
 54  65 	ALLOW_ERROR_INJECTION(__s390_compat_sys_##sname, ERRNO);	\
 55  66 	long __s390_compat_sys_##sname(void)
 56  67 
 57  68 #define SYSCALL_DEFINE0(sname)						\
 58  69 	SYSCALL_METADATA(_##sname, 0);					\
     70 + 	long __s390_sys_##sname(void);					\
     71 + 	ALLOW_ERROR_INJECTION(__s390_sys_##sname, ERRNO);		\
 59  72 	long __s390x_sys_##sname(void);					\
 60  73 	ALLOW_ERROR_INJECTION(__s390x_sys_##sname, ERRNO);		\
     74 + 	static inline long __do_sys_##sname(void);			\
 61  75 	long __s390_sys_##sname(void)					\
 62      - 		__attribute__((alias(__stringify(__s390x_sys_##sname)))); \
 63      - 	long __s390x_sys_##sname(void)
     76 + 	{								\
     77 + 		return __do_sys_##sname();				\
     78 + 	}								\
     79 + 	long __s390x_sys_##sname(void)					\
     80 + 	{								\
     81 + 		return __do_sys_##sname();				\
     82 + 	}								\
     83 + 	static inline long __do_sys_##sname(void)
 64  84 
 65  85 #define COND_SYSCALL(name)						\
 66  86 	cond_syscall(__s390x_sys_##name);				\
···
 68  94 	SYSCALL_ALIAS(__s390_sys_##name, sys_ni_posix_timers)
 69  95 
 70  96 #define COMPAT_SYSCALL_DEFINEx(x, name, ...)				\
 71      - 	__diag_push();							\
 72      - 	__diag_ignore(GCC, 8, "-Wattribute-alias",			\
 73      - 		      "Type aliasing is used to sanitize syscall arguments"); \
 74  97 	long __s390_compat_sys##name(struct pt_regs *regs);		\
 75      - 	long __s390_compat_sys##name(struct pt_regs *regs)		\
 76      - 		__attribute__((alias(__stringify(__se_compat_sys##name)))); \
 77  98 	ALLOW_ERROR_INJECTION(__s390_compat_sys##name, ERRNO);		\
 78      - 	static inline long __do_compat_sys##name(__MAP(x,__SC_DECL,__VA_ARGS__)); \
 79      - 	long __se_compat_sys##name(struct pt_regs *regs);		\
 80      - 	long __se_compat_sys##name(struct pt_regs *regs)		\
     99 + 	static inline long __se_compat_sys##name(__MAP(x, __SC_LONG, __VA_ARGS__)); \
    100 + 	static inline long __do_compat_sys##name(__MAP(x, __SC_DECL, __VA_ARGS__)); \
    101 + 	long __s390_compat_sys##name(struct pt_regs *regs)		\
 81 102 	{								\
 82      - 		long ret = __do_compat_sys##name(SYSCALL_PT_ARGS(x, regs, __SC_DELOUSE, \
 83      - 			__MAP(x, __SC_TYPE, __VA_ARGS__)));		\
 84      - 		__MAP(x,__SC_TEST,__VA_ARGS__);				\
 85      - 		return ret;						\
    103 + 		return __se_compat_sys##name(SC_S390_REGS_TO_ARGS(x, __VA_ARGS__)); \
 86 104 	}								\
 87      - 	__diag_pop();							\
 88      - 	static inline long __do_compat_sys##name(__MAP(x,__SC_DECL,__VA_ARGS__))
    105 + 	static inline long __se_compat_sys##name(__MAP(x, __SC_LONG, __VA_ARGS__)) \
    106 + 	{								\
    107 + 		__MAP(x, __SC_TEST, __VA_ARGS__);			\
    108 + 		return __do_compat_sys##name(__MAP(x, __SC_DELOUSE, __VA_ARGS__)); \
    109 + 	}								\
    110 + 	static inline long __do_compat_sys##name(__MAP(x, __SC_DECL, __VA_ARGS__))
 89 111 
 90 112 /*
 91 113  * As some compat syscalls may not be implemented, we need to expand
···
 94 124 #define COMPAT_SYS_NI(name)						\
 95 125 	SYSCALL_ALIAS(__s390_compat_sys_##name, sys_ni_posix_timers)
 96 126 
    127 + #define __S390_SYS_STUBx(x, name, ...)					\
    128 + 	long __s390_sys##name(struct pt_regs *regs);			\
    129 + 	ALLOW_ERROR_INJECTION(__s390_sys##name, ERRNO);			\
    130 + 	static inline long ___se_sys##name(__MAP(x, __SC_LONG, __VA_ARGS__)); \
    131 + 	long __s390_sys##name(struct pt_regs *regs)			\
    132 + 	{								\
    133 + 		return ___se_sys##name(SC_S390_REGS_TO_ARGS(x, __VA_ARGS__)); \
    134 + 	}								\
    135 + 	static inline long ___se_sys##name(__MAP(x, __SC_LONG, __VA_ARGS__)) \
    136 + 	{								\
    137 + 		__MAP(x, __SC_TEST, __VA_ARGS__);			\
    138 + 		return __do_sys##name(__MAP(x, __SC_COMPAT_CAST, __VA_ARGS__)); \
    139 + 	}
 97      - #else /* CONFIG_COMPAT */
 98 140 
 99      - #define __S390_SYS_STUBx(x, fullname, name, ...)
    141 + #else /* CONFIG_COMPAT */
100 142 
101 143 #define SYSCALL_DEFINE0(sname)						\
102 144 	SYSCALL_METADATA(_##sname, 0);					\
103 145 	long __s390x_sys_##sname(void);					\
104 146 	ALLOW_ERROR_INJECTION(__s390x_sys_##sname, ERRNO);		\
105      - 	long __s390x_sys_##sname(void)
    147 + 	static inline long __do_sys_##sname(void);			\
    148 + 	long __s390x_sys_##sname(void)					\
    149 + 	{								\
    150 + 		return __do_sys_##sname();				\
    151 + 	}								\
    152 + 	static inline long __do_sys_##sname(void)
106 153 
107 154 #define COND_SYSCALL(name)						\
108 155 	cond_syscall(__s390x_sys_##name)
109 156 
110 157 #define SYS_NI(name)							\
111      - 	SYSCALL_ALIAS(__s390x_sys_##name, sys_ni_posix_timers);
    158 + 	SYSCALL_ALIAS(__s390x_sys_##name, sys_ni_posix_timers)
    159 + 
    160 + #define __S390_SYS_STUBx(x, fullname, name, ...)
112 161 
113 162 #endif /* CONFIG_COMPAT */
114 163 
115      - #define __SYSCALL_DEFINEx(x, name, ...)					\
116      - 	__diag_push();							\
117      - 	__diag_ignore(GCC, 8, "-Wattribute-alias",			\
118      - 		      "Type aliasing is used to sanitize syscall arguments"); \
119      - 	long __s390x_sys##name(struct pt_regs *regs)			\
120      - 		__attribute__((alias(__stringify(__se_sys##name))));	\
121      - 	ALLOW_ERROR_INJECTION(__s390x_sys##name, ERRNO);		\
122      - 	static inline long __do_sys##name(__MAP(x,__SC_DECL,__VA_ARGS__)); \
123      - 	long __se_sys##name(struct pt_regs *regs);			\
124      - 	__S390_SYS_STUBx(x, name, __VA_ARGS__)				\
125      - 	long __se_sys##name(struct pt_regs *regs)			\
126      - 	{								\
127      - 		long ret = __do_sys##name(SYSCALL_PT_ARGS(x, regs,	\
128      - 			__SC_CAST, __MAP(x, __SC_TYPE, __VA_ARGS__)));	\
129      - 		__MAP(x,__SC_TEST,__VA_ARGS__);				\
130      - 		return ret;						\
131      - 	}								\
132      - 	__diag_pop();							\
133      - 	static inline long __do_sys##name(__MAP(x,__SC_DECL,__VA_ARGS__))
    164 + #define __SYSCALL_DEFINEx(x, name, ...)					\
    165 + 	long __s390x_sys##name(struct pt_regs *regs);			\
    166 + 	ALLOW_ERROR_INJECTION(__s390x_sys##name, ERRNO);		\
    167 + 	static inline long __se_sys##name(__MAP(x, __SC_LONG, __VA_ARGS__)); \
    168 + 	static inline long __do_sys##name(__MAP(x, __SC_DECL, __VA_ARGS__)); \
    169 + 	__S390_SYS_STUBx(x, name, __VA_ARGS__);				\
    170 + 	long __s390x_sys##name(struct pt_regs *regs)			\
    171 + 	{								\
    172 + 		return __se_sys##name(SC_S390_REGS_TO_ARGS(x, __VA_ARGS__)); \
    173 + 	}								\
    174 + 	static inline long __se_sys##name(__MAP(x, __SC_LONG, __VA_ARGS__)) \
    175 + 	{								\
    176 + 		__MAP(x, __SC_TEST, __VA_ARGS__);			\
    177 + 		return __do_sys##name(__MAP(x, __SC_CAST, __VA_ARGS__)); \
    178 + 	}								\
    179 + 	static inline long __do_sys##name(__MAP(x, __SC_DECL, __VA_ARGS__))
134 180 
135 181 #endif /* _ASM_S390_SYSCALL_WRAPPER_H */
+208
arch/s390/include/asm/uaccess.h
··· 390 390 goto err_label; \ 391 391 } while (0) 392 392 393 + void __cmpxchg_user_key_called_with_bad_pointer(void); 394 + 395 + #define CMPXCHG_USER_KEY_MAX_LOOPS 128 396 + 397 + static __always_inline int __cmpxchg_user_key(unsigned long address, void *uval, 398 + __uint128_t old, __uint128_t new, 399 + unsigned long key, int size) 400 + { 401 + int rc = 0; 402 + 403 + switch (size) { 404 + case 1: { 405 + unsigned int prev, shift, mask, _old, _new; 406 + unsigned long count; 407 + 408 + shift = (3 ^ (address & 3)) << 3; 409 + address ^= address & 3; 410 + _old = ((unsigned int)old & 0xff) << shift; 411 + _new = ((unsigned int)new & 0xff) << shift; 412 + mask = ~(0xff << shift); 413 + asm volatile( 414 + " spka 0(%[key])\n" 415 + " sacf 256\n" 416 + " llill %[count],%[max_loops]\n" 417 + "0: l %[prev],%[address]\n" 418 + "1: nr %[prev],%[mask]\n" 419 + " xilf %[mask],0xffffffff\n" 420 + " or %[new],%[prev]\n" 421 + " or %[prev],%[tmp]\n" 422 + "2: lr %[tmp],%[prev]\n" 423 + "3: cs %[prev],%[new],%[address]\n" 424 + "4: jnl 5f\n" 425 + " xr %[tmp],%[prev]\n" 426 + " xr %[new],%[tmp]\n" 427 + " nr %[tmp],%[mask]\n" 428 + " jnz 5f\n" 429 + " brct %[count],2b\n" 430 + "5: sacf 768\n" 431 + " spka %[default_key]\n" 432 + EX_TABLE_UA_LOAD_REG(0b, 5b, %[rc], %[prev]) 433 + EX_TABLE_UA_LOAD_REG(1b, 5b, %[rc], %[prev]) 434 + EX_TABLE_UA_LOAD_REG(3b, 5b, %[rc], %[prev]) 435 + EX_TABLE_UA_LOAD_REG(4b, 5b, %[rc], %[prev]) 436 + : [rc] "+&d" (rc), 437 + [prev] "=&d" (prev), 438 + [address] "+Q" (*(int *)address), 439 + [tmp] "+&d" (_old), 440 + [new] "+&d" (_new), 441 + [mask] "+&d" (mask), 442 + [count] "=a" (count) 443 + : [key] "%[count]" (key << 4), 444 + [default_key] "J" (PAGE_DEFAULT_KEY), 445 + [max_loops] "J" (CMPXCHG_USER_KEY_MAX_LOOPS) 446 + : "memory", "cc"); 447 + *(unsigned char *)uval = prev >> shift; 448 + if (!count) 449 + rc = -EAGAIN; 450 + return rc; 451 + } 452 + case 2: { 453 + unsigned int prev, shift, mask, _old, _new; 454 + unsigned long count; 455 
+ 456 + shift = (2 ^ (address & 2)) << 3; 457 + address ^= address & 2; 458 + _old = ((unsigned int)old & 0xffff) << shift; 459 + _new = ((unsigned int)new & 0xffff) << shift; 460 + mask = ~(0xffff << shift); 461 + asm volatile( 462 + " spka 0(%[key])\n" 463 + " sacf 256\n" 464 + " llill %[count],%[max_loops]\n" 465 + "0: l %[prev],%[address]\n" 466 + "1: nr %[prev],%[mask]\n" 467 + " xilf %[mask],0xffffffff\n" 468 + " or %[new],%[prev]\n" 469 + " or %[prev],%[tmp]\n" 470 + "2: lr %[tmp],%[prev]\n" 471 + "3: cs %[prev],%[new],%[address]\n" 472 + "4: jnl 5f\n" 473 + " xr %[tmp],%[prev]\n" 474 + " xr %[new],%[tmp]\n" 475 + " nr %[tmp],%[mask]\n" 476 + " jnz 5f\n" 477 + " brct %[count],2b\n" 478 + "5: sacf 768\n" 479 + " spka %[default_key]\n" 480 + EX_TABLE_UA_LOAD_REG(0b, 5b, %[rc], %[prev]) 481 + EX_TABLE_UA_LOAD_REG(1b, 5b, %[rc], %[prev]) 482 + EX_TABLE_UA_LOAD_REG(3b, 5b, %[rc], %[prev]) 483 + EX_TABLE_UA_LOAD_REG(4b, 5b, %[rc], %[prev]) 484 + : [rc] "+&d" (rc), 485 + [prev] "=&d" (prev), 486 + [address] "+Q" (*(int *)address), 487 + [tmp] "+&d" (_old), 488 + [new] "+&d" (_new), 489 + [mask] "+&d" (mask), 490 + [count] "=a" (count) 491 + : [key] "%[count]" (key << 4), 492 + [default_key] "J" (PAGE_DEFAULT_KEY), 493 + [max_loops] "J" (CMPXCHG_USER_KEY_MAX_LOOPS) 494 + : "memory", "cc"); 495 + *(unsigned short *)uval = prev >> shift; 496 + if (!count) 497 + rc = -EAGAIN; 498 + return rc; 499 + } 500 + case 4: { 501 + unsigned int prev = old; 502 + 503 + asm volatile( 504 + " spka 0(%[key])\n" 505 + " sacf 256\n" 506 + "0: cs %[prev],%[new],%[address]\n" 507 + "1: sacf 768\n" 508 + " spka %[default_key]\n" 509 + EX_TABLE_UA_LOAD_REG(0b, 1b, %[rc], %[prev]) 510 + EX_TABLE_UA_LOAD_REG(1b, 1b, %[rc], %[prev]) 511 + : [rc] "+&d" (rc), 512 + [prev] "+&d" (prev), 513 + [address] "+Q" (*(int *)address) 514 + : [new] "d" ((unsigned int)new), 515 + [key] "a" (key << 4), 516 + [default_key] "J" (PAGE_DEFAULT_KEY) 517 + : "memory", "cc"); 518 + *(unsigned int *)uval = prev; 
519 + return rc; 520 + } 521 + case 8: { 522 + unsigned long prev = old; 523 + 524 + asm volatile( 525 + " spka 0(%[key])\n" 526 + " sacf 256\n" 527 + "0: csg %[prev],%[new],%[address]\n" 528 + "1: sacf 768\n" 529 + " spka %[default_key]\n" 530 + EX_TABLE_UA_LOAD_REG(0b, 1b, %[rc], %[prev]) 531 + EX_TABLE_UA_LOAD_REG(1b, 1b, %[rc], %[prev]) 532 + : [rc] "+&d" (rc), 533 + [prev] "+&d" (prev), 534 + [address] "+QS" (*(long *)address) 535 + : [new] "d" ((unsigned long)new), 536 + [key] "a" (key << 4), 537 + [default_key] "J" (PAGE_DEFAULT_KEY) 538 + : "memory", "cc"); 539 + *(unsigned long *)uval = prev; 540 + return rc; 541 + } 542 + case 16: { 543 + __uint128_t prev = old; 544 + 545 + asm volatile( 546 + " spka 0(%[key])\n" 547 + " sacf 256\n" 548 + "0: cdsg %[prev],%[new],%[address]\n" 549 + "1: sacf 768\n" 550 + " spka %[default_key]\n" 551 + EX_TABLE_UA_LOAD_REGPAIR(0b, 1b, %[rc], %[prev]) 552 + EX_TABLE_UA_LOAD_REGPAIR(1b, 1b, %[rc], %[prev]) 553 + : [rc] "+&d" (rc), 554 + [prev] "+&d" (prev), 555 + [address] "+QS" (*(__int128_t *)address) 556 + : [new] "d" (new), 557 + [key] "a" (key << 4), 558 + [default_key] "J" (PAGE_DEFAULT_KEY) 559 + : "memory", "cc"); 560 + *(__uint128_t *)uval = prev; 561 + return rc; 562 + } 563 + } 564 + __cmpxchg_user_key_called_with_bad_pointer(); 565 + return rc; 566 + } 567 + 568 + /** 569 + * cmpxchg_user_key() - cmpxchg with user space target, honoring storage keys 570 + * @ptr: User space address of value to compare to @old and exchange with 571 + * @new. Must be aligned to sizeof(*@ptr). 572 + * @uval: Address where the old value of *@ptr is written to. 573 + * @old: Old value. Compared to the content pointed to by @ptr in order to 574 + * determine if the exchange occurs. The old value read from *@ptr is 575 + * written to *@uval. 576 + * @new: New value to place at *@ptr. 577 + * @key: Access key to use for checking storage key protection. 
578 + * 579 + * Perform a cmpxchg on a user space target, honoring storage key protection. 580 + * @key alone determines how key checking is performed, neither 581 + * storage-protection-override nor fetch-protection-override apply. 582 + * The caller must compare *@uval and @old to determine if values have been 583 + * exchanged. In case of an exception *@uval is set to zero. 584 + * 585 + * Return: 0: cmpxchg executed 586 + * -EFAULT: an exception happened when trying to access *@ptr 587 + * -EAGAIN: maxed out number of retries (byte and short only) 588 + */ 589 + #define cmpxchg_user_key(ptr, uval, old, new, key) \ 590 + ({ \ 591 + __typeof__(ptr) __ptr = (ptr); \ 592 + __typeof__(uval) __uval = (uval); \ 593 + \ 594 + BUILD_BUG_ON(sizeof(*(__ptr)) != sizeof(*(__uval))); \ 595 + might_fault(); \ 596 + __chk_user_ptr(__ptr); \ 597 + __cmpxchg_user_key((unsigned long)(__ptr), (void *)(__uval), \ 598 + (old), (new), (key), sizeof(*(__ptr))); \ 599 + }) 600 + 393 601 #endif /* __S390_UACCESS_H */
+6 -4
arch/s390/include/asm/unwind.h
··· 4 4 5 5 #include <linux/sched.h> 6 6 #include <linux/ftrace.h> 7 - #include <linux/kprobes.h> 7 + #include <linux/rethook.h> 8 8 #include <linux/llist.h> 9 9 #include <asm/ptrace.h> 10 10 #include <asm/stacktrace.h> ··· 43 43 bool error; 44 44 }; 45 45 46 - /* Recover the return address modified by kretprobe and ftrace_graph. */ 46 + /* Recover the return address modified by rethook and ftrace_graph. */ 47 47 static inline unsigned long unwind_recover_ret_addr(struct unwind_state *state, 48 48 unsigned long ip) 49 49 { 50 50 ip = ftrace_graph_ret_addr(state->task, &state->graph_idx, ip, (void *)state->sp); 51 - if (is_kretprobe_trampoline(ip)) 52 - ip = kretprobe_find_ret_addr(state->task, (void *)state->sp, &state->kr_cur); 51 + #ifdef CONFIG_RETHOOK 52 + if (is_rethook_trampoline(ip)) 53 + ip = rethook_find_ret_addr(state->task, state->sp, &state->kr_cur); 54 + #endif 53 55 return ip; 54 56 } 55 57
+25
arch/s390/include/uapi/asm/fs3270.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ 2 + #ifndef __ASM_S390_UAPI_FS3270_H 3 + #define __ASM_S390_UAPI_FS3270_H 4 + 5 + #include <linux/types.h> 6 + #include <asm/ioctl.h> 7 + 8 + /* ioctls for fullscreen 3270 */ 9 + #define TUBICMD _IO('3', 3) /* set ccw command for fs reads. */ 10 + #define TUBOCMD _IO('3', 4) /* set ccw command for fs writes. */ 11 + #define TUBGETI _IO('3', 7) /* get ccw command for fs reads. */ 12 + #define TUBGETO _IO('3', 8) /* get ccw command for fs writes. */ 13 + #define TUBGETMOD _IO('3', 13) /* get characteristics like model, cols, rows */ 14 + 15 + /* For TUBGETMOD */ 16 + struct raw3270_iocb { 17 + __u16 model; 18 + __u16 line_cnt; 19 + __u16 col_cnt; 20 + __u16 pf_cnt; 21 + __u16 re_cnt; 22 + __u16 map; 23 + }; 24 + 25 + #endif /* __ASM_S390_UAPI_FS3270_H */
+75
arch/s390/include/uapi/asm/raw3270.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ 2 + #ifndef __ASM_S390_UAPI_RAW3270_H 3 + #define __ASM_S390_UAPI_RAW3270_H 4 + 5 + /* Local Channel Commands */ 6 + #define TC_WRITE 0x01 /* Write */ 7 + #define TC_RDBUF 0x02 /* Read Buffer */ 8 + #define TC_EWRITE 0x05 /* Erase write */ 9 + #define TC_READMOD 0x06 /* Read modified */ 10 + #define TC_EWRITEA 0x0d /* Erase write alternate */ 11 + #define TC_WRITESF 0x11 /* Write structured field */ 12 + 13 + /* Buffer Control Orders */ 14 + #define TO_GE 0x08 /* Graphics Escape */ 15 + #define TO_SF 0x1d /* Start field */ 16 + #define TO_SBA 0x11 /* Set buffer address */ 17 + #define TO_IC 0x13 /* Insert cursor */ 18 + #define TO_PT 0x05 /* Program tab */ 19 + #define TO_RA 0x3c /* Repeat to address */ 20 + #define TO_SFE 0x29 /* Start field extended */ 21 + #define TO_EUA 0x12 /* Erase unprotected to address */ 22 + #define TO_MF 0x2c /* Modify field */ 23 + #define TO_SA 0x28 /* Set attribute */ 24 + 25 + /* Field Attribute Bytes */ 26 + #define TF_INPUT 0x40 /* Visible input */ 27 + #define TF_INPUTN 0x4c /* Invisible input */ 28 + #define TF_INMDT 0xc1 /* Visible, Set-MDT */ 29 + #define TF_LOG 0x60 30 + 31 + /* Character Attribute Bytes */ 32 + #define TAT_RESET 0x00 33 + #define TAT_FIELD 0xc0 34 + #define TAT_EXTHI 0x41 35 + #define TAT_FGCOLOR 0x42 36 + #define TAT_CHARS 0x43 37 + #define TAT_BGCOLOR 0x45 38 + #define TAT_TRANS 0x46 39 + 40 + /* Extended-Highlighting Bytes */ 41 + #define TAX_RESET 0x00 42 + #define TAX_BLINK 0xf1 43 + #define TAX_REVER 0xf2 44 + #define TAX_UNDER 0xf4 45 + 46 + /* Reset value */ 47 + #define TAR_RESET 0x00 48 + 49 + /* Color values */ 50 + #define TAC_RESET 0x00 51 + #define TAC_BLUE 0xf1 52 + #define TAC_RED 0xf2 53 + #define TAC_PINK 0xf3 54 + #define TAC_GREEN 0xf4 55 + #define TAC_TURQ 0xf5 56 + #define TAC_YELLOW 0xf6 57 + #define TAC_WHITE 0xf7 58 + #define TAC_DEFAULT 0x00 59 + 60 + /* Write Control Characters */ 61 + #define TW_NONE 0x40 /* No 
particular action */ 62 + #define TW_KR 0xc2 /* Keyboard restore */ 63 + #define TW_PLUSALARM 0x04 /* Add this bit for alarm */ 64 + 65 + #define RAW3270_FIRSTMINOR 1 /* First minor number */ 66 + #define RAW3270_MAXDEVS 255 /* Max number of 3270 devices */ 67 + 68 + #define AID_CLEAR 0x6d 69 + #define AID_ENTER 0x7d 70 + #define AID_PF3 0xf3 71 + #define AID_PF7 0xf7 72 + #define AID_PF8 0xf8 73 + #define AID_READ_PARTITION 0x88 74 + 75 + #endif /* __ASM_S390_UAPI_RAW3270_H */
+9 -6
arch/s390/include/uapi/asm/types.h
··· 12 12 13 13 #ifndef __ASSEMBLY__ 14 14 15 - /* A address type so that arithmetic can be done on it & it can be upgraded to 16 - 64 bit when necessary 17 - */ 18 - typedef unsigned long addr_t; 15 + typedef unsigned long addr_t; 19 16 typedef __signed__ long saddr_t; 20 17 21 18 typedef struct { 22 - __u32 u[4]; 23 - } __vector128; 19 + union { 20 + struct { 21 + __u64 high; 22 + __u64 low; 23 + }; 24 + __u32 u[4]; 25 + }; 26 + } __attribute__((packed, aligned(4))) __vector128; 24 27 25 28 #endif /* __ASSEMBLY__ */ 26 29
+2 -1
arch/s390/include/uapi/asm/zcrypt.h
··· 85 85 struct CPRBX { 86 86 __u16 cprb_len; /* CPRB length 220 */ 87 87 __u8 cprb_ver_id; /* CPRB version id. 0x02 */ 88 - __u8 _pad_000[3]; /* Alignment pad bytes */ 88 + __u8 ctfm; /* Command Type Filtering Mask */ 89 + __u8 pad_000[2]; /* Alignment pad bytes */ 89 90 __u8 func_id[2]; /* function id 0x5432 */ 90 91 __u8 cprb_flags[4]; /* Flags */ 91 92 __u32 req_parml; /* request parameter buffer len */
+2 -1
arch/s390/kernel/Makefile
··· 58 58 obj-$(CONFIG_KPROBES) += kprobes.o 59 59 obj-$(CONFIG_KPROBES) += kprobes_insn_page.o 60 60 obj-$(CONFIG_KPROBES) += mcount.o 61 + obj-$(CONFIG_RETHOOK) += rethook.o 61 62 obj-$(CONFIG_FUNCTION_TRACER) += ftrace.o 62 63 obj-$(CONFIG_FUNCTION_TRACER) += mcount.o 63 64 obj-$(CONFIG_CRASH_DUMP) += crash_dump.o ··· 70 69 71 70 obj-$(CONFIG_IMA_SECURE_AND_OR_TRUSTED_BOOT) += ima_arch.o 72 71 73 - obj-$(CONFIG_PERF_EVENTS) += perf_event.o perf_cpum_cf_common.o 72 + obj-$(CONFIG_PERF_EVENTS) += perf_event.o 74 73 obj-$(CONFIG_PERF_EVENTS) += perf_cpum_cf.o perf_cpum_sf.o 75 74 obj-$(CONFIG_PERF_EVENTS) += perf_cpum_cf_events.o perf_regs.o 76 75 obj-$(CONFIG_PERF_EVENTS) += perf_pai_crypto.o perf_pai_ext.o
-49
arch/s390/kernel/abs_lowcore.c
··· 3 3 #include <linux/pgtable.h> 4 4 #include <asm/abs_lowcore.h> 5 5 6 - #define ABS_LOWCORE_UNMAPPED 1 7 - #define ABS_LOWCORE_LAP_ON 2 8 - #define ABS_LOWCORE_IRQS_ON 4 9 - 10 6 unsigned long __bootdata_preserved(__abs_lowcore); 11 - bool __ro_after_init abs_lowcore_mapped; 12 7 13 8 int abs_lowcore_map(int cpu, struct lowcore *lc, bool alloc) 14 9 { ··· 43 48 vmem_unmap_4k_page(addr); 44 49 addr += PAGE_SIZE; 45 50 } 46 - } 47 - 48 - struct lowcore *get_abs_lowcore(unsigned long *flags) 49 - { 50 - unsigned long irq_flags; 51 - union ctlreg0 cr0; 52 - int cpu; 53 - 54 - *flags = 0; 55 - cpu = get_cpu(); 56 - if (abs_lowcore_mapped) { 57 - return ((struct lowcore *)__abs_lowcore) + cpu; 58 - } else { 59 - if (cpu != 0) 60 - panic("Invalid unmapped absolute lowcore access\n"); 61 - local_irq_save(irq_flags); 62 - if (!irqs_disabled_flags(irq_flags)) 63 - *flags |= ABS_LOWCORE_IRQS_ON; 64 - __ctl_store(cr0.val, 0, 0); 65 - if (cr0.lap) { 66 - *flags |= ABS_LOWCORE_LAP_ON; 67 - __ctl_clear_bit(0, 28); 68 - } 69 - *flags |= ABS_LOWCORE_UNMAPPED; 70 - return lowcore_ptr[0]; 71 - } 72 - } 73 - 74 - void put_abs_lowcore(struct lowcore *lc, unsigned long flags) 75 - { 76 - if (abs_lowcore_mapped) { 77 - if (flags) 78 - panic("Invalid mapped absolute lowcore release\n"); 79 - } else { 80 - if (smp_processor_id() != 0) 81 - panic("Invalid mapped absolute lowcore access\n"); 82 - if (!(flags & ABS_LOWCORE_UNMAPPED)) 83 - panic("Invalid unmapped absolute lowcore release\n"); 84 - if (flags & ABS_LOWCORE_LAP_ON) 85 - __ctl_set_bit(0, 28); 86 - if (flags & ABS_LOWCORE_IRQS_ON) 87 - local_irq_enable(); 88 - } 89 - put_cpu(); 90 51 }
+1 -1
arch/s390/kernel/cache.c
··· 46 46 #define CACHE_MAX_LEVEL 8 47 47 union cache_topology { 48 48 struct cache_info ci[CACHE_MAX_LEVEL]; 49 - unsigned long long raw; 49 + unsigned long raw; 50 50 }; 51 51 52 52 static const char * const cache_type_string[] = {
+2 -2
arch/s390/kernel/compat_signal.c
··· 139 139 /* Save vector registers to signal stack */ 140 140 if (MACHINE_HAS_VX) { 141 141 for (i = 0; i < __NUM_VXRS_LOW; i++) 142 - vxrs[i] = *((__u64 *)(current->thread.fpu.vxrs + i) + 1); 142 + vxrs[i] = current->thread.fpu.vxrs[i].low; 143 143 if (__copy_to_user(&sregs_ext->vxrs_low, vxrs, 144 144 sizeof(sregs_ext->vxrs_low)) || 145 145 __copy_to_user(&sregs_ext->vxrs_high, ··· 173 173 sizeof(sregs_ext->vxrs_high))) 174 174 return -EFAULT; 175 175 for (i = 0; i < __NUM_VXRS_LOW; i++) 176 - *((__u64 *)(current->thread.fpu.vxrs + i) + 1) = vxrs[i]; 176 + current->thread.fpu.vxrs[i].low = vxrs[i]; 177 177 } 178 178 return 0; 179 179 }
+1 -1
arch/s390/kernel/crash_dump.c
··· 110 110 111 111 /* Copy lower halves of vector registers 0-15 */ 112 112 for (i = 0; i < 16; i++) 113 - memcpy(&sa->vxrs_low[i], &vxrs[i].u[2], 8); 113 + sa->vxrs_low[i] = vxrs[i].low; 114 114 /* Copy vector registers 16-31 */ 115 115 memcpy(sa->vxrs_high, vxrs + 16, 16 * sizeof(__vector128)); 116 116 }
+26
arch/s390/kernel/diag.c
··· 35 35 [DIAG_STAT_X014] = { .code = 0x014, .name = "Spool File Services" }, 36 36 [DIAG_STAT_X044] = { .code = 0x044, .name = "Voluntary Timeslice End" }, 37 37 [DIAG_STAT_X064] = { .code = 0x064, .name = "NSS Manipulation" }, 38 + [DIAG_STAT_X08C] = { .code = 0x08c, .name = "Access 3270 Display Device Information" }, 38 39 [DIAG_STAT_X09C] = { .code = 0x09c, .name = "Relinquish Timeslice" }, 39 40 [DIAG_STAT_X0DC] = { .code = 0x0dc, .name = "Appldata Control" }, 40 41 [DIAG_STAT_X204] = { .code = 0x204, .name = "Logical-CPU Utilization" }, ··· 58 57 .diag26c = _diag26c_amode31, 59 58 .diag14 = _diag14_amode31, 60 59 .diag0c = _diag0c_amode31, 60 + .diag8c = _diag8c_amode31, 61 61 .diag308_reset = _diag308_reset_amode31 62 62 }; 63 63 64 64 static struct diag210 _diag210_tmp_amode31 __section(".amode31.data"); 65 65 struct diag210 __amode31_ref *__diag210_tmp_amode31 = &_diag210_tmp_amode31; 66 66 + 67 + static struct diag8c _diag8c_tmp_amode31 __section(".amode31.data"); 68 + static struct diag8c __amode31_ref *__diag8c_tmp_amode31 = &_diag8c_tmp_amode31; 66 69 67 70 static int show_diag_stat(struct seq_file *m, void *v) 68 71 { ··· 198 193 return ccode; 199 194 } 200 195 EXPORT_SYMBOL(diag210); 196 + 197 + /* 198 + * Diagnose 8c: Access 3270 Display Device Information 199 + */ 200 + int diag8c(struct diag8c *addr, struct ccw_dev_id *devno) 201 + { 202 + static DEFINE_SPINLOCK(diag8c_lock); 203 + unsigned long flags; 204 + int ccode; 205 + 206 + spin_lock_irqsave(&diag8c_lock, flags); 207 + 208 + diag_stat_inc(DIAG_STAT_X08C); 209 + ccode = diag_amode31_ops.diag8c(__diag8c_tmp_amode31, devno, sizeof(*addr)); 210 + 211 + *addr = *__diag8c_tmp_amode31; 212 + spin_unlock_irqrestore(&diag8c_lock, flags); 213 + 214 + return ccode; 215 + } 216 + EXPORT_SYMBOL(diag8c); 201 217 202 218 int diag224(void *ptr) 203 219 {
+4 -4
arch/s390/kernel/early.c
··· 18 18 #include <linux/uaccess.h> 19 19 #include <linux/kernel.h> 20 20 #include <asm/asm-extable.h> 21 + #include <linux/memblock.h> 21 22 #include <asm/diag.h> 22 23 #include <asm/ebcdic.h> 23 24 #include <asm/ipl.h> ··· 161 160 psw_t psw; 162 161 163 162 psw.addr = (unsigned long)early_pgm_check_handler; 164 - psw.mask = PSW_MASK_BASE | PSW_DEFAULT_KEY | PSW_MASK_EA | PSW_MASK_BA; 165 - if (IS_ENABLED(CONFIG_KASAN)) 166 - psw.mask |= PSW_MASK_DAT; 163 + psw.mask = PSW_KERNEL_BITS; 167 164 S390_lowcore.program_new_psw = psw; 168 165 S390_lowcore.preempt_count = INIT_PREEMPT_COUNT; 169 166 } ··· 226 227 S390_lowcore.machine_flags |= MACHINE_FLAG_PCI_MIO; 227 228 /* the control bit is set during PCI initialization */ 228 229 } 230 + if (test_facility(194)) 231 + S390_lowcore.machine_flags |= MACHINE_FLAG_RDP; 229 232 } 230 233 231 234 static inline void save_vector_registers(void) ··· 289 288 290 289 void __init startup_init(void) 291 290 { 292 - sclp_early_adjust_va(); 293 291 reset_tod_clock(); 294 292 check_image_bootable(); 295 293 time_early_init();
-6
arch/s390/kernel/entry.S
··· 137 137 lgr %r14,\reg 138 138 larl %r13,\start 139 139 slgr %r14,%r13 140 - #ifdef CONFIG_AS_IS_LLVM 141 140 clgfrl %r14,.Lrange_size\@ 142 - #else 143 - clgfi %r14,\end - \start 144 - #endif 145 141 jhe \outside_label 146 - #ifdef CONFIG_AS_IS_LLVM 147 142 .section .rodata, "a" 148 143 .align 4 149 144 .Lrange_size\@: 150 145 .long \end - \start 151 146 .previous 152 - #endif 153 147 .endm 154 148 155 149 .macro SIEEXIT
-1
arch/s390/kernel/entry.h
··· 73 73 #define __amode31_data __section(".amode31.data") 74 74 #define __amode31_ref __section(".amode31.refs") 75 75 extern long _start_amode31_refs[], _end_amode31_refs[]; 76 - extern unsigned long __amode31_base; 77 76 78 77 #endif /* _ENTRY_H */
+1
arch/s390/kernel/head64.S
··· 25 25 larl %r14,init_task 26 26 stg %r14,__LC_CURRENT 27 27 larl %r15,init_thread_union+THREAD_SIZE-STACK_FRAME_OVERHEAD-__PT_SIZE 28 + brasl %r14,sclp_early_adjust_va # allow sclp_early_printk 28 29 #ifdef CONFIG_KASAN 29 30 brasl %r14,kasan_early_init 30 31 #endif
+16 -71
arch/s390/kernel/idle.c
··· 24 24 void account_idle_time_irq(void) 25 25 { 26 26 struct s390_idle_data *idle = this_cpu_ptr(&s390_idle); 27 + unsigned long idle_time; 27 28 u64 cycles_new[8]; 28 29 int i; 29 30 30 - clear_cpu_flag(CIF_ENABLED_WAIT); 31 31 if (smp_cpu_mtid) { 32 32 stcctm(MT_DIAG, smp_cpu_mtid, cycles_new); 33 33 for (i = 0; i < smp_cpu_mtid; i++) 34 34 this_cpu_add(mt_cycles[i], cycles_new[i] - idle->mt_cycles_enter[i]); 35 35 } 36 36 37 - idle->clock_idle_exit = S390_lowcore.int_clock; 38 - idle->timer_idle_exit = S390_lowcore.sys_enter_timer; 37 + idle_time = S390_lowcore.int_clock - idle->clock_idle_enter; 39 38 40 39 S390_lowcore.steal_timer += idle->clock_idle_enter - S390_lowcore.last_update_clock; 41 - S390_lowcore.last_update_clock = idle->clock_idle_exit; 40 + S390_lowcore.last_update_clock = S390_lowcore.int_clock; 42 41 43 42 S390_lowcore.system_timer += S390_lowcore.last_update_timer - idle->timer_idle_enter; 44 - S390_lowcore.last_update_timer = idle->timer_idle_exit; 43 + S390_lowcore.last_update_timer = S390_lowcore.sys_enter_timer; 44 + 45 + /* Account time spent with enabled wait psw loaded as idle time. */ 46 + WRITE_ONCE(idle->idle_time, READ_ONCE(idle->idle_time) + idle_time); 47 + WRITE_ONCE(idle->idle_count, READ_ONCE(idle->idle_count) + 1); 48 + account_idle_time(cputime_to_nsecs(idle_time)); 45 49 } 46 50 47 - void arch_cpu_idle(void) 51 + void noinstr arch_cpu_idle(void) 48 52 { 49 53 struct s390_idle_data *idle = this_cpu_ptr(&s390_idle); 50 - unsigned long idle_time; 51 54 unsigned long psw_mask; 52 55 53 56 /* Wait for external, I/O or machine check interrupt. */ 54 - psw_mask = PSW_KERNEL_BITS | PSW_MASK_WAIT | PSW_MASK_DAT | 55 - PSW_MASK_IO | PSW_MASK_EXT | PSW_MASK_MCHECK; 57 + psw_mask = PSW_KERNEL_BITS | PSW_MASK_WAIT | 58 + PSW_MASK_IO | PSW_MASK_EXT | PSW_MASK_MCHECK; 56 59 clear_cpu_flag(CIF_NOHZ_DELAY); 57 60 58 61 /* psw_idle() returns with interrupts disabled. 
*/ 59 62 psw_idle(idle, psw_mask); 60 - 61 - /* Account time spent with enabled wait psw loaded as idle time. */ 62 - raw_write_seqcount_begin(&idle->seqcount); 63 - idle_time = idle->clock_idle_exit - idle->clock_idle_enter; 64 - idle->clock_idle_enter = idle->clock_idle_exit = 0ULL; 65 - idle->idle_time += idle_time; 66 - idle->idle_count++; 67 - account_idle_time(cputime_to_nsecs(idle_time)); 68 - raw_write_seqcount_end(&idle->seqcount); 69 63 } 70 64 71 65 static ssize_t show_idle_count(struct device *dev, 72 - struct device_attribute *attr, char *buf) 66 + struct device_attribute *attr, char *buf) 73 67 { 74 68 struct s390_idle_data *idle = &per_cpu(s390_idle, dev->id); 75 - unsigned long idle_count; 76 - unsigned int seq; 77 69 78 - do { 79 - seq = read_seqcount_begin(&idle->seqcount); 80 - idle_count = READ_ONCE(idle->idle_count); 81 - if (READ_ONCE(idle->clock_idle_enter)) 82 - idle_count++; 83 - } while (read_seqcount_retry(&idle->seqcount, seq)); 84 - return sprintf(buf, "%lu\n", idle_count); 70 + return sysfs_emit(buf, "%lu\n", READ_ONCE(idle->idle_count)); 85 71 } 86 72 DEVICE_ATTR(idle_count, 0444, show_idle_count, NULL); 87 73 88 74 static ssize_t show_idle_time(struct device *dev, 89 - struct device_attribute *attr, char *buf) 75 + struct device_attribute *attr, char *buf) 90 76 { 91 - unsigned long now, idle_time, idle_enter, idle_exit, in_idle; 92 77 struct s390_idle_data *idle = &per_cpu(s390_idle, dev->id); 93 - unsigned int seq; 94 78 95 - do { 96 - seq = read_seqcount_begin(&idle->seqcount); 97 - idle_time = READ_ONCE(idle->idle_time); 98 - idle_enter = READ_ONCE(idle->clock_idle_enter); 99 - idle_exit = READ_ONCE(idle->clock_idle_exit); 100 - } while (read_seqcount_retry(&idle->seqcount, seq)); 101 - in_idle = 0; 102 - now = get_tod_clock(); 103 - if (idle_enter) { 104 - if (idle_exit) { 105 - in_idle = idle_exit - idle_enter; 106 - } else if (now > idle_enter) { 107 - in_idle = now - idle_enter; 108 - } 109 - } 110 - idle_time += in_idle; 111 
- return sprintf(buf, "%lu\n", idle_time >> 12); 79 + return sysfs_emit(buf, "%lu\n", READ_ONCE(idle->idle_time) >> 12); 112 80 } 113 81 DEVICE_ATTR(idle_time_us, 0444, show_idle_time, NULL); 114 - 115 - u64 arch_cpu_idle_time(int cpu) 116 - { 117 - struct s390_idle_data *idle = &per_cpu(s390_idle, cpu); 118 - unsigned long now, idle_enter, idle_exit, in_idle; 119 - unsigned int seq; 120 - 121 - do { 122 - seq = read_seqcount_begin(&idle->seqcount); 123 - idle_enter = READ_ONCE(idle->clock_idle_enter); 124 - idle_exit = READ_ONCE(idle->clock_idle_exit); 125 - } while (read_seqcount_retry(&idle->seqcount, seq)); 126 - in_idle = 0; 127 - now = get_tod_clock(); 128 - if (idle_enter) { 129 - if (idle_exit) { 130 - in_idle = idle_exit - idle_enter; 131 - } else if (now > idle_enter) { 132 - in_idle = now - idle_enter; 133 - } 134 - } 135 - return cputime_to_nsecs(in_idle); 136 - } 137 82 138 83 void arch_cpu_idle_enter(void) 139 84 {
+25 -74
arch/s390/kernel/ipl.c
··· 593 593 &sys_ipl_type_attr.attr, 594 594 &sys_ipl_eckd_bootprog_attr.attr, 595 595 &sys_ipl_eckd_br_chr_attr.attr, 596 + &sys_ipl_ccw_loadparm_attr.attr, 596 597 &sys_ipl_device_attr.attr, 597 598 &sys_ipl_secure_attr.attr, 598 599 &sys_ipl_has_secure_attr.attr, ··· 889 888 return len; 890 889 } 891 890 892 - /* FCP wrapper */ 893 - static ssize_t reipl_fcp_loadparm_show(struct kobject *kobj, 894 - struct kobj_attribute *attr, char *page) 895 - { 896 - return reipl_generic_loadparm_show(reipl_block_fcp, page); 897 - } 891 + #define DEFINE_GENERIC_LOADPARM(name) \ 892 + static ssize_t reipl_##name##_loadparm_show(struct kobject *kobj, \ 893 + struct kobj_attribute *attr, char *page) \ 894 + { \ 895 + return reipl_generic_loadparm_show(reipl_block_##name, page); \ 896 + } \ 897 + static ssize_t reipl_##name##_loadparm_store(struct kobject *kobj, \ 898 + struct kobj_attribute *attr, \ 899 + const char *buf, size_t len) \ 900 + { \ 901 + return reipl_generic_loadparm_store(reipl_block_##name, buf, len); \ 902 + } \ 903 + static struct kobj_attribute sys_reipl_##name##_loadparm_attr = \ 904 + __ATTR(loadparm, 0644, reipl_##name##_loadparm_show, \ 905 + reipl_##name##_loadparm_store) 898 906 899 - static ssize_t reipl_fcp_loadparm_store(struct kobject *kobj, 900 - struct kobj_attribute *attr, 901 - const char *buf, size_t len) 902 - { 903 - return reipl_generic_loadparm_store(reipl_block_fcp, buf, len); 904 - } 905 - 906 - static struct kobj_attribute sys_reipl_fcp_loadparm_attr = 907 - __ATTR(loadparm, 0644, reipl_fcp_loadparm_show, 908 - reipl_fcp_loadparm_store); 907 + DEFINE_GENERIC_LOADPARM(fcp); 908 + DEFINE_GENERIC_LOADPARM(nvme); 909 + DEFINE_GENERIC_LOADPARM(ccw); 910 + DEFINE_GENERIC_LOADPARM(nss); 911 + DEFINE_GENERIC_LOADPARM(eckd); 909 912 910 913 static ssize_t reipl_fcp_clear_show(struct kobject *kobj, 911 914 struct kobj_attribute *attr, char *page) ··· 999 994 DEFINE_IPL_ATTR_RW(reipl_nvme, br_lba, "%lld\n", "%lld\n", 1000 995 
reipl_block_nvme->nvme.br_lba); 1001 996 1002 - /* nvme wrapper */ 1003 - static ssize_t reipl_nvme_loadparm_show(struct kobject *kobj, 1004 - struct kobj_attribute *attr, char *page) 1005 - { 1006 - return reipl_generic_loadparm_show(reipl_block_nvme, page); 1007 - } 1008 - 1009 - static ssize_t reipl_nvme_loadparm_store(struct kobject *kobj, 1010 - struct kobj_attribute *attr, 1011 - const char *buf, size_t len) 1012 - { 1013 - return reipl_generic_loadparm_store(reipl_block_nvme, buf, len); 1014 - } 1015 - 1016 - static struct kobj_attribute sys_reipl_nvme_loadparm_attr = 1017 - __ATTR(loadparm, 0644, reipl_nvme_loadparm_show, 1018 - reipl_nvme_loadparm_store); 1019 - 1020 997 static struct attribute *reipl_nvme_attrs[] = { 1021 998 &sys_reipl_nvme_fid_attr.attr, 1022 999 &sys_reipl_nvme_nsid_attr.attr, ··· 1033 1046 1034 1047 /* CCW reipl device attributes */ 1035 1048 DEFINE_IPL_CCW_ATTR_RW(reipl_ccw, device, reipl_block_ccw->ccw); 1036 - 1037 - /* NSS wrapper */ 1038 - static ssize_t reipl_nss_loadparm_show(struct kobject *kobj, 1039 - struct kobj_attribute *attr, char *page) 1040 - { 1041 - return reipl_generic_loadparm_show(reipl_block_nss, page); 1042 - } 1043 - 1044 - static ssize_t reipl_nss_loadparm_store(struct kobject *kobj, 1045 - struct kobj_attribute *attr, 1046 - const char *buf, size_t len) 1047 - { 1048 - return reipl_generic_loadparm_store(reipl_block_nss, buf, len); 1049 - } 1050 - 1051 - /* CCW wrapper */ 1052 - static ssize_t reipl_ccw_loadparm_show(struct kobject *kobj, 1053 - struct kobj_attribute *attr, char *page) 1054 - { 1055 - return reipl_generic_loadparm_show(reipl_block_ccw, page); 1056 - } 1057 - 1058 - static ssize_t reipl_ccw_loadparm_store(struct kobject *kobj, 1059 - struct kobj_attribute *attr, 1060 - const char *buf, size_t len) 1061 - { 1062 - return reipl_generic_loadparm_store(reipl_block_ccw, buf, len); 1063 - } 1064 - 1065 - static struct kobj_attribute sys_reipl_ccw_loadparm_attr = 1066 - __ATTR(loadparm, 0644, 
reipl_ccw_loadparm_show, 1067 - reipl_ccw_loadparm_store); 1068 1049 1069 1050 static ssize_t reipl_ccw_clear_show(struct kobject *kobj, 1070 1051 struct kobj_attribute *attr, char *page) ··· 1131 1176 &sys_reipl_eckd_device_attr.attr, 1132 1177 &sys_reipl_eckd_bootprog_attr.attr, 1133 1178 &sys_reipl_eckd_br_chr_attr.attr, 1179 + &sys_reipl_eckd_loadparm_attr.attr, 1134 1180 NULL, 1135 1181 }; 1136 1182 ··· 1150 1194 struct kobj_attribute *attr, 1151 1195 const char *buf, size_t len) 1152 1196 { 1153 - if (strtobool(buf, &reipl_eckd_clear) < 0) 1197 + if (kstrtobool(buf, &reipl_eckd_clear) < 0) 1154 1198 return -EINVAL; 1155 1199 return len; 1156 1200 } ··· 1206 1250 static struct kobj_attribute sys_reipl_nss_name_attr = 1207 1251 __ATTR(name, 0644, reipl_nss_name_show, 1208 1252 reipl_nss_name_store); 1209 - 1210 - static struct kobj_attribute sys_reipl_nss_loadparm_attr = 1211 - __ATTR(loadparm, 0644, reipl_nss_loadparm_show, 1212 - reipl_nss_loadparm_store); 1213 1253 1214 1254 static struct attribute *reipl_nss_attrs[] = { 1215 1255 &sys_reipl_nss_name_attr.attr, ··· 1938 1986 { 1939 1987 unsigned long ipib = (unsigned long) reipl_block_actual; 1940 1988 struct lowcore *abs_lc; 1941 - unsigned long flags; 1942 1989 unsigned int csum; 1943 1990 1944 1991 csum = (__force unsigned int) 1945 1992 csum_partial(reipl_block_actual, reipl_block_actual->hdr.len, 0); 1946 - abs_lc = get_abs_lowcore(&flags); 1993 + abs_lc = get_abs_lowcore(); 1947 1994 abs_lc->ipib = ipib; 1948 1995 abs_lc->ipib_checksum = csum; 1949 - put_abs_lowcore(abs_lc, flags); 1996 + put_abs_lowcore(abs_lc); 1950 1997 dump_run(trigger); 1951 1998 } 1952 1999
+4 -4
arch/s390/kernel/irq.c
··· 136 136 { 137 137 irqentry_state_t state = irqentry_enter(regs); 138 138 struct pt_regs *old_regs = set_irq_regs(regs); 139 - int from_idle; 139 + bool from_idle; 140 140 141 141 irq_enter_rcu(); 142 142 ··· 146 146 current->thread.last_break = regs->last_break; 147 147 } 148 148 149 - from_idle = !user_mode(regs) && regs->psw.addr == (unsigned long)psw_idle_exit; 149 + from_idle = test_and_clear_cpu_flag(CIF_ENABLED_WAIT); 150 150 if (from_idle) 151 151 account_idle_time_irq(); 152 152 ··· 171 171 { 172 172 irqentry_state_t state = irqentry_enter(regs); 173 173 struct pt_regs *old_regs = set_irq_regs(regs); 174 - int from_idle; 174 + bool from_idle; 175 175 176 176 irq_enter_rcu(); 177 177 ··· 185 185 regs->int_parm = S390_lowcore.ext_params; 186 186 regs->int_parm_long = S390_lowcore.ext_params2; 187 187 188 - from_idle = !user_mode(regs) && regs->psw.addr == (unsigned long)psw_idle_exit; 188 + from_idle = test_and_clear_cpu_flag(CIF_ENABLED_WAIT); 189 189 if (from_idle) 190 190 account_idle_time_irq(); 191 191
-30
arch/s390/kernel/kprobes.c
··· 281 281 } 282 282 NOKPROBE_SYMBOL(pop_kprobe); 283 283 284 - void arch_prepare_kretprobe(struct kretprobe_instance *ri, struct pt_regs *regs) 285 - { 286 - ri->ret_addr = (kprobe_opcode_t *)regs->gprs[14]; 287 - ri->fp = (void *)regs->gprs[15]; 288 - 289 - /* Replace the return addr with trampoline addr */ 290 - regs->gprs[14] = (unsigned long)&__kretprobe_trampoline; 291 - } 292 - NOKPROBE_SYMBOL(arch_prepare_kretprobe); 293 - 294 284 static void kprobe_reenter_check(struct kprobe_ctlblk *kcb, struct kprobe *p) 295 285 { 296 286 switch (kcb->kprobe_status) { ··· 360 370 return 0; 361 371 } 362 372 NOKPROBE_SYMBOL(kprobe_handler); 363 - 364 - void arch_kretprobe_fixup_return(struct pt_regs *regs, 365 - kprobe_opcode_t *correct_ret_addr) 366 - { 367 - /* Replace fake return address with real one. */ 368 - regs->gprs[14] = (unsigned long)correct_ret_addr; 369 - } 370 - NOKPROBE_SYMBOL(arch_kretprobe_fixup_return); 371 - 372 - /* 373 - * Called from __kretprobe_trampoline 374 - */ 375 - void trampoline_probe_handler(struct pt_regs *regs) 376 - { 377 - kretprobe_trampoline_handler(regs, (void *)regs->gprs[15]); 378 - } 379 - NOKPROBE_SYMBOL(trampoline_probe_handler); 380 - 381 - /* assembler function that handles the kretprobes must not be probed itself */ 382 - NOKPROBE_SYMBOL(__kretprobe_trampoline); 383 373 384 374 /* 385 375 * Called after single-stepping. p->addr is the address of the
+2 -3
arch/s390/kernel/machine_kexec.c
··· 224 224 void arch_crash_save_vmcoreinfo(void) 225 225 { 226 226 struct lowcore *abs_lc; 227 - unsigned long flags; 228 227 229 228 VMCOREINFO_SYMBOL(lowcore_ptr); 230 229 VMCOREINFO_SYMBOL(high_memory); ··· 231 232 vmcoreinfo_append_str("SAMODE31=%lx\n", __samode31); 232 233 vmcoreinfo_append_str("EAMODE31=%lx\n", __eamode31); 233 234 vmcoreinfo_append_str("KERNELOFFSET=%lx\n", kaslr_offset()); 234 - abs_lc = get_abs_lowcore(&flags); 235 + abs_lc = get_abs_lowcore(); 235 236 abs_lc->vmcore_info = paddr_vmcoreinfo_note(); 236 - put_abs_lowcore(abs_lc, flags); 237 + put_abs_lowcore(abs_lc); 237 238 } 238 239 239 240 void machine_shutdown(void)
+6 -6
arch/s390/kernel/mcount.S
··· 135 135 #endif 136 136 #endif /* CONFIG_FUNCTION_TRACER */ 137 137 138 - #ifdef CONFIG_KPROBES 138 + #ifdef CONFIG_RETHOOK 139 139 140 - SYM_FUNC_START(__kretprobe_trampoline) 140 + SYM_FUNC_START(arch_rethook_trampoline) 141 141 142 142 stg %r14,(__SF_GPRS+8*8)(%r15) 143 143 lay %r15,-STACK_FRAME_SIZE(%r15) ··· 152 152 epsw %r2,%r3 153 153 risbg %r3,%r2,0,31,32 154 154 stg %r3,STACK_PTREGS_PSW(%r15) 155 - larl %r1,__kretprobe_trampoline 155 + larl %r1,arch_rethook_trampoline 156 156 stg %r1,STACK_PTREGS_PSW+8(%r15) 157 157 158 158 lay %r2,STACK_PTREGS(%r15) 159 - brasl %r14,trampoline_probe_handler 159 + brasl %r14,arch_rethook_trampoline_callback 160 160 161 161 mvc __SF_EMPTY(16,%r7),STACK_PTREGS_PSW(%r15) 162 162 lmg %r0,%r15,STACK_PTREGS_GPRS(%r15) 163 163 lpswe __SF_EMPTY(%r15) 164 164 165 - SYM_FUNC_END(__kretprobe_trampoline) 165 + SYM_FUNC_END(arch_rethook_trampoline) 166 166 167 - #endif /* CONFIG_KPROBES */ 167 + #endif /* CONFIG_RETHOOK */
+2 -3
arch/s390/kernel/os_info.c
··· 59 59 void __init os_info_init(void) 60 60 { 61 61 struct lowcore *abs_lc; 62 - unsigned long flags; 63 62 64 63 os_info.version_major = OS_INFO_VERSION_MAJOR; 65 64 os_info.version_minor = OS_INFO_VERSION_MINOR; 66 65 os_info.magic = OS_INFO_MAGIC; 67 66 os_info.csum = os_info_csum(&os_info); 68 - abs_lc = get_abs_lowcore(&flags); 67 + abs_lc = get_abs_lowcore(); 69 68 abs_lc->os_info = __pa(&os_info); 70 - put_abs_lowcore(abs_lc, flags); 69 + put_abs_lowcore(abs_lc); 71 70 } 72 71 73 72 #ifdef CONFIG_CRASH_DUMP
+283 -25
arch/s390/kernel/perf_cpum_cf.c
··· 2 2 /* 3 3 * Performance event support for s390x - CPU-measurement Counter Facility 4 4 * 5 - * Copyright IBM Corp. 2012, 2021 5 + * Copyright IBM Corp. 2012, 2023 6 6 * Author(s): Hendrik Brueckner <brueckner@linux.ibm.com> 7 7 * Thomas Richter <tmricht@linux.ibm.com> 8 8 */ ··· 16 16 #include <linux/init.h> 17 17 #include <linux/export.h> 18 18 #include <linux/miscdevice.h> 19 + #include <linux/perf_event.h> 19 20 20 - #include <asm/cpu_mcf.h> 21 + #include <asm/cpu_mf.h> 21 22 #include <asm/hwctrset.h> 22 23 #include <asm/debug.h> 24 + 25 + enum cpumf_ctr_set { 26 + CPUMF_CTR_SET_BASIC = 0, /* Basic Counter Set */ 27 + CPUMF_CTR_SET_USER = 1, /* Problem-State Counter Set */ 28 + CPUMF_CTR_SET_CRYPTO = 2, /* Crypto-Activity Counter Set */ 29 + CPUMF_CTR_SET_EXT = 3, /* Extended Counter Set */ 30 + CPUMF_CTR_SET_MT_DIAG = 4, /* MT-diagnostic Counter Set */ 31 + 32 + /* Maximum number of counter sets */ 33 + CPUMF_CTR_SET_MAX, 34 + }; 35 + 36 + #define CPUMF_LCCTL_ENABLE_SHIFT 16 37 + #define CPUMF_LCCTL_ACTCTL_SHIFT 0 38 + 39 + static inline void ctr_set_enable(u64 *state, u64 ctrsets) 40 + { 41 + *state |= ctrsets << CPUMF_LCCTL_ENABLE_SHIFT; 42 + } 43 + 44 + static inline void ctr_set_disable(u64 *state, u64 ctrsets) 45 + { 46 + *state &= ~(ctrsets << CPUMF_LCCTL_ENABLE_SHIFT); 47 + } 48 + 49 + static inline void ctr_set_start(u64 *state, u64 ctrsets) 50 + { 51 + *state |= ctrsets << CPUMF_LCCTL_ACTCTL_SHIFT; 52 + } 53 + 54 + static inline void ctr_set_stop(u64 *state, u64 ctrsets) 55 + { 56 + *state &= ~(ctrsets << CPUMF_LCCTL_ACTCTL_SHIFT); 57 + } 58 + 59 + static inline int ctr_stcctm(enum cpumf_ctr_set set, u64 range, u64 *dest) 60 + { 61 + switch (set) { 62 + case CPUMF_CTR_SET_BASIC: 63 + return stcctm(BASIC, range, dest); 64 + case CPUMF_CTR_SET_USER: 65 + return stcctm(PROBLEM_STATE, range, dest); 66 + case CPUMF_CTR_SET_CRYPTO: 67 + return stcctm(CRYPTO_ACTIVITY, range, dest); 68 + case CPUMF_CTR_SET_EXT: 69 + return stcctm(EXTENDED, range, dest); 
70 + case CPUMF_CTR_SET_MT_DIAG: 71 + return stcctm(MT_DIAG_CLEARING, range, dest); 72 + case CPUMF_CTR_SET_MAX: 73 + return 3; 74 + } 75 + return 3; 76 + } 77 + 78 + struct cpu_cf_events { 79 + struct cpumf_ctr_info info; 80 + atomic_t ctr_set[CPUMF_CTR_SET_MAX]; 81 + u64 state; /* For perf_event_open SVC */ 82 + u64 dev_state; /* For /dev/hwctr */ 83 + unsigned int flags; 84 + size_t used; /* Bytes used in data */ 85 + size_t usedss; /* Bytes used in start/stop */ 86 + unsigned char start[PAGE_SIZE]; /* Counter set at event add */ 87 + unsigned char stop[PAGE_SIZE]; /* Counter set at event delete */ 88 + unsigned char data[PAGE_SIZE]; /* Counter set at /dev/hwctr */ 89 + unsigned int sets; /* # Counter set saved in memory */ 90 + }; 91 + 92 + /* Per-CPU event structure for the counter facility */ 93 + static DEFINE_PER_CPU(struct cpu_cf_events, cpu_cf_events); 23 94 24 95 static unsigned int cfdiag_cpu_speed; /* CPU speed for CF_DIAG trailer */ 25 96 static debug_info_t *cf_dbg; ··· 181 110 te->clock_base = 1; /* Save clock base */ 182 111 te->tod_base = tod_clock_base.tod; 183 112 te->timestamp = get_tod_clock_fast(); 113 + } 114 + 115 + /* 116 + * Return the maximum possible counter set size (in number of 8 byte counters) 117 + * depending on type and model number. 
118 + */ 119 + static size_t cpum_cf_ctrset_size(enum cpumf_ctr_set ctrset, 120 + struct cpumf_ctr_info *info) 121 + { 122 + size_t ctrset_size = 0; 123 + 124 + switch (ctrset) { 125 + case CPUMF_CTR_SET_BASIC: 126 + if (info->cfvn >= 1) 127 + ctrset_size = 6; 128 + break; 129 + case CPUMF_CTR_SET_USER: 130 + if (info->cfvn == 1) 131 + ctrset_size = 6; 132 + else if (info->cfvn >= 3) 133 + ctrset_size = 2; 134 + break; 135 + case CPUMF_CTR_SET_CRYPTO: 136 + if (info->csvn >= 1 && info->csvn <= 5) 137 + ctrset_size = 16; 138 + else if (info->csvn == 6 || info->csvn == 7) 139 + ctrset_size = 20; 140 + break; 141 + case CPUMF_CTR_SET_EXT: 142 + if (info->csvn == 1) 143 + ctrset_size = 32; 144 + else if (info->csvn == 2) 145 + ctrset_size = 48; 146 + else if (info->csvn >= 3 && info->csvn <= 5) 147 + ctrset_size = 128; 148 + else if (info->csvn == 6 || info->csvn == 7) 149 + ctrset_size = 160; 150 + break; 151 + case CPUMF_CTR_SET_MT_DIAG: 152 + if (info->csvn > 3) 153 + ctrset_size = 48; 154 + break; 155 + case CPUMF_CTR_SET_MAX: 156 + break; 157 + } 158 + 159 + return ctrset_size; 184 160 } 185 161 186 162 /* Read a counter set. 
The counter set number determines the counter set and ··· 506 388 cpuhw->flags &= ~PMU_F_ENABLED; 507 389 } 508 390 391 + #define PMC_INIT 0UL 392 + #define PMC_RELEASE 1UL 393 + 394 + static void cpum_cf_setup_cpu(void *flags) 395 + { 396 + struct cpu_cf_events *cpuhw = this_cpu_ptr(&cpu_cf_events); 397 + 398 + switch ((unsigned long)flags) { 399 + case PMC_INIT: 400 + memset(&cpuhw->info, 0, sizeof(cpuhw->info)); 401 + qctri(&cpuhw->info); 402 + cpuhw->flags |= PMU_F_RESERVED; 403 + break; 404 + 405 + case PMC_RELEASE: 406 + cpuhw->flags &= ~PMU_F_RESERVED; 407 + break; 408 + } 409 + 410 + /* Disable CPU counter sets */ 411 + lcctl(0); 412 + debug_sprintf_event(cf_dbg, 5, "%s flags %#x flags %#x state %#llx\n", 413 + __func__, *(int *)flags, cpuhw->flags, 414 + cpuhw->state); 415 + } 416 + 417 + /* Initialize the CPU-measurement counter facility */ 418 + static int __kernel_cpumcf_begin(void) 419 + { 420 + on_each_cpu(cpum_cf_setup_cpu, (void *)PMC_INIT, 1); 421 + irq_subclass_register(IRQ_SUBCLASS_MEASUREMENT_ALERT); 422 + 423 + return 0; 424 + } 425 + 426 + /* Release the CPU-measurement counter facility */ 427 + static void __kernel_cpumcf_end(void) 428 + { 429 + on_each_cpu(cpum_cf_setup_cpu, (void *)PMC_RELEASE, 1); 430 + irq_subclass_unregister(IRQ_SUBCLASS_MEASUREMENT_ALERT); 431 + } 509 432 510 433 /* Number of perf events counting hardware events */ 511 434 static atomic_t num_events = ATOMIC_INIT(0); ··· 556 397 /* Release the PMU if event is the last perf event */ 557 398 static void hw_perf_event_destroy(struct perf_event *event) 558 399 { 559 - if (!atomic_add_unless(&num_events, -1, 1)) { 560 - mutex_lock(&pmc_reserve_mutex); 561 - if (atomic_dec_return(&num_events) == 0) 562 - __kernel_cpumcf_end(); 563 - mutex_unlock(&pmc_reserve_mutex); 564 - } 400 + mutex_lock(&pmc_reserve_mutex); 401 + if (atomic_dec_return(&num_events) == 0) 402 + __kernel_cpumcf_end(); 403 + mutex_unlock(&pmc_reserve_mutex); 565 404 } 566 405 567 406 /* CPUMF <-> perf event 
mappings for kernel+userspace (basic set) */ ··· 591 434 mutex_unlock(&pmc_reserve_mutex); 592 435 } 593 436 437 + static int is_userspace_event(u64 ev) 438 + { 439 + return cpumf_generic_events_user[PERF_COUNT_HW_CPU_CYCLES] == ev || 440 + cpumf_generic_events_user[PERF_COUNT_HW_INSTRUCTIONS] == ev; 441 + } 442 + 594 443 static int __hw_perf_event_init(struct perf_event *event, unsigned int type) 595 444 { 596 445 struct perf_event_attr *attr = &event->attr; ··· 619 456 if (is_sampling_event(event)) /* No sampling support */ 620 457 return -ENOENT; 621 458 ev = attr->config; 622 - /* Count user space (problem-state) only */ 623 459 if (!attr->exclude_user && attr->exclude_kernel) { 624 - if (ev >= ARRAY_SIZE(cpumf_generic_events_user)) 625 - return -EOPNOTSUPP; 626 - ev = cpumf_generic_events_user[ev]; 627 - 628 - /* No support for kernel space counters only */ 460 + /* 461 + * Count user space (problem-state) only 462 + * Handle events 32 and 33 as 0:u and 1:u 463 + */ 464 + if (!is_userspace_event(ev)) { 465 + if (ev >= ARRAY_SIZE(cpumf_generic_events_user)) 466 + return -EOPNOTSUPP; 467 + ev = cpumf_generic_events_user[ev]; 468 + } 629 469 } else if (!attr->exclude_kernel && attr->exclude_user) { 470 + /* No support for kernel space counters only */ 630 471 return -EOPNOTSUPP; 631 - } else { /* Count user and kernel space */ 632 - if (ev >= ARRAY_SIZE(cpumf_generic_events_basic)) 633 - return -EOPNOTSUPP; 634 - ev = cpumf_generic_events_basic[ev]; 472 + } else { 473 + /* Count user and kernel space, incl. 
events 32 + 33 */ 474 + if (!is_userspace_event(ev)) { 475 + if (ev >= ARRAY_SIZE(cpumf_generic_events_basic)) 476 + return -EOPNOTSUPP; 477 + ev = cpumf_generic_events_basic[ev]; 478 + } 635 479 } 636 480 break; 637 481 ··· 931 761 .read = cpumf_pmu_read, 932 762 }; 933 763 764 + static int cpum_cf_setup(unsigned int cpu, unsigned long flags) 765 + { 766 + local_irq_disable(); 767 + cpum_cf_setup_cpu((void *)flags); 768 + local_irq_enable(); 769 + return 0; 770 + } 771 + 772 + static int cfset_online_cpu(unsigned int cpu); 773 + static int cpum_cf_online_cpu(unsigned int cpu) 774 + { 775 + debug_sprintf_event(cf_dbg, 4, "%s cpu %d in_irq %ld\n", __func__, 776 + cpu, in_interrupt()); 777 + cpum_cf_setup(cpu, PMC_INIT); 778 + return cfset_online_cpu(cpu); 779 + } 780 + 781 + static int cfset_offline_cpu(unsigned int cpu); 782 + static int cpum_cf_offline_cpu(unsigned int cpu) 783 + { 784 + debug_sprintf_event(cf_dbg, 4, "%s cpu %d\n", __func__, cpu); 785 + cfset_offline_cpu(cpu); 786 + return cpum_cf_setup(cpu, PMC_RELEASE); 787 + } 788 + 789 + /* Return true if store counter set multiple instruction is available */ 790 + static inline int stccm_avail(void) 791 + { 792 + return test_facility(142); 793 + } 794 + 795 + /* CPU-measurement alerts for the counter facility */ 796 + static void cpumf_measurement_alert(struct ext_code ext_code, 797 + unsigned int alert, unsigned long unused) 798 + { 799 + struct cpu_cf_events *cpuhw; 800 + 801 + if (!(alert & CPU_MF_INT_CF_MASK)) 802 + return; 803 + 804 + inc_irq_stat(IRQEXT_CMC); 805 + cpuhw = this_cpu_ptr(&cpu_cf_events); 806 + 807 + /* 808 + * Measurement alerts are shared and might happen when the PMU 809 + * is not reserved. Ignore these alerts in this case. 
810 + */ 811 + if (!(cpuhw->flags & PMU_F_RESERVED)) 812 + return; 813 + 814 + /* counter authorization change alert */ 815 + if (alert & CPU_MF_INT_CF_CACA) 816 + qctri(&cpuhw->info); 817 + 818 + /* loss of counter data alert */ 819 + if (alert & CPU_MF_INT_CF_LCDA) 820 + pr_err("CPU[%i] Counter data was lost\n", smp_processor_id()); 821 + 822 + /* loss of MT counter data alert */ 823 + if (alert & CPU_MF_INT_CF_MTDA) 824 + pr_warn("CPU[%i] MT counter data was lost\n", 825 + smp_processor_id()); 826 + } 827 + 934 828 static int cfset_init(void); 935 829 static int __init cpumf_pmu_init(void) 936 830 { 937 831 int rc; 938 832 939 - if (!kernel_cpumcf_avail()) 833 + if (!cpum_cf_avail()) 940 834 return -ENODEV; 835 + 836 + /* 837 + * Clear bit 15 of cr0 to unauthorize problem-state to 838 + * extract measurement counters 839 + */ 840 + ctl_clear_bit(0, 48); 841 + 842 + /* register handler for measurement-alert interruptions */ 843 + rc = register_external_irq(EXT_IRQ_MEASURE_ALERT, 844 + cpumf_measurement_alert); 845 + if (rc) { 846 + pr_err("Registering for CPU-measurement alerts failed with rc=%i\n", rc); 847 + return rc; 848 + } 941 849 942 850 /* Setup s390dbf facility */ 943 851 cf_dbg = debug_register(KMSG_COMPONENT, 2, 1, 128); 944 852 if (!cf_dbg) { 945 853 pr_err("Registration of s390dbf(cpum_cf) failed\n"); 946 - return -ENOMEM; 854 + rc = -ENOMEM; 855 + goto out1; 947 856 } 948 857 debug_register_view(cf_dbg, &debug_sprintf_view); 949 858 950 859 cpumf_pmu.attr_groups = cpumf_cf_event_group(); 951 860 rc = perf_pmu_register(&cpumf_pmu, "cpum_cf", -1); 952 861 if (rc) { 953 - debug_unregister_view(cf_dbg, &debug_sprintf_view); 954 - debug_unregister(cf_dbg); 955 862 pr_err("Registering the cpum_cf PMU failed with rc=%i\n", rc); 863 + goto out2; 956 864 } else if (stccm_avail()) { /* Setup counter set device */ 957 865 cfset_init(); 958 866 } 867 + 868 + rc = cpuhp_setup_state(CPUHP_AP_PERF_S390_CF_ONLINE, 869 + "perf/s390/cf:online", 870 + 
cpum_cf_online_cpu, cpum_cf_offline_cpu); 871 + return rc; 872 + 873 + out2: 874 + debug_unregister_view(cf_dbg, &debug_sprintf_view); 875 + debug_unregister(cf_dbg); 876 + out1: 877 + unregister_external_irq(EXT_IRQ_MEASURE_ALERT, cpumf_measurement_alert); 959 878 return rc; 960 879 } 961 880 ··· 1261 1002 free_cpumask_var(mask); 1262 1003 return rc; 1263 1004 } 1264 - 1265 1005 1266 1006 /* Return the maximum required space for all possible CPUs in case one 1267 1007 * CPU will be onlined during the START, READ, STOP cycles. ··· 1524 1266 /* Hotplug add of a CPU. Scan through all active processes and add 1525 1267 * that CPU to the list of CPUs supplied with ioctl(..., START, ...). 1526 1268 */ 1527 - int cfset_online_cpu(unsigned int cpu) 1269 + static int cfset_online_cpu(unsigned int cpu) 1528 1270 { 1529 1271 struct cfset_call_on_cpu_parm p; 1530 1272 struct cfset_request *rp; ··· 1544 1286 /* Hotplug remove of a CPU. Scan through all active processes and clear 1545 1287 * that CPU from the list of CPUs supplied with ioctl(..., START, ...). 1546 1288 */ 1547 - int cfset_offline_cpu(unsigned int cpu) 1289 + static int cfset_offline_cpu(unsigned int cpu) 1548 1290 { 1549 1291 struct cfset_call_on_cpu_parm p; 1550 1292 struct cfset_request *rp;
-233
arch/s390/kernel/perf_cpum_cf_common.c
··· 1 - // SPDX-License-Identifier: GPL-2.0 2 - /* 3 - * CPU-Measurement Counter Facility Support - Common Layer 4 - * 5 - * Copyright IBM Corp. 2019 6 - * Author(s): Hendrik Brueckner <brueckner@linux.ibm.com> 7 - */ 8 - #define KMSG_COMPONENT "cpum_cf_common" 9 - #define pr_fmt(fmt) KMSG_COMPONENT ": " fmt 10 - 11 - #include <linux/kernel.h> 12 - #include <linux/kernel_stat.h> 13 - #include <linux/percpu.h> 14 - #include <linux/notifier.h> 15 - #include <linux/init.h> 16 - #include <linux/export.h> 17 - #include <asm/ctl_reg.h> 18 - #include <asm/irq.h> 19 - #include <asm/cpu_mcf.h> 20 - 21 - /* Per-CPU event structure for the counter facility */ 22 - DEFINE_PER_CPU(struct cpu_cf_events, cpu_cf_events) = { 23 - .ctr_set = { 24 - [CPUMF_CTR_SET_BASIC] = ATOMIC_INIT(0), 25 - [CPUMF_CTR_SET_USER] = ATOMIC_INIT(0), 26 - [CPUMF_CTR_SET_CRYPTO] = ATOMIC_INIT(0), 27 - [CPUMF_CTR_SET_EXT] = ATOMIC_INIT(0), 28 - [CPUMF_CTR_SET_MT_DIAG] = ATOMIC_INIT(0), 29 - }, 30 - .alert = ATOMIC64_INIT(0), 31 - .state = 0, 32 - .dev_state = 0, 33 - .flags = 0, 34 - .used = 0, 35 - .usedss = 0, 36 - .sets = 0 37 - }; 38 - /* Indicator whether the CPU-Measurement Counter Facility Support is ready */ 39 - static bool cpum_cf_initalized; 40 - 41 - /* CPU-measurement alerts for the counter facility */ 42 - static void cpumf_measurement_alert(struct ext_code ext_code, 43 - unsigned int alert, unsigned long unused) 44 - { 45 - struct cpu_cf_events *cpuhw; 46 - 47 - if (!(alert & CPU_MF_INT_CF_MASK)) 48 - return; 49 - 50 - inc_irq_stat(IRQEXT_CMC); 51 - cpuhw = this_cpu_ptr(&cpu_cf_events); 52 - 53 - /* Measurement alerts are shared and might happen when the PMU 54 - * is not reserved. Ignore these alerts in this case. 
*/ 55 - if (!(cpuhw->flags & PMU_F_RESERVED)) 56 - return; 57 - 58 - /* counter authorization change alert */ 59 - if (alert & CPU_MF_INT_CF_CACA) 60 - qctri(&cpuhw->info); 61 - 62 - /* loss of counter data alert */ 63 - if (alert & CPU_MF_INT_CF_LCDA) 64 - pr_err("CPU[%i] Counter data was lost\n", smp_processor_id()); 65 - 66 - /* loss of MT counter data alert */ 67 - if (alert & CPU_MF_INT_CF_MTDA) 68 - pr_warn("CPU[%i] MT counter data was lost\n", 69 - smp_processor_id()); 70 - 71 - /* store alert for special handling by in-kernel users */ 72 - atomic64_or(alert, &cpuhw->alert); 73 - } 74 - 75 - #define PMC_INIT 0 76 - #define PMC_RELEASE 1 77 - static void cpum_cf_setup_cpu(void *flags) 78 - { 79 - struct cpu_cf_events *cpuhw = this_cpu_ptr(&cpu_cf_events); 80 - 81 - switch (*((int *) flags)) { 82 - case PMC_INIT: 83 - memset(&cpuhw->info, 0, sizeof(cpuhw->info)); 84 - qctri(&cpuhw->info); 85 - cpuhw->flags |= PMU_F_RESERVED; 86 - break; 87 - 88 - case PMC_RELEASE: 89 - cpuhw->flags &= ~PMU_F_RESERVED; 90 - break; 91 - } 92 - 93 - /* Disable CPU counter sets */ 94 - lcctl(0); 95 - } 96 - 97 - bool kernel_cpumcf_avail(void) 98 - { 99 - return cpum_cf_initalized; 100 - } 101 - EXPORT_SYMBOL(kernel_cpumcf_avail); 102 - 103 - /* Initialize the CPU-measurement counter facility */ 104 - int __kernel_cpumcf_begin(void) 105 - { 106 - int flags = PMC_INIT; 107 - 108 - on_each_cpu(cpum_cf_setup_cpu, &flags, 1); 109 - irq_subclass_register(IRQ_SUBCLASS_MEASUREMENT_ALERT); 110 - 111 - return 0; 112 - } 113 - EXPORT_SYMBOL(__kernel_cpumcf_begin); 114 - 115 - /* Obtain the CPU-measurement alerts for the counter facility */ 116 - unsigned long kernel_cpumcf_alert(int clear) 117 - { 118 - struct cpu_cf_events *cpuhw = this_cpu_ptr(&cpu_cf_events); 119 - unsigned long alert; 120 - 121 - alert = atomic64_read(&cpuhw->alert); 122 - if (clear) 123 - atomic64_set(&cpuhw->alert, 0); 124 - 125 - return alert; 126 - } 127 - EXPORT_SYMBOL(kernel_cpumcf_alert); 128 - 129 - /* Release 
the CPU-measurement counter facility */ 130 - void __kernel_cpumcf_end(void) 131 - { 132 - int flags = PMC_RELEASE; 133 - 134 - on_each_cpu(cpum_cf_setup_cpu, &flags, 1); 135 - irq_subclass_unregister(IRQ_SUBCLASS_MEASUREMENT_ALERT); 136 - } 137 - EXPORT_SYMBOL(__kernel_cpumcf_end); 138 - 139 - static int cpum_cf_setup(unsigned int cpu, int flags) 140 - { 141 - local_irq_disable(); 142 - cpum_cf_setup_cpu(&flags); 143 - local_irq_enable(); 144 - return 0; 145 - } 146 - 147 - static int cpum_cf_online_cpu(unsigned int cpu) 148 - { 149 - cpum_cf_setup(cpu, PMC_INIT); 150 - return cfset_online_cpu(cpu); 151 - } 152 - 153 - static int cpum_cf_offline_cpu(unsigned int cpu) 154 - { 155 - cfset_offline_cpu(cpu); 156 - return cpum_cf_setup(cpu, PMC_RELEASE); 157 - } 158 - 159 - /* Return the maximum possible counter set size (in number of 8 byte counters) 160 - * depending on type and model number. 161 - */ 162 - size_t cpum_cf_ctrset_size(enum cpumf_ctr_set ctrset, 163 - struct cpumf_ctr_info *info) 164 - { 165 - size_t ctrset_size = 0; 166 - 167 - switch (ctrset) { 168 - case CPUMF_CTR_SET_BASIC: 169 - if (info->cfvn >= 1) 170 - ctrset_size = 6; 171 - break; 172 - case CPUMF_CTR_SET_USER: 173 - if (info->cfvn == 1) 174 - ctrset_size = 6; 175 - else if (info->cfvn >= 3) 176 - ctrset_size = 2; 177 - break; 178 - case CPUMF_CTR_SET_CRYPTO: 179 - if (info->csvn >= 1 && info->csvn <= 5) 180 - ctrset_size = 16; 181 - else if (info->csvn == 6 || info->csvn == 7) 182 - ctrset_size = 20; 183 - break; 184 - case CPUMF_CTR_SET_EXT: 185 - if (info->csvn == 1) 186 - ctrset_size = 32; 187 - else if (info->csvn == 2) 188 - ctrset_size = 48; 189 - else if (info->csvn >= 3 && info->csvn <= 5) 190 - ctrset_size = 128; 191 - else if (info->csvn == 6 || info->csvn == 7) 192 - ctrset_size = 160; 193 - break; 194 - case CPUMF_CTR_SET_MT_DIAG: 195 - if (info->csvn > 3) 196 - ctrset_size = 48; 197 - break; 198 - case CPUMF_CTR_SET_MAX: 199 - break; 200 - } 201 - 202 - return ctrset_size; 203 - 
} 204 - 205 - static int __init cpum_cf_init(void) 206 - { 207 - int rc; 208 - 209 - if (!cpum_cf_avail()) 210 - return -ENODEV; 211 - 212 - /* clear bit 15 of cr0 to unauthorize problem-state to 213 - * extract measurement counters */ 214 - ctl_clear_bit(0, 48); 215 - 216 - /* register handler for measurement-alert interruptions */ 217 - rc = register_external_irq(EXT_IRQ_MEASURE_ALERT, 218 - cpumf_measurement_alert); 219 - if (rc) { 220 - pr_err("Registering for CPU-measurement alerts " 221 - "failed with rc=%i\n", rc); 222 - return rc; 223 - } 224 - 225 - rc = cpuhp_setup_state(CPUHP_AP_PERF_S390_CF_ONLINE, 226 - "perf/s390/cf:online", 227 - cpum_cf_online_cpu, cpum_cf_offline_cpu); 228 - if (!rc) 229 - cpum_cf_initalized = true; 230 - 231 - return rc; 232 - } 233 - early_initcall(cpum_cf_init);
+105 -43
arch/s390/kernel/perf_cpum_sf.c
··· 22 22 #include <asm/irq.h> 23 23 #include <asm/debug.h> 24 24 #include <asm/timex.h> 25 + #include <asm-generic/io.h> 25 26 26 27 /* Minimum number of sample-data-block-tables: 27 28 * At least one table is required for the sampling buffer structure. ··· 100 99 /* Debug feature */ 101 100 static debug_info_t *sfdbg; 102 101 102 + /* Sampling control helper functions */ 103 + static inline unsigned long freq_to_sample_rate(struct hws_qsi_info_block *qsi, 104 + unsigned long freq) 105 + { 106 + return (USEC_PER_SEC / freq) * qsi->cpu_speed; 107 + } 108 + 109 + static inline unsigned long sample_rate_to_freq(struct hws_qsi_info_block *qsi, 110 + unsigned long rate) 111 + { 112 + return USEC_PER_SEC * qsi->cpu_speed / rate; 113 + } 114 + 115 + /* Return TOD timestamp contained in an trailer entry */ 116 + static inline unsigned long long trailer_timestamp(struct hws_trailer_entry *te) 117 + { 118 + /* TOD in STCKE format */ 119 + if (te->header.t) 120 + return *((unsigned long long *)&te->timestamp[1]); 121 + 122 + /* TOD in STCK format */ 123 + return *((unsigned long long *)&te->timestamp[0]); 124 + } 125 + 126 + /* Return pointer to trailer entry of an sample data block */ 127 + static inline struct hws_trailer_entry *trailer_entry_ptr(unsigned long v) 128 + { 129 + void *ret; 130 + 131 + ret = (void *)v; 132 + ret += PAGE_SIZE; 133 + ret -= sizeof(struct hws_trailer_entry); 134 + 135 + return ret; 136 + } 137 + 138 + /* 139 + * Return true if the entry in the sample data block table (sdbt) 140 + * is a link to the next sdbt 141 + */ 142 + static inline int is_link_entry(unsigned long *s) 143 + { 144 + return *s & 0x1UL ? 
1 : 0; 145 + } 146 + 147 + /* Return pointer to the linked sdbt */ 148 + static inline unsigned long *get_next_sdbt(unsigned long *s) 149 + { 150 + return phys_to_virt(*s & ~0x1UL); 151 + } 152 + 103 153 /* 104 154 * sf_disable() - Switch off sampling facility 105 155 */ ··· 202 150 } else { 203 151 /* Process SDB pointer */ 204 152 if (*curr) { 205 - free_page(*curr); 153 + free_page((unsigned long)phys_to_virt(*curr)); 206 154 curr++; 207 155 } 208 156 } ··· 222 170 sdb = get_zeroed_page(gfp_flags); 223 171 if (!sdb) 224 172 return -ENOMEM; 225 - te = (struct hws_trailer_entry *)trailer_entry_ptr(sdb); 173 + te = trailer_entry_ptr(sdb); 226 174 te->header.a = 1; 227 175 228 176 /* Link SDB into the sample-data-block-table */ 229 - *sdbt = sdb; 177 + *sdbt = virt_to_phys((void *)sdb); 230 178 231 179 return 0; 232 180 } ··· 285 233 } 286 234 sfb->num_sdbt++; 287 235 /* Link current page to tail of chain */ 288 - *tail = (unsigned long)(void *) new + 1; 236 + *tail = virt_to_phys((void *)new) + 1; 289 237 tail_prev = tail; 290 238 tail = new; 291 239 } ··· 315 263 } 316 264 317 265 /* Link sampling buffer to its origin */ 318 - *tail = (unsigned long) sfb->sdbt + 1; 266 + *tail = virt_to_phys(sfb->sdbt) + 1; 319 267 sfb->tail = tail; 320 268 321 269 debug_sprintf_event(sfdbg, 4, "%s: new buffer" ··· 353 301 * realloc_sampling_buffer() invocation. 
 	 */
 	sfb->tail = sfb->sdbt;
-	*sfb->tail = (unsigned long)(void *) sfb->sdbt + 1;
+	*sfb->tail = virt_to_phys((void *)sfb->sdbt) + 1;
 
 	/* Allocate requested number of sample-data-blocks */
 	rc = realloc_sampling_buffer(sfb, num_sdb, GFP_KERNEL);
···
 		if (err)
 			pr_err("Switching off the sampling facility failed "
 			       "with rc %i\n", err);
-		debug_sprintf_event(sfdbg, 5,
-				    "%s: initialized: cpuhw %p\n", __func__,
-				    cpusf);
 		break;
 	case PMC_RELEASE:
 		cpusf->flags &= ~PMU_F_RESERVED;
···
 				"with rc %i\n", err);
 		} else
 			deallocate_buffers(cpusf);
-		debug_sprintf_event(sfdbg, 5,
-				    "%s: released: cpuhw %p\n", __func__,
-				    cpusf);
 		break;
 	}
 	if (err)
···
 	struct hws_trailer_entry *te;
 	struct hws_basic_entry *sample;
 
-	te = (struct hws_trailer_entry *) trailer_entry_ptr(*sdbt);
-	sample = (struct hws_basic_entry *) *sdbt;
+	te = trailer_entry_ptr((unsigned long)sdbt);
+	sample = (struct hws_basic_entry *)sdbt;
 	while ((unsigned long *) sample < (unsigned long *) te) {
 		/* Check for an empty sample */
 		if (!sample->def || sample->LS)
···
 	union hws_trailer_header old, prev, new;
 	struct hw_perf_event *hwc = &event->hw;
 	struct hws_trailer_entry *te;
-	unsigned long *sdbt;
+	unsigned long *sdbt, sdb;
 	int done;
 
 	/*
···
 	done = event_overflow = sampl_overflow = num_sdb = 0;
 	while (!done) {
 		/* Get the trailer entry of the sample-data-block */
-		te = (struct hws_trailer_entry *) trailer_entry_ptr(*sdbt);
+		sdb = (unsigned long)phys_to_virt(*sdbt);
+		te = trailer_entry_ptr(sdb);
 
 		/* Leave loop if no more work to do (block full indicator) */
 		if (!te->header.f) {
···
 		sampl_overflow += te->header.overflow;
 
 		/* Timestamps are valid for full sample-data-blocks only */
-		debug_sprintf_event(sfdbg, 6, "%s: sdbt %#lx "
+		debug_sprintf_event(sfdbg, 6, "%s: sdbt %#lx/%#lx "
 				    "overflow %llu timestamp %#llx\n",
-				    __func__, (unsigned long)sdbt, te->header.overflow,
+				    __func__, sdb, (unsigned long)sdbt,
+				    te->header.overflow,
 				    (te->header.f) ? trailer_timestamp(te) : 0ULL);
 
 		/* Collect all samples from a single sample-data-block and
 		 * flag if an (perf) event overflow happened.  If so, the PMU
 		 * is stopped and remaining samples will be discarded.
 		 */
-		hw_collect_samples(event, sdbt, &event_overflow);
+		hw_collect_samples(event, (unsigned long *)sdb, &event_overflow);
 		num_sdb++;
 
 		/* Reset trailer (using compare-double-and-swap) */
···
 		    OVERFLOW_REG(hwc), num_sdb);
 }
 
-#define AUX_SDB_INDEX(aux, i) ((i) % aux->sfb.num_sdb)
-#define AUX_SDB_NUM(aux, start, end) (end >= start ? end - start + 1 : 0)
-#define AUX_SDB_NUM_ALERT(aux) AUX_SDB_NUM(aux, aux->head, aux->alert_mark)
-#define AUX_SDB_NUM_EMPTY(aux) AUX_SDB_NUM(aux, aux->head, aux->empty_mark)
+static inline unsigned long aux_sdb_index(struct aux_buffer *aux,
+					  unsigned long i)
+{
+	return i % aux->sfb.num_sdb;
+}
+
+static inline unsigned long aux_sdb_num(unsigned long start, unsigned long end)
+{
+	return end >= start ? end - start + 1 : 0;
+}
+
+static inline unsigned long aux_sdb_num_alert(struct aux_buffer *aux)
+{
+	return aux_sdb_num(aux->head, aux->alert_mark);
+}
+
+static inline unsigned long aux_sdb_num_empty(struct aux_buffer *aux)
+{
+	return aux_sdb_num(aux->head, aux->empty_mark);
+}
 
 /*
  * Get trailer entry by index of SDB.
···
 {
 	unsigned long sdb;
 
-	index = AUX_SDB_INDEX(aux, index);
+	index = aux_sdb_index(aux, index);
 	sdb = aux->sdb_index[index];
-	return (struct hws_trailer_entry *)trailer_entry_ptr(sdb);
+	return trailer_entry_ptr(sdb);
 }
 
 /*
···
 	if (!aux)
 		return;
 
-	range_scan = AUX_SDB_NUM_ALERT(aux);
+	range_scan = aux_sdb_num_alert(aux);
 	for (i = 0, idx = aux->head; i < range_scan; i++, idx++) {
 		te = aux_sdb_trailer(aux, idx);
 		if (!te->header.f)
···
 			    struct aux_buffer *aux,
 			    struct cpu_hw_sf *cpuhw)
 {
-	unsigned long range;
-	unsigned long i, range_scan, idx;
-	unsigned long head, base, offset;
+	unsigned long range, i, range_scan, idx, head, base, offset;
 	struct hws_trailer_entry *te;
 
 	if (WARN_ON_ONCE(handle->head & ~PAGE_MASK))
···
 		    "%s: range %ld head %ld alert %ld empty %ld\n",
 		    __func__, range, aux->head, aux->alert_mark,
 		    aux->empty_mark);
-	if (range > AUX_SDB_NUM_EMPTY(aux)) {
-		range_scan = range - AUX_SDB_NUM_EMPTY(aux);
+	if (range > aux_sdb_num_empty(aux)) {
+		range_scan = range - aux_sdb_num_empty(aux);
 		idx = aux->empty_mark + 1;
 		for (i = 0; i < range_scan; i++, idx++) {
 			te = aux_sdb_trailer(aux, idx);
···
 	te->header.a = 1;
 
 	/* Reset hardware buffer head */
-	head = AUX_SDB_INDEX(aux, aux->head);
+	head = aux_sdb_index(aux, aux->head);
 	base = aux->sdbt_index[head / CPUM_SF_SDB_PER_TABLE];
 	offset = head % CPUM_SF_SDB_PER_TABLE;
-	cpuhw->lsctl.tear = base + offset * sizeof(unsigned long);
-	cpuhw->lsctl.dear = aux->sdb_index[head];
+	cpuhw->lsctl.tear = virt_to_phys((void *)base) + offset * sizeof(unsigned long);
+	cpuhw->lsctl.dear = virt_to_phys((void *)aux->sdb_index[head]);
 
 	debug_sprintf_event(sfdbg, 6, "%s: head %ld alert %ld empty %ld "
 			    "index %ld tear %#lx dear %#lx\n", __func__,
···
 	debug_sprintf_event(sfdbg, 6, "%s: range %ld head %ld alert %ld "
 			    "empty %ld\n", __func__, range, aux->head,
 			    aux->alert_mark, aux->empty_mark);
-	if (range <= AUX_SDB_NUM_EMPTY(aux))
+	if (range <= aux_sdb_num_empty(aux))
 		/*
 		 * No need to scan. All SDBs in range are marked as empty.
 		 * Just set alert indicator. Should check race with hardware
···
 	 * Start scanning from one SDB behind empty_mark. If the new alert
 	 * indicator fall into this range, set it.
 	 */
-	range_scan = range - AUX_SDB_NUM_EMPTY(aux);
+	range_scan = range - aux_sdb_num_empty(aux);
 	idx_old = idx = aux->empty_mark + 1;
 	for (i = 0; i < range_scan; i++, idx++) {
 		te = aux_sdb_trailer(aux, idx);
···
 		return;
 
 	/* Inform user space new data arrived */
-	size = AUX_SDB_NUM_ALERT(aux) << PAGE_SHIFT;
+	size = aux_sdb_num_alert(aux) << PAGE_SHIFT;
 	debug_sprintf_event(sfdbg, 6, "%s: #alert %ld\n", __func__,
 			    size >> PAGE_SHIFT);
 	perf_aux_output_end(handle, size);
···
 			    "overflow %lld\n", __func__,
 			    aux->head, range, overflow);
 	} else {
-		size = AUX_SDB_NUM_ALERT(aux) << PAGE_SHIFT;
+		size = aux_sdb_num_alert(aux) << PAGE_SHIFT;
 		perf_aux_output_end(&cpuhw->handle, size);
 		debug_sprintf_event(sfdbg, 6, "%s: head %ld alert %ld "
 				    "already full, try another\n",
···
 {
 	struct hws_trailer_entry *te;
 
-	te = (struct hws_trailer_entry *)trailer_entry_ptr(sdb);
+	te = trailer_entry_ptr(sdb);
 
 	/* Save clock base */
 	te->clock_base = 1;
···
 			goto no_sdbt;
 		aux->sdbt_index[sfb->num_sdbt++] = (unsigned long)new;
 		/* Link current page to tail of chain */
-		*tail = (unsigned long)(void *) new + 1;
+		*tail = virt_to_phys(new) + 1;
 		tail = new;
 	}
 	/* Tail is the entry in a SDBT */
-	*tail = (unsigned long)pages[i];
+	*tail = virt_to_phys(pages[i]);
 	aux->sdb_index[i] = (unsigned long)pages[i];
 	aux_sdb_init((unsigned long)pages[i]);
 }
 sfb->num_sdb = nr_pages;
 
 /* Link the last entry in the SDBT to the first SDBT */
-*tail = (unsigned long) sfb->sdbt + 1;
+*tail = virt_to_phys(sfb->sdbt) + 1;
 sfb->tail = tail;
 
 /*
···
 	cpuhw->lsctl.h = 1;
 	cpuhw->lsctl.interval = SAMPL_RATE(&event->hw);
 	if (!SAMPL_DIAG_MODE(&event->hw)) {
-		cpuhw->lsctl.tear = (unsigned long) cpuhw->sfb.sdbt;
+		cpuhw->lsctl.tear = virt_to_phys(cpuhw->sfb.sdbt);
 		cpuhw->lsctl.dear = *(unsigned long *) cpuhw->sfb.sdbt;
 		TEAR_REG(&event->hw) = (unsigned long) cpuhw->sfb.sdbt;
 	}
+1 -1
arch/s390/kernel/perf_pai_ext.c
···
 #include <linux/init.h>
 #include <linux/export.h>
 #include <linux/io.h>
+#include <linux/perf_event.h>
 
-#include <asm/cpu_mcf.h>
 #include <asm/ctl_reg.h>
 #include <asm/pai.h>
 #include <asm/debug.h>
+2 -2
arch/s390/kernel/process.c
···
 	if (unlikely(args->fn)) {
 		/* kernel thread */
 		memset(&frame->childregs, 0, sizeof(struct pt_regs));
-		frame->childregs.psw.mask = PSW_KERNEL_BITS | PSW_MASK_DAT |
-			PSW_MASK_IO | PSW_MASK_EXT | PSW_MASK_MCHECK;
+		frame->childregs.psw.mask = PSW_KERNEL_BITS | PSW_MASK_IO |
+			PSW_MASK_EXT | PSW_MASK_MCHECK;
 		frame->childregs.psw.addr =
 			(unsigned long)__ret_from_fork;
 		frame->childregs.gprs[9] = (unsigned long)args->fn;
+3 -3
arch/s390/kernel/ptrace.c
···
 	if (target == current)
 		save_fpu_regs();
 	for (i = 0; i < __NUM_VXRS_LOW; i++)
-		vxrs[i] = *((__u64 *)(target->thread.fpu.vxrs + i) + 1);
+		vxrs[i] = target->thread.fpu.vxrs[i].low;
 	return membuf_write(&to, vxrs, sizeof(vxrs));
 }
 
···
 		save_fpu_regs();
 
 	for (i = 0; i < __NUM_VXRS_LOW; i++)
-		vxrs[i] = *((__u64 *)(target->thread.fpu.vxrs + i) + 1);
+		vxrs[i] = target->thread.fpu.vxrs[i].low;
 
 	rc = user_regset_copyin(&pos, &count, &kbuf, &ubuf, vxrs, 0, -1);
 	if (rc == 0)
 		for (i = 0; i < __NUM_VXRS_LOW; i++)
-			*((__u64 *)(target->thread.fpu.vxrs + i) + 1) = vxrs[i];
+			target->thread.fpu.vxrs[i].low = vxrs[i];
 
 	return rc;
 }
+34
arch/s390/kernel/rethook.c
···
+// SPDX-License-Identifier: GPL-2.0-or-later
+#include <linux/rethook.h>
+#include <linux/kprobes.h>
+#include "rethook.h"
+
+void arch_rethook_prepare(struct rethook_node *rh, struct pt_regs *regs, bool mcount)
+{
+	rh->ret_addr = regs->gprs[14];
+	rh->frame = regs->gprs[15];
+
+	/* Replace the return addr with trampoline addr */
+	regs->gprs[14] = (unsigned long)&arch_rethook_trampoline;
+}
+NOKPROBE_SYMBOL(arch_rethook_prepare);
+
+void arch_rethook_fixup_return(struct pt_regs *regs,
+			       unsigned long correct_ret_addr)
+{
+	/* Replace fake return address with real one. */
+	regs->gprs[14] = correct_ret_addr;
+}
+NOKPROBE_SYMBOL(arch_rethook_fixup_return);
+
+/*
+ * Called from arch_rethook_trampoline
+ */
+unsigned long arch_rethook_trampoline_callback(struct pt_regs *regs)
+{
+	return rethook_trampoline_handler(regs, regs->gprs[15]);
+}
+NOKPROBE_SYMBOL(arch_rethook_trampoline_callback);
+
+/* assembler function that handles the rethook must not be probed itself */
+NOKPROBE_SYMBOL(arch_rethook_trampoline);
+7
arch/s390/kernel/rethook.h
···
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef __S390_RETHOOK_H
+#define __S390_RETHOOK_H
+
+unsigned long arch_rethook_trampoline_callback(struct pt_regs *regs);
+
+#endif
+42 -54
arch/s390/kernel/setup.c
···
 unsigned long __bootdata(ident_map_size);
 struct mem_detect_info __bootdata(mem_detect);
 struct initrd_data __bootdata(initrd_data);
+unsigned long __bootdata(pgalloc_pos);
+unsigned long __bootdata(pgalloc_end);
+unsigned long __bootdata(pgalloc_low);
 
 unsigned long __bootdata_preserved(__kaslr_offset);
 unsigned long __bootdata(__amode31_base);
···
 	call_on_stack_noreturn(rest_init, stack);
 }
 
-static void __init setup_lowcore_dat_off(void)
+static void __init setup_lowcore(void)
 {
-	unsigned long int_psw_mask = PSW_KERNEL_BITS;
-	struct lowcore *abs_lc, *lc;
+	struct lowcore *lc, *abs_lc;
 	unsigned long mcck_stack;
-	unsigned long flags;
-
-	if (IS_ENABLED(CONFIG_KASAN))
-		int_psw_mask |= PSW_MASK_DAT;
 
 	/*
 	 * Setup lowcore for boot cpu
···
 		panic("%s: Failed to allocate %zu bytes align=%zx\n",
 		      __func__, sizeof(*lc), sizeof(*lc));
 
-	lc->restart_psw.mask = PSW_KERNEL_BITS;
-	lc->restart_psw.addr = (unsigned long) restart_int_handler;
-	lc->external_new_psw.mask = int_psw_mask | PSW_MASK_MCHECK;
+	lc->restart_psw.mask = PSW_KERNEL_BITS & ~PSW_MASK_DAT;
+	lc->restart_psw.addr = __pa(restart_int_handler);
+	lc->external_new_psw.mask = PSW_KERNEL_BITS | PSW_MASK_MCHECK;
 	lc->external_new_psw.addr = (unsigned long) ext_int_handler;
-	lc->svc_new_psw.mask = int_psw_mask | PSW_MASK_MCHECK;
+	lc->svc_new_psw.mask = PSW_KERNEL_BITS | PSW_MASK_MCHECK;
 	lc->svc_new_psw.addr = (unsigned long) system_call;
-	lc->program_new_psw.mask = int_psw_mask | PSW_MASK_MCHECK;
+	lc->program_new_psw.mask = PSW_KERNEL_BITS | PSW_MASK_MCHECK;
 	lc->program_new_psw.addr = (unsigned long) pgm_check_handler;
-	lc->mcck_new_psw.mask = int_psw_mask;
+	lc->mcck_new_psw.mask = PSW_KERNEL_BITS;
 	lc->mcck_new_psw.addr = (unsigned long) mcck_int_handler;
-	lc->io_new_psw.mask = int_psw_mask | PSW_MASK_MCHECK;
+	lc->io_new_psw.mask = PSW_KERNEL_BITS | PSW_MASK_MCHECK;
 	lc->io_new_psw.addr = (unsigned long) io_int_handler;
 	lc->clock_comparator = clock_comparator_max;
 	lc->nodat_stack = ((unsigned long) &init_thread_union)
···
 	lc->restart_fn = (unsigned long) do_restart;
 	lc->restart_data = 0;
 	lc->restart_source = -1U;
-
-	abs_lc = get_abs_lowcore(&flags);
-	abs_lc->restart_stack = lc->restart_stack;
-	abs_lc->restart_fn = lc->restart_fn;
-	abs_lc->restart_data = lc->restart_data;
-	abs_lc->restart_source = lc->restart_source;
-	abs_lc->restart_psw = lc->restart_psw;
-	abs_lc->mcesad = lc->mcesad;
-	put_abs_lowcore(abs_lc, flags);
+	__ctl_store(lc->cregs_save_area, 0, 15);
 
 	mcck_stack = (unsigned long)memblock_alloc(THREAD_SIZE, THREAD_SIZE);
 	if (!mcck_stack)
···
 	lc->return_lpswe = gen_lpswe(__LC_RETURN_PSW);
 	lc->return_mcck_lpswe = gen_lpswe(__LC_RETURN_MCCK_PSW);
 	lc->preempt_count = PREEMPT_DISABLED;
+	lc->kernel_asce = S390_lowcore.kernel_asce;
+	lc->user_asce = S390_lowcore.user_asce;
+
+	abs_lc = get_abs_lowcore();
+	abs_lc->restart_stack = lc->restart_stack;
+	abs_lc->restart_fn = lc->restart_fn;
+	abs_lc->restart_data = lc->restart_data;
+	abs_lc->restart_source = lc->restart_source;
+	abs_lc->restart_psw = lc->restart_psw;
+	abs_lc->restart_flags = RESTART_FLAG_CTLREGS;
+	memcpy(abs_lc->cregs_save_area, lc->cregs_save_area, sizeof(abs_lc->cregs_save_area));
+	abs_lc->program_new_psw = lc->program_new_psw;
+	abs_lc->mcesad = lc->mcesad;
+	put_abs_lowcore(abs_lc);
 
 	set_prefix(__pa(lc));
 	lowcore_ptr[0] = lc;
-}
-
-static void __init setup_lowcore_dat_on(void)
-{
-	struct lowcore *abs_lc;
-	unsigned long flags;
-	int i;
-
-	__ctl_clear_bit(0, 28);
-	S390_lowcore.external_new_psw.mask |= PSW_MASK_DAT;
-	S390_lowcore.svc_new_psw.mask |= PSW_MASK_DAT;
-	S390_lowcore.program_new_psw.mask |= PSW_MASK_DAT;
-	S390_lowcore.mcck_new_psw.mask |= PSW_MASK_DAT;
-	S390_lowcore.io_new_psw.mask |= PSW_MASK_DAT;
-	__ctl_set_bit(0, 28);
-	__ctl_store(S390_lowcore.cregs_save_area, 0, 15);
-	if (abs_lowcore_map(0, lowcore_ptr[0], true))
+	if (abs_lowcore_map(0, lowcore_ptr[0], false))
 		panic("Couldn't setup absolute lowcore");
-	abs_lowcore_mapped = true;
-	abs_lc = get_abs_lowcore(&flags);
-	abs_lc->restart_flags = RESTART_FLAG_CTLREGS;
-	abs_lc->program_new_psw = S390_lowcore.program_new_psw;
-	for (i = 0; i < 16; i++)
-		abs_lc->cregs_save_area[i] = S390_lowcore.cregs_save_area[i];
-	put_abs_lowcore(abs_lc, flags);
 }
 
 static struct resource code_resource = {
···
 
 static void __init setup_memory_end(void)
 {
-	memblock_remove(ident_map_size, PHYS_ADDR_MAX - ident_map_size);
 	max_pfn = max_low_pfn = PFN_DOWN(ident_map_size);
 	pr_notice("The maximum memory size is %luMB\n", ident_map_size >> 20);
 }
···
 };
 
 #endif
+
+/*
+ * Reserve page tables created by decompressor
+ */
+static void __init reserve_pgtables(void)
+{
+	memblock_reserve(pgalloc_pos, pgalloc_end - pgalloc_pos);
+}
 
 /*
  * Reserve memory for kdump kernel to be loaded with kexec
···
 		get_mem_info_source(), mem_detect.info_source);
 	/* keep memblock lists close to the kernel */
 	memblock_set_bottom_up(true);
-	for_each_mem_detect_block(i, &start, &end) {
+	for_each_mem_detect_usable_block(i, &start, &end)
 		memblock_add(start, end - start);
+	for_each_mem_detect_block(i, &start, &end)
 		memblock_physmem_add(start, end - start);
-	}
 	memblock_set_bottom_up(false);
 	memblock_set_node(0, ULONG_MAX, &memblock.memory, 0);
 }
···
 	setup_control_program_code();
 
 	/* Do some memory reservations *before* memory is added to memblock */
+	reserve_pgtables();
 	reserve_kernel();
 	reserve_initrd();
 	reserve_certificate_list();
···
 #endif
 
 	setup_resources();
-	setup_lowcore_dat_off();
+	setup_lowcore();
 	smp_fill_possible_mask();
 	cpu_detect_mhz_feature();
 	cpu_init();
···
 		static_branch_enable(&cpu_has_bear);
 
 	/*
-	 * Create kernel page tables and switch to virtual addressing.
+	 * Create kernel page tables.
 	 */
 	paging_init();
-	memcpy_real_init();
+
 	/*
 	 * After paging_init created the kernel page table, the new PSWs
 	 * in lowcore can now run with DAT enabled.
 	 */
-	setup_lowcore_dat_on();
 #ifdef CONFIG_CRASH_DUMP
 	smp_save_dump_ipl_cpu();
 #endif
+2 -2
arch/s390/kernel/signal.c
···
 	/* Save vector registers to signal stack */
 	if (MACHINE_HAS_VX) {
 		for (i = 0; i < __NUM_VXRS_LOW; i++)
-			vxrs[i] = *((__u64 *)(current->thread.fpu.vxrs + i) + 1);
+			vxrs[i] = current->thread.fpu.vxrs[i].low;
 		if (__copy_to_user(&sregs_ext->vxrs_low, vxrs,
 				   sizeof(sregs_ext->vxrs_low)) ||
 		    __copy_to_user(&sregs_ext->vxrs_high,
···
 				     sizeof(sregs_ext->vxrs_high)))
 			return -EFAULT;
 		for (i = 0; i < __NUM_VXRS_LOW; i++)
-			*((__u64 *)(current->thread.fpu.vxrs + i) + 1) = vxrs[i];
+			current->thread.fpu.vxrs[i].low = vxrs[i];
 	}
 	return 0;
 }
+6 -8
arch/s390/kernel/smp.c
···
 {
 	struct lowcore *lc, *abs_lc;
 	unsigned int source_cpu;
-	unsigned long flags;
 
 	lc = lowcore_ptr[pcpu - pcpu_devices];
 	source_cpu = stap();
-	__load_psw_mask(PSW_KERNEL_BITS | PSW_MASK_DAT);
+
 	if (pcpu->address == source_cpu) {
 		call_on_stack(2, stack, void, __pcpu_delegate,
 			      pcpu_delegate_fn *, func, void *, data);
···
 		lc->restart_data = (unsigned long)data;
 		lc->restart_source = source_cpu;
 	} else {
-		abs_lc = get_abs_lowcore(&flags);
+		abs_lc = get_abs_lowcore();
 		abs_lc->restart_stack = stack;
 		abs_lc->restart_fn = (unsigned long)func;
 		abs_lc->restart_data = (unsigned long)data;
 		abs_lc->restart_source = source_cpu;
-		put_abs_lowcore(abs_lc, flags);
+		put_abs_lowcore(abs_lc);
 	}
 	__bpon();
 	asm volatile(
···
 	int cpu;
 
 	/* Disable all interrupts/machine checks */
-	__load_psw_mask(PSW_KERNEL_BITS | PSW_MASK_DAT);
+	__load_psw_mask(PSW_KERNEL_BITS);
 	trace_hardirqs_off();
 
 	debug_set_critical();
···
 {
 	struct ec_creg_mask_parms parms = { .cr = cr, };
 	struct lowcore *abs_lc;
-	unsigned long flags;
 	u64 ctlreg;
 
 	if (set) {
···
 		parms.andval = ~(1UL << bit);
 	}
 	spin_lock(&ctl_lock);
-	abs_lc = get_abs_lowcore(&flags);
+	abs_lc = get_abs_lowcore();
 	ctlreg = abs_lc->cregs_save_area[cr];
 	ctlreg = (ctlreg & parms.andval) | parms.orval;
 	abs_lc->cregs_save_area[cr] = ctlreg;
-	put_abs_lowcore(abs_lc, flags);
+	put_abs_lowcore(abs_lc);
 	spin_unlock(&ctl_lock);
 	on_each_cpu(smp_ctl_bit_callback, &parms, 1);
 }
+3 -3
arch/s390/kernel/stacktrace.c
···
 	if (!addr)
 		return -EINVAL;
 
-#ifdef CONFIG_KPROBES
+#ifdef CONFIG_RETHOOK
 	/*
-	 * Mark stacktraces with kretprobed functions on them
+	 * Mark stacktraces with krethook functions on them
 	 * as unreliable.
 	 */
-	if (state.ip == (unsigned long)__kretprobe_trampoline)
+	if (state.ip == (unsigned long)arch_rethook_trampoline)
 		return -EINVAL;
 #endif
 
+13
arch/s390/kernel/text_amode31.S
···
 ENDPROC(_diag210_amode31)
 
 /*
+ * int diag8c(struct diag8c *addr, struct ccw_dev_id *devno, size_t len)
+ */
+ENTRY(_diag8c_amode31)
+	llgf	%r3,0(%r3)
+	sam31
+	diag	%r2,%r4,0x8c
+.Ldiag8c_ex:
+	sam64
+	lgfr	%r2,%r3
+	BR_EX_AMODE31_r14
+	EX_TABLE_AMODE31(.Ldiag8c_ex, .Ldiag8c_ex)
+ENDPROC(_diag8c_amode31)
+/*
  * int _diag26c_amode31(void *req, void *resp, enum diag26c_sc subcode)
  */
 ENTRY(_diag26c_amode31)
+4
arch/s390/kernel/vmlinux.lds.S
···
 		QUAD(__rela_dyn_start)		/* rela_dyn_start */
 		QUAD(__rela_dyn_end)		/* rela_dyn_end */
 		QUAD(_eamode31 - _samode31)	/* amode31_size */
+		QUAD(init_mm)
+		QUAD(swapper_pg_dir)
+		QUAD(invalid_pg_dir)
 	} :NONE
 
 	/* Debugging sections. */
···
 	DISCARDS
 	/DISCARD/ : {
 		*(.eh_frame)
+		*(.interp)
 	}
 }
+6 -6
arch/s390/lib/test_unwind.c
···
 static noinline int test_unwind(struct task_struct *task, struct pt_regs *regs,
 				unsigned long sp)
 {
-	int frame_count, prev_is_func2, seen_func2_func1, seen_kretprobe_trampoline;
+	int frame_count, prev_is_func2, seen_func2_func1, seen_arch_rethook_trampoline;
 	const int max_frames = 128;
 	struct unwind_state state;
 	size_t bt_pos = 0;
···
 	frame_count = 0;
 	prev_is_func2 = 0;
 	seen_func2_func1 = 0;
-	seen_kretprobe_trampoline = 0;
+	seen_arch_rethook_trampoline = 0;
 	unwind_for_each_frame(&state, task, regs, sp) {
 		unsigned long addr = unwind_get_return_address(&state);
 		char sym[KSYM_SYMBOL_LEN];
···
 		if (prev_is_func2 && str_has_prefix(sym, "unwindme_func1"))
 			seen_func2_func1 = 1;
 		prev_is_func2 = str_has_prefix(sym, "unwindme_func2");
-		if (str_has_prefix(sym, "__kretprobe_trampoline+0x0/"))
-			seen_kretprobe_trampoline = 1;
+		if (str_has_prefix(sym, "arch_rethook_trampoline+0x0/"))
+			seen_arch_rethook_trampoline = 1;
 	}
 
 	/* Check the results. */
···
 		kunit_err(current_test, "Maximum number of frames exceeded\n");
 		ret = -EINVAL;
 	}
-	if (seen_kretprobe_trampoline) {
-		kunit_err(current_test, "__kretprobe_trampoline+0x0 in unwinding results\n");
+	if (seen_arch_rethook_trampoline) {
+		kunit_err(current_test, "arch_rethook_trampoline+0x0 in unwinding results\n");
 		ret = -EINVAL;
 	}
 	if (ret || force_bt)
+8 -8
arch/s390/mm/dump_pagetables.c
···
 #endif
 	IDENTITY_AFTER_NR,
 	IDENTITY_AFTER_END_NR,
-#ifdef CONFIG_KASAN
-	KASAN_SHADOW_START_NR,
-	KASAN_SHADOW_END_NR,
-#endif
 	VMEMMAP_NR,
 	VMEMMAP_END_NR,
 	VMALLOC_NR,
···
 	ABS_LOWCORE_END_NR,
 	MEMCPY_REAL_NR,
 	MEMCPY_REAL_END_NR,
+#ifdef CONFIG_KASAN
+	KASAN_SHADOW_START_NR,
+	KASAN_SHADOW_END_NR,
+#endif
 };
 
 static struct addr_marker address_markers[] = {
···
 #endif
 	[IDENTITY_AFTER_NR] = {(unsigned long)_end, "Identity Mapping Start"},
 	[IDENTITY_AFTER_END_NR] = {0, "Identity Mapping End"},
-#ifdef CONFIG_KASAN
-	[KASAN_SHADOW_START_NR] = {KASAN_SHADOW_START, "Kasan Shadow Start"},
-	[KASAN_SHADOW_END_NR] = {KASAN_SHADOW_END, "Kasan Shadow End"},
-#endif
 	[VMEMMAP_NR] = {0, "vmemmap Area Start"},
 	[VMEMMAP_END_NR] = {0, "vmemmap Area End"},
 	[VMALLOC_NR] = {0, "vmalloc Area Start"},
···
 	[ABS_LOWCORE_END_NR] = {0, "Lowcore Area End"},
 	[MEMCPY_REAL_NR] = {0, "Real Memory Copy Area Start"},
 	[MEMCPY_REAL_END_NR] = {0, "Real Memory Copy Area End"},
+#ifdef CONFIG_KASAN
+	[KASAN_SHADOW_START_NR] = {KASAN_SHADOW_START, "Kasan Shadow Start"},
+	[KASAN_SHADOW_END_NR] = {KASAN_SHADOW_END, "Kasan Shadow End"},
+#endif
 	{ -1, NULL }
 };
 
+7 -2
arch/s390/mm/extable.c
···
 	return true;
 }
 
-static bool ex_handler_ua_load_reg(const struct exception_table_entry *ex, struct pt_regs *regs)
+static bool ex_handler_ua_load_reg(const struct exception_table_entry *ex,
+				   bool pair, struct pt_regs *regs)
 {
 	unsigned int reg_zero = FIELD_GET(EX_DATA_REG_ADDR, ex->data);
 	unsigned int reg_err = FIELD_GET(EX_DATA_REG_ERR, ex->data);
 
 	regs->gprs[reg_err] = -EFAULT;
 	regs->gprs[reg_zero] = 0;
+	if (pair)
+		regs->gprs[reg_zero + 1] = 0;
 	regs->psw.addr = extable_fixup(ex);
 	return true;
 }
···
 	case EX_TYPE_UA_LOAD_MEM:
 		return ex_handler_ua_load_mem(ex, regs);
 	case EX_TYPE_UA_LOAD_REG:
-		return ex_handler_ua_load_reg(ex, regs);
+		return ex_handler_ua_load_reg(ex, false, regs);
+	case EX_TYPE_UA_LOAD_REGPAIR:
+		return ex_handler_ua_load_reg(ex, true, regs);
 	}
 	panic("invalid exception table entry");
 }
+44 -19
arch/s390/mm/fault.c
···
 #define __SUBCODE_MASK 0x0600
 #define __PF_RES_FIELD 0x8000000000000000ULL
 
-#define VM_FAULT_BADCONTEXT	((__force vm_fault_t) 0x010000)
-#define VM_FAULT_BADMAP		((__force vm_fault_t) 0x020000)
-#define VM_FAULT_BADACCESS	((__force vm_fault_t) 0x040000)
-#define VM_FAULT_SIGNAL		((__force vm_fault_t) 0x080000)
-#define VM_FAULT_PFAULT		((__force vm_fault_t) 0x100000)
+/*
+ * Allocate private vm_fault_reason from top.  Please make sure it won't
+ * collide with vm_fault_reason.
+ */
+#define VM_FAULT_BADCONTEXT	((__force vm_fault_t)0x80000000)
+#define VM_FAULT_BADMAP		((__force vm_fault_t)0x40000000)
+#define VM_FAULT_BADACCESS	((__force vm_fault_t)0x20000000)
+#define VM_FAULT_SIGNAL		((__force vm_fault_t)0x10000000)
+#define VM_FAULT_PFAULT		((__force vm_fault_t)0x8000000)
 
 enum fault_type {
 	KERNEL_FAULT,
···
 	}
 	/* home space exception -> access via kernel ASCE */
 	return KERNEL_FAULT;
+}
+
+static unsigned long get_fault_address(struct pt_regs *regs)
+{
+	unsigned long trans_exc_code = regs->int_parm_long;
+
+	return trans_exc_code & __FAIL_ADDR_MASK;
+}
+
+static bool fault_is_write(struct pt_regs *regs)
+{
+	unsigned long trans_exc_code = regs->int_parm_long;
+
+	return (trans_exc_code & store_indication) == 0x400;
 }
 
 static int bad_address(void *p)
···
 		(void __user *)(regs->int_parm_long & __FAIL_ADDR_MASK));
 }
 
-static noinline void do_no_context(struct pt_regs *regs)
+static noinline void do_no_context(struct pt_regs *regs, vm_fault_t fault)
 {
+	enum fault_type fault_type;
+	unsigned long address;
+	bool is_write;
+
 	if (fixup_exception(regs))
 		return;
+	fault_type = get_fault_type(regs);
+	if ((fault_type == KERNEL_FAULT) && (fault == VM_FAULT_BADCONTEXT)) {
+		address = get_fault_address(regs);
+		is_write = fault_is_write(regs);
+		if (kfence_handle_page_fault(address, is_write, regs))
+			return;
+	}
 	/*
 	 * Oops. The kernel tried to access some bad page. We'll have to
 	 * terminate things with extreme prejudice.
 	 */
-	if (get_fault_type(regs) == KERNEL_FAULT)
+	if (fault_type == KERNEL_FAULT)
 		printk(KERN_ALERT "Unable to handle kernel pointer dereference"
 		       " in virtual kernel address space\n");
 	else
···
 		die (regs, "Low-address protection");
 	}
 
-	do_no_context(regs);
+	do_no_context(regs, VM_FAULT_BADACCESS);
 }
 
 static noinline void do_sigbus(struct pt_regs *regs)
···
 		fallthrough;
 	case VM_FAULT_BADCONTEXT:
 	case VM_FAULT_PFAULT:
-		do_no_context(regs);
+		do_no_context(regs, fault);
 		break;
 	case VM_FAULT_SIGNAL:
 		if (!user_mode(regs))
-			do_no_context(regs);
+			do_no_context(regs, fault);
 		break;
 	default: /* fault & VM_FAULT_ERROR */
 		if (fault & VM_FAULT_OOM) {
 			if (!user_mode(regs))
-				do_no_context(regs);
+				do_no_context(regs, fault);
 			else
 				pagefault_out_of_memory();
 		} else if (fault & VM_FAULT_SIGSEGV) {
 			/* Kernel mode? Handle exceptions or die */
 			if (!user_mode(regs))
-				do_no_context(regs);
+				do_no_context(regs, fault);
 			else
 				do_sigsegv(regs, SEGV_MAPERR);
 		} else if (fault & VM_FAULT_SIGBUS) {
 			/* Kernel mode? Handle exceptions or die */
 			if (!user_mode(regs))
-				do_no_context(regs);
+				do_no_context(regs, fault);
 			else
 				do_sigbus(regs);
 		} else
···
 	struct mm_struct *mm;
 	struct vm_area_struct *vma;
 	enum fault_type type;
-	unsigned long trans_exc_code;
 	unsigned long address;
 	unsigned int flags;
 	vm_fault_t fault;
···
 		return 0;
 
 	mm = tsk->mm;
-	trans_exc_code = regs->int_parm_long;
-	address = trans_exc_code & __FAIL_ADDR_MASK;
-	is_write = (trans_exc_code & store_indication) == 0x400;
+	address = get_fault_address(regs);
+	is_write = fault_is_write(regs);
 
 	/*
 	 * Verify that the fault happened in user space, that
···
 	type = get_fault_type(regs);
 	switch (type) {
 	case KERNEL_FAULT:
-		if (kfence_handle_page_fault(address, is_write, regs))
-			return 0;
 		goto out;
 	case USER_FAULT:
 	case GMAP_FAULT:
+2 -31
arch/s390/mm/init.c
···
 #include <linux/virtio_config.h>
 
 pgd_t swapper_pg_dir[PTRS_PER_PGD] __section(".bss..swapper_pg_dir");
-static pgd_t invalid_pg_dir[PTRS_PER_PGD] __section(".bss..invalid_pg_dir");
+pgd_t invalid_pg_dir[PTRS_PER_PGD] __section(".bss..invalid_pg_dir");
 
-unsigned long s390_invalid_asce;
+unsigned long __bootdata_preserved(s390_invalid_asce);
 
 unsigned long empty_zero_page, zero_page_mask;
 EXPORT_SYMBOL(empty_zero_page);
···
 void __init paging_init(void)
 {
 	unsigned long max_zone_pfns[MAX_NR_ZONES];
-	unsigned long pgd_type, asce_bits;
-	psw_t psw;
 
-	s390_invalid_asce  = (unsigned long)invalid_pg_dir;
-	s390_invalid_asce |= _ASCE_TYPE_REGION3 | _ASCE_TABLE_LENGTH;
-	crst_table_init((unsigned long *)invalid_pg_dir, _REGION3_ENTRY_EMPTY);
-	init_mm.pgd = swapper_pg_dir;
-	if (VMALLOC_END > _REGION2_SIZE) {
-		asce_bits = _ASCE_TYPE_REGION2 | _ASCE_TABLE_LENGTH;
-		pgd_type = _REGION2_ENTRY_EMPTY;
-	} else {
-		asce_bits = _ASCE_TYPE_REGION3 | _ASCE_TABLE_LENGTH;
-		pgd_type = _REGION3_ENTRY_EMPTY;
-	}
-	init_mm.context.asce = (__pa(init_mm.pgd) & PAGE_MASK) | asce_bits;
-	S390_lowcore.kernel_asce = init_mm.context.asce;
-	S390_lowcore.user_asce = s390_invalid_asce;
-	crst_table_init((unsigned long *) init_mm.pgd, pgd_type);
 	vmem_map_init();
-	kasan_copy_shadow_mapping();
-
-	/* enable virtual mapping in kernel mode */
-	__ctl_load(S390_lowcore.kernel_asce, 1, 1);
-	__ctl_load(S390_lowcore.user_asce, 7, 7);
-	__ctl_load(S390_lowcore.kernel_asce, 13, 13);
-	psw.mask = __extract_psw();
-	psw_bits(psw).dat = 1;
-	psw_bits(psw).as = PSW_BITS_AS_HOME;
-	__load_psw_mask(psw.mask);
-	kasan_free_early_identity();
-
 	sparse_init();
 	zone_dma_bits = 31;
 	memset(max_zone_pfns, 0, sizeof(max_zone_pfns));
+71 -173
arch/s390/mm/kasan_init.c
···
 // SPDX-License-Identifier: GPL-2.0
 #include <linux/kasan.h>
 #include <linux/sched/task.h>
-#include <linux/memblock.h>
 #include <linux/pgtable.h>
 #include <asm/pgalloc.h>
 #include <asm/kasan.h>
···
 
 static unsigned long segment_pos __initdata;
 static unsigned long segment_low __initdata;
-static unsigned long pgalloc_pos __initdata;
-static unsigned long pgalloc_low __initdata;
-static unsigned long pgalloc_freeable __initdata;
 static bool has_edat __initdata;
 static bool has_nx __initdata;
 
 #define __sha(x) ((unsigned long)kasan_mem_to_shadow((void *)x))
-
-static pgd_t early_pg_dir[PTRS_PER_PGD] __initdata __aligned(PAGE_SIZE);
 
 static void __init kasan_early_panic(const char *reason)
 {
···
 	if (segment_pos < segment_low)
 		kasan_early_panic("out of memory during initialisation\n");
 
-	return (void *)segment_pos;
+	return __va(segment_pos);
 }
 
 static void * __init kasan_early_alloc_pages(unsigned int order)
···
 	if (pgalloc_pos < pgalloc_low)
 		kasan_early_panic("out of memory during initialisation\n");
 
-	return (void *)pgalloc_pos;
+	return __va(pgalloc_pos);
 }
 
 static void * __init kasan_early_crst_alloc(unsigned long val)
···
 }
 
 enum populate_mode {
-	POPULATE_ONE2ONE,
 	POPULATE_MAP,
 	POPULATE_ZERO_SHADOW,
 	POPULATE_SHALLOW
 };
+
+static inline pgprot_t pgprot_clear_bit(pgprot_t pgprot, unsigned long bit)
+{
+	return __pgprot(pgprot_val(pgprot) & ~bit);
+}
+
 static void __init kasan_early_pgtable_populate(unsigned long address,
 						unsigned long end,
 						enum populate_mode mode)
 {
-	unsigned long pgt_prot_zero, pgt_prot, sgt_prot;
+	pgprot_t pgt_prot_zero = PAGE_KERNEL_RO;
+	pgprot_t pgt_prot = PAGE_KERNEL;
+	pgprot_t sgt_prot = SEGMENT_KERNEL;
 	pgd_t *pg_dir;
 	p4d_t *p4_dir;
 	pud_t *pu_dir;
 	pmd_t *pm_dir;
 	pte_t *pt_dir;
+	pmd_t pmd;
+	pte_t pte;
 
-	pgt_prot_zero = pgprot_val(PAGE_KERNEL_RO);
-	if (!has_nx)
-		pgt_prot_zero &= ~_PAGE_NOEXEC;
-	pgt_prot = pgprot_val(PAGE_KERNEL);
-	sgt_prot = pgprot_val(SEGMENT_KERNEL);
-	if (!has_nx || mode == POPULATE_ONE2ONE) {
-		pgt_prot &= ~_PAGE_NOEXEC;
-		sgt_prot &= ~_SEGMENT_ENTRY_NOEXEC;
+	if (!has_nx) {
+		pgt_prot_zero = pgprot_clear_bit(pgt_prot_zero, _PAGE_NOEXEC);
+		pgt_prot = pgprot_clear_bit(pgt_prot, _PAGE_NOEXEC);
+		sgt_prot = pgprot_clear_bit(sgt_prot, _SEGMENT_ENTRY_NOEXEC);
 	}
 
-	/*
-	 * The first 1MB of 1:1 mapping is mapped with 4KB pages
-	 */
 	while (address < end) {
 		pg_dir = pgd_offset_k(address);
 		if (pgd_none(*pg_dir)) {
···
 			pmd_populate(&init_mm, pm_dir, kasan_early_shadow_pte);
 			address = (address + PMD_SIZE) & PMD_MASK;
 			continue;
-		} else if (has_edat && address) {
-			void *page;
+		} else if (has_edat) {
+			void *page = kasan_early_alloc_segment();
 
-			if (mode == POPULATE_ONE2ONE) {
-				page = (void *)address;
-			} else {
-				page = kasan_early_alloc_segment();
-				memset(page, 0, _SEGMENT_SIZE);
-			}
-			set_pmd(pm_dir, __pmd(__pa(page) | sgt_prot));
+			memset(page, 0, _SEGMENT_SIZE);
+			pmd = __pmd(__pa(page));
+			pmd = set_pmd_bit(pmd, sgt_prot);
+			set_pmd(pm_dir, pmd);
 			address = (address + PMD_SIZE) & PMD_MASK;
 			continue;
 		}
···
 			void *page;
 
 			switch (mode) {
-			case POPULATE_ONE2ONE:
-				page = (void *)address;
-				set_pte(pt_dir, __pte(__pa(page) | pgt_prot));
-				break;
 			case POPULATE_MAP:
 				page = kasan_early_alloc_pages(0);
 				memset(page, 0, PAGE_SIZE);
-				set_pte(pt_dir, __pte(__pa(page) | pgt_prot));
+				pte = __pte(__pa(page));
+				pte = set_pte_bit(pte, pgt_prot);
+				set_pte(pt_dir, pte);
 				break;
 			case POPULATE_ZERO_SHADOW:
 				page = kasan_early_shadow_page;
-				set_pte(pt_dir, __pte(__pa(page) | pgt_prot_zero));
+				pte = __pte(__pa(page));
+				pte = set_pte_bit(pte, pgt_prot_zero);
+				set_pte(pt_dir, pte);
 				break;
 			case POPULATE_SHALLOW:
 				/* should never happen */
···
 		}
 		address += PAGE_SIZE;
 	}
-}
-
-static void __init kasan_set_pgd(pgd_t *pgd, unsigned long asce_type)
-{
-	unsigned long asce_bits;
-
-	asce_bits = asce_type | _ASCE_TABLE_LENGTH;
-	S390_lowcore.kernel_asce = (__pa(pgd) & PAGE_MASK) | asce_bits;
-	S390_lowcore.user_asce = S390_lowcore.kernel_asce;
-
-	__ctl_load(S390_lowcore.kernel_asce, 1, 1);
-	__ctl_load(S390_lowcore.kernel_asce, 7, 7);
-	__ctl_load(S390_lowcore.kernel_asce, 13, 13);
-}
-
-static void __init kasan_enable_dat(void)
-{
-	psw_t psw;
-
-	psw.mask = __extract_psw();
-	psw_bits(psw).dat = 1;
-	psw_bits(psw).as = PSW_BITS_AS_HOME;
-	__load_psw_mask(psw.mask);
 }
 
 static void __init kasan_early_detect_facilities(void)
···
 
 void __init kasan_early_init(void)
 {
-	unsigned long shadow_alloc_size;
-	unsigned long initrd_end;
-	unsigned long memsize;
-	unsigned long pgt_prot = pgprot_val(PAGE_KERNEL_RO);
-	pte_t pte_z;
+	pte_t pte_z = __pte(__pa(kasan_early_shadow_page) | pgprot_val(PAGE_KERNEL_RO));
 	pmd_t pmd_z = __pmd(__pa(kasan_early_shadow_pte) | _SEGMENT_ENTRY);
 	pud_t pud_z = __pud(__pa(kasan_early_shadow_pmd) | _REGION3_ENTRY);
 	p4d_t p4d_z = __p4d(__pa(kasan_early_shadow_pud) | _REGION2_ENTRY);
+	unsigned long untracked_end = MODULES_VADDR;
+	unsigned long shadow_alloc_size;
+	unsigned long start, end;
+	int i;
 
 	kasan_early_detect_facilities();
 	if (!has_nx)
-		pgt_prot &= ~_PAGE_NOEXEC;
-	pte_z = __pte(__pa(kasan_early_shadow_page) | pgt_prot);
-
-	memsize = get_mem_detect_end();
-	if (!memsize)
-		kasan_early_panic("cannot detect physical memory size\n");
-	/*
-	 * Kasan currently supports standby memory but only if it follows
-	 * online memory (default allocation), i.e. no memory holes.
-	 * - memsize represents end of online memory
-	 * - ident_map_size represents online + standby and memory limits
-	 *   accounted.
-	 * Kasan maps "memsize" right away.
-	 * [0, memsize]			- as identity mapping
-	 * [__sha(0), __sha(memsize)]	- shadow memory for identity mapping
-	 * The rest [memsize, ident_map_size] if memsize < ident_map_size
-	 * could be mapped/unmapped dynamically later during memory hotplug.
-	 */
-	memsize = min(memsize, ident_map_size);
+		pte_z = clear_pte_bit(pte_z, __pgprot(_PAGE_NOEXEC));
 
 	BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_START, P4D_SIZE));
 	BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_END, P4D_SIZE));
-	crst_table_init((unsigned long *)early_pg_dir, _REGION2_ENTRY_EMPTY);
 
 	/* init kasan zero shadow */
-	crst_table_init((unsigned long *)kasan_early_shadow_p4d,
-			p4d_val(p4d_z));
-	crst_table_init((unsigned long *)kasan_early_shadow_pud,
-			pud_val(pud_z));
-	crst_table_init((unsigned long *)kasan_early_shadow_pmd,
-			pmd_val(pmd_z));
+	crst_table_init((unsigned long *)kasan_early_shadow_p4d, p4d_val(p4d_z));
+	crst_table_init((unsigned long *)kasan_early_shadow_pud, pud_val(pud_z));
+	crst_table_init((unsigned long *)kasan_early_shadow_pmd, pmd_val(pmd_z));
 	memset64((u64 *)kasan_early_shadow_pte, pte_val(pte_z), PTRS_PER_PTE);
 
-	shadow_alloc_size = memsize >> KASAN_SHADOW_SCALE_SHIFT;
-	pgalloc_low = round_up((unsigned long)_end, _SEGMENT_SIZE);
-	if (IS_ENABLED(CONFIG_BLK_DEV_INITRD)) {
-		initrd_end =
-			round_up(initrd_data.start + initrd_data.size, _SEGMENT_SIZE);
pgalloc_low = max(pgalloc_low, initrd_end); 274 - } 275 - 276 - if (pgalloc_low + shadow_alloc_size > memsize) 277 - kasan_early_panic("out of memory during initialisation\n"); 278 - 279 276 if (has_edat) { 280 - segment_pos = round_down(memsize, _SEGMENT_SIZE); 277 + shadow_alloc_size = get_mem_detect_usable_total() >> KASAN_SHADOW_SCALE_SHIFT; 278 + segment_pos = round_down(pgalloc_pos, _SEGMENT_SIZE); 281 279 segment_low = segment_pos - shadow_alloc_size; 280 + segment_low = round_down(segment_low, _SEGMENT_SIZE); 282 281 pgalloc_pos = segment_low; 283 - } else { 284 - pgalloc_pos = memsize; 285 282 } 286 - init_mm.pgd = early_pg_dir; 287 283 /* 288 284 * Current memory layout: 289 - * +- 0 -------------+ +- shadow start -+ 290 - * | 1:1 ram mapping | /| 1/8 ram | 291 - * | | / | | 292 - * +- end of ram ----+ / +----------------+ 293 - * | ... gap ... | / | | 294 - * | |/ | kasan | 295 - * +- shadow start --+ | zero | 296 - * | 1/8 addr space | | page | 297 - * +- shadow end -+ | mapping | 298 - * | ... gap ... |\ | (untracked) | 299 - * +- vmalloc area -+ \ | | 300 - * | vmalloc_size | \ | | 301 - * +- modules vaddr -+ \ +----------------+ 302 - * | 2Gb | \| unmapped | allocated per module 303 - * +-----------------+ +- shadow end ---+ 285 + * +- 0 -------------+ +- shadow start -+ 286 + * |1:1 ident mapping| /|1/8 of ident map| 287 + * | | / | | 288 + * +-end of ident map+ / +----------------+ 289 + * | ... gap ... 
| / | kasan | 290 + * | | / | zero page | 291 + * +- vmalloc area -+ / | mapping | 292 + * | vmalloc_size | / | (untracked) | 293 + * +- modules vaddr -+ / +----------------+ 294 + * | 2Gb |/ | unmapped | allocated per module 295 + * +- shadow start -+ +----------------+ 296 + * | 1/8 addr space | | zero pg mapping| (untracked) 297 + * +- shadow end ----+---------+- shadow end ---+ 304 298 * 305 299 * Current memory layout (KASAN_VMALLOC): 306 - * +- 0 -------------+ +- shadow start -+ 307 - * | 1:1 ram mapping | /| 1/8 ram | 308 - * | | / | | 309 - * +- end of ram ----+ / +----------------+ 310 - * | ... gap ... | / | kasan | 311 - * | |/ | zero | 312 - * +- shadow start --+ | page | 313 - * | 1/8 addr space | | mapping | 314 - * +- shadow end -+ | (untracked) | 315 - * | ... gap ... |\ | | 316 - * +- vmalloc area -+ \ +- vmalloc area -+ 317 - * | vmalloc_size | \ |shallow populate| 318 - * +- modules vaddr -+ \ +- modules area -+ 319 - * | 2Gb | \|shallow populate| 320 - * +-----------------+ +- shadow end ---+ 300 + * +- 0 -------------+ +- shadow start -+ 301 + * |1:1 ident mapping| /|1/8 of ident map| 302 + * | | / | | 303 + * +-end of ident map+ / +----------------+ 304 + * | ... gap ... 
| / | kasan zero page| (untracked) 305 + * | | / | mapping | 306 + * +- vmalloc area -+ / +----------------+ 307 + * | vmalloc_size | / |shallow populate| 308 + * +- modules vaddr -+ / +----------------+ 309 + * | 2Gb |/ |shallow populate| 310 + * +- shadow start -+ +----------------+ 311 + * | 1/8 addr space | | zero pg mapping| (untracked) 312 + * +- shadow end ----+---------+- shadow end ---+ 321 313 */ 322 314 /* populate kasan shadow (for identity mapping and zero page mapping) */ 323 - kasan_early_pgtable_populate(__sha(0), __sha(memsize), POPULATE_MAP); 315 + for_each_mem_detect_usable_block(i, &start, &end) 316 + kasan_early_pgtable_populate(__sha(start), __sha(end), POPULATE_MAP); 324 317 if (IS_ENABLED(CONFIG_KASAN_VMALLOC)) { 318 + untracked_end = VMALLOC_START; 325 319 /* shallowly populate kasan shadow for vmalloc and modules */ 326 320 kasan_early_pgtable_populate(__sha(VMALLOC_START), __sha(MODULES_END), 327 321 POPULATE_SHALLOW); 328 322 } 329 323 /* populate kasan shadow for untracked memory */ 330 - kasan_early_pgtable_populate(__sha(ident_map_size), 331 - IS_ENABLED(CONFIG_KASAN_VMALLOC) ? 
332 - __sha(VMALLOC_START) : 333 - __sha(MODULES_VADDR), 324 + kasan_early_pgtable_populate(__sha(ident_map_size), __sha(untracked_end), 334 325 POPULATE_ZERO_SHADOW); 335 326 kasan_early_pgtable_populate(__sha(MODULES_END), __sha(_REGION1_SIZE), 336 327 POPULATE_ZERO_SHADOW); 337 - /* memory allocated for identity mapping structs will be freed later */ 338 - pgalloc_freeable = pgalloc_pos; 339 - /* populate identity mapping */ 340 - kasan_early_pgtable_populate(0, memsize, POPULATE_ONE2ONE); 341 - kasan_set_pgd(early_pg_dir, _ASCE_TYPE_REGION2); 342 - kasan_enable_dat(); 343 328 /* enable kasan */ 344 329 init_task.kasan_depth = 0; 345 - memblock_reserve(pgalloc_pos, memsize - pgalloc_pos); 346 330 sclp_early_printk("KernelAddressSanitizer initialized\n"); 347 - } 348 - 349 - void __init kasan_copy_shadow_mapping(void) 350 - { 351 - /* 352 - * At this point we are still running on early pages setup early_pg_dir, 353 - * while swapper_pg_dir has just been initialized with identity mapping. 354 - * Carry over shadow memory region from early_pg_dir to swapper_pg_dir. 355 - */ 356 - 357 - pgd_t *pg_dir_src; 358 - pgd_t *pg_dir_dst; 359 - p4d_t *p4_dir_src; 360 - p4d_t *p4_dir_dst; 361 - 362 - pg_dir_src = pgd_offset_raw(early_pg_dir, KASAN_SHADOW_START); 363 - pg_dir_dst = pgd_offset_raw(init_mm.pgd, KASAN_SHADOW_START); 364 - p4_dir_src = p4d_offset(pg_dir_src, KASAN_SHADOW_START); 365 - p4_dir_dst = p4d_offset(pg_dir_dst, KASAN_SHADOW_START); 366 - memcpy(p4_dir_dst, p4_dir_src, 367 - (KASAN_SHADOW_SIZE >> P4D_SHIFT) * sizeof(p4d_t)); 368 - } 369 - 370 - void __init kasan_free_early_identity(void) 371 - { 372 - memblock_phys_free(pgalloc_pos, pgalloc_freeable - pgalloc_pos); 373 331 }
+8 -20
arch/s390/mm/maccess.c
···
  #include <asm/maccess.h>

  unsigned long __bootdata_preserved(__memcpy_real_area);
- static __ro_after_init pte_t *memcpy_real_ptep;
+ pte_t *__bootdata_preserved(memcpy_real_ptep);
  static DEFINE_MUTEX(memcpy_real_mutex);

  static notrace long s390_kernel_write_odd(void *dst, const void *src, size_t size)
···
      long copied;

      spin_lock_irqsave(&s390_kernel_write_lock, flags);
-     if (!(flags & PSW_MASK_DAT)) {
-         memcpy(dst, src, size);
-     } else {
-         while (size) {
-             copied = s390_kernel_write_odd(tmp, src, size);
-             tmp += copied;
-             src += copied;
-             size -= copied;
-         }
+     while (size) {
+         copied = s390_kernel_write_odd(tmp, src, size);
+         tmp += copied;
+         src += copied;
+         size -= copied;
      }
      spin_unlock_irqrestore(&s390_kernel_write_lock, flags);

      return dst;
- }
-
- void __init memcpy_real_init(void)
- {
-     memcpy_real_ptep = vmem_get_alloc_pte(__memcpy_real_area, true);
-     if (!memcpy_real_ptep)
-         panic("Couldn't setup memcpy real area");
  }

  size_t memcpy_real_iter(struct iov_iter *iter, unsigned long src, size_t count)
···
      void *ptr = phys_to_virt(addr);
      void *bounce = ptr;
      struct lowcore *abs_lc;
-     unsigned long flags;
      unsigned long size;
      int this_cpu, cpu;
···
          goto out;
      size = PAGE_SIZE - (addr & ~PAGE_MASK);
      if (addr < sizeof(struct lowcore)) {
-         abs_lc = get_abs_lowcore(&flags);
+         abs_lc = get_abs_lowcore();
          ptr = (void *)abs_lc + addr;
          memcpy(bounce, ptr, size);
-         put_abs_lowcore(abs_lc, flags);
+         put_abs_lowcore(abs_lc);
      } else if (cpu == this_cpu) {
          ptr = (void *)(addr - virt_to_phys(lowcore_ptr[cpu]));
          memcpy(bounce, ptr, size);
+25
arch/s390/mm/pgtable.c
···
  }
  EXPORT_SYMBOL(ptep_xchg_direct);

+ /*
+  * Caller must check that new PTE only differs in _PAGE_PROTECT HW bit, so that
+  * RDP can be used instead of IPTE. See also comments at pte_allow_rdp().
+  */
+ void ptep_reset_dat_prot(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
+                          pte_t new)
+ {
+     preempt_disable();
+     atomic_inc(&mm->context.flush_count);
+     if (cpumask_equal(mm_cpumask(mm), cpumask_of(smp_processor_id())))
+         __ptep_rdp(addr, ptep, 0, 0, 1);
+     else
+         __ptep_rdp(addr, ptep, 0, 0, 0);
+     /*
+      * PTE is not invalidated by RDP, only _PAGE_PROTECT is cleared. That
+      * means it is still valid and active, and must not be changed according
+      * to the architecture. But writing a new value that only differs in SW
+      * bits is allowed.
+      */
+     set_pte(ptep, new);
+     atomic_dec(&mm->context.flush_count);
+     preempt_enable();
+ }
+ EXPORT_SYMBOL(ptep_reset_dat_prot);
+
  pte_t ptep_xchg_lazy(struct mm_struct *mm, unsigned long addr,
                       pte_t *ptep, pte_t new)
  {
+83 -20
arch/s390/mm/vmem.c
···
  #include <linux/list.h>
  #include <linux/hugetlb.h>
  #include <linux/slab.h>
+ #include <linux/sort.h>
  #include <asm/cacheflush.h>
  #include <asm/nospec-branch.h>
  #include <asm/pgalloc.h>
···
      /* Don't mess with any tables not fully in 1:1 mapping & vmemmap area */
      if (end > VMALLOC_START)
          return;
- #ifdef CONFIG_KASAN
-     if (start < KASAN_SHADOW_END && KASAN_SHADOW_START > end)
-         return;
- #endif
+
      pmd = pmd_offset(pud, start);
      for (i = 0; i < PTRS_PER_PMD; i++, pmd++)
          if (!pmd_none(*pmd))
···
      /* Don't mess with any tables not fully in 1:1 mapping & vmemmap area */
      if (end > VMALLOC_START)
          return;
- #ifdef CONFIG_KASAN
-     if (start < KASAN_SHADOW_END && KASAN_SHADOW_START > end)
-         return;
- #endif

      pud = pud_offset(p4d, start);
      for (i = 0; i < PTRS_PER_PUD; i++, pud++) {
···
      /* Don't mess with any tables not fully in 1:1 mapping & vmemmap area */
      if (end > VMALLOC_START)
          return;
- #ifdef CONFIG_KASAN
-     if (start < KASAN_SHADOW_END && KASAN_SHADOW_START > end)
-         return;
- #endif

      p4d = p4d_offset(pgd, start);
      for (i = 0; i < PTRS_PER_P4D; i++, p4d++) {
···
      mutex_unlock(&vmem_mutex);
  }

+ static int __init memblock_region_cmp(const void *a, const void *b)
+ {
+     const struct memblock_region *r1 = a;
+     const struct memblock_region *r2 = b;
+
+     if (r1->base < r2->base)
+         return -1;
+     if (r1->base > r2->base)
+         return 1;
+     return 0;
+ }
+
+ static void __init memblock_region_swap(void *a, void *b, int size)
+ {
+     swap(*(struct memblock_region *)a, *(struct memblock_region *)b);
+ }
+
  /*
   * map whole physical memory to virtual memory (identity mapping)
   * we reserve enough space in the vmalloc area for vmemmap to hotplug
···
   */
  void __init vmem_map_init(void)
  {
+     struct memblock_region memory_rwx_regions[] = {
+         {
+             .base = 0,
+             .size = sizeof(struct lowcore),
+             .flags = MEMBLOCK_NONE,
+ #ifdef CONFIG_NUMA
+             .nid = NUMA_NO_NODE,
+ #endif
+         },
+         {
+             .base = __pa(_stext),
+             .size = _etext - _stext,
+             .flags = MEMBLOCK_NONE,
+ #ifdef CONFIG_NUMA
+             .nid = NUMA_NO_NODE,
+ #endif
+         },
+         {
+             .base = __pa(_sinittext),
+             .size = _einittext - _sinittext,
+             .flags = MEMBLOCK_NONE,
+ #ifdef CONFIG_NUMA
+             .nid = NUMA_NO_NODE,
+ #endif
+         },
+         {
+             .base = __stext_amode31,
+             .size = __etext_amode31 - __stext_amode31,
+             .flags = MEMBLOCK_NONE,
+ #ifdef CONFIG_NUMA
+             .nid = NUMA_NO_NODE,
+ #endif
+         },
+     };
+     struct memblock_type memory_rwx = {
+         .regions = memory_rwx_regions,
+         .cnt = ARRAY_SIZE(memory_rwx_regions),
+         .max = ARRAY_SIZE(memory_rwx_regions),
+     };
      phys_addr_t base, end;
      u64 i;

-     for_each_mem_range(i, &base, &end)
-         vmem_add_range(base, end - base);
+     /*
+      * Set RW+NX attribute on all memory, except regions enumerated with
+      * memory_rwx exclude type. These regions need different attributes,
+      * which are enforced afterwards.
+      *
+      * __for_each_mem_range() iterate and exclude types should be sorted.
+      * The relative location of _stext and _sinittext is hardcoded in the
+      * linker script. However a location of __stext_amode31 and the kernel
+      * image itself are chosen dynamically. Thus, sort the exclude type.
+      */
+     sort(&memory_rwx_regions,
+          ARRAY_SIZE(memory_rwx_regions), sizeof(memory_rwx_regions[0]),
+          memblock_region_cmp, memblock_region_swap);
+     __for_each_mem_range(i, &memblock.memory, &memory_rwx,
+                          NUMA_NO_NODE, MEMBLOCK_NONE, &base, &end, NULL) {
+         __set_memory((unsigned long)__va(base),
+                      (end - base) >> PAGE_SHIFT,
+                      SET_MEMORY_RW | SET_MEMORY_NX);
+     }
+
      __set_memory((unsigned long)_stext,
                   (unsigned long)(_etext - _stext) >> PAGE_SHIFT,
                   SET_MEMORY_RO | SET_MEMORY_X);
···
      __set_memory((unsigned long)_sinittext,
                   (unsigned long)(_einittext - _sinittext) >> PAGE_SHIFT,
                   SET_MEMORY_RO | SET_MEMORY_X);
-     __set_memory(__stext_amode31, (__etext_amode31 - __stext_amode31) >> PAGE_SHIFT,
+     __set_memory(__stext_amode31,
+                  (__etext_amode31 - __stext_amode31) >> PAGE_SHIFT,
                   SET_MEMORY_RO | SET_MEMORY_X);

-     /* lowcore requires 4k mapping for real addresses / prefixing */
-     set_memory_4k(0, LC_PAGES);
-
      /* lowcore must be executable for LPSWE */
-     if (!static_key_enabled(&cpu_has_bear))
-         set_memory_x(0, 1);
+     if (static_key_enabled(&cpu_has_bear))
+         set_memory_nx(0, 1);
+     set_memory_nx(PAGE_SIZE, 1);

      pr_info("Write protected kernel read-only data: %luk\n",
              (unsigned long)(__end_rodata - _stext) >> 10);
+2 -9
drivers/s390/char/Kconfig
···
  config TN3270
      def_tristate y
      prompt "Support for locally attached 3270 terminals"
-     depends on CCW
+     depends on CCW && TTY
      help
        Include support for IBM 3270 terminals.
-
- config TN3270_TTY
-     def_tristate y
-     prompt "Support for tty input/output on 3270 terminals"
-     depends on TN3270 && TTY
-     help
-       Include support for using an IBM 3270 terminal as a Linux tty.

  config TN3270_FS
      def_tristate m
···
  config TN3270_CONSOLE
      def_bool y
      prompt "Support for console on 3270 terminal"
-     depends on TN3270=y && TN3270_TTY=y
+     depends on TN3270=y
      help
        Include support for using an IBM 3270 terminal as a Linux system
        console. Available only if 3270 support is compiled in statically.
+1 -3
drivers/s390/char/Makefile
···
      sclp_cmd.o sclp_config.o sclp_cpi_sys.o sclp_ocf.o sclp_ctl.o \
      sclp_early.o sclp_early_core.o sclp_sd.o

- obj-$(CONFIG_TN3270) += raw3270.o
- obj-$(CONFIG_TN3270_CONSOLE) += con3270.o
- obj-$(CONFIG_TN3270_TTY) += tty3270.o
+ obj-$(CONFIG_TN3270) += raw3270.o con3270.o
  obj-$(CONFIG_TN3270_FS) += fs3270.o

  obj-$(CONFIG_TN3215) += con3215.o
+1978 -437
drivers/s390/char/con3270.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 /* 3 - * IBM/3270 Driver - console view. 3 + * IBM/3270 Driver - tty functions. 4 4 * 5 - * Author(s): 6 - * Original 3270 Code for 2.4 written by Richard Hitt (UTS Global) 7 - * Rewritten for 2.5 by Martin Schwidefsky <schwidefsky@de.ibm.com> 8 - * Copyright IBM Corp. 2003, 2009 5 + * Author(s): 6 + * Original 3270 Code for 2.4 written by Richard Hitt (UTS Global) 7 + * Rewritten for 2.5 by Martin Schwidefsky <schwidefsky@de.ibm.com> 8 + * -- Copyright IBM Corp. 2003 9 9 */ 10 10 11 11 #include <linux/module.h> 12 - #include <linux/console.h> 13 - #include <linux/init.h> 14 - #include <linux/interrupt.h> 15 - #include <linux/list.h> 16 - #include <linux/panic_notifier.h> 17 12 #include <linux/types.h> 18 - #include <linux/slab.h> 19 - #include <linux/err.h> 13 + #include <linux/kdev_t.h> 14 + #include <linux/tty.h> 15 + #include <linux/vt_kern.h> 16 + #include <linux/init.h> 17 + #include <linux/console.h> 18 + #include <linux/interrupt.h> 19 + #include <linux/workqueue.h> 20 + #include <linux/panic_notifier.h> 20 21 #include <linux/reboot.h> 22 + #include <linux/slab.h> 23 + #include <linux/memblock.h> 24 + #include <linux/compat.h> 21 25 22 26 #include <asm/ccwdev.h> 23 27 #include <asm/cio.h> 24 - #include <asm/cpcmd.h> 25 28 #include <asm/ebcdic.h> 29 + #include <asm/cpcmd.h> 30 + #include <linux/uaccess.h> 26 31 27 32 #include "raw3270.h" 28 - #include "tty3270.h" 29 - #include "ctrlchar.h" 33 + #include "keyboard.h" 30 34 31 - #define CON3270_OUTPUT_BUFFER_SIZE 1024 32 - #define CON3270_STRING_PAGES 4 35 + #define TTY3270_CHAR_BUF_SIZE 256 36 + #define TTY3270_OUTPUT_BUFFER_SIZE 4096 37 + #define TTY3270_SCREEN_PAGES 8 /* has to be power-of-two */ 38 + #define TTY3270_RECALL_SIZE 16 /* has to be power-of-two */ 39 + #define TTY3270_STATUS_AREA_SIZE 40 33 40 34 - static struct raw3270_fn con3270_fn; 41 + static struct tty_driver *tty3270_driver; 42 + static int tty3270_max_index; 43 + static struct raw3270_fn 
tty3270_fn; 35 44 36 - static bool auto_update = true; 37 - module_param(auto_update, bool, 0); 45 + #define TTY3270_HIGHLIGHT_BLINK 1 46 + #define TTY3270_HIGHLIGHT_REVERSE 2 47 + #define TTY3270_HIGHLIGHT_UNDERSCORE 4 38 48 39 - /* 40 - * Main 3270 console view data structure. 41 - */ 42 - struct con3270 { 43 - struct raw3270_view view; 44 - struct list_head freemem; /* list of free memory for strings. */ 45 - 46 - /* Output stuff. */ 47 - struct list_head lines; /* list of lines. */ 48 - struct list_head update; /* list of lines to update. */ 49 - int line_nr; /* line number for next update. */ 50 - int nr_lines; /* # lines in list. */ 51 - int nr_up; /* # lines up in history. */ 52 - unsigned long update_flags; /* Update indication bits. */ 53 - struct string *cline; /* current output line. */ 54 - struct string *status; /* last line of display. */ 55 - struct raw3270_request *write; /* single write request. */ 56 - struct timer_list timer; 57 - 58 - /* Input stuff. */ 59 - struct string *input; /* input string for read request. */ 60 - struct raw3270_request *read; /* single read request. */ 61 - struct raw3270_request *kreset; /* single keyboard reset request. */ 62 - struct tasklet_struct readlet; /* tasklet to issue read request. */ 49 + struct tty3270_attribute { 50 + unsigned char alternate_charset:1; /* Graphics charset */ 51 + unsigned char highlight:3; /* Blink/reverse/underscore */ 52 + unsigned char f_color:4; /* Foreground color */ 53 + unsigned char b_color:4; /* Background color */ 63 54 }; 64 55 65 - static struct con3270 *condev; 56 + struct tty3270_cell { 57 + unsigned char character; 58 + struct tty3270_attribute attributes; 59 + }; 66 60 67 - /* con3270->update_flags. See con3270_update for details. */ 68 - #define CON_UPDATE_ERASE 1 /* Use EWRITEA instead of WRITE. */ 69 - #define CON_UPDATE_LIST 2 /* Update lines in tty3270->update. */ 70 - #define CON_UPDATE_STATUS 4 /* Update status line. 
*/ 71 - #define CON_UPDATE_ALL 8 /* Recreate screen. */ 61 + struct tty3270_line { 62 + struct tty3270_cell *cells; 63 + int len; 64 + int dirty; 65 + }; 72 66 73 - static void con3270_update(struct timer_list *); 67 + static const unsigned char sfq_read_partition[] = { 68 + 0x00, 0x07, 0x01, 0xff, 0x03, 0x00, 0x81 69 + }; 70 + 71 + #define ESCAPE_NPAR 8 72 + 73 + /* 74 + * The main tty view data structure. 75 + * FIXME: 76 + * 1) describe line orientation & lines list concept against screen 77 + * 2) describe conversion of screen to lines 78 + * 3) describe line format. 79 + */ 80 + struct tty3270 { 81 + struct raw3270_view view; 82 + struct tty_port port; 83 + 84 + /* Output stuff. */ 85 + unsigned char wcc; /* Write control character. */ 86 + int nr_up; /* # lines up in history. */ 87 + unsigned long update_flags; /* Update indication bits. */ 88 + struct raw3270_request *write; /* Single write request. */ 89 + struct timer_list timer; /* Output delay timer. */ 90 + char *converted_line; /* RAW 3270 data stream */ 91 + unsigned int line_view_start; /* Start of visible area */ 92 + unsigned int line_write_start; /* current write position */ 93 + unsigned int oops_line; /* line counter used when print oops */ 94 + 95 + /* Current tty screen. */ 96 + unsigned int cx, cy; /* Current output position. */ 97 + struct tty3270_attribute attributes; 98 + struct tty3270_attribute saved_attributes; 99 + int allocated_lines; 100 + struct tty3270_line *screen; 101 + 102 + /* Input stuff. */ 103 + char *prompt; /* Output string for input area. */ 104 + char *input; /* Input string for read request. */ 105 + struct raw3270_request *read; /* Single read request. */ 106 + struct raw3270_request *kreset; /* Single keyboard reset request. */ 107 + struct raw3270_request *readpartreq; 108 + unsigned char inattr; /* Visible/invisible input. */ 109 + int throttle, attn; /* tty throttle/unthrottle. */ 110 + struct tasklet_struct readlet; /* Tasklet to issue read request. 
*/ 111 + struct tasklet_struct hanglet; /* Tasklet to hang up the tty. */ 112 + struct kbd_data *kbd; /* key_maps stuff. */ 113 + 114 + /* Escape sequence parsing. */ 115 + int esc_state, esc_ques, esc_npar; 116 + int esc_par[ESCAPE_NPAR]; 117 + unsigned int saved_cx, saved_cy; 118 + 119 + /* Command recalling. */ 120 + char **rcl_lines; /* Array of recallable lines */ 121 + int rcl_write_index; /* Write index of recallable items */ 122 + int rcl_read_index; /* Read index of recallable items */ 123 + 124 + /* Character array for put_char/flush_chars. */ 125 + unsigned int char_count; 126 + char char_buf[TTY3270_CHAR_BUF_SIZE]; 127 + }; 128 + 129 + /* tty3270->update_flags. See tty3270_update for details. */ 130 + #define TTY_UPDATE_INPUT 0x1 /* Update input line. */ 131 + #define TTY_UPDATE_STATUS 0x2 /* Update status line. */ 132 + #define TTY_UPDATE_LINES 0x4 /* Update visible screen lines */ 133 + #define TTY_UPDATE_ALL 0x7 /* Recreate screen. */ 134 + 135 + #define TTY3270_INPUT_AREA_ROWS 2 74 136 75 137 /* 76 138 * Setup timeout for a device. On timeout trigger an update. 
77 139 */ 78 - static void con3270_set_timer(struct con3270 *cp, int expires) 140 + static void tty3270_set_timer(struct tty3270 *tp, int expires) 79 141 { 80 - if (expires == 0) 81 - del_timer(&cp->timer); 82 - else 83 - mod_timer(&cp->timer, jiffies + expires); 142 + mod_timer(&tp->timer, jiffies + expires); 143 + } 144 + 145 + static int tty3270_tty_rows(struct tty3270 *tp) 146 + { 147 + return tp->view.rows - TTY3270_INPUT_AREA_ROWS; 148 + } 149 + 150 + static char *tty3270_add_ba(struct tty3270 *tp, char *cp, char order, int x, int y) 151 + { 152 + *cp++ = order; 153 + raw3270_buffer_address(tp->view.dev, cp, x, y); 154 + return cp + 2; 155 + } 156 + 157 + static char *tty3270_add_ra(struct tty3270 *tp, char *cp, int x, int y, char c) 158 + { 159 + cp = tty3270_add_ba(tp, cp, TO_RA, x, y); 160 + *cp++ = c; 161 + return cp; 162 + } 163 + 164 + static char *tty3270_add_sa(struct tty3270 *tp, char *cp, char attr, char value) 165 + { 166 + *cp++ = TO_SA; 167 + *cp++ = attr; 168 + *cp++ = value; 169 + return cp; 170 + } 171 + 172 + static char *tty3270_add_ge(struct tty3270 *tp, char *cp, char c) 173 + { 174 + *cp++ = TO_GE; 175 + *cp++ = c; 176 + return cp; 177 + } 178 + 179 + static char *tty3270_add_sf(struct tty3270 *tp, char *cp, char type) 180 + { 181 + *cp++ = TO_SF; 182 + *cp++ = type; 183 + return cp; 184 + } 185 + 186 + static int tty3270_line_increment(struct tty3270 *tp, unsigned int line, unsigned int incr) 187 + { 188 + return (line + incr) & (tp->allocated_lines - 1); 189 + } 190 + 191 + static struct tty3270_line *tty3270_get_write_line(struct tty3270 *tp, unsigned int num) 192 + { 193 + return tp->screen + tty3270_line_increment(tp, tp->line_write_start, num); 194 + } 195 + 196 + static struct tty3270_line *tty3270_get_view_line(struct tty3270 *tp, unsigned int num) 197 + { 198 + return tp->screen + tty3270_line_increment(tp, tp->line_view_start, num - tp->nr_up); 199 + } 200 + 201 + static int tty3270_input_size(int cols) 202 + { 203 + return cols 
* 2 - 11; 204 + } 205 + 206 + static void tty3270_update_prompt(struct tty3270 *tp, char *input) 207 + { 208 + strcpy(tp->prompt, input); 209 + tp->update_flags |= TTY_UPDATE_INPUT; 210 + tty3270_set_timer(tp, 1); 211 + } 212 + 213 + /* 214 + * The input line are the two last lines of the screen. 215 + */ 216 + static int tty3270_add_prompt(struct tty3270 *tp) 217 + { 218 + int count = 0; 219 + char *cp; 220 + 221 + cp = tp->converted_line; 222 + cp = tty3270_add_ba(tp, cp, TO_SBA, 0, -2); 223 + *cp++ = tp->view.ascebc['>']; 224 + 225 + if (*tp->prompt) { 226 + cp = tty3270_add_sf(tp, cp, TF_INMDT); 227 + count = min_t(int, strlen(tp->prompt), 228 + tp->view.cols * 2 - TTY3270_STATUS_AREA_SIZE - 2); 229 + memcpy(cp, tp->prompt, count); 230 + cp += count; 231 + } else { 232 + cp = tty3270_add_sf(tp, cp, tp->inattr); 233 + } 234 + *cp++ = TO_IC; 235 + /* Clear to end of input line. */ 236 + if (count < tp->view.cols * 2 - 11) 237 + cp = tty3270_add_ra(tp, cp, -TTY3270_STATUS_AREA_SIZE, -1, 0); 238 + return cp - tp->converted_line; 239 + } 240 + 241 + static char *tty3270_ebcdic_convert(struct tty3270 *tp, char *d, char *s) 242 + { 243 + while (*s) 244 + *d++ = tp->view.ascebc[(int)*s++]; 245 + return d; 84 246 } 85 247 86 248 /* 87 249 * The status line is the last line of the screen. It shows the string 88 - * "console view" in the lower left corner and "Running"/"More..."/"Holding" 89 - * in the lower right corner of the screen. 250 + * "Running"/"History X" in the lower right corner of the screen. 90 251 */ 91 - static void 92 - con3270_update_status(struct con3270 *cp) 252 + static int tty3270_add_status(struct tty3270 *tp) 93 253 { 94 - char *str; 254 + char *cp = tp->converted_line; 255 + int len; 95 256 96 - str = (cp->nr_up != 0) ? 
"History" : "Running"; 97 - memcpy(cp->status->string + 24, str, 7); 98 - codepage_convert(cp->view.ascebc, cp->status->string + 24, 7); 99 - cp->update_flags |= CON_UPDATE_STATUS; 257 + cp = tty3270_add_ba(tp, cp, TO_SBA, -TTY3270_STATUS_AREA_SIZE, -1); 258 + cp = tty3270_add_sf(tp, cp, TF_LOG); 259 + cp = tty3270_add_sa(tp, cp, TAT_FGCOLOR, TAC_GREEN); 260 + cp = tty3270_ebcdic_convert(tp, cp, " 7"); 261 + cp = tty3270_add_sa(tp, cp, TAT_EXTHI, TAX_REVER); 262 + cp = tty3270_ebcdic_convert(tp, cp, "PrevPg"); 263 + cp = tty3270_add_sa(tp, cp, TAT_EXTHI, TAX_RESET); 264 + cp = tty3270_ebcdic_convert(tp, cp, " 8"); 265 + cp = tty3270_add_sa(tp, cp, TAT_EXTHI, TAX_REVER); 266 + cp = tty3270_ebcdic_convert(tp, cp, "NextPg"); 267 + cp = tty3270_add_sa(tp, cp, TAT_EXTHI, TAX_RESET); 268 + cp = tty3270_ebcdic_convert(tp, cp, " 12"); 269 + cp = tty3270_add_sa(tp, cp, TAT_EXTHI, TAX_REVER); 270 + cp = tty3270_ebcdic_convert(tp, cp, "Recall"); 271 + cp = tty3270_add_sa(tp, cp, TAT_EXTHI, TAX_RESET); 272 + cp = tty3270_ebcdic_convert(tp, cp, " "); 273 + if (tp->nr_up) { 274 + len = sprintf(cp, "History %d", -tp->nr_up); 275 + codepage_convert(tp->view.ascebc, cp, len); 276 + cp += len; 277 + } else { 278 + cp = tty3270_ebcdic_convert(tp, cp, oops_in_progress ? 
"Crashed" : "Running"); 279 + } 280 + cp = tty3270_add_sf(tp, cp, TF_LOG); 281 + cp = tty3270_add_sa(tp, cp, TAT_FGCOLOR, TAC_RESET); 282 + return cp - (char *)tp->converted_line; 100 283 } 101 284 102 - static void 103 - con3270_create_status(struct con3270 *cp) 285 + static void tty3270_blank_screen(struct tty3270 *tp) 104 286 { 105 - static const unsigned char blueprint[] = 106 - { TO_SBA, 0, 0, TO_SF,TF_LOG,TO_SA,TAT_COLOR, TAC_GREEN, 107 - 'c','o','n','s','o','l','e',' ','v','i','e','w', 108 - TO_RA,0,0,0,'R','u','n','n','i','n','g',TO_SF,TF_LOG }; 287 + struct tty3270_line *line; 288 + int i; 109 289 110 - cp->status = alloc_string(&cp->freemem, sizeof(blueprint)); 111 - /* Copy blueprint to status line */ 112 - memcpy(cp->status->string, blueprint, sizeof(blueprint)); 113 - /* Set TO_RA addresses. */ 114 - raw3270_buffer_address(cp->view.dev, cp->status->string + 1, 115 - cp->view.cols * (cp->view.rows - 1)); 116 - raw3270_buffer_address(cp->view.dev, cp->status->string + 21, 117 - cp->view.cols * cp->view.rows - 8); 118 - /* Convert strings to ebcdic. */ 119 - codepage_convert(cp->view.ascebc, cp->status->string + 8, 12); 120 - codepage_convert(cp->view.ascebc, cp->status->string + 24, 7); 290 + for (i = 0; i < tty3270_tty_rows(tp); i++) { 291 + line = tty3270_get_write_line(tp, i); 292 + line->len = 0; 293 + line->dirty = 1; 294 + } 295 + tp->nr_up = 0; 121 296 } 122 297 123 298 /* 124 - * Set output offsets to 3270 datastream fragment of a console string. 299 + * Write request completion callback. 125 300 */ 126 - static void 127 - con3270_update_string(struct con3270 *cp, struct string *s, int nr) 301 + static void tty3270_write_callback(struct raw3270_request *rq, void *data) 128 302 { 129 - if (s->len < 4) { 130 - /* This indicates a bug, but printing a warning would 131 - * cause a deadlock. */ 132 - return; 303 + struct tty3270 *tp = container_of(rq->view, struct tty3270, view); 304 + 305 + if (rq->rc != 0) { 306 + /* Write wasn't successful. 
Refresh all. */ 307 + tp->update_flags = TTY_UPDATE_ALL; 308 + tty3270_set_timer(tp, 1); 133 309 } 134 - if (s->string[s->len - 4] != TO_RA) 135 - return; 136 - raw3270_buffer_address(cp->view.dev, s->string + s->len - 3, 137 - cp->view.cols * (nr + 1)); 138 - } 139 - 140 - /* 141 - * Rebuild update list to print all lines. 142 - */ 143 - static void 144 - con3270_rebuild_update(struct con3270 *cp) 145 - { 146 - struct string *s, *n; 147 - int nr; 148 - 149 - /* 150 - * Throw away update list and create a new one, 151 - * containing all lines that will fit on the screen. 152 - */ 153 - list_for_each_entry_safe(s, n, &cp->update, update) 154 - list_del_init(&s->update); 155 - nr = cp->view.rows - 2 + cp->nr_up; 156 - list_for_each_entry_reverse(s, &cp->lines, list) { 157 - if (nr < cp->view.rows - 1) 158 - list_add(&s->update, &cp->update); 159 - if (--nr < 0) 160 - break; 161 - } 162 - cp->line_nr = 0; 163 - cp->update_flags |= CON_UPDATE_LIST; 164 - } 165 - 166 - /* 167 - * Alloc string for size bytes. Free strings from history if necessary. 168 - */ 169 - static struct string * 170 - con3270_alloc_string(struct con3270 *cp, size_t size) 171 - { 172 - struct string *s, *n; 173 - 174 - s = alloc_string(&cp->freemem, size); 175 - if (s) 176 - return s; 177 - list_for_each_entry_safe(s, n, &cp->lines, list) { 178 - list_del(&s->list); 179 - if (!list_empty(&s->update)) 180 - list_del(&s->update); 181 - cp->nr_lines--; 182 - if (free_string(&cp->freemem, s) >= size) 183 - break; 184 - } 185 - s = alloc_string(&cp->freemem, size); 186 - BUG_ON(!s); 187 - if (cp->nr_up != 0 && cp->nr_up + cp->view.rows > cp->nr_lines) { 188 - cp->nr_up = cp->nr_lines - cp->view.rows + 1; 189 - con3270_rebuild_update(cp); 190 - con3270_update_status(cp); 191 - } 192 - return s; 193 - } 194 - 195 - /* 196 - * Write completion callback. 
197 - */ 198 - static void 199 - con3270_write_callback(struct raw3270_request *rq, void *data) 200 - { 201 310 raw3270_request_reset(rq); 202 - xchg(&((struct con3270 *) rq->view)->write, rq); 311 + xchg(&tp->write, rq); 312 + } 313 + 314 + static int tty3270_required_length(struct tty3270 *tp, struct tty3270_line *line) 315 + { 316 + unsigned char f_color, b_color, highlight; 317 + struct tty3270_cell *cell; 318 + int i, flen = 3; /* Prefix (TO_SBA). */ 319 + 320 + flen += line->len; 321 + highlight = 0; 322 + f_color = TAC_RESET; 323 + b_color = TAC_RESET; 324 + 325 + for (i = 0, cell = line->cells; i < line->len; i++, cell++) { 326 + if (cell->attributes.highlight != highlight) { 327 + flen += 3; /* TO_SA to switch highlight. */ 328 + highlight = cell->attributes.highlight; 329 + } 330 + if (cell->attributes.f_color != f_color) { 331 + flen += 3; /* TO_SA to switch color. */ 332 + f_color = cell->attributes.f_color; 333 + } 334 + if (cell->attributes.b_color != b_color) { 335 + flen += 3; /* TO_SA to switch color. */ 336 + b_color = cell->attributes.b_color; 337 + } 338 + if (cell->attributes.alternate_charset) 339 + flen += 1; /* TO_GE to switch to graphics extensions */ 340 + } 341 + if (highlight) 342 + flen += 3; /* TO_SA to reset highlight. */ 343 + if (f_color != TAC_RESET) 344 + flen += 3; /* TO_SA to reset color. */ 345 + if (b_color != TAC_RESET) 346 + flen += 3; /* TO_SA to reset color. */ 347 + if (line->len < tp->view.cols) 348 + flen += 4; /* Postfix (TO_RA). 
*/ 349 + 350 + return flen; 351 + } 352 + 353 + static char *tty3270_add_reset_attributes(struct tty3270 *tp, struct tty3270_line *line, 354 + char *cp, struct tty3270_attribute *attr, int lineno) 355 + { 356 + if (attr->highlight) 357 + cp = tty3270_add_sa(tp, cp, TAT_EXTHI, TAX_RESET); 358 + if (attr->f_color != TAC_RESET) 359 + cp = tty3270_add_sa(tp, cp, TAT_FGCOLOR, TAX_RESET); 360 + if (attr->b_color != TAC_RESET) 361 + cp = tty3270_add_sa(tp, cp, TAT_BGCOLOR, TAX_RESET); 362 + if (line->len < tp->view.cols) 363 + cp = tty3270_add_ra(tp, cp, 0, lineno + 1, 0); 364 + return cp; 365 + } 366 + 367 + static char tty3270_graphics_translate(struct tty3270 *tp, char ch) 368 + { 369 + switch (ch) { 370 + case 'q': /* - */ 371 + return 0xa2; 372 + case 'x': /* '|' */ 373 + return 0x85; 374 + case 'l': /* |- */ 375 + return 0xc5; 376 + case 't': /* |_ */ 377 + return 0xc6; 378 + case 'u': /* _| */ 379 + return 0xd6; 380 + case 'k': /* -| */ 381 + return 0xd5; 382 + case 'j': 383 + return 0xd4; 384 + case 'm': 385 + return 0xc4; 386 + case 'n': /* + */ 387 + return 0xd3; 388 + case 'v': 389 + return 0xc7; 390 + case 'w': 391 + return 0xd7; 392 + default: 393 + return ch; 394 + } 395 + } 396 + 397 + static char *tty3270_add_attributes(struct tty3270 *tp, struct tty3270_line *line, 398 + struct tty3270_attribute *attr, char *cp, int lineno) 399 + { 400 + const unsigned char colors[16] = { 401 + [0] = TAC_DEFAULT, 402 + [1] = TAC_RED, 403 + [2] = TAC_GREEN, 404 + [3] = TAC_YELLOW, 405 + [4] = TAC_BLUE, 406 + [5] = TAC_PINK, 407 + [6] = TAC_TURQ, 408 + [7] = TAC_WHITE, 409 + [9] = TAC_DEFAULT 410 + }; 411 + 412 + const unsigned char highlights[8] = { 413 + [TTY3270_HIGHLIGHT_BLINK] = TAX_BLINK, 414 + [TTY3270_HIGHLIGHT_REVERSE] = TAX_REVER, 415 + [TTY3270_HIGHLIGHT_UNDERSCORE] = TAX_UNDER, 416 + }; 417 + 418 + struct tty3270_cell *cell; 419 + int c, i; 420 + 421 + cp = tty3270_add_ba(tp, cp, TO_SBA, 0, lineno); 422 + 423 + for (i = 0, cell = line->cells; i < line->len; i++, 
cell++) { 424 + if (cell->attributes.highlight != attr->highlight) { 425 + attr->highlight = cell->attributes.highlight; 426 + cp = tty3270_add_sa(tp, cp, TAT_EXTHI, highlights[attr->highlight]); 427 + } 428 + if (cell->attributes.f_color != attr->f_color) { 429 + attr->f_color = cell->attributes.f_color; 430 + cp = tty3270_add_sa(tp, cp, TAT_FGCOLOR, colors[attr->f_color]); 431 + } 432 + if (cell->attributes.b_color != attr->b_color) { 433 + attr->b_color = cell->attributes.b_color; 434 + cp = tty3270_add_sa(tp, cp, TAT_BGCOLOR, colors[attr->b_color]); 435 + } 436 + c = cell->character; 437 + if (cell->attributes.alternate_charset) 438 + cp = tty3270_add_ge(tp, cp, tty3270_graphics_translate(tp, c)); 439 + else 440 + *cp++ = tp->view.ascebc[c]; 441 + } 442 + return cp; 443 + } 444 + 445 + static void tty3270_reset_attributes(struct tty3270_attribute *attr) 446 + { 447 + attr->highlight = TAX_RESET; 448 + attr->f_color = TAC_RESET; 449 + attr->b_color = TAC_RESET; 203 450 } 204 451 205 452 /* 206 - * Update console display. 453 + * Convert a tty3270_line to a 3270 data fragment usable for output. 207 454 */ 208 - static void 209 - con3270_update(struct timer_list *t) 455 + static unsigned int tty3270_convert_line(struct tty3270 *tp, struct tty3270_line *line, int lineno) 210 456 { 211 - struct con3270 *cp = from_timer(cp, t, timer); 457 + struct tty3270_attribute attr; 458 + int flen; 459 + char *cp; 460 + 461 + /* Determine how long the fragment will be. */ 462 + flen = tty3270_required_length(tp, line); 463 + if (flen > PAGE_SIZE) 464 + return 0; 465 + /* Write 3270 data fragment. 
*/ 466 + tty3270_reset_attributes(&attr); 467 + cp = tty3270_add_attributes(tp, line, &attr, tp->converted_line, lineno); 468 + cp = tty3270_add_reset_attributes(tp, line, cp, &attr, lineno); 469 + return cp - (char *)tp->converted_line; 470 + } 471 + 472 + static void tty3270_update_lines_visible(struct tty3270 *tp, struct raw3270_request *rq) 473 + { 474 + struct tty3270_line *line; 475 + int len, i; 476 + 477 + for (i = 0; i < tty3270_tty_rows(tp); i++) { 478 + line = tty3270_get_view_line(tp, i); 479 + if (!line->dirty) 480 + continue; 481 + len = tty3270_convert_line(tp, line, i); 482 + if (raw3270_request_add_data(rq, tp->converted_line, len)) 483 + break; 484 + line->dirty = 0; 485 + } 486 + if (i == tty3270_tty_rows(tp)) { 487 + for (i = 0; i < tp->allocated_lines; i++) 488 + tp->screen[i].dirty = 0; 489 + tp->update_flags &= ~TTY_UPDATE_LINES; 490 + } 491 + } 492 + 493 + static void tty3270_update_lines_all(struct tty3270 *tp, struct raw3270_request *rq) 494 + { 495 + struct tty3270_line *line; 496 + char buf[4]; 497 + int len, i; 498 + 499 + for (i = 0; i < tp->allocated_lines; i++) { 500 + line = tty3270_get_write_line(tp, i + tp->cy + 1); 501 + if (!line->dirty) 502 + continue; 503 + len = tty3270_convert_line(tp, line, tp->oops_line); 504 + if (raw3270_request_add_data(rq, tp->converted_line, len)) 505 + break; 506 + line->dirty = 0; 507 + if (++tp->oops_line >= tty3270_tty_rows(tp)) 508 + tp->oops_line = 0; 509 + } 510 + 511 + if (i == tp->allocated_lines) { 512 + if (tp->oops_line < tty3270_tty_rows(tp)) { 513 + tty3270_add_ra(tp, buf, 0, tty3270_tty_rows(tp), 0); 514 + if (raw3270_request_add_data(rq, buf, sizeof(buf))) 515 + return; 516 + } 517 + tp->update_flags &= ~TTY_UPDATE_LINES; 518 + } 519 + } 520 + 521 + /* 522 + * Update 3270 display. 
523 + */ 524 + static void tty3270_update(struct timer_list *t) 525 + { 526 + struct tty3270 *tp = from_timer(tp, t, timer); 212 527 struct raw3270_request *wrq; 213 - char wcc, prolog[6]; 214 - unsigned long flags; 215 - unsigned long updated; 216 - struct string *s, *n; 217 - int rc; 528 + u8 cmd = TC_WRITE; 529 + int rc, len; 218 530 219 - if (!auto_update && !raw3270_view_active(&cp->view)) 220 - return; 221 - if (cp->view.dev) 222 - raw3270_activate_view(&cp->view); 223 - 224 - wrq = xchg(&cp->write, 0); 531 + wrq = xchg(&tp->write, 0); 225 532 if (!wrq) { 226 - con3270_set_timer(cp, 1); 533 + tty3270_set_timer(tp, 1); 227 534 return; 228 535 } 229 536 230 - spin_lock_irqsave(&cp->view.lock, flags); 231 - updated = 0; 232 - if (cp->update_flags & CON_UPDATE_ALL) { 233 - con3270_rebuild_update(cp); 234 - con3270_update_status(cp); 235 - cp->update_flags = CON_UPDATE_ERASE | CON_UPDATE_LIST | 236 - CON_UPDATE_STATUS; 237 - } 238 - if (cp->update_flags & CON_UPDATE_ERASE) { 239 - /* Use erase write alternate to initialize display. */ 240 - raw3270_request_set_cmd(wrq, TC_EWRITEA); 241 - updated |= CON_UPDATE_ERASE; 242 - } else 243 - raw3270_request_set_cmd(wrq, TC_WRITE); 537 + spin_lock_irq(&tp->view.lock); 538 + if (tp->update_flags == TTY_UPDATE_ALL) 539 + cmd = TC_EWRITEA; 244 540 245 - wcc = TW_NONE; 246 - raw3270_request_add_data(wrq, &wcc, 1); 541 + raw3270_request_set_cmd(wrq, cmd); 542 + raw3270_request_add_data(wrq, &tp->wcc, 1); 543 + tp->wcc = TW_NONE; 247 544 248 545 /* 249 546 * Update status line. 
250 547 */ 251 - if (cp->update_flags & CON_UPDATE_STATUS) 252 - if (raw3270_request_add_data(wrq, cp->status->string, 253 - cp->status->len) == 0) 254 - updated |= CON_UPDATE_STATUS; 255 - 256 - if (cp->update_flags & CON_UPDATE_LIST) { 257 - prolog[0] = TO_SBA; 258 - prolog[3] = TO_SA; 259 - prolog[4] = TAT_COLOR; 260 - prolog[5] = TAC_TURQ; 261 - raw3270_buffer_address(cp->view.dev, prolog + 1, 262 - cp->view.cols * cp->line_nr); 263 - raw3270_request_add_data(wrq, prolog, 6); 264 - /* Write strings in the update list to the screen. */ 265 - list_for_each_entry_safe(s, n, &cp->update, update) { 266 - if (s != cp->cline) 267 - con3270_update_string(cp, s, cp->line_nr); 268 - if (raw3270_request_add_data(wrq, s->string, 269 - s->len) != 0) 270 - break; 271 - list_del_init(&s->update); 272 - if (s != cp->cline) 273 - cp->line_nr++; 274 - } 275 - if (list_empty(&cp->update)) 276 - updated |= CON_UPDATE_LIST; 548 + if (tp->update_flags & TTY_UPDATE_STATUS) { 549 + len = tty3270_add_status(tp); 550 + if (raw3270_request_add_data(wrq, tp->converted_line, len) == 0) 551 + tp->update_flags &= ~TTY_UPDATE_STATUS; 277 552 } 278 - wrq->callback = con3270_write_callback; 279 - rc = raw3270_start(&cp->view, wrq); 553 + 554 + /* 555 + * Write input line. 
556 + */ 557 + if (tp->update_flags & TTY_UPDATE_INPUT) { 558 + len = tty3270_add_prompt(tp); 559 + if (raw3270_request_add_data(wrq, tp->converted_line, len) == 0) 560 + tp->update_flags &= ~TTY_UPDATE_INPUT; 561 + } 562 + 563 + if (tp->update_flags & TTY_UPDATE_LINES) { 564 + if (oops_in_progress) 565 + tty3270_update_lines_all(tp, wrq); 566 + else 567 + tty3270_update_lines_visible(tp, wrq); 568 + } 569 + 570 + wrq->callback = tty3270_write_callback; 571 + rc = raw3270_start(&tp->view, wrq); 280 572 if (rc == 0) { 281 - cp->update_flags &= ~updated; 282 - if (cp->update_flags) 283 - con3270_set_timer(cp, 1); 573 + if (tp->update_flags) 574 + tty3270_set_timer(tp, 1); 284 575 } else { 285 576 raw3270_request_reset(wrq); 286 - xchg(&cp->write, wrq); 577 + xchg(&tp->write, wrq); 287 578 } 288 - spin_unlock_irqrestore(&cp->view.lock, flags); 579 + spin_unlock_irq(&tp->view.lock); 289 580 } 290 581 291 582 /* 292 - * Read tasklet. 583 + * Command recalling. 293 584 */ 294 - static void 295 - con3270_read_tasklet(unsigned long data) 585 + static void tty3270_rcl_add(struct tty3270 *tp, char *input, int len) 296 586 { 297 - static char kreset_data = TW_KR; 298 - struct raw3270_request *rrq; 299 - struct con3270 *cp; 300 - unsigned long flags; 301 - int nr_up, deactivate; 587 + char *p; 302 588 303 - rrq = (struct raw3270_request *)data; 304 - cp = (struct con3270 *) rrq->view; 305 - spin_lock_irqsave(&cp->view.lock, flags); 306 - nr_up = cp->nr_up; 307 - deactivate = 0; 308 - /* Check aid byte. */ 309 - switch (cp->input->string[0]) { 310 - case 0x7d: /* enter: jump to bottom. 
*/ 311 - nr_up = 0; 589 + if (len <= 0) 590 + return; 591 + p = tp->rcl_lines[tp->rcl_write_index++]; 592 + tp->rcl_write_index &= TTY3270_RECALL_SIZE - 1; 593 + memcpy(p, input, len); 594 + p[len] = '\0'; 595 + tp->rcl_read_index = tp->rcl_write_index; 596 + } 597 + 598 + static void tty3270_rcl_backward(struct kbd_data *kbd) 599 + { 600 + struct tty3270 *tp = container_of(kbd->port, struct tty3270, port); 601 + int i = 0; 602 + 603 + spin_lock_irq(&tp->view.lock); 604 + if (tp->inattr == TF_INPUT) { 605 + do { 606 + tp->rcl_read_index--; 607 + tp->rcl_read_index &= TTY3270_RECALL_SIZE - 1; 608 + } while (!*tp->rcl_lines[tp->rcl_read_index] && 609 + i++ < TTY3270_RECALL_SIZE - 1); 610 + tty3270_update_prompt(tp, tp->rcl_lines[tp->rcl_read_index]); 611 + } 612 + spin_unlock_irq(&tp->view.lock); 613 + } 614 + 615 + /* 616 + * Deactivate tty view. 617 + */ 618 + static void tty3270_exit_tty(struct kbd_data *kbd) 619 + { 620 + struct tty3270 *tp = container_of(kbd->port, struct tty3270, port); 621 + 622 + raw3270_deactivate_view(&tp->view); 623 + } 624 + 625 + static void tty3270_redraw(struct tty3270 *tp) 626 + { 627 + int i; 628 + 629 + for (i = 0; i < tty3270_tty_rows(tp); i++) 630 + tty3270_get_view_line(tp, i)->dirty = 1; 631 + tp->update_flags = TTY_UPDATE_ALL; 632 + tty3270_set_timer(tp, 1); 633 + } 634 + 635 + /* 636 + * Scroll forward in history. 637 + */ 638 + static void tty3270_scroll_forward(struct kbd_data *kbd) 639 + { 640 + struct tty3270 *tp = container_of(kbd->port, struct tty3270, port); 641 + 642 + spin_lock_irq(&tp->view.lock); 643 + 644 + if (tp->nr_up >= tty3270_tty_rows(tp)) 645 + tp->nr_up -= tty3270_tty_rows(tp) / 2; 646 + else 647 + tp->nr_up = 0; 648 + tty3270_redraw(tp); 649 + spin_unlock_irq(&tp->view.lock); 650 + } 651 + 652 + /* 653 + * Scroll backward in history. 
654 + */ 655 + static void tty3270_scroll_backward(struct kbd_data *kbd) 656 + { 657 + struct tty3270 *tp = container_of(kbd->port, struct tty3270, port); 658 + 659 + spin_lock_irq(&tp->view.lock); 660 + tp->nr_up += tty3270_tty_rows(tp) / 2; 661 + if (tp->nr_up > tp->allocated_lines - tty3270_tty_rows(tp)) 662 + tp->nr_up = tp->allocated_lines - tty3270_tty_rows(tp); 663 + tty3270_redraw(tp); 664 + spin_unlock_irq(&tp->view.lock); 665 + } 666 + 667 + /* 668 + * Pass input line to tty. 669 + */ 670 + static void tty3270_read_tasklet(unsigned long data) 671 + { 672 + struct raw3270_request *rrq = (struct raw3270_request *)data; 673 + static char kreset_data = TW_KR; 674 + struct tty3270 *tp = container_of(rrq->view, struct tty3270, view); 675 + char *input; 676 + int len; 677 + 678 + spin_lock_irq(&tp->view.lock); 679 + /* 680 + * Two AID keys are special: For 0x7d (enter) the input line 681 + * has to be emitted to the tty and for 0x6d the screen 682 + * needs to be redrawn. 683 + */ 684 + input = NULL; 685 + len = 0; 686 + switch (tp->input[0]) { 687 + case AID_ENTER: 688 + /* Enter: write input to tty. */ 689 + input = tp->input + 6; 690 + len = tty3270_input_size(tp->view.cols) - 6 - rrq->rescnt; 691 + if (tp->inattr != TF_INPUTN) 692 + tty3270_rcl_add(tp, input, len); 693 + if (tp->nr_up > 0) 694 + tp->nr_up = 0; 695 + /* Clear input area. */ 696 + tty3270_update_prompt(tp, ""); 697 + tty3270_set_timer(tp, 1); 312 698 break; 313 - case 0xf3: /* PF3: deactivate the console view. */ 314 - deactivate = 1; 699 + case AID_CLEAR: 700 + /* Display has been cleared. Redraw. */ 701 + tp->update_flags = TTY_UPDATE_ALL; 702 + tty3270_set_timer(tp, 1); 703 + if (!list_empty(&tp->readpartreq->list)) 704 + break; 705 + raw3270_start_request(&tp->view, tp->readpartreq, TC_WRITESF, 706 + (char *)sfq_read_partition, sizeof(sfq_read_partition)); 315 707 break; 316 - case 0x6d: /* clear: start from scratch. 
*/ 317 - cp->update_flags = CON_UPDATE_ALL; 318 - con3270_set_timer(cp, 1); 708 + case AID_READ_PARTITION: 709 + raw3270_read_modified_cb(tp->readpartreq, tp->input); 319 710 break; 320 - case 0xf7: /* PF7: do a page up in the console log. */ 321 - nr_up += cp->view.rows - 2; 322 - if (nr_up + cp->view.rows - 1 > cp->nr_lines) { 323 - nr_up = cp->nr_lines - cp->view.rows + 1; 324 - if (nr_up < 0) 325 - nr_up = 0; 326 - } 327 - break; 328 - case 0xf8: /* PF8: do a page down in the console log. */ 329 - nr_up -= cp->view.rows - 2; 330 - if (nr_up < 0) 331 - nr_up = 0; 711 + default: 332 712 break; 333 713 } 334 - if (nr_up != cp->nr_up) { 335 - cp->nr_up = nr_up; 336 - con3270_rebuild_update(cp); 337 - con3270_update_status(cp); 338 - con3270_set_timer(cp, 1); 339 - } 340 - spin_unlock_irqrestore(&cp->view.lock, flags); 714 + spin_unlock_irq(&tp->view.lock); 341 715 342 716 /* Start keyboard reset command. */ 343 - raw3270_request_reset(cp->kreset); 344 - raw3270_request_set_cmd(cp->kreset, TC_WRITE); 345 - raw3270_request_add_data(cp->kreset, &kreset_data, 1); 346 - raw3270_start(&cp->view, cp->kreset); 717 + raw3270_start_request(&tp->view, tp->kreset, TC_WRITE, &kreset_data, 1); 347 718 348 - if (deactivate) 349 - raw3270_deactivate_view(&cp->view); 719 + while (len-- > 0) 720 + kbd_keycode(tp->kbd, *input++); 721 + /* Emit keycode for AID byte. */ 722 + kbd_keycode(tp->kbd, 256 + tp->input[0]); 350 723 351 724 raw3270_request_reset(rrq); 352 - xchg(&cp->read, rrq); 353 - raw3270_put_view(&cp->view); 725 + xchg(&tp->read, rrq); 726 + raw3270_put_view(&tp->view); 354 727 } 355 728 356 729 /* 357 730 * Read request completion callback. 
358 731 */ 359 - static void 360 - con3270_read_callback(struct raw3270_request *rq, void *data) 732 + static void tty3270_read_callback(struct raw3270_request *rq, void *data) 361 733 { 734 + struct tty3270 *tp = container_of(rq->view, struct tty3270, view); 735 + 362 736 raw3270_get_view(rq->view); 363 737 /* Schedule tasklet to pass input to tty. */ 364 - tasklet_schedule(&((struct con3270 *) rq->view)->readlet); 738 + tasklet_schedule(&tp->readlet); 365 739 } 366 740 367 741 /* 368 - * Issue a read request. Called only from interrupt function. 742 + * Issue a read request. Call with device lock. 369 743 */ 370 - static void 371 - con3270_issue_read(struct con3270 *cp) 744 + static void tty3270_issue_read(struct tty3270 *tp, int lock) 372 745 { 373 746 struct raw3270_request *rrq; 374 747 int rc; 375 748 376 - rrq = xchg(&cp->read, 0); 749 + rrq = xchg(&tp->read, 0); 377 750 if (!rrq) 378 751 /* Read already scheduled. */ 379 752 return; 380 - rrq->callback = con3270_read_callback; 381 - rrq->callback_data = cp; 753 + rrq->callback = tty3270_read_callback; 754 + rrq->callback_data = tp; 382 755 raw3270_request_set_cmd(rrq, TC_READMOD); 383 - raw3270_request_set_data(rrq, cp->input->string, cp->input->len); 756 + raw3270_request_set_data(rrq, tp->input, tty3270_input_size(tp->view.cols)); 384 757 /* Issue the read modified request. */ 385 - rc = raw3270_start_irq(&cp->view, rrq); 386 - if (rc) 758 + if (lock) 759 + rc = raw3270_start(&tp->view, rrq); 760 + else 761 + rc = raw3270_start_irq(&tp->view, rrq); 762 + if (rc) { 387 763 raw3270_request_reset(rrq); 764 + xchg(&tp->read, rrq); 765 + } 388 766 } 389 767 390 768 /* 391 - * Switch to the console view. 
769 + * Hang up the tty 392 770 */ 393 - static int 394 - con3270_activate(struct raw3270_view *view) 771 + static void tty3270_hangup_tasklet(unsigned long data) 395 772 { 396 - struct con3270 *cp; 773 + struct tty3270 *tp = (struct tty3270 *)data; 397 774 398 - cp = (struct con3270 *) view; 399 - cp->update_flags = CON_UPDATE_ALL; 400 - con3270_set_timer(cp, 1); 775 + tty_port_tty_hangup(&tp->port, true); 776 + raw3270_put_view(&tp->view); 777 + } 778 + 779 + /* 780 + * Switch to the tty view. 781 + */ 782 + static int tty3270_activate(struct raw3270_view *view) 783 + { 784 + struct tty3270 *tp = container_of(view, struct tty3270, view); 785 + 786 + tp->update_flags = TTY_UPDATE_ALL; 787 + tty3270_set_timer(tp, 1); 401 788 return 0; 402 789 } 403 790 404 - static void 405 - con3270_deactivate(struct raw3270_view *view) 791 + static void tty3270_deactivate(struct raw3270_view *view) 406 792 { 407 - struct con3270 *cp; 793 + struct tty3270 *tp = container_of(view, struct tty3270, view); 408 794 409 - cp = (struct con3270 *) view; 410 - del_timer(&cp->timer); 795 + del_timer(&tp->timer); 411 796 } 412 797 413 - static void 414 - con3270_irq(struct con3270 *cp, struct raw3270_request *rq, struct irb *irb) 798 + static void tty3270_irq(struct tty3270 *tp, struct raw3270_request *rq, struct irb *irb) 415 799 { 416 800 /* Handle ATTN. Schedule tasklet to read aid. */ 417 - if (irb->scsw.cmd.dstat & DEV_STAT_ATTENTION) 418 - con3270_issue_read(cp); 801 + if (irb->scsw.cmd.dstat & DEV_STAT_ATTENTION) { 802 + if (!tp->throttle) 803 + tty3270_issue_read(tp, 0); 804 + else 805 + tp->attn = 1; 806 + } 419 807 420 808 if (rq) { 421 - if (irb->scsw.cmd.dstat & DEV_STAT_UNIT_CHECK) 809 + if (irb->scsw.cmd.dstat & DEV_STAT_UNIT_CHECK) { 422 810 rq->rc = -EIO; 423 - else 811 + raw3270_get_view(&tp->view); 812 + tasklet_schedule(&tp->hanglet); 813 + } else { 424 814 /* Normal end. Copy residual count. 
*/ 425 815 rq->rescnt = irb->scsw.cmd.count; 816 + } 426 817 } else if (irb->scsw.cmd.dstat & DEV_STAT_DEV_END) { 427 818 /* Interrupt without an outstanding request -> update all */ 428 - cp->update_flags = CON_UPDATE_ALL; 429 - con3270_set_timer(cp, 1); 819 + tp->update_flags = TTY_UPDATE_ALL; 820 + tty3270_set_timer(tp, 1); 430 821 } 431 - } 432 - 433 - /* Console view to a 3270 device. */ 434 - static struct raw3270_fn con3270_fn = { 435 - .activate = con3270_activate, 436 - .deactivate = con3270_deactivate, 437 - .intv = (void *) con3270_irq 438 - }; 439 - 440 - static inline void 441 - con3270_cline_add(struct con3270 *cp) 442 - { 443 - if (!list_empty(&cp->cline->list)) 444 - /* Already added. */ 445 - return; 446 - list_add_tail(&cp->cline->list, &cp->lines); 447 - cp->nr_lines++; 448 - con3270_rebuild_update(cp); 449 - } 450 - 451 - static inline void 452 - con3270_cline_insert(struct con3270 *cp, unsigned char c) 453 - { 454 - cp->cline->string[cp->cline->len++] = 455 - cp->view.ascebc[(c < ' ') ? ' ' : c]; 456 - if (list_empty(&cp->cline->update)) { 457 - list_add_tail(&cp->cline->update, &cp->update); 458 - cp->update_flags |= CON_UPDATE_LIST; 459 - } 460 - } 461 - 462 - static inline void 463 - con3270_cline_end(struct con3270 *cp) 464 - { 465 - struct string *s; 466 - unsigned int size; 467 - 468 - /* Copy cline. */ 469 - size = (cp->cline->len < cp->view.cols - 5) ? 470 - cp->cline->len + 4 : cp->view.cols; 471 - s = con3270_alloc_string(cp, size); 472 - memcpy(s->string, cp->cline->string, cp->cline->len); 473 - if (cp->cline->len < cp->view.cols - 5) { 474 - s->string[s->len - 4] = TO_RA; 475 - s->string[s->len - 1] = 0; 476 - } else { 477 - while (--size >= cp->cline->len) 478 - s->string[size] = cp->view.ascebc[' ']; 479 - } 480 - /* Replace cline with allocated line s and reset cline. 
*/ 481 - list_add(&s->list, &cp->cline->list); 482 - list_del_init(&cp->cline->list); 483 - if (!list_empty(&cp->cline->update)) { 484 - list_add(&s->update, &cp->cline->update); 485 - list_del_init(&cp->cline->update); 486 - } 487 - cp->cline->len = 0; 488 822 } 489 823 490 824 /* 491 - * Write a string to the 3270 console 825 + * Allocate tty3270 structure. 492 826 */ 827 + static struct tty3270 *tty3270_alloc_view(void) 828 + { 829 + struct tty3270 *tp; 830 + 831 + tp = kzalloc(sizeof(*tp), GFP_KERNEL); 832 + if (!tp) 833 + goto out_err; 834 + 835 + tp->write = raw3270_request_alloc(TTY3270_OUTPUT_BUFFER_SIZE); 836 + if (IS_ERR(tp->write)) 837 + goto out_tp; 838 + tp->read = raw3270_request_alloc(0); 839 + if (IS_ERR(tp->read)) 840 + goto out_write; 841 + tp->kreset = raw3270_request_alloc(1); 842 + if (IS_ERR(tp->kreset)) 843 + goto out_read; 844 + tp->readpartreq = raw3270_request_alloc(sizeof(sfq_read_partition)); 845 + if (IS_ERR(tp->readpartreq)) 846 + goto out_reset; 847 + tp->kbd = kbd_alloc(); 848 + if (!tp->kbd) 849 + goto out_readpartreq; 850 + 851 + tty_port_init(&tp->port); 852 + timer_setup(&tp->timer, tty3270_update, 0); 853 + tasklet_init(&tp->readlet, tty3270_read_tasklet, 854 + (unsigned long)tp->read); 855 + tasklet_init(&tp->hanglet, tty3270_hangup_tasklet, 856 + (unsigned long)tp); 857 + return tp; 858 + 859 + out_readpartreq: 860 + raw3270_request_free(tp->readpartreq); 861 + out_reset: 862 + raw3270_request_free(tp->kreset); 863 + out_read: 864 + raw3270_request_free(tp->read); 865 + out_write: 866 + raw3270_request_free(tp->write); 867 + out_tp: 868 + kfree(tp); 869 + out_err: 870 + return ERR_PTR(-ENOMEM); 871 + } 872 + 873 + /* 874 + * Free tty3270 structure. 
875 + */ 876 + static void tty3270_free_view(struct tty3270 *tp) 877 + { 878 + kbd_free(tp->kbd); 879 + raw3270_request_free(tp->kreset); 880 + raw3270_request_free(tp->read); 881 + raw3270_request_free(tp->write); 882 + free_page((unsigned long)tp->converted_line); 883 + tty_port_destroy(&tp->port); 884 + kfree(tp); 885 + } 886 + 887 + /* 888 + * Allocate tty3270 screen. 889 + */ 890 + static struct tty3270_line *tty3270_alloc_screen(struct tty3270 *tp, unsigned int rows, 891 + unsigned int cols, int *allocated_out) 892 + { 893 + struct tty3270_line *screen; 894 + int allocated, lines; 895 + 896 + allocated = __roundup_pow_of_two(rows) * TTY3270_SCREEN_PAGES; 897 + screen = kcalloc(allocated, sizeof(struct tty3270_line), GFP_KERNEL); 898 + if (!screen) 899 + goto out_err; 900 + for (lines = 0; lines < allocated; lines++) { 901 + screen[lines].cells = kcalloc(cols, sizeof(struct tty3270_cell), GFP_KERNEL); 902 + if (!screen[lines].cells) 903 + goto out_screen; 904 + } 905 + *allocated_out = allocated; 906 + return screen; 907 + out_screen: 908 + while (lines--) 909 + kfree(screen[lines].cells); 910 + kfree(screen); 911 + out_err: 912 + return ERR_PTR(-ENOMEM); 913 + } 914 + 915 + static char **tty3270_alloc_recall(int cols) 916 + { 917 + char **lines; 918 + int i; 919 + 920 + lines = kmalloc_array(TTY3270_RECALL_SIZE, sizeof(char *), GFP_KERNEL); 921 + if (!lines) 922 + return NULL; 923 + for (i = 0; i < TTY3270_RECALL_SIZE; i++) { 924 + lines[i] = kcalloc(1, tty3270_input_size(cols) + 1, GFP_KERNEL); 925 + if (!lines[i]) 926 + break; 927 + } 928 + 929 + if (i == TTY3270_RECALL_SIZE) 930 + return lines; 931 + 932 + while (i--) 933 + kfree(lines[i]); 934 + kfree(lines); 935 + return NULL; 936 + } 937 + 938 + static void tty3270_free_recall(char **lines) 939 + { 940 + int i; 941 + 942 + for (i = 0; i < TTY3270_RECALL_SIZE; i++) 943 + kfree(lines[i]); 944 + kfree(lines); 945 + } 946 + 947 + /* 948 + * Free tty3270 screen. 
949 + */ 950 + static void tty3270_free_screen(struct tty3270_line *screen, int old_lines) 951 + { 952 + int lines; 953 + 954 + for (lines = 0; lines < old_lines; lines++) 955 + kfree(screen[lines].cells); 956 + kfree(screen); 957 + } 958 + 959 + /* 960 + * Resize tty3270 screen 961 + */ 962 + static void tty3270_resize(struct raw3270_view *view, 963 + int new_model, int new_rows, int new_cols, 964 + int old_model, int old_rows, int old_cols) 965 + { 966 + struct tty3270 *tp = container_of(view, struct tty3270, view); 967 + struct tty3270_line *screen, *oscreen; 968 + char **old_rcl_lines, **new_rcl_lines; 969 + char *old_prompt, *new_prompt; 970 + char *old_input, *new_input; 971 + struct tty_struct *tty; 972 + struct winsize ws; 973 + int new_allocated, old_allocated = tp->allocated_lines; 974 + 975 + if (old_model == new_model && 976 + old_cols == new_cols && 977 + old_rows == new_rows) { 978 + spin_lock_irq(&tp->view.lock); 979 + tty3270_redraw(tp); 980 + spin_unlock_irq(&tp->view.lock); 981 + return; 982 + } 983 + 984 + new_input = kzalloc(tty3270_input_size(new_cols), GFP_KERNEL | GFP_DMA); 985 + if (!new_input) 986 + return; 987 + new_prompt = kzalloc(tty3270_input_size(new_cols), GFP_KERNEL); 988 + if (!new_prompt) 989 + goto out_input; 990 + screen = tty3270_alloc_screen(tp, new_rows, new_cols, &new_allocated); 991 + if (IS_ERR(screen)) 992 + goto out_prompt; 993 + new_rcl_lines = tty3270_alloc_recall(new_cols); 994 + if (!new_rcl_lines) 995 + goto out_screen; 996 + 997 + /* Switch to new output size */ 998 + spin_lock_irq(&tp->view.lock); 999 + tty3270_blank_screen(tp); 1000 + oscreen = tp->screen; 1001 + tp->screen = screen; 1002 + tp->allocated_lines = new_allocated; 1003 + tp->view.rows = new_rows; 1004 + tp->view.cols = new_cols; 1005 + tp->view.model = new_model; 1006 + tp->update_flags = TTY_UPDATE_ALL; 1007 + old_input = tp->input; 1008 + old_prompt = tp->prompt; 1009 + old_rcl_lines = tp->rcl_lines; 1010 + tp->input = new_input; 1011 + tp->prompt 
= new_prompt; 1012 + tp->rcl_lines = new_rcl_lines; 1013 + tp->rcl_read_index = 0; 1014 + tp->rcl_write_index = 0; 1015 + spin_unlock_irq(&tp->view.lock); 1016 + tty3270_free_screen(oscreen, old_allocated); 1017 + kfree(old_input); 1018 + kfree(old_prompt); 1019 + tty3270_free_recall(old_rcl_lines); 1020 + tty3270_set_timer(tp, 1); 1021 + /* Inform tty layer about new size */ 1022 + tty = tty_port_tty_get(&tp->port); 1023 + if (!tty) 1024 + return; 1025 + ws.ws_row = tty3270_tty_rows(tp); 1026 + ws.ws_col = tp->view.cols; 1027 + tty_do_resize(tty, &ws); 1028 + tty_kref_put(tty); 1029 + return; 1030 + out_screen: 1031 + tty3270_free_screen(screen, new_rows); 1032 + out_prompt: 1033 + kfree(new_prompt); 1034 + out_input: 1035 + kfree(new_input); 1036 + } 1037 + 1038 + /* 1039 + * Unlink tty3270 data structure from tty. 1040 + */ 1041 + static void tty3270_release(struct raw3270_view *view) 1042 + { 1043 + struct tty3270 *tp = container_of(view, struct tty3270, view); 1044 + struct tty_struct *tty = tty_port_tty_get(&tp->port); 1045 + 1046 + if (tty) { 1047 + tty->driver_data = NULL; 1048 + tty_port_tty_set(&tp->port, NULL); 1049 + tty_hangup(tty); 1050 + raw3270_put_view(&tp->view); 1051 + tty_kref_put(tty); 1052 + } 1053 + } 1054 + 1055 + /* 1056 + * Free tty3270 data structure 1057 + */ 1058 + static void tty3270_free(struct raw3270_view *view) 1059 + { 1060 + struct tty3270 *tp = container_of(view, struct tty3270, view); 1061 + 1062 + del_timer_sync(&tp->timer); 1063 + tty3270_free_screen(tp->screen, tp->allocated_lines); 1064 + free_page((unsigned long)tp->converted_line); 1065 + kfree(tp->input); 1066 + kfree(tp->prompt); 1067 + tty3270_free_view(tp); 1068 + } 1069 + 1070 + /* 1071 + * Delayed freeing of tty3270 views. 
1072 + */ 1073 + static void tty3270_del_views(void) 1074 + { 1075 + int i; 1076 + 1077 + for (i = RAW3270_FIRSTMINOR; i <= tty3270_max_index; i++) { 1078 + struct raw3270_view *view = raw3270_find_view(&tty3270_fn, i); 1079 + 1080 + if (!IS_ERR(view)) 1081 + raw3270_del_view(view); 1082 + } 1083 + } 1084 + 1085 + static struct raw3270_fn tty3270_fn = { 1086 + .activate = tty3270_activate, 1087 + .deactivate = tty3270_deactivate, 1088 + .intv = (void *)tty3270_irq, 1089 + .release = tty3270_release, 1090 + .free = tty3270_free, 1091 + .resize = tty3270_resize 1092 + }; 1093 + 1094 + static int 1095 + tty3270_create_view(int index, struct tty3270 **newtp) 1096 + { 1097 + struct tty3270 *tp; 1098 + int rc; 1099 + 1100 + if (tty3270_max_index < index + 1) 1101 + tty3270_max_index = index + 1; 1102 + 1103 + /* Allocate tty3270 structure on first open. */ 1104 + tp = tty3270_alloc_view(); 1105 + if (IS_ERR(tp)) 1106 + return PTR_ERR(tp); 1107 + 1108 + rc = raw3270_add_view(&tp->view, &tty3270_fn, 1109 + index + RAW3270_FIRSTMINOR, 1110 + RAW3270_VIEW_LOCK_IRQ); 1111 + if (rc) 1112 + goto out_free_view; 1113 + 1114 + tp->screen = tty3270_alloc_screen(tp, tp->view.rows, tp->view.cols, 1115 + &tp->allocated_lines); 1116 + if (IS_ERR(tp->screen)) { 1117 + rc = PTR_ERR(tp->screen); 1118 + goto out_put_view; 1119 + } 1120 + 1121 + tp->converted_line = (void *)__get_free_page(GFP_KERNEL); 1122 + if (!tp->converted_line) { 1123 + rc = -ENOMEM; 1124 + goto out_free_screen; 1125 + } 1126 + 1127 + tp->input = kzalloc(tty3270_input_size(tp->view.cols), GFP_KERNEL | GFP_DMA); 1128 + if (!tp->input) { 1129 + rc = -ENOMEM; 1130 + goto out_free_converted_line; 1131 + } 1132 + 1133 + tp->prompt = kzalloc(tty3270_input_size(tp->view.cols), GFP_KERNEL); 1134 + if (!tp->prompt) { 1135 + rc = -ENOMEM; 1136 + goto out_free_input; 1137 + } 1138 + 1139 + tp->rcl_lines = tty3270_alloc_recall(tp->view.cols); 1140 + if (!tp->rcl_lines) { 1141 + rc = -ENOMEM; 1142 + goto out_free_prompt; 1143 + } 
1144 + 1145 + /* Create blank line for every line in the tty output area. */ 1146 + tty3270_blank_screen(tp); 1147 + 1148 + tp->kbd->port = &tp->port; 1149 + tp->kbd->fn_handler[KVAL(K_INCRCONSOLE)] = tty3270_exit_tty; 1150 + tp->kbd->fn_handler[KVAL(K_SCROLLBACK)] = tty3270_scroll_backward; 1151 + tp->kbd->fn_handler[KVAL(K_SCROLLFORW)] = tty3270_scroll_forward; 1152 + tp->kbd->fn_handler[KVAL(K_CONS)] = tty3270_rcl_backward; 1153 + kbd_ascebc(tp->kbd, tp->view.ascebc); 1154 + 1155 + raw3270_activate_view(&tp->view); 1156 + raw3270_put_view(&tp->view); 1157 + *newtp = tp; 1158 + return 0; 1159 + 1160 + out_free_prompt: 1161 + kfree(tp->prompt); 1162 + out_free_input: 1163 + kfree(tp->input); 1164 + out_free_converted_line: 1165 + free_page((unsigned long)tp->converted_line); 1166 + out_free_screen: 1167 + tty3270_free_screen(tp->screen, tp->view.rows); 1168 + out_put_view: 1169 + raw3270_put_view(&tp->view); 1170 + raw3270_del_view(&tp->view); 1171 + out_free_view: 1172 + tty3270_free_view(tp); 1173 + return rc; 1174 + } 1175 + 1176 + /* 1177 + * This routine is called whenever a 3270 tty is opened for the first time. 1178 + */ 1179 + static int 1180 + tty3270_install(struct tty_driver *driver, struct tty_struct *tty) 1181 + { 1182 + struct raw3270_view *view; 1183 + struct tty3270 *tp; 1184 + int rc; 1185 + 1186 + /* Check if the tty3270 is already there. 
*/ 1187 + view = raw3270_find_view(&tty3270_fn, tty->index + RAW3270_FIRSTMINOR); 1188 + if (IS_ERR(view)) { 1189 + rc = tty3270_create_view(tty->index, &tp); 1190 + if (rc) 1191 + return rc; 1192 + } else { 1193 + tp = container_of(view, struct tty3270, view); 1194 + tty->driver_data = tp; 1195 + tp->inattr = TF_INPUT; 1196 + } 1197 + 1198 + tty->winsize.ws_row = tty3270_tty_rows(tp); 1199 + tty->winsize.ws_col = tp->view.cols; 1200 + rc = tty_port_install(&tp->port, driver, tty); 1201 + if (rc) { 1202 + raw3270_put_view(&tp->view); 1203 + return rc; 1204 + } 1205 + tty->driver_data = tp; 1206 + return 0; 1207 + } 1208 + 1209 + /* 1210 + * This routine is called whenever a 3270 tty is opened. 1211 + */ 1212 + static int tty3270_open(struct tty_struct *tty, struct file *filp) 1213 + { 1214 + struct tty3270 *tp = tty->driver_data; 1215 + struct tty_port *port = &tp->port; 1216 + 1217 + port->count++; 1218 + tty_port_tty_set(port, tty); 1219 + return 0; 1220 + } 1221 + 1222 + /* 1223 + * This routine is called when the 3270 tty is closed. We wait 1224 + * for the remaining request to be completed. Then we clean up. 1225 + */ 1226 + static void tty3270_close(struct tty_struct *tty, struct file *filp) 1227 + { 1228 + struct tty3270 *tp = tty->driver_data; 1229 + 1230 + if (tty->count > 1) 1231 + return; 1232 + if (tp) 1233 + tty_port_tty_set(&tp->port, NULL); 1234 + } 1235 + 1236 + static void tty3270_cleanup(struct tty_struct *tty) 1237 + { 1238 + struct tty3270 *tp = tty->driver_data; 1239 + 1240 + if (tp) { 1241 + tty->driver_data = NULL; 1242 + raw3270_put_view(&tp->view); 1243 + } 1244 + } 1245 + 1246 + /* 1247 + * We always have room. 1248 + */ 1249 + static unsigned int tty3270_write_room(struct tty_struct *tty) 1250 + { 1251 + return INT_MAX; 1252 + } 1253 + 1254 + /* 1255 + * Insert character into the screen at the current position with the 1256 + * current color and highlight. This function does NOT do cursor movement. 
1257 + */ 1258 + static void tty3270_put_character(struct tty3270 *tp, char ch) 1259 + { 1260 + struct tty3270_line *line; 1261 + struct tty3270_cell *cell; 1262 + 1263 + line = tty3270_get_write_line(tp, tp->cy); 1264 + if (line->len <= tp->cx) { 1265 + while (line->len < tp->cx) { 1266 + cell = line->cells + line->len; 1267 + cell->character = ' '; 1268 + cell->attributes = tp->attributes; 1269 + line->len++; 1270 + } 1271 + line->len++; 1272 + } 1273 + cell = line->cells + tp->cx; 1274 + cell->character = ch; 1275 + cell->attributes = tp->attributes; 1276 + line->dirty = 1; 1277 + } 1278 + 1279 + /* 1280 + * Do carriage return. 1281 + */ 1282 + static void tty3270_cr(struct tty3270 *tp) 1283 + { 1284 + tp->cx = 0; 1285 + } 1286 + 1287 + /* 1288 + * Do line feed. 1289 + */ 1290 + static void tty3270_lf(struct tty3270 *tp) 1291 + { 1292 + struct tty3270_line *line; 1293 + int i; 1294 + 1295 + if (tp->cy < tty3270_tty_rows(tp) - 1) { 1296 + tp->cy++; 1297 + } else { 1298 + tp->line_view_start = tty3270_line_increment(tp, tp->line_view_start, 1); 1299 + tp->line_write_start = tty3270_line_increment(tp, tp->line_write_start, 1); 1300 + for (i = 0; i < tty3270_tty_rows(tp); i++) 1301 + tty3270_get_view_line(tp, i)->dirty = 1; 1302 + } 1303 + 1304 + line = tty3270_get_write_line(tp, tp->cy); 1305 + line->len = 0; 1306 + line->dirty = 1; 1307 + } 1308 + 1309 + static void tty3270_ri(struct tty3270 *tp) 1310 + { 1311 + if (tp->cy > 0) 1312 + tp->cy--; 1313 + } 1314 + 1315 + static void tty3270_reset_cell(struct tty3270 *tp, struct tty3270_cell *cell) 1316 + { 1317 + cell->character = ' '; 1318 + tty3270_reset_attributes(&cell->attributes); 1319 + } 1320 + 1321 + /* 1322 + * Insert characters at current position. 
1323 + */ 1324 + static void tty3270_insert_characters(struct tty3270 *tp, int n) 1325 + { 1326 + struct tty3270_line *line; 1327 + int k; 1328 + 1329 + line = tty3270_get_write_line(tp, tp->cy); 1330 + while (line->len < tp->cx) 1331 + tty3270_reset_cell(tp, &line->cells[line->len++]); 1332 + if (n > tp->view.cols - tp->cx) 1333 + n = tp->view.cols - tp->cx; 1334 + k = min_t(int, line->len - tp->cx, tp->view.cols - tp->cx - n); 1335 + while (k--) 1336 + line->cells[tp->cx + n + k] = line->cells[tp->cx + k]; 1337 + line->len += n; 1338 + if (line->len > tp->view.cols) 1339 + line->len = tp->view.cols; 1340 + while (n-- > 0) { 1341 + line->cells[tp->cx + n].character = ' '; 1342 + line->cells[tp->cx + n].attributes = tp->attributes; 1343 + } 1344 + } 1345 + 1346 + /* 1347 + * Delete characters at current position. 1348 + */ 1349 + static void tty3270_delete_characters(struct tty3270 *tp, int n) 1350 + { 1351 + struct tty3270_line *line; 1352 + int i; 1353 + 1354 + line = tty3270_get_write_line(tp, tp->cy); 1355 + if (line->len <= tp->cx) 1356 + return; 1357 + if (line->len - tp->cx <= n) { 1358 + line->len = tp->cx; 1359 + return; 1360 + } 1361 + for (i = tp->cx; i + n < line->len; i++) 1362 + line->cells[i] = line->cells[i + n]; 1363 + line->len -= n; 1364 + } 1365 + 1366 + /* 1367 + * Erase characters at current position. 
1368 + */ 1369 + static void tty3270_erase_characters(struct tty3270 *tp, int n) 1370 + { 1371 + struct tty3270_line *line; 1372 + struct tty3270_cell *cell; 1373 + 1374 + line = tty3270_get_write_line(tp, tp->cy); 1375 + while (line->len > tp->cx && n-- > 0) { 1376 + cell = line->cells + tp->cx++; 1377 + tty3270_reset_cell(tp, cell); 1378 + } 1379 + tp->cx += n; 1380 + tp->cx = min_t(int, tp->cx, tp->view.cols - 1); 1381 + } 1382 + 1383 + /* 1384 + * Erase line, 3 different cases: 1385 + * Esc [ 0 K Erase from current position to end of line inclusive 1386 + * Esc [ 1 K Erase from beginning of line to current position inclusive 1387 + * Esc [ 2 K Erase entire line (without moving cursor) 1388 + */ 1389 + static void tty3270_erase_line(struct tty3270 *tp, int mode) 1390 + { 1391 + struct tty3270_line *line; 1392 + struct tty3270_cell *cell; 1393 + int i, start, end; 1394 + 1395 + line = tty3270_get_write_line(tp, tp->cy); 1396 + 1397 + switch (mode) { 1398 + case 0: 1399 + start = tp->cx; 1400 + end = tp->view.cols; 1401 + break; 1402 + case 1: 1403 + start = 0; 1404 + end = tp->cx; 1405 + break; 1406 + case 2: 1407 + start = 0; 1408 + end = tp->view.cols; 1409 + break; 1410 + default: 1411 + return; 1412 + } 1413 + 1414 + for (i = start; i < end; i++) { 1415 + cell = line->cells + i; 1416 + tty3270_reset_cell(tp, cell); 1417 + cell->attributes.b_color = tp->attributes.b_color; 1418 + } 1419 + 1420 + if (line->len <= end) 1421 + line->len = end; 1422 + } 1423 + 1424 + /* 1425 + * Erase display, 3 different cases: 1426 + * Esc [ 0 J Erase from current position to bottom of screen inclusive 1427 + * Esc [ 1 J Erase from top of screen to current position inclusive 1428 + * Esc [ 2 J Erase entire screen (without moving the cursor) 1429 + */ 1430 + static void tty3270_erase_display(struct tty3270 *tp, int mode) 1431 + { 1432 + struct tty3270_line *line; 1433 + int i, start, end; 1434 + 1435 + switch (mode) { 1436 + case 0: 1437 + tty3270_erase_line(tp, 0); 1438 + start 
= tp->cy + 1; 1439 + end = tty3270_tty_rows(tp); 1440 + break; 1441 + case 1: 1442 + start = 0; 1443 + end = tp->cy; 1444 + tty3270_erase_line(tp, 1); 1445 + break; 1446 + case 2: 1447 + start = 0; 1448 + end = tty3270_tty_rows(tp); 1449 + break; 1450 + default: 1451 + return; 1452 + } 1453 + for (i = start; i < end; i++) { 1454 + line = tty3270_get_write_line(tp, i); 1455 + line->len = 0; 1456 + line->dirty = 1; 1457 + } 1458 + } 1459 + 1460 + /* 1461 + * Set attributes found in an escape sequence. 1462 + * Esc [ <attr> ; <attr> ; ... m 1463 + */ 1464 + static void tty3270_set_attributes(struct tty3270 *tp) 1465 + { 1466 + int i, attr; 1467 + 1468 + for (i = 0; i <= tp->esc_npar; i++) { 1469 + attr = tp->esc_par[i]; 1470 + switch (attr) { 1471 + case 0: /* Reset */ 1472 + tty3270_reset_attributes(&tp->attributes); 1473 + break; 1474 + /* Highlight. */ 1475 + case 4: /* Start underlining. */ 1476 + tp->attributes.highlight = TTY3270_HIGHLIGHT_UNDERSCORE; 1477 + break; 1478 + case 5: /* Start blink. */ 1479 + tp->attributes.highlight = TTY3270_HIGHLIGHT_BLINK; 1480 + break; 1481 + case 7: /* Start reverse. */ 1482 + tp->attributes.highlight = TTY3270_HIGHLIGHT_REVERSE; 1483 + break; 1484 + case 24: /* End underlining */ 1485 + tp->attributes.highlight &= ~TTY3270_HIGHLIGHT_UNDERSCORE; 1486 + break; 1487 + case 25: /* End blink. */ 1488 + tp->attributes.highlight &= ~TTY3270_HIGHLIGHT_BLINK; 1489 + break; 1490 + case 27: /* End reverse. */ 1491 + tp->attributes.highlight &= ~TTY3270_HIGHLIGHT_REVERSE; 1492 + break; 1493 + /* Foreground color. */ 1494 + case 30: /* Black */ 1495 + case 31: /* Red */ 1496 + case 32: /* Green */ 1497 + case 33: /* Yellow */ 1498 + case 34: /* Blue */ 1499 + case 35: /* Magenta */ 1500 + case 36: /* Cyan */ 1501 + case 37: /* White */ 1502 + case 39: /* Black */ 1503 + tp->attributes.f_color = attr - 30; 1504 + break; 1505 + /* Background color. 
*/ 1506 + case 40: /* Black */ 1507 + case 41: /* Red */ 1508 + case 42: /* Green */ 1509 + case 43: /* Yellow */ 1510 + case 44: /* Blue */ 1511 + case 45: /* Magenta */ 1512 + case 46: /* Cyan */ 1513 + case 47: /* White */ 1514 + case 49: /* Black */ 1515 + tp->attributes.b_color = attr - 40; 1516 + break; 1517 + } 1518 + } 1519 + } 1520 + 1521 + static inline int tty3270_getpar(struct tty3270 *tp, int ix) 1522 + { 1523 + return (tp->esc_par[ix] > 0) ? tp->esc_par[ix] : 1; 1524 + } 1525 + 1526 + static void tty3270_goto_xy(struct tty3270 *tp, int cx, int cy) 1527 + { 1528 + struct tty3270_line *line; 1529 + struct tty3270_cell *cell; 1530 + int max_cx = max(0, cx); 1531 + int max_cy = max(0, cy); 1532 + 1533 + tp->cx = min_t(int, tp->view.cols - 1, max_cx); 1534 + line = tty3270_get_write_line(tp, tp->cy); 1535 + while (line->len < tp->cx) { 1536 + cell = line->cells + line->len; 1537 + cell->character = ' '; 1538 + cell->attributes = tp->attributes; 1539 + line->len++; 1540 + } 1541 + tp->cy = min_t(int, tty3270_tty_rows(tp) - 1, max_cy); 1542 + } 1543 + 1544 + /* 1545 + * Process escape sequences. Known sequences: 1546 + * Esc 7 Save Cursor Position 1547 + * Esc 8 Restore Cursor Position 1548 + * Esc [ Pn ; Pn ; .. m Set attributes 1549 + * Esc [ Pn ; Pn H Cursor Position 1550 + * Esc [ Pn ; Pn f Cursor Position 1551 + * Esc [ Pn A Cursor Up 1552 + * Esc [ Pn B Cursor Down 1553 + * Esc [ Pn C Cursor Forward 1554 + * Esc [ Pn D Cursor Backward 1555 + * Esc [ Pn G Cursor Horizontal Absolute 1556 + * Esc [ Pn X Erase Characters 1557 + * Esc [ Ps J Erase in Display 1558 + * Esc [ Ps K Erase in Line 1559 + * // FIXME: add all the new ones. 1560 + * 1561 + * Pn is a numeric parameter, a string of zero or more decimal digits. 1562 + * Ps is a selective parameter. 
1563 + */ 1564 + static void tty3270_escape_sequence(struct tty3270 *tp, char ch) 1565 + { 1566 + enum { ES_NORMAL, ES_ESC, ES_SQUARE, ES_PAREN, ES_GETPARS }; 1567 + 1568 + if (tp->esc_state == ES_NORMAL) { 1569 + if (ch == 0x1b) 1570 + /* Starting new escape sequence. */ 1571 + tp->esc_state = ES_ESC; 1572 + return; 1573 + } 1574 + if (tp->esc_state == ES_ESC) { 1575 + tp->esc_state = ES_NORMAL; 1576 + switch (ch) { 1577 + case '[': 1578 + tp->esc_state = ES_SQUARE; 1579 + break; 1580 + case '(': 1581 + tp->esc_state = ES_PAREN; 1582 + break; 1583 + case 'E': 1584 + tty3270_cr(tp); 1585 + tty3270_lf(tp); 1586 + break; 1587 + case 'M': 1588 + tty3270_ri(tp); 1589 + break; 1590 + case 'D': 1591 + tty3270_lf(tp); 1592 + break; 1593 + case 'Z': /* Respond ID. */ 1594 + kbd_puts_queue(&tp->port, "\033[?6c"); 1595 + break; 1596 + case '7': /* Save cursor position. */ 1597 + tp->saved_cx = tp->cx; 1598 + tp->saved_cy = tp->cy; 1599 + tp->saved_attributes = tp->attributes; 1600 + break; 1601 + case '8': /* Restore cursor position. */ 1602 + tty3270_goto_xy(tp, tp->saved_cx, tp->saved_cy); 1603 + tp->attributes = tp->saved_attributes; 1604 + break; 1605 + case 'c': /* Reset terminal. 
*/ 1606 + tp->cx = 0; 1607 + tp->cy = 0; 1608 + tp->saved_cx = 0; 1609 + tp->saved_cy = 0; 1610 + tty3270_reset_attributes(&tp->attributes); 1611 + tty3270_reset_attributes(&tp->saved_attributes); 1612 + tty3270_erase_display(tp, 2); 1613 + break; 1614 + } 1615 + return; 1616 + } 1617 + 1618 + switch (tp->esc_state) { 1619 + case ES_PAREN: 1620 + tp->esc_state = ES_NORMAL; 1621 + switch (ch) { 1622 + case 'B': 1623 + tp->attributes.alternate_charset = 0; 1624 + break; 1625 + case '0': 1626 + tp->attributes.alternate_charset = 1; 1627 + break; 1628 + } 1629 + return; 1630 + case ES_SQUARE: 1631 + tp->esc_state = ES_GETPARS; 1632 + memset(tp->esc_par, 0, sizeof(tp->esc_par)); 1633 + tp->esc_npar = 0; 1634 + tp->esc_ques = (ch == '?'); 1635 + if (tp->esc_ques) 1636 + return; 1637 + fallthrough; 1638 + case ES_GETPARS: 1639 + if (ch == ';' && tp->esc_npar < ESCAPE_NPAR - 1) { 1640 + tp->esc_npar++; 1641 + return; 1642 + } 1643 + if (ch >= '0' && ch <= '9') { 1644 + tp->esc_par[tp->esc_npar] *= 10; 1645 + tp->esc_par[tp->esc_npar] += ch - '0'; 1646 + return; 1647 + } 1648 + break; 1649 + default: 1650 + break; 1651 + } 1652 + tp->esc_state = ES_NORMAL; 1653 + if (ch == 'n' && !tp->esc_ques) { 1654 + if (tp->esc_par[0] == 5) /* Status report. */ 1655 + kbd_puts_queue(&tp->port, "\033[0n"); 1656 + else if (tp->esc_par[0] == 6) { /* Cursor report. */ 1657 + char buf[40]; 1658 + 1659 + sprintf(buf, "\033[%d;%dR", tp->cy + 1, tp->cx + 1); 1660 + kbd_puts_queue(&tp->port, buf); 1661 + } 1662 + return; 1663 + } 1664 + if (tp->esc_ques) 1665 + return; 1666 + switch (ch) { 1667 + case 'm': 1668 + tty3270_set_attributes(tp); 1669 + break; 1670 + case 'H': /* Set cursor position. */ 1671 + case 'f': 1672 + tty3270_goto_xy(tp, tty3270_getpar(tp, 1) - 1, 1673 + tty3270_getpar(tp, 0) - 1); 1674 + break; 1675 + case 'd': /* Set y position. */ 1676 + tty3270_goto_xy(tp, tp->cx, tty3270_getpar(tp, 0) - 1); 1677 + break; 1678 + case 'A': /* Cursor up. 
*/ 1679 + case 'F': 1680 + tty3270_goto_xy(tp, tp->cx, tp->cy - tty3270_getpar(tp, 0)); 1681 + break; 1682 + case 'B': /* Cursor down. */ 1683 + case 'e': 1684 + case 'E': 1685 + tty3270_goto_xy(tp, tp->cx, tp->cy + tty3270_getpar(tp, 0)); 1686 + break; 1687 + case 'C': /* Cursor forward. */ 1688 + case 'a': 1689 + tty3270_goto_xy(tp, tp->cx + tty3270_getpar(tp, 0), tp->cy); 1690 + break; 1691 + case 'D': /* Cursor backward. */ 1692 + tty3270_goto_xy(tp, tp->cx - tty3270_getpar(tp, 0), tp->cy); 1693 + break; 1694 + case 'G': /* Set x position. */ 1695 + case '`': 1696 + tty3270_goto_xy(tp, tty3270_getpar(tp, 0), tp->cy); 1697 + break; 1698 + case 'X': /* Erase Characters. */ 1699 + tty3270_erase_characters(tp, tty3270_getpar(tp, 0)); 1700 + break; 1701 + case 'J': /* Erase display. */ 1702 + tty3270_erase_display(tp, tp->esc_par[0]); 1703 + break; 1704 + case 'K': /* Erase line. */ 1705 + tty3270_erase_line(tp, tp->esc_par[0]); 1706 + break; 1707 + case 'P': /* Delete characters. */ 1708 + tty3270_delete_characters(tp, tty3270_getpar(tp, 0)); 1709 + break; 1710 + case '@': /* Insert characters. */ 1711 + tty3270_insert_characters(tp, tty3270_getpar(tp, 0)); 1712 + break; 1713 + case 's': /* Save cursor position. */ 1714 + tp->saved_cx = tp->cx; 1715 + tp->saved_cy = tp->cy; 1716 + tp->saved_attributes = tp->attributes; 1717 + break; 1718 + case 'u': /* Restore cursor position. */ 1719 + tty3270_goto_xy(tp, tp->saved_cx, tp->saved_cy); 1720 + tp->attributes = tp->saved_attributes; 1721 + break; 1722 + } 1723 + } 1724 + 1725 + /* 1726 + * String write routine for 3270 ttys 1727 + */ 1728 + static void tty3270_do_write(struct tty3270 *tp, struct tty_struct *tty, 1729 + const unsigned char *buf, int count) 1730 + { 1731 + int i_msg, i; 1732 + 1733 + spin_lock_irq(&tp->view.lock); 1734 + for (i_msg = 0; !tty->flow.stopped && i_msg < count; i_msg++) { 1735 + if (tp->esc_state != 0) { 1736 + /* Continue escape sequence. 
*/ 1737 + tty3270_escape_sequence(tp, buf[i_msg]); 1738 + continue; 1739 + } 1740 + 1741 + switch (buf[i_msg]) { 1742 + case 0x00: 1743 + break; 1744 + case 0x07: /* '\a' -- Alarm */ 1745 + tp->wcc |= TW_PLUSALARM; 1746 + break; 1747 + case 0x08: /* Backspace. */ 1748 + if (tp->cx > 0) { 1749 + tp->cx--; 1750 + tty3270_put_character(tp, ' '); 1751 + } 1752 + break; 1753 + case 0x09: /* '\t' -- Tabulate */ 1754 + for (i = tp->cx % 8; i < 8; i++) { 1755 + if (tp->cx >= tp->view.cols) { 1756 + tty3270_cr(tp); 1757 + tty3270_lf(tp); 1758 + break; 1759 + } 1760 + tty3270_put_character(tp, ' '); 1761 + tp->cx++; 1762 + } 1763 + break; 1764 + case 0x0a: /* '\n' -- New Line */ 1765 + tty3270_cr(tp); 1766 + tty3270_lf(tp); 1767 + break; 1768 + case 0x0c: /* '\f' -- Form Feed */ 1769 + tty3270_erase_display(tp, 2); 1770 + tp->cx = 0; 1771 + tp->cy = 0; 1772 + break; 1773 + case 0x0d: /* '\r' -- Carriage Return */ 1774 + tp->cx = 0; 1775 + break; 1776 + case 0x0e: 1777 + tp->attributes.alternate_charset = 1; 1778 + break; 1779 + case 0x0f: /* SuSE "exit alternate mode" */ 1780 + tp->attributes.alternate_charset = 0; 1781 + break; 1782 + case 0x1b: /* Start escape sequence. */ 1783 + tty3270_escape_sequence(tp, buf[i_msg]); 1784 + break; 1785 + default: /* Insert normal character. 
*/ 1786 + if (tp->cx >= tp->view.cols) { 1787 + tty3270_cr(tp); 1788 + tty3270_lf(tp); 1789 + } 1790 + tty3270_put_character(tp, buf[i_msg]); 1791 + tp->cx++; 1792 + break; 1793 + } 1794 + } 1795 + /* Setup timer to update display after 1/10 second */ 1796 + tp->update_flags |= TTY_UPDATE_LINES; 1797 + if (!timer_pending(&tp->timer)) 1798 + tty3270_set_timer(tp, msecs_to_jiffies(100)); 1799 + 1800 + spin_unlock_irq(&tp->view.lock); 1801 + } 1802 + 1803 + /* 1804 + * String write routine for 3270 ttys 1805 + */ 1806 + static int tty3270_write(struct tty_struct *tty, 1807 + const unsigned char *buf, int count) 1808 + { 1809 + struct tty3270 *tp; 1810 + 1811 + tp = tty->driver_data; 1812 + if (!tp) 1813 + return 0; 1814 + if (tp->char_count > 0) { 1815 + tty3270_do_write(tp, tty, tp->char_buf, tp->char_count); 1816 + tp->char_count = 0; 1817 + } 1818 + tty3270_do_write(tp, tty, buf, count); 1819 + return count; 1820 + } 1821 + 1822 + /* 1823 + * Put single characters to the tty's character buffer 1824 + */ 1825 + static int tty3270_put_char(struct tty_struct *tty, unsigned char ch) 1826 + { 1827 + struct tty3270 *tp; 1828 + 1829 + tp = tty->driver_data; 1830 + if (!tp || tp->char_count >= TTY3270_CHAR_BUF_SIZE) 1831 + return 0; 1832 + tp->char_buf[tp->char_count++] = ch; 1833 + return 1; 1834 + } 1835 + 1836 + /* 1837 + * Flush all characters from the tty's character buffer put there 1838 + * by tty3270_put_char. 
1839 + */ 1840 + static void tty3270_flush_chars(struct tty_struct *tty) 1841 + { 1842 + struct tty3270 *tp; 1843 + 1844 + tp = tty->driver_data; 1845 + if (!tp) 1846 + return; 1847 + if (tp->char_count > 0) { 1848 + tty3270_do_write(tp, tty, tp->char_buf, tp->char_count); 1849 + tp->char_count = 0; 1850 + } 1851 + } 1852 + 1853 + /* 1854 + * Check for visible/invisible input switches 1855 + */ 1856 + static void tty3270_set_termios(struct tty_struct *tty, const struct ktermios *old) 1857 + { 1858 + struct tty3270 *tp; 1859 + int new; 1860 + 1861 + tp = tty->driver_data; 1862 + if (!tp) 1863 + return; 1864 + spin_lock_irq(&tp->view.lock); 1865 + if (L_ICANON(tty)) { 1866 + new = L_ECHO(tty) ? TF_INPUT : TF_INPUTN; 1867 + if (new != tp->inattr) { 1868 + tp->inattr = new; 1869 + tty3270_update_prompt(tp, ""); 1870 + tty3270_set_timer(tp, 1); 1871 + } 1872 + } 1873 + spin_unlock_irq(&tp->view.lock); 1874 + } 1875 + 1876 + /* 1877 + * Disable reading from a 3270 tty 1878 + */ 1879 + static void tty3270_throttle(struct tty_struct *tty) 1880 + { 1881 + struct tty3270 *tp; 1882 + 1883 + tp = tty->driver_data; 1884 + if (!tp) 1885 + return; 1886 + tp->throttle = 1; 1887 + } 1888 + 1889 + /* 1890 + * Enable reading from a 3270 tty 1891 + */ 1892 + static void tty3270_unthrottle(struct tty_struct *tty) 1893 + { 1894 + struct tty3270 *tp; 1895 + 1896 + tp = tty->driver_data; 1897 + if (!tp) 1898 + return; 1899 + tp->throttle = 0; 1900 + if (tp->attn) 1901 + tty3270_issue_read(tp, 1); 1902 + } 1903 + 1904 + /* 1905 + * Hang up the tty device. 
1906 + */ 1907 + static void tty3270_hangup(struct tty_struct *tty) 1908 + { 1909 + struct tty3270 *tp; 1910 + 1911 + tp = tty->driver_data; 1912 + if (!tp) 1913 + return; 1914 + spin_lock_irq(&tp->view.lock); 1915 + tp->cx = 0; 1916 + tp->cy = 0; 1917 + tp->saved_cx = 0; 1918 + tp->saved_cy = 0; 1919 + tty3270_reset_attributes(&tp->attributes); 1920 + tty3270_reset_attributes(&tp->saved_attributes); 1921 + tty3270_blank_screen(tp); 1922 + tp->update_flags = TTY_UPDATE_ALL; 1923 + spin_unlock_irq(&tp->view.lock); 1924 + tty3270_set_timer(tp, 1); 1925 + } 1926 + 1927 + static void tty3270_wait_until_sent(struct tty_struct *tty, int timeout) 1928 + { 1929 + } 1930 + 1931 + static int tty3270_ioctl(struct tty_struct *tty, unsigned int cmd, 1932 + unsigned long arg) 1933 + { 1934 + struct tty3270 *tp; 1935 + 1936 + tp = tty->driver_data; 1937 + if (!tp) 1938 + return -ENODEV; 1939 + if (tty_io_error(tty)) 1940 + return -EIO; 1941 + return kbd_ioctl(tp->kbd, cmd, arg); 1942 + } 1943 + 1944 + #ifdef CONFIG_COMPAT 1945 + static long tty3270_compat_ioctl(struct tty_struct *tty, 1946 + unsigned int cmd, unsigned long arg) 1947 + { 1948 + struct tty3270 *tp; 1949 + 1950 + tp = tty->driver_data; 1951 + if (!tp) 1952 + return -ENODEV; 1953 + if (tty_io_error(tty)) 1954 + return -EIO; 1955 + return kbd_ioctl(tp->kbd, cmd, (unsigned long)compat_ptr(arg)); 1956 + } 1957 + #endif 1958 + 1959 + static const struct tty_operations tty3270_ops = { 1960 + .install = tty3270_install, 1961 + .cleanup = tty3270_cleanup, 1962 + .open = tty3270_open, 1963 + .close = tty3270_close, 1964 + .write = tty3270_write, 1965 + .put_char = tty3270_put_char, 1966 + .flush_chars = tty3270_flush_chars, 1967 + .write_room = tty3270_write_room, 1968 + .throttle = tty3270_throttle, 1969 + .unthrottle = tty3270_unthrottle, 1970 + .hangup = tty3270_hangup, 1971 + .wait_until_sent = tty3270_wait_until_sent, 1972 + .ioctl = tty3270_ioctl, 1973 + #ifdef CONFIG_COMPAT 1974 + .compat_ioctl = tty3270_compat_ioctl, 
1975 + #endif 1976 + .set_termios = tty3270_set_termios 1977 + }; 1978 + 1979 + static void tty3270_create_cb(int minor) 1980 + { 1981 + tty_register_device(tty3270_driver, minor - RAW3270_FIRSTMINOR, NULL); 1982 + } 1983 + 1984 + static void tty3270_destroy_cb(int minor) 1985 + { 1986 + tty_unregister_device(tty3270_driver, minor - RAW3270_FIRSTMINOR); 1987 + } 1988 + 1989 + static struct raw3270_notifier tty3270_notifier = { 1990 + .create = tty3270_create_cb, 1991 + .destroy = tty3270_destroy_cb, 1992 + }; 1993 + 1994 + /* 1995 + * 3270 tty registration code called from tty_init(). 1996 + * Most kernel services (incl. kmalloc) are available at this point. 1997 + */ 1998 + static int __init tty3270_init(void) 1999 + { 2000 + struct tty_driver *driver; 2001 + int ret; 2002 + 2003 + driver = tty_alloc_driver(RAW3270_MAXDEVS, 2004 + TTY_DRIVER_REAL_RAW | 2005 + TTY_DRIVER_DYNAMIC_DEV | 2006 + TTY_DRIVER_RESET_TERMIOS); 2007 + if (IS_ERR(driver)) 2008 + return PTR_ERR(driver); 2009 + 2010 + /* 2011 + * Initialize the tty_driver structure 2012 + * Entries in tty3270_driver that are NOT initialized: 2013 + * proc_entry, set_termios, flush_buffer, set_ldisc, write_proc 2014 + */ 2015 + driver->driver_name = "tty3270"; 2016 + driver->name = "3270/tty"; 2017 + driver->major = IBM_TTY3270_MAJOR; 2018 + driver->minor_start = RAW3270_FIRSTMINOR; 2019 + driver->name_base = RAW3270_FIRSTMINOR; 2020 + driver->type = TTY_DRIVER_TYPE_SYSTEM; 2021 + driver->subtype = SYSTEM_TYPE_TTY; 2022 + driver->init_termios = tty_std_termios; 2023 + tty_set_operations(driver, &tty3270_ops); 2024 + ret = tty_register_driver(driver); 2025 + if (ret) { 2026 + tty_driver_kref_put(driver); 2027 + return ret; 2028 + } 2029 + tty3270_driver = driver; 2030 + raw3270_register_notifier(&tty3270_notifier); 2031 + return 0; 2032 + } 2033 + 2034 + static void __exit tty3270_exit(void) 2035 + { 2036 + struct tty_driver *driver; 2037 + 2038 + raw3270_unregister_notifier(&tty3270_notifier); 2039 + driver = 
tty3270_driver; 2040 + tty3270_driver = NULL; 2041 + tty_unregister_driver(driver); 2042 + tty_driver_kref_put(driver); 2043 + tty3270_del_views(); 2044 + } 2045 + 2046 + #if IS_ENABLED(CONFIG_TN3270_CONSOLE) 2047 + 2048 + static struct tty3270 *condev; 2049 + 493 2050 static void 494 2051 con3270_write(struct console *co, const char *str, unsigned int count) 495 2052 { 496 - struct con3270 *cp; 2053 + struct tty3270 *tp = co->data; 497 2054 unsigned long flags; 498 - unsigned char c; 2055 + char c; 499 2056 500 - cp = condev; 501 - spin_lock_irqsave(&cp->view.lock, flags); 502 - while (count-- > 0) { 2057 + spin_lock_irqsave(&tp->view.lock, flags); 2058 + while (count--) { 503 2059 c = *str++; 504 - if (cp->cline->len == 0) 505 - con3270_cline_add(cp); 506 - if (c != '\n') 507 - con3270_cline_insert(cp, c); 508 - if (c == '\n' || cp->cline->len >= cp->view.cols) 509 - con3270_cline_end(cp); 2060 + if (c == 0x0a) { 2061 + tty3270_cr(tp); 2062 + tty3270_lf(tp); 2063 + } else { 2064 + if (tp->cx >= tp->view.cols) { 2065 + tty3270_cr(tp); 2066 + tty3270_lf(tp); 2067 + } 2068 + tty3270_put_character(tp, c); 2069 + tp->cx++; 2070 + } 510 2071 } 511 - /* Setup timer to output current console buffer after 1/10 second */ 512 - cp->nr_up = 0; 513 - if (cp->view.dev && !timer_pending(&cp->timer)) 514 - con3270_set_timer(cp, HZ/10); 515 - spin_unlock_irqrestore(&cp->view.lock,flags); 2072 + spin_unlock_irqrestore(&tp->view.lock, flags); 516 2073 } 517 2074 518 2075 static struct tty_driver * ··· 2079 522 return tty3270_driver; 2080 523 } 2081 524 2082 - /* 2083 - * Wait for end of write request. 
2084 - */ 2085 525 static void 2086 - con3270_wait_write(struct con3270 *cp) 526 + con3270_wait_write(struct tty3270 *tp) 2087 527 { 2088 - while (!cp->write) { 2089 - raw3270_wait_cons_dev(cp->view.dev); 528 + while (!tp->write) { 529 + raw3270_wait_cons_dev(tp->view.dev); 2090 530 barrier(); 2091 531 } 2092 532 } ··· 2099 545 static int con3270_notify(struct notifier_block *self, 2100 546 unsigned long event, void *data) 2101 547 { 2102 - struct con3270 *cp; 548 + struct tty3270 *tp; 2103 549 unsigned long flags; 550 + int rc; 2104 551 2105 - cp = condev; 2106 - if (!cp->view.dev) 552 + tp = condev; 553 + if (!tp->view.dev) 2107 554 return NOTIFY_DONE; 2108 - if (!raw3270_view_lock_unavailable(&cp->view)) 2109 - raw3270_activate_view(&cp->view); 2110 - if (!spin_trylock_irqsave(&cp->view.lock, flags)) 2111 - return NOTIFY_DONE; 2112 - con3270_wait_write(cp); 2113 - cp->nr_up = 0; 2114 - con3270_rebuild_update(cp); 2115 - con3270_update_status(cp); 2116 - while (cp->update_flags != 0) { 2117 - spin_unlock_irqrestore(&cp->view.lock, flags); 2118 - con3270_update(&cp->timer); 2119 - spin_lock_irqsave(&cp->view.lock, flags); 2120 - con3270_wait_write(cp); 555 + if (!raw3270_view_lock_unavailable(&tp->view)) { 556 + rc = raw3270_activate_view(&tp->view); 557 + if (rc) 558 + return NOTIFY_DONE; 2121 559 } 2122 - spin_unlock_irqrestore(&cp->view.lock, flags); 2123 - 560 + if (!spin_trylock_irqsave(&tp->view.lock, flags)) 561 + return NOTIFY_DONE; 562 + con3270_wait_write(tp); 563 + tp->nr_up = 0; 564 + tp->update_flags = TTY_UPDATE_ALL; 565 + while (tp->update_flags != 0) { 566 + spin_unlock_irqrestore(&tp->view.lock, flags); 567 + tty3270_update(&tp->timer); 568 + spin_lock_irqsave(&tp->view.lock, flags); 569 + con3270_wait_write(tp); 570 + } 571 + spin_unlock_irqrestore(&tp->view.lock, flags); 2124 572 return NOTIFY_DONE; 2125 573 } 2126 574 ··· 2136 580 .priority = INT_MIN + 1, /* run the callback late */ 2137 581 }; 2138 582 2139 - /* 2140 - * The console structure 
for the 3270 console 2141 - */ 2142 583 static struct console con3270 = { 2143 584 .name = "tty3270", 2144 585 .write = con3270_write, ··· 2143 590 .flags = CON_PRINTBUFFER, 2144 591 }; 2145 592 2146 - /* 2147 - * 3270 console initialization code called from console_init(). 2148 - */ 2149 593 static int __init 2150 594 con3270_init(void) 2151 595 { 596 + struct raw3270_view *view; 2152 597 struct raw3270 *rp; 2153 - void *cbuf; 2154 - int i; 598 + struct tty3270 *tp; 599 + int rc; 2155 600 2156 601 /* Check if 3270 is to be the console */ 2157 602 if (!CONSOLE_IS_3270) ··· 2165 614 if (IS_ERR(rp)) 2166 615 return PTR_ERR(rp); 2167 616 2168 - condev = kzalloc(sizeof(struct con3270), GFP_KERNEL | GFP_DMA); 2169 - if (!condev) 2170 - return -ENOMEM; 2171 - condev->view.dev = rp; 2172 - 2173 - condev->read = raw3270_request_alloc(0); 2174 - condev->read->callback = con3270_read_callback; 2175 - condev->read->callback_data = condev; 2176 - condev->write = raw3270_request_alloc(CON3270_OUTPUT_BUFFER_SIZE); 2177 - condev->kreset = raw3270_request_alloc(1); 2178 - 2179 - INIT_LIST_HEAD(&condev->lines); 2180 - INIT_LIST_HEAD(&condev->update); 2181 - timer_setup(&condev->timer, con3270_update, 0); 2182 - tasklet_init(&condev->readlet, con3270_read_tasklet, 2183 - (unsigned long) condev->read); 2184 - 2185 - raw3270_add_view(&condev->view, &con3270_fn, 1, RAW3270_VIEW_LOCK_IRQ); 2186 - 2187 - INIT_LIST_HEAD(&condev->freemem); 2188 - for (i = 0; i < CON3270_STRING_PAGES; i++) { 2189 - cbuf = (void *) get_zeroed_page(GFP_KERNEL | GFP_DMA); 2190 - add_string_memory(&condev->freemem, cbuf, PAGE_SIZE); 617 + /* Check if the tty3270 is already there. 
*/ 618 + view = raw3270_find_view(&tty3270_fn, RAW3270_FIRSTMINOR); 619 + if (IS_ERR(view)) { 620 + rc = tty3270_create_view(0, &tp); 621 + if (rc) 622 + return rc; 623 + } else { 624 + tp = container_of(view, struct tty3270, view); 625 + tp->inattr = TF_INPUT; 2191 626 } 2192 - condev->cline = alloc_string(&condev->freemem, condev->view.cols); 2193 - condev->cline->len = 0; 2194 - con3270_create_status(condev); 2195 - condev->input = alloc_string(&condev->freemem, 80); 627 + con3270.data = tp; 628 + condev = tp; 2196 629 atomic_notifier_chain_register(&panic_notifier_list, &on_panic_nb); 2197 630 register_reboot_notifier(&on_reboot_nb); 2198 631 register_console(&con3270); 2199 632 return 0; 2200 633 } 2201 - 2202 634 console_initcall(con3270_init); 635 + #endif 636 + 637 + MODULE_LICENSE("GPL"); 638 + MODULE_ALIAS_CHARDEV_MAJOR(IBM_TTY3270_MAJOR); 639 + 640 + module_init(tty3270_init); 641 + module_exit(tty3270_exit);
+2 -2
drivers/s390/char/diag_ftp.c
··· 159 159 goto out; 160 160 } 161 161 162 - len = strlcpy(ldfpl->fident, ftp->fname, sizeof(ldfpl->fident)); 163 - if (len >= HMCDRV_FTP_FIDENT_MAX) { 162 + len = strscpy(ldfpl->fident, ftp->fname, sizeof(ldfpl->fident)); 163 + if (len < 0) { 164 164 len = -EINVAL; 165 165 goto out_free; 166 166 }
drivers/s390/char/fs3270.c (+57 -67)
··· 19 19 #include <linux/slab.h> 20 20 #include <linux/types.h> 21 21 22 + #include <uapi/asm/fs3270.h> 22 23 #include <asm/ccwdev.h> 23 24 #include <asm/cio.h> 24 25 #include <asm/ebcdic.h> ··· 45 44 46 45 static DEFINE_MUTEX(fs3270_mutex); 47 46 48 - static void 49 - fs3270_wake_up(struct raw3270_request *rq, void *data) 47 + static void fs3270_wake_up(struct raw3270_request *rq, void *data) 50 48 { 51 - wake_up((wait_queue_head_t *) data); 49 + wake_up((wait_queue_head_t *)data); 52 50 } 53 51 54 - static inline int 55 - fs3270_working(struct fs3270 *fp) 52 + static inline int fs3270_working(struct fs3270 *fp) 56 53 { 57 54 /* 58 55 * The fullscreen view is in working order if the view ··· 59 60 return fp->active && raw3270_request_final(fp->init); 60 61 } 61 62 62 - static int 63 - fs3270_do_io(struct raw3270_view *view, struct raw3270_request *rq) 63 + static int fs3270_do_io(struct raw3270_view *view, struct raw3270_request *rq) 64 64 { 65 65 struct fs3270 *fp; 66 66 int rc; 67 67 68 - fp = (struct fs3270 *) view; 68 + fp = (struct fs3270 *)view; 69 69 rq->callback = fs3270_wake_up; 70 70 rq->callback_data = &fp->wait; 71 71 ··· 88 90 /* 89 91 * Switch to the fullscreen view. 
90 92 */ 91 - static void 92 - fs3270_reset_callback(struct raw3270_request *rq, void *data) 93 + static void fs3270_reset_callback(struct raw3270_request *rq, void *data) 93 94 { 94 95 struct fs3270 *fp; 95 96 96 - fp = (struct fs3270 *) rq->view; 97 + fp = (struct fs3270 *)rq->view; 97 98 raw3270_request_reset(rq); 98 99 wake_up(&fp->wait); 99 100 } 100 101 101 - static void 102 - fs3270_restore_callback(struct raw3270_request *rq, void *data) 102 + static void fs3270_restore_callback(struct raw3270_request *rq, void *data) 103 103 { 104 104 struct fs3270 *fp; 105 105 106 - fp = (struct fs3270 *) rq->view; 106 + fp = (struct fs3270 *)rq->view; 107 107 if (rq->rc != 0 || rq->rescnt != 0) { 108 108 if (fp->fs_pid) 109 109 kill_pid(fp->fs_pid, SIGHUP, 1); ··· 111 115 wake_up(&fp->wait); 112 116 } 113 117 114 - static int 115 - fs3270_activate(struct raw3270_view *view) 118 + static int fs3270_activate(struct raw3270_view *view) 116 119 { 117 120 struct fs3270 *fp; 118 121 char *cp; 119 122 int rc; 120 123 121 - fp = (struct fs3270 *) view; 124 + fp = (struct fs3270 *)view; 122 125 123 126 /* If an old init command is still running just return. */ 124 127 if (!raw3270_request_final(fp->init)) 125 128 return 0; 126 129 130 + raw3270_request_set_cmd(fp->init, TC_EWRITEA); 131 + raw3270_request_set_idal(fp->init, fp->rdbuf); 132 + fp->init->rescnt = 0; 133 + cp = fp->rdbuf->data[0]; 127 134 if (fp->rdbuf_size == 0) { 128 135 /* No saved buffer. Just clear the screen. */ 129 - raw3270_request_set_cmd(fp->init, TC_EWRITEA); 136 + fp->init->ccw.count = 1; 130 137 fp->init->callback = fs3270_reset_callback; 138 + cp[0] = 0; 131 139 } else { 132 140 /* Restore fullscreen buffer saved by fs3270_deactivate. 
*/ 133 - raw3270_request_set_cmd(fp->init, TC_EWRITEA); 134 - raw3270_request_set_idal(fp->init, fp->rdbuf); 135 141 fp->init->ccw.count = fp->rdbuf_size; 136 - cp = fp->rdbuf->data[0]; 142 + fp->init->callback = fs3270_restore_callback; 137 143 cp[0] = TW_KR; 138 144 cp[1] = TO_SBA; 139 145 cp[2] = cp[6]; ··· 144 146 cp[5] = TO_SBA; 145 147 cp[6] = 0x40; 146 148 cp[7] = 0x40; 147 - fp->init->rescnt = 0; 148 - fp->init->callback = fs3270_restore_callback; 149 149 } 150 - rc = fp->init->rc = raw3270_start_locked(view, fp->init); 150 + rc = raw3270_start_locked(view, fp->init); 151 + fp->init->rc = rc; 151 152 if (rc) 152 153 fp->init->callback(fp->init, NULL); 153 154 else ··· 157 160 /* 158 161 * Shutdown fullscreen view. 159 162 */ 160 - static void 161 - fs3270_save_callback(struct raw3270_request *rq, void *data) 163 + static void fs3270_save_callback(struct raw3270_request *rq, void *data) 162 164 { 163 165 struct fs3270 *fp; 164 166 165 - fp = (struct fs3270 *) rq->view; 167 + fp = (struct fs3270 *)rq->view; 166 168 167 169 /* Correct idal buffer element 0 address. */ 168 170 fp->rdbuf->data[0] -= 5; ··· 177 181 if (fp->fs_pid) 178 182 kill_pid(fp->fs_pid, SIGHUP, 1); 179 183 fp->rdbuf_size = 0; 180 - } else 184 + } else { 181 185 fp->rdbuf_size = fp->rdbuf->size - rq->rescnt; 186 + } 182 187 raw3270_request_reset(rq); 183 188 wake_up(&fp->wait); 184 189 } 185 190 186 - static void 187 - fs3270_deactivate(struct raw3270_view *view) 191 + static void fs3270_deactivate(struct raw3270_view *view) 188 192 { 189 193 struct fs3270 *fp; 190 194 191 - fp = (struct fs3270 *) view; 195 + fp = (struct fs3270 *)view; 192 196 fp->active = 0; 193 197 194 198 /* If an old init command is still running just return. 
*/ ··· 214 218 fp->init->callback(fp->init, NULL); 215 219 } 216 220 217 - static void 218 - fs3270_irq(struct fs3270 *fp, struct raw3270_request *rq, struct irb *irb) 221 + static void fs3270_irq(struct fs3270 *fp, struct raw3270_request *rq, 222 + struct irb *irb) 219 223 { 220 224 /* Handle ATTN. Set indication and wake waiters for attention. */ 221 225 if (irb->scsw.cmd.dstat & DEV_STAT_ATTENTION) { ··· 235 239 /* 236 240 * Process reads from fullscreen 3270. 237 241 */ 238 - static ssize_t 239 - fs3270_read(struct file *filp, char __user *data, size_t count, loff_t *off) 242 + static ssize_t fs3270_read(struct file *filp, char __user *data, 243 + size_t count, loff_t *off) 240 244 { 241 245 struct fs3270 *fp; 242 246 struct raw3270_request *rq; 243 247 struct idal_buffer *ib; 244 248 ssize_t rc; 245 - 249 + 246 250 if (count == 0 || count > 65535) 247 251 return -EINVAL; 248 252 fp = filp->private_data; ··· 267 271 rc = -EFAULT; 268 272 else 269 273 rc = count; 270 - 271 274 } 272 275 } 273 276 raw3270_request_free(rq); 274 - } else 277 + } else { 275 278 rc = PTR_ERR(rq); 279 + } 276 280 idal_buffer_free(ib); 277 281 return rc; 278 282 } ··· 280 284 /* 281 285 * Process writes to fullscreen 3270. 
282 286 */ 283 - static ssize_t 284 - fs3270_write(struct file *filp, const char __user *data, size_t count, loff_t *off) 287 + static ssize_t fs3270_write(struct file *filp, const char __user *data, 288 + size_t count, loff_t *off) 285 289 { 286 290 struct fs3270 *fp; 287 291 struct raw3270_request *rq; ··· 306 310 rc = fs3270_do_io(&fp->view, rq); 307 311 if (rc == 0) 308 312 rc = count - rq->rescnt; 309 - } else 313 + } else { 310 314 rc = -EFAULT; 315 + } 311 316 raw3270_request_free(rq); 312 - } else 317 + } else { 313 318 rc = PTR_ERR(rq); 319 + } 314 320 idal_buffer_free(ib); 315 321 return rc; 316 322 } ··· 320 322 /* 321 323 * process ioctl commands for the tube driver 322 324 */ 323 - static long 324 - fs3270_ioctl(struct file *filp, unsigned int cmd, unsigned long arg) 325 + static long fs3270_ioctl(struct file *filp, unsigned int cmd, unsigned long arg) 325 326 { 326 327 char __user *argp; 327 328 struct fs3270 *fp; ··· 367 370 /* 368 371 * Allocate fs3270 structure. 369 372 */ 370 - static struct fs3270 * 371 - fs3270_alloc_view(void) 373 + static struct fs3270 *fs3270_alloc_view(void) 372 374 { 373 375 struct fs3270 *fp; 374 376 375 - fp = kzalloc(sizeof(struct fs3270),GFP_KERNEL); 377 + fp = kzalloc(sizeof(*fp), GFP_KERNEL); 376 378 if (!fp) 377 379 return ERR_PTR(-ENOMEM); 378 380 fp->init = raw3270_request_alloc(0); ··· 385 389 /* 386 390 * Free fs3270 structure. 387 391 */ 388 - static void 389 - fs3270_free_view(struct raw3270_view *view) 392 + static void fs3270_free_view(struct raw3270_view *view) 390 393 { 391 394 struct fs3270 *fp; 392 395 393 - fp = (struct fs3270 *) view; 396 + fp = (struct fs3270 *)view; 394 397 if (fp->rdbuf) 395 398 idal_buffer_free(fp->rdbuf); 396 - raw3270_request_free(((struct fs3270 *) view)->init); 399 + raw3270_request_free(((struct fs3270 *)view)->init); 397 400 kfree(view); 398 401 } 399 402 400 403 /* 401 404 * Unlink fs3270 data structure from filp. 
402 405 */ 403 - static void 404 - fs3270_release(struct raw3270_view *view) 406 + static void fs3270_release(struct raw3270_view *view) 405 407 { 406 408 struct fs3270 *fp; 407 409 408 - fp = (struct fs3270 *) view; 410 + fp = (struct fs3270 *)view; 409 411 if (fp->fs_pid) 410 412 kill_pid(fp->fs_pid, SIGHUP, 1); 411 413 } ··· 412 418 static struct raw3270_fn fs3270_fn = { 413 419 .activate = fs3270_activate, 414 420 .deactivate = fs3270_deactivate, 415 - .intv = (void *) fs3270_irq, 421 + .intv = (void *)fs3270_irq, 416 422 .release = fs3270_release, 417 423 .free = fs3270_free_view 418 424 }; ··· 420 426 /* 421 427 * This routine is called whenever a 3270 fullscreen device is opened. 422 428 */ 423 - static int 424 - fs3270_open(struct inode *inode, struct file *filp) 429 + static int fs3270_open(struct inode *inode, struct file *filp) 425 430 { 426 431 struct fs3270 *fp; 427 432 struct idal_buffer *ib; ··· 432 439 /* Check for minor 0 multiplexer. */ 433 440 if (minor == 0) { 434 441 struct tty_struct *tty = get_current_tty(); 442 + 435 443 if (!tty || tty->driver->major != IBM_TTY3270_MAJOR) { 436 444 tty_kref_put(tty); 437 445 return -ENODEV; ··· 442 448 } 443 449 mutex_lock(&fs3270_mutex); 444 450 /* Check if some other program is already using fullscreen mode. */ 445 - fp = (struct fs3270 *) raw3270_find_view(&fs3270_fn, minor); 451 + fp = (struct fs3270 *)raw3270_find_view(&fs3270_fn, minor); 446 452 if (!IS_ERR(fp)) { 447 453 raw3270_put_view(&fp->view); 448 454 rc = -EBUSY; ··· 465 471 } 466 472 467 473 /* Allocate idal-buffer. */ 468 - ib = idal_buffer_alloc(2*fp->view.rows*fp->view.cols + 5, 0); 474 + ib = idal_buffer_alloc(2 * fp->view.rows * fp->view.cols + 5, 0); 469 475 if (IS_ERR(ib)) { 470 476 raw3270_put_view(&fp->view); 471 477 raw3270_del_view(&fp->view); ··· 491 497 * This routine is called when the 3270 tty is closed. We wait 492 498 * for the remaining request to be completed. Then we clean up. 
493 499 */ 494 - static int 495 - fs3270_close(struct inode *inode, struct file *filp) 500 + static int fs3270_close(struct inode *inode, struct file *filp) 496 501 { 497 502 struct fs3270 *fp; 498 503 ··· 531 538 __unregister_chrdev(IBM_FS3270_MAJOR, minor, 1, "tub"); 532 539 } 533 540 534 - static struct raw3270_notifier fs3270_notifier = 535 - { 541 + static struct raw3270_notifier fs3270_notifier = { 536 542 .create = fs3270_create_cb, 537 543 .destroy = fs3270_destroy_cb, 538 544 }; ··· 539 547 /* 540 548 * 3270 fullscreen driver initialization. 541 549 */ 542 - static int __init 543 - fs3270_init(void) 550 + static int __init fs3270_init(void) 544 551 { 545 552 int rc; 546 553 ··· 552 561 return 0; 553 562 } 554 563 555 - static void __exit 556 - fs3270_exit(void) 564 + static void __exit fs3270_exit(void) 557 565 { 558 566 raw3270_unregister_notifier(&fs3270_notifier); 559 567 device_destroy(class3270, MKDEV(IBM_FS3270_MAJOR, 0));
drivers/s390/char/raw3270.c (+205 -171)
··· 30 30 #include <linux/mutex.h> 31 31 32 32 struct class *class3270; 33 + EXPORT_SYMBOL(class3270); 33 34 34 35 /* The main 3270 data structure. */ 35 36 struct raw3270 { ··· 38 37 struct ccw_device *cdev; 39 38 int minor; 40 39 41 - short model, rows, cols; 40 + int model, rows, cols; 41 + int old_model, old_rows, old_cols; 42 42 unsigned int state; 43 43 unsigned long flags; 44 44 ··· 56 54 struct raw3270_request init_readpart; 57 55 struct raw3270_request init_readmod; 58 56 unsigned char init_data[256]; 57 + struct work_struct resize_work; 59 58 }; 60 59 61 60 /* raw3270->state */ ··· 92 89 * Wait queue for device init/delete, view delete. 93 90 */ 94 91 DECLARE_WAIT_QUEUE_HEAD(raw3270_wait_queue); 92 + EXPORT_SYMBOL(raw3270_wait_queue); 95 93 96 94 static void __raw3270_disconnect(struct raw3270 *rp); 97 95 ··· 115 111 return rp->state == RAW3270_STATE_READY; 116 112 } 117 113 118 - void 119 - raw3270_buffer_address(struct raw3270 *rp, char *cp, unsigned short addr) 114 + void raw3270_buffer_address(struct raw3270 *rp, char *cp, int x, int y) 120 115 { 116 + int addr; 117 + 118 + if (x < 0) 119 + x = max_t(int, 0, rp->view->cols + x); 120 + if (y < 0) 121 + y = max_t(int, 0, rp->view->rows + y); 122 + addr = (y * rp->view->cols) + x; 121 123 if (test_bit(RAW3270_FLAGS_14BITADDR, &rp->flags)) { 122 124 cp[0] = (addr >> 8) & 0x3f; 123 125 cp[1] = addr & 0xff; ··· 132 122 cp[1] = raw3270_ebcgraf[addr & 0x3f]; 133 123 } 134 124 } 125 + EXPORT_SYMBOL(raw3270_buffer_address); 135 126 136 127 /* 137 128 * Allocate a new 3270 ccw request 138 129 */ 139 - struct raw3270_request * 140 - raw3270_request_alloc(size_t size) 130 + struct raw3270_request *raw3270_request_alloc(size_t size) 141 131 { 142 132 struct raw3270_request *rq; 143 133 144 134 /* Allocate request structure */ 145 - rq = kzalloc(sizeof(struct raw3270_request), GFP_KERNEL | GFP_DMA); 135 + rq = kzalloc(sizeof(*rq), GFP_KERNEL | GFP_DMA); 146 136 if (!rq) 147 137 return ERR_PTR(-ENOMEM); 148 138 ··· 
165 155 166 156 return rq; 167 157 } 158 + EXPORT_SYMBOL(raw3270_request_alloc); 168 159 169 160 /* 170 161 * Free 3270 ccw request 171 162 */ 172 - void 173 - raw3270_request_free (struct raw3270_request *rq) 163 + void raw3270_request_free(struct raw3270_request *rq) 174 164 { 175 165 kfree(rq->buffer); 176 166 kfree(rq); 177 167 } 168 + EXPORT_SYMBOL(raw3270_request_free); 178 169 179 170 /* 180 171 * Reset request to initial state. 181 172 */ 182 - void 183 - raw3270_request_reset(struct raw3270_request *rq) 173 + int raw3270_request_reset(struct raw3270_request *rq) 184 174 { 185 - BUG_ON(!list_empty(&rq->list)); 175 + if (WARN_ON_ONCE(!list_empty(&rq->list))) 176 + return -EBUSY; 186 177 rq->ccw.cmd_code = 0; 187 178 rq->ccw.count = 0; 188 179 rq->ccw.cda = __pa(rq->buffer); 189 180 rq->ccw.flags = CCW_FLAG_SLI; 190 181 rq->rescnt = 0; 191 182 rq->rc = 0; 183 + return 0; 192 184 } 185 + EXPORT_SYMBOL(raw3270_request_reset); 193 186 194 187 /* 195 188 * Set command code to ccw of a request. 196 189 */ 197 - void 198 - raw3270_request_set_cmd(struct raw3270_request *rq, u8 cmd) 190 + void raw3270_request_set_cmd(struct raw3270_request *rq, u8 cmd) 199 191 { 200 192 rq->ccw.cmd_code = cmd; 201 193 } 194 + EXPORT_SYMBOL(raw3270_request_set_cmd); 202 195 203 196 /* 204 197 * Add data fragment to output buffer. 205 198 */ 206 - int 207 - raw3270_request_add_data(struct raw3270_request *rq, void *data, size_t size) 199 + int raw3270_request_add_data(struct raw3270_request *rq, void *data, size_t size) 208 200 { 209 201 if (size + rq->ccw.count > rq->size) 210 202 return -E2BIG; ··· 214 202 rq->ccw.count += size; 215 203 return 0; 216 204 } 205 + EXPORT_SYMBOL(raw3270_request_add_data); 217 206 218 207 /* 219 208 * Set address/length pair to ccw of a request. 
220 209 */ 221 - void 222 - raw3270_request_set_data(struct raw3270_request *rq, void *data, size_t size) 210 + void raw3270_request_set_data(struct raw3270_request *rq, void *data, size_t size) 223 211 { 224 212 rq->ccw.cda = __pa(data); 225 213 rq->ccw.count = size; 226 214 } 215 + EXPORT_SYMBOL(raw3270_request_set_data); 227 216 228 217 /* 229 218 * Set idal buffer to ccw of a request. 230 219 */ 231 - void 232 - raw3270_request_set_idal(struct raw3270_request *rq, struct idal_buffer *ib) 220 + void raw3270_request_set_idal(struct raw3270_request *rq, struct idal_buffer *ib) 233 221 { 234 222 rq->ccw.cda = __pa(ib->data); 235 223 rq->ccw.count = ib->size; 236 224 rq->ccw.flags |= CCW_FLAG_IDA; 237 225 } 226 + EXPORT_SYMBOL(raw3270_request_set_idal); 238 227 239 228 /* 240 229 * Add the request to the request queue, try to start it if the 241 230 * 3270 device is idle. Return without waiting for end of i/o. 242 231 */ 243 - static int 244 - __raw3270_start(struct raw3270 *rp, struct raw3270_view *view, 245 - struct raw3270_request *rq) 232 + static int __raw3270_start(struct raw3270 *rp, struct raw3270_view *view, 233 + struct raw3270_request *rq) 246 234 { 247 235 rq->view = view; 248 236 raw3270_get_view(view); ··· 250 238 !test_bit(RAW3270_FLAGS_BUSY, &rp->flags)) { 251 239 /* No other requests are on the queue. Start this one. 
*/ 252 240 rq->rc = ccw_device_start(rp->cdev, &rq->ccw, 253 - (unsigned long) rq, 0, 0); 241 + (unsigned long)rq, 0, 0); 254 242 if (rq->rc) { 255 243 raw3270_put_view(view); 256 244 return rq->rc; ··· 260 248 return 0; 261 249 } 262 250 263 - int 264 - raw3270_view_active(struct raw3270_view *view) 251 + int raw3270_view_active(struct raw3270_view *view) 265 252 { 266 253 struct raw3270 *rp = view->dev; 267 254 268 255 return rp && rp->view == view; 269 256 } 270 257 271 - int 272 - raw3270_start(struct raw3270_view *view, struct raw3270_request *rq) 258 + int raw3270_start(struct raw3270_view *view, struct raw3270_request *rq) 273 259 { 274 260 unsigned long flags; 275 261 struct raw3270 *rp; ··· 284 274 spin_unlock_irqrestore(get_ccwdev_lock(view->dev->cdev), flags); 285 275 return rc; 286 276 } 277 + EXPORT_SYMBOL(raw3270_start); 287 278 288 - int 289 - raw3270_start_locked(struct raw3270_view *view, struct raw3270_request *rq) 279 + int raw3270_start_request(struct raw3270_view *view, struct raw3270_request *rq, 280 + int cmd, void *data, size_t len) 281 + { 282 + int rc; 283 + 284 + rc = raw3270_request_reset(rq); 285 + if (rc) 286 + return rc; 287 + raw3270_request_set_cmd(rq, cmd); 288 + rc = raw3270_request_add_data(rq, data, len); 289 + if (rc) 290 + return rc; 291 + return raw3270_start(view, rq); 292 + } 293 + EXPORT_SYMBOL(raw3270_start_request); 294 + 295 + int raw3270_start_locked(struct raw3270_view *view, struct raw3270_request *rq) 290 296 { 291 297 struct raw3270 *rp; 292 298 int rc; ··· 316 290 rc = __raw3270_start(rp, view, rq); 317 291 return rc; 318 292 } 293 + EXPORT_SYMBOL(raw3270_start_locked); 319 294 320 - int 321 - raw3270_start_irq(struct raw3270_view *view, struct raw3270_request *rq) 295 + int raw3270_start_irq(struct raw3270_view *view, struct raw3270_request *rq) 322 296 { 323 297 struct raw3270 *rp; 324 298 ··· 328 302 list_add_tail(&rq->list, &rp->req_queue); 329 303 return 0; 330 304 } 305 + EXPORT_SYMBOL(raw3270_start_irq); 
331 306 332 307 /* 333 308 * 3270 interrupt routine, called from the ccw_device layer 334 309 */ 335 - static void 336 - raw3270_irq (struct ccw_device *cdev, unsigned long intparm, struct irb *irb) 310 + static void raw3270_irq(struct ccw_device *cdev, unsigned long intparm, struct irb *irb) 337 311 { 338 312 struct raw3270 *rp; 339 313 struct raw3270_view *view; ··· 342 316 rp = dev_get_drvdata(&cdev->dev); 343 317 if (!rp) 344 318 return; 345 - rq = (struct raw3270_request *) intparm; 319 + rq = (struct raw3270_request *)intparm; 346 320 view = rq ? rq->view : rp->view; 347 321 348 322 if (!IS_ERR(irb)) { ··· 383 357 * started successful. 384 358 */ 385 359 while (!list_empty(&rp->req_queue)) { 386 - rq = list_entry(rp->req_queue.next,struct raw3270_request,list); 360 + rq = list_entry(rp->req_queue.next, struct raw3270_request, list); 387 361 rq->rc = ccw_device_start(rp->cdev, &rq->ccw, 388 - (unsigned long) rq, 0, 0); 362 + (unsigned long)rq, 0, 0); 389 363 if (rq->rc == 0) 390 364 break; 391 365 /* Start failed. Remove request and do callback. 
*/ ··· 425 399 char ymin; 426 400 char xmax; 427 401 char ymax; 428 - } __attribute__ ((packed)) uab; 402 + } __packed uab; 429 403 struct { /* Alternate Usable Area Self-Defining Parameter */ 430 404 char l; /* Length of this Self-Defining Parm */ 431 405 char sdpid; /* 0x02 if Alternate Usable Area */ ··· 438 412 int auayr; 439 413 char awauai; 440 414 char ahauai; 441 - } __attribute__ ((packed)) aua; 442 - } __attribute__ ((packed)); 415 + } __packed aua; 416 + } __packed; 443 417 444 - static void 445 - raw3270_size_device_vm(struct raw3270 *rp) 418 + static void raw3270_size_device_vm(struct raw3270 *rp) 446 419 { 447 420 int rc, model; 448 421 struct ccw_dev_id dev_id; 449 422 struct diag210 diag_data; 423 + struct diag8c diag8c_data; 450 424 451 425 ccw_device_get_id(rp->cdev, &dev_id); 426 + rc = diag8c(&diag8c_data, &dev_id); 427 + if (!rc) { 428 + rp->model = 2; 429 + rp->rows = diag8c_data.height; 430 + rp->cols = diag8c_data.width; 431 + if (diag8c_data.flags & 1) 432 + set_bit(RAW3270_FLAGS_14BITADDR, &rp->flags); 433 + return; 434 + } 435 + 452 436 diag_data.vrdcdvno = dev_id.devno; 453 437 diag_data.vrdclen = sizeof(struct diag210); 454 438 rc = diag210(&diag_data); ··· 490 454 } 491 455 } 492 456 493 - static void 494 - raw3270_size_device(struct raw3270 *rp) 457 + static void raw3270_size_device(struct raw3270 *rp, char *init_data) 495 458 { 496 459 struct raw3270_ua *uap; 497 460 498 461 /* Got a Query Reply */ 499 - uap = (struct raw3270_ua *) (rp->init_data + 1); 462 + uap = (struct raw3270_ua *)(init_data + 1); 500 463 /* Paranoia check. */ 501 - if (rp->init_readmod.rc || rp->init_data[0] != 0x88 || 502 - uap->uab.qcode != 0x81) { 464 + if (init_data[0] != 0x88 || uap->uab.qcode != 0x81) { 503 465 /* Couldn't detect size. Use default model 2. 
*/ 504 466 rp->model = 2; 505 467 rp->rows = 24; ··· 528 494 rp->model = 5; 529 495 } 530 496 531 - static void 532 - raw3270_size_device_done(struct raw3270 *rp) 497 + static void raw3270_resize_work(struct work_struct *work) 533 498 { 499 + struct raw3270 *rp = container_of(work, struct raw3270, resize_work); 534 500 struct raw3270_view *view; 535 501 536 - rp->view = NULL; 537 - rp->state = RAW3270_STATE_READY; 538 502 /* Notify views about new size */ 539 - list_for_each_entry(view, &rp->view_list, list) 503 + list_for_each_entry(view, &rp->view_list, list) { 540 504 if (view->fn->resize) 541 - view->fn->resize(view, rp->model, rp->rows, rp->cols); 505 + view->fn->resize(view, rp->model, rp->rows, rp->cols, 506 + rp->old_model, rp->old_rows, rp->old_cols); 507 + } 508 + rp->old_cols = rp->cols; 509 + rp->old_rows = rp->rows; 510 + rp->old_model = rp->model; 542 511 /* Setup processing done, now activate a view */ 543 512 list_for_each_entry(view, &rp->view_list, list) { 544 513 rp->view = view; ··· 551 514 } 552 515 } 553 516 554 - static void 555 - raw3270_read_modified_cb(struct raw3270_request *rq, void *data) 517 + static void raw3270_size_device_done(struct raw3270 *rp) 518 + { 519 + rp->view = NULL; 520 + rp->state = RAW3270_STATE_READY; 521 + schedule_work(&rp->resize_work); 522 + } 523 + 524 + void raw3270_read_modified_cb(struct raw3270_request *rq, void *data) 556 525 { 557 526 struct raw3270 *rp = rq->view->dev; 558 527 559 - raw3270_size_device(rp); 528 + raw3270_size_device(rp, data); 560 529 raw3270_size_device_done(rp); 561 530 } 531 + EXPORT_SYMBOL(raw3270_read_modified_cb); 562 532 563 - static void 564 - raw3270_read_modified(struct raw3270 *rp) 533 + static void raw3270_read_modified(struct raw3270 *rp) 565 534 { 566 535 if (rp->state != RAW3270_STATE_W4ATTN) 567 536 return; ··· 577 534 rp->init_readmod.ccw.cmd_code = TC_READMOD; 578 535 rp->init_readmod.ccw.flags = CCW_FLAG_SLI; 579 536 rp->init_readmod.ccw.count = sizeof(rp->init_data); 580 
- rp->init_readmod.ccw.cda = (__u32) __pa(rp->init_data); 537 + rp->init_readmod.ccw.cda = (__u32)__pa(rp->init_data); 581 538 rp->init_readmod.callback = raw3270_read_modified_cb; 539 + rp->init_readmod.callback_data = rp->init_data; 582 540 rp->state = RAW3270_STATE_READMOD; 583 541 raw3270_start_irq(&rp->init_view, &rp->init_readmod); 584 542 } 585 543 586 - static void 587 - raw3270_writesf_readpart(struct raw3270 *rp) 544 + static void raw3270_writesf_readpart(struct raw3270 *rp) 588 545 { 589 - static const unsigned char wbuf[] = 590 - { 0x00, 0x07, 0x01, 0xff, 0x03, 0x00, 0x81 }; 546 + static const unsigned char wbuf[] = { 547 + 0x00, 0x07, 0x01, 0xff, 0x03, 0x00, 0x81 548 + }; 591 549 592 550 /* Store 'read partition' data stream to init_data */ 593 551 memset(&rp->init_readpart, 0, sizeof(rp->init_readpart)); ··· 597 553 rp->init_readpart.ccw.cmd_code = TC_WRITESF; 598 554 rp->init_readpart.ccw.flags = CCW_FLAG_SLI; 599 555 rp->init_readpart.ccw.count = sizeof(wbuf); 600 - rp->init_readpart.ccw.cda = (__u32) __pa(&rp->init_data); 556 + rp->init_readpart.ccw.cda = (__u32)__pa(&rp->init_data); 601 557 rp->state = RAW3270_STATE_W4ATTN; 602 558 raw3270_start_irq(&rp->init_view, &rp->init_readpart); 603 559 } ··· 605 561 /* 606 562 * Device reset 607 563 */ 608 - static void 609 - raw3270_reset_device_cb(struct raw3270_request *rq, void *data) 564 + static void raw3270_reset_device_cb(struct raw3270_request *rq, void *data) 610 565 { 611 566 struct raw3270 *rp = rq->view->dev; 612 567 ··· 617 574 } else if (MACHINE_IS_VM) { 618 575 raw3270_size_device_vm(rp); 619 576 raw3270_size_device_done(rp); 620 - } else 577 + } else { 621 578 raw3270_writesf_readpart(rp); 579 + } 622 580 memset(&rp->init_reset, 0, sizeof(rp->init_reset)); 623 581 } 624 582 625 - static int 626 - __raw3270_reset_device(struct raw3270 *rp) 583 + static int __raw3270_reset_device(struct raw3270 *rp) 627 584 { 628 585 int rc; 629 586 ··· 635 592 rp->init_reset.ccw.cmd_code = TC_EWRITEA; 636 
593 rp->init_reset.ccw.flags = CCW_FLAG_SLI; 637 594 rp->init_reset.ccw.count = 1; 638 - rp->init_reset.ccw.cda = (__u32) __pa(rp->init_data); 595 + rp->init_reset.ccw.cda = (__u32)__pa(rp->init_data); 639 596 rp->init_reset.callback = raw3270_reset_device_cb; 640 597 rc = __raw3270_start(rp, &rp->init_view, &rp->init_reset); 641 598 if (rc == 0 && rp->state == RAW3270_STATE_INIT) ··· 643 600 return rc; 644 601 } 645 602 646 - static int 647 - raw3270_reset_device(struct raw3270 *rp) 603 + static int raw3270_reset_device(struct raw3270 *rp) 648 604 { 649 605 unsigned long flags; 650 606 int rc; ··· 654 612 return rc; 655 613 } 656 614 657 - int 658 - raw3270_reset(struct raw3270_view *view) 615 + int raw3270_reset(struct raw3270_view *view) 659 616 { 660 617 struct raw3270 *rp; 661 618 int rc; ··· 668 627 rc = raw3270_reset_device(view->dev); 669 628 return rc; 670 629 } 630 + EXPORT_SYMBOL(raw3270_reset); 671 631 672 - static void 673 - __raw3270_disconnect(struct raw3270 *rp) 632 + static void __raw3270_disconnect(struct raw3270 *rp) 674 633 { 675 634 struct raw3270_request *rq; 676 635 struct raw3270_view *view; ··· 679 638 rp->view = &rp->init_view; 680 639 /* Cancel all queued requests */ 681 640 while (!list_empty(&rp->req_queue)) { 682 - rq = list_entry(rp->req_queue.next,struct raw3270_request,list); 641 + rq = list_entry(rp->req_queue.next, struct raw3270_request, list); 683 642 view = rq->view; 684 643 rq->rc = -EACCES; 685 644 list_del_init(&rq->list); ··· 691 650 __raw3270_reset_device(rp); 692 651 } 693 652 694 - static void 695 - raw3270_init_irq(struct raw3270_view *view, struct raw3270_request *rq, 696 - struct irb *irb) 653 + static void raw3270_init_irq(struct raw3270_view *view, struct raw3270_request *rq, 654 + struct irb *irb) 697 655 { 698 656 struct raw3270 *rp; 699 657 ··· 718 678 /* 719 679 * Setup new 3270 device. 
720 680 */ 721 - static int 722 - raw3270_setup_device(struct ccw_device *cdev, struct raw3270 *rp, char *ascebc) 681 + static int raw3270_setup_device(struct ccw_device *cdev, struct raw3270 *rp, 682 + char *ascebc) 723 683 { 724 684 struct list_head *l; 725 685 struct raw3270 *tmp; ··· 739 699 /* Set defaults. */ 740 700 rp->rows = 24; 741 701 rp->cols = 80; 702 + rp->old_rows = rp->rows; 703 + rp->old_cols = rp->cols; 742 704 743 705 INIT_LIST_HEAD(&rp->req_queue); 744 706 INIT_LIST_HEAD(&rp->view_list); ··· 748 706 rp->init_view.dev = rp; 749 707 rp->init_view.fn = &raw3270_init_fn; 750 708 rp->view = &rp->init_view; 709 + INIT_WORK(&rp->resize_work, raw3270_resize_work); 751 710 752 711 /* 753 712 * Add device to list and find the smallest unused minor ··· 807 764 if (IS_ERR(cdev)) 808 765 return ERR_CAST(cdev); 809 766 810 - rp = kzalloc(sizeof(struct raw3270), GFP_KERNEL | GFP_DMA); 767 + rp = kzalloc(sizeof(*rp), GFP_KERNEL | GFP_DMA); 811 768 ascebc = kzalloc(256, GFP_KERNEL); 812 769 rc = raw3270_setup_device(cdev, rp, ascebc); 813 770 if (rc) ··· 832 789 return rp; 833 790 } 834 791 835 - void 836 - raw3270_wait_cons_dev(struct raw3270 *rp) 792 + void raw3270_wait_cons_dev(struct raw3270 *rp) 837 793 { 838 794 unsigned long flags; 839 795 ··· 846 804 /* 847 805 * Create a 3270 device structure. 
848 806 */ 849 - static struct raw3270 * 850 - raw3270_create_device(struct ccw_device *cdev) 807 + static struct raw3270 *raw3270_create_device(struct ccw_device *cdev) 851 808 { 852 809 struct raw3270 *rp; 853 810 char *ascebc; 854 811 int rc; 855 812 856 - rp = kzalloc(sizeof(struct raw3270), GFP_KERNEL | GFP_DMA); 813 + rp = kzalloc(sizeof(*rp), GFP_KERNEL | GFP_DMA); 857 814 if (!rp) 858 815 return ERR_PTR(-ENOMEM); 859 816 ascebc = kmalloc(256, GFP_KERNEL); ··· 886 845 return 0; 887 846 } 888 847 848 + static int raw3270_assign_activate_view(struct raw3270 *rp, struct raw3270_view *view) 849 + { 850 + rp->view = view; 851 + return view->fn->activate(view); 852 + } 853 + 854 + static int __raw3270_activate_view(struct raw3270 *rp, struct raw3270_view *view) 855 + { 856 + struct raw3270_view *oldview = NULL, *nv; 857 + int rc; 858 + 859 + if (rp->view == view) 860 + return 0; 861 + 862 + if (!raw3270_state_ready(rp)) 863 + return -EBUSY; 864 + 865 + if (rp->view && rp->view->fn->deactivate) { 866 + oldview = rp->view; 867 + oldview->fn->deactivate(oldview); 868 + } 869 + 870 + rc = raw3270_assign_activate_view(rp, view); 871 + if (!rc) 872 + return 0; 873 + 874 + /* Didn't work. Try to reactivate the old view. */ 875 + if (oldview) { 876 + rc = raw3270_assign_activate_view(rp, oldview); 877 + if (!rc) 878 + return 0; 879 + } 880 + 881 + /* Didn't work as well. Try any other view. */ 882 + list_for_each_entry(nv, &rp->view_list, list) { 883 + if (nv == view || nv == oldview) 884 + continue; 885 + rc = raw3270_assign_activate_view(rp, nv); 886 + if (!rc) 887 + break; 888 + rp->view = NULL; 889 + } 890 + return rc; 891 + } 892 + 889 893 /* 890 894 * Activate a view. 
891 895 */ 892 - int 893 - raw3270_activate_view(struct raw3270_view *view) 896 + int raw3270_activate_view(struct raw3270_view *view) 894 897 { 895 898 struct raw3270 *rp; 896 - struct raw3270_view *oldview, *nv; 897 899 unsigned long flags; 898 900 int rc; 899 901 ··· 944 860 if (!rp) 945 861 return -ENODEV; 946 862 spin_lock_irqsave(get_ccwdev_lock(rp->cdev), flags); 947 - if (rp->view == view) 948 - rc = 0; 949 - else if (!raw3270_state_ready(rp)) 950 - rc = -EBUSY; 951 - else { 952 - oldview = NULL; 953 - if (rp->view && rp->view->fn->deactivate) { 954 - oldview = rp->view; 955 - oldview->fn->deactivate(oldview); 956 - } 957 - rp->view = view; 958 - rc = view->fn->activate(view); 959 - if (rc) { 960 - /* Didn't work. Try to reactivate the old view. */ 961 - rp->view = oldview; 962 - if (!oldview || oldview->fn->activate(oldview) != 0) { 963 - /* Didn't work as well. Try any other view. */ 964 - list_for_each_entry(nv, &rp->view_list, list) 965 - if (nv != view && nv != oldview) { 966 - rp->view = nv; 967 - if (nv->fn->activate(nv) == 0) 968 - break; 969 - rp->view = NULL; 970 - } 971 - } 972 - } 973 - } 863 + rc = __raw3270_activate_view(rp, view); 974 864 spin_unlock_irqrestore(get_ccwdev_lock(rp->cdev), flags); 975 865 return rc; 976 866 } 867 + EXPORT_SYMBOL(raw3270_activate_view); 977 868 978 869 /* 979 870 * Deactivate current view. 980 871 */ 981 - void 982 - raw3270_deactivate_view(struct raw3270_view *view) 872 + void raw3270_deactivate_view(struct raw3270_view *view) 983 873 { 984 874 unsigned long flags; 985 875 struct raw3270 *rp; ··· 980 922 } 981 923 spin_unlock_irqrestore(get_ccwdev_lock(rp->cdev), flags); 982 924 } 925 + EXPORT_SYMBOL(raw3270_deactivate_view); 983 926 984 927 /* 985 928 * Add view to device with minor "minor". 
986 929 */ 987 - int 988 - raw3270_add_view(struct raw3270_view *view, struct raw3270_fn *fn, int minor, int subclass) 930 + int raw3270_add_view(struct raw3270_view *view, struct raw3270_fn *fn, 931 + int minor, int subclass) 989 932 { 990 933 unsigned long flags; 991 934 struct raw3270 *rp; ··· 1017 958 mutex_unlock(&raw3270_mutex); 1018 959 return rc; 1019 960 } 961 + EXPORT_SYMBOL(raw3270_add_view); 1020 962 1021 963 /* 1022 964 * Find specific view of device with minor "minor". 1023 965 */ 1024 - struct raw3270_view * 1025 - raw3270_find_view(struct raw3270_fn *fn, int minor) 966 + struct raw3270_view *raw3270_find_view(struct raw3270_fn *fn, int minor) 1026 967 { 1027 968 struct raw3270 *rp; 1028 969 struct raw3270_view *view, *tmp; ··· 1047 988 mutex_unlock(&raw3270_mutex); 1048 989 return view; 1049 990 } 991 + EXPORT_SYMBOL(raw3270_find_view); 1050 992 1051 993 /* 1052 994 * Remove view from device and free view structure via call to view->fn->free. 1053 995 */ 1054 - void 1055 - raw3270_del_view(struct raw3270_view *view) 996 + void raw3270_del_view(struct raw3270_view *view) 1056 997 { 1057 998 unsigned long flags; 1058 999 struct raw3270 *rp; ··· 1081 1022 if (view->fn->free) 1082 1023 view->fn->free(view); 1083 1024 } 1025 + EXPORT_SYMBOL(raw3270_del_view); 1084 1026 1085 1027 /* 1086 1028 * Remove a 3270 device structure. 
1087 1029 */ 1088 - static void 1089 - raw3270_delete_device(struct raw3270 *rp) 1030 + static void raw3270_delete_device(struct raw3270 *rp) 1090 1031 { 1091 1032 struct ccw_device *cdev; 1092 1033 ··· 1109 1050 kfree(rp); 1110 1051 } 1111 1052 1112 - static int 1113 - raw3270_probe (struct ccw_device *cdev) 1053 + static int raw3270_probe(struct ccw_device *cdev) 1114 1054 { 1115 1055 return 0; 1116 1056 } ··· 1117 1059 /* 1118 1060 * Additional attributes for a 3270 device 1119 1061 */ 1120 - static ssize_t 1121 - raw3270_model_show(struct device *dev, struct device_attribute *attr, char *buf) 1062 + static ssize_t model_show(struct device *dev, struct device_attribute *attr, 1063 + char *buf) 1122 1064 { 1123 1065 return sysfs_emit(buf, "%i\n", 1124 1066 ((struct raw3270 *)dev_get_drvdata(dev))->model); 1125 1067 } 1126 - static DEVICE_ATTR(model, 0444, raw3270_model_show, NULL); 1068 + static DEVICE_ATTR_RO(model); 1127 1069 1128 - static ssize_t 1129 - raw3270_rows_show(struct device *dev, struct device_attribute *attr, char *buf) 1070 + static ssize_t rows_show(struct device *dev, struct device_attribute *attr, 1071 + char *buf) 1130 1072 { 1131 1073 return sysfs_emit(buf, "%i\n", 1132 1074 ((struct raw3270 *)dev_get_drvdata(dev))->rows); 1133 1075 } 1134 - static DEVICE_ATTR(rows, 0444, raw3270_rows_show, NULL); 1076 + static DEVICE_ATTR_RO(rows); 1135 1077 1136 1078 static ssize_t 1137 - raw3270_columns_show(struct device *dev, struct device_attribute *attr, char *buf) 1079 + columns_show(struct device *dev, struct device_attribute *attr, 1080 + char *buf) 1138 1081 { 1139 1082 return sysfs_emit(buf, "%i\n", 1140 1083 ((struct raw3270 *)dev_get_drvdata(dev))->cols); 1141 1084 } 1142 - static DEVICE_ATTR(columns, 0444, raw3270_columns_show, NULL); 1085 + static DEVICE_ATTR_RO(columns); 1143 1086 1144 - static struct attribute * raw3270_attrs[] = { 1087 + static struct attribute *raw3270_attrs[] = { 1145 1088 &dev_attr_model.attr, 1146 1089 
&dev_attr_rows.attr, 1147 1090 &dev_attr_columns.attr, ··· 1174 1115 mutex_unlock(&raw3270_mutex); 1175 1116 return 0; 1176 1117 } 1118 + EXPORT_SYMBOL(raw3270_register_notifier); 1177 1119 1178 1120 void raw3270_unregister_notifier(struct raw3270_notifier *notifier) 1179 1121 { ··· 1186 1126 list_del(&notifier->list); 1187 1127 mutex_unlock(&raw3270_mutex); 1188 1128 } 1129 + EXPORT_SYMBOL(raw3270_unregister_notifier); 1189 1130 1190 1131 /* 1191 1132 * Set 3270 device online. 1192 1133 */ 1193 - static int 1194 - raw3270_set_online (struct ccw_device *cdev) 1134 + static int raw3270_set_online(struct ccw_device *cdev) 1195 1135 { 1196 1136 struct raw3270_notifier *np; 1197 1137 struct raw3270 *rp; ··· 1218 1158 /* 1219 1159 * Remove 3270 device structure. 1220 1160 */ 1221 - static void 1222 - raw3270_remove (struct ccw_device *cdev) 1161 + static void raw3270_remove(struct ccw_device *cdev) 1223 1162 { 1224 1163 unsigned long flags; 1225 1164 struct raw3270 *rp; ··· 1232 1173 * devices even if they haven't been varied online. 1233 1174 * Thus, rp may validly be NULL here. 1234 1175 */ 1235 - if (rp == NULL) 1176 + if (!rp) 1236 1177 return; 1237 1178 1238 1179 sysfs_remove_group(&cdev->dev.kobj, &raw3270_attr_group); ··· 1268 1209 /* 1269 1210 * Set 3270 device offline. 
1270 1211 */ 1271 - static int 1272 - raw3270_set_offline (struct ccw_device *cdev) 1212 + static int raw3270_set_offline(struct ccw_device *cdev) 1273 1213 { 1274 1214 struct raw3270 *rp; 1275 1215 ··· 1307 1249 .int_class = IRQIO_C70, 1308 1250 }; 1309 1251 1310 - static int 1311 - raw3270_init(void) 1252 + static int raw3270_init(void) 1312 1253 { 1313 1254 struct raw3270 *rp; 1314 1255 int rc; ··· 1329 1272 return rc; 1330 1273 } 1331 1274 1332 - static void 1333 - raw3270_exit(void) 1275 + static void raw3270_exit(void) 1334 1276 { 1335 1277 ccw_driver_unregister(&raw3270_ccw_driver); 1336 1278 class_destroy(class3270); ··· 1339 1283 1340 1284 module_init(raw3270_init); 1341 1285 module_exit(raw3270_exit); 1342 - 1343 - EXPORT_SYMBOL(class3270); 1344 - EXPORT_SYMBOL(raw3270_request_alloc); 1345 - EXPORT_SYMBOL(raw3270_request_free); 1346 - EXPORT_SYMBOL(raw3270_request_reset); 1347 - EXPORT_SYMBOL(raw3270_request_set_cmd); 1348 - EXPORT_SYMBOL(raw3270_request_add_data); 1349 - EXPORT_SYMBOL(raw3270_request_set_data); 1350 - EXPORT_SYMBOL(raw3270_request_set_idal); 1351 - EXPORT_SYMBOL(raw3270_buffer_address); 1352 - EXPORT_SYMBOL(raw3270_add_view); 1353 - EXPORT_SYMBOL(raw3270_del_view); 1354 - EXPORT_SYMBOL(raw3270_find_view); 1355 - EXPORT_SYMBOL(raw3270_activate_view); 1356 - EXPORT_SYMBOL(raw3270_deactivate_view); 1357 - EXPORT_SYMBOL(raw3270_start); 1358 - EXPORT_SYMBOL(raw3270_start_locked); 1359 - EXPORT_SYMBOL(raw3270_start_irq); 1360 - EXPORT_SYMBOL(raw3270_reset); 1361 - EXPORT_SYMBOL(raw3270_register_notifier); 1362 - EXPORT_SYMBOL(raw3270_unregister_notifier); 1363 - EXPORT_SYMBOL(raw3270_wait_queue);
drivers/s390/char/raw3270.h  +36 -191
··· 8 8 * Copyright IBM Corp. 2003, 2009 9 9 */ 10 10 11 + #include <uapi/asm/raw3270.h> 11 12 #include <asm/idals.h> 12 13 #include <asm/ioctl.h> 13 - 14 - /* ioctls for fullscreen 3270 */ 15 - #define TUBICMD _IO('3', 3) /* set ccw command for fs reads. */ 16 - #define TUBOCMD _IO('3', 4) /* set ccw command for fs writes. */ 17 - #define TUBGETI _IO('3', 7) /* get ccw command for fs reads. */ 18 - #define TUBGETO _IO('3', 8) /* get ccw command for fs writes. */ 19 - #define TUBSETMOD _IO('3',12) /* FIXME: what does it do ?*/ 20 - #define TUBGETMOD _IO('3',13) /* FIXME: what does it do ?*/ 21 - 22 - /* Local Channel Commands */ 23 - #define TC_WRITE 0x01 /* Write */ 24 - #define TC_RDBUF 0x02 /* Read Buffer */ 25 - #define TC_EWRITE 0x05 /* Erase write */ 26 - #define TC_READMOD 0x06 /* Read modified */ 27 - #define TC_EWRITEA 0x0d /* Erase write alternate */ 28 - #define TC_WRITESF 0x11 /* Write structured field */ 29 - 30 - /* Buffer Control Orders */ 31 - #define TO_SF 0x1d /* Start field */ 32 - #define TO_SBA 0x11 /* Set buffer address */ 33 - #define TO_IC 0x13 /* Insert cursor */ 34 - #define TO_PT 0x05 /* Program tab */ 35 - #define TO_RA 0x3c /* Repeat to address */ 36 - #define TO_SFE 0x29 /* Start field extended */ 37 - #define TO_EUA 0x12 /* Erase unprotected to address */ 38 - #define TO_MF 0x2c /* Modify field */ 39 - #define TO_SA 0x28 /* Set attribute */ 40 - 41 - /* Field Attribute Bytes */ 42 - #define TF_INPUT 0x40 /* Visible input */ 43 - #define TF_INPUTN 0x4c /* Invisible input */ 44 - #define TF_INMDT 0xc1 /* Visible, Set-MDT */ 45 - #define TF_LOG 0x60 46 - 47 - /* Character Attribute Bytes */ 48 - #define TAT_RESET 0x00 49 - #define TAT_FIELD 0xc0 50 - #define TAT_EXTHI 0x41 51 - #define TAT_COLOR 0x42 52 - #define TAT_CHARS 0x43 53 - #define TAT_TRANS 0x46 54 - 55 - /* Extended-Highlighting Bytes */ 56 - #define TAX_RESET 0x00 57 - #define TAX_BLINK 0xf1 58 - #define TAX_REVER 0xf2 59 - #define TAX_UNDER 0xf4 60 - 61 - /* Reset value */ 
62 - #define TAR_RESET 0x00 63 - 64 - /* Color values */ 65 - #define TAC_RESET 0x00 66 - #define TAC_BLUE 0xf1 67 - #define TAC_RED 0xf2 68 - #define TAC_PINK 0xf3 69 - #define TAC_GREEN 0xf4 70 - #define TAC_TURQ 0xf5 71 - #define TAC_YELLOW 0xf6 72 - #define TAC_WHITE 0xf7 73 - #define TAC_DEFAULT 0x00 74 - 75 - /* Write Control Characters */ 76 - #define TW_NONE 0x40 /* No particular action */ 77 - #define TW_KR 0xc2 /* Keyboard restore */ 78 - #define TW_PLUSALARM 0x04 /* Add this bit for alarm */ 79 - 80 - #define RAW3270_FIRSTMINOR 1 /* First minor number */ 81 - #define RAW3270_MAXDEVS 255 /* Max number of 3270 devices */ 82 - 83 - /* For TUBGETMOD and TUBSETMOD. Should include. */ 84 - struct raw3270_iocb { 85 - short model; 86 - short line_cnt; 87 - short col_cnt; 88 - short pf_cnt; 89 - short re_cnt; 90 - short map; 91 - }; 92 14 93 15 struct raw3270; 94 16 struct raw3270_view; ··· 27 105 int rc; /* return code for this request. */ 28 106 29 107 /* Callback for delivering final status. 
*/ 30 - void (*callback)(struct raw3270_request *, void *); 108 + void (*callback)(struct raw3270_request *rq, void *data); 31 109 void *callback_data; 32 110 }; 33 111 34 112 struct raw3270_request *raw3270_request_alloc(size_t size); 35 - void raw3270_request_free(struct raw3270_request *); 36 - void raw3270_request_reset(struct raw3270_request *); 37 - void raw3270_request_set_cmd(struct raw3270_request *, u8 cmd); 38 - int raw3270_request_add_data(struct raw3270_request *, void *, size_t); 39 - void raw3270_request_set_data(struct raw3270_request *, void *, size_t); 40 - void raw3270_request_set_idal(struct raw3270_request *, struct idal_buffer *); 113 + void raw3270_request_free(struct raw3270_request *rq); 114 + int raw3270_request_reset(struct raw3270_request *rq); 115 + void raw3270_request_set_cmd(struct raw3270_request *rq, u8 cmd); 116 + int raw3270_request_add_data(struct raw3270_request *rq, void *data, size_t size); 117 + void raw3270_request_set_data(struct raw3270_request *rq, void *data, size_t size); 118 + void raw3270_request_set_idal(struct raw3270_request *rq, struct idal_buffer *ib); 41 119 42 120 static inline int 43 121 raw3270_request_final(struct raw3270_request *rq) ··· 45 123 return list_empty(&rq->list); 46 124 } 47 125 48 - void raw3270_buffer_address(struct raw3270 *, char *, unsigned short); 126 + void raw3270_buffer_address(struct raw3270 *, char *, int, int); 49 127 50 128 /* 51 129 * Functions of a 3270 view. 
52 130 */ 53 131 struct raw3270_fn { 54 - int (*activate)(struct raw3270_view *); 55 - void (*deactivate)(struct raw3270_view *); 56 - void (*intv)(struct raw3270_view *, 57 - struct raw3270_request *, struct irb *); 58 - void (*release)(struct raw3270_view *); 59 - void (*free)(struct raw3270_view *); 60 - void (*resize)(struct raw3270_view *, int, int, int); 132 + int (*activate)(struct raw3270_view *rq); 133 + void (*deactivate)(struct raw3270_view *rq); 134 + void (*intv)(struct raw3270_view *view, 135 + struct raw3270_request *rq, struct irb *ib); 136 + void (*release)(struct raw3270_view *view); 137 + void (*free)(struct raw3270_view *view); 138 + void (*resize)(struct raw3270_view *view, 139 + int new_model, int new_cols, int new_rows, 140 + int old_model, int old_cols, int old_rows); 61 141 }; 62 142 63 143 /* ··· 72 148 */ 73 149 struct raw3270_view { 74 150 struct list_head list; 75 - spinlock_t lock; 151 + spinlock_t lock; /* protects members of view */ 76 152 #define RAW3270_VIEW_LOCK_IRQ 0 77 153 #define RAW3270_VIEW_LOCK_BH 1 78 154 atomic_t ref_count; ··· 83 159 unsigned char *ascebc; /* ascii -> ebcdic table */ 84 160 }; 85 161 86 - int raw3270_add_view(struct raw3270_view *, struct raw3270_fn *, int, int); 162 + int raw3270_add_view(struct raw3270_view *view, struct raw3270_fn *fn, int minor, int subclass); 87 163 int raw3270_view_lock_unavailable(struct raw3270_view *view); 88 - int raw3270_activate_view(struct raw3270_view *); 89 - void raw3270_del_view(struct raw3270_view *); 90 - void raw3270_deactivate_view(struct raw3270_view *); 91 - struct raw3270_view *raw3270_find_view(struct raw3270_fn *, int); 92 - int raw3270_start(struct raw3270_view *, struct raw3270_request *); 93 - int raw3270_start_locked(struct raw3270_view *, struct raw3270_request *); 94 - int raw3270_start_irq(struct raw3270_view *, struct raw3270_request *); 95 - int raw3270_reset(struct raw3270_view *); 96 - struct raw3270_view *raw3270_view(struct raw3270_view *); 97 - int 
raw3270_view_active(struct raw3270_view *); 164 + int raw3270_activate_view(struct raw3270_view *view); 165 + void raw3270_del_view(struct raw3270_view *view); 166 + void raw3270_deactivate_view(struct raw3270_view *view); 167 + struct raw3270_view *raw3270_find_view(struct raw3270_fn *fn, int minor); 168 + int raw3270_start(struct raw3270_view *view, struct raw3270_request *rq); 169 + int raw3270_start_locked(struct raw3270_view *view, struct raw3270_request *rq); 170 + int raw3270_start_irq(struct raw3270_view *view, struct raw3270_request *rq); 171 + int raw3270_reset(struct raw3270_view *view); 172 + struct raw3270_view *raw3270_view(struct raw3270_view *view); 173 + int raw3270_view_active(struct raw3270_view *view); 174 + int raw3270_start_request(struct raw3270_view *view, struct raw3270_request *rq, 175 + int cmd, void *data, size_t len); 176 + void raw3270_read_modified_cb(struct raw3270_request *rq, void *data); 98 177 99 178 /* Reference count inliner for view structures. */ 100 179 static inline void ··· 116 189 } 117 190 118 191 struct raw3270 *raw3270_setup_console(void); 119 - void raw3270_wait_cons_dev(struct raw3270 *); 192 + void raw3270_wait_cons_dev(struct raw3270 *rp); 120 193 121 194 /* Notifier for device addition/removal */ 122 195 struct raw3270_notifier { ··· 125 198 void (*destroy)(int minor); 126 199 }; 127 200 128 - int raw3270_register_notifier(struct raw3270_notifier *); 129 - void raw3270_unregister_notifier(struct raw3270_notifier *); 130 - 131 - /* 132 - * Little memory allocator for string objects. 
133 - */ 134 - struct string 135 - { 136 - struct list_head list; 137 - struct list_head update; 138 - unsigned long size; 139 - unsigned long len; 140 - char string[]; 141 - } __attribute__ ((aligned(8))); 142 - 143 - static inline struct string * 144 - alloc_string(struct list_head *free_list, unsigned long len) 145 - { 146 - struct string *cs, *tmp; 147 - unsigned long size; 148 - 149 - size = (len + 7L) & -8L; 150 - list_for_each_entry(cs, free_list, list) { 151 - if (cs->size < size) 152 - continue; 153 - if (cs->size > size + sizeof(struct string)) { 154 - char *endaddr = (char *) (cs + 1) + cs->size; 155 - tmp = (struct string *) (endaddr - size) - 1; 156 - tmp->size = size; 157 - cs->size -= size + sizeof(struct string); 158 - cs = tmp; 159 - } else 160 - list_del(&cs->list); 161 - cs->len = len; 162 - INIT_LIST_HEAD(&cs->list); 163 - INIT_LIST_HEAD(&cs->update); 164 - return cs; 165 - } 166 - return NULL; 167 - } 168 - 169 - static inline unsigned long 170 - free_string(struct list_head *free_list, struct string *cs) 171 - { 172 - struct string *tmp; 173 - struct list_head *p, *left; 174 - 175 - /* Find out the left neighbour in free memory list. */ 176 - left = free_list; 177 - list_for_each(p, free_list) { 178 - if (list_entry(p, struct string, list) > cs) 179 - break; 180 - left = p; 181 - } 182 - /* Try to merge with right neighbour = next element from left. */ 183 - if (left->next != free_list) { 184 - tmp = list_entry(left->next, struct string, list); 185 - if ((char *) (cs + 1) + cs->size == (char *) tmp) { 186 - list_del(&tmp->list); 187 - cs->size += tmp->size + sizeof(struct string); 188 - } 189 - } 190 - /* Try to merge with left neighbour. 
*/ 191 - if (left != free_list) { 192 - tmp = list_entry(left, struct string, list); 193 - if ((char *) (tmp + 1) + tmp->size == (char *) cs) { 194 - tmp->size += cs->size + sizeof(struct string); 195 - return tmp->size; 196 - } 197 - } 198 - __list_add(&cs->list, left, left->next); 199 - return cs->size; 200 - } 201 - 202 - static inline void 203 - add_string_memory(struct list_head *free_list, void *mem, unsigned long size) 204 - { 205 - struct string *cs; 206 - 207 - cs = (struct string *) mem; 208 - cs->size = size - sizeof(struct string); 209 - free_string(free_list, cs); 210 - } 211 - 201 + int raw3270_register_notifier(struct raw3270_notifier *notifier); 202 + void raw3270_unregister_notifier(struct raw3270_notifier *notifier);
drivers/s390/char/sclp_early.c  +1 -1
··· 163 163 sclp.has_linemode = 1; 164 164 } 165 165 166 - void __init sclp_early_adjust_va(void) 166 + void __init __no_sanitize_address sclp_early_adjust_va(void) 167 167 { 168 168 sclp_early_sccb = __va((unsigned long)sclp_early_sccb); 169 169 }
drivers/s390/char/sclp_ftp.c  +3 -3
··· 90 90 struct completion completion; 91 91 struct sclp_diag_sccb *sccb; 92 92 struct sclp_req *req; 93 - size_t len; 93 + ssize_t len; 94 94 int rc; 95 95 96 96 req = kzalloc(sizeof(*req), GFP_KERNEL); ··· 117 117 sccb->evbuf.mdd.ftp.length = ftp->len; 118 118 sccb->evbuf.mdd.ftp.bufaddr = virt_to_phys(ftp->buf); 119 119 120 - len = strlcpy(sccb->evbuf.mdd.ftp.fident, ftp->fname, 120 + len = strscpy(sccb->evbuf.mdd.ftp.fident, ftp->fname, 121 121 HMCDRV_FTP_FIDENT_MAX); 122 - if (len >= HMCDRV_FTP_FIDENT_MAX) { 122 + if (len < 0) { 123 123 rc = -EINVAL; 124 124 goto out_free; 125 125 }
drivers/s390/char/tty3270.c  +0 -1963
··· 1 - // SPDX-License-Identifier: GPL-2.0 2 - /* 3 - * IBM/3270 Driver - tty functions. 4 - * 5 - * Author(s): 6 - * Original 3270 Code for 2.4 written by Richard Hitt (UTS Global) 7 - * Rewritten for 2.5 by Martin Schwidefsky <schwidefsky@de.ibm.com> 8 - * -- Copyright IBM Corp. 2003 9 - */ 10 - 11 - #include <linux/module.h> 12 - #include <linux/types.h> 13 - #include <linux/kdev_t.h> 14 - #include <linux/tty.h> 15 - #include <linux/vt_kern.h> 16 - #include <linux/init.h> 17 - #include <linux/console.h> 18 - #include <linux/interrupt.h> 19 - #include <linux/workqueue.h> 20 - 21 - #include <linux/slab.h> 22 - #include <linux/memblock.h> 23 - #include <linux/compat.h> 24 - 25 - #include <asm/ccwdev.h> 26 - #include <asm/cio.h> 27 - #include <asm/ebcdic.h> 28 - #include <linux/uaccess.h> 29 - 30 - #include "raw3270.h" 31 - #include "tty3270.h" 32 - #include "keyboard.h" 33 - 34 - #define TTY3270_CHAR_BUF_SIZE 256 35 - #define TTY3270_OUTPUT_BUFFER_SIZE 1024 36 - #define TTY3270_STRING_PAGES 5 37 - 38 - struct tty_driver *tty3270_driver; 39 - static int tty3270_max_index; 40 - 41 - static struct raw3270_fn tty3270_fn; 42 - 43 - struct tty3270_cell { 44 - unsigned char character; 45 - unsigned char highlight; 46 - unsigned char f_color; 47 - }; 48 - 49 - struct tty3270_line { 50 - struct tty3270_cell *cells; 51 - int len; 52 - }; 53 - 54 - #define ESCAPE_NPAR 8 55 - 56 - /* 57 - * The main tty view data structure. 58 - * FIXME: 59 - * 1) describe line orientation & lines list concept against screen 60 - * 2) describe conversion of screen to lines 61 - * 3) describe line format. 62 - */ 63 - struct tty3270 { 64 - struct raw3270_view view; 65 - struct tty_port port; 66 - void **freemem_pages; /* Array of pages used for freemem. */ 67 - struct list_head freemem; /* List of free memory for strings. */ 68 - 69 - /* Output stuff. */ 70 - struct list_head lines; /* List of lines. */ 71 - struct list_head update; /* List of lines to update. 
*/ 72 - unsigned char wcc; /* Write control character. */ 73 - int nr_lines; /* # lines in list. */ 74 - int nr_up; /* # lines up in history. */ 75 - unsigned long update_flags; /* Update indication bits. */ 76 - struct string *status; /* Lower right of display. */ 77 - struct raw3270_request *write; /* Single write request. */ 78 - struct timer_list timer; /* Output delay timer. */ 79 - 80 - /* Current tty screen. */ 81 - unsigned int cx, cy; /* Current output position. */ 82 - unsigned int highlight; /* Blink/reverse/underscore */ 83 - unsigned int f_color; /* Foreground color */ 84 - struct tty3270_line *screen; 85 - unsigned int n_model, n_cols, n_rows; /* New model & size */ 86 - struct work_struct resize_work; 87 - 88 - /* Input stuff. */ 89 - struct string *prompt; /* Output string for input area. */ 90 - struct string *input; /* Input string for read request. */ 91 - struct raw3270_request *read; /* Single read request. */ 92 - struct raw3270_request *kreset; /* Single keyboard reset request. */ 93 - unsigned char inattr; /* Visible/invisible input. */ 94 - int throttle, attn; /* tty throttle/unthrottle. */ 95 - struct tasklet_struct readlet; /* Tasklet to issue read request. */ 96 - struct tasklet_struct hanglet; /* Tasklet to hang up the tty. */ 97 - struct kbd_data *kbd; /* key_maps stuff. */ 98 - 99 - /* Escape sequence parsing. */ 100 - int esc_state, esc_ques, esc_npar; 101 - int esc_par[ESCAPE_NPAR]; 102 - unsigned int saved_cx, saved_cy; 103 - unsigned int saved_highlight, saved_f_color; 104 - 105 - /* Command recalling. */ 106 - struct list_head rcl_lines; /* List of recallable lines. */ 107 - struct list_head *rcl_walk; /* Point in rcl_lines list. */ 108 - int rcl_nr, rcl_max; /* Number/max number of rcl_lines. */ 109 - 110 - /* Character array for put_char/flush_chars. */ 111 - unsigned int char_count; 112 - char char_buf[TTY3270_CHAR_BUF_SIZE]; 113 - }; 114 - 115 - /* tty3270->update_flags. See tty3270_update for details. 
*/ 116 - #define TTY_UPDATE_ERASE 1 /* Use EWRITEA instead of WRITE. */ 117 - #define TTY_UPDATE_LIST 2 /* Update lines in tty3270->update. */ 118 - #define TTY_UPDATE_INPUT 4 /* Update input line. */ 119 - #define TTY_UPDATE_STATUS 8 /* Update status line. */ 120 - #define TTY_UPDATE_ALL 16 /* Recreate screen. */ 121 - 122 - static void tty3270_update(struct timer_list *); 123 - static void tty3270_resize_work(struct work_struct *work); 124 - 125 - /* 126 - * Setup timeout for a device. On timeout trigger an update. 127 - */ 128 - static void tty3270_set_timer(struct tty3270 *tp, int expires) 129 - { 130 - mod_timer(&tp->timer, jiffies + expires); 131 - } 132 - 133 - /* 134 - * The input line are the two last lines of the screen. 135 - */ 136 - static void 137 - tty3270_update_prompt(struct tty3270 *tp, char *input, int count) 138 - { 139 - struct string *line; 140 - unsigned int off; 141 - 142 - line = tp->prompt; 143 - if (count != 0) 144 - line->string[5] = TF_INMDT; 145 - else 146 - line->string[5] = tp->inattr; 147 - if (count > tp->view.cols * 2 - 11) 148 - count = tp->view.cols * 2 - 11; 149 - memcpy(line->string + 6, input, count); 150 - line->string[6 + count] = TO_IC; 151 - /* Clear to end of input line. 
*/ 152 - if (count < tp->view.cols * 2 - 11) { 153 - line->string[7 + count] = TO_RA; 154 - line->string[10 + count] = 0; 155 - off = tp->view.cols * tp->view.rows - 9; 156 - raw3270_buffer_address(tp->view.dev, line->string+count+8, off); 157 - line->len = 11 + count; 158 - } else 159 - line->len = 7 + count; 160 - tp->update_flags |= TTY_UPDATE_INPUT; 161 - } 162 - 163 - static void 164 - tty3270_create_prompt(struct tty3270 *tp) 165 - { 166 - static const unsigned char blueprint[] = 167 - { TO_SBA, 0, 0, 0x6e, TO_SF, TF_INPUT, 168 - /* empty input string */ 169 - TO_IC, TO_RA, 0, 0, 0 }; 170 - struct string *line; 171 - unsigned int offset; 172 - 173 - line = alloc_string(&tp->freemem, 174 - sizeof(blueprint) + tp->view.cols * 2 - 9); 175 - tp->prompt = line; 176 - tp->inattr = TF_INPUT; 177 - /* Copy blueprint to status line */ 178 - memcpy(line->string, blueprint, sizeof(blueprint)); 179 - line->len = sizeof(blueprint); 180 - /* Set output offsets. */ 181 - offset = tp->view.cols * (tp->view.rows - 2); 182 - raw3270_buffer_address(tp->view.dev, line->string + 1, offset); 183 - offset = tp->view.cols * tp->view.rows - 9; 184 - raw3270_buffer_address(tp->view.dev, line->string + 8, offset); 185 - 186 - /* Allocate input string for reading. */ 187 - tp->input = alloc_string(&tp->freemem, tp->view.cols * 2 - 9 + 6); 188 - } 189 - 190 - /* 191 - * The status line is the last line of the screen. It shows the string 192 - * "Running"/"Holding" in the lower right corner of the screen. 193 - */ 194 - static void 195 - tty3270_update_status(struct tty3270 * tp) 196 - { 197 - char *str; 198 - 199 - str = (tp->nr_up != 0) ? 
"History" : "Running"; 200 - memcpy(tp->status->string + 8, str, 7); 201 - codepage_convert(tp->view.ascebc, tp->status->string + 8, 7); 202 - tp->update_flags |= TTY_UPDATE_STATUS; 203 - } 204 - 205 - static void 206 - tty3270_create_status(struct tty3270 * tp) 207 - { 208 - static const unsigned char blueprint[] = 209 - { TO_SBA, 0, 0, TO_SF, TF_LOG, TO_SA, TAT_COLOR, TAC_GREEN, 210 - 0, 0, 0, 0, 0, 0, 0, TO_SF, TF_LOG, TO_SA, TAT_COLOR, 211 - TAC_RESET }; 212 - struct string *line; 213 - unsigned int offset; 214 - 215 - line = alloc_string(&tp->freemem,sizeof(blueprint)); 216 - tp->status = line; 217 - /* Copy blueprint to status line */ 218 - memcpy(line->string, blueprint, sizeof(blueprint)); 219 - /* Set address to start of status string (= last 9 characters). */ 220 - offset = tp->view.cols * tp->view.rows - 9; 221 - raw3270_buffer_address(tp->view.dev, line->string + 1, offset); 222 - } 223 - 224 - /* 225 - * Set output offsets to 3270 datastream fragment of a tty string. 226 - * (TO_SBA offset at the start and TO_RA offset at the end of the string) 227 - */ 228 - static void 229 - tty3270_update_string(struct tty3270 *tp, struct string *line, int nr) 230 - { 231 - unsigned char *cp; 232 - 233 - raw3270_buffer_address(tp->view.dev, line->string + 1, 234 - tp->view.cols * nr); 235 - cp = line->string + line->len - 4; 236 - if (*cp == TO_RA) 237 - raw3270_buffer_address(tp->view.dev, cp + 1, 238 - tp->view.cols * (nr + 1)); 239 - } 240 - 241 - /* 242 - * Rebuild update list to print all lines. 243 - */ 244 - static void 245 - tty3270_rebuild_update(struct tty3270 *tp) 246 - { 247 - struct string *s, *n; 248 - int line, nr_up; 249 - 250 - /* 251 - * Throw away update list and create a new one, 252 - * containing all lines that will fit on the screen. 
253 - */ 254 - list_for_each_entry_safe(s, n, &tp->update, update) 255 - list_del_init(&s->update); 256 - line = tp->view.rows - 3; 257 - nr_up = tp->nr_up; 258 - list_for_each_entry_reverse(s, &tp->lines, list) { 259 - if (nr_up > 0) { 260 - nr_up--; 261 - continue; 262 - } 263 - tty3270_update_string(tp, s, line); 264 - list_add(&s->update, &tp->update); 265 - if (--line < 0) 266 - break; 267 - } 268 - tp->update_flags |= TTY_UPDATE_LIST; 269 - } 270 - 271 - /* 272 - * Alloc string for size bytes. If there is not enough room in 273 - * freemem, free strings until there is room. 274 - */ 275 - static struct string * 276 - tty3270_alloc_string(struct tty3270 *tp, size_t size) 277 - { 278 - struct string *s, *n; 279 - 280 - s = alloc_string(&tp->freemem, size); 281 - if (s) 282 - return s; 283 - list_for_each_entry_safe(s, n, &tp->lines, list) { 284 - BUG_ON(tp->nr_lines <= tp->view.rows - 2); 285 - list_del(&s->list); 286 - if (!list_empty(&s->update)) 287 - list_del(&s->update); 288 - tp->nr_lines--; 289 - if (free_string(&tp->freemem, s) >= size) 290 - break; 291 - } 292 - s = alloc_string(&tp->freemem, size); 293 - BUG_ON(!s); 294 - if (tp->nr_up != 0 && 295 - tp->nr_up + tp->view.rows - 2 >= tp->nr_lines) { 296 - tp->nr_up = tp->nr_lines - tp->view.rows + 2; 297 - tty3270_rebuild_update(tp); 298 - tty3270_update_status(tp); 299 - } 300 - return s; 301 - } 302 - 303 - /* 304 - * Add an empty line to the list. 
305 - */ 306 - static void 307 - tty3270_blank_line(struct tty3270 *tp) 308 - { 309 - static const unsigned char blueprint[] = 310 - { TO_SBA, 0, 0, TO_SA, TAT_EXTHI, TAX_RESET, 311 - TO_SA, TAT_COLOR, TAC_RESET, TO_RA, 0, 0, 0 }; 312 - struct string *s; 313 - 314 - s = tty3270_alloc_string(tp, sizeof(blueprint)); 315 - memcpy(s->string, blueprint, sizeof(blueprint)); 316 - s->len = sizeof(blueprint); 317 - list_add_tail(&s->list, &tp->lines); 318 - tp->nr_lines++; 319 - if (tp->nr_up != 0) 320 - tp->nr_up++; 321 - } 322 - 323 - /* 324 - * Create a blank screen and remove all lines from the history. 325 - */ 326 - static void 327 - tty3270_blank_screen(struct tty3270 *tp) 328 - { 329 - struct string *s, *n; 330 - int i; 331 - 332 - for (i = 0; i < tp->view.rows - 2; i++) 333 - tp->screen[i].len = 0; 334 - tp->nr_up = 0; 335 - list_for_each_entry_safe(s, n, &tp->lines, list) { 336 - list_del(&s->list); 337 - if (!list_empty(&s->update)) 338 - list_del(&s->update); 339 - tp->nr_lines--; 340 - free_string(&tp->freemem, s); 341 - } 342 - } 343 - 344 - /* 345 - * Write request completion callback. 346 - */ 347 - static void 348 - tty3270_write_callback(struct raw3270_request *rq, void *data) 349 - { 350 - struct tty3270 *tp = container_of(rq->view, struct tty3270, view); 351 - 352 - if (rq->rc != 0) { 353 - /* Write wasn't successful. Refresh all. */ 354 - tp->update_flags = TTY_UPDATE_ALL; 355 - tty3270_set_timer(tp, 1); 356 - } 357 - raw3270_request_reset(rq); 358 - xchg(&tp->write, rq); 359 - } 360 - 361 - /* 362 - * Update 3270 display. 
363 - */ 364 - static void 365 - tty3270_update(struct timer_list *t) 366 - { 367 - struct tty3270 *tp = from_timer(tp, t, timer); 368 - static char invalid_sba[2] = { 0xff, 0xff }; 369 - struct raw3270_request *wrq; 370 - unsigned long updated; 371 - struct string *s, *n; 372 - char *sba, *str; 373 - int rc, len; 374 - 375 - wrq = xchg(&tp->write, 0); 376 - if (!wrq) { 377 - tty3270_set_timer(tp, 1); 378 - return; 379 - } 380 - 381 - spin_lock(&tp->view.lock); 382 - updated = 0; 383 - if (tp->update_flags & TTY_UPDATE_ALL) { 384 - tty3270_rebuild_update(tp); 385 - tty3270_update_status(tp); 386 - tp->update_flags = TTY_UPDATE_ERASE | TTY_UPDATE_LIST | 387 - TTY_UPDATE_INPUT | TTY_UPDATE_STATUS; 388 - } 389 - if (tp->update_flags & TTY_UPDATE_ERASE) { 390 - /* Use erase write alternate to erase display. */ 391 - raw3270_request_set_cmd(wrq, TC_EWRITEA); 392 - updated |= TTY_UPDATE_ERASE; 393 - } else 394 - raw3270_request_set_cmd(wrq, TC_WRITE); 395 - 396 - raw3270_request_add_data(wrq, &tp->wcc, 1); 397 - tp->wcc = TW_NONE; 398 - 399 - /* 400 - * Update status line. 401 - */ 402 - if (tp->update_flags & TTY_UPDATE_STATUS) 403 - if (raw3270_request_add_data(wrq, tp->status->string, 404 - tp->status->len) == 0) 405 - updated |= TTY_UPDATE_STATUS; 406 - 407 - /* 408 - * Write input line. 409 - */ 410 - if (tp->update_flags & TTY_UPDATE_INPUT) 411 - if (raw3270_request_add_data(wrq, tp->prompt->string, 412 - tp->prompt->len) == 0) 413 - updated |= TTY_UPDATE_INPUT; 414 - 415 - sba = invalid_sba; 416 - 417 - if (tp->update_flags & TTY_UPDATE_LIST) { 418 - /* Write strings in the update list to the screen. */ 419 - list_for_each_entry_safe(s, n, &tp->update, update) { 420 - str = s->string; 421 - len = s->len; 422 - /* 423 - * Skip TO_SBA at the start of the string if the 424 - * last output position matches the start address 425 - * of this line. 
	 */
			if (s->string[1] == sba[0] && s->string[2] == sba[1]) {
				str += 3;
				len -= 3;
			}
			if (raw3270_request_add_data(wrq, str, len) != 0)
				break;
			list_del_init(&s->update);
			if (s->string[s->len - 4] == TO_RA)
				sba = s->string + s->len - 3;
			else
				sba = invalid_sba;
		}
		if (list_empty(&tp->update))
			updated |= TTY_UPDATE_LIST;
	}
	wrq->callback = tty3270_write_callback;
	rc = raw3270_start(&tp->view, wrq);
	if (rc == 0) {
		tp->update_flags &= ~updated;
		if (tp->update_flags)
			tty3270_set_timer(tp, 1);
	} else {
		raw3270_request_reset(wrq);
		xchg(&tp->write, wrq);
	}
	spin_unlock(&tp->view.lock);
}

/*
 * Command recalling.
 */
static void
tty3270_rcl_add(struct tty3270 *tp, char *input, int len)
{
	struct string *s;

	tp->rcl_walk = NULL;
	if (len <= 0)
		return;
	if (tp->rcl_nr >= tp->rcl_max) {
		s = list_entry(tp->rcl_lines.next, struct string, list);
		list_del(&s->list);
		free_string(&tp->freemem, s);
		tp->rcl_nr--;
	}
	s = tty3270_alloc_string(tp, len);
	memcpy(s->string, input, len);
	list_add_tail(&s->list, &tp->rcl_lines);
	tp->rcl_nr++;
}

static void
tty3270_rcl_backward(struct kbd_data *kbd)
{
	struct tty3270 *tp = container_of(kbd->port, struct tty3270, port);
	struct string *s;

	spin_lock_bh(&tp->view.lock);
	if (tp->inattr == TF_INPUT) {
		if (tp->rcl_walk && tp->rcl_walk->prev != &tp->rcl_lines)
			tp->rcl_walk = tp->rcl_walk->prev;
		else if (!list_empty(&tp->rcl_lines))
			tp->rcl_walk = tp->rcl_lines.prev;
		s = tp->rcl_walk ?
			list_entry(tp->rcl_walk, struct string, list) : NULL;
		if (tp->rcl_walk) {
			s = list_entry(tp->rcl_walk, struct string, list);
			tty3270_update_prompt(tp, s->string, s->len);
		} else
			tty3270_update_prompt(tp, NULL, 0);
		tty3270_set_timer(tp, 1);
	}
	spin_unlock_bh(&tp->view.lock);
}

/*
 * Deactivate tty view.
 */
static void
tty3270_exit_tty(struct kbd_data *kbd)
{
	struct tty3270 *tp = container_of(kbd->port, struct tty3270, port);

	raw3270_deactivate_view(&tp->view);
}

/*
 * Scroll forward in history.
 */
static void
tty3270_scroll_forward(struct kbd_data *kbd)
{
	struct tty3270 *tp = container_of(kbd->port, struct tty3270, port);
	int nr_up;

	spin_lock_bh(&tp->view.lock);
	nr_up = tp->nr_up - tp->view.rows + 2;
	if (nr_up < 0)
		nr_up = 0;
	if (nr_up != tp->nr_up) {
		tp->nr_up = nr_up;
		tty3270_rebuild_update(tp);
		tty3270_update_status(tp);
		tty3270_set_timer(tp, 1);
	}
	spin_unlock_bh(&tp->view.lock);
}

/*
 * Scroll backward in history.
 */
static void
tty3270_scroll_backward(struct kbd_data *kbd)
{
	struct tty3270 *tp = container_of(kbd->port, struct tty3270, port);
	int nr_up;

	spin_lock_bh(&tp->view.lock);
	nr_up = tp->nr_up + tp->view.rows - 2;
	if (nr_up + tp->view.rows - 2 > tp->nr_lines)
		nr_up = tp->nr_lines - tp->view.rows + 2;
	if (nr_up != tp->nr_up) {
		tp->nr_up = nr_up;
		tty3270_rebuild_update(tp);
		tty3270_update_status(tp);
		tty3270_set_timer(tp, 1);
	}
	spin_unlock_bh(&tp->view.lock);
}

/*
 * Pass input line to tty.
 */
static void
tty3270_read_tasklet(unsigned long data)
{
	struct raw3270_request *rrq = (struct raw3270_request *)data;
	static char kreset_data = TW_KR;
	struct tty3270 *tp = container_of(rrq->view, struct tty3270, view);
	char *input;
	int len;

	spin_lock_bh(&tp->view.lock);
	/*
	 * Two AID keys are special: For 0x7d (enter) the input line
	 * has to be emitted to the tty and for 0x6d the screen
	 * needs to be redrawn.
	 */
	input = NULL;
	len = 0;
	if (tp->input->string[0] == 0x7d) {
		/* Enter: write input to tty. */
		input = tp->input->string + 6;
		len = tp->input->len - 6 - rrq->rescnt;
		if (tp->inattr != TF_INPUTN)
			tty3270_rcl_add(tp, input, len);
		if (tp->nr_up > 0) {
			tp->nr_up = 0;
			tty3270_rebuild_update(tp);
			tty3270_update_status(tp);
		}
		/* Clear input area. */
		tty3270_update_prompt(tp, NULL, 0);
		tty3270_set_timer(tp, 1);
	} else if (tp->input->string[0] == 0x6d) {
		/* Display has been cleared. Redraw. */
		tp->update_flags = TTY_UPDATE_ALL;
		tty3270_set_timer(tp, 1);
	}
	spin_unlock_bh(&tp->view.lock);

	/* Start keyboard reset command. */
	raw3270_request_reset(tp->kreset);
	raw3270_request_set_cmd(tp->kreset, TC_WRITE);
	raw3270_request_add_data(tp->kreset, &kreset_data, 1);
	raw3270_start(&tp->view, tp->kreset);

	while (len-- > 0)
		kbd_keycode(tp->kbd, *input++);
	/* Emit keycode for AID byte. */
	kbd_keycode(tp->kbd, 256 + tp->input->string[0]);

	raw3270_request_reset(rrq);
	xchg(&tp->read, rrq);
	raw3270_put_view(&tp->view);
}

/*
 * Read request completion callback.
 */
static void
tty3270_read_callback(struct raw3270_request *rq, void *data)
{
	struct tty3270 *tp = container_of(rq->view, struct tty3270, view);

	raw3270_get_view(rq->view);
	/* Schedule tasklet to pass input to tty. */
	tasklet_schedule(&tp->readlet);
}

/*
 * Issue a read request. Call with device lock.
 */
static void
tty3270_issue_read(struct tty3270 *tp, int lock)
{
	struct raw3270_request *rrq;
	int rc;

	rrq = xchg(&tp->read, 0);
	if (!rrq)
		/* Read already scheduled. */
		return;
	rrq->callback = tty3270_read_callback;
	rrq->callback_data = tp;
	raw3270_request_set_cmd(rrq, TC_READMOD);
	raw3270_request_set_data(rrq, tp->input->string, tp->input->len);
	/* Issue the read modified request. */
	if (lock)
		rc = raw3270_start(&tp->view, rrq);
	else
		rc = raw3270_start_irq(&tp->view, rrq);
	if (rc) {
		raw3270_request_reset(rrq);
		xchg(&tp->read, rrq);
	}
}

/*
 * Hang up the tty
 */
static void
tty3270_hangup_tasklet(unsigned long data)
{
	struct tty3270 *tp = (struct tty3270 *)data;

	tty_port_tty_hangup(&tp->port, true);
	raw3270_put_view(&tp->view);
}

/*
 * Switch to the tty view.
 */
static int
tty3270_activate(struct raw3270_view *view)
{
	struct tty3270 *tp = container_of(view, struct tty3270, view);

	tp->update_flags = TTY_UPDATE_ALL;
	tty3270_set_timer(tp, 1);
	return 0;
}

static void
tty3270_deactivate(struct raw3270_view *view)
{
	struct tty3270 *tp = container_of(view, struct tty3270, view);

	del_timer(&tp->timer);
}

static void
tty3270_irq(struct tty3270 *tp, struct raw3270_request *rq, struct irb *irb)
{
	/* Handle ATTN. Schedule tasklet to read aid. */
	if (irb->scsw.cmd.dstat & DEV_STAT_ATTENTION) {
		if (!tp->throttle)
			tty3270_issue_read(tp, 0);
		else
			tp->attn = 1;
	}

	if (rq) {
		if (irb->scsw.cmd.dstat & DEV_STAT_UNIT_CHECK) {
			rq->rc = -EIO;
			raw3270_get_view(&tp->view);
			tasklet_schedule(&tp->hanglet);
		} else {
			/* Normal end. Copy residual count. */
			rq->rescnt = irb->scsw.cmd.count;
		}
	} else if (irb->scsw.cmd.dstat & DEV_STAT_DEV_END) {
		/* Interrupt without an outstanding request -> update all */
		tp->update_flags = TTY_UPDATE_ALL;
		tty3270_set_timer(tp, 1);
	}
}

/*
 * Allocate tty3270 structure.
 */
static struct tty3270 *
tty3270_alloc_view(void)
{
	struct tty3270 *tp;
	int pages;

	tp = kzalloc(sizeof(struct tty3270), GFP_KERNEL);
	if (!tp)
		goto out_err;
	tp->freemem_pages = kmalloc_array(TTY3270_STRING_PAGES, sizeof(void *),
					  GFP_KERNEL);
	if (!tp->freemem_pages)
		goto out_tp;
	INIT_LIST_HEAD(&tp->freemem);
	INIT_LIST_HEAD(&tp->lines);
	INIT_LIST_HEAD(&tp->update);
	INIT_LIST_HEAD(&tp->rcl_lines);
	tp->rcl_max = 20;

	for (pages = 0; pages < TTY3270_STRING_PAGES; pages++) {
		tp->freemem_pages[pages] = (void *)
			__get_free_pages(GFP_KERNEL | GFP_DMA, 0);
		if (!tp->freemem_pages[pages])
			goto out_pages;
		add_string_memory(&tp->freemem,
				  tp->freemem_pages[pages], PAGE_SIZE);
	}
	tp->write = raw3270_request_alloc(TTY3270_OUTPUT_BUFFER_SIZE);
	if (IS_ERR(tp->write))
		goto out_pages;
	tp->read = raw3270_request_alloc(0);
	if (IS_ERR(tp->read))
		goto out_write;
	tp->kreset = raw3270_request_alloc(1);
	if (IS_ERR(tp->kreset))
		goto out_read;
	tp->kbd = kbd_alloc();
	if (!tp->kbd)
		goto out_reset;

	tty_port_init(&tp->port);
	timer_setup(&tp->timer, tty3270_update, 0);
	tasklet_init(&tp->readlet, tty3270_read_tasklet,
		     (unsigned long)tp->read);
	tasklet_init(&tp->hanglet, tty3270_hangup_tasklet,
		     (unsigned long)tp);
	INIT_WORK(&tp->resize_work, tty3270_resize_work);

	return tp;

out_reset:
	raw3270_request_free(tp->kreset);
out_read:
	raw3270_request_free(tp->read);
out_write:
	raw3270_request_free(tp->write);
out_pages:
	while (pages--)
		free_pages((unsigned long)tp->freemem_pages[pages], 0);
	kfree(tp->freemem_pages);
	tty_port_destroy(&tp->port);
out_tp:
	kfree(tp);
out_err:
	return ERR_PTR(-ENOMEM);
}

/*
 * Free tty3270 structure.
 */
static void
tty3270_free_view(struct tty3270 *tp)
{
	int pages;

	kbd_free(tp->kbd);
	raw3270_request_free(tp->kreset);
	raw3270_request_free(tp->read);
	raw3270_request_free(tp->write);
	for (pages = 0; pages < TTY3270_STRING_PAGES; pages++)
		free_pages((unsigned long)tp->freemem_pages[pages], 0);
	kfree(tp->freemem_pages);
	tty_port_destroy(&tp->port);
	kfree(tp);
}

/*
 * Allocate tty3270 screen.
 */
static struct tty3270_line *
tty3270_alloc_screen(unsigned int rows, unsigned int cols)
{
	struct tty3270_line *screen;
	unsigned long size;
	int lines;

	size = sizeof(struct tty3270_line) * (rows - 2);
	screen = kzalloc(size, GFP_KERNEL);
	if (!screen)
		goto out_err;
	for (lines = 0; lines < rows - 2; lines++) {
		size = sizeof(struct tty3270_cell) * cols;
		screen[lines].cells = kzalloc(size, GFP_KERNEL);
		if (!screen[lines].cells)
			goto out_screen;
	}
	return screen;

out_screen:
	while (lines--)
		kfree(screen[lines].cells);
	kfree(screen);
out_err:
	return ERR_PTR(-ENOMEM);
}

/*
 * Free tty3270 screen.
 */
static void
tty3270_free_screen(struct tty3270_line *screen, unsigned int rows)
{
	int lines;

	for (lines = 0; lines < rows - 2; lines++)
		kfree(screen[lines].cells);
	kfree(screen);
}

/*
 * Resize tty3270 screen
 */
static void tty3270_resize_work(struct work_struct *work)
{
	struct tty3270 *tp = container_of(work, struct tty3270, resize_work);
	struct tty3270_line *screen, *oscreen;
	struct tty_struct *tty;
	unsigned int orows;
	struct winsize ws;

	screen = tty3270_alloc_screen(tp->n_rows, tp->n_cols);
	if (IS_ERR(screen))
		return;
	/* Switch to new output size */
	spin_lock_bh(&tp->view.lock);
	tty3270_blank_screen(tp);
	oscreen = tp->screen;
	orows = tp->view.rows;
	tp->view.model = tp->n_model;
	tp->view.rows = tp->n_rows;
	tp->view.cols = tp->n_cols;
	tp->screen = screen;
	free_string(&tp->freemem, tp->prompt);
	free_string(&tp->freemem, tp->status);
	tty3270_create_prompt(tp);
	tty3270_create_status(tp);
	while (tp->nr_lines < tp->view.rows - 2)
		tty3270_blank_line(tp);
	tp->update_flags = TTY_UPDATE_ALL;
	spin_unlock_bh(&tp->view.lock);
	tty3270_free_screen(oscreen, orows);
	tty3270_set_timer(tp, 1);
	/* Inform tty layer about new size */
	tty = tty_port_tty_get(&tp->port);
	if (!tty)
		return;
	ws.ws_row = tp->view.rows - 2;
	ws.ws_col = tp->view.cols;
	tty_do_resize(tty, &ws);
	tty_kref_put(tty);
}

static void
tty3270_resize(struct raw3270_view *view, int model, int rows, int cols)
{
	struct tty3270 *tp = container_of(view, struct tty3270, view);

	if (tp->n_model == model && tp->n_rows == rows && tp->n_cols == cols)
		return;
	tp->n_model = model;
	tp->n_rows = rows;
	tp->n_cols = cols;
	schedule_work(&tp->resize_work);
}

/*
 * Unlink tty3270 data structure from tty.
 */
static void
tty3270_release(struct raw3270_view *view)
{
	struct tty3270 *tp = container_of(view, struct tty3270, view);
	struct tty_struct *tty = tty_port_tty_get(&tp->port);

	if (tty) {
		tty->driver_data = NULL;
		tty_port_tty_set(&tp->port, NULL);
		tty_hangup(tty);
		raw3270_put_view(&tp->view);
		tty_kref_put(tty);
	}
}

/*
 * Free tty3270 data structure
 */
static void
tty3270_free(struct raw3270_view *view)
{
	struct tty3270 *tp = container_of(view, struct tty3270, view);

	del_timer_sync(&tp->timer);
	tty3270_free_screen(tp->screen, tp->view.rows);
	tty3270_free_view(tp);
}

/*
 * Delayed freeing of tty3270 views.
 */
static void
tty3270_del_views(void)
{
	int i;

	for (i = RAW3270_FIRSTMINOR; i <= tty3270_max_index; i++) {
		struct raw3270_view *view = raw3270_find_view(&tty3270_fn, i);

		if (!IS_ERR(view))
			raw3270_del_view(view);
	}
}

static struct raw3270_fn tty3270_fn = {
	.activate = tty3270_activate,
	.deactivate = tty3270_deactivate,
	.intv = (void *)tty3270_irq,
	.release = tty3270_release,
	.free = tty3270_free,
	.resize = tty3270_resize
};

/*
 * This routine is called whenever a 3270 tty is opened first time.
 */
static int tty3270_install(struct tty_driver *driver, struct tty_struct *tty)
{
	struct raw3270_view *view;
	struct tty3270 *tp;
	int i, rc;

	/* Check if the tty3270 is already there. */
	view = raw3270_find_view(&tty3270_fn, tty->index + RAW3270_FIRSTMINOR);
	if (!IS_ERR(view)) {
		tp = container_of(view, struct tty3270, view);
		tty->driver_data = tp;
		tty->winsize.ws_row = tp->view.rows - 2;
		tty->winsize.ws_col = tp->view.cols;
		tp->inattr = TF_INPUT;
		goto port_install;
	}
	if (tty3270_max_index < tty->index + 1)
		tty3270_max_index = tty->index + 1;

	/* Allocate tty3270 structure on first open. */
	tp = tty3270_alloc_view();
	if (IS_ERR(tp))
		return PTR_ERR(tp);

	rc = raw3270_add_view(&tp->view, &tty3270_fn,
			      tty->index + RAW3270_FIRSTMINOR,
			      RAW3270_VIEW_LOCK_BH);
	if (rc) {
		tty3270_free_view(tp);
		return rc;
	}

	tp->screen = tty3270_alloc_screen(tp->view.rows, tp->view.cols);
	if (IS_ERR(tp->screen)) {
		rc = PTR_ERR(tp->screen);
		raw3270_put_view(&tp->view);
		raw3270_del_view(&tp->view);
		tty3270_free_view(tp);
		return rc;
	}

	tty->winsize.ws_row = tp->view.rows - 2;
	tty->winsize.ws_col = tp->view.cols;

	tty3270_create_prompt(tp);
	tty3270_create_status(tp);
	tty3270_update_status(tp);

	/* Create blank line for every line in the tty output area. */
	for (i = 0; i < tp->view.rows - 2; i++)
		tty3270_blank_line(tp);

	tp->kbd->port = &tp->port;
	tp->kbd->fn_handler[KVAL(K_INCRCONSOLE)] = tty3270_exit_tty;
	tp->kbd->fn_handler[KVAL(K_SCROLLBACK)] = tty3270_scroll_backward;
	tp->kbd->fn_handler[KVAL(K_SCROLLFORW)] = tty3270_scroll_forward;
	tp->kbd->fn_handler[KVAL(K_CONS)] = tty3270_rcl_backward;
	kbd_ascebc(tp->kbd, tp->view.ascebc);

	raw3270_activate_view(&tp->view);

port_install:
	rc = tty_port_install(&tp->port, driver, tty);
	if (rc) {
		raw3270_put_view(&tp->view);
		return rc;
	}

	tty->driver_data = tp;

	return 0;
}

/*
 * This routine is called whenever a 3270 tty is opened.
 */
static int
tty3270_open(struct tty_struct *tty, struct file *filp)
{
	struct tty3270 *tp = tty->driver_data;
	struct tty_port *port = &tp->port;

	port->count++;
	tty_port_tty_set(port, tty);
	return 0;
}

/*
 * This routine is called when the 3270 tty is closed. We wait
 * for the remaining request to be completed. Then we clean up.
 */
static void
tty3270_close(struct tty_struct *tty, struct file *filp)
{
	struct tty3270 *tp = tty->driver_data;

	if (tty->count > 1)
		return;
	if (tp)
		tty_port_tty_set(&tp->port, NULL);
}

static void tty3270_cleanup(struct tty_struct *tty)
{
	struct tty3270 *tp = tty->driver_data;

	if (tp) {
		tty->driver_data = NULL;
		raw3270_put_view(&tp->view);
	}
}

/*
 * We always have room.
 */
static unsigned int
tty3270_write_room(struct tty_struct *tty)
{
	return INT_MAX;
}

/*
 * Insert character into the screen at the current position with the
 * current color and highlight. This function does NOT do cursor movement.
 */
static void tty3270_put_character(struct tty3270 *tp, char ch)
{
	struct tty3270_line *line;
	struct tty3270_cell *cell;

	line = tp->screen + tp->cy;
	if (line->len <= tp->cx) {
		while (line->len < tp->cx) {
			cell = line->cells + line->len;
			cell->character = tp->view.ascebc[' '];
			cell->highlight = tp->highlight;
			cell->f_color = tp->f_color;
			line->len++;
		}
		line->len++;
	}
	cell = line->cells + tp->cx;
	cell->character = tp->view.ascebc[(unsigned int)ch];
	cell->highlight = tp->highlight;
	cell->f_color = tp->f_color;
}

/*
 * Convert a tty3270_line to a 3270 data fragment usable for output.
 */
static void
tty3270_convert_line(struct tty3270 *tp, int line_nr)
{
	struct tty3270_line *line;
	struct tty3270_cell *cell;
	struct string *s, *n;
	unsigned char highlight;
	unsigned char f_color;
	char *cp;
	int flen, i;

	/* Determine how long the fragment will be. */
	flen = 3;		/* Prefix (TO_SBA). */
	line = tp->screen + line_nr;
	flen += line->len;
	highlight = TAX_RESET;
	f_color = TAC_RESET;
	for (i = 0, cell = line->cells; i < line->len; i++, cell++) {
		if (cell->highlight != highlight) {
			flen += 3;	/* TO_SA to switch highlight. */
			highlight = cell->highlight;
		}
		if (cell->f_color != f_color) {
			flen += 3;	/* TO_SA to switch color. */
			f_color = cell->f_color;
		}
	}
	if (highlight != TAX_RESET)
		flen += 3;	/* TO_SA to reset highlight. */
	if (f_color != TAC_RESET)
		flen += 3;	/* TO_SA to reset color. */
	if (line->len < tp->view.cols)
		flen += 4;	/* Postfix (TO_RA). */

	/* Find the line in the list. */
	i = tp->view.rows - 2 - line_nr;
	list_for_each_entry_reverse(s, &tp->lines, list)
		if (--i <= 0)
			break;
	/*
	 * Check if the line needs to get reallocated.
	 */
	if (s->len != flen) {
		/* Reallocate string. */
		n = tty3270_alloc_string(tp, flen);
		list_add(&n->list, &s->list);
		list_del_init(&s->list);
		if (!list_empty(&s->update))
			list_del_init(&s->update);
		free_string(&tp->freemem, s);
		s = n;
	}

	/* Write 3270 data fragment. */
	cp = s->string;
	*cp++ = TO_SBA;
	*cp++ = 0;
	*cp++ = 0;

	highlight = TAX_RESET;
	f_color = TAC_RESET;
	for (i = 0, cell = line->cells; i < line->len; i++, cell++) {
		if (cell->highlight != highlight) {
			*cp++ = TO_SA;
			*cp++ = TAT_EXTHI;
			*cp++ = cell->highlight;
			highlight = cell->highlight;
		}
		if (cell->f_color != f_color) {
			*cp++ = TO_SA;
			*cp++ = TAT_COLOR;
			*cp++ = cell->f_color;
			f_color = cell->f_color;
		}
		*cp++ = cell->character;
	}
	if (highlight != TAX_RESET) {
		*cp++ = TO_SA;
		*cp++ = TAT_EXTHI;
		*cp++ = TAX_RESET;
	}
	if (f_color != TAC_RESET) {
		*cp++ = TO_SA;
		*cp++ = TAT_COLOR;
		*cp++ = TAC_RESET;
	}
	if (line->len < tp->view.cols) {
		*cp++ = TO_RA;
		*cp++ = 0;
		*cp++ = 0;
		*cp++ = 0;
	}

	if (tp->nr_up + line_nr < tp->view.rows - 2) {
		/* Line is currently visible on screen. */
		tty3270_update_string(tp, s, line_nr);
		/* Add line to update list. */
		if (list_empty(&s->update)) {
			list_add_tail(&s->update, &tp->update);
			tp->update_flags |= TTY_UPDATE_LIST;
		}
	}
}

/*
 * Do carriage return.
 */
static void
tty3270_cr(struct tty3270 *tp)
{
	tp->cx = 0;
}

/*
 * Do line feed.
 */
static void
tty3270_lf(struct tty3270 *tp)
{
	struct tty3270_line temp;
	int i;

	tty3270_convert_line(tp, tp->cy);
	if (tp->cy < tp->view.rows - 3) {
		tp->cy++;
		return;
	}
	/* Last line just filled up. Add new, blank line. */
	tty3270_blank_line(tp);
	temp = tp->screen[0];
	temp.len = 0;
	for (i = 0; i < tp->view.rows - 3; i++)
		tp->screen[i] = tp->screen[i + 1];
	tp->screen[tp->view.rows - 3] = temp;
	tty3270_rebuild_update(tp);
}

static void
tty3270_ri(struct tty3270 *tp)
{
	if (tp->cy > 0) {
		tty3270_convert_line(tp, tp->cy);
		tp->cy--;
	}
}

/*
 * Insert characters at current position.
 */
static void
tty3270_insert_characters(struct tty3270 *tp, int n)
{
	struct tty3270_line *line;
	int k;

	line = tp->screen + tp->cy;
	while (line->len < tp->cx) {
		line->cells[line->len].character = tp->view.ascebc[' '];
		line->cells[line->len].highlight = TAX_RESET;
		line->cells[line->len].f_color = TAC_RESET;
		line->len++;
	}
	if (n > tp->view.cols - tp->cx)
		n = tp->view.cols - tp->cx;
	k = min_t(int, line->len - tp->cx, tp->view.cols - tp->cx - n);
	while (k--)
		line->cells[tp->cx + n + k] = line->cells[tp->cx + k];
	line->len += n;
	if (line->len > tp->view.cols)
		line->len = tp->view.cols;
	while (n-- > 0) {
		line->cells[tp->cx + n].character = tp->view.ascebc[' '];
		line->cells[tp->cx + n].highlight = tp->highlight;
		line->cells[tp->cx + n].f_color = tp->f_color;
	}
}

/*
 * Delete characters at current position.
 */
static void
tty3270_delete_characters(struct tty3270 *tp, int n)
{
	struct tty3270_line *line;
	int i;

	line = tp->screen + tp->cy;
	if (line->len <= tp->cx)
		return;
	if (line->len - tp->cx <= n) {
		line->len = tp->cx;
		return;
	}
	for (i = tp->cx; i + n < line->len; i++)
		line->cells[i] = line->cells[i + n];
	line->len -= n;
}

/*
 * Erase characters at current position.
 */
static void
tty3270_erase_characters(struct tty3270 *tp, int n)
{
	struct tty3270_line *line;
	struct tty3270_cell *cell;

	line = tp->screen + tp->cy;
	while (line->len > tp->cx && n-- > 0) {
		cell = line->cells + tp->cx++;
		cell->character = ' ';
		cell->highlight = TAX_RESET;
		cell->f_color = TAC_RESET;
	}
	tp->cx += n;
	tp->cx = min_t(int, tp->cx, tp->view.cols - 1);
}

/*
 * Erase line, 3 different cases:
 *  Esc [ 0 K	Erase from current position to end of line inclusive
 *  Esc [ 1 K	Erase from beginning of line to current position inclusive
 *  Esc [ 2 K	Erase entire line (without moving cursor)
 */
static void
tty3270_erase_line(struct tty3270 *tp, int mode)
{
	struct tty3270_line *line;
	struct tty3270_cell *cell;
	int i;

	line = tp->screen + tp->cy;
	if (mode == 0)
		line->len = tp->cx;
	else if (mode == 1) {
		for (i = 0; i < tp->cx; i++) {
			cell = line->cells + i;
			cell->character = ' ';
			cell->highlight = TAX_RESET;
			cell->f_color = TAC_RESET;
		}
		if (line->len <= tp->cx)
			line->len = tp->cx + 1;
	} else if (mode == 2)
		line->len = 0;
	tty3270_convert_line(tp, tp->cy);
}

/*
 * Erase display, 3 different cases:
 *  Esc [ 0 J	Erase from current position to bottom of screen inclusive
 *  Esc [ 1 J	Erase from top of screen to current position inclusive
 *  Esc [ 2 J	Erase entire screen (without moving the cursor)
 */
static void
tty3270_erase_display(struct tty3270 *tp, int mode)
{
	int i;

	if (mode == 0) {
		tty3270_erase_line(tp, 0);
		for (i = tp->cy + 1; i < tp->view.rows - 2; i++) {
			tp->screen[i].len = 0;
			tty3270_convert_line(tp, i);
		}
	} else if (mode == 1) {
		for (i = 0; i < tp->cy; i++) {
			tp->screen[i].len = 0;
			tty3270_convert_line(tp, i);
		}
		tty3270_erase_line(tp, 1);
	} else if (mode == 2) {
		for (i = 0; i < tp->view.rows - 2; i++) {
			tp->screen[i].len = 0;
			tty3270_convert_line(tp, i);
		}
	}
	tty3270_rebuild_update(tp);
}

/*
 * Set attributes found in an escape sequence.
 *  Esc [ <attr> ; <attr> ; ... m
 */
static void
tty3270_set_attributes(struct tty3270 *tp)
{
	static unsigned char f_colors[] = {
		TAC_DEFAULT, TAC_RED, TAC_GREEN, TAC_YELLOW, TAC_BLUE,
		TAC_PINK, TAC_TURQ, TAC_WHITE, 0, TAC_DEFAULT
	};
	int i, attr;

	for (i = 0; i <= tp->esc_npar; i++) {
		attr = tp->esc_par[i];
		switch (attr) {
		case 0:		/* Reset */
			tp->highlight = TAX_RESET;
			tp->f_color = TAC_RESET;
			break;
		/* Highlight. */
		case 4:		/* Start underlining. */
			tp->highlight = TAX_UNDER;
			break;
		case 5:		/* Start blink. */
			tp->highlight = TAX_BLINK;
			break;
		case 7:		/* Start reverse. */
			tp->highlight = TAX_REVER;
			break;
		case 24:	/* End underlining */
			if (tp->highlight == TAX_UNDER)
				tp->highlight = TAX_RESET;
			break;
		case 25:	/* End blink. */
			if (tp->highlight == TAX_BLINK)
				tp->highlight = TAX_RESET;
			break;
		case 27:	/* End reverse. */
			if (tp->highlight == TAX_REVER)
				tp->highlight = TAX_RESET;
			break;
		/* Foreground color. */
		case 30:	/* Black */
		case 31:	/* Red */
		case 32:	/* Green */
		case 33:	/* Yellow */
		case 34:	/* Blue */
		case 35:	/* Magenta */
		case 36:	/* Cyan */
		case 37:	/* White */
		case 39:	/* Default */
			tp->f_color = f_colors[attr - 30];
			break;
		}
	}
}

static inline int
tty3270_getpar(struct tty3270 *tp, int ix)
{
	return (tp->esc_par[ix] > 0) ? tp->esc_par[ix] : 1;
}

static void
tty3270_goto_xy(struct tty3270 *tp, int cx, int cy)
{
	int max_cx = max(0, cx);
	int max_cy = max(0, cy);

	tp->cx = min_t(int, tp->view.cols - 1, max_cx);
	cy = min_t(int, tp->view.rows - 3, max_cy);
	if (cy != tp->cy) {
		tty3270_convert_line(tp, tp->cy);
		tp->cy = cy;
	}
}

/*
 * Process escape sequences. Known sequences:
 *  Esc 7			Save Cursor Position
 *  Esc 8			Restore Cursor Position
 *  Esc [ Pn ; Pn ; .. m	Set attributes
 *  Esc [ Pn ; Pn H		Cursor Position
 *  Esc [ Pn ; Pn f		Cursor Position
 *  Esc [ Pn A			Cursor Up
 *  Esc [ Pn B			Cursor Down
 *  Esc [ Pn C			Cursor Forward
 *  Esc [ Pn D			Cursor Backward
 *  Esc [ Pn G			Cursor Horizontal Absolute
 *  Esc [ Pn X			Erase Characters
 *  Esc [ Ps J			Erase in Display
 *  Esc [ Ps K			Erase in Line
 * // FIXME: add all the new ones.
 *
 *  Pn is a numeric parameter, a string of zero or more decimal digits.
 *  Ps is a selective parameter.
 */
static void
tty3270_escape_sequence(struct tty3270 *tp, char ch)
{
	enum { ESnormal, ESesc, ESsquare, ESgetpars };

	if (tp->esc_state == ESnormal) {
		if (ch == 0x1b)
			/* Starting new escape sequence. */
			tp->esc_state = ESesc;
		return;
	}
	if (tp->esc_state == ESesc) {
		tp->esc_state = ESnormal;
		switch (ch) {
		case '[':
			tp->esc_state = ESsquare;
			break;
		case 'E':
			tty3270_cr(tp);
			tty3270_lf(tp);
			break;
		case 'M':
			tty3270_ri(tp);
			break;
		case 'D':
			tty3270_lf(tp);
			break;
		case 'Z':		/* Respond ID. */
			kbd_puts_queue(&tp->port, "\033[?6c");
			break;
		case '7':		/* Save cursor position. */
			tp->saved_cx = tp->cx;
			tp->saved_cy = tp->cy;
			tp->saved_highlight = tp->highlight;
			tp->saved_f_color = tp->f_color;
			break;
		case '8':		/* Restore cursor position. */
			tty3270_convert_line(tp, tp->cy);
			tty3270_goto_xy(tp, tp->saved_cx, tp->saved_cy);
			tp->highlight = tp->saved_highlight;
			tp->f_color = tp->saved_f_color;
			break;
		case 'c':		/* Reset terminal. */
			tp->cx = tp->saved_cx = 0;
			tp->cy = tp->saved_cy = 0;
			tp->highlight = tp->saved_highlight = TAX_RESET;
			tp->f_color = tp->saved_f_color = TAC_RESET;
			tty3270_erase_display(tp, 2);
			break;
		}
		return;
	}
	if (tp->esc_state == ESsquare) {
		tp->esc_state = ESgetpars;
		memset(tp->esc_par, 0, sizeof(tp->esc_par));
		tp->esc_npar = 0;
		tp->esc_ques = (ch == '?');
		if (tp->esc_ques)
			return;
	}
	if (tp->esc_state == ESgetpars) {
		if (ch == ';' && tp->esc_npar < ESCAPE_NPAR - 1) {
			tp->esc_npar++;
			return;
		}
		if (ch >= '0' && ch <= '9') {
			tp->esc_par[tp->esc_npar] *= 10;
			tp->esc_par[tp->esc_npar] += ch - '0';
			return;
		}
	}
	tp->esc_state = ESnormal;
	if (ch == 'n' && !tp->esc_ques) {
		if (tp->esc_par[0] == 5)	/* Status report. */
			kbd_puts_queue(&tp->port, "\033[0n");
		else if (tp->esc_par[0] == 6) {	/* Cursor report. */
			char buf[40];

			sprintf(buf, "\033[%d;%dR", tp->cy + 1, tp->cx + 1);
			kbd_puts_queue(&tp->port, buf);
		}
		return;
	}
	if (tp->esc_ques)
		return;
	switch (ch) {
	case 'm':
		tty3270_set_attributes(tp);
		break;
	case 'H':	/* Set cursor position. */
	case 'f':
		tty3270_goto_xy(tp, tty3270_getpar(tp, 1) - 1,
				tty3270_getpar(tp, 0) - 1);
		break;
	case 'd':	/* Set y position. */
		tty3270_goto_xy(tp, tp->cx, tty3270_getpar(tp, 0) - 1);
		break;
	case 'A':	/* Cursor up. */
	case 'F':
		tty3270_goto_xy(tp, tp->cx, tp->cy - tty3270_getpar(tp, 0));
		break;
	case 'B':	/* Cursor down. */
	case 'e':
	case 'E':
		tty3270_goto_xy(tp, tp->cx, tp->cy + tty3270_getpar(tp, 0));
		break;
	case 'C':	/* Cursor forward. */
	case 'a':
		tty3270_goto_xy(tp, tp->cx + tty3270_getpar(tp, 0), tp->cy);
		break;
	case 'D':	/* Cursor backward. */
		tty3270_goto_xy(tp, tp->cx - tty3270_getpar(tp, 0), tp->cy);
		break;
	case 'G':	/* Set x position. */
	case '`':
		tty3270_goto_xy(tp, tty3270_getpar(tp, 0), tp->cy);
		break;
	case 'X':	/* Erase Characters. */
		tty3270_erase_characters(tp, tty3270_getpar(tp, 0));
		break;
	case 'J':	/* Erase display. */
		tty3270_erase_display(tp, tp->esc_par[0]);
		break;
	case 'K':	/* Erase line. */
		tty3270_erase_line(tp, tp->esc_par[0]);
		break;
	case 'P':	/* Delete characters. */
		tty3270_delete_characters(tp, tty3270_getpar(tp, 0));
		break;
	case '@':	/* Insert characters. */
		tty3270_insert_characters(tp, tty3270_getpar(tp, 0));
		break;
	case 's':	/* Save cursor position. */
		tp->saved_cx = tp->cx;
		tp->saved_cy = tp->cy;
		tp->saved_highlight = tp->highlight;
		tp->saved_f_color = tp->f_color;
		break;
	case 'u':	/* Restore cursor position. */
		tty3270_convert_line(tp, tp->cy);
		tty3270_goto_xy(tp, tp->saved_cx, tp->saved_cy);
		tp->highlight = tp->saved_highlight;
		tp->f_color = tp->saved_f_color;
		break;
	}
}

/*
 * String write routine for 3270 ttys
 */
static void
tty3270_do_write(struct tty3270 *tp, struct tty_struct *tty,
		 const unsigned char *buf, int count)
{
	int i_msg, i;

	spin_lock_bh(&tp->view.lock);
	for (i_msg = 0; !tty->flow.stopped && i_msg < count; i_msg++) {
		if (tp->esc_state != 0) {
			/* Continue escape sequence. */
			tty3270_escape_sequence(tp, buf[i_msg]);
			continue;
		}

		switch (buf[i_msg]) {
		case 0x07:		/* '\a' -- Alarm */
			tp->wcc |= TW_PLUSALARM;
			break;
		case 0x08:		/* Backspace. */
			if (tp->cx > 0) {
				tp->cx--;
				tty3270_put_character(tp, ' ');
			}
			break;
		case 0x09:		/* '\t' -- Tabulate */
			for (i = tp->cx % 8; i < 8; i++) {
				if (tp->cx >= tp->view.cols) {
					tty3270_cr(tp);
					tty3270_lf(tp);
					break;
				}
				tty3270_put_character(tp, ' ');
				tp->cx++;
			}
			break;
		case 0x0a:		/* '\n' -- New Line */
			tty3270_cr(tp);
			tty3270_lf(tp);
			break;
		case 0x0c:		/* '\f' -- Form Feed */
			tty3270_erase_display(tp, 2);
			tp->cx = tp->cy = 0;
			break;
		case 0x0d:		/* '\r' -- Carriage Return */
			tp->cx = 0;
			break;
		case 0x0f:		/* SuSE "exit alternate mode" */
			break;
		case 0x1b:		/* Start escape sequence. */
			tty3270_escape_sequence(tp, buf[i_msg]);
			break;
		default:		/* Insert normal character. */
			if (tp->cx >= tp->view.cols) {
				tty3270_cr(tp);
				tty3270_lf(tp);
			}
			tty3270_put_character(tp, buf[i_msg]);
			tp->cx++;
			break;
		}
	}
	/* Convert current line to 3270 data fragment. */
	tty3270_convert_line(tp, tp->cy);

	/* Setup timer to update display after 1/10 second */
	if (!timer_pending(&tp->timer))
		tty3270_set_timer(tp, HZ / 10);

	spin_unlock_bh(&tp->view.lock);
}

/*
 * String write routine for 3270 ttys
 */
static int
tty3270_write(struct tty_struct *tty,
	      const unsigned char *buf, int count)
{
	struct tty3270 *tp;

	tp = tty->driver_data;
	if (!tp)
		return 0;
	if (tp->char_count > 0) {
		tty3270_do_write(tp, tty, tp->char_buf, tp->char_count);
		tp->char_count = 0;
	}
	tty3270_do_write(tp, tty, buf, count);
	return count;
}

/*
 * Put single characters to the ttys character buffer
 */
static int tty3270_put_char(struct tty_struct *tty, unsigned char ch)
{
	struct tty3270 *tp;

	tp = tty->driver_data;
	if (!tp || tp->char_count >= TTY3270_CHAR_BUF_SIZE)
		return 0;
	tp->char_buf[tp->char_count++] = ch;
	return 1;
}

/*
 * Flush all characters from the ttys character buffer put there
 * by tty3270_put_char.
 */
static void
tty3270_flush_chars(struct tty_struct *tty)
{
	struct tty3270 *tp;

	tp = tty->driver_data;
	if (!tp)
		return;
	if (tp->char_count > 0) {
		tty3270_do_write(tp, tty, tp->char_buf, tp->char_count);
		tp->char_count = 0;
	}
}

/*
 * Check for visible/invisible input switches
 */
static void
tty3270_set_termios(struct tty_struct *tty, const struct ktermios *old)
{
	struct tty3270 *tp;
	int new;

	tp = tty->driver_data;
	if (!tp)
		return;
	spin_lock_bh(&tp->view.lock);
	if (L_ICANON(tty)) {
		new = L_ECHO(tty) ? TF_INPUT : TF_INPUTN;
		if (new != tp->inattr) {
			tp->inattr = new;
			tty3270_update_prompt(tp, NULL, 0);
			tty3270_set_timer(tp, 1);
		}
	}
	spin_unlock_bh(&tp->view.lock);
}

/*
 * Disable reading from a 3270 tty
 */
static void
tty3270_throttle(struct tty_struct *tty)
{
	struct tty3270 *tp;

	tp = tty->driver_data;
	if (!tp)
		return;
	tp->throttle = 1;
}

/*
 * Enable reading from a 3270 tty
 */
static void
tty3270_unthrottle(struct tty_struct *tty)
{
	struct tty3270 *tp;

	tp = tty->driver_data;
	if (!tp)
		return;
	tp->throttle = 0;
	if (tp->attn)
		tty3270_issue_read(tp, 1);
}

/*
 * Hang up the tty device.
 */
static void
tty3270_hangup(struct tty_struct *tty)
{
	struct tty3270 *tp;

	tp = tty->driver_data;
	if (!tp)
		return;
	spin_lock_bh(&tp->view.lock);
	tp->cx = tp->saved_cx = 0;
	tp->cy = tp->saved_cy = 0;
	tp->highlight = tp->saved_highlight = TAX_RESET;
	tp->f_color = tp->saved_f_color = TAC_RESET;
	tty3270_blank_screen(tp);
	while (tp->nr_lines < tp->view.rows - 2)
		tty3270_blank_line(tp);
	tp->update_flags = TTY_UPDATE_ALL;
	spin_unlock_bh(&tp->view.lock);
	tty3270_set_timer(tp, 1);
}

static void
tty3270_wait_until_sent(struct tty_struct *tty, int timeout)
{
}

static int tty3270_ioctl(struct tty_struct *tty, unsigned int cmd,
			 unsigned long arg)
{
	struct tty3270 *tp;

	tp = tty->driver_data;
	if (!tp)
		return -ENODEV;
	if (tty_io_error(tty))
		return -EIO;
	return kbd_ioctl(tp->kbd, cmd, arg);
}

#ifdef CONFIG_COMPAT
static long
tty3270_compat_ioctl(struct tty_struct *tty, 1857 - unsigned int cmd, unsigned long arg) 1858 - { 1859 - struct tty3270 *tp; 1860 - 1861 - tp = tty->driver_data; 1862 - if (!tp) 1863 - return -ENODEV; 1864 - if (tty_io_error(tty)) 1865 - return -EIO; 1866 - return kbd_ioctl(tp->kbd, cmd, (unsigned long)compat_ptr(arg)); 1867 - } 1868 - #endif 1869 - 1870 - static const struct tty_operations tty3270_ops = { 1871 - .install = tty3270_install, 1872 - .cleanup = tty3270_cleanup, 1873 - .open = tty3270_open, 1874 - .close = tty3270_close, 1875 - .write = tty3270_write, 1876 - .put_char = tty3270_put_char, 1877 - .flush_chars = tty3270_flush_chars, 1878 - .write_room = tty3270_write_room, 1879 - .throttle = tty3270_throttle, 1880 - .unthrottle = tty3270_unthrottle, 1881 - .hangup = tty3270_hangup, 1882 - .wait_until_sent = tty3270_wait_until_sent, 1883 - .ioctl = tty3270_ioctl, 1884 - #ifdef CONFIG_COMPAT 1885 - .compat_ioctl = tty3270_compat_ioctl, 1886 - #endif 1887 - .set_termios = tty3270_set_termios 1888 - }; 1889 - 1890 - static void tty3270_create_cb(int minor) 1891 - { 1892 - tty_register_device(tty3270_driver, minor - RAW3270_FIRSTMINOR, NULL); 1893 - } 1894 - 1895 - static void tty3270_destroy_cb(int minor) 1896 - { 1897 - tty_unregister_device(tty3270_driver, minor - RAW3270_FIRSTMINOR); 1898 - } 1899 - 1900 - static struct raw3270_notifier tty3270_notifier = 1901 - { 1902 - .create = tty3270_create_cb, 1903 - .destroy = tty3270_destroy_cb, 1904 - }; 1905 - 1906 - /* 1907 - * 3270 tty registration code called from tty_init(). 1908 - * Most kernel services (incl. kmalloc) are available at this poimt. 
1909 - */ 1910 - static int __init tty3270_init(void) 1911 - { 1912 - struct tty_driver *driver; 1913 - int ret; 1914 - 1915 - driver = tty_alloc_driver(RAW3270_MAXDEVS, 1916 - TTY_DRIVER_REAL_RAW | 1917 - TTY_DRIVER_DYNAMIC_DEV | 1918 - TTY_DRIVER_RESET_TERMIOS); 1919 - if (IS_ERR(driver)) 1920 - return PTR_ERR(driver); 1921 - 1922 - /* 1923 - * Initialize the tty_driver structure 1924 - * Entries in tty3270_driver that are NOT initialized: 1925 - * proc_entry, set_termios, flush_buffer, set_ldisc, write_proc 1926 - */ 1927 - driver->driver_name = "tty3270"; 1928 - driver->name = "3270/tty"; 1929 - driver->major = IBM_TTY3270_MAJOR; 1930 - driver->minor_start = RAW3270_FIRSTMINOR; 1931 - driver->name_base = RAW3270_FIRSTMINOR; 1932 - driver->type = TTY_DRIVER_TYPE_SYSTEM; 1933 - driver->subtype = SYSTEM_TYPE_TTY; 1934 - driver->init_termios = tty_std_termios; 1935 - tty_set_operations(driver, &tty3270_ops); 1936 - ret = tty_register_driver(driver); 1937 - if (ret) { 1938 - tty_driver_kref_put(driver); 1939 - return ret; 1940 - } 1941 - tty3270_driver = driver; 1942 - raw3270_register_notifier(&tty3270_notifier); 1943 - return 0; 1944 - } 1945 - 1946 - static void __exit 1947 - tty3270_exit(void) 1948 - { 1949 - struct tty_driver *driver; 1950 - 1951 - raw3270_unregister_notifier(&tty3270_notifier); 1952 - driver = tty3270_driver; 1953 - tty3270_driver = NULL; 1954 - tty_unregister_driver(driver); 1955 - tty_driver_kref_put(driver); 1956 - tty3270_del_views(); 1957 - } 1958 - 1959 - MODULE_LICENSE("GPL"); 1960 - MODULE_ALIAS_CHARDEV_MAJOR(IBM_TTY3270_MAJOR); 1961 - 1962 - module_init(tty3270_init); 1963 - module_exit(tty3270_exit);
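As an aside on the removed tty3270_do_write() above: its '\t' case advances the cursor to the next 8-column tab stop, wrapping to a new line when the right margin is hit first. A minimal userspace sketch of that arithmetic (not kernel code; next_tab_stop() is a hypothetical helper, with 0 standing in for the cr/lf wrap the driver performs):

```c
#include <assert.h>

/*
 * Sketch of the tab handling in the old tty3270_do_write():
 * starting at column cx, emit spaces until the next multiple of 8,
 * unless the right margin (cols) is reached first, in which case
 * the driver wraps (modelled here as returning 0 for column 0 of
 * the next line).
 */
static int next_tab_stop(int cx, int cols)
{
	int i;

	for (i = cx % 8; i < 8; i++) {
		if (cx >= cols)
			return 0;	/* wrapped to a new line */
		cx++;			/* one space written */
	}
	return cx;
}
```

So a tab at column 3 of an 80-column screen lands on column 8, and one at column 8 lands on 16, matching fixed 8-column tab stops.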
-15
drivers/s390/char/tty3270.h
-/* SPDX-License-Identifier: GPL-2.0 */
-/*
- * Copyright IBM Corp. 2007
- *
- */
-
-#ifndef __DRIVERS_S390_CHAR_TTY3270_H
-#define __DRIVERS_S390_CHAR_TTY3270_H
-
-#include <linux/tty.h>
-#include <linux/tty_driver.h>
-
-extern struct tty_driver *tty3270_driver;
-
-#endif /* __DRIVERS_S390_CHAR_TTY3270_H */
+15 -6
drivers/s390/cio/css.c
···
 	spin_unlock_irqrestore(&slow_subchannel_lock, flags);
 }
 
-static int __unset_registered(struct device *dev, void *data)
+static int __unset_validpath(struct device *dev, void *data)
 {
 	struct idset *set = data;
 	struct subchannel *sch = to_subchannel(dev);
+	struct pmcw *pmcw = &sch->schib.pmcw;
 
-	idset_sch_del(set, sch->schid);
+	/* Here we want to make sure that we are considering only those subchannels
+	 * which do not have an operational device attached to it. This can be found
+	 * with the help of PAM and POM values of pmcw. OPM provides the information
+	 * about any path which is currently vary-off, so that we should not consider.
+	 */
+	if (sch->st == SUBCHANNEL_TYPE_IO &&
+	    (sch->opm & pmcw->pam & pmcw->pom))
+		idset_sch_del(set, sch->schid);
+
 	return 0;
 }
···
 	}
 	idset_fill(set);
 	switch (cond) {
-	case CSS_EVAL_UNREG:
-		bus_for_each_dev(&css_bus_type, NULL, set, __unset_registered);
+	case CSS_EVAL_NO_PATH:
+		bus_for_each_dev(&css_bus_type, NULL, set, __unset_validpath);
 		break;
 	case CSS_EVAL_NOT_ONLINE:
 		bus_for_each_dev(&css_bus_type, NULL, set, __unset_online);
···
 	flush_workqueue(cio_work_q);
 }
 
-/* Schedule reprobing of all unregistered subchannels. */
+/* Schedule reprobing of all subchannels with no valid operational path. */
 void css_schedule_reprobe(void)
 {
 	/* Schedule with a delay to allow merging of subsequent calls. */
-	css_schedule_eval_cond(CSS_EVAL_UNREG, 1 * HZ);
+	css_schedule_eval_cond(CSS_EVAL_NO_PATH, 1 * HZ);
 }
 EXPORT_SYMBOL_GPL(css_schedule_reprobe);
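The path test added to __unset_validpath() above is a three-way AND of per-path bit masks: a subchannel counts as having a valid operational path only if at least one channel path is simultaneously in the online mask (OPM), available (PAM) and operational (POM). A minimal userspace sketch of that check (not kernel code; has_valid_path() is a hypothetical helper):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Sketch of the mask logic in __unset_validpath(): each of the
 * 8-bit masks has one bit per channel path. A path is usable only
 * if its bit is set in all three masks, so the subchannel has a
 * valid path iff the AND of the masks is non-zero.
 */
static bool has_valid_path(unsigned char opm, unsigned char pam,
			   unsigned char pom)
{
	return (opm & pam & pom) != 0;
}
```

A path that is available and operational but varied off (its OPM bit clear) therefore does not count, which is exactly why the comment in the hunk says vary-off paths "should not" be considered.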
+1 -1
drivers/s390/cio/css.h
···
  * Conditions used to specify which subchannels need evaluation
  */
 enum css_eval_cond {
-	CSS_EVAL_UNREG,		/* unregistered subchannels */
+	CSS_EVAL_NO_PATH,	/* Subchannels with no operational paths */
 	CSS_EVAL_NOT_ONLINE	/* sch without an online-device */
 };
+9
drivers/s390/cio/device.c
···
 
 static void ccw_device_unregister(struct ccw_device *cdev)
 {
+	mutex_lock(&cdev->reg_mutex);
 	if (device_is_registered(&cdev->dev)) {
 		/* Undo device_add(). */
 		device_del(&cdev->dev);
 	}
+	mutex_unlock(&cdev->reg_mutex);
+
 	if (cdev->private->flags.initialized) {
 		cdev->private->flags.initialized = 0;
 		/* Release reference from device_initialize(). */
···
 {
 	int ret;
 
+	mutex_lock(&cdev->reg_mutex);
 	if (device_is_registered(&cdev->dev)) {
 		device_release_driver(&cdev->dev);
 		ret = device_attach(&cdev->dev);
 		WARN_ON(ret == -ENODEV);
 	}
+	mutex_unlock(&cdev->reg_mutex);
 }
 
 static void
···
 	INIT_LIST_HEAD(&priv->cmb_list);
 	init_waitqueue_head(&priv->wait_q);
 	timer_setup(&priv->timer, ccw_device_timeout, 0);
+	mutex_init(&cdev->reg_mutex);
 
 	atomic_set(&priv->onoff, 0);
 	cdev->ccwlock = sch->lock;
···
 	 * be registered). We need to reprobe since we may now have sense id
 	 * information.
 	 */
+	mutex_lock(&cdev->reg_mutex);
 	if (device_is_registered(&cdev->dev)) {
 		if (!cdev->drv) {
 			ret = device_reprobe(&cdev->dev);
···
 		spin_lock_irqsave(sch->lock, flags);
 		sch_set_cdev(sch, NULL);
 		spin_unlock_irqrestore(sch->lock, flags);
+		mutex_unlock(&cdev->reg_mutex);
 		/* Release initial device reference. */
 		put_device(&cdev->dev);
 		goto out_err;
 	}
 out:
 	cdev->private->flags.recog_done = 1;
+	mutex_unlock(&cdev->reg_mutex);
 	wake_up(&cdev->private->wait_q);
 out_err:
 	if (adjust_init_count && atomic_dec_and_test(&ccw_device_init_count))
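The device.c hunks above all follow one pattern: the device_is_registered() check and the action that depends on it (device_del(), device_attach(), and so on) are placed under the new cdev->reg_mutex, so a concurrent unregister cannot slip in between the check and the action. A minimal userspace sketch of that check-and-act-under-one-lock pattern (not kernel code; the flag and helpers are hypothetical stand-ins for the driver-core calls):

```c
#include <assert.h>
#include <pthread.h>

/*
 * Sketch of the reg_mutex pattern: the "is it registered?" test and
 * the state change it guards happen atomically with respect to each
 * other, so two unregisters racing can never both see "registered".
 */
static pthread_mutex_t reg_mutex = PTHREAD_MUTEX_INITIALIZER;
static int registered;

static void do_register(void)
{
	pthread_mutex_lock(&reg_mutex);
	registered = 1;
	pthread_mutex_unlock(&reg_mutex);
}

static int do_unregister(void)
{
	int was_registered;

	pthread_mutex_lock(&reg_mutex);
	was_registered = registered;	/* check ... */
	registered = 0;			/* ... and act, atomically */
	pthread_mutex_unlock(&reg_mutex);
	return was_registered;
}
```

Run sequentially, a second do_unregister() correctly reports that nothing was registered; under threads, the mutex guarantees exactly one caller observes the registered state.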
+197 -168
drivers/s390/cio/vfio_ccw_cp.c
···
 /*
  * page_array_alloc() - alloc memory for page array
  * @pa: page_array on which to perform the operation
- * @iova: target guest physical address
- * @len: number of bytes that should be pinned from @iova
+ * @len: number of pages that should be pinned from @iova
  *
  * Attempt to allocate memory for page array.
  *
···
  * -EINVAL if pa->pa_nr is not initially zero, or pa->pa_iova is not NULL
  * -ENOMEM if alloc failed
  */
-static int page_array_alloc(struct page_array *pa, u64 iova, unsigned int len)
+static int page_array_alloc(struct page_array *pa, unsigned int len)
 {
-	int i;
-
 	if (pa->pa_nr || pa->pa_iova)
 		return -EINVAL;
 
-	pa->pa_nr = ((iova & ~PAGE_MASK) + len + (PAGE_SIZE - 1)) >> PAGE_SHIFT;
-	if (!pa->pa_nr)
+	if (len == 0)
 		return -EINVAL;
 
-	pa->pa_iova = kcalloc(pa->pa_nr,
-			      sizeof(*pa->pa_iova) + sizeof(*pa->pa_page),
-			      GFP_KERNEL);
-	if (unlikely(!pa->pa_iova)) {
-		pa->pa_nr = 0;
-		return -ENOMEM;
-	}
-	pa->pa_page = (struct page **)&pa->pa_iova[pa->pa_nr];
+	pa->pa_nr = len;
 
-	pa->pa_iova[0] = iova;
-	pa->pa_page[0] = NULL;
-	for (i = 1; i < pa->pa_nr; i++) {
-		pa->pa_iova[i] = pa->pa_iova[i - 1] + PAGE_SIZE;
-		pa->pa_page[i] = NULL;
+	pa->pa_iova = kcalloc(len, sizeof(*pa->pa_iova), GFP_KERNEL);
+	if (!pa->pa_iova)
+		return -ENOMEM;
+
+	pa->pa_page = kcalloc(len, sizeof(*pa->pa_page), GFP_KERNEL);
+	if (!pa->pa_page) {
+		kfree(pa->pa_iova);
+		return -ENOMEM;
 	}
 
 	return 0;
···
  * @pa: page_array on which to perform the operation
  * @vdev: the vfio device to perform the operation
  * @pa_nr: number of user pages to unpin
+ * @unaligned: were pages unaligned on the pin request
  *
  * Only unpin if any pages were pinned to begin with, i.e. pa_nr > 0,
  * otherwise only clear pa->pa_nr
  */
 static void page_array_unpin(struct page_array *pa,
-			     struct vfio_device *vdev, int pa_nr)
+			     struct vfio_device *vdev, int pa_nr, bool unaligned)
 {
 	int unpinned = 0, npage = 1;
 
···
 		dma_addr_t *last = &first[npage];
 
 		if (unpinned + npage < pa_nr &&
-		    *first + npage * PAGE_SIZE == *last) {
+		    *first + npage * PAGE_SIZE == *last &&
+		    !unaligned) {
 			npage++;
 			continue;
 		}
···
 /*
  * page_array_pin() - Pin user pages in memory
  * @pa: page_array on which to perform the operation
- * @mdev: the mediated device to perform pin operations
+ * @vdev: the vfio device to perform pin operations
+ * @unaligned: are pages aligned to 4K boundary?
  *
  * Returns number of pages pinned upon success.
  * If the pin request partially succeeds, or fails completely,
  * all pages are left unpinned and a negative error value is returned.
+ *
+ * Requests to pin "aligned" pages can be coalesced into a single
+ * vfio_pin_pages request for the sake of efficiency, based on the
+ * expectation of 4K page requests. Unaligned requests are probably
+ * dealing with 2K "pages", and cannot be coalesced without
+ * reworking this logic to incorporate that math.
  */
-static int page_array_pin(struct page_array *pa, struct vfio_device *vdev)
+static int page_array_pin(struct page_array *pa, struct vfio_device *vdev, bool unaligned)
 {
 	int pinned = 0, npage = 1;
 	int ret = 0;
···
 		dma_addr_t *last = &first[npage];
 
 		if (pinned + npage < pa->pa_nr &&
-		    *first + npage * PAGE_SIZE == *last) {
+		    *first + npage * PAGE_SIZE == *last &&
+		    !unaligned) {
 			npage++;
 			continue;
 		}
···
 	return ret;
 
 err_out:
-	page_array_unpin(pa, vdev, pinned);
+	page_array_unpin(pa, vdev, pinned, unaligned);
 	return ret;
 }
 
 /* Unpin the pages before releasing the memory. */
-static void page_array_unpin_free(struct page_array *pa, struct vfio_device *vdev)
+static void page_array_unpin_free(struct page_array *pa, struct vfio_device *vdev, bool unaligned)
 {
-	page_array_unpin(pa, vdev, pa->pa_nr);
+	page_array_unpin(pa, vdev, pa->pa_nr, unaligned);
+	kfree(pa->pa_page);
 	kfree(pa->pa_iova);
 }
···
 	 * idaw.
 	 */
 
-	for (i = 0; i < pa->pa_nr; i++)
+	for (i = 0; i < pa->pa_nr; i++) {
 		idaws[i] = page_to_phys(pa->pa_page[i]);
 
-	/* Adjust the first IDAW, since it may not start on a page boundary */
-	idaws[0] += pa->pa_iova[0] & (PAGE_SIZE - 1);
+		/* Incorporate any offset from each starting address */
+		idaws[i] += pa->pa_iova[i] & (PAGE_SIZE - 1);
+	}
 }
 
 static void convert_ccw0_to_ccw1(struct ccw1 *source, unsigned long len)
···
 	}
 }
 
-/*
- * Within the domain (@mdev), copy @n bytes from a guest physical
- * address (@iova) to a host physical address (@to).
- */
-static long copy_from_iova(struct vfio_device *vdev, void *to, u64 iova,
-			   unsigned long n)
-{
-	struct page_array pa = {0};
-	int i, ret;
-	unsigned long l, m;
-
-	ret = page_array_alloc(&pa, iova, n);
-	if (ret < 0)
-		return ret;
-
-	ret = page_array_pin(&pa, vdev);
-	if (ret < 0) {
-		page_array_unpin_free(&pa, vdev);
-		return ret;
-	}
-
-	l = n;
-	for (i = 0; i < pa.pa_nr; i++) {
-		void *from = kmap_local_page(pa.pa_page[i]);
-
-		m = PAGE_SIZE;
-		if (i == 0) {
-			from += iova & (PAGE_SIZE - 1);
-			m -= iova & (PAGE_SIZE - 1);
-		}
-
-		m = min(l, m);
-		memcpy(to + (n - l), from, m);
-		kunmap_local(from);
-
-		l -= m;
-		if (l == 0)
-			break;
-	}
-
-	page_array_unpin_free(&pa, vdev);
-
-	return l;
-}
+#define idal_is_2k(_cp) (!(_cp)->orb.cmd.c64 || (_cp)->orb.cmd.i2k)
 
 /*
  * Helpers to operate ccwchain.
···
 static struct ccwchain *ccwchain_alloc(struct channel_program *cp, int len)
 {
 	struct ccwchain *chain;
-	void *data;
-	size_t size;
 
-	/* Make ccw address aligned to 8. */
-	size = ((sizeof(*chain) + 7L) & -8L) +
-	       sizeof(*chain->ch_ccw) * len +
-	       sizeof(*chain->ch_pa) * len;
-	chain = kzalloc(size, GFP_DMA | GFP_KERNEL);
+	chain = kzalloc(sizeof(*chain), GFP_KERNEL);
 	if (!chain)
 		return NULL;
 
-	data = (u8 *)chain + ((sizeof(*chain) + 7L) & -8L);
-	chain->ch_ccw = (struct ccw1 *)data;
+	chain->ch_ccw = kcalloc(len, sizeof(*chain->ch_ccw), GFP_DMA | GFP_KERNEL);
+	if (!chain->ch_ccw)
+		goto out_err;
 
-	data = (u8 *)(chain->ch_ccw) + sizeof(*chain->ch_ccw) * len;
-	chain->ch_pa = (struct page_array *)data;
-
-	chain->ch_len = len;
+	chain->ch_pa = kcalloc(len, sizeof(*chain->ch_pa), GFP_KERNEL);
+	if (!chain->ch_pa)
+		goto out_err;
 
 	list_add_tail(&chain->next, &cp->ccwchain_list);
 
 	return chain;
+
+out_err:
+	kfree(chain->ch_ccw);
+	kfree(chain);
+	return NULL;
 }
 
 static void ccwchain_free(struct ccwchain *chain)
 {
 	list_del(&chain->next);
+	kfree(chain->ch_pa);
+	kfree(chain->ch_ccw);
 	kfree(chain);
 }
 
 /* Free resource for a ccw that allocated memory for its cda. */
 static void ccwchain_cda_free(struct ccwchain *chain, int idx)
 {
-	struct ccw1 *ccw = chain->ch_ccw + idx;
+	struct ccw1 *ccw = &chain->ch_ccw[idx];
 
 	if (ccw_is_tic(ccw))
 		return;
···
 	do {
 		cnt++;
-
-		/*
-		 * As we don't want to fail direct addressing even if the
-		 * orb specified one of the unsupported formats, we defer
-		 * checking for IDAWs in unsupported formats to here.
-		 */
-		if ((!cp->orb.cmd.c64 || cp->orb.cmd.i2k) && ccw_is_idal(ccw))
-			return -EOPNOTSUPP;
 
 		/*
 		 * We want to keep counting if the current CCW has the
···
 	int len, ret;
 
 	/* Copy 2K (the most we support today) of possible CCWs */
-	len = copy_from_iova(vdev, cp->guest_cp, cda,
-			     CCWCHAIN_LEN_MAX * sizeof(struct ccw1));
-	if (len)
-		return len;
+	ret = vfio_dma_rw(vdev, cda, cp->guest_cp, CCWCHAIN_LEN_MAX * sizeof(struct ccw1), false);
+	if (ret)
+		return ret;
 
 	/* Convert any Format-0 CCWs to Format-1 */
 	if (!cp->orb.cmd.fmt)
···
 	chain = ccwchain_alloc(cp, len);
 	if (!chain)
 		return -ENOMEM;
+
+	chain->ch_len = len;
 	chain->ch_iova = cda;
 
 	/* Copy the actual CCWs into the new chain */
···
 	int i, ret;
 
 	for (i = 0; i < chain->ch_len; i++) {
-		tic = chain->ch_ccw + i;
+		tic = &chain->ch_ccw[i];
 
 		if (!ccw_is_tic(tic))
 			continue;
···
 	return 0;
 }
 
-static int ccwchain_fetch_tic(struct ccwchain *chain,
-			      int idx,
+static int ccwchain_fetch_tic(struct ccw1 *ccw,
 			      struct channel_program *cp)
 {
-	struct ccw1 *ccw = chain->ch_ccw + idx;
 	struct ccwchain *iter;
 	u32 ccw_head;
 
···
 	return -EFAULT;
 }
 
-static int ccwchain_fetch_direct(struct ccwchain *chain,
-				 int idx,
-				 struct channel_program *cp)
+static unsigned long *get_guest_idal(struct ccw1 *ccw,
+				     struct channel_program *cp,
+				     int idaw_nr)
 {
 	struct vfio_device *vdev =
 		&container_of(cp, struct vfio_ccw_private, cp)->vdev;
-	struct ccw1 *ccw;
-	struct page_array *pa;
-	u64 iova;
 	unsigned long *idaws;
+	unsigned int *idaws_f1;
+	int idal_len = idaw_nr * sizeof(*idaws);
+	int idaw_size = idal_is_2k(cp) ? PAGE_SIZE / 2 : PAGE_SIZE;
+	int idaw_mask = ~(idaw_size - 1);
+	int i, ret;
+
+	idaws = kcalloc(idaw_nr, sizeof(*idaws), GFP_DMA | GFP_KERNEL);
+	if (!idaws)
+		return ERR_PTR(-ENOMEM);
+
+	if (ccw_is_idal(ccw)) {
+		/* Copy IDAL from guest */
+		ret = vfio_dma_rw(vdev, ccw->cda, idaws, idal_len, false);
+		if (ret) {
+			kfree(idaws);
+			return ERR_PTR(ret);
+		}
+	} else {
+		/* Fabricate an IDAL based off CCW data address */
+		if (cp->orb.cmd.c64) {
+			idaws[0] = ccw->cda;
+			for (i = 1; i < idaw_nr; i++)
+				idaws[i] = (idaws[i - 1] + idaw_size) & idaw_mask;
+		} else {
+			idaws_f1 = (unsigned int *)idaws;
+			idaws_f1[0] = ccw->cda;
+			for (i = 1; i < idaw_nr; i++)
+				idaws_f1[i] = (idaws_f1[i - 1] + idaw_size) & idaw_mask;
+		}
+	}
+
+	return idaws;
+}
+
+/*
+ * ccw_count_idaws() - Calculate the number of IDAWs needed to transfer
+ * a specified amount of data
+ *
+ * @ccw: The Channel Command Word being translated
+ * @cp: Channel Program being processed
+ *
+ * The ORB is examined, since it specifies what IDAWs could actually be
+ * used by any CCW in the channel program, regardless of whether or not
+ * the CCW actually does. An ORB that does not specify Format-2-IDAW
+ * Control could still contain a CCW with an IDAL, which would be
+ * Format-1 and thus only move 2K with each IDAW. Thus all CCWs within
+ * the channel program must follow the same size requirements.
+ */
+static int ccw_count_idaws(struct ccw1 *ccw,
+			   struct channel_program *cp)
+{
+	struct vfio_device *vdev =
+		&container_of(cp, struct vfio_ccw_private, cp)->vdev;
+	u64 iova;
+	int size = cp->orb.cmd.c64 ? sizeof(u64) : sizeof(u32);
 	int ret;
 	int bytes = 1;
-	int idaw_nr, idal_len;
-	int i;
-
-	ccw = chain->ch_ccw + idx;
 
 	if (ccw->count)
 		bytes = ccw->count;
 
-	/* Calculate size of IDAL */
 	if (ccw_is_idal(ccw)) {
-		/* Read first IDAW to see if it's 4K-aligned or not. */
-		/* All subsequent IDAws will be 4K-aligned. */
-		ret = copy_from_iova(vdev, &iova, ccw->cda, sizeof(iova));
+		/* Read first IDAW to check its starting address. */
+		/* All subsequent IDAWs will be 2K- or 4K-aligned. */
+		ret = vfio_dma_rw(vdev, ccw->cda, &iova, size, false);
 		if (ret)
 			return ret;
+
+		/*
+		 * Format-1 IDAWs only occupy the first 32 bits,
+		 * and bit 0 is always off.
+		 */
+		if (!cp->orb.cmd.c64)
+			iova = iova >> 32;
 	} else {
 		iova = ccw->cda;
 	}
-	idaw_nr = idal_nr_words((void *)iova, bytes);
-	idal_len = idaw_nr * sizeof(*idaws);
+
+	/* Format-1 IDAWs operate on 2K each */
+	if (!cp->orb.cmd.c64)
+		return idal_2k_nr_words((void *)iova, bytes);
+
+	/* Using the 2K variant of Format-2 IDAWs? */
+	if (cp->orb.cmd.i2k)
+		return idal_2k_nr_words((void *)iova, bytes);
+
+	/* The 'usual' case is 4K Format-2 IDAWs */
+	return idal_nr_words((void *)iova, bytes);
+}
+
+static int ccwchain_fetch_ccw(struct ccw1 *ccw,
+			      struct page_array *pa,
+			      struct channel_program *cp)
+{
+	struct vfio_device *vdev =
+		&container_of(cp, struct vfio_ccw_private, cp)->vdev;
+	unsigned long *idaws;
+	unsigned int *idaws_f1;
+	int ret;
+	int idaw_nr;
+	int i;
+
+	/* Calculate size of IDAL */
+	idaw_nr = ccw_count_idaws(ccw, cp);
+	if (idaw_nr < 0)
+		return idaw_nr;
 
 	/* Allocate an IDAL from host storage */
-	idaws = kcalloc(idaw_nr, sizeof(*idaws), GFP_DMA | GFP_KERNEL);
-	if (!idaws) {
-		ret = -ENOMEM;
+	idaws = get_guest_idal(ccw, cp, idaw_nr);
+	if (IS_ERR(idaws)) {
+		ret = PTR_ERR(idaws);
 		goto out_init;
 	}
 
···
 	 * required for the data transfer, since we only only support
 	 * 4K IDAWs today.
 	 */
-	pa = chain->ch_pa + idx;
-	ret = page_array_alloc(pa, iova, bytes);
+	ret = page_array_alloc(pa, idaw_nr);
 	if (ret < 0)
 		goto out_free_idaws;
 
-	if (ccw_is_idal(ccw)) {
-		/* Copy guest IDAL into host IDAL */
-		ret = copy_from_iova(vdev, idaws, ccw->cda, idal_len);
-		if (ret)
-			goto out_unpin;
-
-		/*
-		 * Copy guest IDAWs into page_array, in case the memory they
-		 * occupy is not contiguous.
-		 */
-		for (i = 0; i < idaw_nr; i++)
+	/*
+	 * Copy guest IDAWs into page_array, in case the memory they
+	 * occupy is not contiguous.
+	 */
+	idaws_f1 = (unsigned int *)idaws;
+	for (i = 0; i < idaw_nr; i++) {
+		if (cp->orb.cmd.c64)
 			pa->pa_iova[i] = idaws[i];
-	} else {
-		/*
-		 * No action is required here; the iova addresses in page_array
-		 * were initialized sequentially in page_array_alloc() beginning
-		 * with the contents of ccw->cda.
-		 */
+		else
+			pa->pa_iova[i] = idaws_f1[i];
 	}
 
 	if (ccw_does_data_transfer(ccw)) {
-		ret = page_array_pin(pa, vdev);
+		ret = page_array_pin(pa, vdev, idal_is_2k(cp));
 		if (ret < 0)
 			goto out_unpin;
 	} else {
···
 	return 0;
 
 out_unpin:
-	page_array_unpin_free(pa, vdev);
+	page_array_unpin_free(pa, vdev, idal_is_2k(cp));
 out_free_idaws:
 	kfree(idaws);
 out_init:
···
 * and to get rid of the cda 2G limitiaion of ccw1, we'll translate
 * direct ccws to idal ccws.
 */
-static int ccwchain_fetch_one(struct ccwchain *chain,
-			      int idx,
+static int ccwchain_fetch_one(struct ccw1 *ccw,
+			      struct page_array *pa,
 			      struct channel_program *cp)
+
 {
-	struct ccw1 *ccw = chain->ch_ccw + idx;
-
 	if (ccw_is_tic(ccw))
-		return ccwchain_fetch_tic(chain, idx, cp);
+		return ccwchain_fetch_tic(ccw, cp);
 
-	return ccwchain_fetch_direct(chain, idx, cp);
+	return ccwchain_fetch_ccw(ccw, pa, cp);
 }
 
 /**
  * cp_init() - allocate ccwchains for a channel program.
  * @cp: channel_program on which to perform the operation
- * @mdev: the mediated device to perform pin/unpin operations
  * @orb: control block for the channel program from the guest
  *
  * This creates one or more ccwchain(s), and copies the raw data of
···
 	/* Build a ccwchain for the first CCW segment */
 	ret = ccwchain_handle_ccw(orb->cmd.cpa, cp);
 
-	if (!ret) {
+	if (!ret)
 		cp->initialized = true;
-
-		/* It is safe to force: if it was not set but idals used
-		 * ccwchain_calc_length would have returned an error.
-		 */
-		cp->orb.cmd.c64 = 1;
-	}
 
 	return ret;
 }
···
 	cp->initialized = false;
 	list_for_each_entry_safe(chain, temp, &cp->ccwchain_list, next) {
 		for (i = 0; i < chain->ch_len; i++) {
-			page_array_unpin_free(chain->ch_pa + i, vdev);
+			page_array_unpin_free(&chain->ch_pa[i], vdev, idal_is_2k(cp));
 			ccwchain_cda_free(chain, i);
 		}
 		ccwchain_free(chain);
···
 int cp_prefetch(struct channel_program *cp)
 {
 	struct ccwchain *chain;
+	struct ccw1 *ccw;
+	struct page_array *pa;
 	int len, idx, ret;
 
 	/* this is an error in the caller */
···
 	list_for_each_entry(chain, &cp->ccwchain_list, next) {
 		len = chain->ch_len;
 		for (idx = 0; idx < len; idx++) {
-			ret = ccwchain_fetch_one(chain, idx, cp);
+			ccw = &chain->ch_ccw[idx];
+			pa = &chain->ch_pa[idx];
+
+			ret = ccwchain_fetch_one(ccw, pa, cp);
 			if (ret)
 				goto out_err;
 		}
···
 /**
  * cp_get_orb() - get the orb of the channel program
  * @cp: channel_program on which to perform the operation
- * @intparm: new intparm for the returned orb
- * @lpm: candidate value of the logical-path mask for the returned orb
+ * @sch: subchannel the operation will be performed against
  *
  * This function returns the address of the updated orb of the channel
  * program. Channel I/O device drivers could use this orb to issue a
  * ssch.
  */
-union orb *cp_get_orb(struct channel_program *cp, u32 intparm, u8 lpm)
+union orb *cp_get_orb(struct channel_program *cp, struct subchannel *sch)
 {
 	union orb *orb;
 	struct ccwchain *chain;
···
 
 	orb = &cp->orb;
 
-	orb->cmd.intparm = intparm;
+	orb->cmd.intparm = (u32)virt_to_phys(sch);
 	orb->cmd.fmt = 1;
-	orb->cmd.key = PAGE_DEFAULT_KEY >> 4;
+
+	/*
+	 * Everything built by vfio-ccw is a Format-2 IDAL.
+	 * If the input was a Format-1 IDAL, indicate that
+	 * 2K Format-2 IDAWs were created here.
+	 */
+	if (!orb->cmd.c64)
+		orb->cmd.i2k = 1;
+	orb->cmd.c64 = 1;
 
 	if (orb->cmd.lpm == 0)
-		orb->cmd.lpm = lpm;
+		orb->cmd.lpm = sch->lpm;
 
 	chain = list_first_entry(&cp->ccwchain_list, struct ccwchain, next);
 	cpa = chain->ch_ccw;
···
 
 	list_for_each_entry(chain, &cp->ccwchain_list, next) {
 		for (i = 0; i < chain->ch_len; i++)
-			if (page_array_iova_pinned(&chain->ch_pa[i], iova, length))
+			if (page_array_iova_pinned(&chain->ch_pa[i], iova, length))
 				return true;
 	}
 
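The heart of ccw_count_idaws() above is address arithmetic: the number of IDAWs needed to cover bytes of data starting at iova, where each IDAW spans 2K (Format-1, or 2K Format-2) or 4K (the usual Format-2 case). A minimal userspace sketch of that calculation, mirroring what the kernel's idal_nr_words()/idal_2k_nr_words() helpers compute (not kernel code; count_idaws() is a hypothetical stand-in):

```c
#include <assert.h>

/*
 * Number of IDAWs of size idaw_size (2048 or 4096, a power of two)
 * needed to describe [iova, iova + bytes): round the start down to
 * an IDAW boundary, then count how many IDAW-sized blocks the span
 * touches.
 */
static int count_idaws(unsigned long iova, unsigned int bytes,
		       unsigned int idaw_size)
{
	unsigned long first = iova & ~((unsigned long)idaw_size - 1);
	unsigned long last = iova + bytes - 1;

	return (int)((last - first) / idaw_size + 1);
}
```

This makes the comment in the hunk concrete: the same 4K transfer needs more IDAWs under the 2K formats, which is exactly why the IDAW size must be settled (from the ORB's c64/i2k bits) before the host IDAL is allocated.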
+1 -2
drivers/s390/cio/vfio_ccw_cp.h
···
  * struct channel_program - manage information for channel program
  * @ccwchain_list: list head of ccwchains
  * @orb: orb for the currently processed ssch request
- * @mdev: the mediated device to perform page pinning/unpinning
  * @initialized: whether this instance is actually initialized
  *
  * @ccwchain_list is the head of a ccwchain list, that contents the
···
 int cp_init(struct channel_program *cp, union orb *orb);
 void cp_free(struct channel_program *cp);
 int cp_prefetch(struct channel_program *cp);
-union orb *cp_get_orb(struct channel_program *cp, u32 intparm, u8 lpm);
+union orb *cp_get_orb(struct channel_program *cp, struct subchannel *sch);
 void cp_update_scsw(struct channel_program *cp, union scsw *scsw);
 bool cp_iova_pinned(struct channel_program *cp, u64 iova, u64 length);
drivers/s390/cio/vfio_ccw_drv.c (+1 -1)
```diff
···
 	struct vfio_ccw_parent *parent = dev_get_drvdata(&sch->dev);
 	struct vfio_ccw_private *private = dev_get_drvdata(&parent->dev);

-	if (WARN_ON(!private))
+	if (!private)
 		return;

 	vfio_ccw_fsm_event(private, VFIO_CCW_EVENT_CLOSE);
```
drivers/s390/cio/vfio_ccw_fsm.c (+1 -1)
```diff
···
 	spin_lock_irqsave(sch->lock, flags);

-	orb = cp_get_orb(&private->cp, (u32)virt_to_phys(sch), sch->lpm);
+	orb = cp_get_orb(&private->cp, sch);
 	if (!orb) {
 		ret = -EIO;
 		goto out;
```
drivers/s390/crypto/vfio_ap_ops.c (+82 -34)
```diff
···
 #define AP_QUEUE_UNASSIGNED "unassigned"
 #define AP_QUEUE_IN_USE "in use"

+#define MAX_RESET_CHECK_WAIT	200	/* Sleep max 200ms for reset check */
+#define AP_RESET_INTERVAL	20	/* Reset sleep interval (20ms) */
+
 static int vfio_ap_mdev_reset_queues(struct ap_queue_table *qtable);
 static struct vfio_ap_queue *vfio_ap_find_queue(int apqn);
 static const struct vfio_device_ops vfio_ap_matrix_dev_ops;
-static int vfio_ap_mdev_reset_queue(struct vfio_ap_queue *q, unsigned int retry);
+static int vfio_ap_mdev_reset_queue(struct vfio_ap_queue *q);

 /**
  * get_update_locks_for_kvm: Acquire the locks required to dynamically update a
···
 {
 	*nib = vcpu->run->s.regs.gprs[2];

+	if (!*nib)
+		return -EINVAL;
 	if (kvm_is_error_hva(gfn_to_hva(vcpu->kvm, *nib >> PAGE_SHIFT)))
 		return -EINVAL;
···
 	return q;
 }

-static int vfio_ap_mdev_reset_queue(struct vfio_ap_queue *q,
-				    unsigned int retry)
+static int apq_status_check(int apqn, struct ap_queue_status *status)
+{
+	switch (status->response_code) {
+	case AP_RESPONSE_NORMAL:
+	case AP_RESPONSE_RESET_IN_PROGRESS:
+		if (status->queue_empty && !status->irq_enabled)
+			return 0;
+		return -EBUSY;
+	case AP_RESPONSE_DECONFIGURED:
+		/*
+		 * If the AP queue is deconfigured, any subsequent AP command
+		 * targeting the queue will fail with the same response code. On the
+		 * other hand, when an AP adapter is deconfigured, the associated
+		 * queues are reset, so let's return a value indicating the reset
+		 * for which we're waiting completed successfully.
+		 */
+		return 0;
+	default:
+		WARN(true,
+		     "failed to verify reset of queue %02x.%04x: TAPQ rc=%u\n",
+		     AP_QID_CARD(apqn), AP_QID_QUEUE(apqn),
+		     status->response_code);
+		return -EIO;
+	}
+}
+
+static int apq_reset_check(struct vfio_ap_queue *q)
+{
+	int ret;
+	int iters = MAX_RESET_CHECK_WAIT / AP_RESET_INTERVAL;
+	struct ap_queue_status status;
+
+	for (; iters > 0; iters--) {
+		msleep(AP_RESET_INTERVAL);
+		status = ap_tapq(q->apqn, NULL);
+		ret = apq_status_check(q->apqn, &status);
+		if (ret != -EBUSY)
+			return ret;
+	}
+	WARN_ONCE(iters <= 0,
+		  "timeout verifying reset of queue %02x.%04x (%u, %u, %u)",
+		  AP_QID_CARD(q->apqn), AP_QID_QUEUE(q->apqn),
+		  status.queue_empty, status.irq_enabled, status.response_code);
+	return ret;
+}
+
+static int vfio_ap_mdev_reset_queue(struct vfio_ap_queue *q)
 {
 	struct ap_queue_status status;
 	int ret;
-	int retry2 = 2;

 	if (!q)
 		return 0;
···
 	switch (status.response_code) {
 	case AP_RESPONSE_NORMAL:
 		ret = 0;
+		/* if the reset has not completed, wait for it to take effect */
+		if (!status.queue_empty || status.irq_enabled)
+			ret = apq_reset_check(q);
 		break;
 	case AP_RESPONSE_RESET_IN_PROGRESS:
-		if (retry--) {
-			msleep(20);
-			goto retry_zapq;
-		}
-		ret = -EBUSY;
-		break;
-	case AP_RESPONSE_Q_NOT_AVAIL:
+		/*
+		 * There is a reset issued by another process in progress. Let's wait
+		 * for that to complete. Since we have no idea whether it was a RAPQ or
+		 * ZAPQ, then if it completes successfully, let's issue the ZAPQ.
+		 */
+		ret = apq_reset_check(q);
+		if (ret)
+			break;
+		goto retry_zapq;
 	case AP_RESPONSE_DECONFIGURED:
-	case AP_RESPONSE_CHECKSTOPPED:
-		WARN_ONCE(status.irq_enabled,
-			  "PQAP/ZAPQ for %02x.%04x failed with rc=%u while IRQ enabled",
-			  AP_QID_CARD(q->apqn), AP_QID_QUEUE(q->apqn),
-			  status.response_code);
-		ret = -EBUSY;
-		goto free_resources;
+		/*
+		 * When an AP adapter is deconfigured, the associated
+		 * queues are reset, so let's return a value indicating the reset
+		 * completed successfully.
+		 */
+		ret = 0;
+		break;
 	default:
-		/* things are really broken, give up */
 		WARN(true,
 		     "PQAP/ZAPQ for %02x.%04x failed with invalid rc=%u\n",
 		     AP_QID_CARD(q->apqn), AP_QID_QUEUE(q->apqn),
···
 		return -EIO;
 	}

-	/* wait for the reset to take effect */
-	while (retry2--) {
-		if (status.queue_empty && !status.irq_enabled)
-			break;
-		msleep(20);
-		status = ap_tapq(q->apqn, NULL);
-	}
-	WARN_ONCE(retry2 <= 0, "unable to verify reset of queue %02x.%04x",
-		  AP_QID_CARD(q->apqn), AP_QID_QUEUE(q->apqn));
-
-free_resources:
 	vfio_ap_free_aqic_resources(q);

 	return ret;
···
 	struct vfio_ap_queue *q;

 	hash_for_each(qtable->queues, loop_cursor, q, mdev_qnode) {
-		ret = vfio_ap_mdev_reset_queue(q, 1);
+		ret = vfio_ap_mdev_reset_queue(q);
 		/*
 		 * Regardless whether a queue turns out to be busy, or
 		 * is not operational, we need to continue resetting
···
 		return ret;

 	q = kzalloc(sizeof(*q), GFP_KERNEL);
-	if (!q)
-		return -ENOMEM;
+	if (!q) {
+		ret = -ENOMEM;
+		goto err_remove_group;
+	}

 	q->apqn = to_ap_queue(&apdev->device)->qid;
 	q->saved_isc = VFIO_AP_ISC_INVALID;
···
 	release_update_locks_for_mdev(matrix_mdev);

 	return 0;
+
+err_remove_group:
+	sysfs_remove_group(&apdev->device.kobj, &vfio_queue_attr_group);
+	return ret;
 }

 void vfio_ap_mdev_remove_queue(struct ap_device *apdev)
···
 		}
 	}

-	vfio_ap_mdev_reset_queue(q, 1);
+	vfio_ap_mdev_reset_queue(q);
 	dev_set_drvdata(&apdev->device, NULL);
 	kfree(q);
 	release_update_locks_for_mdev(matrix_mdev);
```
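The new apq_reset_check() above replaces repeated ZAPQ invocations with bounded TAPQ polling: sleep a fixed interval, re-query the status, stop early on success or hard error, and give up after MAX_RESET_CHECK_WAIT. That control flow can be sketched in plain C with a stubbed poll callback (the callback, names, and the -16/EBUSY sentinel are illustrative, not kernel API):

```c
#include <assert.h>

#define MAX_RESET_CHECK_WAIT 200	/* total budget, ms */
#define AP_RESET_INTERVAL     20	/* per-poll interval, ms */
#define ERR_BUSY	     -16	/* stand-in for -EBUSY */

/*
 * Bounded verification loop: poll() stands in for ap_tapq() plus
 * apq_status_check(). Returns 0 on success, ERR_BUSY on timeout,
 * any other negative value as a hard error from the poll.
 */
static int reset_check(int (*poll)(void *ctx), void *ctx)
{
	int iters = MAX_RESET_CHECK_WAIT / AP_RESET_INTERVAL;
	int ret = ERR_BUSY;

	for (; iters > 0; iters--) {
		/* a real implementation sleeps AP_RESET_INTERVAL ms here */
		ret = poll(ctx);
		if (ret != ERR_BUSY)
			return ret;	/* done: success or hard error */
	}
	return ret;			/* still busy after the full budget */
}

/* Stub poll source: reports busy until a countdown reaches zero. */
static int countdown_poll(void *ctx)
{
	int *remaining = ctx;

	return (*remaining)-- > 0 ? ERR_BUSY : 0;
}
```

With a 200ms budget and 20ms interval this gives at most ten polls, matching the "increase the maximum amount of time to wait" item in the merge description.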
drivers/s390/crypto/zcrypt_api.c (+2 -4)
```diff
···
 	int rc;
 	char name[ZCDN_MAX_NAME];

-	strncpy(name, skip_spaces(buf), sizeof(name));
-	name[sizeof(name) - 1] = '\0';
+	strscpy(name, skip_spaces(buf), sizeof(name));

 	rc = zcdn_create(strim(name));
···
 	int rc;
 	char name[ZCDN_MAX_NAME];

-	strncpy(name, skip_spaces(buf), sizeof(name));
-	name[sizeof(name) - 1] = '\0';
+	strscpy(name, skip_spaces(buf), sizeof(name));

 	rc = zcdn_destroy(strim(name));
```
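The strncpy()-plus-manual-terminator pair is replaced by strscpy(), which always NUL-terminates and reports truncation with -E2BIG, making the explicit `name[sizeof(name) - 1] = '\0'` redundant. A simplified userspace model of those semantics (a sketch, not the kernel's word-at-a-time implementation):

```c
#include <assert.h>
#include <string.h>
#include <sys/types.h>

#define E2BIG 7

/*
 * Userspace model of the kernel's strscpy() contract: for size > 0 the
 * destination is always NUL-terminated, the return value is the number
 * of characters copied (excluding the NUL), and -E2BIG signals that the
 * source was truncated to fit.
 */
static ssize_t strscpy_model(char *dst, const char *src, size_t size)
{
	size_t len = strlen(src);

	if (size == 0)
		return -E2BIG;
	if (len >= size) {
		memcpy(dst, src, size - 1);
		dst[size - 1] = '\0';
		return -E2BIG;		/* truncated */
	}
	memcpy(dst, src, len + 1);	/* fits, including the NUL */
	return (ssize_t)len;
}
```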
drivers/watchdog/diag288_wdt.c (+38 -126)
```diff
···
 #include <linux/moduleparam.h>
 #include <linux/slab.h>
 #include <linux/watchdog.h>
-#include <linux/suspend.h>
 #include <asm/ebcdic.h>
 #include <asm/diag.h>
 #include <linux/io.h>
···
 MODULE_ALIAS("vmwatchdog");

-static int __diag288(unsigned int func, unsigned int timeout,
-		     unsigned long action, unsigned int len)
+static char *cmd_buf;
+
+static int diag288(unsigned int func, unsigned int timeout,
+		   unsigned long action, unsigned int len)
 {
-	register unsigned long __func asm("2") = func;
-	register unsigned long __timeout asm("3") = timeout;
-	register unsigned long __action asm("4") = action;
-	register unsigned long __len asm("5") = len;
+	union register_pair r1 = { .even = func, .odd = timeout, };
+	union register_pair r3 = { .even = action, .odd = len, };
 	int err;
+
+	diag_stat_inc(DIAG_STAT_X288);

 	err = -EINVAL;
 	asm volatile(
-		"	diag	%1, %3, 0x288\n"
-		"0:	la	%0, 0\n"
+		"	diag	%[r1],%[r3],0x288\n"
+		"0:	la	%[err],0\n"
 		"1:\n"
 		EX_TABLE(0b, 1b)
-		: "+d" (err) : "d"(__func), "d"(__timeout),
-		  "d"(__action), "d"(__len) : "1", "cc", "memory");
+		: [err] "+d" (err)
+		: [r1] "d" (r1.pair), [r3] "d" (r3.pair)
+		: "cc", "memory");
 	return err;
 }

-static int __diag288_vm(unsigned int func, unsigned int timeout,
-			char *cmd, size_t len)
+static int diag288_str(unsigned int func, unsigned int timeout, char *cmd)
 {
-	diag_stat_inc(DIAG_STAT_X288);
-	return __diag288(func, timeout, virt_to_phys(cmd), len);
+	ssize_t len;
+
+	len = strscpy(cmd_buf, cmd, MAX_CMDLEN);
+	if (len < 0)
+		return len;
+	ASCEBC(cmd_buf, MAX_CMDLEN);
+	EBC_TOUPPER(cmd_buf, MAX_CMDLEN);
+
+	return diag288(func, timeout, virt_to_phys(cmd_buf), len);
 }
-
-static int __diag288_lpar(unsigned int func, unsigned int timeout,
-			  unsigned long action)
-{
-	diag_stat_inc(DIAG_STAT_X288);
-	return __diag288(func, timeout, action, 0);
-}
-
-static unsigned long wdt_status;
-
-#define DIAG_WDOG_BUSY	0

 static int wdt_start(struct watchdog_device *dev)
 {
-	char *ebc_cmd;
-	size_t len;
 	int ret;
 	unsigned int func;

-	if (test_and_set_bit(DIAG_WDOG_BUSY, &wdt_status))
-		return -EBUSY;
-
 	if (MACHINE_IS_VM) {
-		ebc_cmd = kmalloc(MAX_CMDLEN, GFP_KERNEL);
-		if (!ebc_cmd) {
-			clear_bit(DIAG_WDOG_BUSY, &wdt_status);
-			return -ENOMEM;
-		}
-		len = strlcpy(ebc_cmd, wdt_cmd, MAX_CMDLEN);
-		ASCEBC(ebc_cmd, MAX_CMDLEN);
-		EBC_TOUPPER(ebc_cmd, MAX_CMDLEN);
-
 		func = conceal_on ? (WDT_FUNC_INIT | WDT_FUNC_CONCEAL)
 			: WDT_FUNC_INIT;
-		ret = __diag288_vm(func, dev->timeout, ebc_cmd, len);
+		ret = diag288_str(func, dev->timeout, wdt_cmd);
 		WARN_ON(ret != 0);
-		kfree(ebc_cmd);
 	} else {
-		ret = __diag288_lpar(WDT_FUNC_INIT,
-				     dev->timeout, LPARWDT_RESTART);
+		ret = diag288(WDT_FUNC_INIT, dev->timeout, LPARWDT_RESTART, 0);
 	}

 	if (ret) {
 		pr_err("The watchdog cannot be activated\n");
-		clear_bit(DIAG_WDOG_BUSY, &wdt_status);
 		return ret;
 	}
 	return 0;
···
 static int wdt_stop(struct watchdog_device *dev)
 {
-	int ret;
-
-	diag_stat_inc(DIAG_STAT_X288);
-	ret = __diag288(WDT_FUNC_CANCEL, 0, 0, 0);
-
-	clear_bit(DIAG_WDOG_BUSY, &wdt_status);
-
-	return ret;
+	return diag288(WDT_FUNC_CANCEL, 0, 0, 0);
 }

 static int wdt_ping(struct watchdog_device *dev)
 {
-	char *ebc_cmd;
-	size_t len;
 	int ret;
 	unsigned int func;

 	if (MACHINE_IS_VM) {
-		ebc_cmd = kmalloc(MAX_CMDLEN, GFP_KERNEL);
-		if (!ebc_cmd)
-			return -ENOMEM;
-		len = strlcpy(ebc_cmd, wdt_cmd, MAX_CMDLEN);
-		ASCEBC(ebc_cmd, MAX_CMDLEN);
-		EBC_TOUPPER(ebc_cmd, MAX_CMDLEN);
-
 		/*
 		 * It seems to be ok to z/VM to use the init function to
 		 * retrigger the watchdog. On LPAR WDT_FUNC_CHANGE must
···
 		func = conceal_on ? (WDT_FUNC_INIT | WDT_FUNC_CONCEAL)
 			: WDT_FUNC_INIT;

-		ret = __diag288_vm(func, dev->timeout, ebc_cmd, len);
+		ret = diag288_str(func, dev->timeout, wdt_cmd);
 		WARN_ON(ret != 0);
-		kfree(ebc_cmd);
 	} else {
-		ret = __diag288_lpar(WDT_FUNC_CHANGE, dev->timeout, 0);
+		ret = diag288(WDT_FUNC_CHANGE, dev->timeout, 0, 0);
 	}

 	if (ret)
···
 	.max_timeout = MAX_INTERVAL,
 };

-/*
- * It makes no sense to go into suspend while the watchdog is running.
- * Depending on the memory size, the watchdog might trigger, while we
- * are still saving the memory.
- */
-static int wdt_suspend(void)
-{
-	if (test_and_set_bit(DIAG_WDOG_BUSY, &wdt_status)) {
-		pr_err("Linux cannot be suspended while the watchdog is in use\n");
-		return notifier_from_errno(-EBUSY);
-	}
-	return NOTIFY_DONE;
-}
-
-static int wdt_resume(void)
-{
-	clear_bit(DIAG_WDOG_BUSY, &wdt_status);
-	return NOTIFY_DONE;
-}
-
-static int wdt_power_event(struct notifier_block *this, unsigned long event,
-			   void *ptr)
-{
-	switch (event) {
-	case PM_POST_HIBERNATION:
-	case PM_POST_SUSPEND:
-		return wdt_resume();
-	case PM_HIBERNATION_PREPARE:
-	case PM_SUSPEND_PREPARE:
-		return wdt_suspend();
-	default:
-		return NOTIFY_DONE;
-	}
-}
-
-static struct notifier_block wdt_power_notifier = {
-	.notifier_call = wdt_power_event,
-};
-
 static int __init diag288_init(void)
 {
 	int ret;
-	char ebc_begin[] = {
-		194, 197, 199, 201, 213
-	};
-	char *ebc_cmd;

 	watchdog_set_nowayout(&wdt_dev, nowayout_info);

 	if (MACHINE_IS_VM) {
-		ebc_cmd = kmalloc(sizeof(ebc_begin), GFP_KERNEL);
-		if (!ebc_cmd) {
+		cmd_buf = kmalloc(MAX_CMDLEN, GFP_KERNEL);
+		if (!cmd_buf) {
 			pr_err("The watchdog cannot be initialized\n");
 			return -ENOMEM;
 		}
-		memcpy(ebc_cmd, ebc_begin, sizeof(ebc_begin));
-		ret = __diag288_vm(WDT_FUNC_INIT, 15,
-				   ebc_cmd, sizeof(ebc_begin));
-		kfree(ebc_cmd);
+
+		ret = diag288_str(WDT_FUNC_INIT, MIN_INTERVAL, "BEGIN");
 		if (ret != 0) {
 			pr_err("The watchdog cannot be initialized\n");
+			kfree(cmd_buf);
 			return -EINVAL;
 		}
 	} else {
-		if (__diag288_lpar(WDT_FUNC_INIT, 30, LPARWDT_RESTART)) {
+		if (diag288(WDT_FUNC_INIT, WDT_DEFAULT_TIMEOUT,
+			    LPARWDT_RESTART, 0)) {
 			pr_err("The watchdog cannot be initialized\n");
 			return -EINVAL;
 		}
 	}

-	if (__diag288_lpar(WDT_FUNC_CANCEL, 0, 0)) {
+	if (diag288(WDT_FUNC_CANCEL, 0, 0, 0)) {
 		pr_err("The watchdog cannot be deactivated\n");
 		return -EINVAL;
 	}

-	ret = register_pm_notifier(&wdt_power_notifier);
-	if (ret)
-		return ret;
-
-	ret = watchdog_register_device(&wdt_dev);
-	if (ret)
-		unregister_pm_notifier(&wdt_power_notifier);
-
-	return ret;
+	return watchdog_register_device(&wdt_dev);
 }

 static void __exit diag288_exit(void)
 {
 	watchdog_unregister_device(&wdt_dev);
-	unregister_pm_notifier(&wdt_power_notifier);
+	kfree(cmd_buf);
 }

 module_init(diag288_init);
```
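Besides the named-operand asm rewrite, the diag288 cleanup moves from a kmalloc()/kfree() round trip per watchdog operation to a single MAX_CMDLEN command buffer allocated at module init and reused by diag288_str(). That lifetime pattern, modeled in userspace C (function names and error values are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define MAX_CMDLEN 240	/* assumed buffer size for this sketch */

static char *cmd_buf;

/* Allocate the long-lived command buffer once, as diag288_init() now does. */
static int wdt_buf_init(void)
{
	cmd_buf = malloc(MAX_CMDLEN);
	return cmd_buf ? 0 : -12;	/* -ENOMEM */
}

/* Stands in for diag288_str(): bounded copy into the reusable buffer. */
static long prep_cmd(const char *cmd)
{
	size_t len = strlen(cmd);

	if (len >= MAX_CMDLEN)
		return -7;		/* -E2BIG: command too long */
	memcpy(cmd_buf, cmd, len + 1);
	/* the driver then EBCDIC-converts and upcases cmd_buf in place */
	return (long)len;
}

/* Free the buffer once at module exit, as diag288_exit() now does. */
static void wdt_buf_exit(void)
{
	free(cmd_buf);
	cmd_buf = NULL;
}
```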