Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6

* 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6: (25 commits)
mmtimer: Push BKL down into the ioctl handler
[IA64] Remove experimental status of kdump
[IA64] Update ia64 mmr list for SGI uv
[IA64] Avoid overflowing ia64_cpu_to_sapicid in acpi_map_lsapic()
[IA64] adding parameter check to module_free()
[IA64] improper printk format in acpi-cpufreq
[IA64] pv_ops: move some functions in ivt.S to avoid lack of space.
[IA64] pvops: documentation on ia64/pv_ops
[IA64] pvops: add to hooks, pv_time_ops, for steal time accounting.
[IA64] pvops: add hooks, pv_irq_ops, to paravirtualized irq related operations.
[IA64] pvops: add hooks, pv_iosapic_ops, to paravirtualize iosapic.
[IA64] pvops: define initialization hooks, pv_init_ops, for paravirtualized environment.
[IA64] pvops: paravirtualize NR_IRQS
[IA64] pvops: paravirtualize entry.S
[IA64] pvops: paravirtualize ivt.S
[IA64] pvops: paravirtualize minstate.h.
[IA64] pvops: define paravirtualized instructions for native.
[IA64] pvops: preparation for paravirtulization of hand written assembly code.
[IA64] pvops: introduce pv_cpu_ops to paravirtualize privileged instructions.
[IA64] pvops: add an early setup hook for pv_ops.
...

+2251 -387
+137
Documentation/ia64/paravirt_ops.txt
···
1 + Paravirt_ops on IA64
2 + ====================
3 + 21 May 2008, Isaku Yamahata <yamahata@valinux.co.jp>
4 + 
5 + 
6 + Introduction
7 + ------------
8 + The aim of this documentation is to help with maintainability and to
9 + encourage people to use paravirt_ops/IA64.
10 + 
11 + paravirt_ops (pv_ops for short) is the way the Linux kernel supports
12 + virtualization on x86. Several approaches to virtualization support
13 + were proposed; paravirt_ops was the one adopted.
14 + There are now also several IA64 virtualization technologies, such as
15 + kvm/IA64, xen/IA64 and many other academic IA64
16 + hypervisors, so it is worthwhile to add generic virtualization
17 + infrastructure to Linux/IA64.
18 + 
19 + 
20 + What is paravirt_ops?
21 + ---------------------
22 + paravirt_ops was developed on x86 as virtualization support via API, not ABI.
23 + It allows each hypervisor to override, at the API level, operations which
24 + are important for hypervisors, and it allows a single kernel binary to run
25 + in all supported execution environments, including on a native machine.
26 + Essentially paravirt_ops is a set of function pointers which represent
27 + operations corresponding to low level sensitive instructions and high
28 + level functionality in various areas. One significant difference
29 + from a usual function pointer table is that it allows optimization by
30 + binary patching, because some of these operations are very
31 + performance sensitive and the indirect call overhead is not negligible.
32 + With binary patching, an indirect C function call can be transformed into
33 + a direct C function call, or into in-place execution, to eliminate the overhead.
34 + 
35 + Accordingly, the operations of paravirt_ops fall into three categories.
36 + - simple indirect call
37 +   These operations correspond to high level functionality, so the
38 +   overhead of an indirect call isn't very important.
39 + 
40 + - indirect call which allows optimization with binary patching
41 +   Usually these operations correspond to low level instructions. They
42 +   are called frequently and are performance critical, so the overhead
43 +   is very important.
44 + 
45 + - a set of macros for hand written assembly code
46 +   Hand written assembly code (.S files) also needs paravirtualization,
47 +   because it includes sensitive instructions or some of its code paths
48 +   are very performance critical.
49 + 
50 + 
51 + The relation to the IA64 machine vector
52 + ---------------------------------------
53 + Linux/IA64 has the IA64 machine vector functionality, which allows the
54 + kernel to switch implementations (e.g. initialization, ipi, dma api...)
55 + depending on the platform it is executing on.
56 + Some implementations can be replaced very easily by defining a new machine
57 + vector, so another approach to virtualization support would be
58 + to enhance the machine vector functionality.
59 + The paravirt_ops approach was taken instead because
60 + - virtualization support needs wider coverage than the machine vector
61 +   provides, e.g. low level instruction paravirtualization, which must be
62 +   initialized very early, before platform detection.
63 + 
64 + - virtualization support needs more functionality, such as binary patching.
65 +   The calling overhead is probably not very large compared to the
66 +   emulation overhead of virtualization; however, in the native case the
67 +   overhead should be eliminated completely.
68 +   A single kernel binary should run in each environment, including native,
69 +   and the overhead of paravirt_ops in the native environment should be as
70 +   small as possible.
71 + 
72 + - for full virtualization technology, e.g. a KVM/IA64 or
73 +   Xen/IA64 HVM domain, the result would be
74 +   (the emulated platform machine vector, probably dig) + (pv_ops).
75 +   This means that the virtualization support layer should sit under
76 +   the machine vector layer.
77 + 
78 + It may turn out to be better to move some function pointers from
79 + paravirt_ops to the machine vector. In fact, the Xen domU case uses both
80 + pv_ops and the machine vector.
81 + 
82 + 
83 + IA64 paravirt_ops
84 + -----------------
85 + This section discusses the concrete paravirt_ops.
86 + Because of the architecture differences between ia64 and x86, the
87 + resulting set of functions is very different from the x86 pv_ops.
88 + 
89 + - C function pointer tables
90 +   These are not very performance critical, so simple C indirect
91 +   function calls are acceptable. The following structures are defined at
92 +   this moment. For details see linux/include/asm-ia64/paravirt.h
93 +   - struct pv_info
94 +     This structure describes the execution environment.
95 +   - struct pv_init_ops
96 +     This structure describes the various initialization hooks.
97 +   - struct pv_iosapic_ops
98 +     This structure describes hooks to iosapic operations.
99 +   - struct pv_irq_ops
100 +     This structure describes hooks to irq related operations.
101 +   - struct pv_time_ops
102 +     This structure describes hooks for steal time accounting.
103 + 
104 + - a set of indirect calls which need optimization
105 +   Currently this class of functions corresponds to a subset of the IA64
106 +   intrinsics; mostly they correspond to the ia64 intrinsics 1-to-1. At
107 +   this moment the optimization with binary patching isn't implemented yet.
108 +   struct pv_cpu_ops is defined. For details see
109 +   linux/include/asm-ia64/paravirt_privop.h
110 +   Caveat: these are currently defined as C indirect function pointers,
111 +   but in order to support binary patch optimization, they will be
112 +   changed to use GCC extended inline assembly code.
113 + 
114 + - a set of macros for hand written assembly code (.S files)
115 +   For maintainability, the approach taken for .S files is a single
116 +   source compiled multiple times with different macro definitions.
117 +   Each pv_ops instance must define those macros in order to compile.
118 +   The important thing here is that sensitive, but non-privileged,
119 +   instructions must be paravirtualized, and that some privileged
120 +   instructions also need paravirtualization for reasonable performance.
121 +   Developers who modify .S files must be aware of that. At this moment
122 +   a simple checker is implemented to detect paravirtualization breakage,
123 +   but it doesn't cover all cases.
124 + 
125 +   Sometimes this set of macros is called pv_cpu_asm_op, but there is no
126 +   corresponding structure in the source code.
127 +   Those macros mostly correspond 1:1 to a subset of the privileged
128 +   instructions. See linux/include/asm-ia64/native/inst.h.
129 +   Some functions written in assembly also need to be overridden, so
130 +   each pv_ops instance has to define further macros. Again, see
131 +   linux/include/asm-ia64/native/inst.h.
132 + 
133 + 
134 + These structures must be initialized very early, before start_kernel,
135 + probably in head.S using multiple entry points or some other trick.
136 + For the native implementation see linux/arch/ia64/kernel/paravirt.c.
+2 -2
arch/ia64/Kconfig
···
540 540 	  strongly in flux, so no good recommendation can be made.
541 541 
542 542 config CRASH_DUMP
543 - 	bool "kernel crash dumps (EXPERIMENTAL)"
544 - 	depends on EXPERIMENTAL && IA64_MCA_RECOVERY && !IA64_HP_SIM && (!SMP || HOTPLUG_CPU)
543 + 	bool "kernel crash dumps"
544 + 	depends on IA64_MCA_RECOVERY && !IA64_HP_SIM && (!SMP || HOTPLUG_CPU)
545 545 	help
546 546 	  Generate crash dump after being started by kexec.
547 547 
+6
arch/ia64/Makefile
···
100 100 	echo '  boot		- Build vmlinux and bootloader for Ski simulator'
101 101 	echo '* unwcheck	- Check vmlinux for invalid unwind info'
102 102 endef
103 + 
104 + archprepare: make_nr_irqs_h FORCE
105 + PHONY += make_nr_irqs_h FORCE
106 + 
107 + make_nr_irqs_h: FORCE
108 + 	$(Q)$(MAKE) $(build)=arch/ia64/kernel include/asm-ia64/nr-irqs.h
+44
arch/ia64/kernel/Makefile
···
36 36 mca_recovery-y			+= mca_drv.o mca_drv_asm.o
37 37 obj-$(CONFIG_IA64_MC_ERR_INJECT)+= err_inject.o
38 38 
39 + obj-$(CONFIG_PARAVIRT)		+= paravirt.o paravirtentry.o
40 + 
39 41 obj-$(CONFIG_IA64_ESI)		+= esi.o
40 42 ifneq ($(CONFIG_IA64_ESI),)
41 43 obj-y				+= esi_stub.o	# must be in kernel proper
···
72 70 # We must build gate.so before we can assemble it.
73 71 # Note: kbuild does not track this dependency due to usage of .incbin
74 72 $(obj)/gate-data.o: $(obj)/gate.so
73 + 
74 + # Calculate NR_IRQ = max(IA64_NATIVE_NR_IRQS, XEN_NR_IRQS, ...) based on config
75 + define sed-y
76 + 	"/^->/{s:^->\([^ ]*\) [\$$#]*\([^ ]*\) \(.*\):#define \1 \2 /* \3 */:; s:->::; p;}"
77 + endef
78 + quiet_cmd_nr_irqs = GEN     $@
79 + define cmd_nr_irqs
80 + 	(set -e; \
81 + 	 echo "#ifndef __ASM_NR_IRQS_H__"; \
82 + 	 echo "#define __ASM_NR_IRQS_H__"; \
83 + 	 echo "/*"; \
84 + 	 echo " * DO NOT MODIFY."; \
85 + 	 echo " *"; \
86 + 	 echo " * This file was generated by Kbuild"; \
87 + 	 echo " *"; \
88 + 	 echo " */"; \
89 + 	 echo ""; \
90 + 	 sed -ne $(sed-y) $<; \
91 + 	 echo ""; \
92 + 	 echo "#endif" ) > $@
93 + endef
94 + 
95 + # We use internal kbuild rules to avoid the "is up to date" message from make
96 + arch/$(SRCARCH)/kernel/nr-irqs.s: $(srctree)/arch/$(SRCARCH)/kernel/nr-irqs.c \
97 + 			$(wildcard $(srctree)/include/asm-ia64/*/irq.h)
98 + 	$(Q)mkdir -p $(dir $@)
99 + 	$(call if_changed_dep,cc_s_c)
100 + 
101 + include/asm-ia64/nr-irqs.h: arch/$(SRCARCH)/kernel/nr-irqs.s
102 + 	$(Q)mkdir -p $(dir $@)
103 + 	$(call cmd,nr_irqs)
104 + 
105 + clean-files += $(objtree)/include/asm-ia64/nr-irqs.h
106 + 
107 + #
108 + # native ivt.S and entry.S
109 + #
110 + ASM_PARAVIRT_OBJS = ivt.o entry.o
111 + define paravirtualized_native
112 + AFLAGS_$(1) += -D__IA64_ASM_PARAVIRTUALIZED_NATIVE
113 + endef
114 + $(foreach obj,$(ASM_PARAVIRT_OBJS),$(eval $(call paravirtualized_native,$(obj))))
+2 -3
arch/ia64/kernel/acpi.c
···
774 774  */
775 775 #ifdef CONFIG_ACPI_HOTPLUG_CPU
776 776 static
777 - int acpi_map_cpu2node(acpi_handle handle, int cpu, long physid)
777 + int acpi_map_cpu2node(acpi_handle handle, int cpu, int physid)
778 778 {
779 779 #ifdef CONFIG_ACPI_NUMA
780 780 	int pxm_id;
···
854 854 	union acpi_object *obj;
855 855 	struct acpi_madt_local_sapic *lsapic;
856 856 	cpumask_t tmp_map;
857 - 	long physid;
858 - 	int cpu;
857 + 	int cpu, physid;
859 858 
860 859 	if (ACPI_FAILURE(acpi_evaluate_object(handle, "_MAT", NULL, &buffer)))
861 860 		return -EINVAL;
+2 -2
arch/ia64/kernel/cpufreq/acpi-cpufreq.c
···
51 51 	retval = ia64_pal_set_pstate((u64)value);
52 52 
53 53 	if (retval) {
54 - 		dprintk("Failed to set freq to 0x%x, with error 0x%x\n",
54 + 		dprintk("Failed to set freq to 0x%x, with error 0x%lx\n",
55 55 			value, retval);
56 56 		return -ENODEV;
57 57 	}
···
74 74 
75 75 	if (retval)
76 76 		dprintk("Failed to get current freq with "
77 - 			"error 0x%x, idx 0x%x\n", retval, *value);
77 + 			"error 0x%lx, idx 0x%x\n", retval, *value);
78 78 
79 79 	return (int)retval;
80 80 }
+72 -43
arch/ia64/kernel/entry.S
···
23 23  * 11/07/2000
24 24  */
25 25 /*
26 +  * Copyright (c) 2008 Isaku Yamahata <yamahata at valinux co jp>
27 +  *                    VA Linux Systems Japan K.K.
28 +  *                    pv_ops.
29 +  */
30 + /*
26 31  * Global (preserved) predicate usage on syscall entry/exit path:
27 32  *
28 33  *	pKStk:		See entry.h.
···
50 45 
51 46 #include "minstate.h"
52 47 
48 + #ifdef __IA64_ASM_PARAVIRTUALIZED_NATIVE
53 49 	/*
54 50 	 * execve() is special because in case of success, we need to
55 51 	 * setup a null register window frame.
···
179 173 	mov rp=loc0
180 174 	br.ret.sptk.many rp
181 175 END(sys_clone)
176 + #endif /* __IA64_ASM_PARAVIRTUALIZED_NATIVE */
182 177 
183 178 /*
184 179  * prev_task <- ia64_switch_to(struct task_struct *next)
···
187 180  * called. The code starting at .map relies on this. The rest of the code
188 181  * doesn't care about the interrupt masking status.
189 182  */
190 - GLOBAL_ENTRY(ia64_switch_to)
183 + GLOBAL_ENTRY(__paravirt_switch_to)
191 184 	.prologue
192 185 	alloc r16=ar.pfs,1,0,0,0
193 186 	DO_SAVE_SWITCH_STACK
···
211 204 	;;
212 205 .done:
213 206 	ld8 sp=[r21]			// load kernel stack pointer of new task
214 - 	mov IA64_KR(CURRENT)=in0	// update "current" application register
207 + 	MOV_TO_KR(CURRENT, in0, r8, r9)	// update "current" application register
215 208 	mov r8=r13			// return pointer to previously running task
216 209 	mov r13=in0			// set "current" pointer
217 210 	;;
···
223 216 	br.ret.sptk.many rp		// boogie on out in new context
224 217 
225 218 .map:
226 - 	rsm psr.ic			// interrupts (psr.i) are already disabled here
219 + 	RSM_PSR_IC(r25)			// interrupts (psr.i) are already disabled here
227 220 	movl r25=PAGE_KERNEL
228 221 	;;
229 222 	srlz.d
230 223 	or r23=r25,r20			// construct PA | page properties
231 224 	mov r25=IA64_GRANULE_SHIFT<<2
232 225 	;;
233 - 	mov cr.itir=r25
234 - 	mov cr.ifa=in0			// VA of next task...
226 + 	MOV_TO_ITIR(p0, r25, r8)
227 + 	MOV_TO_IFA(in0, r8)		// VA of next task...
235 228 	;;
236 229 	mov r25=IA64_TR_CURRENT_STACK
237 - 	mov IA64_KR(CURRENT_STACK)=r26	// remember last page we mapped...
230 + 	MOV_TO_KR(CURRENT_STACK, r26, r8, r9)	// remember last page we mapped...
238 231 	;;
239 232 	itr.d dtr[r25]=r23		// wire in new mapping...
240 - 	ssm psr.ic			// reenable the psr.ic bit
241 - 	;;
242 - 	srlz.d
233 + 	SSM_PSR_IC_AND_SRLZ_D(r8, r9)	// reenable the psr.ic bit
243 234 	br.cond.sptk .done
244 - END(ia64_switch_to)
235 + END(__paravirt_switch_to)
245 236 
237 + #ifdef __IA64_ASM_PARAVIRTUALIZED_NATIVE
246 238 /*
247 239  * Note that interrupts are enabled during save_switch_stack and load_switch_stack. This
248 240  * means that we may get an interrupt with "sp" pointing to the new kernel stack while
···
381 375  *	- b7 holds address to return to
382 376  *	- must not touch r8-r11
383 377  */
384 - ENTRY(load_switch_stack)
378 + GLOBAL_ENTRY(load_switch_stack)
385 379 	.prologue
386 380 	.altrp b7
387 381 
···
577 571 .ret3:
578 572 (pUStk)	cmp.eq.unc p6,p0=r0,r0		// p6 <- pUStk
579 573 (pUStk)	rsm psr.i			// disable interrupts
580 - 	br.cond.sptk .work_pending_syscall_end
574 + 	br.cond.sptk ia64_work_pending_syscall_end
581 575 
582 576 strace_error:
583 577 	ld8 r3=[r2]			// load pt_regs.r8
···
642 636 	adds r2=PT(R8)+16,sp		// r2 = &pt_regs.r8
643 637 	mov r10=r0			// clear error indication in r10
644 638 (p7)	br.cond.spnt handle_syscall_error	// handle potential syscall failure
639 + #ifdef CONFIG_PARAVIRT
640 + 	;;
641 + 	br.cond.sptk.few ia64_leave_syscall
642 + 	;;
643 + #endif /* CONFIG_PARAVIRT */
645 644 END(ia64_ret_from_syscall)
645 + #ifndef CONFIG_PARAVIRT
646 646 	// fall through
647 + #endif
648 + #endif /* __IA64_ASM_PARAVIRTUALIZED_NATIVE */
649 + 
647 650 /*
648 651  * ia64_leave_syscall(): Same as ia64_leave_kernel, except that it doesn't
649 652  * need to switch to bank 0 and doesn't restore the scratch registers.
···
697 682  *	      ar.csd: cleared
698 683  *	      ar.ssd: cleared
699 684  */
700 - ENTRY(ia64_leave_syscall)
685 + GLOBAL_ENTRY(__paravirt_leave_syscall)
701 686 	PT_REGS_UNWIND_INFO(0)
702 687 	/*
703 688 	 * work.need_resched etc. mustn't get changed by this CPU before it returns to
···
707 692 	 * extra work. We always check for extra work when returning to user-level.
708 693 	 * With CONFIG_PREEMPT, we also check for extra work when the preempt_count
709 694 	 * is 0. After extra work processing has been completed, execution
710 - 	 * resumes at .work_processed_syscall with p6 set to 1 if the extra-work-check
695 + 	 * resumes at ia64_work_processed_syscall with p6 set to 1 if the extra-work-check
711 696 	 * needs to be redone.
712 697 	 */
713 698 #ifdef CONFIG_PREEMPT
714 - 	rsm psr.i				// disable interrupts
699 + 	RSM_PSR_I(p0, r2, r18)			// disable interrupts
715 700 	cmp.eq pLvSys,p0=r0,r0			// pLvSys=1: leave from syscall
716 701 (pKStk)	adds r20=TI_PRE_COUNT+IA64_TASK_SIZE,r13
717 702 	;;
···
721 706 	;;
722 707 	cmp.eq p6,p0=r21,r0		// p6 <- pUStk || (preempt_count == 0)
723 708 #else /* !CONFIG_PREEMPT */
724 - (pUStk)	rsm psr.i
709 + 	RSM_PSR_I(pUStk, r2, r18)
725 710 	cmp.eq pLvSys,p0=r0,r0		// pLvSys=1: leave from syscall
726 711 (pUStk)	cmp.eq.unc p6,p0=r0,r0		// p6 <- pUStk
727 712 #endif
728 - .work_processed_syscall:
713 + .global __paravirt_work_processed_syscall;
714 + __paravirt_work_processed_syscall:
729 715 #ifdef CONFIG_VIRT_CPU_ACCOUNTING
730 716 	adds r2=PT(LOADRS)+16,r12
731 717 (pUStk)	mov.m r22=ar.itc			// fetch time at leave
···
760 744 (pNonSys) break 0		// bug check: we shouldn't be here if pNonSys is TRUE!
761 745 	;;
762 746 	invala			// M0|1 invalidate ALAT
763 - 	rsm psr.i | psr.ic	// M2   turn off interrupts and interruption collection
747 + 	RSM_PSR_I_IC(r28, r29, r30)	// M2   turn off interrupts and interruption collection
764 748 	cmp.eq p9,p0=r0,r0	// A    set p9 to indicate that we should restore cr.ifs
765 749 
766 750 	ld8 r29=[r2],16		// M0|1 load cr.ipsr
···
781 765 	;;
782 766 #endif
783 767 	ld8 r26=[r2],PT(B0)-PT(AR_PFS)	// M0|1 load ar.pfs
784 - (pKStk)	mov r22=psr			// M2   read PSR now that interrupts are disabled
768 + 	MOV_FROM_PSR(pKStk, r22, r21)	// M2   read PSR now that interrupts are disabled
785 769 	nop 0
786 770 	;;
787 771 	ld8 r21=[r2],PT(AR_RNAT)-PT(B0) // M0|1 load b0
···
814 798 
815 799 	srlz.d				// M0   ensure interruption collection is off (for cover)
816 800 	shr.u r18=r19,16		// I0|1 get byte size of existing "dirty" partition
817 - 	cover				// B    add current frame into dirty partition & set cr.ifs
801 + 	COVER				// B    add current frame into dirty partition & set cr.ifs
818 802 	;;
819 803 #ifdef CONFIG_VIRT_CPU_ACCOUNTING
820 804 	mov r19=ar.bsp			// M2   get new backing store pointer
···
839 823 	mov.m ar.ssd=r0			// M2   clear ar.ssd
840 824 	mov f11=f0			// F    clear f11
841 825 	br.cond.sptk.many rbs_switch	// B
842 - END(ia64_leave_syscall)
826 + END(__paravirt_leave_syscall)
843 827 
828 + #ifdef __IA64_ASM_PARAVIRTUALIZED_NATIVE
844 829 #ifdef CONFIG_IA32_SUPPORT
845 830 GLOBAL_ENTRY(ia64_ret_from_ia32_execve)
846 831 	PT_REGS_UNWIND_INFO(0)
···
852 835 	st8.spill [r2]=r8	// store return value in slot for r8 and set unat bit
853 836 	.mem.offset 8,0
854 837 	st8.spill [r3]=r0	// clear error indication in slot for r10 and set unat bit
838 + #ifdef CONFIG_PARAVIRT
839 + 	;;
840 + 	// don't fall through, ia64_leave_kernel may be #define'd
841 + 	br.cond.sptk.few ia64_leave_kernel
842 + 	;;
843 + #endif /* CONFIG_PARAVIRT */
855 844 END(ia64_ret_from_ia32_execve)
845 + #ifndef CONFIG_PARAVIRT
856 846 	// fall through
847 + #endif
857 848 #endif /* CONFIG_IA32_SUPPORT */
858 - GLOBAL_ENTRY(ia64_leave_kernel)
849 + #endif /* __IA64_ASM_PARAVIRTUALIZED_NATIVE */
850 + 
851 + GLOBAL_ENTRY(__paravirt_leave_kernel)
859 852 	PT_REGS_UNWIND_INFO(0)
860 853 	/*
861 854 	 * work.need_resched etc. mustn't get changed by this CPU before it returns to
···
879 852 	 * needs to be redone.
880 853 	 */
881 854 #ifdef CONFIG_PREEMPT
882 - 	rsm psr.i			// disable interrupts
855 + 	RSM_PSR_I(p0, r17, r31)		// disable interrupts
883 856 	cmp.eq p0,pLvSys=r0,r0		// pLvSys=0: leave from kernel
884 857 (pKStk)	adds r20=TI_PRE_COUNT+IA64_TASK_SIZE,r13
885 858 	;;
···
889 862 	;;
890 863 	cmp.eq p6,p0=r21,r0		// p6 <- pUStk || (preempt_count == 0)
891 864 #else
892 - (pUStk)	rsm psr.i
865 + 	RSM_PSR_I(pUStk, r17, r31)
893 866 	cmp.eq p0,pLvSys=r0,r0		// pLvSys=0: leave from kernel
894 867 (pUStk)	cmp.eq.unc p6,p0=r0,r0		// p6 <- pUStk
895 868 #endif
···
937 910 	mov ar.csd=r30
938 911 	mov ar.ssd=r31
939 912 	;;
940 - 	rsm psr.i | psr.ic	// initiate turning off of interrupt and interruption collection
913 + 	RSM_PSR_I_IC(r23, r22, r25)	// initiate turning off of interrupt and interruption collection
941 914 	invala			// invalidate ALAT
942 915 	;;
943 916 	ld8.fill r22=[r2],24
···
969 942 	mov ar.ccv=r15
970 943 	;;
971 944 	ldf.fill f11=[r2]
972 - 	bsw.0			// switch back to bank 0 (no stop bit required beforehand...)
945 + 	BSW_0(r2, r3, r15)	// switch back to bank 0 (no stop bit required beforehand...)
973 946 	;;
974 947 (pUStk)	mov r18=IA64_KR(CURRENT)// M2 (12 cycle read latency)
975 948 	adds r16=PT(CR_IPSR)+16,r12
977 950 
978 951 #ifdef CONFIG_VIRT_CPU_ACCOUNTING
979 952 	.pred.rel.mutex pUStk,pKStk
980 - (pKStk)	mov r22=psr			// M2 read PSR now that interrupts are disabled
953 + 	MOV_FROM_PSR(pKStk, r22, r29)	// M2 read PSR now that interrupts are disabled
981 954 (pUStk)	mov.m r22=ar.itc		// M  fetch time at leave
982 955 	nop.i 0
983 956 	;;
984 957 #else
985 - (pKStk)	mov r22=psr			// M2 read PSR now that interrupts are disabled
958 + 	MOV_FROM_PSR(pKStk, r22, r29)	// M2 read PSR now that interrupts are disabled
986 959 	nop.i 0
987 960 	nop.i 0
988 961 	;;
···
1054 1027 	 * NOTE: alloc, loadrs, and cover can't be predicated.
1055 1028 	 */
1056 1029 (pNonSys) br.cond.dpnt dont_preserve_current_frame
1057 - 	cover				// add current frame into dirty partition and set cr.ifs
1030 + 	COVER				// add current frame into dirty partition and set cr.ifs
1058 1031 	;;
1059 1032 	mov r19=ar.bsp			// get new backing store pointer
1060 1033 rbs_switch:
···
1157 1130 (pKStk)	dep r29=r22,r29,21,1	// I0 update ipsr.pp with psr.pp
1158 1131 (pLvSys)mov r16=r0		// A  clear r16 for leave_syscall, no-op otherwise
1159 1132 	;;
1160 - 	mov cr.ipsr=r29		// M2
1133 + 	MOV_TO_IPSR(p0, r29, r25)	// M2
1161 1134 	mov ar.pfs=r26		// I0
1162 1135 (pLvSys)mov r17=r0		// A  clear r17 for leave_syscall, no-op otherwise
1163 1136 
1164 - (p9)	mov cr.ifs=r30		// M2
1137 + 	MOV_TO_IFS(p9, r30, r25)// M2
1165 1138 	mov b0=r21		// I0
1166 1139 (pLvSys)mov r18=r0		// A  clear r18 for leave_syscall, no-op otherwise
1167 1140 
1168 1141 	mov ar.fpsr=r20		// M2
1169 - 	mov cr.iip=r28		// M2
1142 + 	MOV_TO_IIP(r28, r25)	// M2
1170 1143 	nop 0
1171 1144 	;;
1172 1145 (pUStk)	mov ar.rnat=r24		// M2 must happen with RSE in lazy mode
···
1175 1148 
1176 1149 	mov ar.rsc=r27		// M2
1177 1150 	mov pr=r31,-1		// I0
1178 - 	rfi			// B
1151 + 	RFI			// B
1179 1152 
1180 1153 	/*
1181 1154 	 * On entry:
···
1201 1174 	;;
1202 1175 (pKStk)	st4 [r20]=r21
1203 1176 #endif
1204 - 	ssm psr.i		// enable interrupts
1177 + 	SSM_PSR_I(p0, p6, r2)	// enable interrupts
1205 1178 	br.call.spnt.many rp=schedule
1206 1179 .ret9:	cmp.eq p6,p0=r0,r0	// p6 <- 1 (re-check)
1207 - 	rsm psr.i		// disable interrupts
1180 + 	RSM_PSR_I(p0, r2, r20)	// disable interrupts
1208 1181 	;;
1209 1182 #ifdef CONFIG_PREEMPT
1210 1183 (pKStk)	adds r20=TI_PRE_COUNT+IA64_TASK_SIZE,r13
1211 1184 	;;
1212 1185 (pKStk)	st4 [r20]=r0		// preempt_count() <- 0
1213 1186 #endif
1214 - (pLvSys)br.cond.sptk.few  .work_pending_syscall_end
1187 + (pLvSys)br.cond.sptk.few  __paravirt_pending_syscall_end
1215 1188 	br.cond.sptk.many .work_processed_kernel
1216 1189 
1217 1190 .notify:
1218 1191 (pUStk)	br.call.spnt.many rp=notify_resume_user
1219 1192 .ret10:	cmp.ne p6,p0=r0,r0	// p6 <- 0 (don't re-check)
1220 - (pLvSys)br.cond.sptk.few  .work_pending_syscall_end
1193 + (pLvSys)br.cond.sptk.few  __paravirt_pending_syscall_end
1221 1194 	br.cond.sptk.many .work_processed_kernel
1222 1195 
1223 - .work_pending_syscall_end:
1196 + .global __paravirt_pending_syscall_end;
1197 + __paravirt_pending_syscall_end:
1224 1198 	adds r2=PT(R8)+16,r12
1225 1199 	adds r3=PT(R10)+16,r12
1226 1200 	;;
1227 1201 	ld8 r8=[r2]
1228 1202 	ld8 r10=[r3]
1229 - 	br.cond.sptk.many .work_processed_syscall
1203 + 	br.cond.sptk.many __paravirt_work_processed_syscall_target
1204 + END(__paravirt_leave_kernel)
1230 1205 
1231 - END(ia64_leave_kernel)
1232 - 
1206 + #ifdef __IA64_ASM_PARAVIRTUALIZED_NATIVE
1233 1207 ENTRY(handle_syscall_error)
1234 1208 	/*
1235 1209 	 * Some system calls (e.g., ptrace, mmap) can return arbitrary values which could
···
1272 1244  * We declare 8 input registers so the system call args get preserved,
1273 1245  * in case we need to restart a system call.
1274 1246  */
1275 - ENTRY(notify_resume_user)
1247 + GLOBAL_ENTRY(notify_resume_user)
1276 1248 	.prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(8)
1277 1249 	alloc loc1=ar.pfs,8,2,3,0 // preserve all eight input regs in case of syscall restart!
1278 1250 	mov r9=ar.unat
···
1334 1306 	adds sp=16,sp
1335 1307 	;;
1336 1308 	ld8 r9=[sp]				// load new ar.unat
1337 - 	mov.sptk b7=r8,ia64_leave_kernel
1309 + 	mov.sptk b7=r8,ia64_native_leave_kernel
1338 1310 	;;
1339 1311 	mov ar.unat=r9
1340 1312 	br.many b7
···
1693 1665 	data8 sys_timerfd_gettime
1694 1666 
1695 1667 	.org sys_call_table + 8*NR_syscalls	// guard against failures to increase NR_syscalls
1668 + #endif /* __IA64_ASM_PARAVIRTUALIZED_NATIVE */
+41
arch/ia64/kernel/head.S
···
26 26 #include <asm/mmu_context.h>
27 27 #include <asm/asm-offsets.h>
28 28 #include <asm/pal.h>
29 + #include <asm/paravirt.h>
29 30 #include <asm/pgtable.h>
30 31 #include <asm/processor.h>
31 32 #include <asm/ptrace.h>
32 33 #include <asm/system.h>
33 34 #include <asm/mca_asm.h>
35 + #include <linux/init.h>
36 + #include <linux/linkage.h>
34 37 
35 38 #ifdef CONFIG_HOTPLUG_CPU
36 39 #define SAL_PSR_BITS_TO_SET				\
···
369 366 (isBP)	movl r2=ia64_boot_param
370 367 	;;
371 368 (isBP)	st8 [r2]=r28		// save the address of the boot param area passed by the bootloader
369 + 
370 + #ifdef CONFIG_PARAVIRT
371 + 
372 + 	movl r14=hypervisor_setup_hooks
373 + 	movl r15=hypervisor_type
374 + 	mov r16=num_hypervisor_hooks
375 + 	;;
376 + 	ld8 r2=[r15]
377 + 	;;
378 + 	cmp.ltu p7,p0=r2,r16	// array size check
379 + 	shladd r8=r2,3,r14
380 + 	;;
381 + (p7)	ld8 r9=[r8]
382 + 	;;
383 + (p7)	mov b1=r9
384 + (p7)	cmp.ne.unc p7,p0=r9,r0	// no actual branch to NULL
385 + 	;;
386 + (p7)	br.call.sptk.many rp=b1
387 + 
388 + 	__INITDATA
389 + 
390 + default_setup_hook = 0		// Currently nothing needs to be done.
391 + 
392 + 	.weak xen_setup_hook
393 + 
394 + 	.global hypervisor_type
395 + hypervisor_type:
396 + 	data8 PARAVIRT_HYPERVISOR_TYPE_DEFAULT
397 + 
398 + 	// must have the same order with PARAVIRT_HYPERVISOR_TYPE_xxx
399 + 
400 + hypervisor_setup_hooks:
401 + 	data8 default_setup_hook
402 + 	data8 xen_setup_hook
403 + num_hypervisor_hooks = (. - hypervisor_setup_hooks) / 8
404 + 	.previous
405 + 
406 + #endif
372 407 
373 408 #ifdef CONFIG_SMP
374 409 (isAP)	br.call.sptk.many rp=start_secondary
+29 -16
arch/ia64/kernel/iosapic.c
···
585 585 	return (iosapic_intr_info[irq].count > 1);
586 586 }
587 587 
588 + struct irq_chip*
589 + ia64_native_iosapic_get_irq_chip(unsigned long trigger)
590 + {
591 + 	if (trigger == IOSAPIC_EDGE)
592 + 		return &irq_type_iosapic_edge;
593 + 	else
594 + 		return &irq_type_iosapic_level;
595 + }
596 + 
588 597 static int
589 598 register_intr (unsigned int gsi, int irq, unsigned char delivery,
590 599 	       unsigned long polarity, unsigned long trigger)
···
644 635 	iosapic_intr_info[irq].dmode	= delivery;
645 636 	iosapic_intr_info[irq].trigger	= trigger;
646 637 
647 - 	if (trigger == IOSAPIC_EDGE)
648 - 		irq_type = &irq_type_iosapic_edge;
649 - 	else
650 - 		irq_type = &irq_type_iosapic_level;
638 + 	irq_type = iosapic_get_irq_chip(trigger);
651 639 
652 640 	idesc = irq_desc + irq;
653 - 	if (idesc->chip != irq_type) {
641 + 	if (irq_type != NULL && idesc->chip != irq_type) {
654 642 		if (idesc->chip != &no_irq_type)
655 643 			printk(KERN_WARNING
656 644 			       "%s: changing vector %d from %s to %s\n",
···
980 974 }
981 975 
982 976 void __init
977 + ia64_native_iosapic_pcat_compat_init(void)
978 + {
979 + 	if (pcat_compat) {
980 + 		/*
981 + 		 * Disable the compatibility mode interrupts (8259 style),
982 + 		 * needs IN/OUT support enabled.
983 + 		 */
984 + 		printk(KERN_INFO
985 + 		       "%s: Disabling PC-AT compatible 8259 interrupts\n",
986 + 		       __func__);
987 + 		outb(0xff, 0xA1);
988 + 		outb(0xff, 0x21);
989 + 	}
990 + }
991 + 
992 + void __init
983 993 iosapic_system_init (int system_pcat_compat)
984 994 {
985 995 	int irq;
···
1009 987 	}
1010 988 
1011 989 	pcat_compat = system_pcat_compat;
1012 - 	if (pcat_compat) {
1013 - 		/*
1014 - 		 * Disable the compatibility mode interrupts (8259 style),
1015 - 		 * needs IN/OUT support enabled.
1016 - 		 */
1017 - 		printk(KERN_INFO
1018 - 		       "%s: Disabling PC-AT compatible 8259 interrupts\n",
1019 - 		       __func__);
1020 - 		outb(0xff, 0xA1);
1021 - 		outb(0xff, 0x21);
1022 - 	}
990 + 	if (pcat_compat)
991 + 		iosapic_pcat_compat_init();
1023 992 }
1024 993 
1025 994 static inline int
+13 -6
arch/ia64/kernel/irq_ia64.c
···
196 196 }
197 197 
198 198 int
199 - assign_irq_vector (int irq)
199 + ia64_native_assign_irq_vector (int irq)
200 200 {
201 201 	unsigned long flags;
202 202 	int vector, cpu;
···
222 222 }
223 223 
224 224 void
225 - free_irq_vector (int vector)
225 + ia64_native_free_irq_vector (int vector)
226 226 {
227 227 	if (vector < IA64_FIRST_DEVICE_VECTOR ||
228 228 	    vector > IA64_LAST_DEVICE_VECTOR)
···
600 600 {
601 601 	BUG();
602 602 }
603 - extern irqreturn_t handle_IPI (int irq, void *dev_id);
604 603 
605 604 static struct irqaction ipi_irqaction = {
606 605 	.handler =	handle_IPI,
···
622 623 #endif
623 624 
624 625 void
625 - register_percpu_irq (ia64_vector vec, struct irqaction *action)
626 + ia64_native_register_percpu_irq (ia64_vector vec, struct irqaction *action)
626 627 {
627 628 	irq_desc_t *desc;
628 629 	unsigned int irq;
···
637 638 }
638 639 
639 640 void __init
640 - init_IRQ (void)
641 + ia64_native_register_ipi(void)
641 642 {
642 - 	register_percpu_irq(IA64_SPURIOUS_INT_VECTOR, NULL);
643 643 #ifdef CONFIG_SMP
644 644 	register_percpu_irq(IA64_IPI_VECTOR, &ipi_irqaction);
645 645 	register_percpu_irq(IA64_IPI_RESCHEDULE, &resched_irqaction);
646 646 	register_percpu_irq(IA64_IPI_LOCAL_TLB_FLUSH, &tlb_irqaction);
647 + #endif
648 + }
649 + 
650 + void __init
651 + init_IRQ (void)
652 + {
653 + 	ia64_register_ipi();
654 + 	register_percpu_irq(IA64_SPURIOUS_INT_VECTOR, NULL);
655 + #ifdef CONFIG_SMP
647 656 #if defined(CONFIG_IA64_GENERIC) || defined(CONFIG_IA64_DIG)
648 657 	if (vector_domain_type != VECTOR_DOMAIN_NONE) {
649 658 		BUG_ON(IA64_FIRST_DEVICE_VECTOR != IA64_IRQ_MOVE_VECTOR);
+231 -231
arch/ia64/kernel/ivt.S
···
12 12  *
13 13  * 00/08/23 Asit Mallick <asit.k.mallick@intel.com>	TLB handling for SMP
14 14  * 00/12/20 David Mosberger-Tang <davidm@hpl.hp.com>	DTLB/ITLB handler now uses virtual PT.
15 +  *
16 +  * Copyright (C) 2005 Hewlett-Packard Co
17 +  *	Dan Magenheimer <dan.magenheimer@hp.com>
18 +  *	Xen paravirtualization
19 +  * Copyright (c) 2008 Isaku Yamahata <yamahata at valinux co jp>
20 +  *	VA Linux Systems Japan K.K.
21 +  *	pv_ops.
22 +  *	Yaozu (Eddie) Dong <eddie.dong@intel.com>
15 23  */
16 24 /*
17 25  * This file defines the interruption vector table used by the CPU.
···
110 102  *	- the faulting virtual address uses unimplemented address bits
111 103  *	- the faulting virtual address has no valid page table mapping
112 104  */
113 - 	mov r16=cr.ifa				// get address that caused the TLB miss
105 + 	MOV_FROM_IFA(r16)			// get address that caused the TLB miss
114 106 #ifdef CONFIG_HUGETLB_PAGE
115 107 	movl r18=PAGE_SHIFT
116 - 	mov r25=cr.itir
108 + 	MOV_FROM_ITIR(r25)
117 109 #endif
118 110 	;;
119 - 	rsm psr.dt				// use physical addressing for data
111 + 	RSM_PSR_DT				// use physical addressing for data
120 112 	mov r31=pr				// save the predicate registers
121 113 	mov r19=IA64_KR(PT_BASE)		// get page table base address
122 114 	shl r21=r16,3				// shift bit 60 into sign bit
···
176 168 	dep r21=r19,r20,3,(PAGE_SHIFT-3)	// r21=pte_offset(pmd,addr)
177 169 	;;
178 170 (p7)	ld8 r18=[r21]				// read *pte
179 - 	mov r19=cr.isr				// cr.isr bit 32 tells us if this is an insn miss
171 + 	MOV_FROM_ISR(r19)			// cr.isr bit 32 tells us if this is an insn miss
180 172 	;;
181 173 (p7)	tbit.z p6,p7=r18,_PAGE_P_BIT		// page present bit cleared?
182 - 	mov r22=cr.iha				// get the VHPT address that caused the TLB miss
174 + 	MOV_FROM_IHA(r22)			// get the VHPT address that caused the TLB miss
183 175 	;;					// avoid RAW on p7
184 176 (p7)	tbit.nz.unc p10,p11=r19,32		// is it an instruction TLB miss?
185 177 	dep r23=0,r20,0,PAGE_SHIFT		// clear low bits to get page address
186 178 	;;
187 - (p10)	itc.i r18				// insert the instruction TLB entry
188 - (p11)	itc.d r18				// insert the data TLB entry
179 + 	ITC_I_AND_D(p10, p11, r18, r24)		// insert the instruction TLB entry and
180 + 						// insert the data TLB entry
189 181 (p6)	br.cond.spnt.many page_fault		// handle bad address/page not present (page fault)
190 - 	mov cr.ifa=r22
182 + 	MOV_TO_IFA(r22, r24)
191 183 
192 184 #ifdef CONFIG_HUGETLB_PAGE
193 - (p8)	mov cr.itir=r25				// change to default page-size for VHPT
185 + 	MOV_TO_ITIR(p8, r25, r24)		// change to default page-size for VHPT
194 186 #endif
195 187 
196 188 	/*
···
200 192 	 */
201 193 	adds r24=__DIRTY_BITS_NO_ED|_PAGE_PL_0|_PAGE_AR_RW,r23
202 194 	;;
203 - (p7)	itc.d r24
195 + 	ITC_D(p7, r24, r25)
204 196 	;;
205 197 #ifdef CONFIG_SMP
206 198 	/*
···
242 234 #endif
243 235 
244 236 	mov pr=r31,-1				// restore predicate registers
245 - 	rfi
237 + 	RFI
246 238 END(vhpt_miss)
247 239 
248 240 	.org ia64_ivt+0x400
···
256 248  * mode, walk the page table, and then re-execute the PTE read and
257 249  * go on normally after that.
258 250  */
259 - 	mov r16=cr.ifa				// get virtual address
251 + 	MOV_FROM_IFA(r16)			// get virtual address
260 252 	mov r29=b0				// save b0
261 253 	mov r31=pr				// save predicates
262 254 .itlb_fault:
263 - 	mov r17=cr.iha				// get virtual address of PTE
255 + 	MOV_FROM_IHA(r17)			// get virtual address of PTE
264 256 	movl r30=1f				// load nested fault continuation point
265 257 	;;
266 258 1:	ld8 r18=[r17]				// read *pte
···
269 261 	tbit.z p6,p0=r18,_PAGE_P_BIT		// page present bit cleared?
270 262 (p6)	br.cond.spnt page_fault
271 263 	;;
272 - 	itc.i r18
264 + 	ITC_I(p0, r18, r19)
273 265 	;;
274 266 #ifdef CONFIG_SMP
275 267 	/*
···
286 278 (p7)	ptc.l r16,r20
287 279 #endif
288 280 	mov pr=r31,-1
289 - 	rfi
281 + 	RFI
290 282 END(itlb_miss)
291 283 
292 284 	.org ia64_ivt+0x0800
···
300 292  * mode, walk the page table, and then re-execute the PTE read and
301 293  * go on normally after that.
302 294  */
303 - 	mov r16=cr.ifa				// get virtual address
295 + 	MOV_FROM_IFA(r16)			// get virtual address
304 296 	mov r29=b0				// save b0
305 297 	mov r31=pr				// save predicates
306 298 dtlb_fault:
307 - 	mov r17=cr.iha				// get virtual address of PTE
299 + 	MOV_FROM_IHA(r17)			// get virtual address of PTE
308 300 	movl r30=1f				// load nested fault continuation point
309 301 	;;
310 302 1:	ld8 r18=[r17]				// read *pte
···
313 305 	tbit.z p6,p0=r18,_PAGE_P_BIT		// page present bit cleared?
314 306 (p6)	br.cond.spnt page_fault
315 307 	;;
316 - 	itc.d r18
308 + 	ITC_D(p0, r18, r19)
317 309 	;;
318 310 #ifdef CONFIG_SMP
319 311 	/*
···
330 322 (p7)	ptc.l r16,r20
331 323 #endif
332 324 	mov pr=r31,-1
333 - 	rfi
325 + 	RFI
334 326 END(dtlb_miss)
335 327 
336 328 	.org ia64_ivt+0x0c00
···
338 330 // 0x0c00 Entry 3 (size 64 bundles) Alt ITLB (19)
339 331 ENTRY(alt_itlb_miss)
340 332 	DBG_FAULT(3)
341 - 	mov r16=cr.ifa		// get address that caused the TLB miss
333 + 	MOV_FROM_IFA(r16)	// get address that caused the TLB miss
342 334 	movl r17=PAGE_KERNEL
343 - 	mov r21=cr.ipsr
335 + 	MOV_FROM_IPSR(p0, r21)
344 336 	movl r19=(((1 << IA64_MAX_PHYS_BITS) - 1) & ~0xfff)
345 337 	mov r31=pr
346 338 	;;
···
349 341 	;;
350 342 	cmp.gt p8,p0=6,r22			// user mode
351 343 	;;
352 - (p8)	thash r17=r16
344 + 	THASH(p8, r17, r16, r23)
353 345 	;;
354 - (p8)	mov cr.iha=r17
346 + 	MOV_TO_IHA(p8, r17, r23)
355 347 (p8)	mov r29=b0				// save b0
356 348 (p8)	br.cond.dptk .itlb_fault
357 349 #endif
···
366 358 	or r19=r19,r18		// set bit 4 (uncached) if the access was to region 6
367 359 (p8)	br.cond.spnt page_fault
368 360 	;;
369 - 	itc.i r19		// insert the TLB entry
361 + 	ITC_I(p0, r19, r18)	// insert the TLB entry
370 362 	mov pr=r31,-1
371 - 	rfi
363 + 	RFI
372 364 END(alt_itlb_miss)
373 365 
374 366 	.org ia64_ivt+0x1000
···
376 368 // 0x1000 Entry 4 (size 64 bundles) Alt DTLB (7,46)
377 369 ENTRY(alt_dtlb_miss)
378 370 	DBG_FAULT(4)
379 - 	mov r16=cr.ifa		// get address that caused the TLB miss
371 + 	MOV_FROM_IFA(r16)	// get address that caused the
TLB miss 380 372 movl r17=PAGE_KERNEL 381 - mov r20=cr.isr 373 + MOV_FROM_ISR(r20) 382 374 movl r19=(((1 << IA64_MAX_PHYS_BITS) - 1) & ~0xfff) 383 - mov r21=cr.ipsr 375 + MOV_FROM_IPSR(p0, r21) 384 376 mov r31=pr 385 377 mov r24=PERCPU_ADDR 386 378 ;; ··· 389 381 ;; 390 382 cmp.gt p8,p0=6,r22 // access to region 0-5 391 383 ;; 392 - (p8) thash r17=r16 384 + THASH(p8, r17, r16, r25) 393 385 ;; 394 - (p8) mov cr.iha=r17 386 + MOV_TO_IHA(p8, r17, r25) 395 387 (p8) mov r29=b0 // save b0 396 388 (p8) br.cond.dptk dtlb_fault 397 389 #endif ··· 410 402 tbit.nz p9,p0=r20,IA64_ISR_NA_BIT // is non-access bit on? 411 403 ;; 412 404 (p10) sub r19=r19,r26 413 - (p10) mov cr.itir=r25 405 + MOV_TO_ITIR(p10, r25, r24) 414 406 cmp.ne p8,p0=r0,r23 415 407 (p9) cmp.eq.or.andcm p6,p7=IA64_ISR_CODE_LFETCH,r22 // check isr.code field 416 408 (p12) dep r17=-1,r17,4,1 // set ma=UC for region 6 addr ··· 419 411 dep r21=-1,r21,IA64_PSR_ED_BIT,1 420 412 ;; 421 413 or r19=r19,r17 // insert PTE control bits into r19 422 - (p6) mov cr.ipsr=r21 414 + MOV_TO_IPSR(p6, r21, r24) 423 415 ;; 424 - (p7) itc.d r19 // insert the TLB entry 416 + ITC_D(p7, r19, r18) // insert the TLB entry 425 417 mov pr=r31,-1 426 - rfi 418 + RFI 427 419 END(alt_dtlb_miss) 428 420 429 421 .org ia64_ivt+0x1400 ··· 452 444 * 453 445 * Clobbered: b0, r18, r19, r21, r22, psr.dt (cleared) 454 446 */ 455 - rsm psr.dt // switch to using physical data addressing 447 + RSM_PSR_DT // switch to using physical data addressing 456 448 mov r19=IA64_KR(PT_BASE) // get the page table base address 457 449 shl r21=r16,3 // shift bit 60 into sign bit 458 - mov r18=cr.itir 450 + MOV_FROM_ITIR(r18) 459 451 ;; 460 452 shr.u r17=r16,61 // get the region number into r17 461 453 extr.u r18=r18,2,6 // get the faulting page size ··· 515 507 FAULT(6) 516 508 END(ikey_miss) 517 509 518 - //----------------------------------------------------------------------------------- 519 - // call do_page_fault (predicates are in r31, psr.dt may be off, r16 is 
faulting address) 520 - ENTRY(page_fault) 521 - ssm psr.dt 522 - ;; 523 - srlz.i 524 - ;; 525 - SAVE_MIN_WITH_COVER 526 - alloc r15=ar.pfs,0,0,3,0 527 - mov out0=cr.ifa 528 - mov out1=cr.isr 529 - adds r3=8,r2 // set up second base pointer 530 - ;; 531 - ssm psr.ic | PSR_DEFAULT_BITS 532 - ;; 533 - srlz.i // guarantee that interruption collectin is on 534 - ;; 535 - (p15) ssm psr.i // restore psr.i 536 - movl r14=ia64_leave_kernel 537 - ;; 538 - SAVE_REST 539 - mov rp=r14 540 - ;; 541 - adds out2=16,r12 // out2 = pointer to pt_regs 542 - br.call.sptk.many b6=ia64_do_page_fault // ignore return address 543 - END(page_fault) 544 - 545 510 .org ia64_ivt+0x1c00 546 511 ///////////////////////////////////////////////////////////////////////////////////////// 547 512 // 0x1c00 Entry 7 (size 64 bundles) Data Key Miss (12,51) ··· 537 556 * page table TLB entry isn't present, we take a nested TLB miss hit where we look 538 557 * up the physical address of the L3 PTE and then continue at label 1 below. 
539 558 */ 540 - mov r16=cr.ifa // get the address that caused the fault 559 + MOV_FROM_IFA(r16) // get the address that caused the fault 541 560 movl r30=1f // load continuation point in case of nested fault 542 561 ;; 543 - thash r17=r16 // compute virtual address of L3 PTE 562 + THASH(p0, r17, r16, r18) // compute virtual address of L3 PTE 544 563 mov r29=b0 // save b0 in case of nested fault 545 564 mov r31=pr // save pr 546 565 #ifdef CONFIG_SMP ··· 557 576 ;; 558 577 (p6) cmp.eq p6,p7=r26,r18 // Only compare if page is present 559 578 ;; 560 - (p6) itc.d r25 // install updated PTE 579 + ITC_D(p6, r25, r18) // install updated PTE 561 580 ;; 562 581 /* 563 582 * Tell the assemblers dependency-violation checker that the above "itc" instructions ··· 583 602 itc.d r18 // install updated PTE 584 603 #endif 585 604 mov pr=r31,-1 // restore pr 586 - rfi 605 + RFI 587 606 END(dirty_bit) 588 607 589 608 .org ia64_ivt+0x2400 ··· 592 611 ENTRY(iaccess_bit) 593 612 DBG_FAULT(9) 594 613 // Like Entry 8, except for instruction access 595 - mov r16=cr.ifa // get the address that caused the fault 614 + MOV_FROM_IFA(r16) // get the address that caused the fault 596 615 movl r30=1f // load continuation point in case of nested fault 597 616 mov r31=pr // save predicates 598 617 #ifdef CONFIG_ITANIUM 599 618 /* 600 619 * Erratum 10 (IFA may contain incorrect address) has "NoFix" status. 601 620 */ 602 - mov r17=cr.ipsr 621 + MOV_FROM_IPSR(p0, r17) 603 622 ;; 604 - mov r18=cr.iip 623 + MOV_FROM_IIP(r18) 605 624 tbit.z p6,p0=r17,IA64_PSR_IS_BIT // IA64 instruction set? 
606 625 ;; 607 626 (p6) mov r16=r18 // if so, use cr.iip instead of cr.ifa 608 627 #endif /* CONFIG_ITANIUM */ 609 628 ;; 610 - thash r17=r16 // compute virtual address of L3 PTE 629 + THASH(p0, r17, r16, r18) // compute virtual address of L3 PTE 611 630 mov r29=b0 // save b0 in case of nested fault) 612 631 #ifdef CONFIG_SMP 613 632 mov r28=ar.ccv // save ar.ccv ··· 623 642 ;; 624 643 (p6) cmp.eq p6,p7=r26,r18 // Only if page present 625 644 ;; 626 - (p6) itc.i r25 // install updated PTE 645 + ITC_I(p6, r25, r26) // install updated PTE 627 646 ;; 628 647 /* 629 648 * Tell the assemblers dependency-violation checker that the above "itc" instructions ··· 649 668 itc.i r18 // install updated PTE 650 669 #endif /* !CONFIG_SMP */ 651 670 mov pr=r31,-1 652 - rfi 671 + RFI 653 672 END(iaccess_bit) 654 673 655 674 .org ia64_ivt+0x2800 ··· 658 677 ENTRY(daccess_bit) 659 678 DBG_FAULT(10) 660 679 // Like Entry 8, except for data access 661 - mov r16=cr.ifa // get the address that caused the fault 680 + MOV_FROM_IFA(r16) // get the address that caused the fault 662 681 movl r30=1f // load continuation point in case of nested fault 663 682 ;; 664 - thash r17=r16 // compute virtual address of L3 PTE 683 + THASH(p0, r17, r16, r18) // compute virtual address of L3 PTE 665 684 mov r31=pr 666 685 mov r29=b0 // save b0 in case of nested fault) 667 686 #ifdef CONFIG_SMP ··· 678 697 ;; 679 698 (p6) cmp.eq p6,p7=r26,r18 // Only if page is present 680 699 ;; 681 - (p6) itc.d r25 // install updated PTE 700 + ITC_D(p6, r25, r26) // install updated PTE 682 701 /* 683 702 * Tell the assemblers dependency-violation checker that the above "itc" instructions 684 703 * cannot possibly affect the following loads: ··· 702 721 #endif 703 722 mov b0=r29 // restore b0 704 723 mov pr=r31,-1 705 - rfi 724 + RFI 706 725 END(daccess_bit) 707 726 708 727 .org ia64_ivt+0x2c00 ··· 726 745 */ 727 746 DBG_FAULT(11) 728 747 mov.m r16=IA64_KR(CURRENT) // M2 r16 <- current task (12 cyc) 729 - mov r29=cr.ipsr 
// M2 (12 cyc) 748 + MOV_FROM_IPSR(p0, r29) // M2 (12 cyc) 730 749 mov r31=pr // I0 (2 cyc) 731 750 732 - mov r17=cr.iim // M2 (2 cyc) 751 + MOV_FROM_IIM(r17) // M2 (2 cyc) 733 752 mov.m r27=ar.rsc // M2 (12 cyc) 734 753 mov r18=__IA64_BREAK_SYSCALL // A 735 754 ··· 748 767 nop.m 0 749 768 movl r30=sys_call_table // X 750 769 751 - mov r28=cr.iip // M2 (2 cyc) 770 + MOV_FROM_IIP(r28) // M2 (2 cyc) 752 771 cmp.eq p0,p7=r18,r17 // I0 is this a system call? 753 772 (p7) br.cond.spnt non_syscall // B no -> 754 773 // ··· 845 864 #endif 846 865 mov ar.rsc=0x3 // M2 set eager mode, pl 0, LE, loadrs=0 847 866 nop 0 848 - bsw.1 // B (6 cyc) regs are saved, switch to bank 1 867 + BSW_1(r2, r14) // B (6 cyc) regs are saved, switch to bank 1 849 868 ;; 850 869 851 - ssm psr.ic | PSR_DEFAULT_BITS // M2 now it's safe to re-enable intr.-collection 870 + SSM_PSR_IC_AND_DEFAULT_BITS_AND_SRLZ_I(r3, r16) // M2 now it's safe to re-enable intr.-collection 871 + // M0 ensure interruption collection is on 852 872 movl r3=ia64_ret_from_syscall // X 853 873 ;; 854 - 855 - srlz.i // M0 ensure interruption collection is on 856 874 mov rp=r3 // I0 set the real return addr 857 875 (p10) br.cond.spnt.many ia64_ret_from_syscall // B return if bad call-frame or r15 is a NaT 858 876 859 - (p15) ssm psr.i // M2 restore psr.i 877 + SSM_PSR_I(p15, p15, r16) // M2 restore psr.i 860 878 (p14) br.call.sptk.many b6=b6 // B invoke syscall-handker (ignore return addr) 861 879 br.cond.spnt.many ia64_trace_syscall // B do syscall-tracing thingamagic 862 880 // NOT REACHED ··· 875 895 ///////////////////////////////////////////////////////////////////////////////////////// 876 896 // 0x3000 Entry 12 (size 64 bundles) External Interrupt (4) 877 897 ENTRY(interrupt) 878 - DBG_FAULT(12) 879 - mov r31=pr // prepare to save predicates 880 - ;; 881 - SAVE_MIN_WITH_COVER // uses r31; defines r2 and r3 882 - ssm psr.ic | PSR_DEFAULT_BITS 883 - ;; 884 - adds r3=8,r2 // set up second base pointer for SAVE_REST 885 - 
srlz.i // ensure everybody knows psr.ic is back on 886 - ;; 887 - SAVE_REST 888 - ;; 889 - MCA_RECOVER_RANGE(interrupt) 890 - alloc r14=ar.pfs,0,0,2,0 // must be first in an insn group 891 - mov out0=cr.ivr // pass cr.ivr as first arg 892 - add out1=16,sp // pass pointer to pt_regs as second arg 893 - ;; 894 - srlz.d // make sure we see the effect of cr.ivr 895 - movl r14=ia64_leave_kernel 896 - ;; 897 - mov rp=r14 898 - br.call.sptk.many b6=ia64_handle_irq 898 + /* interrupt handler has become too big to fit this area. */ 899 + br.sptk.many __interrupt 899 900 END(interrupt) 900 901 901 902 .org ia64_ivt+0x3400 ··· 939 978 * - ar.fpsr: set to kernel settings 940 979 * - b6: preserved (same as on entry) 941 980 */ 981 + #ifdef __IA64_ASM_PARAVIRTUALIZED_NATIVE 942 982 GLOBAL_ENTRY(ia64_syscall_setup) 943 983 #if PT(B6) != 0 944 984 # error This code assumes that b6 is the first field in pt_regs. ··· 1031 1069 (p10) mov r8=-EINVAL 1032 1070 br.ret.sptk.many b7 1033 1071 END(ia64_syscall_setup) 1072 + #endif /* __IA64_ASM_PARAVIRTUALIZED_NATIVE */ 1034 1073 1035 1074 .org ia64_ivt+0x3c00 1036 1075 ///////////////////////////////////////////////////////////////////////////////////////// ··· 1045 1082 DBG_FAULT(16) 1046 1083 FAULT(16) 1047 1084 1048 - #ifdef CONFIG_VIRT_CPU_ACCOUNTING 1085 + #if defined(CONFIG_VIRT_CPU_ACCOUNTING) && defined(__IA64_ASM_PARAVIRTUALIZED_NATIVE) 1049 1086 /* 1050 1087 * There is no particular reason for this code to be here, other than 1051 1088 * that there happens to be space here that would go unused otherwise. ··· 1055 1092 * account_sys_enter is called from SAVE_MIN* macros if accounting is 1056 1093 * enabled and if the macro is entered from user mode. 
1057 1094 */ 1058 - ENTRY(account_sys_enter) 1095 + GLOBAL_ENTRY(account_sys_enter) 1059 1096 // mov.m r20=ar.itc is called in advance, and r13 is current 1060 1097 add r16=TI_AC_STAMP+IA64_TASK_SIZE,r13 1061 1098 add r17=TI_AC_LEAVE+IA64_TASK_SIZE,r13 ··· 1086 1123 DBG_FAULT(17) 1087 1124 FAULT(17) 1088 1125 1089 - ENTRY(non_syscall) 1090 - mov ar.rsc=r27 // restore ar.rsc before SAVE_MIN_WITH_COVER 1091 - ;; 1092 - SAVE_MIN_WITH_COVER 1093 - 1094 - // There is no particular reason for this code to be here, other than that 1095 - // there happens to be space here that would go unused otherwise. If this 1096 - // fault ever gets "unreserved", simply moved the following code to a more 1097 - // suitable spot... 1098 - 1099 - alloc r14=ar.pfs,0,0,2,0 1100 - mov out0=cr.iim 1101 - add out1=16,sp 1102 - adds r3=8,r2 // set up second base pointer for SAVE_REST 1103 - 1104 - ssm psr.ic | PSR_DEFAULT_BITS 1105 - ;; 1106 - srlz.i // guarantee that interruption collection is on 1107 - ;; 1108 - (p15) ssm psr.i // restore psr.i 1109 - movl r15=ia64_leave_kernel 1110 - ;; 1111 - SAVE_REST 1112 - mov rp=r15 1113 - ;; 1114 - br.call.sptk.many b6=ia64_bad_break // avoid WAW on CFM and ignore return addr 1115 - END(non_syscall) 1116 - 1117 1126 .org ia64_ivt+0x4800 1118 1127 ///////////////////////////////////////////////////////////////////////////////////////// 1119 1128 // 0x4800 Entry 18 (size 64 bundles) Reserved 1120 1129 DBG_FAULT(18) 1121 1130 FAULT(18) 1122 1131 1123 - /* 1124 - * There is no particular reason for this code to be here, other than that 1125 - * there happens to be space here that would go unused otherwise. If this 1126 - * fault ever gets "unreserved", simply moved the following code to a more 1127 - * suitable spot... 1128 - */ 1129 - 1130 - ENTRY(dispatch_unaligned_handler) 1131 - SAVE_MIN_WITH_COVER 1132 - ;; 1133 - alloc r14=ar.pfs,0,0,2,0 // now it's safe (must be first in insn group!) 
1134 - mov out0=cr.ifa 1135 - adds out1=16,sp 1136 - 1137 - ssm psr.ic | PSR_DEFAULT_BITS 1138 - ;; 1139 - srlz.i // guarantee that interruption collection is on 1140 - ;; 1141 - (p15) ssm psr.i // restore psr.i 1142 - adds r3=8,r2 // set up second base pointer 1143 - ;; 1144 - SAVE_REST 1145 - movl r14=ia64_leave_kernel 1146 - ;; 1147 - mov rp=r14 1148 - br.sptk.many ia64_prepare_handle_unaligned 1149 - END(dispatch_unaligned_handler) 1150 - 1151 1132 .org ia64_ivt+0x4c00 1152 1133 ///////////////////////////////////////////////////////////////////////////////////////// 1153 1134 // 0x4c00 Entry 19 (size 64 bundles) Reserved 1154 1135 DBG_FAULT(19) 1155 1136 FAULT(19) 1156 - 1157 - /* 1158 - * There is no particular reason for this code to be here, other than that 1159 - * there happens to be space here that would go unused otherwise. If this 1160 - * fault ever gets "unreserved", simply moved the following code to a more 1161 - * suitable spot... 1162 - */ 1163 - 1164 - ENTRY(dispatch_to_fault_handler) 1165 - /* 1166 - * Input: 1167 - * psr.ic: off 1168 - * r19: fault vector number (e.g., 24 for General Exception) 1169 - * r31: contains saved predicates (pr) 1170 - */ 1171 - SAVE_MIN_WITH_COVER_R19 1172 - alloc r14=ar.pfs,0,0,5,0 1173 - mov out0=r15 1174 - mov out1=cr.isr 1175 - mov out2=cr.ifa 1176 - mov out3=cr.iim 1177 - mov out4=cr.itir 1178 - ;; 1179 - ssm psr.ic | PSR_DEFAULT_BITS 1180 - ;; 1181 - srlz.i // guarantee that interruption collection is on 1182 - ;; 1183 - (p15) ssm psr.i // restore psr.i 1184 - adds r3=8,r2 // set up second base pointer for SAVE_REST 1185 - ;; 1186 - SAVE_REST 1187 - movl r14=ia64_leave_kernel 1188 - ;; 1189 - mov rp=r14 1190 - br.call.sptk.many b6=ia64_fault 1191 - END(dispatch_to_fault_handler) 1192 1137 1193 1138 // 1194 1139 // --- End of long entries, Beginning of short entries ··· 1107 1236 // 0x5000 Entry 20 (size 16 bundles) Page Not Present (10,22,49) 1108 1237 ENTRY(page_not_present) 1109 1238 DBG_FAULT(20) 1110 - mov 
r16=cr.ifa 1111 - rsm psr.dt 1239 + MOV_FROM_IFA(r16) 1240 + RSM_PSR_DT 1112 1241 /* 1113 1242 * The Linux page fault handler doesn't expect non-present pages to be in 1114 1243 * the TLB. Flush the existing entry now, so we meet that expectation. ··· 1127 1256 // 0x5100 Entry 21 (size 16 bundles) Key Permission (13,25,52) 1128 1257 ENTRY(key_permission) 1129 1258 DBG_FAULT(21) 1130 - mov r16=cr.ifa 1131 - rsm psr.dt 1259 + MOV_FROM_IFA(r16) 1260 + RSM_PSR_DT 1132 1261 mov r31=pr 1133 1262 ;; 1134 1263 srlz.d ··· 1140 1269 // 0x5200 Entry 22 (size 16 bundles) Instruction Access Rights (26) 1141 1270 ENTRY(iaccess_rights) 1142 1271 DBG_FAULT(22) 1143 - mov r16=cr.ifa 1144 - rsm psr.dt 1272 + MOV_FROM_IFA(r16) 1273 + RSM_PSR_DT 1145 1274 mov r31=pr 1146 1275 ;; 1147 1276 srlz.d ··· 1153 1282 // 0x5300 Entry 23 (size 16 bundles) Data Access Rights (14,53) 1154 1283 ENTRY(daccess_rights) 1155 1284 DBG_FAULT(23) 1156 - mov r16=cr.ifa 1157 - rsm psr.dt 1285 + MOV_FROM_IFA(r16) 1286 + RSM_PSR_DT 1158 1287 mov r31=pr 1159 1288 ;; 1160 1289 srlz.d ··· 1166 1295 // 0x5400 Entry 24 (size 16 bundles) General Exception (5,32,34,36,38,39) 1167 1296 ENTRY(general_exception) 1168 1297 DBG_FAULT(24) 1169 - mov r16=cr.isr 1298 + MOV_FROM_ISR(r16) 1170 1299 mov r31=pr 1171 1300 ;; 1172 1301 cmp4.eq p6,p0=0,r16 ··· 1195 1324 ENTRY(nat_consumption) 1196 1325 DBG_FAULT(26) 1197 1326 1198 - mov r16=cr.ipsr 1199 - mov r17=cr.isr 1327 + MOV_FROM_IPSR(p0, r16) 1328 + MOV_FROM_ISR(r17) 1200 1329 mov r31=pr // save PR 1201 1330 ;; 1202 1331 and r18=0xf,r17 // r18 = cr.ipsr.code{3:0} ··· 1206 1335 dep r16=-1,r16,IA64_PSR_ED_BIT,1 1207 1336 (p6) br.cond.spnt 1f // branch if (cr.ispr.na == 0 || cr.ipsr.code{3:0} != LFETCH) 1208 1337 ;; 1209 - mov cr.ipsr=r16 // set cr.ipsr.na 1338 + MOV_TO_IPSR(p0, r16, r18) 1210 1339 mov pr=r31,-1 1211 1340 ;; 1212 - rfi 1341 + RFI 1213 1342 1214 1343 1: mov pr=r31,-1 1215 1344 ;; ··· 1231 1360 * 1232 1361 * cr.imm contains zero_ext(imm21) 1233 1362 */ 1234 - 
mov r18=cr.iim 1363 + MOV_FROM_IIM(r18) 1235 1364 ;; 1236 - mov r17=cr.iip 1365 + MOV_FROM_IIP(r17) 1237 1366 shl r18=r18,43 // put sign bit in position (43=64-21) 1238 1367 ;; 1239 1368 1240 - mov r16=cr.ipsr 1369 + MOV_FROM_IPSR(p0, r16) 1241 1370 shr r18=r18,39 // sign extend (39=43-4) 1242 1371 ;; 1243 1372 1244 1373 add r17=r17,r18 // now add the offset 1245 1374 ;; 1246 - mov cr.iip=r17 1375 + MOV_TO_IIP(r17, r19) 1247 1376 dep r16=0,r16,41,2 // clear EI 1248 1377 ;; 1249 1378 1250 - mov cr.ipsr=r16 1379 + MOV_TO_IPSR(p0, r16, r19) 1251 1380 ;; 1252 1381 1253 - rfi // and go back 1382 + RFI 1254 1383 END(speculation_vector) 1255 1384 1256 1385 .org ia64_ivt+0x5800 ··· 1388 1517 DBG_FAULT(46) 1389 1518 #ifdef CONFIG_IA32_SUPPORT 1390 1519 mov r31=pr 1391 - mov r16=cr.isr 1520 + MOV_FROM_ISR(r16) 1392 1521 ;; 1393 1522 extr.u r17=r16,16,8 // get ISR.code 1394 1523 mov r18=ar.eflag 1395 - mov r19=cr.iim // old eflag value 1524 + MOV_FROM_IIM(r19) // old eflag value 1396 1525 ;; 1397 1526 cmp.ne p6,p0=2,r17 1398 1527 (p6) br.cond.spnt 1f // not a system flag fault ··· 1404 1533 (p6) br.cond.spnt 1f // eflags.ac bit didn't change 1405 1534 ;; 1406 1535 mov pr=r31,-1 // restore predicate registers 1407 - rfi 1536 + RFI 1409 1538 1: 1410 1539 #endif // CONFIG_IA32_SUPPORT ··· 1544 1673 DBG_FAULT(67) 1545 1674 FAULT(67) 1676 + //----------------------------------------------------------------------------------- 1677 + // call do_page_fault (predicates are in r31, psr.dt may be off, r16 is faulting address) 1678 + ENTRY(page_fault) 1679 + SSM_PSR_DT_AND_SRLZ_I 1680 + ;; 1681 + SAVE_MIN_WITH_COVER 1682 + alloc r15=ar.pfs,0,0,3,0 1683 + MOV_FROM_IFA(out0) 1684 + MOV_FROM_ISR(out1) 1685 + SSM_PSR_IC_AND_DEFAULT_BITS_AND_SRLZ_I(r14, r3) 1686 + adds r3=8,r2 // set up second base pointer 1687 + SSM_PSR_I(p15, p15, r14) // restore psr.i 1688 + movl r14=ia64_leave_kernel 1689 + ;; 1690 + SAVE_REST 1691 + mov rp=r14 1692 + ;; 1693 + adds out2=16,r12 // out2 =
pointer to pt_regs 1694 + br.call.sptk.many b6=ia64_do_page_fault // ignore return address 1695 + END(page_fault) 1696 + 1697 + ENTRY(non_syscall) 1698 + mov ar.rsc=r27 // restore ar.rsc before SAVE_MIN_WITH_COVER 1699 + ;; 1700 + SAVE_MIN_WITH_COVER 1701 + 1702 + // There is no particular reason for this code to be here, other than that 1703 + // there happens to be space here that would go unused otherwise. If this 1704 + // fault ever gets "unreserved", simply moved the following code to a more 1705 + // suitable spot... 1706 + 1707 + alloc r14=ar.pfs,0,0,2,0 1708 + MOV_FROM_IIM(out0) 1709 + add out1=16,sp 1710 + adds r3=8,r2 // set up second base pointer for SAVE_REST 1711 + 1712 + SSM_PSR_IC_AND_DEFAULT_BITS_AND_SRLZ_I(r15, r24) 1713 + // guarantee that interruption collection is on 1714 + SSM_PSR_I(p15, p15, r15) // restore psr.i 1715 + movl r15=ia64_leave_kernel 1716 + ;; 1717 + SAVE_REST 1718 + mov rp=r15 1719 + ;; 1720 + br.call.sptk.many b6=ia64_bad_break // avoid WAW on CFM and ignore return addr 1721 + END(non_syscall) 1722 + 1723 + ENTRY(__interrupt) 1724 + DBG_FAULT(12) 1725 + mov r31=pr // prepare to save predicates 1726 + ;; 1727 + SAVE_MIN_WITH_COVER // uses r31; defines r2 and r3 1728 + SSM_PSR_IC_AND_DEFAULT_BITS_AND_SRLZ_I(r3, r14) 1729 + // ensure everybody knows psr.ic is back on 1730 + adds r3=8,r2 // set up second base pointer for SAVE_REST 1731 + ;; 1732 + SAVE_REST 1733 + ;; 1734 + MCA_RECOVER_RANGE(interrupt) 1735 + alloc r14=ar.pfs,0,0,2,0 // must be first in an insn group 1736 + MOV_FROM_IVR(out0, r8) // pass cr.ivr as first arg 1737 + add out1=16,sp // pass pointer to pt_regs as second arg 1738 + ;; 1739 + srlz.d // make sure we see the effect of cr.ivr 1740 + movl r14=ia64_leave_kernel 1741 + ;; 1742 + mov rp=r14 1743 + br.call.sptk.many b6=ia64_handle_irq 1744 + END(__interrupt) 1745 + 1746 + /* 1747 + * There is no particular reason for this code to be here, other than that 1748 + * there happens to be space here that would go 
unused otherwise. If this 1749 + * fault ever gets "unreserved", simply moved the following code to a more 1750 + * suitable spot... 1751 + */ 1752 + 1753 + ENTRY(dispatch_unaligned_handler) 1754 + SAVE_MIN_WITH_COVER 1755 + ;; 1756 + alloc r14=ar.pfs,0,0,2,0 // now it's safe (must be first in insn group!) 1757 + MOV_FROM_IFA(out0) 1758 + adds out1=16,sp 1759 + 1760 + SSM_PSR_IC_AND_DEFAULT_BITS_AND_SRLZ_I(r3, r24) 1761 + // guarantee that interruption collection is on 1762 + SSM_PSR_I(p15, p15, r3) // restore psr.i 1763 + adds r3=8,r2 // set up second base pointer 1764 + ;; 1765 + SAVE_REST 1766 + movl r14=ia64_leave_kernel 1767 + ;; 1768 + mov rp=r14 1769 + br.sptk.many ia64_prepare_handle_unaligned 1770 + END(dispatch_unaligned_handler) 1771 + 1772 + /* 1773 + * There is no particular reason for this code to be here, other than that 1774 + * there happens to be space here that would go unused otherwise. If this 1775 + * fault ever gets "unreserved", simply moved the following code to a more 1776 + * suitable spot... 1777 + */ 1778 + 1779 + ENTRY(dispatch_to_fault_handler) 1780 + /* 1781 + * Input: 1782 + * psr.ic: off 1783 + * r19: fault vector number (e.g., 24 for General Exception) 1784 + * r31: contains saved predicates (pr) 1785 + */ 1786 + SAVE_MIN_WITH_COVER_R19 1787 + alloc r14=ar.pfs,0,0,5,0 1788 + MOV_FROM_ISR(out1) 1789 + MOV_FROM_IFA(out2) 1790 + MOV_FROM_IIM(out3) 1791 + MOV_FROM_ITIR(out4) 1792 + ;; 1793 + SSM_PSR_IC_AND_DEFAULT_BITS_AND_SRLZ_I(r3, out0) 1794 + // guarantee that interruption collection is on 1795 + mov out0=r15 1796 + ;; 1797 + SSM_PSR_I(p15, p15, r3) // restore psr.i 1798 + adds r3=8,r2 // set up second base pointer for SAVE_REST 1799 + ;; 1800 + SAVE_REST 1801 + movl r14=ia64_leave_kernel 1802 + ;; 1803 + mov rp=r14 1804 + br.call.sptk.many b6=ia64_fault 1805 + END(dispatch_to_fault_handler) 1806 + 1547 1807 /* 1548 1808 * Squatting in this space ... 
1549 1809 * ··· 1688 1686 .prologue 1689 1687 .body 1690 1688 SAVE_MIN_WITH_COVER 1691 - ssm psr.ic | PSR_DEFAULT_BITS 1689 + SSM_PSR_IC_AND_DEFAULT_BITS_AND_SRLZ_I(r3, r24) 1690 + // guarantee that interruption collection is on 1692 1691 ;; 1693 - srlz.i // guarantee that interruption collection is on 1694 - ;; 1695 - (p15) ssm psr.i // restore psr.i 1692 + SSM_PSR_I(p15, p15, r3) // restore psr.i 1696 1693 adds r3=8,r2 // set up second base pointer for SAVE_REST 1697 1694 ;; 1698 1695 alloc r14=ar.pfs,0,0,1,0 // must be first in insn group ··· 1730 1729 ENTRY(dispatch_to_ia32_handler) 1731 1730 SAVE_MIN 1732 1731 ;; 1733 - mov r14=cr.isr 1734 - ssm psr.ic | PSR_DEFAULT_BITS 1732 + MOV_FROM_ISR(r14) 1733 + SSM_PSR_IC_AND_DEFAULT_BITS_AND_SRLZ_I(r3, r24) 1734 + // guarantee that interruption collection is on 1735 1735 ;; 1736 - srlz.i // guarantee that interruption collection is on 1737 - ;; 1738 - (p15) ssm psr.i 1736 + SSM_PSR_I(p15, p15, r3) 1739 1737 adds r3=8,r2 // Base pointer for SAVE_REST 1740 1738 ;; 1741 1739 SAVE_REST
+7 -6
arch/ia64/kernel/minstate.h
··· 2 2 #include <asm/cache.h> 3 3 4 4 #include "entry.h" 5 + #include "paravirt_inst.h" 5 6 6 7 #ifdef CONFIG_VIRT_CPU_ACCOUNTING 7 8 /* read ar.itc in advance, and use it before leaving bank 0 */ ··· 44 43 * Note that psr.ic is NOT turned on by this macro. This is so that 45 44 * we can pass interruption state as arguments to a handler. 46 45 */ 47 - #define DO_SAVE_MIN(COVER,SAVE_IFS,EXTRA,WORKAROUND) \ 46 + #define IA64_NATIVE_DO_SAVE_MIN(__COVER,SAVE_IFS,EXTRA,WORKAROUND) \ 48 47 mov r16=IA64_KR(CURRENT); /* M */ \ 49 48 mov r27=ar.rsc; /* M */ \ 50 49 mov r20=r1; /* A */ \ 51 50 mov r25=ar.unat; /* M */ \ 52 - mov r29=cr.ipsr; /* M */ \ 51 + MOV_FROM_IPSR(p0,r29); /* M */ \ 53 52 mov r26=ar.pfs; /* I */ \ 54 - mov r28=cr.iip; /* M */ \ 53 + MOV_FROM_IIP(r28); /* M */ \ 55 54 mov r21=ar.fpsr; /* M */ \ 56 - COVER; /* B;; (or nothing) */ \ 55 + __COVER; /* B;; (or nothing) */ \ 57 56 ;; \ 58 57 adds r16=IA64_TASK_THREAD_ON_USTACK_OFFSET,r16; \ 59 58 ;; \ ··· 245 244 1: \ 246 245 .pred.rel "mutex", pKStk, pUStk 247 246 248 - #define SAVE_MIN_WITH_COVER DO_SAVE_MIN(cover, mov r30=cr.ifs, , RSE_WORKAROUND) 249 - #define SAVE_MIN_WITH_COVER_R19 DO_SAVE_MIN(cover, mov r30=cr.ifs, mov r15=r19, RSE_WORKAROUND) 247 + #define SAVE_MIN_WITH_COVER DO_SAVE_MIN(COVER, mov r30=cr.ifs, , RSE_WORKAROUND) 248 + #define SAVE_MIN_WITH_COVER_R19 DO_SAVE_MIN(COVER, mov r30=cr.ifs, mov r15=r19, RSE_WORKAROUND) 250 249 #define SAVE_MIN DO_SAVE_MIN( , mov r30=r0, , )
+2 -1
arch/ia64/kernel/module.c
··· 321 321 void 322 322 module_free (struct module *mod, void *module_region) 323 323 { 324 - if (mod->arch.init_unw_table && module_region == mod->module_init) { 324 + if (mod && mod->arch.init_unw_table && 325 + module_region == mod->module_init) { 325 326 unw_remove_unwind_table(mod->arch.init_unw_table); 326 327 mod->arch.init_unw_table = NULL; 327 328 }
+24
arch/ia64/kernel/nr-irqs.c
··· 1 + /* 2 + * calculate 3 + * NR_IRQS = max(IA64_NATIVE_NR_IRQS, XEN_NR_IRQS, FOO_NR_IRQS...) 4 + * depending on config. 5 + * This must be calculated before processing asm-offset.c. 6 + */ 7 + 8 + #define ASM_OFFSETS_C 1 9 + 10 + #include <linux/kbuild.h> 11 + #include <linux/threads.h> 12 + #include <asm-ia64/native/irq.h> 13 + 14 + void foo(void) 15 + { 16 + union paravirt_nr_irqs_max { 17 + char ia64_native_nr_irqs[IA64_NATIVE_NR_IRQS]; 18 + #ifdef CONFIG_XEN 19 + char xen_nr_irqs[XEN_NR_IRQS]; 20 + #endif 21 + }; 22 + 23 + DEFINE(NR_IRQS, sizeof (union paravirt_nr_irqs_max)); 24 + }
+369
arch/ia64/kernel/paravirt.c
··· 1 + /****************************************************************************** 2 + * arch/ia64/kernel/paravirt.c 3 + * 4 + * Copyright (c) 2008 Isaku Yamahata <yamahata at valinux co jp> 5 + * VA Linux Systems Japan K.K. 6 + * Yaozu (Eddie) Dong <eddie.dong@intel.com> 7 + * 8 + * This program is free software; you can redistribute it and/or modify 9 + * it under the terms of the GNU General Public License as published by 10 + * the Free Software Foundation; either version 2 of the License, or 11 + * (at your option) any later version. 12 + * 13 + * This program is distributed in the hope that it will be useful, 14 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 15 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 16 + * GNU General Public License for more details. 17 + * 18 + * You should have received a copy of the GNU General Public License 19 + * along with this program; if not, write to the Free Software 20 + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 21 + * 22 + */ 23 + 24 + #include <linux/init.h> 25 + 26 + #include <linux/compiler.h> 27 + #include <linux/io.h> 28 + #include <linux/irq.h> 29 + #include <linux/module.h> 30 + #include <linux/types.h> 31 + 32 + #include <asm/iosapic.h> 33 + #include <asm/paravirt.h> 34 + 35 + /*************************************************************************** 36 + * general info 37 + */ 38 + struct pv_info pv_info = { 39 + .kernel_rpl = 0, 40 + .paravirt_enabled = 0, 41 + .name = "bare hardware" 42 + }; 43 + 44 + /*************************************************************************** 45 + * pv_init_ops 46 + * initialization hooks. 47 + */ 48 + 49 + struct pv_init_ops pv_init_ops; 50 + 51 + /*************************************************************************** 52 + * pv_cpu_ops 53 + * intrinsics hooks. 
54 + */ 55 + 56 + /* ia64_native_xxx are macros so that we have to make them real functions */ 57 + 58 + #define DEFINE_VOID_FUNC1(name) \ 59 + static void \ 60 + ia64_native_ ## name ## _func(unsigned long arg) \ 61 + { \ 62 + ia64_native_ ## name(arg); \ 63 + } \ 64 + 65 + #define DEFINE_VOID_FUNC2(name) \ 66 + static void \ 67 + ia64_native_ ## name ## _func(unsigned long arg0, \ 68 + unsigned long arg1) \ 69 + { \ 70 + ia64_native_ ## name(arg0, arg1); \ 71 + } \ 72 + 73 + #define DEFINE_FUNC0(name) \ 74 + static unsigned long \ 75 + ia64_native_ ## name ## _func(void) \ 76 + { \ 77 + return ia64_native_ ## name(); \ 78 + } 79 + 80 + #define DEFINE_FUNC1(name, type) \ 81 + static unsigned long \ 82 + ia64_native_ ## name ## _func(type arg) \ 83 + { \ 84 + return ia64_native_ ## name(arg); \ 85 + } \ 86 + 87 + DEFINE_VOID_FUNC1(fc); 88 + DEFINE_VOID_FUNC1(intrin_local_irq_restore); 89 + 90 + DEFINE_VOID_FUNC2(ptcga); 91 + DEFINE_VOID_FUNC2(set_rr); 92 + 93 + DEFINE_FUNC0(get_psr_i); 94 + 95 + DEFINE_FUNC1(thash, unsigned long); 96 + DEFINE_FUNC1(get_cpuid, int); 97 + DEFINE_FUNC1(get_pmd, int); 98 + DEFINE_FUNC1(get_rr, unsigned long); 99 + 100 + static void 101 + ia64_native_ssm_i_func(void) 102 + { 103 + ia64_native_ssm(IA64_PSR_I); 104 + } 105 + 106 + static void 107 + ia64_native_rsm_i_func(void) 108 + { 109 + ia64_native_rsm(IA64_PSR_I); 110 + } 111 + 112 + static void 113 + ia64_native_set_rr0_to_rr4_func(unsigned long val0, unsigned long val1, 114 + unsigned long val2, unsigned long val3, 115 + unsigned long val4) 116 + { 117 + ia64_native_set_rr0_to_rr4(val0, val1, val2, val3, val4); 118 + } 119 + 120 + #define CASE_GET_REG(id) \ 121 + case _IA64_REG_ ## id: \ 122 + res = ia64_native_getreg(_IA64_REG_ ## id); \ 123 + break; 124 + #define CASE_GET_AR(id) CASE_GET_REG(AR_ ## id) 125 + #define CASE_GET_CR(id) CASE_GET_REG(CR_ ## id) 126 + 127 + unsigned long 128 + ia64_native_getreg_func(int regnum) 129 + { 130 + unsigned long res = -1; 131 + switch 
(regnum) { 132 + CASE_GET_REG(GP); 133 + CASE_GET_REG(IP); 134 + CASE_GET_REG(PSR); 135 + CASE_GET_REG(TP); 136 + CASE_GET_REG(SP); 137 + 138 + CASE_GET_AR(KR0); 139 + CASE_GET_AR(KR1); 140 + CASE_GET_AR(KR2); 141 + CASE_GET_AR(KR3); 142 + CASE_GET_AR(KR4); 143 + CASE_GET_AR(KR5); 144 + CASE_GET_AR(KR6); 145 + CASE_GET_AR(KR7); 146 + CASE_GET_AR(RSC); 147 + CASE_GET_AR(BSP); 148 + CASE_GET_AR(BSPSTORE); 149 + CASE_GET_AR(RNAT); 150 + CASE_GET_AR(FCR); 151 + CASE_GET_AR(EFLAG); 152 + CASE_GET_AR(CSD); 153 + CASE_GET_AR(SSD); 154 + CASE_GET_AR(CFLAG); 155 + CASE_GET_AR(FSR); 156 + CASE_GET_AR(FIR); 157 + CASE_GET_AR(FDR); 158 + CASE_GET_AR(CCV); 159 + CASE_GET_AR(UNAT); 160 + CASE_GET_AR(FPSR); 161 + CASE_GET_AR(ITC); 162 + CASE_GET_AR(PFS); 163 + CASE_GET_AR(LC); 164 + CASE_GET_AR(EC); 165 + 166 + CASE_GET_CR(DCR); 167 + CASE_GET_CR(ITM); 168 + CASE_GET_CR(IVA); 169 + CASE_GET_CR(PTA); 170 + CASE_GET_CR(IPSR); 171 + CASE_GET_CR(ISR); 172 + CASE_GET_CR(IIP); 173 + CASE_GET_CR(IFA); 174 + CASE_GET_CR(ITIR); 175 + CASE_GET_CR(IIPA); 176 + CASE_GET_CR(IFS); 177 + CASE_GET_CR(IIM); 178 + CASE_GET_CR(IHA); 179 + CASE_GET_CR(LID); 180 + CASE_GET_CR(IVR); 181 + CASE_GET_CR(TPR); 182 + CASE_GET_CR(EOI); 183 + CASE_GET_CR(IRR0); 184 + CASE_GET_CR(IRR1); 185 + CASE_GET_CR(IRR2); 186 + CASE_GET_CR(IRR3); 187 + CASE_GET_CR(ITV); 188 + CASE_GET_CR(PMV); 189 + CASE_GET_CR(CMCV); 190 + CASE_GET_CR(LRR0); 191 + CASE_GET_CR(LRR1); 192 + 193 + default: 194 + printk(KERN_CRIT "wrong_getreg %d\n", regnum); 195 + break; 196 + } 197 + return res; 198 + } 199 + 200 + #define CASE_SET_REG(id) \ 201 + case _IA64_REG_ ## id: \ 202 + ia64_native_setreg(_IA64_REG_ ## id, val); \ 203 + break; 204 + #define CASE_SET_AR(id) CASE_SET_REG(AR_ ## id) 205 + #define CASE_SET_CR(id) CASE_SET_REG(CR_ ## id) 206 + 207 + void 208 + ia64_native_setreg_func(int regnum, unsigned long val) 209 + { 210 + switch (regnum) { 211 + case _IA64_REG_PSR_L: 212 + ia64_native_setreg(_IA64_REG_PSR_L, val); 213 + 
ia64_dv_serialize_data(); 214 + break; 215 + CASE_SET_REG(SP); 216 + CASE_SET_REG(GP); 217 + 218 + CASE_SET_AR(KR0); 219 + CASE_SET_AR(KR1); 220 + CASE_SET_AR(KR2); 221 + CASE_SET_AR(KR3); 222 + CASE_SET_AR(KR4); 223 + CASE_SET_AR(KR5); 224 + CASE_SET_AR(KR6); 225 + CASE_SET_AR(KR7); 226 + CASE_SET_AR(RSC); 227 + CASE_SET_AR(BSP); 228 + CASE_SET_AR(BSPSTORE); 229 + CASE_SET_AR(RNAT); 230 + CASE_SET_AR(FCR); 231 + CASE_SET_AR(EFLAG); 232 + CASE_SET_AR(CSD); 233 + CASE_SET_AR(SSD); 234 + CASE_SET_AR(CFLAG); 235 + CASE_SET_AR(FSR); 236 + CASE_SET_AR(FIR); 237 + CASE_SET_AR(FDR); 238 + CASE_SET_AR(CCV); 239 + CASE_SET_AR(UNAT); 240 + CASE_SET_AR(FPSR); 241 + CASE_SET_AR(ITC); 242 + CASE_SET_AR(PFS); 243 + CASE_SET_AR(LC); 244 + CASE_SET_AR(EC); 245 + 246 + CASE_SET_CR(DCR); 247 + CASE_SET_CR(ITM); 248 + CASE_SET_CR(IVA); 249 + CASE_SET_CR(PTA); 250 + CASE_SET_CR(IPSR); 251 + CASE_SET_CR(ISR); 252 + CASE_SET_CR(IIP); 253 + CASE_SET_CR(IFA); 254 + CASE_SET_CR(ITIR); 255 + CASE_SET_CR(IIPA); 256 + CASE_SET_CR(IFS); 257 + CASE_SET_CR(IIM); 258 + CASE_SET_CR(IHA); 259 + CASE_SET_CR(LID); 260 + CASE_SET_CR(IVR); 261 + CASE_SET_CR(TPR); 262 + CASE_SET_CR(EOI); 263 + CASE_SET_CR(IRR0); 264 + CASE_SET_CR(IRR1); 265 + CASE_SET_CR(IRR2); 266 + CASE_SET_CR(IRR3); 267 + CASE_SET_CR(ITV); 268 + CASE_SET_CR(PMV); 269 + CASE_SET_CR(CMCV); 270 + CASE_SET_CR(LRR0); 271 + CASE_SET_CR(LRR1); 272 + default: 273 + printk(KERN_CRIT "wrong setreg %d\n", regnum); 274 + break; 275 + } 276 + } 277 + 278 + struct pv_cpu_ops pv_cpu_ops = { 279 + .fc = ia64_native_fc_func, 280 + .thash = ia64_native_thash_func, 281 + .get_cpuid = ia64_native_get_cpuid_func, 282 + .get_pmd = ia64_native_get_pmd_func, 283 + .ptcga = ia64_native_ptcga_func, 284 + .get_rr = ia64_native_get_rr_func, 285 + .set_rr = ia64_native_set_rr_func, 286 + .set_rr0_to_rr4 = ia64_native_set_rr0_to_rr4_func, 287 + .ssm_i = ia64_native_ssm_i_func, 288 + .getreg = ia64_native_getreg_func, 289 + .setreg = ia64_native_setreg_func, 290 + 
.rsm_i = ia64_native_rsm_i_func, 291 + .get_psr_i = ia64_native_get_psr_i_func, 292 + .intrin_local_irq_restore 293 + = ia64_native_intrin_local_irq_restore_func, 294 + }; 295 + EXPORT_SYMBOL(pv_cpu_ops); 296 + 297 + /****************************************************************************** 298 + * replacements for hand-written assembly code. 299 + */ 300 + 301 + void 302 + paravirt_cpu_asm_init(const struct pv_cpu_asm_switch *cpu_asm_switch) 303 + { 304 + extern unsigned long paravirt_switch_to_targ; 305 + extern unsigned long paravirt_leave_syscall_targ; 306 + extern unsigned long paravirt_work_processed_syscall_targ; 307 + extern unsigned long paravirt_leave_kernel_targ; 308 + 309 + paravirt_switch_to_targ = cpu_asm_switch->switch_to; 310 + paravirt_leave_syscall_targ = cpu_asm_switch->leave_syscall; 311 + paravirt_work_processed_syscall_targ = 312 + cpu_asm_switch->work_processed_syscall; 313 + paravirt_leave_kernel_targ = cpu_asm_switch->leave_kernel; 314 + } 315 + 316 + /*************************************************************************** 317 + * pv_iosapic_ops 318 + * iosapic read/write hooks.
319 + */ 320 + 321 + static unsigned int 322 + ia64_native_iosapic_read(char __iomem *iosapic, unsigned int reg) 323 + { 324 + return __ia64_native_iosapic_read(iosapic, reg); 325 + } 326 + 327 + static void 328 + ia64_native_iosapic_write(char __iomem *iosapic, unsigned int reg, u32 val) 329 + { 330 + __ia64_native_iosapic_write(iosapic, reg, val); 331 + } 332 + 333 + struct pv_iosapic_ops pv_iosapic_ops = { 334 + .pcat_compat_init = ia64_native_iosapic_pcat_compat_init, 335 + .get_irq_chip = ia64_native_iosapic_get_irq_chip, 336 + 337 + .__read = ia64_native_iosapic_read, 338 + .__write = ia64_native_iosapic_write, 339 + }; 340 + 341 + /*************************************************************************** 342 + * pv_irq_ops 343 + * irq operations 344 + */ 345 + 346 + struct pv_irq_ops pv_irq_ops = { 347 + .register_ipi = ia64_native_register_ipi, 348 + 349 + .assign_irq_vector = ia64_native_assign_irq_vector, 350 + .free_irq_vector = ia64_native_free_irq_vector, 351 + .register_percpu_irq = ia64_native_register_percpu_irq, 352 + 353 + .resend_irq = ia64_native_resend_irq, 354 + }; 355 + 356 + /*************************************************************************** 357 + * pv_time_ops 358 + * time operations 359 + */ 360 + 361 + static int 362 + ia64_native_do_steal_accounting(unsigned long *new_itm) 363 + { 364 + return 0; 365 + } 366 + 367 + struct pv_time_ops pv_time_ops = { 368 + .do_steal_accounting = ia64_native_do_steal_accounting, 369 + };
+29
arch/ia64/kernel/paravirt_inst.h
··· 1 + /****************************************************************************** 2 + * linux/arch/ia64/xen/paravirt_inst.h 3 + * 4 + * Copyright (c) 2008 Isaku Yamahata <yamahata at valinux co jp> 5 + * VA Linux Systems Japan K.K. 6 + * 7 + * This program is free software; you can redistribute it and/or modify 8 + * it under the terms of the GNU General Public License as published by 9 + * the Free Software Foundation; either version 2 of the License, or 10 + * (at your option) any later version. 11 + * 12 + * This program is distributed in the hope that it will be useful, 13 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 14 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 15 + * GNU General Public License for more details. 16 + * 17 + * You should have received a copy of the GNU General Public License 18 + * along with this program; if not, write to the Free Software 19 + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 20 + * 21 + */ 22 + 23 + #ifdef __IA64_ASM_PARAVIRTUALIZED_XEN 24 + #include <asm/xen/inst.h> 25 + #include <asm/xen/minstate.h> 26 + #else 27 + #include <asm/native/inst.h> 28 + #endif 29 +
+60
arch/ia64/kernel/paravirtentry.S
··· 1 + /****************************************************************************** 2 + * linux/arch/ia64/xen/paravirtentry.S 3 + * 4 + * Copyright (c) 2008 Isaku Yamahata <yamahata at valinux co jp> 5 + * VA Linux Systems Japan K.K. 6 + * 7 + * This program is free software; you can redistribute it and/or modify 8 + * it under the terms of the GNU General Public License as published by 9 + * the Free Software Foundation; either version 2 of the License, or 10 + * (at your option) any later version. 11 + * 12 + * This program is distributed in the hope that it will be useful, 13 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 14 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 15 + * GNU General Public License for more details. 16 + * 17 + * You should have received a copy of the GNU General Public License 18 + * along with this program; if not, write to the Free Software 19 + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 20 + * 21 + */ 22 + 23 + #include <asm/asmmacro.h> 24 + #include <asm/asm-offsets.h> 25 + #include "entry.h" 26 + 27 + #define DATA8(sym, init_value) \ 28 + .pushsection .data.read_mostly ; \ 29 + .align 8 ; \ 30 + .global sym ; \ 31 + sym: ; \ 32 + data8 init_value ; \ 33 + .popsection 34 + 35 + #define BRANCH(targ, reg, breg) \ 36 + movl reg=targ ; \ 37 + ;; \ 38 + ld8 reg=[reg] ; \ 39 + ;; \ 40 + mov breg=reg ; \ 41 + br.cond.sptk.many breg 42 + 43 + #define BRANCH_PROC(sym, reg, breg) \ 44 + DATA8(paravirt_ ## sym ## _targ, ia64_native_ ## sym) ; \ 45 + GLOBAL_ENTRY(paravirt_ ## sym) ; \ 46 + BRANCH(paravirt_ ## sym ## _targ, reg, breg) ; \ 47 + END(paravirt_ ## sym) 48 + 49 + #define BRANCH_PROC_UNWINFO(sym, reg, breg) \ 50 + DATA8(paravirt_ ## sym ## _targ, ia64_native_ ## sym) ; \ 51 + GLOBAL_ENTRY(paravirt_ ## sym) ; \ 52 + PT_REGS_UNWIND_INFO(0) ; \ 53 + BRANCH(paravirt_ ## sym ## _targ, reg, breg) ; \ 54 + END(paravirt_ ## sym) 55 + 56 + 57 + BRANCH_PROC(switch_to, r22, 
b7) 58 + BRANCH_PROC_UNWINFO(leave_syscall, r22, b7) 59 + BRANCH_PROC(work_processed_syscall, r2, b7) 60 + BRANCH_PROC_UNWINFO(leave_kernel, r22, b7)
+10
arch/ia64/kernel/setup.c
··· 51 51 #include <asm/mca.h> 52 52 #include <asm/meminit.h> 53 53 #include <asm/page.h> 54 + #include <asm/paravirt.h> 54 55 #include <asm/patch.h> 55 56 #include <asm/pgtable.h> 56 57 #include <asm/processor.h> ··· 342 341 rsvd_region[n].end = (unsigned long) ia64_imva(_end); 343 342 n++; 344 343 344 + n += paravirt_reserve_memory(&rsvd_region[n]); 345 + 345 346 #ifdef CONFIG_BLK_DEV_INITRD 346 347 if (ia64_boot_param->initrd_start) { 347 348 rsvd_region[n].start = (unsigned long)__va(ia64_boot_param->initrd_start); ··· 522 519 { 523 520 unw_init(); 524 521 522 + paravirt_arch_setup_early(); 523 + 525 524 ia64_patch_vtop((u64) __start___vtop_patchlist, (u64) __end___vtop_patchlist); 526 525 527 526 *cmdline_p = __va(ia64_boot_param->command_line); ··· 588 583 acpi_boot_init(); 589 584 #endif 590 585 586 + paravirt_banner(); 587 + paravirt_arch_setup_console(cmdline_p); 588 + 591 589 #ifdef CONFIG_VT 592 590 if (!conswitchp) { 593 591 # if defined(CONFIG_DUMMY_CONSOLE) ··· 610 602 #endif 611 603 612 604 /* enable IA-64 Machine Check Abort Handling unless disabled */ 605 + if (paravirt_arch_setup_nomca()) 606 + nomca = 1; 613 607 if (!nomca) 614 608 ia64_mca_init(); 615 609
+2
arch/ia64/kernel/smpboot.c
··· 50 50 #include <asm/machvec.h> 51 51 #include <asm/mca.h> 52 52 #include <asm/page.h> 53 + #include <asm/paravirt.h> 53 54 #include <asm/pgalloc.h> 54 55 #include <asm/pgtable.h> 55 56 #include <asm/processor.h> ··· 643 642 cpu_set(smp_processor_id(), cpu_online_map); 644 643 cpu_set(smp_processor_id(), cpu_callin_map); 645 644 per_cpu(cpu_state, smp_processor_id()) = CPU_ONLINE; 645 + paravirt_post_smp_prepare_boot_cpu(); 646 646 } 647 647 648 648 #ifdef CONFIG_HOTPLUG_CPU
+23
arch/ia64/kernel/time.c
··· 24 24 #include <asm/machvec.h> 25 25 #include <asm/delay.h> 26 26 #include <asm/hw_irq.h> 27 + #include <asm/paravirt.h> 27 28 #include <asm/ptrace.h> 28 29 #include <asm/sal.h> 29 30 #include <asm/sections.h> ··· 49 48 50 49 #endif 51 50 51 + #ifdef CONFIG_PARAVIRT 52 + static void 53 + paravirt_clocksource_resume(void) 54 + { 55 + if (pv_time_ops.clocksource_resume) 56 + pv_time_ops.clocksource_resume(); 57 + } 58 + #endif 59 + 52 60 static struct clocksource clocksource_itc = { 53 61 .name = "itc", 54 62 .rating = 350, ··· 66 56 .mult = 0, /*to be calculated*/ 67 57 .shift = 16, 68 58 .flags = CLOCK_SOURCE_IS_CONTINUOUS, 59 + #ifdef CONFIG_PARAVIRT 60 + .resume = paravirt_clocksource_resume, 61 + #endif 69 62 }; 70 63 static struct clocksource *itc_clocksource; 71 64 ··· 170 157 171 158 profile_tick(CPU_PROFILING); 172 159 160 + if (paravirt_do_steal_accounting(&new_itm)) 161 + goto skip_process_time_accounting; 162 + 173 163 while (1) { 174 164 update_process_times(user_mode(get_irq_regs())); 175 165 ··· 201 185 local_irq_enable(); 202 186 local_irq_disable(); 203 187 } 188 + 189 + skip_process_time_accounting: 204 190 205 191 do { 206 192 /* ··· 352 334 * ITCs. Until that time we have to avoid ITC. 353 335 */ 354 336 clocksource_itc.rating = 50; 337 + 338 + paravirt_init_missing_ticks_accounting(smp_processor_id()); 339 + 340 + /* avoid softlock up message when cpu is unplug and plugged again. */ 341 + touch_softlockup_watchdog(); 355 342 356 343 /* Setup the CPU local timer tick */ 357 344 ia64_cpu_local_tick();
-1
arch/ia64/kernel/vmlinux.lds.S
··· 4 4 #include <asm/system.h> 5 5 #include <asm/pgtable.h> 6 6 7 - #define LOAD_OFFSET (KERNEL_START - KERNEL_TR_PAGE_SIZE) 8 7 #include <asm-generic/vmlinux.lds.h> 9 8 10 9 #define IVT_TEXT \
+15 -14
drivers/char/mmtimer.c
··· 32 32 #include <linux/interrupt.h> 33 33 #include <linux/time.h> 34 34 #include <linux/math64.h> 35 + #include <linux/smp_lock.h> 35 36 36 37 #include <asm/uaccess.h> 37 38 #include <asm/sn/addrs.h> ··· 58 57 59 58 #define rtc_time() (*RTC_COUNTER_ADDR) 60 59 61 - static int mmtimer_ioctl(struct inode *inode, struct file *file, 62 - unsigned int cmd, unsigned long arg); 60 + static long mmtimer_ioctl(struct file *file, unsigned int cmd, 61 + unsigned long arg); 63 62 static int mmtimer_mmap(struct file *file, struct vm_area_struct *vma); 64 63 65 64 /* ··· 68 67 static unsigned long mmtimer_femtoperiod = 0; 69 68 70 69 static const struct file_operations mmtimer_fops = { 71 - .owner = THIS_MODULE, 72 - .mmap = mmtimer_mmap, 73 - .ioctl = mmtimer_ioctl, 70 + .owner = THIS_MODULE, 71 + .mmap = mmtimer_mmap, 72 + .unlocked_ioctl = mmtimer_ioctl, 74 73 }; 75 74 76 75 /* ··· 340 339 341 340 /** 342 341 * mmtimer_ioctl - ioctl interface for /dev/mmtimer 343 - * @inode: inode of the device 344 342 * @file: file structure for the device 345 343 * @cmd: command to execute 346 344 * @arg: optional argument to command ··· 365 365 * %MMTIMER_GETCOUNTER - Gets the current value in the counter and places it 366 366 * in the address specified by @arg. 
367 367 */ 368 - static int mmtimer_ioctl(struct inode *inode, struct file *file, 369 - unsigned int cmd, unsigned long arg) 368 + static long mmtimer_ioctl(struct file *file, unsigned int cmd, 369 + unsigned long arg) 370 370 { 371 371 int ret = 0; 372 + 373 + lock_kernel(); 372 374 373 375 switch (cmd) { 374 376 case MMTIMER_GETOFFSET: /* offset of the counter */ ··· 386 384 case MMTIMER_GETRES: /* resolution of the clock in 10^-15 s */ 387 385 if(copy_to_user((unsigned long __user *)arg, 388 386 &mmtimer_femtoperiod, sizeof(unsigned long))) 389 - return -EFAULT; 387 + ret = -EFAULT; 390 388 break; 391 389 392 390 case MMTIMER_GETFREQ: /* frequency in Hz */ 393 391 if(copy_to_user((unsigned long __user *)arg, 394 392 &sn_rtc_cycles_per_second, 395 393 sizeof(unsigned long))) 396 - return -EFAULT; 397 - ret = 0; 394 + ret = -EFAULT; 398 395 break; 399 396 400 397 case MMTIMER_GETBITS: /* number of bits in the clock */ ··· 407 406 case MMTIMER_GETCOUNTER: 408 407 if(copy_to_user((unsigned long __user *)arg, 409 408 RTC_COUNTER_ADDR, sizeof(unsigned long))) 410 - return -EFAULT; 409 + ret = -EFAULT; 411 410 break; 412 411 default: 413 - ret = -ENOSYS; 412 + ret = -ENOTTY; 414 413 break; 415 414 } 416 - 415 + unlock_kernel(); 417 416 return ret; 418 417 } 419 418
+1 -1
include/asm-ia64/Kbuild
··· 5 5 header-y += fpswa.h 6 6 header-y += ia64regs.h 7 7 header-y += intel_intrin.h 8 - header-y += intrinsics.h 9 8 header-y += perfmon_default_smpl.h 10 9 header-y += ptrace_offsets.h 11 10 header-y += rse.h 12 11 header-y += ucontext.h 13 12 14 13 unifdef-y += gcc_intrin.h 14 + unifdef-y += intrinsics.h 15 15 unifdef-y += perfmon.h 16 16 unifdef-y += ustack.h
+12 -12
include/asm-ia64/gcc_intrin.h
··· 32 32 register unsigned long ia64_r13 asm ("r13") __used; 33 33 #endif 34 34 35 - #define ia64_setreg(regnum, val) \ 35 + #define ia64_native_setreg(regnum, val) \ 36 36 ({ \ 37 37 switch (regnum) { \ 38 38 case _IA64_REG_PSR_L: \ ··· 61 61 } \ 62 62 }) 63 63 64 - #define ia64_getreg(regnum) \ 64 + #define ia64_native_getreg(regnum) \ 65 65 ({ \ 66 66 __u64 ia64_intri_res; \ 67 67 \ ··· 385 385 386 386 #define ia64_invala() asm volatile ("invala" ::: "memory") 387 387 388 - #define ia64_thash(addr) \ 388 + #define ia64_native_thash(addr) \ 389 389 ({ \ 390 390 __u64 ia64_intri_res; \ 391 391 asm volatile ("thash %0=%1" : "=r"(ia64_intri_res) : "r" (addr)); \ ··· 438 438 #define ia64_set_pmd(index, val) \ 439 439 asm volatile ("mov pmd[%0]=%1" :: "r"(index), "r"(val) : "memory") 440 440 441 - #define ia64_set_rr(index, val) \ 441 + #define ia64_native_set_rr(index, val) \ 442 442 asm volatile ("mov rr[%0]=%1" :: "r"(index), "r"(val) : "memory"); 443 443 444 - #define ia64_get_cpuid(index) \ 444 + #define ia64_native_get_cpuid(index) \ 445 445 ({ \ 446 446 __u64 ia64_intri_res; \ 447 447 asm volatile ("mov %0=cpuid[%r1]" : "=r"(ia64_intri_res) : "rO"(index)); \ ··· 477 477 }) 478 478 479 479 480 - #define ia64_get_pmd(index) \ 480 + #define ia64_native_get_pmd(index) \ 481 481 ({ \ 482 482 __u64 ia64_intri_res; \ 483 483 asm volatile ("mov %0=pmd[%1]" : "=r"(ia64_intri_res) : "r"(index)); \ 484 484 ia64_intri_res; \ 485 485 }) 486 486 487 - #define ia64_get_rr(index) \ 487 + #define ia64_native_get_rr(index) \ 488 488 ({ \ 489 489 __u64 ia64_intri_res; \ 490 490 asm volatile ("mov %0=rr[%1]" : "=r"(ia64_intri_res) : "r" (index)); \ 491 491 ia64_intri_res; \ 492 492 }) 493 493 494 - #define ia64_fc(addr) asm volatile ("fc %0" :: "r"(addr) : "memory") 494 + #define ia64_native_fc(addr) asm volatile ("fc %0" :: "r"(addr) : "memory") 495 495 496 496 497 497 #define ia64_sync_i() asm volatile (";; sync.i" ::: "memory") 498 498 499 - #define ia64_ssm(mask) asm volatile 
("ssm %0":: "i"((mask)) : "memory") 500 - #define ia64_rsm(mask) asm volatile ("rsm %0":: "i"((mask)) : "memory") 499 + #define ia64_native_ssm(mask) asm volatile ("ssm %0":: "i"((mask)) : "memory") 500 + #define ia64_native_rsm(mask) asm volatile ("rsm %0":: "i"((mask)) : "memory") 501 501 #define ia64_sum(mask) asm volatile ("sum %0":: "i"((mask)) : "memory") 502 502 #define ia64_rum(mask) asm volatile ("rum %0":: "i"((mask)) : "memory") 503 503 504 504 #define ia64_ptce(addr) asm volatile ("ptc.e %0" :: "r"(addr)) 505 505 506 - #define ia64_ptcga(addr, size) \ 506 + #define ia64_native_ptcga(addr, size) \ 507 507 do { \ 508 508 asm volatile ("ptc.ga %0,%1" :: "r"(addr), "r"(size) : "memory"); \ 509 509 ia64_dv_serialize_data(); \ ··· 608 608 } \ 609 609 }) 610 610 611 - #define ia64_intrin_local_irq_restore(x) \ 611 + #define ia64_native_intrin_local_irq_restore(x) \ 612 612 do { \ 613 613 asm volatile (";; cmp.ne p6,p7=%0,r0;;" \ 614 614 "(p6) ssm psr.i;" \
+19 -4
include/asm-ia64/hw_irq.h
··· 15 15 #include <asm/ptrace.h> 16 16 #include <asm/smp.h> 17 17 18 + #ifndef CONFIG_PARAVIRT 18 19 typedef u8 ia64_vector; 20 + #else 21 + typedef u16 ia64_vector; 22 + #endif 19 23 20 24 /* 21 25 * 0 special ··· 108 104 109 105 extern struct hw_interrupt_type irq_type_ia64_lsapic; /* CPU-internal interrupt controller */ 110 106 107 + #ifdef CONFIG_PARAVIRT_GUEST 108 + #include <asm/paravirt.h> 109 + #else 110 + #define ia64_register_ipi ia64_native_register_ipi 111 + #define assign_irq_vector ia64_native_assign_irq_vector 112 + #define free_irq_vector ia64_native_free_irq_vector 113 + #define register_percpu_irq ia64_native_register_percpu_irq 114 + #define ia64_resend_irq ia64_native_resend_irq 115 + #endif 116 + 117 + extern void ia64_native_register_ipi(void); 111 118 extern int bind_irq_vector(int irq, int vector, cpumask_t domain); 112 - extern int assign_irq_vector (int irq); /* allocate a free vector */ 113 - extern void free_irq_vector (int vector); 119 + extern int ia64_native_assign_irq_vector (int irq); /* allocate a free vector */ 120 + extern void ia64_native_free_irq_vector (int vector); 114 121 extern int reserve_irq_vector (int vector); 115 122 extern void __setup_vector_irq(int cpu); 116 123 extern void ia64_send_ipi (int cpu, int vector, int delivery_mode, int redirect); 117 - extern void register_percpu_irq (ia64_vector vec, struct irqaction *action); 124 + extern void ia64_native_register_percpu_irq (ia64_vector vec, struct irqaction *action); 118 125 extern int check_irq_used (int irq); 119 126 extern void destroy_and_reserve_irq (unsigned int irq); 120 127 ··· 137 122 static inline void irq_complete_move(unsigned int irq) {} 138 123 #endif 139 124 140 - static inline void ia64_resend_irq(unsigned int vector) 125 + static inline void ia64_native_resend_irq(unsigned int vector) 141 126 { 142 127 platform_send_ipi(smp_processor_id(), vector, IA64_IPI_DM_INT, 0); 143 128 }
+21 -20
include/asm-ia64/intel_intrin.h
··· 16 16 * intrinsic 17 17 */ 18 18 19 - #define ia64_getreg __getReg 20 - #define ia64_setreg __setReg 19 + #define ia64_native_getreg __getReg 20 + #define ia64_native_setreg __setReg 21 21 22 22 #define ia64_hint __hint 23 23 #define ia64_hint_pause __hint_pause ··· 39 39 #define ia64_invala_fr __invala_fr 40 40 #define ia64_nop __nop 41 41 #define ia64_sum __sum 42 - #define ia64_ssm __ssm 42 + #define ia64_native_ssm __ssm 43 43 #define ia64_rum __rum 44 - #define ia64_rsm __rsm 45 - #define ia64_fc __fc 44 + #define ia64_native_rsm __rsm 45 + #define ia64_native_fc __fc 46 46 47 47 #define ia64_ldfs __ldfs 48 48 #define ia64_ldfd __ldfd ··· 88 88 __setIndReg(_IA64_REG_INDR_PMC, index, val) 89 89 #define ia64_set_pmd(index, val) \ 90 90 __setIndReg(_IA64_REG_INDR_PMD, index, val) 91 - #define ia64_set_rr(index, val) \ 91 + #define ia64_native_set_rr(index, val) \ 92 92 __setIndReg(_IA64_REG_INDR_RR, index, val) 93 93 94 - #define ia64_get_cpuid(index) __getIndReg(_IA64_REG_INDR_CPUID, index) 95 - #define __ia64_get_dbr(index) __getIndReg(_IA64_REG_INDR_DBR, index) 96 - #define ia64_get_ibr(index) __getIndReg(_IA64_REG_INDR_IBR, index) 97 - #define ia64_get_pkr(index) __getIndReg(_IA64_REG_INDR_PKR, index) 98 - #define ia64_get_pmc(index) __getIndReg(_IA64_REG_INDR_PMC, index) 99 - #define ia64_get_pmd(index) __getIndReg(_IA64_REG_INDR_PMD, index) 100 - #define ia64_get_rr(index) __getIndReg(_IA64_REG_INDR_RR, index) 94 + #define ia64_native_get_cpuid(index) \ 95 + __getIndReg(_IA64_REG_INDR_CPUID, index) 96 + #define __ia64_get_dbr(index) __getIndReg(_IA64_REG_INDR_DBR, index) 97 + #define ia64_get_ibr(index) __getIndReg(_IA64_REG_INDR_IBR, index) 98 + #define ia64_get_pkr(index) __getIndReg(_IA64_REG_INDR_PKR, index) 99 + #define ia64_get_pmc(index) __getIndReg(_IA64_REG_INDR_PMC, index) 100 + #define ia64_native_get_pmd(index) __getIndReg(_IA64_REG_INDR_PMD, index) 101 + #define ia64_native_get_rr(index) __getIndReg(_IA64_REG_INDR_RR, index) 101 102 102 103 
#define ia64_srlz_d __dsrlz 103 104 #define ia64_srlz_i __isrlz ··· 120 119 #define ia64_ld8_acq __ld8_acq 121 120 122 121 #define ia64_sync_i __synci 123 - #define ia64_thash __thash 124 - #define ia64_ttag __ttag 122 + #define ia64_native_thash __thash 123 + #define ia64_native_ttag __ttag 125 124 #define ia64_itcd __itcd 126 125 #define ia64_itci __itci 127 126 #define ia64_itrd __itrd 128 127 #define ia64_itri __itri 129 128 #define ia64_ptce __ptce 130 129 #define ia64_ptcl __ptcl 131 - #define ia64_ptcg __ptcg 132 - #define ia64_ptcga __ptcga 130 + #define ia64_native_ptcg __ptcg 131 + #define ia64_native_ptcga __ptcga 133 132 #define ia64_ptri __ptri 134 133 #define ia64_ptrd __ptrd 135 134 #define ia64_dep_mi _m64_dep_mi ··· 146 145 #define ia64_lfetch_fault __lfetch_fault 147 146 #define ia64_lfetch_fault_excl __lfetch_fault_excl 148 147 149 - #define ia64_intrin_local_irq_restore(x) \ 148 + #define ia64_native_intrin_local_irq_restore(x) \ 150 149 do { \ 151 150 if ((x) != 0) { \ 152 - ia64_ssm(IA64_PSR_I); \ 151 + ia64_native_ssm(IA64_PSR_I); \ 153 152 ia64_srlz_d(); \ 154 153 } else { \ 155 - ia64_rsm(IA64_PSR_I); \ 154 + ia64_native_rsm(IA64_PSR_I); \ 156 155 } \ 157 156 } while (0) 158 157
+55
include/asm-ia64/intrinsics.h
··· 18 18 # include <asm/gcc_intrin.h> 19 19 #endif 20 20 21 + #define ia64_native_get_psr_i() (ia64_native_getreg(_IA64_REG_PSR) & IA64_PSR_I) 22 + 23 + #define ia64_native_set_rr0_to_rr4(val0, val1, val2, val3, val4) \ 24 + do { \ 25 + ia64_native_set_rr(0x0000000000000000UL, (val0)); \ 26 + ia64_native_set_rr(0x2000000000000000UL, (val1)); \ 27 + ia64_native_set_rr(0x4000000000000000UL, (val2)); \ 28 + ia64_native_set_rr(0x6000000000000000UL, (val3)); \ 29 + ia64_native_set_rr(0x8000000000000000UL, (val4)); \ 30 + } while (0) 31 + 21 32 /* 22 33 * Force an unresolved reference if someone tries to use 23 34 * ia64_fetch_and_add() with a bad value. ··· 194 183 #endif /* !CONFIG_IA64_DEBUG_CMPXCHG */ 195 184 196 185 #endif 186 + 187 + #ifdef __KERNEL__ 188 + #include <asm/paravirt_privop.h> 189 + #endif 190 + 191 + #ifndef __ASSEMBLY__ 192 + #if defined(CONFIG_PARAVIRT) && defined(__KERNEL__) 193 + #define IA64_INTRINSIC_API(name) pv_cpu_ops.name 194 + #define IA64_INTRINSIC_MACRO(name) paravirt_ ## name 195 + #else 196 + #define IA64_INTRINSIC_API(name) ia64_native_ ## name 197 + #define IA64_INTRINSIC_MACRO(name) ia64_native_ ## name 198 + #endif 199 + 200 + /************************************************/ 201 + /* Instructions paravirtualized for correctness */ 202 + /************************************************/ 203 + /* fc, thash, get_cpuid, get_pmd, get_eflags, set_eflags */ 204 + /* Note that "ttag" and "cover" are also privilege-sensitive; "ttag" 205 + * is not currently used (though it may be in a long-format VHPT system!) 
206 + */ 207 + #define ia64_fc IA64_INTRINSIC_API(fc) 208 + #define ia64_thash IA64_INTRINSIC_API(thash) 209 + #define ia64_get_cpuid IA64_INTRINSIC_API(get_cpuid) 210 + #define ia64_get_pmd IA64_INTRINSIC_API(get_pmd) 211 + 212 + 213 + /************************************************/ 214 + /* Instructions paravirtualized for performance */ 215 + /************************************************/ 216 + #define ia64_ssm IA64_INTRINSIC_MACRO(ssm) 217 + #define ia64_rsm IA64_INTRINSIC_MACRO(rsm) 218 + #define ia64_getreg IA64_INTRINSIC_API(getreg) 219 + #define ia64_setreg IA64_INTRINSIC_API(setreg) 220 + #define ia64_set_rr IA64_INTRINSIC_API(set_rr) 221 + #define ia64_get_rr IA64_INTRINSIC_API(get_rr) 222 + #define ia64_ptcga IA64_INTRINSIC_API(ptcga) 223 + #define ia64_get_psr_i IA64_INTRINSIC_API(get_psr_i) 224 + #define ia64_intrin_local_irq_restore \ 225 + IA64_INTRINSIC_API(intrin_local_irq_restore) 226 + #define ia64_set_rr0_to_rr4 IA64_INTRINSIC_API(set_rr0_to_rr4) 227 + 228 + #endif /* !__ASSEMBLY__ */ 229 + 197 230 #endif /* _ASM_IA64_INTRINSICS_H */
+16 -2
include/asm-ia64/iosapic.h
··· 55 55 56 56 #define NR_IOSAPICS 256 57 57 58 - static inline unsigned int __iosapic_read(char __iomem *iosapic, unsigned int reg) 58 + #ifdef CONFIG_PARAVIRT_GUEST 59 + #include <asm/paravirt.h> 60 + #else 61 + #define iosapic_pcat_compat_init ia64_native_iosapic_pcat_compat_init 62 + #define __iosapic_read __ia64_native_iosapic_read 63 + #define __iosapic_write __ia64_native_iosapic_write 64 + #define iosapic_get_irq_chip ia64_native_iosapic_get_irq_chip 65 + #endif 66 + 67 + extern void __init ia64_native_iosapic_pcat_compat_init(void); 68 + extern struct irq_chip *ia64_native_iosapic_get_irq_chip(unsigned long trigger); 69 + 70 + static inline unsigned int 71 + __ia64_native_iosapic_read(char __iomem *iosapic, unsigned int reg) 59 72 { 60 73 writel(reg, iosapic + IOSAPIC_REG_SELECT); 61 74 return readl(iosapic + IOSAPIC_WINDOW); 62 75 } 63 76 64 - static inline void __iosapic_write(char __iomem *iosapic, unsigned int reg, u32 val) 77 + static inline void 78 + __ia64_native_iosapic_write(char __iomem *iosapic, unsigned int reg, u32 val) 65 79 { 66 80 writel(reg, iosapic + IOSAPIC_REG_SELECT); 67 81 writel(val, iosapic + IOSAPIC_WINDOW);
+1 -8
include/asm-ia64/irq.h
··· 13 13 14 14 #include <linux/types.h> 15 15 #include <linux/cpumask.h> 16 - 17 - #define NR_VECTORS 256 18 - 19 - #if (NR_VECTORS + 32 * NR_CPUS) < 1024 20 - #define NR_IRQS (NR_VECTORS + 32 * NR_CPUS) 21 - #else 22 - #define NR_IRQS 1024 23 - #endif 16 + #include <asm-ia64/nr-irqs.h> 24 17 25 18 static __inline__ int 26 19 irq_canonicalize (int irq)
+1 -5
include/asm-ia64/mmu_context.h
··· 152 152 # endif 153 153 #endif 154 154 155 - ia64_set_rr(0x0000000000000000UL, rr0); 156 - ia64_set_rr(0x2000000000000000UL, rr1); 157 - ia64_set_rr(0x4000000000000000UL, rr2); 158 - ia64_set_rr(0x6000000000000000UL, rr3); 159 - ia64_set_rr(0x8000000000000000UL, rr4); 155 + ia64_set_rr0_to_rr4(rr0, rr1, rr2, rr3, rr4); 160 156 ia64_srlz_i(); /* srlz.i implies srlz.d */ 161 157 } 162 158
+175
include/asm-ia64/native/inst.h
··· 1 + /****************************************************************************** 2 + * include/asm-ia64/native/inst.h 3 + * 4 + * Copyright (c) 2008 Isaku Yamahata <yamahata at valinux co jp> 5 + * VA Linux Systems Japan K.K. 6 + * 7 + * This program is free software; you can redistribute it and/or modify 8 + * it under the terms of the GNU General Public License as published by 9 + * the Free Software Foundation; either version 2 of the License, or 10 + * (at your option) any later version. 11 + * 12 + * This program is distributed in the hope that it will be useful, 13 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 14 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 15 + * GNU General Public License for more details. 16 + * 17 + * You should have received a copy of the GNU General Public License 18 + * along with this program; if not, write to the Free Software 19 + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 20 + * 21 + */ 22 + 23 + #define DO_SAVE_MIN IA64_NATIVE_DO_SAVE_MIN 24 + 25 + #define __paravirt_switch_to ia64_native_switch_to 26 + #define __paravirt_leave_syscall ia64_native_leave_syscall 27 + #define __paravirt_work_processed_syscall ia64_native_work_processed_syscall 28 + #define __paravirt_leave_kernel ia64_native_leave_kernel 29 + #define __paravirt_pending_syscall_end ia64_work_pending_syscall_end 30 + #define __paravirt_work_processed_syscall_target \ 31 + ia64_work_processed_syscall 32 + 33 + #ifdef CONFIG_PARAVIRT_GUEST_ASM_CLOBBER_CHECK 34 + # define PARAVIRT_POISON 0xdeadbeefbaadf00d 35 + # define CLOBBER(clob) \ 36 + ;; \ 37 + movl clob = PARAVIRT_POISON; \ 38 + ;; 39 + #else 40 + # define CLOBBER(clob) /* nothing */ 41 + #endif 42 + 43 + #define MOV_FROM_IFA(reg) \ 44 + mov reg = cr.ifa 45 + 46 + #define MOV_FROM_ITIR(reg) \ 47 + mov reg = cr.itir 48 + 49 + #define MOV_FROM_ISR(reg) \ 50 + mov reg = cr.isr 51 + 52 + #define MOV_FROM_IHA(reg) \ 53 + mov reg = cr.iha 54 
+ 55 + #define MOV_FROM_IPSR(pred, reg) \ 56 + (pred) mov reg = cr.ipsr 57 + 58 + #define MOV_FROM_IIM(reg) \ 59 + mov reg = cr.iim 60 + 61 + #define MOV_FROM_IIP(reg) \ 62 + mov reg = cr.iip 63 + 64 + #define MOV_FROM_IVR(reg, clob) \ 65 + mov reg = cr.ivr \ 66 + CLOBBER(clob) 67 + 68 + #define MOV_FROM_PSR(pred, reg, clob) \ 69 + (pred) mov reg = psr \ 70 + CLOBBER(clob) 71 + 72 + #define MOV_TO_IFA(reg, clob) \ 73 + mov cr.ifa = reg \ 74 + CLOBBER(clob) 75 + 76 + #define MOV_TO_ITIR(pred, reg, clob) \ 77 + (pred) mov cr.itir = reg \ 78 + CLOBBER(clob) 79 + 80 + #define MOV_TO_IHA(pred, reg, clob) \ 81 + (pred) mov cr.iha = reg \ 82 + CLOBBER(clob) 83 + 84 + #define MOV_TO_IPSR(pred, reg, clob) \ 85 + (pred) mov cr.ipsr = reg \ 86 + CLOBBER(clob) 87 + 88 + #define MOV_TO_IFS(pred, reg, clob) \ 89 + (pred) mov cr.ifs = reg \ 90 + CLOBBER(clob) 91 + 92 + #define MOV_TO_IIP(reg, clob) \ 93 + mov cr.iip = reg \ 94 + CLOBBER(clob) 95 + 96 + #define MOV_TO_KR(kr, reg, clob0, clob1) \ 97 + mov IA64_KR(kr) = reg \ 98 + CLOBBER(clob0) \ 99 + CLOBBER(clob1) 100 + 101 + #define ITC_I(pred, reg, clob) \ 102 + (pred) itc.i reg \ 103 + CLOBBER(clob) 104 + 105 + #define ITC_D(pred, reg, clob) \ 106 + (pred) itc.d reg \ 107 + CLOBBER(clob) 108 + 109 + #define ITC_I_AND_D(pred_i, pred_d, reg, clob) \ 110 + (pred_i) itc.i reg; \ 111 + (pred_d) itc.d reg \ 112 + CLOBBER(clob) 113 + 114 + #define THASH(pred, reg0, reg1, clob) \ 115 + (pred) thash reg0 = reg1 \ 116 + CLOBBER(clob) 117 + 118 + #define SSM_PSR_IC_AND_DEFAULT_BITS_AND_SRLZ_I(clob0, clob1) \ 119 + ssm psr.ic | PSR_DEFAULT_BITS \ 120 + CLOBBER(clob0) \ 121 + CLOBBER(clob1) \ 122 + ;; \ 123 + srlz.i /* guarantee that interruption collectin is on */ \ 124 + ;; 125 + 126 + #define SSM_PSR_IC_AND_SRLZ_D(clob0, clob1) \ 127 + ssm psr.ic \ 128 + CLOBBER(clob0) \ 129 + CLOBBER(clob1) \ 130 + ;; \ 131 + srlz.d 132 + 133 + #define RSM_PSR_IC(clob) \ 134 + rsm psr.ic \ 135 + CLOBBER(clob) 136 + 137 + #define SSM_PSR_I(pred, 
pred_clob, clob) \ 138 + (pred) ssm psr.i \ 139 + CLOBBER(clob) 140 + 141 + #define RSM_PSR_I(pred, clob0, clob1) \ 142 + (pred) rsm psr.i \ 143 + CLOBBER(clob0) \ 144 + CLOBBER(clob1) 145 + 146 + #define RSM_PSR_I_IC(clob0, clob1, clob2) \ 147 + rsm psr.i | psr.ic \ 148 + CLOBBER(clob0) \ 149 + CLOBBER(clob1) \ 150 + CLOBBER(clob2) 151 + 152 + #define RSM_PSR_DT \ 153 + rsm psr.dt 154 + 155 + #define SSM_PSR_DT_AND_SRLZ_I \ 156 + ssm psr.dt \ 157 + ;; \ 158 + srlz.i 159 + 160 + #define BSW_0(clob0, clob1, clob2) \ 161 + bsw.0 \ 162 + CLOBBER(clob0) \ 163 + CLOBBER(clob1) \ 164 + CLOBBER(clob2) 165 + 166 + #define BSW_1(clob0, clob1) \ 167 + bsw.1 \ 168 + CLOBBER(clob0) \ 169 + CLOBBER(clob1) 170 + 171 + #define COVER \ 172 + cover 173 + 174 + #define RFI \ 175 + rfi
+35
include/asm-ia64/native/irq.h
··· 1 + /****************************************************************************** 2 + * include/asm-ia64/native/irq.h 3 + * 4 + * Copyright (c) 2008 Isaku Yamahata <yamahata at valinux co jp> 5 + * VA Linux Systems Japan K.K. 6 + * 7 + * This program is free software; you can redistribute it and/or modify 8 + * it under the terms of the GNU General Public License as published by 9 + * the Free Software Foundation; either version 2 of the License, or 10 + * (at your option) any later version. 11 + * 12 + * This program is distributed in the hope that it will be useful, 13 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 14 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 15 + * GNU General Public License for more details. 16 + * 17 + * You should have received a copy of the GNU General Public License 18 + * along with this program; if not, write to the Free Software 19 + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 20 + * 21 + * moved from linux/include/asm-ia64/irq.h. 22 + */ 23 + 24 + #ifndef _ASM_IA64_NATIVE_IRQ_H 25 + #define _ASM_IA64_NATIVE_IRQ_H 26 + 27 + #define NR_VECTORS 256 28 + 29 + #if (NR_VECTORS + 32 * NR_CPUS) < 1024 30 + #define IA64_NATIVE_NR_IRQS (NR_VECTORS + 32 * NR_CPUS) 31 + #else 32 + #define IA64_NATIVE_NR_IRQS 1024 33 + #endif 34 + 35 + #endif /* _ASM_IA64_NATIVE_IRQ_H */
+255
include/asm-ia64/paravirt.h
··· 1 + /****************************************************************************** 2 + * include/asm-ia64/paravirt.h 3 + * 4 + * Copyright (c) 2008 Isaku Yamahata <yamahata at valinux co jp> 5 + * VA Linux Systems Japan K.K. 6 + * 7 + * This program is free software; you can redistribute it and/or modify 8 + * it under the terms of the GNU General Public License as published by 9 + * the Free Software Foundation; either version 2 of the License, or 10 + * (at your option) any later version. 11 + * 12 + * This program is distributed in the hope that it will be useful, 13 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 14 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 15 + * GNU General Public License for more details. 16 + * 17 + * You should have received a copy of the GNU General Public License 18 + * along with this program; if not, write to the Free Software 19 + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 20 + * 21 + */ 22 + 23 + 24 + #ifndef __ASM_PARAVIRT_H 25 + #define __ASM_PARAVIRT_H 26 + 27 + #ifdef CONFIG_PARAVIRT_GUEST 28 + 29 + #define PARAVIRT_HYPERVISOR_TYPE_DEFAULT 0 30 + #define PARAVIRT_HYPERVISOR_TYPE_XEN 1 31 + 32 + #ifndef __ASSEMBLY__ 33 + 34 + #include <asm/hw_irq.h> 35 + #include <asm/meminit.h> 36 + 37 + /****************************************************************************** 38 + * general info 39 + */ 40 + struct pv_info { 41 + unsigned int kernel_rpl; 42 + int paravirt_enabled; 43 + const char *name; 44 + }; 45 + 46 + extern struct pv_info pv_info; 47 + 48 + static inline int paravirt_enabled(void) 49 + { 50 + return pv_info.paravirt_enabled; 51 + } 52 + 53 + static inline unsigned int get_kernel_rpl(void) 54 + { 55 + return pv_info.kernel_rpl; 56 + } 57 + 58 + /****************************************************************************** 59 + * initialization hooks. 
60 + */ 61 + struct rsvd_region; 62 + 63 + struct pv_init_ops { 64 + void (*banner)(void); 65 + 66 + int (*reserve_memory)(struct rsvd_region *region); 67 + 68 + void (*arch_setup_early)(void); 69 + void (*arch_setup_console)(char **cmdline_p); 70 + int (*arch_setup_nomca)(void); 71 + 72 + void (*post_smp_prepare_boot_cpu)(void); 73 + }; 74 + 75 + extern struct pv_init_ops pv_init_ops; 76 + 77 + static inline void paravirt_banner(void) 78 + { 79 + if (pv_init_ops.banner) 80 + pv_init_ops.banner(); 81 + } 82 + 83 + static inline int paravirt_reserve_memory(struct rsvd_region *region) 84 + { 85 + if (pv_init_ops.reserve_memory) 86 + return pv_init_ops.reserve_memory(region); 87 + return 0; 88 + } 89 + 90 + static inline void paravirt_arch_setup_early(void) 91 + { 92 + if (pv_init_ops.arch_setup_early) 93 + pv_init_ops.arch_setup_early(); 94 + } 95 + 96 + static inline void paravirt_arch_setup_console(char **cmdline_p) 97 + { 98 + if (pv_init_ops.arch_setup_console) 99 + pv_init_ops.arch_setup_console(cmdline_p); 100 + } 101 + 102 + static inline int paravirt_arch_setup_nomca(void) 103 + { 104 + if (pv_init_ops.arch_setup_nomca) 105 + return pv_init_ops.arch_setup_nomca(); 106 + return 0; 107 + } 108 + 109 + static inline void paravirt_post_smp_prepare_boot_cpu(void) 110 + { 111 + if (pv_init_ops.post_smp_prepare_boot_cpu) 112 + pv_init_ops.post_smp_prepare_boot_cpu(); 113 + } 114 + 115 + /****************************************************************************** 116 + * replacement of iosapic operations. 
117 + */ 118 + 119 + struct pv_iosapic_ops { 120 + void (*pcat_compat_init)(void); 121 + 122 + struct irq_chip *(*get_irq_chip)(unsigned long trigger); 123 + 124 + unsigned int (*__read)(char __iomem *iosapic, unsigned int reg); 125 + void (*__write)(char __iomem *iosapic, unsigned int reg, u32 val); 126 + }; 127 + 128 + extern struct pv_iosapic_ops pv_iosapic_ops; 129 + 130 + static inline void 131 + iosapic_pcat_compat_init(void) 132 + { 133 + if (pv_iosapic_ops.pcat_compat_init) 134 + pv_iosapic_ops.pcat_compat_init(); 135 + } 136 + 137 + static inline struct irq_chip* 138 + iosapic_get_irq_chip(unsigned long trigger) 139 + { 140 + return pv_iosapic_ops.get_irq_chip(trigger); 141 + } 142 + 143 + static inline unsigned int 144 + __iosapic_read(char __iomem *iosapic, unsigned int reg) 145 + { 146 + return pv_iosapic_ops.__read(iosapic, reg); 147 + } 148 + 149 + static inline void 150 + __iosapic_write(char __iomem *iosapic, unsigned int reg, u32 val) 151 + { 152 + return pv_iosapic_ops.__write(iosapic, reg, val); 153 + } 154 + 155 + /****************************************************************************** 156 + * replacement of irq operations. 
157 + */ 158 + 159 + struct pv_irq_ops { 160 + void (*register_ipi)(void); 161 + 162 + int (*assign_irq_vector)(int irq); 163 + void (*free_irq_vector)(int vector); 164 + 165 + void (*register_percpu_irq)(ia64_vector vec, 166 + struct irqaction *action); 167 + 168 + void (*resend_irq)(unsigned int vector); 169 + }; 170 + 171 + extern struct pv_irq_ops pv_irq_ops; 172 + 173 + static inline void 174 + ia64_register_ipi(void) 175 + { 176 + pv_irq_ops.register_ipi(); 177 + } 178 + 179 + static inline int 180 + assign_irq_vector(int irq) 181 + { 182 + return pv_irq_ops.assign_irq_vector(irq); 183 + } 184 + 185 + static inline void 186 + free_irq_vector(int vector) 187 + { 188 + return pv_irq_ops.free_irq_vector(vector); 189 + } 190 + 191 + static inline void 192 + register_percpu_irq(ia64_vector vec, struct irqaction *action) 193 + { 194 + pv_irq_ops.register_percpu_irq(vec, action); 195 + } 196 + 197 + static inline void 198 + ia64_resend_irq(unsigned int vector) 199 + { 200 + pv_irq_ops.resend_irq(vector); 201 + } 202 + 203 + /****************************************************************************** 204 + * replacement of time operations. 
205 + */ 206 + 207 + extern struct itc_jitter_data_t itc_jitter_data; 208 + extern volatile int time_keeper_id; 209 + 210 + struct pv_time_ops { 211 + void (*init_missing_ticks_accounting)(int cpu); 212 + int (*do_steal_accounting)(unsigned long *new_itm); 213 + 214 + void (*clocksource_resume)(void); 215 + }; 216 + 217 + extern struct pv_time_ops pv_time_ops; 218 + 219 + static inline void 220 + paravirt_init_missing_ticks_accounting(int cpu) 221 + { 222 + if (pv_time_ops.init_missing_ticks_accounting) 223 + pv_time_ops.init_missing_ticks_accounting(cpu); 224 + } 225 + 226 + static inline int 227 + paravirt_do_steal_accounting(unsigned long *new_itm) 228 + { 229 + return pv_time_ops.do_steal_accounting(new_itm); 230 + } 231 + 232 + #endif /* !__ASSEMBLY__ */ 233 + 234 + #else 235 + /* fallback for native case */ 236 + 237 + #ifndef __ASSEMBLY__ 238 + 239 + #define paravirt_banner() do { } while (0) 240 + #define paravirt_reserve_memory(region) 0 241 + 242 + #define paravirt_arch_setup_early() do { } while (0) 243 + #define paravirt_arch_setup_console(cmdline_p) do { } while (0) 244 + #define paravirt_arch_setup_nomca() 0 245 + #define paravirt_post_smp_prepare_boot_cpu() do { } while (0) 246 + 247 + #define paravirt_init_missing_ticks_accounting(cpu) do { } while (0) 248 + #define paravirt_do_steal_accounting(new_itm) 0 249 + 250 + #endif /* __ASSEMBLY__ */ 251 + 252 + 253 + #endif /* CONFIG_PARAVIRT_GUEST */ 254 + 255 + #endif /* __ASM_PARAVIRT_H */
+114
include/asm-ia64/paravirt_privop.h
··· 1 + /****************************************************************************** 2 + * include/asm-ia64/paravirt_privops.h 3 + * 4 + * Copyright (c) 2008 Isaku Yamahata <yamahata at valinux co jp> 5 + * VA Linux Systems Japan K.K. 6 + * 7 + * This program is free software; you can redistribute it and/or modify 8 + * it under the terms of the GNU General Public License as published by 9 + * the Free Software Foundation; either version 2 of the License, or 10 + * (at your option) any later version. 11 + * 12 + * This program is distributed in the hope that it will be useful, 13 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 14 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 15 + * GNU General Public License for more details. 16 + * 17 + * You should have received a copy of the GNU General Public License 18 + * along with this program; if not, write to the Free Software 19 + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 20 + * 21 + */ 22 + 23 + #ifndef _ASM_IA64_PARAVIRT_PRIVOP_H 24 + #define _ASM_IA64_PARAVIRT_PRIVOP_H 25 + 26 + #ifdef CONFIG_PARAVIRT 27 + 28 + #ifndef __ASSEMBLY__ 29 + 30 + #include <linux/types.h> 31 + #include <asm/kregs.h> /* for IA64_PSR_I */ 32 + 33 + /****************************************************************************** 34 + * replacement of intrinsics operations. 
35 + */ 36 + 37 + struct pv_cpu_ops { 38 + void (*fc)(unsigned long addr); 39 + unsigned long (*thash)(unsigned long addr); 40 + unsigned long (*get_cpuid)(int index); 41 + unsigned long (*get_pmd)(int index); 42 + unsigned long (*getreg)(int reg); 43 + void (*setreg)(int reg, unsigned long val); 44 + void (*ptcga)(unsigned long addr, unsigned long size); 45 + unsigned long (*get_rr)(unsigned long index); 46 + void (*set_rr)(unsigned long index, unsigned long val); 47 + void (*set_rr0_to_rr4)(unsigned long val0, unsigned long val1, 48 + unsigned long val2, unsigned long val3, 49 + unsigned long val4); 50 + void (*ssm_i)(void); 51 + void (*rsm_i)(void); 52 + unsigned long (*get_psr_i)(void); 53 + void (*intrin_local_irq_restore)(unsigned long flags); 54 + }; 55 + 56 + extern struct pv_cpu_ops pv_cpu_ops; 57 + 58 + extern void ia64_native_setreg_func(int regnum, unsigned long val); 59 + extern unsigned long ia64_native_getreg_func(int regnum); 60 + 61 + /************************************************/ 62 + /* Instructions paravirtualized for performance */ 63 + /************************************************/ 64 + 65 + /* mask for ia64_native_ssm/rsm() must be constant (the "i" constraint). 66 + * A static inline function can't guarantee that. */ 67 + #define paravirt_ssm(mask) \ 68 + do { \ 69 + if ((mask) == IA64_PSR_I) \ 70 + pv_cpu_ops.ssm_i(); \ 71 + else \ 72 + ia64_native_ssm(mask); \ 73 + } while (0) 74 + 75 + #define paravirt_rsm(mask) \ 76 + do { \ 77 + if ((mask) == IA64_PSR_I) \ 78 + pv_cpu_ops.rsm_i(); \ 79 + else \ 80 + ia64_native_rsm(mask); \ 81 + } while (0) 82 + 83 + /****************************************************************************** 84 + * replacement of handwritten assembly code. 
85 + */ 86 + struct pv_cpu_asm_switch { 87 + unsigned long switch_to; 88 + unsigned long leave_syscall; 89 + unsigned long work_processed_syscall; 90 + unsigned long leave_kernel; 91 + }; 92 + void paravirt_cpu_asm_init(const struct pv_cpu_asm_switch *cpu_asm_switch); 93 + 94 + #endif /* __ASSEMBLY__ */ 95 + 96 + #define IA64_PARAVIRT_ASM_FUNC(name) paravirt_ ## name 97 + 98 + #else 99 + 100 + /* fallback for native case */ 101 + #define IA64_PARAVIRT_ASM_FUNC(name) ia64_native_ ## name 102 + 103 + #endif /* CONFIG_PARAVIRT */ 104 + 105 + /* these routines utilize privilege-sensitive or performance-sensitive 106 + * privileged instructions so the code must be replaced with 107 + * paravirtualized versions */ 108 + #define ia64_switch_to IA64_PARAVIRT_ASM_FUNC(switch_to) 109 + #define ia64_leave_syscall IA64_PARAVIRT_ASM_FUNC(leave_syscall) 110 + #define ia64_work_processed_syscall \ 111 + IA64_PARAVIRT_ASM_FUNC(work_processed_syscall) 112 + #define ia64_leave_kernel IA64_PARAVIRT_ASM_FUNC(leave_kernel) 113 + 114 + #endif /* _ASM_IA64_PARAVIRT_PRIVOP_H */
+2
include/asm-ia64/smp.h
··· 15 15 #include <linux/kernel.h> 16 16 #include <linux/cpumask.h> 17 17 #include <linux/bitops.h> 18 + #include <linux/irqreturn.h> 18 19 19 20 #include <asm/io.h> 20 21 #include <asm/param.h> ··· 121 120 extern void __init init_smp_config (void); 122 121 extern void smp_do_timer (struct pt_regs *regs); 123 122 123 + extern irqreturn_t handle_IPI(int irq, void *dev_id); 124 124 extern void smp_send_reschedule (int cpu); 125 125 extern void identify_siblings (struct cpuinfo_ia64 *); 126 126 extern int is_multithreading_enabled(void);
+9 -2
include/asm-ia64/system.h
··· 26 26 */ 27 27 #define KERNEL_START (GATE_ADDR+__IA64_UL_CONST(0x100000000)) 28 28 #define PERCPU_ADDR (-PERCPU_PAGE_SIZE) 29 + #define LOAD_OFFSET (KERNEL_START - KERNEL_TR_PAGE_SIZE) 29 30 30 31 #ifndef __ASSEMBLY__ 31 32 ··· 123 122 * write a floating-point register right before reading the PSR 124 123 * and that writes to PSR.mfl 125 124 */ 125 + #ifdef CONFIG_PARAVIRT 126 + #define __local_save_flags() ia64_get_psr_i() 127 + #else 128 + #define __local_save_flags() ia64_getreg(_IA64_REG_PSR) 129 + #endif 130 + 126 131 #define __local_irq_save(x) \ 127 132 do { \ 128 133 ia64_stop(); \ 129 - (x) = ia64_getreg(_IA64_REG_PSR); \ 134 + (x) = __local_save_flags(); \ 130 135 ia64_stop(); \ 131 136 ia64_rsm(IA64_PSR_I); \ 132 137 } while (0) ··· 180 173 #endif /* !CONFIG_IA64_DEBUG_IRQ */ 181 174 182 175 #define local_irq_enable() ({ ia64_stop(); ia64_ssm(IA64_PSR_I); ia64_srlz_d(); }) 183 - #define local_save_flags(flags) ({ ia64_stop(); (flags) = ia64_getreg(_IA64_REG_PSR); }) 176 + #define local_save_flags(flags) ({ ia64_stop(); (flags) = __local_save_flags(); }) 184 177 185 178 #define irqs_disabled() \ 186 179 ({ \
+415 -8
include/asm-ia64/uv/uv_mmrs.h
··· 11 11 #ifndef __ASM_IA64_UV_MMRS__ 12 12 #define __ASM_IA64_UV_MMRS__ 13 13 14 - /* 15 - * AUTO GENERATED - Do not edit 16 - */ 14 + #define UV_MMR_ENABLE (1UL << 63) 17 15 18 - #define UV_MMR_ENABLE (1UL << 63) 16 + /* ========================================================================= */ 17 + /* UVH_BAU_DATA_CONFIG */ 18 + /* ========================================================================= */ 19 + #define UVH_BAU_DATA_CONFIG 0x61680UL 20 + #define UVH_BAU_DATA_CONFIG_32 0x0438 21 + 22 + #define UVH_BAU_DATA_CONFIG_VECTOR_SHFT 0 23 + #define UVH_BAU_DATA_CONFIG_VECTOR_MASK 0x00000000000000ffUL 24 + #define UVH_BAU_DATA_CONFIG_DM_SHFT 8 25 + #define UVH_BAU_DATA_CONFIG_DM_MASK 0x0000000000000700UL 26 + #define UVH_BAU_DATA_CONFIG_DESTMODE_SHFT 11 27 + #define UVH_BAU_DATA_CONFIG_DESTMODE_MASK 0x0000000000000800UL 28 + #define UVH_BAU_DATA_CONFIG_STATUS_SHFT 12 29 + #define UVH_BAU_DATA_CONFIG_STATUS_MASK 0x0000000000001000UL 30 + #define UVH_BAU_DATA_CONFIG_P_SHFT 13 31 + #define UVH_BAU_DATA_CONFIG_P_MASK 0x0000000000002000UL 32 + #define UVH_BAU_DATA_CONFIG_T_SHFT 15 33 + #define UVH_BAU_DATA_CONFIG_T_MASK 0x0000000000008000UL 34 + #define UVH_BAU_DATA_CONFIG_M_SHFT 16 35 + #define UVH_BAU_DATA_CONFIG_M_MASK 0x0000000000010000UL 36 + #define UVH_BAU_DATA_CONFIG_APIC_ID_SHFT 32 37 + #define UVH_BAU_DATA_CONFIG_APIC_ID_MASK 0xffffffff00000000UL 38 + 39 + union uvh_bau_data_config_u { 40 + unsigned long v; 41 + struct uvh_bau_data_config_s { 42 + unsigned long vector_ : 8; /* RW */ 43 + unsigned long dm : 3; /* RW */ 44 + unsigned long destmode : 1; /* RW */ 45 + unsigned long status : 1; /* RO */ 46 + unsigned long p : 1; /* RO */ 47 + unsigned long rsvd_14 : 1; /* */ 48 + unsigned long t : 1; /* RO */ 49 + unsigned long m : 1; /* RW */ 50 + unsigned long rsvd_17_31: 15; /* */ 51 + unsigned long apic_id : 32; /* RW */ 52 + } s; 53 + }; 54 + 55 + /* ========================================================================= */ 56 + /* 
UVH_EVENT_OCCURRED0 */ 57 + /* ========================================================================= */ 58 + #define UVH_EVENT_OCCURRED0 0x70000UL 59 + #define UVH_EVENT_OCCURRED0_32 0x005e8 60 + 61 + #define UVH_EVENT_OCCURRED0_LB_HCERR_SHFT 0 62 + #define UVH_EVENT_OCCURRED0_LB_HCERR_MASK 0x0000000000000001UL 63 + #define UVH_EVENT_OCCURRED0_GR0_HCERR_SHFT 1 64 + #define UVH_EVENT_OCCURRED0_GR0_HCERR_MASK 0x0000000000000002UL 65 + #define UVH_EVENT_OCCURRED0_GR1_HCERR_SHFT 2 66 + #define UVH_EVENT_OCCURRED0_GR1_HCERR_MASK 0x0000000000000004UL 67 + #define UVH_EVENT_OCCURRED0_LH_HCERR_SHFT 3 68 + #define UVH_EVENT_OCCURRED0_LH_HCERR_MASK 0x0000000000000008UL 69 + #define UVH_EVENT_OCCURRED0_RH_HCERR_SHFT 4 70 + #define UVH_EVENT_OCCURRED0_RH_HCERR_MASK 0x0000000000000010UL 71 + #define UVH_EVENT_OCCURRED0_XN_HCERR_SHFT 5 72 + #define UVH_EVENT_OCCURRED0_XN_HCERR_MASK 0x0000000000000020UL 73 + #define UVH_EVENT_OCCURRED0_SI_HCERR_SHFT 6 74 + #define UVH_EVENT_OCCURRED0_SI_HCERR_MASK 0x0000000000000040UL 75 + #define UVH_EVENT_OCCURRED0_LB_AOERR0_SHFT 7 76 + #define UVH_EVENT_OCCURRED0_LB_AOERR0_MASK 0x0000000000000080UL 77 + #define UVH_EVENT_OCCURRED0_GR0_AOERR0_SHFT 8 78 + #define UVH_EVENT_OCCURRED0_GR0_AOERR0_MASK 0x0000000000000100UL 79 + #define UVH_EVENT_OCCURRED0_GR1_AOERR0_SHFT 9 80 + #define UVH_EVENT_OCCURRED0_GR1_AOERR0_MASK 0x0000000000000200UL 81 + #define UVH_EVENT_OCCURRED0_LH_AOERR0_SHFT 10 82 + #define UVH_EVENT_OCCURRED0_LH_AOERR0_MASK 0x0000000000000400UL 83 + #define UVH_EVENT_OCCURRED0_RH_AOERR0_SHFT 11 84 + #define UVH_EVENT_OCCURRED0_RH_AOERR0_MASK 0x0000000000000800UL 85 + #define UVH_EVENT_OCCURRED0_XN_AOERR0_SHFT 12 86 + #define UVH_EVENT_OCCURRED0_XN_AOERR0_MASK 0x0000000000001000UL 87 + #define UVH_EVENT_OCCURRED0_SI_AOERR0_SHFT 13 88 + #define UVH_EVENT_OCCURRED0_SI_AOERR0_MASK 0x0000000000002000UL 89 + #define UVH_EVENT_OCCURRED0_LB_AOERR1_SHFT 14 90 + #define UVH_EVENT_OCCURRED0_LB_AOERR1_MASK 0x0000000000004000UL 91 + #define 
UVH_EVENT_OCCURRED0_GR0_AOERR1_SHFT 15 92 + #define UVH_EVENT_OCCURRED0_GR0_AOERR1_MASK 0x0000000000008000UL 93 + #define UVH_EVENT_OCCURRED0_GR1_AOERR1_SHFT 16 94 + #define UVH_EVENT_OCCURRED0_GR1_AOERR1_MASK 0x0000000000010000UL 95 + #define UVH_EVENT_OCCURRED0_LH_AOERR1_SHFT 17 96 + #define UVH_EVENT_OCCURRED0_LH_AOERR1_MASK 0x0000000000020000UL 97 + #define UVH_EVENT_OCCURRED0_RH_AOERR1_SHFT 18 98 + #define UVH_EVENT_OCCURRED0_RH_AOERR1_MASK 0x0000000000040000UL 99 + #define UVH_EVENT_OCCURRED0_XN_AOERR1_SHFT 19 100 + #define UVH_EVENT_OCCURRED0_XN_AOERR1_MASK 0x0000000000080000UL 101 + #define UVH_EVENT_OCCURRED0_SI_AOERR1_SHFT 20 102 + #define UVH_EVENT_OCCURRED0_SI_AOERR1_MASK 0x0000000000100000UL 103 + #define UVH_EVENT_OCCURRED0_RH_VPI_INT_SHFT 21 104 + #define UVH_EVENT_OCCURRED0_RH_VPI_INT_MASK 0x0000000000200000UL 105 + #define UVH_EVENT_OCCURRED0_SYSTEM_SHUTDOWN_INT_SHFT 22 106 + #define UVH_EVENT_OCCURRED0_SYSTEM_SHUTDOWN_INT_MASK 0x0000000000400000UL 107 + #define UVH_EVENT_OCCURRED0_LB_IRQ_INT_0_SHFT 23 108 + #define UVH_EVENT_OCCURRED0_LB_IRQ_INT_0_MASK 0x0000000000800000UL 109 + #define UVH_EVENT_OCCURRED0_LB_IRQ_INT_1_SHFT 24 110 + #define UVH_EVENT_OCCURRED0_LB_IRQ_INT_1_MASK 0x0000000001000000UL 111 + #define UVH_EVENT_OCCURRED0_LB_IRQ_INT_2_SHFT 25 112 + #define UVH_EVENT_OCCURRED0_LB_IRQ_INT_2_MASK 0x0000000002000000UL 113 + #define UVH_EVENT_OCCURRED0_LB_IRQ_INT_3_SHFT 26 114 + #define UVH_EVENT_OCCURRED0_LB_IRQ_INT_3_MASK 0x0000000004000000UL 115 + #define UVH_EVENT_OCCURRED0_LB_IRQ_INT_4_SHFT 27 116 + #define UVH_EVENT_OCCURRED0_LB_IRQ_INT_4_MASK 0x0000000008000000UL 117 + #define UVH_EVENT_OCCURRED0_LB_IRQ_INT_5_SHFT 28 118 + #define UVH_EVENT_OCCURRED0_LB_IRQ_INT_5_MASK 0x0000000010000000UL 119 + #define UVH_EVENT_OCCURRED0_LB_IRQ_INT_6_SHFT 29 120 + #define UVH_EVENT_OCCURRED0_LB_IRQ_INT_6_MASK 0x0000000020000000UL 121 + #define UVH_EVENT_OCCURRED0_LB_IRQ_INT_7_SHFT 30 122 + #define UVH_EVENT_OCCURRED0_LB_IRQ_INT_7_MASK 
0x0000000040000000UL 123 + #define UVH_EVENT_OCCURRED0_LB_IRQ_INT_8_SHFT 31 124 + #define UVH_EVENT_OCCURRED0_LB_IRQ_INT_8_MASK 0x0000000080000000UL 125 + #define UVH_EVENT_OCCURRED0_LB_IRQ_INT_9_SHFT 32 126 + #define UVH_EVENT_OCCURRED0_LB_IRQ_INT_9_MASK 0x0000000100000000UL 127 + #define UVH_EVENT_OCCURRED0_LB_IRQ_INT_10_SHFT 33 128 + #define UVH_EVENT_OCCURRED0_LB_IRQ_INT_10_MASK 0x0000000200000000UL 129 + #define UVH_EVENT_OCCURRED0_LB_IRQ_INT_11_SHFT 34 130 + #define UVH_EVENT_OCCURRED0_LB_IRQ_INT_11_MASK 0x0000000400000000UL 131 + #define UVH_EVENT_OCCURRED0_LB_IRQ_INT_12_SHFT 35 132 + #define UVH_EVENT_OCCURRED0_LB_IRQ_INT_12_MASK 0x0000000800000000UL 133 + #define UVH_EVENT_OCCURRED0_LB_IRQ_INT_13_SHFT 36 134 + #define UVH_EVENT_OCCURRED0_LB_IRQ_INT_13_MASK 0x0000001000000000UL 135 + #define UVH_EVENT_OCCURRED0_LB_IRQ_INT_14_SHFT 37 136 + #define UVH_EVENT_OCCURRED0_LB_IRQ_INT_14_MASK 0x0000002000000000UL 137 + #define UVH_EVENT_OCCURRED0_LB_IRQ_INT_15_SHFT 38 138 + #define UVH_EVENT_OCCURRED0_LB_IRQ_INT_15_MASK 0x0000004000000000UL 139 + #define UVH_EVENT_OCCURRED0_L1_NMI_INT_SHFT 39 140 + #define UVH_EVENT_OCCURRED0_L1_NMI_INT_MASK 0x0000008000000000UL 141 + #define UVH_EVENT_OCCURRED0_STOP_CLOCK_SHFT 40 142 + #define UVH_EVENT_OCCURRED0_STOP_CLOCK_MASK 0x0000010000000000UL 143 + #define UVH_EVENT_OCCURRED0_ASIC_TO_L1_SHFT 41 144 + #define UVH_EVENT_OCCURRED0_ASIC_TO_L1_MASK 0x0000020000000000UL 145 + #define UVH_EVENT_OCCURRED0_L1_TO_ASIC_SHFT 42 146 + #define UVH_EVENT_OCCURRED0_L1_TO_ASIC_MASK 0x0000040000000000UL 147 + #define UVH_EVENT_OCCURRED0_LTC_INT_SHFT 43 148 + #define UVH_EVENT_OCCURRED0_LTC_INT_MASK 0x0000080000000000UL 149 + #define UVH_EVENT_OCCURRED0_LA_SEQ_TRIGGER_SHFT 44 150 + #define UVH_EVENT_OCCURRED0_LA_SEQ_TRIGGER_MASK 0x0000100000000000UL 151 + #define UVH_EVENT_OCCURRED0_IPI_INT_SHFT 45 152 + #define UVH_EVENT_OCCURRED0_IPI_INT_MASK 0x0000200000000000UL 153 + #define UVH_EVENT_OCCURRED0_EXTIO_INT0_SHFT 46 154 + #define 
UVH_EVENT_OCCURRED0_EXTIO_INT0_MASK 0x0000400000000000UL 155 + #define UVH_EVENT_OCCURRED0_EXTIO_INT1_SHFT 47 156 + #define UVH_EVENT_OCCURRED0_EXTIO_INT1_MASK 0x0000800000000000UL 157 + #define UVH_EVENT_OCCURRED0_EXTIO_INT2_SHFT 48 158 + #define UVH_EVENT_OCCURRED0_EXTIO_INT2_MASK 0x0001000000000000UL 159 + #define UVH_EVENT_OCCURRED0_EXTIO_INT3_SHFT 49 160 + #define UVH_EVENT_OCCURRED0_EXTIO_INT3_MASK 0x0002000000000000UL 161 + #define UVH_EVENT_OCCURRED0_PROFILE_INT_SHFT 50 162 + #define UVH_EVENT_OCCURRED0_PROFILE_INT_MASK 0x0004000000000000UL 163 + #define UVH_EVENT_OCCURRED0_RTC0_SHFT 51 164 + #define UVH_EVENT_OCCURRED0_RTC0_MASK 0x0008000000000000UL 165 + #define UVH_EVENT_OCCURRED0_RTC1_SHFT 52 166 + #define UVH_EVENT_OCCURRED0_RTC1_MASK 0x0010000000000000UL 167 + #define UVH_EVENT_OCCURRED0_RTC2_SHFT 53 168 + #define UVH_EVENT_OCCURRED0_RTC2_MASK 0x0020000000000000UL 169 + #define UVH_EVENT_OCCURRED0_RTC3_SHFT 54 170 + #define UVH_EVENT_OCCURRED0_RTC3_MASK 0x0040000000000000UL 171 + #define UVH_EVENT_OCCURRED0_BAU_DATA_SHFT 55 172 + #define UVH_EVENT_OCCURRED0_BAU_DATA_MASK 0x0080000000000000UL 173 + #define UVH_EVENT_OCCURRED0_POWER_MANAGEMENT_REQ_SHFT 56 174 + #define UVH_EVENT_OCCURRED0_POWER_MANAGEMENT_REQ_MASK 0x0100000000000000UL 175 + union uvh_event_occurred0_u { 176 + unsigned long v; 177 + struct uvh_event_occurred0_s { 178 + unsigned long lb_hcerr : 1; /* RW, W1C */ 179 + unsigned long gr0_hcerr : 1; /* RW, W1C */ 180 + unsigned long gr1_hcerr : 1; /* RW, W1C */ 181 + unsigned long lh_hcerr : 1; /* RW, W1C */ 182 + unsigned long rh_hcerr : 1; /* RW, W1C */ 183 + unsigned long xn_hcerr : 1; /* RW, W1C */ 184 + unsigned long si_hcerr : 1; /* RW, W1C */ 185 + unsigned long lb_aoerr0 : 1; /* RW, W1C */ 186 + unsigned long gr0_aoerr0 : 1; /* RW, W1C */ 187 + unsigned long gr1_aoerr0 : 1; /* RW, W1C */ 188 + unsigned long lh_aoerr0 : 1; /* RW, W1C */ 189 + unsigned long rh_aoerr0 : 1; /* RW, W1C */ 190 + unsigned long xn_aoerr0 : 1; /* RW, W1C */ 
191 + unsigned long si_aoerr0 : 1; /* RW, W1C */ 192 + unsigned long lb_aoerr1 : 1; /* RW, W1C */ 193 + unsigned long gr0_aoerr1 : 1; /* RW, W1C */ 194 + unsigned long gr1_aoerr1 : 1; /* RW, W1C */ 195 + unsigned long lh_aoerr1 : 1; /* RW, W1C */ 196 + unsigned long rh_aoerr1 : 1; /* RW, W1C */ 197 + unsigned long xn_aoerr1 : 1; /* RW, W1C */ 198 + unsigned long si_aoerr1 : 1; /* RW, W1C */ 199 + unsigned long rh_vpi_int : 1; /* RW, W1C */ 200 + unsigned long system_shutdown_int : 1; /* RW, W1C */ 201 + unsigned long lb_irq_int_0 : 1; /* RW, W1C */ 202 + unsigned long lb_irq_int_1 : 1; /* RW, W1C */ 203 + unsigned long lb_irq_int_2 : 1; /* RW, W1C */ 204 + unsigned long lb_irq_int_3 : 1; /* RW, W1C */ 205 + unsigned long lb_irq_int_4 : 1; /* RW, W1C */ 206 + unsigned long lb_irq_int_5 : 1; /* RW, W1C */ 207 + unsigned long lb_irq_int_6 : 1; /* RW, W1C */ 208 + unsigned long lb_irq_int_7 : 1; /* RW, W1C */ 209 + unsigned long lb_irq_int_8 : 1; /* RW, W1C */ 210 + unsigned long lb_irq_int_9 : 1; /* RW, W1C */ 211 + unsigned long lb_irq_int_10 : 1; /* RW, W1C */ 212 + unsigned long lb_irq_int_11 : 1; /* RW, W1C */ 213 + unsigned long lb_irq_int_12 : 1; /* RW, W1C */ 214 + unsigned long lb_irq_int_13 : 1; /* RW, W1C */ 215 + unsigned long lb_irq_int_14 : 1; /* RW, W1C */ 216 + unsigned long lb_irq_int_15 : 1; /* RW, W1C */ 217 + unsigned long l1_nmi_int : 1; /* RW, W1C */ 218 + unsigned long stop_clock : 1; /* RW, W1C */ 219 + unsigned long asic_to_l1 : 1; /* RW, W1C */ 220 + unsigned long l1_to_asic : 1; /* RW, W1C */ 221 + unsigned long ltc_int : 1; /* RW, W1C */ 222 + unsigned long la_seq_trigger : 1; /* RW, W1C */ 223 + unsigned long ipi_int : 1; /* RW, W1C */ 224 + unsigned long extio_int0 : 1; /* RW, W1C */ 225 + unsigned long extio_int1 : 1; /* RW, W1C */ 226 + unsigned long extio_int2 : 1; /* RW, W1C */ 227 + unsigned long extio_int3 : 1; /* RW, W1C */ 228 + unsigned long profile_int : 1; /* RW, W1C */ 229 + unsigned long rtc0 : 1; /* RW, W1C */ 230 + unsigned 
long rtc1 : 1; /* RW, W1C */ 231 + unsigned long rtc2 : 1; /* RW, W1C */ 232 + unsigned long rtc3 : 1; /* RW, W1C */ 233 + unsigned long bau_data : 1; /* RW, W1C */ 234 + unsigned long power_management_req : 1; /* RW, W1C */ 235 + unsigned long rsvd_57_63 : 7; /* */ 236 + } s; 237 + }; 238 + 239 + /* ========================================================================= */ 240 + /* UVH_EVENT_OCCURRED0_ALIAS */ 241 + /* ========================================================================= */ 242 + #define UVH_EVENT_OCCURRED0_ALIAS 0x0000000000070008UL 243 + #define UVH_EVENT_OCCURRED0_ALIAS_32 0x005f0 244 + 245 + /* ========================================================================= */ 246 + /* UVH_INT_CMPB */ 247 + /* ========================================================================= */ 248 + #define UVH_INT_CMPB 0x22080UL 249 + 250 + #define UVH_INT_CMPB_REAL_TIME_CMPB_SHFT 0 251 + #define UVH_INT_CMPB_REAL_TIME_CMPB_MASK 0x00ffffffffffffffUL 252 + 253 + union uvh_int_cmpb_u { 254 + unsigned long v; 255 + struct uvh_int_cmpb_s { 256 + unsigned long real_time_cmpb : 56; /* RW */ 257 + unsigned long rsvd_56_63 : 8; /* */ 258 + } s; 259 + }; 260 + 261 + /* ========================================================================= */ 262 + /* UVH_INT_CMPC */ 263 + /* ========================================================================= */ 264 + #define UVH_INT_CMPC 0x22100UL 265 + 266 + #define UVH_INT_CMPC_REAL_TIME_CMPC_SHFT 0 267 + #define UVH_INT_CMPC_REAL_TIME_CMPC_MASK 0x00ffffffffffffffUL 268 + 269 + union uvh_int_cmpc_u { 270 + unsigned long v; 271 + struct uvh_int_cmpc_s { 272 + unsigned long real_time_cmpc : 56; /* RW */ 273 + unsigned long rsvd_56_63 : 8; /* */ 274 + } s; 275 + }; 276 + 277 + /* ========================================================================= */ 278 + /* UVH_INT_CMPD */ 279 + /* ========================================================================= */ 280 + #define UVH_INT_CMPD 0x22180UL 281 + 282 + 
+ #define UVH_INT_CMPD_REAL_TIME_CMPD_SHFT 0
+ #define UVH_INT_CMPD_REAL_TIME_CMPD_MASK 0x00ffffffffffffffUL
+
+ union uvh_int_cmpd_u {
+     unsigned long v;
+     struct uvh_int_cmpd_s {
+         unsigned long real_time_cmpd : 56; /* RW */
+         unsigned long rsvd_56_63     :  8; /*    */
+     } s;
+ };

  /* ========================================================================= */
  /*                                UVH_NODE_ID                                */
  /* ========================================================================= */
···
  #define UVH_RH_GAM_GRU_OVERLAY_CONFIG_MMR_BASE_SHFT 28
  #define UVH_RH_GAM_GRU_OVERLAY_CONFIG_MMR_BASE_MASK 0x00003ffff0000000UL
- #define UVH_RH_GAM_GRU_OVERLAY_CONFIG_MMR_GR4_SHFT 46
- #define UVH_RH_GAM_GRU_OVERLAY_CONFIG_MMR_GR4_MASK 0x0000400000000000UL
+ #define UVH_RH_GAM_GRU_OVERLAY_CONFIG_MMR_GR4_SHFT 48
+ #define UVH_RH_GAM_GRU_OVERLAY_CONFIG_MMR_GR4_MASK 0x0001000000000000UL
  #define UVH_RH_GAM_GRU_OVERLAY_CONFIG_MMR_N_GRU_SHFT 52
  #define UVH_RH_GAM_GRU_OVERLAY_CONFIG_MMR_N_GRU_MASK 0x00f0000000000000UL
  #define UVH_RH_GAM_GRU_OVERLAY_CONFIG_MMR_ENABLE_SHFT 63
···
      struct uvh_rh_gam_gru_overlay_config_mmr_s {
          unsigned long rsvd_0_27 : 28; /*    */
          unsigned long base      : 18; /* RW */
+         unsigned long rsvd_46_47:  2; /*    */
          unsigned long gr4       :  1; /* RW */
-         unsigned long rsvd_47_51:  5; /*    */
+         unsigned long rsvd_49_51:  3; /*    */
          unsigned long n_gru     :  4; /* RW */
          unsigned long rsvd_56_62:  7; /*    */
          unsigned long enable    :  1; /* RW */
···
  /* ========================================================================= */
  /*                                  UVH_RTC                                  */
  /* ========================================================================= */
- #define UVH_RTC 0x28000UL
+ #define UVH_RTC 0x340000UL

  #define UVH_RTC_REAL_TIME_CLOCK_SHFT 0
  #define UVH_RTC_REAL_TIME_CLOCK_MASK 0x00ffffffffffffffUL
···
      struct uvh_rtc_s {
          unsigned long real_time_clock : 56; /* RW */
          unsigned long rsvd_56_63      :  8; /*    */
+     } s;
+ };
+
+ /* ========================================================================= */
+ /*                            UVH_RTC1_INT_CONFIG                            */
+ /* ========================================================================= */
+ #define UVH_RTC1_INT_CONFIG 0x615c0UL
+
+ #define UVH_RTC1_INT_CONFIG_VECTOR_SHFT 0
+ #define UVH_RTC1_INT_CONFIG_VECTOR_MASK 0x00000000000000ffUL
+ #define UVH_RTC1_INT_CONFIG_DM_SHFT 8
+ #define UVH_RTC1_INT_CONFIG_DM_MASK 0x0000000000000700UL
+ #define UVH_RTC1_INT_CONFIG_DESTMODE_SHFT 11
+ #define UVH_RTC1_INT_CONFIG_DESTMODE_MASK 0x0000000000000800UL
+ #define UVH_RTC1_INT_CONFIG_STATUS_SHFT 12
+ #define UVH_RTC1_INT_CONFIG_STATUS_MASK 0x0000000000001000UL
+ #define UVH_RTC1_INT_CONFIG_P_SHFT 13
+ #define UVH_RTC1_INT_CONFIG_P_MASK 0x0000000000002000UL
+ #define UVH_RTC1_INT_CONFIG_T_SHFT 15
+ #define UVH_RTC1_INT_CONFIG_T_MASK 0x0000000000008000UL
+ #define UVH_RTC1_INT_CONFIG_M_SHFT 16
+ #define UVH_RTC1_INT_CONFIG_M_MASK 0x0000000000010000UL
+ #define UVH_RTC1_INT_CONFIG_APIC_ID_SHFT 32
+ #define UVH_RTC1_INT_CONFIG_APIC_ID_MASK 0xffffffff00000000UL
+
+ union uvh_rtc1_int_config_u {
+     unsigned long v;
+     struct uvh_rtc1_int_config_s {
+         unsigned long vector_   :  8; /* RW */
+         unsigned long dm        :  3; /* RW */
+         unsigned long destmode  :  1; /* RW */
+         unsigned long status    :  1; /* RO */
+         unsigned long p         :  1; /* RO */
+         unsigned long rsvd_14   :  1; /*    */
+         unsigned long t         :  1; /* RO */
+         unsigned long m         :  1; /* RW */
+         unsigned long rsvd_17_31: 15; /*    */
+         unsigned long apic_id   : 32; /* RW */
+     } s;
+ };
+
+ /* ========================================================================= */
+ /*                            UVH_RTC2_INT_CONFIG                            */
+ /* ========================================================================= */
+ #define UVH_RTC2_INT_CONFIG 0x61600UL
+
+ #define UVH_RTC2_INT_CONFIG_VECTOR_SHFT 0
+ #define UVH_RTC2_INT_CONFIG_VECTOR_MASK 0x00000000000000ffUL
+ #define UVH_RTC2_INT_CONFIG_DM_SHFT 8
+ #define UVH_RTC2_INT_CONFIG_DM_MASK 0x0000000000000700UL
+ #define UVH_RTC2_INT_CONFIG_DESTMODE_SHFT 11
+ #define UVH_RTC2_INT_CONFIG_DESTMODE_MASK 0x0000000000000800UL
+ #define UVH_RTC2_INT_CONFIG_STATUS_SHFT 12
+ #define UVH_RTC2_INT_CONFIG_STATUS_MASK 0x0000000000001000UL
+ #define UVH_RTC2_INT_CONFIG_P_SHFT 13
+ #define UVH_RTC2_INT_CONFIG_P_MASK 0x0000000000002000UL
+ #define UVH_RTC2_INT_CONFIG_T_SHFT 15
+ #define UVH_RTC2_INT_CONFIG_T_MASK 0x0000000000008000UL
+ #define UVH_RTC2_INT_CONFIG_M_SHFT 16
+ #define UVH_RTC2_INT_CONFIG_M_MASK 0x0000000000010000UL
+ #define UVH_RTC2_INT_CONFIG_APIC_ID_SHFT 32
+ #define UVH_RTC2_INT_CONFIG_APIC_ID_MASK 0xffffffff00000000UL
+
+ union uvh_rtc2_int_config_u {
+     unsigned long v;
+     struct uvh_rtc2_int_config_s {
+         unsigned long vector_   :  8; /* RW */
+         unsigned long dm        :  3; /* RW */
+         unsigned long destmode  :  1; /* RW */
+         unsigned long status    :  1; /* RO */
+         unsigned long p         :  1; /* RO */
+         unsigned long rsvd_14   :  1; /*    */
+         unsigned long t         :  1; /* RO */
+         unsigned long m         :  1; /* RW */
+         unsigned long rsvd_17_31: 15; /*    */
+         unsigned long apic_id   : 32; /* RW */
+     } s;
+ };
+
+ /* ========================================================================= */
+ /*                            UVH_RTC3_INT_CONFIG                            */
+ /* ========================================================================= */
+ #define UVH_RTC3_INT_CONFIG 0x61640UL
+
+ #define UVH_RTC3_INT_CONFIG_VECTOR_SHFT 0
+ #define UVH_RTC3_INT_CONFIG_VECTOR_MASK 0x00000000000000ffUL
+ #define UVH_RTC3_INT_CONFIG_DM_SHFT 8
+ #define UVH_RTC3_INT_CONFIG_DM_MASK 0x0000000000000700UL
+ #define UVH_RTC3_INT_CONFIG_DESTMODE_SHFT 11
+ #define UVH_RTC3_INT_CONFIG_DESTMODE_MASK 0x0000000000000800UL
+ #define UVH_RTC3_INT_CONFIG_STATUS_SHFT 12
+ #define UVH_RTC3_INT_CONFIG_STATUS_MASK 0x0000000000001000UL
+ #define UVH_RTC3_INT_CONFIG_P_SHFT 13
+ #define UVH_RTC3_INT_CONFIG_P_MASK 0x0000000000002000UL
+ #define UVH_RTC3_INT_CONFIG_T_SHFT 15
+ #define UVH_RTC3_INT_CONFIG_T_MASK 0x0000000000008000UL
+ #define UVH_RTC3_INT_CONFIG_M_SHFT 16
+ #define UVH_RTC3_INT_CONFIG_M_MASK 0x0000000000010000UL
+ #define UVH_RTC3_INT_CONFIG_APIC_ID_SHFT 32
+ #define UVH_RTC3_INT_CONFIG_APIC_ID_MASK 0xffffffff00000000UL
+
+ union uvh_rtc3_int_config_u {
+     unsigned long v;
+     struct uvh_rtc3_int_config_s {
+         unsigned long vector_   :  8; /* RW */
+         unsigned long dm        :  3; /* RW */
+         unsigned long destmode  :  1; /* RW */
+         unsigned long status    :  1; /* RO */
+         unsigned long p         :  1; /* RO */
+         unsigned long rsvd_14   :  1; /*    */
+         unsigned long t         :  1; /* RO */
+         unsigned long m         :  1; /* RW */
+         unsigned long rsvd_17_31: 15; /*    */
+         unsigned long apic_id   : 32; /* RW */
+     } s;
+ };
+
+ /* ========================================================================= */
+ /*                             UVH_RTC_INC_RATIO                             */
+ /* ========================================================================= */
+ #define UVH_RTC_INC_RATIO 0x350000UL
+
+ #define UVH_RTC_INC_RATIO_FRACTION_SHFT 0
+ #define UVH_RTC_INC_RATIO_FRACTION_MASK 0x00000000000fffffUL
+ #define UVH_RTC_INC_RATIO_RATIO_SHFT 20
+ #define UVH_RTC_INC_RATIO_RATIO_MASK 0x0000000000700000UL
+
+ union uvh_rtc_inc_ratio_u {
+     unsigned long v;
+     struct uvh_rtc_inc_ratio_s {
+         unsigned long fraction  : 20; /* RW */
+         unsigned long ratio     :  3; /* RW */
+         unsigned long rsvd_23_63: 41; /*    */
      } s;
  };