
[PATCH] FRV: Use virtual interrupt disablement

Make the FRV arch use virtual interrupt disablement because accesses to the
processor status register (PSR) are relatively slow and because we will
soon have the need to deal with multiple interrupt controls at the same
time (separate h/w and inter-core interrupts).

The way this is done is to dedicate one of the four integer condition code
registers (ICC2) to maintaining a virtual interrupt disablement state
whilst inside the kernel. This uses the ICC2.Z flag (Zero) to indicate
whether the interrupts are virtually disabled and the ICC2.C flag (Carry)
to indicate whether the interrupts are physically disabled.

ICC2.Z is set to indicate interrupts are virtually disabled. ICC2.C is set
to indicate interrupts are physically enabled. Under normal running
conditions Z==0 and C==1.

Disabling interrupts with local_irq_disable() doesn't then actually
physically disable interrupts - it merely sets ICC2.Z to 1. Should an
interrupt then happen, the exception prologue will note ICC2.Z is set and
branch out of line using one instruction (an unlikely BEQ). Here it will
physically disable interrupts and clear ICC2.C.

When it comes time to enable interrupts (local_irq_enable()), this simply
clears the ICC2.Z flag and invokes a trap #2 if both Z and C flags are
clear (the HI integer condition). This can be done with the TIHI
conditional trap instruction.

The trap then physically reenables interrupts and sets ICC2.C again. Upon
return, the pending interrupt will be taken, since interrupts are by then
enabled. Note that whilst the trap is being processed, the whole exception
system is disabled, and so an interrupt can't happen till it returns.

If no interrupt was pending, ICC2.C would still be set, the HI condition
would not be fulfilled, and no trap would happen.

Saving interrupts (local_irq_save) is simply a matter of pulling the ICC2.Z
flag out of the CCR register, shifting it down and masking it off. This
gives a result of 0 if interrupts were enabled and 1 if they weren't.

Restoring interrupts (local_irq_restore) is then a matter of taking the
saved value mentioned previously and XOR'ing it against 1. If it was one,
the result will be zero, and if it was zero the result will be non-zero.
This result is then used to affect the ICC2.Z flag directly (it is a
condition code flag after all). An XOR instruction does not affect the
Carry flag, and so that bit of state is unchanged. The two flags can then
be sampled to see if they're both zero using the trap (TIHI) as for the
unconditional reenablement (local_irq_enable).

This patch also:

(1) Modifies the debugging stub (break.S) to handle single-stepping crossing
into the trap #2 handler and into virtually disabled interrupts.

(2) Removes superseded fixup pointers from the second instructions in the trap
    tables (there's now a separate fixup table for this).

(3) Declares the trap #3 vector for use in .org directives in the trap table.

(4) Moves irq_enter() and irq_exit() in do_IRQ() to avoid problems with
virtual interrupt handling, and removes the duplicate code that has now
been folded into irq_exit() (softirq and preemption handling).

(5) Tells the compiler in the arch Makefile that ICC2 is now reserved.

(6) Documents the in-kernel ABI, including the virtual interrupts.

(7) Renames the old PSR-based IRQ management functions to double-underscore
    prefixed versions (__local_irq_disable() and friends).

Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

authored by David Howells, committed by Linus Torvalds
28baebae 68f624fc

+489 -61

+234
Documentation/fujitsu/frv/kernel-ABI.txt
···
=================================
INTERNAL KERNEL ABI FOR FR-V ARCH
=================================

The internal FRV kernel ABI is not quite the same as the userspace ABI. A number of the registers
are used for special purposes, and the ABI is not consistent between modules vs core, and MMU vs
no-MMU.

This partly stems from the fact that FRV CPUs do not have a separate supervisor stack pointer, and
most of them do not have any scratch registers, thus requiring at least one general purpose
register to be clobbered in such an event. Also, within the kernel core, it is possible to simply
jump or call directly between functions using a relative offset. This cannot be extended to
modules, as the displacement is likely to be too far. Thus in modules the address of a function to
call must be calculated in a register and then used, requiring two extra instructions.

This document has the following sections:

 (*) System call register ABI
 (*) CPU operating modes
 (*) Internal kernel-mode register ABI
 (*) Internal debug-mode register ABI
 (*) Virtual interrupt handling


========================
SYSTEM CALL REGISTER ABI
========================

When a system call is made, the following registers are effective:

	REGISTERS	CALL			RETURN
	===============	=======================	=======================
	GR7		System call number	Preserved
	GR8		Syscall arg #1		Return value
	GR9-GR13	Syscall arg #2-6	Preserved


===================
CPU OPERATING MODES
===================

The FR-V CPU has three basic operating modes. In order of increasing capability:

 (1) User mode.

     Basic userspace running mode.

 (2) Kernel mode.

     Normal kernel mode. There are many additional control registers available that may be
     accessed in this mode, in addition to all the stuff available to user mode. This has two
     submodes:

     (a) Exceptions enabled (PSR.T == 1).

	 Exceptions will invoke the appropriate normal kernel mode handler. On entry to the
	 handler, the PSR.T bit will be cleared.

     (b) Exceptions disabled (PSR.T == 0).

	 No exceptions or interrupts may happen. Any mandatory exceptions will cause the CPU to
	 halt unless the CPU is told to jump into debug mode instead.

 (3) Debug mode.

     No exceptions may happen in this mode. Memory protection and management exceptions will be
     flagged for later consideration, but the exception handler won't be invoked. Debugging traps
     such as hardware breakpoints and watchpoints will be ignored. This mode is entered only by
     debugging events obtained from the other two modes.

     All kernel mode registers may be accessed, plus a few extra debugging specific registers.


=================================
INTERNAL KERNEL-MODE REGISTER ABI
=================================

There are a number of permanent register assignments that are set up by entry.S in the exception
prologue. Note that there is a complete set of exception prologues for each of user->kernel
transition and kernel->kernel transition. There are also user->debug and kernel->debug mode
transition prologues.

	REGISTER	FLAVOUR	USE
	===============	=======	====================================================
	GR1			Supervisor stack pointer
	GR15			Current thread info pointer
	GR16			GP-Rel base register for small data
	GR28			Current exception frame pointer (__frame)
	GR29			Current task pointer (current)
	GR30			Destroyed by kernel mode entry
	GR31		NOMMU	Destroyed by debug mode entry
	GR31		MMU	Destroyed by TLB miss kernel mode entry
	CCR.ICC2		Virtual interrupt disablement tracking
	CCCR.CC3		Cleared by exception prologue (atomic op emulation)
	SCR0		MMU	See mmu-layout.txt.
	SCR1		MMU	See mmu-layout.txt.
	SCR2		MMU	Save for EAR0 (destroyed by icache insns in debug mode)
	SCR3		MMU	Save for GR31 during debug exceptions
	DAMR/IAMR	NOMMU	Fixed memory protection layout.
	DAMR/IAMR	MMU	See mmu-layout.txt.


Certain registers are also used or modified across function calls:

	REGISTER	CALL				RETURN
	===============	===============================	===============================
	GR0		Fixed Zero			-
	GR2		Function call frame pointer
	GR3		Special				Preserved
	GR3-GR7		-				Clobbered
	GR8		Function call arg #1		Return value (or clobbered)
	GR9		Function call arg #2		Return value MSW (or clobbered)
	GR10-GR13	Function call arg #3-#6		Clobbered
	GR14		-				Clobbered
	GR15-GR16	Special				Preserved
	GR17-GR27	-				Preserved
	GR28-GR31	Special				Only accessed explicitly
	LR		Return address after CALL	Clobbered
	CCR/CCCR	-				Mostly Clobbered


================================
INTERNAL DEBUG-MODE REGISTER ABI
================================

This is the same as the kernel-mode register ABI for function calls. The difference is that in
debug-mode there's a different stack and a different exception frame. Almost all the global
registers from kernel-mode (including the stack pointer) may be changed.

	REGISTER	FLAVOUR	USE
	===============	=======	====================================================
	GR1			Debug stack pointer
	GR16			GP-Rel base register for small data
	GR31			Current debug exception frame pointer (__debug_frame)
	SCR3		MMU	Saved value of GR31


Note that debug mode is able to interfere with the kernel's emulated atomic ops, so it must be
exceedingly careful not to do any that would interact with the main kernel in this regard. Hence
the debug mode code (gdbstub) is almost completely self-contained. The only external code used is
the sprintf family of functions.

Furthermore, break.S is so complicated because single-step mode does not switch off on entry to an
exception. That means unless manually disabled, single-stepping will blithely go on stepping into
things like interrupts. See gdbstub.txt for more information.


==========================
VIRTUAL INTERRUPT HANDLING
==========================

Because accesses to the PSR are so slow, and to disable interrupts we have to access it twice (once
to read and once to write), we don't actually disable interrupts at all if we don't have to. What
we do instead is use the ICC2 condition code flags to note virtual disablement, such that if we
then do take an interrupt, we note the flag, really disable interrupts, set another flag and resume
execution at the point the interrupt happened. Setting condition flags as a side effect of an
arithmetic or logical instruction is really fast. This use of the ICC2 only occurs within the
kernel - it does not affect userspace.

The flags we use are:

 (*) CCR.ICC2.Z [Zero flag]

     Set to virtually disable interrupts, clear when interrupts are virtually enabled. Can be
     modified by logical instructions without affecting the Carry flag.

 (*) CCR.ICC2.C [Carry flag]

     Clear to indicate hardware interrupts are really disabled, set otherwise.


What happens is this:

 (1) Normal kernel-mode operation.

     ICC2.Z is 0, ICC2.C is 1.

 (2) An interrupt occurs. The exception prologue examines ICC2.Z and determines that nothing needs
     doing. This is done simply with an unlikely BEQ instruction.

 (3) The interrupts are disabled (local_irq_disable).

     ICC2.Z is set to 1.

 (4) If interrupts were then re-enabled (local_irq_enable):

     ICC2.Z would be set to 0.

     A TIHI #2 instruction (trap #2 if condition HI - Z==0 && C==0) would be used to trap if
     interrupts were now virtually enabled, but physically disabled - which they're not, so the
     trap isn't taken. The kernel would then be back to state (1).

 (5) An interrupt occurs. The exception prologue examines ICC2.Z and determines that the interrupt
     shouldn't actually have happened. It jumps aside, and there disables interrupts by setting
     PSR.PIL to 14 and then it clears ICC2.C.

 (6) If interrupts were then saved and disabled again (local_irq_save):

     ICC2.Z would be shifted into the save variable and masked off (giving a 1).

     ICC2.Z would then be set to 1 (thus unchanged), and ICC2.C would be unaffected (ie: 0).

 (7) If interrupts were then restored from state (6) (local_irq_restore):

     ICC2.Z would be set to indicate the result of XOR'ing the saved value (ie: 1) with 1, which
     gives a result of 0 - thus leaving ICC2.Z set.

     ICC2.C would remain unaffected (ie: 0).

     A TIHI #2 instruction would be used to again assay the current state, but this would do
     nothing as Z==1.

 (8) If interrupts were then enabled (local_irq_enable):

     ICC2.Z would be cleared. ICC2.C would be left unaffected. Both flags would now be 0.

     A TIHI #2 instruction again issued to assay the current state would then trap as both Z==0
     [interrupts virtually enabled] and C==0 [interrupts really disabled] would then be true.

 (9) The trap #2 handler would simply enable hardware interrupts (set PSR.PIL to 0), set ICC2.C to
     1 and return.

(10) Immediately upon returning, the pending interrupt would be taken.

(11) The interrupt handler would take the path of actually processing the interrupt (ICC2.Z is
     clear, BEQ fails as per step (2)).

(12) The interrupt handler would then set ICC2.C to 1 since hardware interrupts are definitely
     enabled - or else the kernel wouldn't be here.

(13) On return from the interrupt handler, things would be back to state (1).

This trap (#2) is only available in kernel mode. In user mode it will result in SIGILL.
+1 -1
arch/frv/Makefile
···
  # - reserve CC3 for use with atomic ops
  # - all the extra registers are dealt with only at context switch time
  CFLAGS += -mno-fdpic -mgpr-32 -msoft-float -mno-media
- CFLAGS += -ffixed-fcc3 -ffixed-cc3 -ffixed-gr15
+ CFLAGS += -ffixed-fcc3 -ffixed-cc3 -ffixed-gr15 -ffixed-icc2
  AFLAGS += -mno-fdpic
  ASFLAGS += -mno-fdpic
+73 -4
arch/frv/kernel/break.S
···
	movsg		bpcsr,gr2
	sethi.p		%hi(__entry_kernel_external_interrupt),gr3
	setlo		%lo(__entry_kernel_external_interrupt),gr3
-	subcc		gr2,gr3,gr0,icc0
+	subcc.p		gr2,gr3,gr0,icc0
+	sethi		%hi(__entry_uspace_external_interrupt),gr3
+	setlo.p		%lo(__entry_uspace_external_interrupt),gr3
	beq		icc0,#2,__break_step_kernel_external_interrupt
-	sethi.p		%hi(__entry_uspace_external_interrupt),gr3
-	setlo		%lo(__entry_uspace_external_interrupt),gr3
-	subcc		gr2,gr3,gr0,icc0
+	subcc.p		gr2,gr3,gr0,icc0
+	sethi		%hi(__entry_kernel_external_interrupt_virtually_disabled),gr3
+	setlo.p		%lo(__entry_kernel_external_interrupt_virtually_disabled),gr3
	beq		icc0,#2,__break_step_uspace_external_interrupt
+	subcc.p		gr2,gr3,gr0,icc0
+	sethi		%hi(__entry_kernel_external_interrupt_virtual_reenable),gr3
+	setlo.p		%lo(__entry_kernel_external_interrupt_virtual_reenable),gr3
+	beq		icc0,#2,__break_step_kernel_external_interrupt_virtually_disabled
+	subcc		gr2,gr3,gr0,icc0
+	beq		icc0,#2,__break_step_kernel_external_interrupt_virtual_reenable

	LEDS		0x2007,gr2
···
  # step through an external interrupt from kernel mode
	.globl		__break_step_kernel_external_interrupt
  __break_step_kernel_external_interrupt:
+	# deal with virtual interrupt disablement
+	beq		icc2,#0,__break_step_kernel_external_interrupt_virtually_disabled
+
	sethi.p		%hi(__entry_kernel_external_interrupt_reentry),gr3
	setlo		%lo(__entry_kernel_external_interrupt_reentry),gr3
···
	movsg		scr3,gr31
  #endif
	rett		#1
+
+ # we single-stepped into an interrupt handler whilst interrupts were merely virtually disabled
+ # need to really disable interrupts, set flag, fix up and return
+ __break_step_kernel_external_interrupt_virtually_disabled:
+	movsg		psr,gr2
+	andi		gr2,#~PSR_PIL,gr2
+	ori		gr2,#PSR_PIL_14,gr2	/* debugging interrupts only */
+	movgs		gr2,psr
+
+	ldi		@(gr31,#REG_CCR),gr3
+	movgs		gr3,ccr
+	subcc.p		gr0,gr0,gr0,icc2	/* leave Z set, clear C */
+
+	# exceptions must've been enabled and we must've been in supervisor mode
+	setlos		BPSR_BET|BPSR_BS,gr3
+	movgs		gr3,bpsr
+
+	# return to where the interrupt happened
+	movsg		pcsr,gr2
+	movgs		gr2,bpcsr
+
+	lddi.p		@(gr31,#REG_GR(2)),gr2
+
+	xor		gr31,gr31,gr31
+	movgs		gr0,brr
+ #ifdef CONFIG_MMU
+	movsg		scr3,gr31
+ #endif
+	rett		#1
+
+ # we stepped through into the virtual interrupt reenablement trap
+ #
+ # we also want to single step anyway, but after fixing up so that we get an event on the
+ # instruction after the broken-into exception returns
+	.globl		__break_step_kernel_external_interrupt_virtual_reenable
+ __break_step_kernel_external_interrupt_virtual_reenable:
+	movsg		psr,gr2
+	andi		gr2,#~PSR_PIL,gr2
+	movgs		gr2,psr
+
+	ldi		@(gr31,#REG_CCR),gr3
+	movgs		gr3,ccr
+	subicc		gr0,#1,gr0,icc2		/* clear Z, set C */
+
+	# save the adjusted ICC2
+	movsg		ccr,gr3
+	sti		gr3,@(gr31,#REG_CCR)
+
+	# exceptions must've been enabled and we must've been in supervisor mode
+	setlos		BPSR_BET|BPSR_BS,gr3
+	movgs		gr3,bpsr
+
+	# return to where the trap happened
+	movsg		pcsr,gr2
+	movgs		gr2,bpcsr
+
+	# and then process the single step
+	bra		__break_continue

  # step through an internal exception from uspace mode
	.globl		__break_step_uspace_softprog_interrupt
+34 -5
arch/frv/kernel/entry-table.S
···
	.long		__break_step_uspace_external_interrupt
	.section	.trap.kernel
	.org		\tbr_tt
+	# deal with virtual interrupt disablement
+	beq		icc2,#0,__entry_kernel_external_interrupt_virtually_disabled
	bra		__entry_kernel_external_interrupt
	.section	.trap.fixup.kernel
	.org		\tbr_tt >> 2
···
	.org		TBR_TT_TRAP0
	.rept		127
	bra		__entry_uspace_softprog_interrupt
-	bra		__break_step_uspace_softprog_interrupt
-	.long		0,0
+	.long		0,0,0
	.endr
	.org		TBR_TT_BREAK
	bra		__entry_break
	.long		0,0,0

+	.section	.trap.fixup.user
+	.org		TBR_TT_TRAP0 >> 2
+	.rept		127
+	.long		__break_step_uspace_softprog_interrupt
+	.endr
+	.org		TBR_TT_BREAK >> 2
+	.long		0
+
  # miscellaneous kernel mode entry points
	.section	.trap.kernel
	.org		TBR_TT_TRAP0
-	.rept		127
	bra		__entry_kernel_softprog_interrupt
-	bra		__break_step_kernel_softprog_interrupt
-	.long		0,0
+	.org		TBR_TT_TRAP1
+	bra		__entry_kernel_softprog_interrupt
+
+	# trap #2 in kernel - reenable interrupts
+	.org		TBR_TT_TRAP2
+	bra		__entry_kernel_external_interrupt_virtual_reenable
+
+	# miscellaneous kernel traps
+	.org		TBR_TT_TRAP3
+	.rept		124
+	bra		__entry_kernel_softprog_interrupt
+	.long		0,0,0
	.endr
	.org		TBR_TT_BREAK
	bra		__entry_break
	.long		0,0,0
+
+	.section	.trap.fixup.kernel
+	.org		TBR_TT_TRAP0 >> 2
+	.long		__break_step_kernel_softprog_interrupt
+	.long		__break_step_kernel_softprog_interrupt
+	.long		__break_step_kernel_external_interrupt_virtual_reenable
+	.rept		124
+	.long		__break_step_kernel_softprog_interrupt
+	.endr
+	.org		TBR_TT_BREAK >> 2
+	.long		0

  # miscellaneous debug mode entry points
	.section	.trap.break
+58 -7
arch/frv/kernel/entry.S
···
	movsg		gner0,gr4
	movsg		gner1,gr5
-	stdi		gr4,@(gr28,#REG_GNER0)
+	stdi.p		gr4,@(gr28,#REG_GNER0)
+
+	# interrupts start off fully disabled in the interrupt handler
+	subcc		gr0,gr0,gr0,icc2	/* set Z and clear C */

	# set up kernel global registers
	sethi.p		%hi(__kernel_current_task),gr5
···
	.type		__entry_kernel_external_interrupt,@function
  __entry_kernel_external_interrupt:
	LEDS		0x6210
-
-	sub		sp,gr15,gr31
-	LEDS32
+ //	sub		sp,gr15,gr31
+ //	LEDS32

	# set up the stack pointer
	or.p		sp,gr0,gr30
···
	stdi		gr24,@(gr28,#REG_GR(24))
	stdi		gr26,@(gr28,#REG_GR(26))
	sti		gr29,@(gr28,#REG_GR(29))
-	stdi		gr30,@(gr28,#REG_GR(30))
+	stdi.p		gr30,@(gr28,#REG_GR(30))
+
+	# note virtual interrupts will be fully enabled upon return
+	subicc		gr0,#1,gr0,icc2		/* clear Z, set C */

	movsg		tbr ,gr20
	movsg		psr ,gr22
···
	movsg		gner0,gr4
	movsg		gner1,gr5
-	stdi		gr4,@(gr28,#REG_GNER0)
+	stdi.p		gr4,@(gr28,#REG_GNER0)
+
+	# interrupts start off fully disabled in the interrupt handler
+	subcc		gr0,gr0,gr0,icc2	/* set Z and clear C */

	# set the return address
	sethi.p		%hi(__entry_return_from_kernel_interrupt),gr4
···
	.size		__entry_kernel_external_interrupt,.-__entry_kernel_external_interrupt

+ ###############################################################################
+ #
+ # deal with interrupts that were actually virtually disabled
+ # - we need to really disable them, flag the fact and return immediately
+ # - if you change this, you must alter break.S also
+ #
+ ###############################################################################
+	.balign		L1_CACHE_BYTES
+	.globl		__entry_kernel_external_interrupt_virtually_disabled
+	.type		__entry_kernel_external_interrupt_virtually_disabled,@function
+ __entry_kernel_external_interrupt_virtually_disabled:
+	movsg		psr,gr30
+	andi		gr30,#~PSR_PIL,gr30
+	ori		gr30,#PSR_PIL_14,gr30	; debugging interrupts only
+	movgs		gr30,psr
+	subcc		gr0,gr0,gr0,icc2	; leave Z set, clear C
+	rett		#0
+
+	.size		__entry_kernel_external_interrupt_virtually_disabled,.-__entry_kernel_external_interrupt_virtually_disabled
+
+ ###############################################################################
+ #
+ # deal with re-enablement of interrupts that were pending when virtually re-enabled
+ # - set ICC2.C, re-enable the real interrupts and return
+ # - we can clear ICC2.Z because we shouldn't be here if it's not 0 [due to TIHI]
+ # - if you change this, you must alter break.S also
+ #
+ ###############################################################################
+	.balign		L1_CACHE_BYTES
+	.globl		__entry_kernel_external_interrupt_virtual_reenable
+	.type		__entry_kernel_external_interrupt_virtual_reenable,@function
+ __entry_kernel_external_interrupt_virtual_reenable:
+	movsg		psr,gr30
+	andi		gr30,#~PSR_PIL,gr30	; re-enable interrupts
+	movgs		gr30,psr
+	subicc		gr0,#1,gr0,icc2		; clear Z, set C
+	rett		#0
+
+	.size		__entry_kernel_external_interrupt_virtual_reenable,.-__entry_kernel_external_interrupt_virtual_reenable

  ###############################################################################
  #
···
	sethi.p		%hi(__entry_return_from_user_exception),gr23
	setlo		%lo(__entry_return_from_user_exception),gr23
+
	bra		__entry_common

	.size		__entry_uspace_softprog_interrupt,.-__entry_uspace_softprog_interrupt
···
	movsg		gner0,gr4
	movsg		gner1,gr5
-	stdi		gr4,@(gr28,#REG_GNER0)
+	stdi.p		gr4,@(gr28,#REG_GNER0)
+
+	# set up virtual interrupt disablement
+	subicc		gr0,#1,gr0,icc2		/* clear Z flag, set C flag */

	# set up kernel global registers
	sethi.p		%hi(__kernel_current_task),gr5
+3
arch/frv/kernel/head.S
···
	movgs		gr0,ccr
	movgs		gr0,cccr

+	# initialise the virtual interrupt handling
+	subcc		gr0,gr0,gr0,icc2	/* set Z, clear C */
+
  #ifdef CONFIG_MMU
	movgs		gr3,scr2
	movgs		gr3,scr3
+3 -38
arch/frv/kernel/irq.c
···
	struct irq_source *source;
	int level, cpu;

+	irq_enter();
+
	level = (__frame->tbr >> 4) & 0xf;
	cpu = smp_processor_id();
-
- #if 0
-	{
-		static u32 irqcount;
-		*(volatile u32 *) 0xe1200004 = ~((irqcount++ << 8) | level);
-		*(volatile u16 *) 0xffc00100 = (u16) ~0x9999;
-		mb();
-	}
- #endif

	if ((unsigned long) __frame - (unsigned long) (current + 1) < 512)
		BUG();
···
	kstat_this_cpu.irqs[level]++;

-	irq_enter();
-
	for (source = frv_irq_levels[level].sources; source; source = source->next)
		source->doirq(source);

-	irq_exit();
-
	__clr_MASK(level);

-	/* only process softirqs if we didn't interrupt another interrupt handler */
-	if ((__frame->psr & PSR_PIL) == PSR_PIL_0)
-		if (local_softirq_pending())
-			do_softirq();
-
- #ifdef CONFIG_PREEMPT
-	local_irq_disable();
-	while (--current->preempt_count == 0) {
-		if (!(__frame->psr & PSR_S) ||
-		    current->need_resched == 0 ||
-		    in_interrupt())
-			break;
-		current->preempt_count++;
-		local_irq_enable();
-		preempt_schedule();
-		local_irq_disable();
-	}
- #endif
-
- #if 0
-	{
-		*(volatile u16 *) 0xffc00100 = (u16) ~0x6666;
-		mb();
-	}
- #endif
+	irq_exit();

  } /* end do_IRQ() */
+1
include/asm-frv/spr-regs.h
···
  #define TBR_TT_TRAP0		(0x80 << 4)
  #define TBR_TT_TRAP1		(0x81 << 4)
  #define TBR_TT_TRAP2		(0x82 << 4)
+ #define TBR_TT_TRAP3		(0x83 << 4)
  #define TBR_TT_TRAP126	(0xfe << 4)
  #define TBR_TT_BREAK		(0xff << 4)
+82 -6
include/asm-frv/system.h
···
  /*
   * interrupt flag manipulation
+  * - use virtual interrupt management since touching the PSR is slow
+  *   - ICC2.Z: T if interrupts virtually disabled
+  *   - ICC2.C: F if interrupts really disabled
+  * - if Z==1 upon interrupt:
+  *   - C is set to 0
+  *   - interrupts are really disabled
+  *   - entry.S returns immediately
+  * - uses TIHI (TRAP if Z==0 && C==0) #2 to really reenable interrupts
+  *   - if taken, the trap:
+  *     - sets ICC2.C
+  *     - enables interrupts
   */
- #define local_irq_disable()			\
+ #define local_irq_disable()					\
+ do {								\
+	/* set Z flag, but don't change the C flag */		\
+	asm volatile("	andcc	gr0,gr0,gr0,icc2	\n"	\
+		     :						\
+		     :						\
+		     : "memory", "icc2"				\
+		     );						\
+ } while(0)
+
+ #define local_irq_enable()					\
+ do {								\
+	/* clear Z flag and then test the C flag */		\
+	asm volatile("	oricc	gr0,#1,gr0,icc2		\n"	\
+		     "	tihi	icc2,gr0,#2		\n"	\
+		     :						\
+		     :						\
+		     : "memory", "icc2"				\
+		     );						\
+ } while(0)
+
+ #define local_save_flags(flags)				\
+ do {								\
+	typecheck(unsigned long, flags);			\
+	asm volatile("movsg ccr,%0"				\
+		     : "=r"(flags)				\
+		     :						\
+		     : "memory");				\
+								\
+	/* shift ICC2.Z to bit 0 */				\
+	flags >>= 26;						\
+								\
+	/* make flags 1 if interrupts disabled, 0 otherwise */	\
+	flags &= 1UL;						\
+ } while(0)
+
+ #define irqs_disabled() \
+	({unsigned long flags; local_save_flags(flags); flags; })
+
+ #define local_irq_save(flags)			\
+ do {						\
+	typecheck(unsigned long, flags);	\
+	local_save_flags(flags);		\
+	local_irq_disable();			\
+ } while(0)
+
+ #define local_irq_restore(flags)					\
+ do {									\
+	typecheck(unsigned long, flags);				\
+									\
+	/* load the Z flag by turning 1 if disabled into 0 if disabled	\
+	 * and thus setting the Z flag but not the C flag */		\
+	asm volatile("	xoricc	%0,#1,gr0,icc2		\n"		\
+		     /* then test Z=0 and C=0 */			\
+		     "	tihi	icc2,gr0,#2		\n"		\
+		     :							\
+		     : "r"(flags)					\
+		     : "memory", "icc2"					\
+		     );							\
+									\
+ } while(0)
+
+ /*
+  * real interrupt flag manipulation
+  */
+ #define __local_irq_disable()			\
  do {						\
	unsigned long psr;			\
	asm volatile("	movsg	psr,%0		\n"	\
···
		     : "memory");		\
  } while(0)

- #define local_irq_enable()			\
+ #define __local_irq_enable()			\
  do {						\
	unsigned long psr;			\
	asm volatile("	movsg	psr,%0		\n"	\
···
		     : "memory");		\
  } while(0)

- #define local_save_flags(flags)		\
+ #define __local_save_flags(flags)		\
  do {						\
	typecheck(unsigned long, flags);	\
	asm("movsg psr,%0"			\
···
		     : "memory");		\
  } while(0)

- #define local_irq_save(flags)			\
+ #define __local_irq_save(flags)		\
  do {						\
	unsigned long npsr;			\
	typecheck(unsigned long, flags);	\
···
		     : "memory");		\
  } while(0)

- #define local_irq_restore(flags)		\
+ #define __local_irq_restore(flags)		\
  do {						\
	typecheck(unsigned long, flags);	\
	asm volatile("	movgs	%0,psr		\n"	\
···
		     : "memory");		\
  } while(0)

- #define irqs_disabled() \
+ #define __irqs_disabled() \
	((__get_PSR() & PSR_PIL) >= PSR_PIL_14)

  /*