[MIPS] SMTC: Instant IPI replay.

SMTC pseudo-interrupts between TCs are deferred and queued if the target
TC is interrupt-inhibited (IXMT). In the first SMTC prototypes, these
queued IPIs were serviced on return to user mode, or on entry into the
kernel idle loop. The INSTANT_REPLAY option dispatches them as part of
local_irq_restore() processing, which adds runtime overhead (hence the
option to turn it off), but ensures that IPIs are handled promptly even
under heavy I/O interrupt load.
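
In outline, the change works like the sketch below. This is illustrative only, not the kernel code: ipi_dq() and dispatch_ipi() are simplified stand-ins for the smtc_ipi_dq() and self_ipi() primitives that appear in the patch itself.

	/*
	 * Minimal sketch of deferred-IPI replay. A sender that finds the
	 * target TC interrupt-inhibited (IXMT set) queues the IPI instead
	 * of raising it; the target later drains its per-CPU queue, either
	 * from the idle loop or, with INSTANT_REPLAY, from every
	 * local_irq_restore() that re-enables interrupts.
	 */
	struct ipi;				/* stand-in for struct smtc_ipi */

	extern struct ipi *ipi_dq(int cpu);	/* dequeue one deferred IPI, or NULL */
	extern void dispatch_ipi(struct ipi *);	/* handle it as a pseudo-interrupt */

	static void replay_deferred_ipis(int cpu)
	{
		struct ipi *p;

		/* Drain everything queued while this TC had IXMT set. */
		while ((p = ipi_dq(cpu)) != NULL)
			dispatch_ipi(p);
	}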

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>

---
 arch/mips/Kconfig           |   14 +++++++
 arch/mips/kernel/smtc.c     |   56 +++++++++++++++------------
 include/asm-mips/irqflags.h |   22 ++++++++++
 3 files changed, 70 insertions(+), 22 deletions(-)

diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
--- a/arch/mips/Kconfig
+++ b/arch/mips/Kconfig
@@ -1568,6 +1568,20 @@
 	depends on MIPS_MT
 	default y
 
+config MIPS_MT_SMTC_INSTANT_REPLAY
+	bool "Low-latency Dispatch of Deferred SMTC IPIs"
+	depends on MIPS_MT_SMTC
+	default y
+	help
+	  SMTC pseudo-interrupts between TCs are deferred and queued
+	  if the target TC is interrupt-inhibited (IXMT). In the first
+	  SMTC prototypes, these queued IPIs were serviced on return
+	  to user mode, or on entry into the kernel idle loop. The
+	  INSTANT_REPLAY option dispatches them as part of local_irq_restore()
+	  processing, which adds runtime overhead (hence the option to turn
+	  it off), but ensures that IPIs are handled promptly even under
+	  heavy I/O interrupt load.
+
 config MIPS_VPE_LOADER_TOM
 	bool "Load VPE program into memory hidden from linux"
 	depends on MIPS_VPE_LOADER
diff --git a/arch/mips/kernel/smtc.c b/arch/mips/kernel/smtc.c
--- a/arch/mips/kernel/smtc.c
+++ b/arch/mips/kernel/smtc.c
@@ -1017,6 +1017,33 @@
  * SMTC-specific hacks invoked from elsewhere in the kernel.
  */
 
+void smtc_ipi_replay(void)
+{
+	/*
+	 * To the extent that we've ever turned interrupts off,
+	 * we may have accumulated deferred IPIs. This is subtle.
+	 * If we use the smtc_ipi_qdepth() macro, we'll get an
+	 * exact number - but we'll also disable interrupts
+	 * and create a window of failure where a new IPI gets
+	 * queued after we test the depth but before we re-enable
+	 * interrupts. So long as IXMT never gets set, however,
+	 * we should be OK: If we pick up something and dispatch
+	 * it here, that's great. If we see nothing, but concurrent
+	 * with this operation, another TC sends us an IPI, IXMT
+	 * is clear, and we'll handle it as a real pseudo-interrupt
+	 * and not a pseudo-pseudo interrupt.
+	 */
+	if (IPIQ[smp_processor_id()].depth > 0) {
+		struct smtc_ipi *pipi;
+		extern void self_ipi(struct smtc_ipi *);
+
+		while ((pipi = smtc_ipi_dq(&IPIQ[smp_processor_id()]))) {
+			self_ipi(pipi);
+			smtc_cpu_stats[smp_processor_id()].selfipis++;
+		}
+	}
+}
+
 void smtc_idle_loop_hook(void)
 {
 #ifdef SMTC_IDLE_HOOK_DEBUG
@@ -1140,29 +1167,14 @@
 	if (pdb_msg != &id_ho_db_msg[0])
 		printk("CPU%d: %s", smp_processor_id(), id_ho_db_msg);
 #endif /* SMTC_IDLE_HOOK_DEBUG */
-	/*
-	 * To the extent that we've ever turned interrupts off,
-	 * we may have accumulated deferred IPIs. This is subtle.
-	 * If we use the smtc_ipi_qdepth() macro, we'll get an
-	 * exact number - but we'll also disable interrupts
-	 * and create a window of failure where a new IPI gets
-	 * queued after we test the depth but before we re-enable
-	 * interrupts. So long as IXMT never gets set, however,
-	 * we should be OK: If we pick up something and dispatch
-	 * it here, that's great. If we see nothing, but concurrent
-	 * with this operation, another TC sends us an IPI, IXMT
-	 * is clear, and we'll handle it as a real pseudo-interrupt
-	 * and not a pseudo-pseudo interrupt.
-	 */
-	if (IPIQ[smp_processor_id()].depth > 0) {
-		struct smtc_ipi *pipi;
-		extern void self_ipi(struct smtc_ipi *);
 
-		if ((pipi = smtc_ipi_dq(&IPIQ[smp_processor_id()])) != NULL) {
-			self_ipi(pipi);
-			smtc_cpu_stats[smp_processor_id()].selfipis++;
-		}
-	}
+	/*
+	 * Replay any accumulated deferred IPIs. If "Instant Replay"
+	 * is in use, there should never be any.
+	 */
+#ifndef CONFIG_MIPS_MT_SMTC_INSTANT_REPLAY
+	smtc_ipi_replay();
+#endif /* CONFIG_MIPS_MT_SMTC_INSTANT_REPLAY */
 }
 
 void smtc_soft_dump(void)
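
One behavioral note on the hunks above: in moving the dispatch code into smtc_ipi_replay(), the old single-shot test, if ((pipi = smtc_ipi_dq(...)) != NULL), becomes a while loop, so a single call now drains the entire per-CPU queue of deferred IPIs instead of handling at most one per invocation.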
diff --git a/include/asm-mips/irqflags.h b/include/asm-mips/irqflags.h
--- a/include/asm-mips/irqflags.h
+++ b/include/asm-mips/irqflags.h
@@ -15,6 +15,27 @@
 
 #include <asm/hazards.h>
 
+/*
+ * CONFIG_MIPS_MT_SMTC_INSTANT_REPLAY does prompt replay of deferred IPIs,
+ * at the cost of branch and call overhead on each local_irq_restore()
+ */
+
+#ifdef CONFIG_MIPS_MT_SMTC_INSTANT_REPLAY
+
+extern void smtc_ipi_replay(void);
+
+#define irq_restore_epilog(flags)				\
+do {								\
+	if (!(flags & 0x0400))					\
+		smtc_ipi_replay();				\
+} while (0)
+
+#else
+
+#define irq_restore_epilog(ignore) do { } while (0)
+
+#endif /* CONFIG_MIPS_MT_SMTC_INSTANT_REPLAY */
+
 __asm__ (
 "	.macro	raw_local_irq_enable				\n"
 "	.set	push						\n"
@@ -214,6 +235,7 @@
 	: "=r" (__tmp1)						\
 	: "0" (flags)						\
 	: "memory");						\
+	irq_restore_epilog(flags);				\
 } while(0)
 
 static inline int raw_irqs_disabled_flags(unsigned long flags)
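
A note on the magic constant in irq_restore_epilog(): on SMTC the saved flags word holds TCStatus contents, and 0x0400 is the IXMT interrupt-inhibit bit (bit 10), so the epilog replays only when the restore actually leaves interrupts enabled; replaying while IXMT is still set would merely re-queue the IPIs. In plain C the test reads roughly as below (the helper name and the spelled-out TCSTATUS_IXMT definition are for illustration):

	/* Illustrative C rendering of the irq_restore_epilog() logic. */
	#define TCSTATUS_IXMT	0x0400	/* TCStatus interrupt-inhibit bit */

	extern void smtc_ipi_replay(void);

	static inline void restore_epilog_sketch(unsigned long flags)
	{
		/* Replay only if the restored flags re-enable interrupts. */
		if (!(flags & TCSTATUS_IXMT))
			smtc_ipi_replay();
	}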