Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

powerpc/paravirt: Improve vcpu_is_preempted

The PowerVM hypervisor dispatches on a whole-core basis. In a shared LPAR, a
CPU from a core that is CEDED or preempted may experience higher latency. In
such a scenario, it's preferable to choose a different CPU to run on.

If one of the CPUs in the core is active, i.e. neither CEDED nor
preempted, then consider this CPU as not preempted.

Also, if any of the CPUs in the core has yielded but the OS has not
requested CEDE or CONFER, then consider this CPU to be preempted.

Correct detection of preempted CPUs is important for detecting idle
CPUs/cores in the task scheduler.

Tested-by: Aboorva Devarajan <aboorvad@linux.vnet.ibm.com>
Reviewed-by: Shrikanth Hegde <sshegde@linux.vnet.ibm.com>
Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20231019091452.95260-1-srikar@linux.vnet.ibm.com

Authored by Srikar Dronamraju, committed by Michael Ellerman
efce8422 e08c43e6

+44 -3
arch/powerpc/include/asm/paravirt.h
···
 {
 	plpar_hcall_norets_notrace(H_CONFER, -1, 0);
 }
+
+static inline bool is_vcpu_idle(int vcpu)
+{
+	return lppaca_of(vcpu).idle;
+}
 #else
 static inline bool is_shared_processor(void)
 {
···
 	___bad_prod_cpu(); /* This would be a bug */
 }
 
+static inline bool is_vcpu_idle(int vcpu)
+{
+	return false;
+}
 #endif
 
 #define vcpu_is_preempted vcpu_is_preempted
···
 	if (!is_shared_processor())
 		return false;
 
+	/*
+	 * If the hypervisor has dispatched the target CPU on a physical
+	 * processor, then the target CPU is definitely not preempted.
+	 */
+	if (!(yield_count_of(cpu) & 1))
+		return false;
+
+	/*
+	 * If the target CPU has yielded to Hypervisor but OS has not
+	 * requested idle then the target CPU is definitely preempted.
+	 */
+	if (!is_vcpu_idle(cpu))
+		return true;
+
 #ifdef CONFIG_PPC_SPLPAR
 	if (!is_kvm_guest()) {
-		int first_cpu;
+		int first_cpu, i;
 
 		/*
 		 * The result of vcpu_is_preempted() is used in a
···
 		 */
 		if (cpu_first_thread_sibling(cpu) == first_cpu)
 			return false;
+
+		/*
+		 * If any of the threads of the target CPU's core are not
+		 * preempted or ceded, then consider target CPU to be
+		 * non-preempted.
+		 */
+		first_cpu = cpu_first_thread_sibling(cpu);
+		for (i = first_cpu; i < first_cpu + threads_per_core; i++) {
+			if (i == cpu)
+				continue;
+			if (!(yield_count_of(i) & 1))
+				return false;
+			if (!is_vcpu_idle(i))
+				return true;
+		}
 	}
 #endif
 
-	if (yield_count_of(cpu) & 1)
-		return true;
+	/*
+	 * None of the threads in target CPU's core are running but none of
+	 * them were preempted too. Hence assume the target CPU to be
+	 * non-preempted.
+	 */
 	return false;
 }