sched: fix MC/HT scheduler optimization, without breaking the FUZZ logic.

First fix the check
if (*imbalance + SCHED_LOAD_SCALE_FUZZ < busiest_load_per_task)
with this
if (*imbalance < busiest_load_per_task)

The current check is always false for nice 0 tasks, since
SCHED_LOAD_SCALE_FUZZ is the same as busiest_load_per_task for nice 0
tasks.
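
To see why, work the arithmetic in isolation. The following is a
minimal standalone sketch (plain userspace C, not kernel code),
assuming SCHED_LOAD_SCALE = 1024 and, per the statement above,
SCHED_LOAD_SCALE_FUZZ equal to the nice-0 per-task load:

#include <stdio.h>

/* Illustrative values; in the kernel these come from the sched headers. */
#define SCHED_LOAD_SCALE      1024UL
#define SCHED_LOAD_SCALE_FUZZ SCHED_LOAD_SCALE	/* per the text above */

int main(void)
{
	/* A nice-0 task contributes exactly SCHED_LOAD_SCALE of load. */
	unsigned long busiest_load_per_task = SCHED_LOAD_SCALE;
	unsigned long imbalance;

	for (imbalance = 0; imbalance <= SCHED_LOAD_SCALE; imbalance += 256) {
		/* Old check: imbalance + FUZZ can never be below the
		 * nice-0 per-task load, so this is always 0 (false). */
		int old_check = imbalance + SCHED_LOAD_SCALE_FUZZ
					< busiest_load_per_task;
		/* New check: fires whenever the imbalance is too small
		 * to move one nice-0 task. */
		int new_check = imbalance < busiest_load_per_task;

		printf("imbalance=%4lu old=%d new=%d\n",
		       imbalance, old_check, new_check);
	}
	return 0;
}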

With the above change alone, imbalance was getting reset to 0 in the
corner-case condition, making the FUZZ logic fail. Fix that by not
corrupting *imbalance: update it only when we find that the HT/MC
optimization is needed.
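
The control-flow change can be demonstrated with a standalone sketch
(hedged: decide_old()/decide_new() are toy stand-ins for the tail of
find_busiest_group(), whose out_balanced path zeroes *imbalance before
returning):

#include <stdio.h>

/* Old behaviour: bailing out on no-gain also wiped the caller's
 * imbalance, which is what defeated the FUZZ logic. */
static void decide_old(unsigned long pwr_now, unsigned long pwr_move,
		       unsigned long busiest_load_per_task,
		       unsigned long *imbalance)
{
	if (pwr_move <= pwr_now) {
		*imbalance = 0;		/* what the out_balanced path did */
		return;
	}
	*imbalance = busiest_load_per_task;
}

/* New behaviour: touch *imbalance only when moving gains throughput,
 * leaving the previously computed value intact otherwise. */
static void decide_new(unsigned long pwr_now, unsigned long pwr_move,
		       unsigned long busiest_load_per_task,
		       unsigned long *imbalance)
{
	if (pwr_move > pwr_now)
		*imbalance = busiest_load_per_task;
}

int main(void)
{
	unsigned long imb_old = 512, imb_new = 512;

	/* No throughput gain: the old path corrupts the imbalance to 0,
	 * the new path preserves the previously computed 512. */
	decide_old(1024, 1024, 1024, &imb_old);
	decide_new(1024, 1024, 1024, &imb_new);
	printf("old: %lu, new: %lu\n", imb_old, imb_new);
	return 0;
}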

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>


+3 -5
kernel/sched.c
@@ -2512,7 +2512,7 @@
 	 * a think about bumping its value to force at least one task to be
 	 * moved
 	 */
-	if (*imbalance + SCHED_LOAD_SCALE_FUZZ < busiest_load_per_task) {
+	if (*imbalance < busiest_load_per_task) {
 		unsigned long tmp, pwr_now, pwr_move;
 		unsigned int imbn;
 
@@ -2564,10 +2564,8 @@
 		pwr_move /= SCHED_LOAD_SCALE;
 
 		/* Move if we gain throughput */
-		if (pwr_move <= pwr_now)
-			goto out_balanced;
-
-		*imbalance = busiest_load_per_task;
+		if (pwr_move > pwr_now)
+			*imbalance = busiest_load_per_task;
 	}
 
 	return busiest;