sched: fix niced_granularity() shift

Fix the shift in niced_granularity(): the negative nice level path
shifted the inv_weight product right by WMULT_SHIFT instead of
(WMULT_SHIFT-NICE_0_SHIFT), making the computed granularity a factor
of 2^NICE_0_SHIFT (1024) too small. This resulted in under-scheduling
of CPU-bound negative nice level tasks (and that in turn caused
higher than necessary latencies in nice-0 tasks).

Signed-off-by: Ingo Molnar <mingo@elte.hu>

 kernel/sched_fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -291,7 +291,7 @@
 	/*
 	 * It will always fit into 'long':
 	 */
-	return (long) (tmp >> WMULT_SHIFT);
+	return (long) (tmp >> (WMULT_SHIFT-NICE_0_SHIFT));
 }
 
 static inline void
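
For reference, a minimal user-space sketch of the fixed-point math in
question, assuming the WMULT_SHIFT = 32 / NICE_0_SHIFT = 10 constants
of this kernel version; the nice -5 weight (3121, as in the
prio_to_weight[] table of this era) and the 2ms granularity input are
illustrative only:

/*
 * Demonstrates that the old shift truncated NICE_0_SHIFT bits too
 * many, producing a granularity 2^10 = 1024x smaller than intended.
 */
#include <stdio.h>
#include <stdint.h>

#define WMULT_SHIFT	32
#define NICE_0_SHIFT	10

int main(void)
{
	unsigned long weight = 3121;		/* load weight at nice -5 */
	unsigned long granularity = 2000000;	/* 2ms, in nanoseconds */

	/* inv_weight approximates 2^WMULT_SHIFT / weight */
	uint64_t inv_weight = (1ULL << WMULT_SHIFT) / weight;
	uint64_t tmp = inv_weight * granularity;

	/* buggy shift: drops NICE_0_SHIFT bits too many */
	long before = (long)(tmp >> WMULT_SHIFT);
	/* fixed shift: roughly granularity * (1 << NICE_0_SHIFT) / weight */
	long after  = (long)(tmp >> (WMULT_SHIFT - NICE_0_SHIFT));

	printf("before: %ld ns  after: %ld ns  ratio: %ld\n",
	       before, after, after / before);
	return 0;
}

With these inputs the buggy shift yields ~640ns where the fixed one
yields ~656us, i.e. the intended linearly-finer granularity for
negative nice levels.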