sched: Move sched_avg_update() to update_cpu_load()

Currently sched_avg_update() (which updates the rt_avg stats in the rq)
is called from scale_rt_power() (in the load-balance context), which
does not hold rq->lock.

Fix it by moving sched_avg_update() to the more appropriate
update_cpu_load(), where the CFS load is updated as well and rq->lock
is held.

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1282596171.2694.3.camel@sbsiddha-MOBL3>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

Authored by Suresh Siddha, committed by Ingo Molnar (da2b71ed d56557af)

kernel/sched.c (+6)

--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -1294,6 +1294,10 @@
 static void sched_rt_avg_update(struct rq *rq, u64 rt_delta)
 {
 }
+
+static void sched_avg_update(struct rq *rq)
+{
+}
 #endif /* CONFIG_SMP */
 
 #if BITS_PER_LONG == 32
@@ -3182,6 +3186,8 @@
 
 		this_rq->cpu_load[i] = (old_load * (scale - 1) + new_load) >> i;
 	}
+
+	sched_avg_update(this_rq);
 }
 
 static void update_cpu_load_active(struct rq *this_rq)
kernel/sched_fair.c (-2)

--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -2268,8 +2268,6 @@
 	struct rq *rq = cpu_rq(cpu);
 	u64 total, available;
 
-	sched_avg_update(rq);
-
 	total = sched_avg_period() + (rq->clock - rq->age_stamp);
 	available = total - rq->rt_avg;
 