[PATCH] ZVC: Overstep counters

Increments and decrements are usually grouped rather than mixed. We can
optimize the inc and dec functions for that case.

In those cases, when the threshold is crossed, transfer 50% more than the
threshold to the counters and set the per-cpu differential accordingly. A run
of same-sign operations then goes longer before the atomic counters need to
be updated again.
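
In code terms, the overstep can be sketched roughly like this (a stand-alone
user-space model, not the kernel code; the counter names and the threshold
value of 32 are illustrative assumptions, while the actual patch operates on
the per-cpu zone differentials shown in the diff below):

#include <stdio.h>

#define STAT_THRESHOLD 32

static long zone_counter;	/* stands in for the shared atomic counter */
static signed char diff;	/* stands in for the per-cpu differential   */

static void inc_counter(void)
{
	diff++;
	if (diff > STAT_THRESHOLD) {
		/* Overstep: push 50% more than the threshold to the shared
		 * counter and park the differential at -STAT_THRESHOLD / 2,
		 * so a run of further increments stays local for longer. */
		zone_counter += diff + STAT_THRESHOLD / 2;
		diff = -STAT_THRESHOLD / 2;
	}
}

static void dec_counter(void)
{
	diff--;
	if (diff < -STAT_THRESHOLD) {
		/* Mirror image for runs of decrements. */
		zone_counter += diff - STAT_THRESHOLD / 2;
		diff = STAT_THRESHOLD / 2;
	}
}

int main(void)
{
	int i;

	for (i = 0; i < 1000; i++)
		inc_counter();
	for (i = 0; i < 1000; i++)
		dec_counter();

	/* The shared counter plus the local differential always adds up
	 * to the true count, here zero. */
	printf("shared counter %ld, local differential %d\n",
	       zone_counter, diff);
	return 0;
}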

The idea came originally from Andrew Morton. The overstepping alone was
sufficient to address the contention issue found when updating the global
and the per-zone counters from 160 processors.

Also simplify dec_zone_page_state() to call __dec_zone_page_state() instead
of open-coding the same logic.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

Authored by Christoph Lameter, committed by Linus Torvalds (a302eb4e, b63fe1ba)

+5 -15
mm/vmstat.c
@@ -190,7 +190,7 @@
 	(*p)++;
 
 	if (unlikely(*p > STAT_THRESHOLD)) {
-		zone_page_state_add(*p, zone, item);
-		*p = 0;
+		zone_page_state_add(*p + STAT_THRESHOLD / 2, zone, item);
+		*p = -STAT_THRESHOLD / 2;
 	}
 }
@@ -209,8 +209,8 @@
 	(*p)--;
 
 	if (unlikely(*p < -STAT_THRESHOLD)) {
-		zone_page_state_add(*p, zone, item);
-		*p = 0;
+		zone_page_state_add(*p - STAT_THRESHOLD / 2, zone, item);
+		*p = STAT_THRESHOLD / 2;
 	}
 }
 EXPORT_SYMBOL(__dec_zone_page_state);
@@ -239,19 +239,9 @@
 void dec_zone_page_state(struct page *page, enum zone_stat_item item)
 {
 	unsigned long flags;
-	struct zone *zone;
-	s8 *p;
 
-	zone = page_zone(page);
 	local_irq_save(flags);
-	p = diff_pointer(zone, item);
-
-	(*p)--;
-
-	if (unlikely(*p < -STAT_THRESHOLD)) {
-		zone_page_state_add(*p, zone, item);
-		*p = 0;
-	}
+	__dec_zone_page_state(page, item);
 	local_irq_restore(flags);
 }
 EXPORT_SYMBOL(dec_zone_page_state);
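
As a rough illustration of how overstepping reduces traffic to the shared
counters, the following stand-alone model (again user-space only; the
threshold value and the flush counter names are made up for illustration, not
kernel code) counts how often a run of same-sign increments has to touch the
shared counter with and without the overstep:

#include <stdio.h>

#define STAT_THRESHOLD 32

int main(void)
{
	signed char diff;
	long flushes_plain = 0, flushes_overstep = 0;
	int i;

	/* Old behaviour: flush and reset to 0 once the threshold is crossed. */
	for (diff = 0, i = 0; i < 100000; i++) {
		diff++;
		if (diff > STAT_THRESHOLD) {
			flushes_plain++;
			diff = 0;
		}
	}

	/* New behaviour: overstep by STAT_THRESHOLD / 2 and park the
	 * differential at -STAT_THRESHOLD / 2, so the next flush is
	 * roughly 1.5 * STAT_THRESHOLD increments away. */
	for (diff = 0, i = 0; i < 100000; i++) {
		diff++;
		if (diff > STAT_THRESHOLD) {
			flushes_overstep++;
			diff = -STAT_THRESHOLD / 2;
		}
	}

	printf("plain: %ld flushes, overstep: %ld flushes\n",
	       flushes_plain, flushes_overstep);
	return 0;
}

With a threshold of 32, the plain scheme flushes about every 33 same-sign
operations while the overstep scheme flushes about every 49, roughly a one
third reduction in updates of the shared counter for such runs.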