Repository: git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git (Linux kernel mirror, for testing)

writeback: remove the internal 5% low bound on dirty_ratio

The dirty_ratio was silently limited in global_dirty_limits() to >= 5%.
This is not behavior users expect, and it is inconsistent with
calc_period_shift(), which uses the plain vm_dirty_ratio value.

Let's remove the internal bound.

At the same time, fix balance_dirty_pages() to work in the
dirty_thresh=0 case, so that applications can proceed once all
dirty and writeback pages have been cleaned.

And ">" fits the name "exceeded" better than ">=" does. Neil thinks
it is an aesthetic improvement as well as a functional one :)

Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
Cc: Jan Kara <jack@suse.cz>
Proposed-by: Con Kolivas <kernel@kolivas.org>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Reviewed-by: Rik van Riel <riel@redhat.com>
Reviewed-by: Neil Brown <neilb@suse.de>
Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Michael Rubin <mrubin@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Authored by Wu Fengguang, committed by Linus Torvalds (4cbec4c8 0e093d99)

6 insertions(+), 12 deletions(-)

fs/fs-writeback.c (+1 -1)

@@ -582,7 +582,7 @@
 	global_dirty_limits(&background_thresh, &dirty_thresh);
 
 	return (global_page_state(NR_FILE_DIRTY) +
-		global_page_state(NR_UNSTABLE_NFS) >= background_thresh);
+		global_page_state(NR_UNSTABLE_NFS) > background_thresh);
 }
 
 /*
mm/page-writeback.c (+5 -11)

@@ -415,14 +415,8 @@
 
 	if (vm_dirty_bytes)
 		dirty = DIV_ROUND_UP(vm_dirty_bytes, PAGE_SIZE);
-	else {
-		int dirty_ratio;
-
-		dirty_ratio = vm_dirty_ratio;
-		if (dirty_ratio < 5)
-			dirty_ratio = 5;
-		dirty = (dirty_ratio * available_memory) / 100;
-	}
+	else
+		dirty = (vm_dirty_ratio * available_memory) / 100;
 
 	if (dirty_background_bytes)
 		background = DIV_ROUND_UP(dirty_background_bytes, PAGE_SIZE);
@@ -504,7 +510,7 @@
 		 * catch-up. This avoids (excessively) small writeouts
 		 * when the bdi limits are ramping up.
 		 */
-		if (nr_reclaimable + nr_writeback <
+		if (nr_reclaimable + nr_writeback <=
 		    (background_thresh + dirty_thresh) / 2)
 			break;
 
@@ -536,8 +542,8 @@
 		 * the last resort safeguard.
 		 */
 		dirty_exceeded =
-			(bdi_nr_reclaimable + bdi_nr_writeback >= bdi_thresh)
-			|| (nr_reclaimable + nr_writeback >= dirty_thresh);
+			(bdi_nr_reclaimable + bdi_nr_writeback > bdi_thresh)
+			|| (nr_reclaimable + nr_writeback > dirty_thresh);
 
 		if (!dirty_exceeded)
 			break;