blk-throttle: Reset group slice when limits are changed

Lina reported that if throttle limits start out very high and are then
dropped, no new bio may be dispatched for a long time. The reason is
that after dropping the limits we do not reset the existing slice: the
rate calculation is redone with the new low rate while still accounting
the bios dispatched at the old high rate. Fix this by resetting the
slice whenever the rate changes.

https://lkml.org/lkml/2011/3/10/298

Another problem with a very high limit is that we never queue the
bio on the throtl service tree. That means we keep extending the
group slice but never trim it. Fix that as well by regularly
trimming the slice even when no bio is being queued.

Reported-by: Lina Lu <lulina_nuaa@foxmail.com>
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>

Authored by Vivek Goyal, committed by Jens Axboe · 04521db0 9026e521

+24 -1
block/blk-throttle.c
···
 			" riops=%u wiops=%u", tg->bps[READ], tg->bps[WRITE],
 			tg->iops[READ], tg->iops[WRITE]);
 
+	/*
+	 * Restart the slices for both READ and WRITES. It
+	 * might happen that a group's limits are dropped
+	 * suddenly and we don't want to account recently
+	 * dispatched IO with new low rate
+	 */
+	throtl_start_new_slice(td, tg, 0);
+	throtl_start_new_slice(td, tg, 1);
+
 	if (throtl_tg_on_rr(tg))
 		tg_update_disptime(td, tg);
 }
···
 
 	struct delayed_work *dwork = &td->throtl_work;
 
-	if (total_nr_queued(td) > 0) {
+	/* schedule work if limits changed even if no bio is queued */
+	if (total_nr_queued(td) > 0 || td->limits_changed) {
 		/*
 		 * We might have a work scheduled to be executed in future.
 		 * Cancel that and schedule a new one.
···
 	/* Bio is with-in rate limit of group */
 	if (tg_may_dispatch(td, tg, bio, NULL)) {
 		throtl_charge_bio(tg, bio);
+
+		/*
+		 * We need to trim slice even when bios are not being queued
+		 * otherwise it might happen that a bio is not queued for
+		 * a long time and slice keeps on extending and trim is not
+		 * called for a long time. Now if limits are reduced suddenly
+		 * we take into account all the IO dispatched so far at new
+		 * low rate and newly queued IO gets a really long dispatch
+		 * time.
+		 *
+		 * So keep on trimming slice even if bio is not queued.
+		 */
+		throtl_trim_slice(td, tg, rw);
 		goto out;
 	}
···