Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git


md/raid5: schedule_reconstruction should abort if nothing to do.

Since commit 1ed850f356a0a422013846b5291acff08815008b
("md/raid5: make sure to_read and to_write never go negative")
it has been possible for handle_stripe_dirtying() to be called
when there isn't actually any work to do.
It then calls schedule_reconstruction(), which will set R5_LOCKED
on the parity block(s) even when nothing else is happening.
This then causes problems in do_release_stripe().

So add checks to schedule_reconstruction() so that if it doesn't
find anything to do, it just aborts.

This bug was introduced in v3.7, so the patch is suitable
for -stable kernels since then.

Cc: stable@vger.kernel.org (v3.7+)
Reported-by: majianpeng <majianpeng@gmail.com>
Signed-off-by: NeilBrown <neilb@suse.de>
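
The shape of the fix can be sketched outside the kernel. This is a hedged,
self-contained model, not the raid5.c code: the names (struct stripe,
FLAG_WANTDRAIN, schedule_reconstruction_sketch, etc.) are illustrative
stand-ins for the real stripe_head/r5dev structures. The point it shows is
the commit's guard: count what the scan actually locked, and return early
before committing any reconstruction state when that count is zero.

```c
/* Illustrative sketch of the "abort if nothing to do" guard.
 * All names here are simplified stand-ins, not the drivers/md/raid5.c API. */
#include <stdbool.h>

#define NDISKS 4
#define FLAG_WANTDRAIN 0x1  /* stand-in for R5_Wantdrain */
#define FLAG_LOCKED    0x2  /* stand-in for R5_LOCKED */

struct dev { unsigned flags; };
struct stripe {
    struct dev devs[NDISKS];
    bool reconstruct_scheduled;   /* stand-in for sh->reconstruct_state */
};

/* Scan the devices, locking those with pending work; return how many
 * were locked.  The fix is the early return: if the scan locked nothing,
 * leave the stripe untouched instead of scheduling a reconstruction
 * that has no work queued behind it. */
int schedule_reconstruction_sketch(struct stripe *sh)
{
    int locked = 0;

    for (int i = 0; i < NDISKS; i++) {
        struct dev *d = &sh->devs[i];
        if (d->flags & FLAG_WANTDRAIN) {
            d->flags |= FLAG_LOCKED;
            locked++;
        }
    }
    if (!locked)
        /* False alarm - nothing to do */
        return 0;

    sh->reconstruct_scheduled = true;
    return locked;
}
```

The pre-fix behaviour corresponds to setting reconstruct_scheduled (and, in
the kernel, R5_LOCKED on the parity blocks) before the scan, unconditionally;
moving the state changes after the scan makes the zero-work case detectable.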

Author: NeilBrown
commit ce7d363a (parent f3378b48)

drivers/md/raid5.c  +22 -16
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -2283,17 +2283,6 @@
 	int level = conf->level;
 
 	if (rcw) {
-		/* if we are not expanding this is a proper write request, and
-		 * there will be bios with new data to be drained into the
-		 * stripe cache
-		 */
-		if (!expand) {
-			sh->reconstruct_state = reconstruct_state_drain_run;
-			set_bit(STRIPE_OP_BIODRAIN, &s->ops_request);
-		} else
-			sh->reconstruct_state = reconstruct_state_run;
-
-		set_bit(STRIPE_OP_RECONSTRUCT, &s->ops_request);
 
 		for (i = disks; i--; ) {
 			struct r5dev *dev = &sh->dev[i];
@@ -2306,6 +2295,21 @@
 				s->locked++;
 			}
 		}
+		/* if we are not expanding this is a proper write request, and
+		 * there will be bios with new data to be drained into the
+		 * stripe cache
+		 */
+		if (!expand) {
+			if (!s->locked)
+				/* False alarm, nothing to do */
+				return;
+			sh->reconstruct_state = reconstruct_state_drain_run;
+			set_bit(STRIPE_OP_BIODRAIN, &s->ops_request);
+		} else
+			sh->reconstruct_state = reconstruct_state_run;
+
+		set_bit(STRIPE_OP_RECONSTRUCT, &s->ops_request);
+
 		if (s->locked + conf->max_degraded == disks)
 			if (!test_and_set_bit(STRIPE_FULL_WRITE, &sh->state))
 				atomic_inc(&conf->pending_full_writes);
@@ -2317,11 +2321,6 @@
 		BUG_ON(level == 6);
 		BUG_ON(!(test_bit(R5_UPTODATE, &sh->dev[pd_idx].flags) ||
 			test_bit(R5_Wantcompute, &sh->dev[pd_idx].flags)));
-
-		sh->reconstruct_state = reconstruct_state_prexor_drain_run;
-		set_bit(STRIPE_OP_PREXOR, &s->ops_request);
-		set_bit(STRIPE_OP_BIODRAIN, &s->ops_request);
-		set_bit(STRIPE_OP_RECONSTRUCT, &s->ops_request);
 
 		for (i = disks; i--; ) {
 			struct r5dev *dev = &sh->dev[i];
@@ -2334,6 +2333,13 @@
 				s->locked++;
 			}
 		}
+		if (!s->locked)
+			/* False alarm - nothing to do */
+			return;
+		sh->reconstruct_state = reconstruct_state_prexor_drain_run;
+		set_bit(STRIPE_OP_PREXOR, &s->ops_request);
+		set_bit(STRIPE_OP_BIODRAIN, &s->ops_request);
+		set_bit(STRIPE_OP_RECONSTRUCT, &s->ops_request);
 	}
 
 	/* keep the parity disk(s) locked while asynchronous operations