md: fix possible deadlock in handling flush requests.

As recorded in
https://bugzilla.kernel.org/show_bug.cgi?id=24012

it is possible for a flush request through md to hang. This is due to
an interaction between the recursion avoidance in generic_make_request,
md's insistence on having only one flush active at a time, and the
possibility of dm (or md) submitting two flush requests to a device
from a single generic_make_request call.
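
For reference, the recursion avoidance in question looks roughly like
this (a trimmed paraphrase of generic_make_request() in
block/blk-core.c from this era; checks and error handling omitted):
any bio submitted while a make_request function is already on the
stack is parked on current->bio_list and only dispatched after the
outer call returns.

void generic_make_request(struct bio *bio)
{
	struct bio_list bio_list_on_stack;

	if (current->bio_list) {
		/* A make_request_fn is already active on this stack:
		 * park the bio instead of recursing into the driver.
		 */
		bio_list_add(current->bio_list, bio);
		return;
	}

	/* Top-level entry: dispatch this bio, then drain whatever the
	 * drivers added to current->bio_list while handling it.
	 */
	bio_list_init(&bio_list_on_stack);
	current->bio_list = &bio_list_on_stack;
	do {
		__generic_make_request(bio);
		bio = bio_list_pop(current->bio_list);
	} while (bio);
	current->bio_list = NULL;
}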

If a generic_make_request call into dm causes two flush requests to be
queued (as happens if the dm table has two targets - they get one
each), these two will be queued inside generic_make_request.

Assume they are for the same md device.
The first is processed and causes one or more flush requests to be
sent to lower devices. These get queued within generic_make_request
too. Then the second flush to the md device gets handled and it blocks
waiting for the first flush to complete. But the first won't complete
until those lower-device requests complete, and they haven't even been
submitted yet as they are still sitting on the generic_make_request
queue.
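
The blocking step is the one-flush-at-a-time wait at the top of
md_flush_request(). A simplified sketch of the pre-patch function
(assuming the wait_event_lock_irq() helper that md used at the time;
locking details and error handling are elided):

void md_flush_request(mddev_t *mddev, struct bio *bio)
{
	spin_lock_irq(&mddev->write_lock);
	/* Only one flush may be in flight per array: a second flush
	 * sleeps here until the first one clears mddev->flush_bio.
	 */
	wait_event_lock_irq(mddev->sb_wait,
			    !mddev->flush_bio,
			    mddev->write_lock, /*nothing*/);
	mddev->flush_bio = bio;
	spin_unlock_irq(&mddev->write_lock);

	/* Pre-patch: flushes for the member devices are issued right
	 * here, in the caller's context, via generic_make_request().
	 */
	submit_flushes(mddev);
}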

The deadlock can be broken by using a separate context to submit the
requests to lower devices. md has such a context readily available:
the md_wq workqueue.

So use it to submit these requests.
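
In outline the fix is the hand-off pattern sketched below. This is
not the md code itself (the struct and function names here are
hypothetical, and md_wq stands in for md's private workqueue); the
point is that the workqueue worker is never inside
generic_make_request(), so its current->bio_list is NULL and any
lower-device bios it submits are dispatched immediately instead of
being parked behind the blocked md flush.

#include <linux/blkdev.h>
#include <linux/workqueue.h>

struct flush_ctx {			/* hypothetical driver state */
	struct work_struct	flush_work;
	struct bio		*lower_bio;
};

static struct workqueue_struct *md_wq;	/* the driver's own workqueue */

static void deferred_submit(struct work_struct *ws)
{
	struct flush_ctx *ctx = container_of(ws, struct flush_ctx,
					     flush_work);

	/* Runs in the worker thread: current->bio_list is NULL here,
	 * so this bio goes straight down to the lower device.
	 */
	generic_make_request(ctx->lower_bio);
}

static void kick_flush(struct flush_ctx *ctx)
{
	/* The caller, possibly deep inside generic_make_request(),
	 * just queues the work instead of submitting the bio itself.
	 */
	INIT_WORK(&ctx->flush_work, deferred_submit);
	queue_work(md_wq, &ctx->flush_work);
}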

Reported-by: Giacomo Catenazzi <cate@cateee.net>
Tested-by: Giacomo Catenazzi <cate@cateee.net>
Signed-off-by: NeilBrown <neilb@suse.de>

NeilBrown a035fc3e a7a07e69

drivers/md/md.c | +4 -2

···
 static void md_submit_flush_data(struct work_struct *ws);
 
-static void submit_flushes(mddev_t *mddev)
+static void submit_flushes(struct work_struct *ws)
 {
+	mddev_t *mddev = container_of(ws, mddev_t, flush_work);
 	mdk_rdev_t *rdev;
 
 	INIT_WORK(&mddev->flush_work, md_submit_flush_data);
···
 	mddev->flush_bio = bio;
 	spin_unlock_irq(&mddev->write_lock);
 
-	submit_flushes(mddev);
+	INIT_WORK(&mddev->flush_work, submit_flushes);
+	queue_work(md_wq, &mddev->flush_work);
 }
 EXPORT_SYMBOL(md_flush_request);
···