md/raid1: really fix recovery looping when single good device fails.

Commit 4044ba58dd15cb01797c4fd034f39ef4a75f7cc3 supposedly fixed a
problem where if a raid1 with just one good device gets a read-error
during recovery, the recovery would abort and immediately restart in
an infinite loop.

However it depended on raid1_remove_disk removing the spare device
from the array. But that does not happen in this case. So add a test
so that in the 'recovery_disabled' case, the device will be removed.

This is suitable for any kernel since 2.6.29, which is when
recovery_disabled was introduced.

Cc: stable@kernel.org
Reported-by: Sebastian Färber <faerber@gmail.com>
Signed-off-by: NeilBrown <neilb@suse.de>

NeilBrown 8f9e0ee3 c26a44ed

--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -1161,6 +1161,7 @@
 	 * is not possible.
 	 */
 	if (!test_bit(Faulty, &rdev->flags) &&
+	    !mddev->recovery_disabled &&
 	    mddev->degraded < conf->raid_disks) {
 		err = -EBUSY;
 		goto abort;