md: fix two problems with setting the "re-add" device state.
author NeilBrown <neilb@suse.com>
Thu, 26 Apr 2018 04:46:29 +0000 (14:46 +1000)
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Tue, 3 Jul 2018 09:24:59 +0000 (11:24 +0200)
commit dfeb333b590c78d0dbadfa00a56635f1b2489d58
tree 460ba0ab0c7655dd91e35e615bf8d7eef7987991
parent 88896a963b4e80f5d4f6fe508ab7ce399d7bcb4c
md: fix two problems with setting the "re-add" device state.

commit 011abdc9df559ec75779bb7c53a744c69b2a94c6 upstream.

If "re-add" is written to the "state" file for a device
which is faulty, this has an effect similar to removing
and re-adding the device.  It should take up the
same slot in the array that it previously had, and
an accelerated (e.g. bitmap-based) rebuild should happen.
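
For reference, the "state" file here is the per-device md sysfs
attribute.  A minimal user-space sketch of issuing the request (the
array name md0 and member directory dev-sdb are placeholders, and
error handling is kept short):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
        /* md0/dev-sdb are example names; adjust to the real array/member. */
        const char *path = "/sys/block/md0/md/dev-sdb/state";
        int fd = open(path, O_WRONLY);

        if (fd < 0) {
                perror("open");
                return 1;
        }
        if (write(fd, "re-add", strlen("re-add")) < 0)
                perror("write");
        close(fd);
        return 0;
}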

The slot that "it previously had" is determined by
rdev->saved_raid_disk.
However this is not set when a device fails (only when a device
is added), and it is cleared when resync completes.
This means that "re-add" will normally work once, but may not work a
second time.

This patch includes two fixes, sketched in the model below the list:
1/ when a device fails, record the ->raid_disk value in
    ->saved_raid_disk before clearing ->raid_disk
2/ when "re-add" is written to a device for which
    ->saved_raid_disk is not set, fail.
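
A minimal, self-contained user-space model of these two rules (not the
md.c code itself: the struct and helper names are invented for
illustration, while in the kernel the request is handled by the
per-device "state" attribute handler in drivers/md/md.c):

#include <assert.h>
#include <stdio.h>

struct rdev_model {
        int raid_disk;        /* slot in the array, -1 if none          */
        int saved_raid_disk;  /* slot to reclaim on re-add, -1 if none  */
        int faulty;
};

/* Fix 1: when the device fails and is removed, remember the slot
 * before clearing it. */
static void fail_device(struct rdev_model *rdev)
{
        rdev->faulty = 1;
        rdev->saved_raid_disk = rdev->raid_disk;
        rdev->raid_disk = -1;
}

/* Fix 2: "re-add" is only accepted when a saved slot exists. */
static int re_add_device(struct rdev_model *rdev)
{
        if (!rdev->faulty || rdev->raid_disk != -1 ||
            rdev->saved_raid_disk < 0)
                return -1;                       /* reject the request   */
        rdev->raid_disk = rdev->saved_raid_disk; /* reclaim the old slot */
        rdev->faulty = 0;
        return 0;
}

int main(void)
{
        struct rdev_model dev = { .raid_disk = 2, .saved_raid_disk = -1 };

        fail_device(&dev);
        assert(re_add_device(&dev) == 0 && dev.raid_disk == 2);

        /* Resync completion clears saved_raid_disk; with fix 1 a later
         * failure records the slot again, so a second re-add still works. */
        dev.saved_raid_disk = -1;
        fail_device(&dev);
        assert(re_add_device(&dev) == 0 && dev.raid_disk == 2);

        printf("re-add reclaimed slot %d both times\n", dev.raid_disk);
        return 0;
}

Without fix 1, the second failure would leave saved_raid_disk unset;
fix 2 then makes the "re-add" request fail explicitly rather than
proceed without a known slot.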

I think this is suitable for stable: without it, re-adding
a device can be forced to do a full resync, which takes a
lot longer and so puts data at more risk.

Cc: <stable@vger.kernel.org> (v4.1)
Fixes: 97f6cd39da22 ("md-cluster: re-add capabilities")
Signed-off-by: NeilBrown <neilb@suse.com>
Reviewed-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
Signed-off-by: Shaohua Li <shli@fb.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
drivers/md/md.c