dm cache: fix access beyond end of origin device

In order to avoid wasting cache space, a partial block at the end of
the origin device is not cached. Unfortunately, the check for such a
partial block was flawed.
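
To make the off-by-one concrete (a minimal user-space sketch, not kernel
code; the type alias and block count below are made up for illustration):
cache->origin_blocks counts only the complete blocks of the origin device,
so a bio that lands in the partial tail block maps to block index ==
origin_blocks, which the old ">" test let through to the cache:

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  typedef uint64_t oblock_t;          /* simplified stand-in for dm_oblock_t */

  /* Old test: misses the partial tail block (block == origin_blocks). */
  static bool old_past_end(oblock_t block, oblock_t origin_blocks)
  {
          return block > origin_blocks;
  }

  /* Fixed test: the partial tail block is also sent straight to the origin. */
  static bool new_past_end(oblock_t block, oblock_t origin_blocks)
  {
          return block >= origin_blocks;
  }

  int main(void)
  {
          oblock_t origin_blocks = 1000;     /* hypothetical: complete blocks only */
          oblock_t tail = origin_blocks;     /* a bio into the partial tail block */

          printf("old test bypasses the cache: %d\n", old_past_end(tail, origin_blocks));  /* 0 -> wrongly promoted */
          printf("new test bypasses the cache: %d\n", new_past_end(tail, origin_blocks));  /* 1 -> remapped to origin */
          return 0;
  }

With the ">=" test the tail bio never reaches the promotion path, so the
cache never copies a full block that extends past the end of the device.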

Fix accesses beyond the end of the origin device that occurred due to
attempted promotion of an undetected partial block by:

- initializing the per bio data struct to allow cache_end_io to work properly
- recognizing access to the partial block at the end of the origin device
- avoiding out-of-bounds access to the discard bitset (a small model follows this list)
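
Roughly, on the last point (a user-space model only; the flat bool array
and the names here are simplifications of the kernel's discard bitset,
which tracks discards at its own block granularity): the discard state
only has entries for the complete origin blocks, so the partial tail block
has no slot, and the old remap_to_origin_clear_discard() call would have
indexed one past the end of it. The fixed path initializes the per-bio
data first and then calls remap_to_origin(), which never touches the
discard state:

  #include <stdbool.h>
  #include <stdio.h>
  #include <stdlib.h>

  int main(void)
  {
          unsigned long nr_origin_blocks = 8;       /* hypothetical: complete blocks only */
          bool *discarded = calloc(nr_origin_blocks, sizeof(*discarded));
          unsigned long tail = nr_origin_blocks;    /* index of the partial tail block */

          /* Old path, modeled: clear the discard flag before remapping.  For
           * the tail block that would mean writing discarded[8] in an 8-entry
           * array, i.e. one element past the end; hence the guard here. */
          if (tail < nr_origin_blocks)
                  discarded[tail] = false;
          else
                  printf("block %lu has no discard slot (only %lu complete blocks)\n",
                         tail, nr_origin_blocks);

          /* Fixed path, modeled: remap to the origin without touching the
           * discard state at all, so no such bound comes into play. */

          free(discarded);
          return 0;
  }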

Otherwise, users can experience errors like the following:

attempt to access beyond end of device
dm-5: rw=0, want=20971520, limit=20971456
...
device-mapper: cache: promotion failed; couldn't copy block

Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
Acked-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: stable@vger.kernel.org

Authored by Heinz Mauelshagen and committed by Mike Snitzer (e893fba9, 8b9d9666)

+3 -5
drivers/md/dm-cache-target.c
@@ -2465,19 +2465,17 @@
 	bool discarded_block;
 	struct dm_bio_prison_cell *cell;
 	struct policy_result lookup_result;
-	struct per_bio_data *pb;
+	struct per_bio_data *pb = init_per_bio_data(bio, pb_data_size);
 
-	if (from_oblock(block) > from_oblock(cache->origin_blocks)) {
+	if (unlikely(from_oblock(block) >= from_oblock(cache->origin_blocks))) {
 		/*
 		 * This can only occur if the io goes to a partial block at
 		 * the end of the origin device. We don't cache these.
 		 * Just remap to the origin and carry on.
 		 */
-		remap_to_origin_clear_discard(cache, bio, block);
+		remap_to_origin(cache, bio);
 		return DM_MAPIO_REMAPPED;
 	}
-
-	pb = init_per_bio_data(bio, pb_data_size);
 
 	if (bio->bi_rw & (REQ_FLUSH | REQ_FUA | REQ_DISCARD)) {
 		defer_bio(cache, bio);