Btrfs: Don't try to compress pages past i_size

The compression code had some checks to make sure we were only
compressing bytes inside of i_size, but they weren't catching every
case. To make things worse, some incorrect math about the number
of bytes remaining would make it try to compress more pages than the
file really had.
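
To make the corner concrete, here is a standalone C sketch (an
illustration, not the kernel code: the numbers are made up, and the
min() clamp is only assumed to mirror how actual_end is derived from
i_size). When the requested range starts at or past EOF, the unsigned
subtraction wraps around, which is the kind of incorrect math that
made the code chase pages the file never had:

#include <stdio.h>

int main(void)
{
	/* hypothetical numbers: a 4K file, but writeback hands us a
	 * delalloc range that lies entirely past EOF */
	unsigned long long isize = 4096;	/* i_size */
	unsigned long long start = 8192;	/* range start */
	unsigned long long end = 12287;		/* range end, inclusive */

	/* assume actual_end is clamped to EOF, roughly
	 * min(i_size, end + 1) */
	unsigned long long actual_end = isize < end + 1 ? isize : end + 1;

	/* without a bail-out this subtraction wraps to a huge value */
	unsigned long long total_compressed = actual_end - start;
	printf("total_compressed = %llu\n", total_compressed);

	/* the check this patch adds catches exactly this case */
	if (actual_end <= start)
		printf("bail out to the uncompressed cleanup path\n");
	return 0;
}

With these numbers total_compressed comes out near 2^64, so any loop
sized from it would run far past the last real page of the file.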

The fix used here is to fall back to the non-compression code in this
case, which does all the proper cleanup of delalloc and other accounting.
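
In outline, the resulting control flow looks like this (a simplified
sketch, not the verbatim kernel function; the comments paraphrase what
the real fallback path does):

	/* in compress_file_range(), before any pages are handed to
	 * the compressor */
	if (actual_end <= start)
		goto cleanup_and_bail_uncompressed;

	/* ... normal compression path ... */

cleanup_and_bail_uncompressed:
	/*
	 * No compression: redirty the locked pages and let the regular
	 * cow_file_range()/delalloc writeback machinery clean up the
	 * accounting and write the range out uncompressed.
	 */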

Signed-off-by: Chris Mason <chris.mason@oracle.com>

 fs/btrfs/inode.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -360,6 +360,19 @@
 	nr_pages = (end >> PAGE_CACHE_SHIFT) - (start >> PAGE_CACHE_SHIFT) + 1;
 	nr_pages = min(nr_pages, (128 * 1024UL) / PAGE_CACHE_SIZE);
 
+	/*
+	 * we don't want to send crud past the end of i_size through
+	 * compression, that's just a waste of CPU time. So, if the
+	 * end of the file is before the start of our current
+	 * requested range of bytes, we bail out to the uncompressed
+	 * cleanup code that can deal with all of this.
+	 *
+	 * It isn't really the fastest way to fix things, but this is a
+	 * very uncommon corner.
+	 */
+	if (actual_end <= start)
+		goto cleanup_and_bail_uncompressed;
+
 	total_compressed = actual_end - start;
 
 	/* we want to make sure that amount of ram required to uncompress
@@ -504,6 +517,7 @@
 			goto again;
 		}
 	} else {
+cleanup_and_bail_uncompressed:
 		/*
 		 * No compression, but we still need to write the pages in
 		 * the file we've been given so far. redirty the locked