···
   journals with two different restart pages. We sanity check both and
   either use the only sane one or the more recent one of the two in the
   case that both are valid.
- - Modify fs/ntfs/malloc.h::ntfs_malloc_nofs() to do the kmalloc() based
-   allocations with __GFP_HIGHMEM, analogous to how the vmalloc() based
-   allocations are done.
  - Add fs/ntfs/malloc.h::ntfs_malloc_nofs_nofail() which is analogous to
    ntfs_malloc_nofs() but it performs allocations with __GFP_NOFAIL and
    hence cannot fail.
···
    in the first buffer head instead of a driver global spin lock to
    improve scalability.
  - Minor fix to error handling and error message display in
-   fs/ntfs/aops.c::ntfs_prepare_nonresident_write().
+   fs/ntfs/aops.c::ntfs_prepare_nonresident_write().
+ - Change the mount options {u,f,d}mask to always parse the number as
+   an octal number to conform to how chmod(1) works, too. Thanks to
+   Giuseppe Bilotta and Horst von Brand for pointing out the errors of
+   my ways.
 
 2.1.23 - Implement extension of resident files and make writing safe as well as
 	 many bug fixes, cleanups, and enhancements...
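Aside on the {u,f,d}mask entry added above: the change amounts to parsing the
option value with a fixed base of 8 instead of auto-detecting the base, so
"umask=222" means octal 0222 exactly as it would for chmod(1). Below is a
minimal sketch of that idea; the helper name and the error checks are invented
for illustration, since the real parser in fs/ntfs/super.c is not part of this
diff.

/*
 * Illustrative sketch only -- not the fs/ntfs/super.c code.  Parse a
 * {u,f,d}mask mount option value as an octal number, the way chmod(1)
 * interprets its argument.
 */
#include <linux/errno.h>
#include <linux/kernel.h>	/* simple_strtoul() */
#include <linux/types.h>

static int sketch_parse_mask(const char *val, umode_t *mask)
{
	char *end;
	unsigned long m;

	/* Base 8: "222" is octal 0222; base 0 would have read it as decimal. */
	m = simple_strtoul(val, &end, 8);
	if (!*val || *end || m > 0777)
		return -EINVAL;
	*mask = m;
	return 0;
}

With base 0 (auto-detect), the same string would only be treated as octal if
the user remembered a leading zero, which is the inconsistency the changelog
entry calls out.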
fs/ntfs/malloc.h (+1, -1)
···
 	if (likely(size <= PAGE_SIZE)) {
 		BUG_ON(!size);
 		/* kmalloc() has per-CPU caches so is faster for now. */
-		return kmalloc(PAGE_SIZE, gfp_mask);
+		return kmalloc(PAGE_SIZE, gfp_mask & ~__GFP_HIGHMEM);
 		/* return (void *)__get_free_page(gfp_mask); */
 	}
 	if (likely(size >> PAGE_SHIFT < num_physpages))
···
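The one-line change above matters because kmalloc() hands back memory from the
kernel's direct mapping and therefore cannot honour __GFP_HIGHMEM, whereas the
vmalloc() path sets up its own kernel mapping and can use highmem pages; this
is also why the old changelog entry claiming the kmalloc() based allocations
were done with __GFP_HIGHMEM was dropped. A rough sketch of how the
surrounding helper plausibly fits together, inferred only from the visible
context; the function name, the flag type and the exact __vmalloc() call are
assumptions, not a copy of fs/ntfs/malloc.h.

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>

/* Hypothetical layout, reconstructed from the hunk above. */
static inline void *sketch_malloc_nofs(unsigned long size, gfp_t gfp_mask)
{
	if (likely(size <= PAGE_SIZE)) {
		BUG_ON(!size);
		/*
		 * kmalloc() returns directly mapped (lowmem) memory, so a
		 * highmem hint is meaningless here: strip the flag rather
		 * than pass an invalid gfp mask to the slab allocator.
		 */
		return kmalloc(PAGE_SIZE, gfp_mask & ~__GFP_HIGHMEM);
	}
	/* Refuse absurd sizes (num_physpages is the 2.6-era page count). */
	if (likely(size >> PAGE_SHIFT < num_physpages))
		/*
		 * __vmalloc() maps the pages it allocates into kernel
		 * virtual address space, so __GFP_HIGHMEM is fine here.
		 * (2.6-era __vmalloc() takes a pgprot_t third argument.)
		 */
		return __vmalloc(size, gfp_mask, PAGE_KERNEL);
	return NULL;
}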