Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

frontswap: s/put_page/store/g s/get_page/load

Sounds so much more natural.

Suggested-by: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

+76 -76
+25 -25
Documentation/vm/frontswap.txt
···
 conform to certain policies as follows:

 An "init" prepares the device to receive frontswap pages associated
-with the specified swap device number (aka "type"). A "put_page" will
+with the specified swap device number (aka "type"). A "store" will
 copy the page to transcendent memory and associate it with the type and
-offset associated with the page. A "get_page" will copy the page, if found,
+offset associated with the page. A "load" will copy the page, if found,
 from transcendent memory into kernel memory, but will NOT remove the page
 from from transcendent memory. An "invalidate_page" will remove the page
 from transcendent memory and an "invalidate_area" will remove ALL pages
 associated with the swap type (e.g., like swapoff) and notify the "device"
-to refuse further puts with that swap type.
+to refuse further stores with that swap type.

-Once a page is successfully put, a matching get on the page will normally
+Once a page is successfully stored, a matching load on the page will normally
 succeed. So when the kernel finds itself in a situation where it needs
-to swap out a page, it first attempts to use frontswap. If the put returns
+to swap out a page, it first attempts to use frontswap. If the store returns
 success, the data has been successfully saved to transcendent memory and
 a disk write and, if the data is later read back, a disk read are avoided.
-If a put returns failure, transcendent memory has rejected the data, and the
+If a store returns failure, transcendent memory has rejected the data, and the
 page can be written to swap as usual.

 If a backend chooses, frontswap can be configured as a "writethrough
···
 in order to allow the backend to arbitrarily "reclaim" space used to
 store frontswap pages to more completely manage its memory usage.

-Note that if a page is put and the page already exists in transcendent memory
-(a "duplicate" put), either the put succeeds and the data is overwritten,
-or the put fails AND the page is invalidated. This ensures stale data may
+Note that if a page is stored and the page already exists in transcendent memory
+(a "duplicate" store), either the store succeeds and the data is overwritten,
+or the store fails AND the page is invalidated. This ensures stale data may
 never be obtained from frontswap.

 If properly configured, monitoring of frontswap is done via debugfs in
 the /sys/kernel/debug/frontswap directory. The effectiveness of
 frontswap can be measured (across all swap devices) with:

-failed_puts - how many put attempts have failed
-gets - how many gets were attempted (all should succeed)
-succ_puts - how many put attempts have succeeded
+failed_stores - how many store attempts have failed
+loads - how many loads were attempted (all should succeed)
+succ_stores - how many store attempts have succeeded
 invalidates - how many invalidates were attempted

 A backend implementation may provide additional metrics.
···
 swap device. If CONFIG_FRONTSWAP is enabled but no frontswap "backend"
 registers, there is one extra global variable compared to zero for
 every swap page read or written. If CONFIG_FRONTSWAP is enabled
-AND a frontswap backend registers AND the backend fails every "put"
+AND a frontswap backend registers AND the backend fails every "store"
 request (i.e. provides no memory despite claiming it might),
 CPU overhead is still negligible -- and since every frontswap fail
 precedes a swap page write-to-disk, the system is highly likely
···

 Whenever a swap-device is swapon'd frontswap_init() is called,
 passing the swap device number (aka "type") as a parameter.
-This notifies frontswap to expect attempts to "put" swap pages
+This notifies frontswap to expect attempts to "store" swap pages
 associated with that number.

 Whenever the swap subsystem is readying a page to write to a swap
-device (c.f swap_writepage()), frontswap_put_page is called. Frontswap
+device (c.f swap_writepage()), frontswap_store is called. Frontswap
 consults with the frontswap backend and if the backend says it does NOT
-have room, frontswap_put_page returns -1 and the kernel swaps the page
+have room, frontswap_store returns -1 and the kernel swaps the page
 to the swap device as normal. Note that the response from the frontswap
 backend is unpredictable to the kernel; it may choose to never accept a
 page, it could accept every ninth page, or it might accept every
···
 otherwise have written the data.

 When the swap subsystem needs to swap-in a page (swap_readpage()),
-it first calls frontswap_get_page() which checks the frontswap_map to
+it first calls frontswap_load() which checks the frontswap_map to
 see if the page was earlier accepted by the frontswap backend. If
 it was, the page of data is filled from the frontswap backend and
 the swap-in is complete. If not, the normal swap-in code is
···

 So every time the frontswap backend accepts a page, a swap device read
 and (potentially) a swap device write are replaced by a "frontswap backend
-put" and (possibly) a "frontswap backend get", which are presumably much
+store" and (possibly) a "frontswap backend load", which are presumably much
 faster.

 4) Can't frontswap be configured as a "special" swap device that is
···
 the write of some pages for a significant amount of time. Synchrony is
 required to ensure the dynamicity of the backend and to avoid thorny race
 conditions that would unnecessarily and greatly complicate frontswap
-and/or the block I/O subsystem. That said, only the initial "put"
-and "get" operations need be synchronous. A separate asynchronous thread
+and/or the block I/O subsystem. That said, only the initial "store"
+and "load" operations need be synchronous. A separate asynchronous thread
 is free to manipulate the pages stored by frontswap. For example,
 the "remotification" thread in RAMster uses standard asynchronous
 kernel sockets to move compressed frontswap pages to a remote machine.
···
 then force guests to do their own swapping.

 There is a downside to the transcendent memory specifications for
-frontswap: Since any "put" might fail, there must always be a real
+frontswap: Since any "store" might fail, there must always be a real
 slot on a real swap device to swap the page. Thus frontswap must be
 implemented as a "shadow" to every swapon'd device with the potential
 capability of holding every page that the swap device might have held
···
 can still use frontswap but a backend for such devices must configure
 some kind of "ghost" swap device and ensure that it is never used.

-5) Why this weird definition about "duplicate puts"? If a page
-has been previously successfully put, can't it always be
+5) Why this weird definition about "duplicate stores"? If a page
+has been previously successfully stored, can't it always be
 successfully overwritten?

 Nearly always it can, but no, sometimes it cannot. Consider an example
 where data is compressed and the original 4K page has been compressed
 to 1K. Now an attempt is made to overwrite the page with data that
 is non-compressible and so would take the entire 4K. But the backend
-has no more space. In this case, the put must be rejected. Whenever
-frontswap rejects a put that would overwrite, it also must invalidate
+has no more space. In this case, the store must be rejected. Whenever
+frontswap rejects a store that would overwrite, it also must invalidate
 the old data and ensure that it is no longer accessible. Since the
 swap subsystem then writes the new data to the real swap device,
 this is the correct course of action to ensure coherency.
+4 -4
drivers/staging/ramster/zcache-main.c
···
        return oid;
 }

-static int zcache_frontswap_put_page(unsigned type, pgoff_t offset,
+static int zcache_frontswap_store(unsigned type, pgoff_t offset,
                                struct page *page)
 {
        u64 ind64 = (u64)offset;
···

 /* returns 0 if the page was successfully gotten from frontswap, -1 if
  * was not present (should never happen!) */
-static int zcache_frontswap_get_page(unsigned type, pgoff_t offset,
+static int zcache_frontswap_load(unsigned type, pgoff_t offset,
                                struct page *page)
 {
        u64 ind64 = (u64)offset;
···
 }

 static struct frontswap_ops zcache_frontswap_ops = {
-       .put_page = zcache_frontswap_put_page,
-       .get_page = zcache_frontswap_get_page,
+       .store = zcache_frontswap_store,
+       .load = zcache_frontswap_load,
        .invalidate_page = zcache_frontswap_flush_page,
        .invalidate_area = zcache_frontswap_flush_area,
        .init = zcache_frontswap_init
+5 -5
drivers/staging/zcache/zcache-main.c
···
  * Swizzling increases objects per swaptype, increasing tmem concurrency
  * for heavy swaploads. Later, larger nr_cpus -> larger SWIZ_BITS
  * Setting SWIZ_BITS to 27 basically reconstructs the swap entry from
- * frontswap_get_page(), but has side-effects. Hence using 8.
+ * frontswap_load(), but has side-effects. Hence using 8.
  */
 #define SWIZ_BITS 8
 #define SWIZ_MASK ((1 << SWIZ_BITS) - 1)
···
        return oid;
 }

-static int zcache_frontswap_put_page(unsigned type, pgoff_t offset,
+static int zcache_frontswap_store(unsigned type, pgoff_t offset,
                                struct page *page)
 {
        u64 ind64 = (u64)offset;
···

 /* returns 0 if the page was successfully gotten from frontswap, -1 if
  * was not present (should never happen!) */
-static int zcache_frontswap_get_page(unsigned type, pgoff_t offset,
+static int zcache_frontswap_load(unsigned type, pgoff_t offset,
                                struct page *page)
 {
        u64 ind64 = (u64)offset;
···
 }

 static struct frontswap_ops zcache_frontswap_ops = {
-       .put_page = zcache_frontswap_put_page,
-       .get_page = zcache_frontswap_get_page,
+       .store = zcache_frontswap_store,
+       .load = zcache_frontswap_load,
        .invalidate_page = zcache_frontswap_flush_page,
        .invalidate_area = zcache_frontswap_flush_area,
        .init = zcache_frontswap_init
+4 -4
drivers/xen/tmem.c
···
 }

 /* returns 0 if the page was successfully put into frontswap, -1 if not */
-static int tmem_frontswap_put_page(unsigned type, pgoff_t offset,
+static int tmem_frontswap_store(unsigned type, pgoff_t offset,
                        struct page *page)
 {
        u64 ind64 = (u64)offset;
···
  * returns 0 if the page was successfully gotten from frontswap, -1 if
  * was not present (should never happen!)
  */
-static int tmem_frontswap_get_page(unsigned type, pgoff_t offset,
+static int tmem_frontswap_load(unsigned type, pgoff_t offset,
                        struct page *page)
 {
        u64 ind64 = (u64)offset;
···
 __setup("nofrontswap", no_frontswap);

 static struct frontswap_ops __initdata tmem_frontswap_ops = {
-       .put_page = tmem_frontswap_put_page,
-       .get_page = tmem_frontswap_get_page,
+       .store = tmem_frontswap_store,
+       .load = tmem_frontswap_load,
        .invalidate_page = tmem_frontswap_flush_page,
        .invalidate_area = tmem_frontswap_flush_area,
        .init = tmem_frontswap_init
+8 -8
include/linux/frontswap.h
···

 struct frontswap_ops {
        void (*init)(unsigned);
-       int (*put_page)(unsigned, pgoff_t, struct page *);
-       int (*get_page)(unsigned, pgoff_t, struct page *);
+       int (*store)(unsigned, pgoff_t, struct page *);
+       int (*load)(unsigned, pgoff_t, struct page *);
        void (*invalidate_page)(unsigned, pgoff_t);
        void (*invalidate_area)(unsigned);
 };
···
 extern void frontswap_writethrough(bool);

 extern void __frontswap_init(unsigned type);
-extern int __frontswap_put_page(struct page *page);
-extern int __frontswap_get_page(struct page *page);
+extern int __frontswap_store(struct page *page);
+extern int __frontswap_load(struct page *page);
 extern void __frontswap_invalidate_page(unsigned, pgoff_t);
 extern void __frontswap_invalidate_area(unsigned);
···
 }
 #endif

-static inline int frontswap_put_page(struct page *page)
+static inline int frontswap_store(struct page *page)
 {
        int ret = -1;

        if (frontswap_enabled)
-               ret = __frontswap_put_page(page);
+               ret = __frontswap_store(page);
        return ret;
 }

-static inline int frontswap_get_page(struct page *page)
+static inline int frontswap_load(struct page *page)
 {
        int ret = -1;

        if (frontswap_enabled)
-               ret = __frontswap_get_page(page);
+               ret = __frontswap_load(page);
        return ret;
 }

+28 -28
mm/frontswap.c
···
 EXPORT_SYMBOL(frontswap_enabled);

 /*
- * If enabled, frontswap_put will return failure even on success. As
+ * If enabled, frontswap_store will return failure even on success. As
  * a result, the swap subsystem will always write the page to swap, in
  * effect converting frontswap into a writethrough cache. In this mode,
  * there is no direct reduction in swap writes, but a frontswap backend
···
  * properly configured). These are for information only so are not protected
  * against increment races.
  */
-static u64 frontswap_gets;
-static u64 frontswap_succ_puts;
-static u64 frontswap_failed_puts;
+static u64 frontswap_loads;
+static u64 frontswap_succ_stores;
+static u64 frontswap_failed_stores;
 static u64 frontswap_invalidates;

-static inline void inc_frontswap_gets(void) {
-       frontswap_gets++;
+static inline void inc_frontswap_loads(void) {
+       frontswap_loads++;
 }
-static inline void inc_frontswap_succ_puts(void) {
-       frontswap_succ_puts++;
+static inline void inc_frontswap_succ_stores(void) {
+       frontswap_succ_stores++;
 }
-static inline void inc_frontswap_failed_puts(void) {
-       frontswap_failed_puts++;
+static inline void inc_frontswap_failed_stores(void) {
+       frontswap_failed_stores++;
 }
 static inline void inc_frontswap_invalidates(void) {
        frontswap_invalidates++;
 }
 #else
-static inline void inc_frontswap_gets(void) { }
-static inline void inc_frontswap_succ_puts(void) { }
-static inline void inc_frontswap_failed_puts(void) { }
+static inline void inc_frontswap_loads(void) { }
+static inline void inc_frontswap_succ_stores(void) { }
+static inline void inc_frontswap_failed_stores(void) { }
 static inline void inc_frontswap_invalidates(void) { }
 #endif
 /*
···
 EXPORT_SYMBOL(__frontswap_init);

 /*
- * "Put" data from a page to frontswap and associate it with the page's
+ * "Store" data from a page to frontswap and associate it with the page's
  * swaptype and offset. Page must be locked and in the swap cache.
  * If frontswap already contains a page with matching swaptype and
  * offset, the frontswap implmentation may either overwrite the data and
  * return success or invalidate the page from frontswap and return failure.
  */
-int __frontswap_put_page(struct page *page)
+int __frontswap_store(struct page *page)
 {
        int ret = -1, dup = 0;
        swp_entry_t entry = { .val = page_private(page), };
···
        BUG_ON(sis == NULL);
        if (frontswap_test(sis, offset))
                dup = 1;
-       ret = (*frontswap_ops.put_page)(type, offset, page);
+       ret = (*frontswap_ops.store)(type, offset, page);
        if (ret == 0) {
                frontswap_set(sis, offset);
-               inc_frontswap_succ_puts();
+               inc_frontswap_succ_stores();
                if (!dup)
                        atomic_inc(&sis->frontswap_pages);
        } else if (dup) {
···
                 */
                frontswap_clear(sis, offset);
                atomic_dec(&sis->frontswap_pages);
-               inc_frontswap_failed_puts();
+               inc_frontswap_failed_stores();
        } else
-               inc_frontswap_failed_puts();
+               inc_frontswap_failed_stores();
        if (frontswap_writethrough_enabled)
                /* report failure so swap also writes to swap device */
                ret = -1;
        return ret;
 }
-EXPORT_SYMBOL(__frontswap_put_page);
+EXPORT_SYMBOL(__frontswap_store);

 /*
  * "Get" data from frontswap associated with swaptype and offset that were
  * specified when the data was put to frontswap and use it to fill the
  * specified page with data. Page must be locked and in the swap cache.
  */
-int __frontswap_get_page(struct page *page)
+int __frontswap_load(struct page *page)
 {
        int ret = -1;
        swp_entry_t entry = { .val = page_private(page), };
···
        BUG_ON(!PageLocked(page));
        BUG_ON(sis == NULL);
        if (frontswap_test(sis, offset))
-               ret = (*frontswap_ops.get_page)(type, offset, page);
+               ret = (*frontswap_ops.load)(type, offset, page);
        if (ret == 0)
-               inc_frontswap_gets();
+               inc_frontswap_loads();
        return ret;
 }
-EXPORT_SYMBOL(__frontswap_get_page);
+EXPORT_SYMBOL(__frontswap_load);

 /*
  * Invalidate any data from frontswap associated with the specified swaptype
···
        struct dentry *root = debugfs_create_dir("frontswap", NULL);
        if (root == NULL)
                return -ENXIO;
-       debugfs_create_u64("gets", S_IRUGO, root, &frontswap_gets);
-       debugfs_create_u64("succ_puts", S_IRUGO, root, &frontswap_succ_puts);
-       debugfs_create_u64("failed_puts", S_IRUGO, root,
-                       &frontswap_failed_puts);
+       debugfs_create_u64("loads", S_IRUGO, root, &frontswap_loads);
+       debugfs_create_u64("succ_stores", S_IRUGO, root, &frontswap_succ_stores);
+       debugfs_create_u64("failed_stores", S_IRUGO, root,
+                       &frontswap_failed_stores);
        debugfs_create_u64("invalidates", S_IRUGO,
                        root, &frontswap_invalidates);
 #endif
+2 -2
mm/page_io.c
···
                unlock_page(page);
                goto out;
        }
-       if (frontswap_put_page(page) == 0) {
+       if (frontswap_store(page) == 0) {
                set_page_writeback(page);
                unlock_page(page);
                end_page_writeback(page);
···

        VM_BUG_ON(!PageLocked(page));
        VM_BUG_ON(PageUptodate(page));
-       if (frontswap_get_page(page) == 0) {
+       if (frontswap_load(page) == 0) {
                SetPageUptodate(page);
                unlock_page(page);
                goto out;