Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

staging: ion: Add private buffer flag to skip page pooling on free

Currently, when we free a buffer it might actually just go back into a
heap-specific page pool rather than going back to the system. This poses
a problem because sometimes (like when we're running a shrinker in low
memory conditions) we need to force the memory associated with the
buffer to truly be relinquished to the system rather than just going
back into a page pool.

There isn't a use case for this flag by Ion clients, so make it a
private flag. The main use case right now is to provide a mechanism for
the deferred free code to force stale buffers to bypass page pooling.

Cc: Colin Cross <ccross@android.com>
Cc: Android Kernel Team <kernel-team@android.com>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
[jstultz: Minor commit subject tweak]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

Authored by Mitchel Humpherys, committed by Greg Kroah-Hartman
53a91c68 b9daf0b6

+59 -6
drivers/staging/android/ion/ion_heap.c (+16 -3)

···
 	return size;
 }
 
-size_t ion_heap_freelist_drain(struct ion_heap *heap, size_t size)
+static size_t _ion_heap_freelist_drain(struct ion_heap *heap, size_t size,
+				       bool skip_pools)
 {
 	struct ion_buffer *buffer;
 	size_t total_drained = 0;
···
 					  list);
 		list_del(&buffer->list);
 		heap->free_list_size -= buffer->size;
+		if (skip_pools)
+			buffer->private_flags |= ION_PRIV_FLAG_SHRINKER_FREE;
 		total_drained += buffer->size;
 		spin_unlock(&heap->free_lock);
 		ion_buffer_destroy(buffer);
···
 	spin_unlock(&heap->free_lock);
 
 	return total_drained;
+}
+
+size_t ion_heap_freelist_drain(struct ion_heap *heap, size_t size)
+{
+	return _ion_heap_freelist_drain(heap, size, false);
+}
+
+size_t ion_heap_freelist_shrink(struct ion_heap *heap, size_t size)
+{
+	return _ion_heap_freelist_drain(heap, size, true);
 }
 
 static int ion_heap_deferred_free(void *data)
···
 
 	/*
 	 * shrink the free list first, no point in zeroing the memory if we're
-	 * just going to reclaim it
+	 * just going to reclaim it. Also, skip any possible page pooling.
 	 */
 	if (heap->flags & ION_HEAP_FLAG_DEFER_FREE)
-		freed = ion_heap_freelist_drain(heap, to_scan * PAGE_SIZE) /
+		freed = ion_heap_freelist_shrink(heap, to_scan * PAGE_SIZE) /
 				PAGE_SIZE;
 
 	to_scan -= freed;
drivers/staging/android/ion/ion_priv.h (+41 -1)

···
  * @dev:		back pointer to the ion_device
  * @heap:		back pointer to the heap the buffer came from
  * @flags:		buffer specific flags
+ * @private_flags:	internal buffer specific flags
  * @size:		size of the buffer
  * @priv_virt:		private data to the buffer representable as
  *			a void *
···
 	struct ion_device *dev;
 	struct ion_heap *heap;
 	unsigned long flags;
+	unsigned long private_flags;
 	size_t size;
 	union {
 		void *priv_virt;
···
  * @map_user		map memory to userspace
  *
  * allocate, phys, and map_user return 0 on success, -errno on error.
- * map_dma and map_kernel return pointer on success, ERR_PTR on error.
+ * map_dma and map_kernel return pointer on success, ERR_PTR on
+ * error. @free will be called with ION_PRIV_FLAG_SHRINKER_FREE set in
+ * the buffer's private_flags when called from a shrinker. In that
+ * case, the pages being free'd must be truly free'd back to the
+ * system, not put in a page pool or otherwise cached.
  */
 struct ion_heap_ops {
 	int (*allocate)(struct ion_heap *heap,
···
  * heap flags - flags between the heaps and core ion code
  */
 #define ION_HEAP_FLAG_DEFER_FREE (1 << 0)
+
+/**
+ * private flags - flags internal to ion
+ */
+/*
+ * Buffer is being freed from a shrinker function. Skip any possible
+ * heap-specific caching mechanism (e.g. page pools). Guarantees that
+ * any buffer storage that came from the system allocator will be
+ * returned to the system allocator.
+ */
+#define ION_PRIV_FLAG_SHRINKER_FREE (1 << 0)
 
 /**
  * struct ion_heap - represents a heap in the system
···
  * total memory on the freelist.
  */
 size_t ion_heap_freelist_drain(struct ion_heap *heap, size_t size);
+
+/**
+ * ion_heap_freelist_shrink - drain the deferred free
+ *				list, skipping any heap-specific
+ *				pooling or caching mechanisms
+ *
+ * @heap:		the heap
+ * @size:		amount of memory to drain in bytes
+ *
+ * Drains the indicated amount of memory from the deferred freelist immediately.
+ * Returns the total amount freed. The total freed may be higher depending
+ * on the size of the items in the list, or lower if there is insufficient
+ * total memory on the freelist.
+ *
+ * Unlike with @ion_heap_freelist_drain, don't put any pages back into
+ * page pools or otherwise cache the pages. Everything must be
+ * genuinely free'd back to the system. If you're free'ing from a
+ * shrinker you probably want to use this. Note that this relies on
+ * the heap.ops.free callback honoring the ION_PRIV_FLAG_SHRINKER_FREE
+ * flag.
+ */
+size_t ion_heap_freelist_shrink(struct ion_heap *heap,
+				size_t size);
 
 /**
  * ion_heap_freelist_size - returns the size of the freelist in bytes
drivers/staging/android/ion/ion_system_heap.c (+2 -2)

···
 {
 	bool cached = ion_buffer_cached(buffer);
 
-	if (!cached) {
+	if (!cached && !(buffer->private_flags & ION_PRIV_FLAG_SHRINKER_FREE)) {
 		struct ion_page_pool *pool = heap->pools[order_to_index(order)];
 		ion_page_pool_free(pool, page);
 	} else {
···
 
 	/* uncached pages come from the page pools, zero them before returning
 	   for security purposes (other allocations are zerod at alloc time */
-	if (!cached)
+	if (!cached && !(buffer->private_flags & ION_PRIV_FLAG_SHRINKER_FREE))
 		ion_heap_buffer_zero(buffer);
 
 	for_each_sg(table->sgl, sg, table->nents, i)