Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

staging: ion: remove from the tree

The ION android code has long been marked to be removed, now that the
dma-buf support has been merged into the real part of the kernel.

It was thought that we could wait to remove the ion code at a later
time, but as the out-of-tree Android fork of the ion code has diverged
quite a bit, and any Android device using the ion interface uses that
forked version and not this in-tree version, the in-tree copy of the
code is abandoned and not used by anyone.

Combine this abandoned codebase with the need to make changes to it in
order to keep the kernel building properly, which then causes merge
issues when merging those changes into the out-of-tree Android code, and
you end up with two different groups of people (the in-kernel-tree
developers, and the Android kernel developers) who are both annoyed at
the current situation. Because of this problem, just drop the in-kernel
copy of the ion code now, as it's not used, and is only causing problems
for everyone involved.

Cc: "Arve Hjønnevåg" <arve@android.com>
Cc: "Christian König" <christian.koenig@amd.com>
Cc: Christian Brauner <christian@brauner.io>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Hridya Valsaraju <hridya@google.com>
Cc: Joel Fernandes <joel@joelfernandes.org>
Cc: John Stultz <john.stultz@linaro.org>
Cc: Laura Abbott <laura@labbott.name>
Cc: Martijn Coenen <maco@android.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Todd Kjos <tkjos@android.com>
Acked-by: Shuah Khan <skhan@linuxfoundation.org>
Link: https://lore.kernel.org/r/20200827123627.538189-1-gregkh@linuxfoundation.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

+1 -3362
-10 MAINTAINERS

--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1173,16 +1173,6 @@
 F: Documentation/devicetree/bindings/rtc/google,goldfish-rtc.txt
 F: drivers/rtc/rtc-goldfish.c
 
-ANDROID ION DRIVER
-M: Laura Abbott <labbott@redhat.com>
-M: Sumit Semwal <sumit.semwal@linaro.org>
-L: devel@driverdev.osuosl.org
-L: dri-devel@lists.freedesktop.org
-L: linaro-mm-sig@lists.linaro.org (moderated for non-subscribers)
-S: Supported
-F: drivers/staging/android/ion
-F: drivers/staging/android/uapi/ion.h
-
 AOA (Apple Onboard Audio) ALSA DRIVER
 M: Johannes Berg <johannes@sipsolutions.net>
 L: linuxppc-dev@lists.ozlabs.org
-2 drivers/staging/android/Kconfig

--- a/drivers/staging/android/Kconfig
+++ b/drivers/staging/android/Kconfig
@@ -14,8 +14,6 @@
 	  It is, in theory, a good memory allocator for low-memory devices,
 	  because it can discard shared memory units when under memory pressure.
 
-source "drivers/staging/android/ion/Kconfig"
-
 endif # if ANDROID
 
 endmenu
-2 drivers/staging/android/Makefile

--- a/drivers/staging/android/Makefile
+++ b/drivers/staging/android/Makefile
@@ -1,6 +1,4 @@
 # SPDX-License-Identifier: GPL-2.0
 ccflags-y += -I$(src)			# needed for trace events
 
-obj-y					+= ion/
-
 obj-$(CONFIG_ASHMEM)			+= ashmem.o
-5 drivers/staging/android/TODO

--- a/drivers/staging/android/TODO
+++ b/drivers/staging/android/TODO
@@ -4,10 +4,5 @@
 - add proper arch dependencies as needed
 - audit userspace interfaces to make sure they are sane
 
-
-ion/
- - Split /dev/ion up into multiple nodes (e.g. /dev/ion/heap0)
- - Better test framework (integration with VGEM was suggested)
-
 Please send patches to Greg Kroah-Hartman <greg@kroah.com> and Cc:
 Arve Hjønnevåg <arve@android.com> and Riley Andrews <riandrews@android.com>
-27 drivers/staging/android/ion/Kconfig (file deleted)

--- a/drivers/staging/android/ion/Kconfig
+++ /dev/null
@@ -1,27 +0,0 @@
-# SPDX-License-Identifier: GPL-2.0
-menuconfig ION
-	bool "Ion Memory Manager"
-	depends on HAS_DMA && MMU
-	select GENERIC_ALLOCATOR
-	select DMA_SHARED_BUFFER
-	help
-	  Choose this option to enable the ION Memory Manager,
-	  used by Android to efficiently allocate buffers
-	  from userspace that can be shared between drivers.
-	  If you're not using Android its probably safe to
-	  say N here.
-
-config ION_SYSTEM_HEAP
-	bool "Ion system heap"
-	depends on ION
-	help
-	  Choose this option to enable the Ion system heap.  The system heap
-	  is backed by pages from the buddy allocator.  If in doubt, say Y.
-
-config ION_CMA_HEAP
-	bool "Ion CMA heap support"
-	depends on ION && DMA_CMA
-	help
-	  Choose this option to enable CMA heaps with Ion.  This heap is backed
-	  by the Contiguous Memory Allocator (CMA).  If your system has these
-	  regions, you should say Y here.
-4 drivers/staging/android/ion/Makefile (file deleted)

--- a/drivers/staging/android/ion/Makefile
+++ /dev/null
@@ -1,4 +0,0 @@
-# SPDX-License-Identifier: GPL-2.0
-obj-$(CONFIG_ION) += ion.o ion_heap.o
-obj-$(CONFIG_ION_SYSTEM_HEAP) += ion_system_heap.o ion_page_pool.o
-obj-$(CONFIG_ION_CMA_HEAP) += ion_cma_heap.o
-649 drivers/staging/android/ion/ion.c (file deleted)

[ion.c removed in its entirety: the ION core, comprising
ion_buffer_create()/ion_buffer_destroy() with per-heap statistics,
the kernel-mapping refcounting (ion_buffer_kmap_get/put), the
dma-buf ops table (attach, detach, map/unmap_dma_buf, mmap, release,
begin/end_cpu_access with per-attachment sg-table syncing), ion_alloc()
which walked the heap plist in priority order and exported each buffer
as a dma-buf fd, the ION_IOC_ALLOC and ION_IOC_HEAP_QUERY ioctl handling
with reserved-field validation, debugfs shrinker controls,
ion_device_add_heap(), and registration of the "ion" misc device via
subsys_initcall(ion_device_create).]
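
The removed ion_ioctl() used a pattern worth noting: copy the user argument
into a union of all possible argument types in one copy_from_user(),
validate that reserved fields are zero before acting on anything, and clear
the buffer for read-only ioctls. Below is a minimal userspace model of that
validation step; the struct layouts are illustrative stand-ins, not the real
ion uapi definitions.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in for the ion heap-query argument (not the real layout). */
struct heap_query {
	uint32_t cnt;		/* number of heaps, filled in on return */
	uint32_t reserved0;
	uint64_t heaps;		/* user pointer to the output array */
	uint32_t reserved1;
	uint32_t reserved2;
};

/* One union covers every ioctl's argument, so a single copy suffices. */
union ioctl_arg {
	struct heap_query query;
};

/*
 * Mirrors validate_ioctl_arg(): reserved words must be zero so they can
 * be given meaning later without breaking old binaries.
 */
int validate_query(const union ioctl_arg *arg)
{
	if (arg->query.reserved0 || arg->query.reserved1 || arg->query.reserved2)
		return -1;	/* the kernel returned -EINVAL here */
	return 0;
}
```

Rejecting nonzero reserved fields up front is what lets a uapi struct grow
new fields later; ioctls that silently ignore them can never be extended.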
-302
drivers/staging/android/ion/ion.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - /* 3 - * ION Memory Allocator kernel interface header 4 - * 5 - * Copyright (C) 2011 Google, Inc. 6 - */ 7 - 8 - #ifndef _ION_H 9 - #define _ION_H 10 - 11 - #include <linux/device.h> 12 - #include <linux/dma-direction.h> 13 - #include <linux/kref.h> 14 - #include <linux/mm_types.h> 15 - #include <linux/mutex.h> 16 - #include <linux/rbtree.h> 17 - #include <linux/sched.h> 18 - #include <linux/shrinker.h> 19 - #include <linux/types.h> 20 - #include <linux/miscdevice.h> 21 - 22 - #include "../uapi/ion.h" 23 - 24 - /** 25 - * struct ion_buffer - metadata for a particular buffer 26 - * @list: element in list of deferred freeable buffers 27 - * @dev: back pointer to the ion_device 28 - * @heap: back pointer to the heap the buffer came from 29 - * @flags: buffer specific flags 30 - * @private_flags: internal buffer specific flags 31 - * @size: size of the buffer 32 - * @priv_virt: private data to the buffer representable as 33 - * a void * 34 - * @lock: protects the buffers cnt fields 35 - * @kmap_cnt: number of times the buffer is mapped to the kernel 36 - * @vaddr: the kernel mapping if kmap_cnt is not zero 37 - * @sg_table: the sg table for the buffer 38 - * @attachments: list of devices attached to this buffer 39 - */ 40 - struct ion_buffer { 41 - struct list_head list; 42 - struct ion_device *dev; 43 - struct ion_heap *heap; 44 - unsigned long flags; 45 - unsigned long private_flags; 46 - size_t size; 47 - void *priv_virt; 48 - struct mutex lock; 49 - int kmap_cnt; 50 - void *vaddr; 51 - struct sg_table *sg_table; 52 - struct list_head attachments; 53 - }; 54 - 55 - void ion_buffer_destroy(struct ion_buffer *buffer); 56 - 57 - /** 58 - * struct ion_device - the metadata of the ion device node 59 - * @dev: the actual misc device 60 - * @lock: rwsem protecting the tree of heaps and clients 61 - */ 62 - struct ion_device { 63 - struct miscdevice dev; 64 - struct rw_semaphore lock; 65 - struct plist_head heaps; 66 - 
struct dentry *debug_root; 67 - int heap_cnt; 68 - }; 69 - 70 - /** 71 - * struct ion_heap_ops - ops to operate on a given heap 72 - * @allocate: allocate memory 73 - * @free: free memory 74 - * @map_kernel map memory to the kernel 75 - * @unmap_kernel unmap memory to the kernel 76 - * @map_user map memory to userspace 77 - * 78 - * allocate, phys, and map_user return 0 on success, -errno on error. 79 - * map_dma and map_kernel return pointer on success, ERR_PTR on 80 - * error. @free will be called with ION_PRIV_FLAG_SHRINKER_FREE set in 81 - * the buffer's private_flags when called from a shrinker. In that 82 - * case, the pages being free'd must be truly free'd back to the 83 - * system, not put in a page pool or otherwise cached. 84 - */ 85 - struct ion_heap_ops { 86 - int (*allocate)(struct ion_heap *heap, 87 - struct ion_buffer *buffer, unsigned long len, 88 - unsigned long flags); 89 - void (*free)(struct ion_buffer *buffer); 90 - void * (*map_kernel)(struct ion_heap *heap, struct ion_buffer *buffer); 91 - void (*unmap_kernel)(struct ion_heap *heap, struct ion_buffer *buffer); 92 - int (*map_user)(struct ion_heap *mapper, struct ion_buffer *buffer, 93 - struct vm_area_struct *vma); 94 - int (*shrink)(struct ion_heap *heap, gfp_t gfp_mask, int nr_to_scan); 95 - }; 96 - 97 - /** 98 - * heap flags - flags between the heaps and core ion code 99 - */ 100 - #define ION_HEAP_FLAG_DEFER_FREE BIT(0) 101 - 102 - /** 103 - * private flags - flags internal to ion 104 - */ 105 - /* 106 - * Buffer is being freed from a shrinker function. Skip any possible 107 - * heap-specific caching mechanism (e.g. page pools). Guarantees that 108 - * any buffer storage that came from the system allocator will be 109 - * returned to the system allocator. 
110 - */ 111 - #define ION_PRIV_FLAG_SHRINKER_FREE BIT(0) 112 - 113 - /** 114 - * struct ion_heap - represents a heap in the system 115 - * @node: rb node to put the heap on the device's tree of heaps 116 - * @dev: back pointer to the ion_device 117 - * @type: type of heap 118 - * @ops: ops struct as above 119 - * @flags: flags 120 - * @id: id of heap, also indicates priority of this heap when 121 - * allocating. These are specified by platform data and 122 - * MUST be unique 123 - * @name: used for debugging 124 - * @shrinker: a shrinker for the heap 125 - * @free_list: free list head if deferred free is used 126 - * @free_list_size size of the deferred free list in bytes 127 - * @lock: protects the free list 128 - * @waitqueue: queue to wait on from deferred free thread 129 - * @task: task struct of deferred free thread 130 - * @num_of_buffers the number of currently allocated buffers 131 - * @num_of_alloc_bytes the number of allocated bytes 132 - * @alloc_bytes_wm the number of allocated bytes watermark 133 - * 134 - * Represents a pool of memory from which buffers can be made. In some 135 - * systems the only heap is regular system memory allocated via vmalloc. 136 - * On others, some blocks might require large physically contiguous buffers 137 - * that are allocated from a specially reserved heap. 
138 - */ 139 - struct ion_heap { 140 - struct plist_node node; 141 - struct ion_device *dev; 142 - enum ion_heap_type type; 143 - struct ion_heap_ops *ops; 144 - unsigned long flags; 145 - unsigned int id; 146 - const char *name; 147 - 148 - /* deferred free support */ 149 - struct shrinker shrinker; 150 - struct list_head free_list; 151 - size_t free_list_size; 152 - spinlock_t free_lock; 153 - wait_queue_head_t waitqueue; 154 - struct task_struct *task; 155 - 156 - /* heap statistics */ 157 - u64 num_of_buffers; 158 - u64 num_of_alloc_bytes; 159 - u64 alloc_bytes_wm; 160 - 161 - /* protect heap statistics */ 162 - spinlock_t stat_lock; 163 - }; 164 - 165 - /** 166 - * ion_device_add_heap - adds a heap to the ion device 167 - * @heap: the heap to add 168 - */ 169 - void ion_device_add_heap(struct ion_heap *heap); 170 - 171 - /** 172 - * some helpers for common operations on buffers using the sg_table 173 - * and vaddr fields 174 - */ 175 - void *ion_heap_map_kernel(struct ion_heap *heap, struct ion_buffer *buffer); 176 - void ion_heap_unmap_kernel(struct ion_heap *heap, struct ion_buffer *buffer); 177 - int ion_heap_map_user(struct ion_heap *heap, struct ion_buffer *buffer, 178 - struct vm_area_struct *vma); 179 - int ion_heap_buffer_zero(struct ion_buffer *buffer); 180 - 181 - /** 182 - * ion_heap_init_shrinker 183 - * @heap: the heap 184 - * 185 - * If a heap sets the ION_HEAP_FLAG_DEFER_FREE flag or defines the shrink op 186 - * this function will be called to setup a shrinker to shrink the freelists 187 - * and call the heap's shrink op. 188 - */ 189 - int ion_heap_init_shrinker(struct ion_heap *heap); 190 - 191 - /** 192 - * ion_heap_init_deferred_free -- initialize deferred free functionality 193 - * @heap: the heap 194 - * 195 - * If a heap sets the ION_HEAP_FLAG_DEFER_FREE flag this function will 196 - * be called to setup deferred frees. 
Calls to free the buffer will 197 - * return immediately and the actual free will occur some time later 198 - */ 199 - int ion_heap_init_deferred_free(struct ion_heap *heap); 200 - 201 - /** 202 - * ion_heap_freelist_add - add a buffer to the deferred free list 203 - * @heap: the heap 204 - * @buffer: the buffer 205 - * 206 - * Adds an item to the deferred freelist. 207 - */ 208 - void ion_heap_freelist_add(struct ion_heap *heap, struct ion_buffer *buffer); 209 - 210 - /** 211 - * ion_heap_freelist_drain - drain the deferred free list 212 - * @heap: the heap 213 - * @size: amount of memory to drain in bytes 214 - * 215 - * Drains the indicated amount of memory from the deferred freelist immediately. 216 - * Returns the total amount freed. The total freed may be higher depending 217 - * on the size of the items in the list, or lower if there is insufficient 218 - * total memory on the freelist. 219 - */ 220 - size_t ion_heap_freelist_drain(struct ion_heap *heap, size_t size); 221 - 222 - /** 223 - * ion_heap_freelist_shrink - drain the deferred free 224 - * list, skipping any heap-specific 225 - * pooling or caching mechanisms 226 - * 227 - * @heap: the heap 228 - * @size: amount of memory to drain in bytes 229 - * 230 - * Drains the indicated amount of memory from the deferred freelist immediately. 231 - * Returns the total amount freed. The total freed may be higher depending 232 - * on the size of the items in the list, or lower if there is insufficient 233 - * total memory on the freelist. 234 - * 235 - * Unlike with @ion_heap_freelist_drain, don't put any pages back into 236 - * page pools or otherwise cache the pages. Everything must be 237 - * genuinely free'd back to the system. If you're free'ing from a 238 - * shrinker you probably want to use this. Note that this relies on 239 - * the heap.ops.free callback honoring the ION_PRIV_FLAG_SHRINKER_FREE 240 - * flag. 
241 - */ 242 - size_t ion_heap_freelist_shrink(struct ion_heap *heap, 243 - size_t size); 244 - 245 - /** 246 - * ion_heap_freelist_size - returns the size of the freelist in bytes 247 - * @heap: the heap 248 - */ 249 - size_t ion_heap_freelist_size(struct ion_heap *heap); 250 - 251 - /** 252 - * functions for creating and destroying a heap pool -- allows you 253 - * to keep a pool of pre allocated memory to use from your heap. Keeping 254 - * a pool of memory that is ready for dma, ie any cached mapping have been 255 - * invalidated from the cache, provides a significant performance benefit on 256 - * many systems 257 - */ 258 - 259 - /** 260 - * struct ion_page_pool - pagepool struct 261 - * @high_count: number of highmem items in the pool 262 - * @low_count: number of lowmem items in the pool 263 - * @high_items: list of highmem items 264 - * @low_items: list of lowmem items 265 - * @mutex: lock protecting this struct and especially the count 266 - * item list 267 - * @gfp_mask: gfp_mask to use from alloc 268 - * @order: order of pages in the pool 269 - * @list: plist node for list of pools 270 - * 271 - * Allows you to keep a pool of pre allocated pages to use from your heap. 
272 - * Keeping a pool of pages that is ready for dma, ie any cached mapping have 273 - * been invalidated from the cache, provides a significant performance benefit 274 - * on many systems 275 - */ 276 - struct ion_page_pool { 277 - int high_count; 278 - int low_count; 279 - struct list_head high_items; 280 - struct list_head low_items; 281 - struct mutex mutex; 282 - gfp_t gfp_mask; 283 - unsigned int order; 284 - struct plist_node list; 285 - }; 286 - 287 - struct ion_page_pool *ion_page_pool_create(gfp_t gfp_mask, unsigned int order); 288 - void ion_page_pool_destroy(struct ion_page_pool *pool); 289 - struct page *ion_page_pool_alloc(struct ion_page_pool *pool); 290 - void ion_page_pool_free(struct ion_page_pool *pool, struct page *page); 291 - 292 - /** ion_page_pool_shrink - shrinks the size of the memory cached in the pool 293 - * @pool: the pool 294 - * @gfp_mask: the memory type to reclaim 295 - * @nr_to_scan: number of items to shrink in pages 296 - * 297 - * returns the number of items freed in pages 298 - */ 299 - int ion_page_pool_shrink(struct ion_page_pool *pool, gfp_t gfp_mask, 300 - int nr_to_scan); 301 - 302 - #endif /* _ION_H */
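The removed ion.h above documented the deferred-free API: buffers are queued on a per-heap freelist and a drain releases whole buffers until the requested byte count is reached (so the total freed may overshoot). That bookkeeping can be sketched in userspace C; `struct freelist` and `struct buf` here are hypothetical stand-ins, not the kernel types:

```c
#include <pthread.h>
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical userspace analog of the ion deferred freelist:
 * buffers are queued with their size, and a drain pops them off the
 * head until at least the requested number of bytes is released. */
struct buf { size_t size; struct buf *next; };

struct freelist {
    pthread_mutex_t lock;
    struct buf *head;
    size_t total;            /* mirrors heap->free_list_size */
};

static void freelist_add(struct freelist *fl, struct buf *b)
{
    pthread_mutex_lock(&fl->lock);
    b->next = fl->head;
    fl->head = b;
    fl->total += b->size;
    pthread_mutex_unlock(&fl->lock);
}

/* Drain up to @size bytes; @size == 0 means drain everything.
 * Returns bytes actually released, which may exceed @size because
 * whole buffers are freed — as ion_heap_freelist_drain() documented. */
static size_t freelist_drain(struct freelist *fl, size_t size)
{
    size_t drained = 0;

    pthread_mutex_lock(&fl->lock);
    if (size == 0)
        size = fl->total;
    while (fl->head && drained < size) {
        struct buf *b = fl->head;

        fl->head = b->next;
        fl->total -= b->size;
        drained += b->size;
        free(b);
    }
    pthread_mutex_unlock(&fl->lock);
    return drained;
}
```

Draining one byte from a list holding an 8 KiB buffer still frees the whole 8 KiB, matching the "total freed may be higher" note in the removed kerneldoc.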
-138
drivers/staging/android/ion/ion_cma_heap.c
··· 1 - // SPDX-License-Identifier: GPL-2.0 2 - /* 3 - * ION Memory Allocator CMA heap exporter 4 - * 5 - * Copyright (C) Linaro 2012 6 - * Author: <benjamin.gaignard@linaro.org> for ST-Ericsson. 7 - */ 8 - 9 - #include <linux/device.h> 10 - #include <linux/slab.h> 11 - #include <linux/errno.h> 12 - #include <linux/err.h> 13 - #include <linux/cma.h> 14 - #include <linux/scatterlist.h> 15 - #include <linux/highmem.h> 16 - 17 - #include "ion.h" 18 - 19 - struct ion_cma_heap { 20 - struct ion_heap heap; 21 - struct cma *cma; 22 - }; 23 - 24 - #define to_cma_heap(x) container_of(x, struct ion_cma_heap, heap) 25 - 26 - /* ION CMA heap operations functions */ 27 - static int ion_cma_allocate(struct ion_heap *heap, struct ion_buffer *buffer, 28 - unsigned long len, 29 - unsigned long flags) 30 - { 31 - struct ion_cma_heap *cma_heap = to_cma_heap(heap); 32 - struct sg_table *table; 33 - struct page *pages; 34 - unsigned long size = PAGE_ALIGN(len); 35 - unsigned long nr_pages = size >> PAGE_SHIFT; 36 - unsigned long align = get_order(size); 37 - int ret; 38 - 39 - if (align > CONFIG_CMA_ALIGNMENT) 40 - align = CONFIG_CMA_ALIGNMENT; 41 - 42 - pages = cma_alloc(cma_heap->cma, nr_pages, align, false); 43 - if (!pages) 44 - return -ENOMEM; 45 - 46 - if (PageHighMem(pages)) { 47 - unsigned long nr_clear_pages = nr_pages; 48 - struct page *page = pages; 49 - 50 - while (nr_clear_pages > 0) { 51 - void *vaddr = kmap_atomic(page); 52 - 53 - memset(vaddr, 0, PAGE_SIZE); 54 - kunmap_atomic(vaddr); 55 - page++; 56 - nr_clear_pages--; 57 - } 58 - } else { 59 - memset(page_address(pages), 0, size); 60 - } 61 - 62 - table = kmalloc(sizeof(*table), GFP_KERNEL); 63 - if (!table) 64 - goto err; 65 - 66 - ret = sg_alloc_table(table, 1, GFP_KERNEL); 67 - if (ret) 68 - goto free_mem; 69 - 70 - sg_set_page(table->sgl, pages, size, 0); 71 - 72 - buffer->priv_virt = pages; 73 - buffer->sg_table = table; 74 - return 0; 75 - 76 - free_mem: 77 - kfree(table); 78 - err: 79 - 
cma_release(cma_heap->cma, pages, nr_pages); 80 - return -ENOMEM; 81 - } 82 - 83 - static void ion_cma_free(struct ion_buffer *buffer) 84 - { 85 - struct ion_cma_heap *cma_heap = to_cma_heap(buffer->heap); 86 - struct page *pages = buffer->priv_virt; 87 - unsigned long nr_pages = PAGE_ALIGN(buffer->size) >> PAGE_SHIFT; 88 - 89 - /* release memory */ 90 - cma_release(cma_heap->cma, pages, nr_pages); 91 - /* release sg table */ 92 - sg_free_table(buffer->sg_table); 93 - kfree(buffer->sg_table); 94 - } 95 - 96 - static struct ion_heap_ops ion_cma_ops = { 97 - .allocate = ion_cma_allocate, 98 - .free = ion_cma_free, 99 - .map_user = ion_heap_map_user, 100 - .map_kernel = ion_heap_map_kernel, 101 - .unmap_kernel = ion_heap_unmap_kernel, 102 - }; 103 - 104 - static struct ion_heap *__ion_cma_heap_create(struct cma *cma) 105 - { 106 - struct ion_cma_heap *cma_heap; 107 - 108 - cma_heap = kzalloc(sizeof(*cma_heap), GFP_KERNEL); 109 - 110 - if (!cma_heap) 111 - return ERR_PTR(-ENOMEM); 112 - 113 - cma_heap->heap.ops = &ion_cma_ops; 114 - cma_heap->cma = cma; 115 - cma_heap->heap.type = ION_HEAP_TYPE_DMA; 116 - return &cma_heap->heap; 117 - } 118 - 119 - static int __ion_add_cma_heaps(struct cma *cma, void *data) 120 - { 121 - struct ion_heap *heap; 122 - 123 - heap = __ion_cma_heap_create(cma); 124 - if (IS_ERR(heap)) 125 - return PTR_ERR(heap); 126 - 127 - heap->name = cma_get_name(cma); 128 - 129 - ion_device_add_heap(heap); 130 - return 0; 131 - } 132 - 133 - static int ion_add_cma_heaps(void) 134 - { 135 - cma_for_each_area(__ion_add_cma_heaps, NULL); 136 - return 0; 137 - } 138 - device_initcall(ion_add_cma_heaps);
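ion_cma_allocate() above page-aligned the request, derived an allocation order with get_order(), and capped it at CONFIG_CMA_ALIGNMENT. A userspace sketch of that size-to-order arithmetic, assuming a 4 KiB page and an alignment cap of 8 (a common CONFIG_CMA_ALIGNMENT value — both constants are assumptions here):

```c
/* Sketch of the size -> allocation-order logic in ion_cma_allocate(),
 * redone in userspace. CMA_ALIGNMENT stands in for the kernel's
 * CONFIG_CMA_ALIGNMENT; PAGE_SHIFT_US assumes 4 KiB pages. */
#define PAGE_SHIFT_US 12
#define PAGE_SIZE_US  (1UL << PAGE_SHIFT_US)
#define CMA_ALIGNMENT 8

/* get_order(): smallest n such that (PAGE_SIZE << n) >= size */
static unsigned int get_order_us(unsigned long size)
{
    unsigned int order = 0;
    unsigned long span = PAGE_SIZE_US;

    while (span < size) {
        span <<= 1;
        order++;
    }
    return order;
}

static unsigned int cma_align_for(unsigned long len)
{
    /* PAGE_ALIGN(len) */
    unsigned long size = (len + PAGE_SIZE_US - 1) & ~(PAGE_SIZE_US - 1);
    unsigned int align = get_order_us(size);

    if (align > CMA_ALIGNMENT)
        align = CMA_ALIGNMENT;
    return align;
}
```

So a 16 MiB request, which would naively want order 12 alignment, gets capped to order 8, exactly the clamp the removed allocate path performed before calling cma_alloc().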
-286
drivers/staging/android/ion/ion_heap.c
··· 1 - // SPDX-License-Identifier: GPL-2.0 2 - /* 3 - * ION Memory Allocator generic heap helpers 4 - * 5 - * Copyright (C) 2011 Google, Inc. 6 - */ 7 - 8 - #include <linux/err.h> 9 - #include <linux/freezer.h> 10 - #include <linux/kthread.h> 11 - #include <linux/mm.h> 12 - #include <linux/rtmutex.h> 13 - #include <linux/sched.h> 14 - #include <uapi/linux/sched/types.h> 15 - #include <linux/scatterlist.h> 16 - #include <linux/vmalloc.h> 17 - 18 - #include "ion.h" 19 - 20 - void *ion_heap_map_kernel(struct ion_heap *heap, 21 - struct ion_buffer *buffer) 22 - { 23 - struct sg_page_iter piter; 24 - void *vaddr; 25 - pgprot_t pgprot; 26 - struct sg_table *table = buffer->sg_table; 27 - int npages = PAGE_ALIGN(buffer->size) / PAGE_SIZE; 28 - struct page **pages = vmalloc(array_size(npages, 29 - sizeof(struct page *))); 30 - struct page **tmp = pages; 31 - 32 - if (!pages) 33 - return ERR_PTR(-ENOMEM); 34 - 35 - if (buffer->flags & ION_FLAG_CACHED) 36 - pgprot = PAGE_KERNEL; 37 - else 38 - pgprot = pgprot_writecombine(PAGE_KERNEL); 39 - 40 - for_each_sgtable_page(table, &piter, 0) { 41 - BUG_ON(tmp - pages >= npages); 42 - *tmp++ = sg_page_iter_page(&piter); 43 - } 44 - 45 - vaddr = vmap(pages, npages, VM_MAP, pgprot); 46 - vfree(pages); 47 - 48 - if (!vaddr) 49 - return ERR_PTR(-ENOMEM); 50 - 51 - return vaddr; 52 - } 53 - 54 - void ion_heap_unmap_kernel(struct ion_heap *heap, 55 - struct ion_buffer *buffer) 56 - { 57 - vunmap(buffer->vaddr); 58 - } 59 - 60 - int ion_heap_map_user(struct ion_heap *heap, struct ion_buffer *buffer, 61 - struct vm_area_struct *vma) 62 - { 63 - struct sg_page_iter piter; 64 - struct sg_table *table = buffer->sg_table; 65 - unsigned long addr = vma->vm_start; 66 - int ret; 67 - 68 - for_each_sgtable_page(table, &piter, vma->vm_pgoff) { 69 - struct page *page = sg_page_iter_page(&piter); 70 - 71 - ret = remap_pfn_range(vma, addr, page_to_pfn(page), PAGE_SIZE, 72 - vma->vm_page_prot); 73 - if (ret) 74 - return ret; 75 - addr += PAGE_SIZE; 76 
- if (addr >= vma->vm_end) 77 - return 0; 78 - } 79 - 80 - return 0; 81 - } 82 - 83 - static int ion_heap_clear_pages(struct page **pages, int num, pgprot_t pgprot) 84 - { 85 - void *addr = vmap(pages, num, VM_MAP, pgprot); 86 - 87 - if (!addr) 88 - return -ENOMEM; 89 - memset(addr, 0, PAGE_SIZE * num); 90 - vunmap(addr); 91 - 92 - return 0; 93 - } 94 - 95 - static int ion_heap_sglist_zero(struct sg_table *sgt, pgprot_t pgprot) 96 - { 97 - int p = 0; 98 - int ret = 0; 99 - struct sg_page_iter piter; 100 - struct page *pages[32]; 101 - 102 - for_each_sgtable_page(sgt, &piter, 0) { 103 - pages[p++] = sg_page_iter_page(&piter); 104 - if (p == ARRAY_SIZE(pages)) { 105 - ret = ion_heap_clear_pages(pages, p, pgprot); 106 - if (ret) 107 - return ret; 108 - p = 0; 109 - } 110 - } 111 - if (p) 112 - ret = ion_heap_clear_pages(pages, p, pgprot); 113 - 114 - return ret; 115 - } 116 - 117 - int ion_heap_buffer_zero(struct ion_buffer *buffer) 118 - { 119 - struct sg_table *table = buffer->sg_table; 120 - pgprot_t pgprot; 121 - 122 - if (buffer->flags & ION_FLAG_CACHED) 123 - pgprot = PAGE_KERNEL; 124 - else 125 - pgprot = pgprot_writecombine(PAGE_KERNEL); 126 - 127 - return ion_heap_sglist_zero(table, pgprot); 128 - } 129 - 130 - void ion_heap_freelist_add(struct ion_heap *heap, struct ion_buffer *buffer) 131 - { 132 - spin_lock(&heap->free_lock); 133 - list_add(&buffer->list, &heap->free_list); 134 - heap->free_list_size += buffer->size; 135 - spin_unlock(&heap->free_lock); 136 - wake_up(&heap->waitqueue); 137 - } 138 - 139 - size_t ion_heap_freelist_size(struct ion_heap *heap) 140 - { 141 - size_t size; 142 - 143 - spin_lock(&heap->free_lock); 144 - size = heap->free_list_size; 145 - spin_unlock(&heap->free_lock); 146 - 147 - return size; 148 - } 149 - 150 - static size_t _ion_heap_freelist_drain(struct ion_heap *heap, size_t size, 151 - bool skip_pools) 152 - { 153 - struct ion_buffer *buffer; 154 - size_t total_drained = 0; 155 - 156 - if (ion_heap_freelist_size(heap) == 0) 
157 - return 0; 158 - 159 - spin_lock(&heap->free_lock); 160 - if (size == 0) 161 - size = heap->free_list_size; 162 - 163 - while (!list_empty(&heap->free_list)) { 164 - if (total_drained >= size) 165 - break; 166 - buffer = list_first_entry(&heap->free_list, struct ion_buffer, 167 - list); 168 - list_del(&buffer->list); 169 - heap->free_list_size -= buffer->size; 170 - if (skip_pools) 171 - buffer->private_flags |= ION_PRIV_FLAG_SHRINKER_FREE; 172 - total_drained += buffer->size; 173 - spin_unlock(&heap->free_lock); 174 - ion_buffer_destroy(buffer); 175 - spin_lock(&heap->free_lock); 176 - } 177 - spin_unlock(&heap->free_lock); 178 - 179 - return total_drained; 180 - } 181 - 182 - size_t ion_heap_freelist_drain(struct ion_heap *heap, size_t size) 183 - { 184 - return _ion_heap_freelist_drain(heap, size, false); 185 - } 186 - 187 - size_t ion_heap_freelist_shrink(struct ion_heap *heap, size_t size) 188 - { 189 - return _ion_heap_freelist_drain(heap, size, true); 190 - } 191 - 192 - static int ion_heap_deferred_free(void *data) 193 - { 194 - struct ion_heap *heap = data; 195 - 196 - while (true) { 197 - struct ion_buffer *buffer; 198 - 199 - wait_event_freezable(heap->waitqueue, 200 - ion_heap_freelist_size(heap) > 0); 201 - 202 - spin_lock(&heap->free_lock); 203 - if (list_empty(&heap->free_list)) { 204 - spin_unlock(&heap->free_lock); 205 - continue; 206 - } 207 - buffer = list_first_entry(&heap->free_list, struct ion_buffer, 208 - list); 209 - list_del(&buffer->list); 210 - heap->free_list_size -= buffer->size; 211 - spin_unlock(&heap->free_lock); 212 - ion_buffer_destroy(buffer); 213 - } 214 - 215 - return 0; 216 - } 217 - 218 - int ion_heap_init_deferred_free(struct ion_heap *heap) 219 - { 220 - INIT_LIST_HEAD(&heap->free_list); 221 - init_waitqueue_head(&heap->waitqueue); 222 - heap->task = kthread_run(ion_heap_deferred_free, heap, 223 - "%s", heap->name); 224 - if (IS_ERR(heap->task)) { 225 - pr_err("%s: creating thread for deferred free failed\n", 226 - 
__func__); 227 - return PTR_ERR_OR_ZERO(heap->task); 228 - } 229 - sched_set_normal(heap->task, 19); 230 - 231 - return 0; 232 - } 233 - 234 - static unsigned long ion_heap_shrink_count(struct shrinker *shrinker, 235 - struct shrink_control *sc) 236 - { 237 - struct ion_heap *heap = container_of(shrinker, struct ion_heap, 238 - shrinker); 239 - int total = 0; 240 - 241 - total = ion_heap_freelist_size(heap) / PAGE_SIZE; 242 - 243 - if (heap->ops->shrink) 244 - total += heap->ops->shrink(heap, sc->gfp_mask, 0); 245 - 246 - return total; 247 - } 248 - 249 - static unsigned long ion_heap_shrink_scan(struct shrinker *shrinker, 250 - struct shrink_control *sc) 251 - { 252 - struct ion_heap *heap = container_of(shrinker, struct ion_heap, 253 - shrinker); 254 - int freed = 0; 255 - int to_scan = sc->nr_to_scan; 256 - 257 - if (to_scan == 0) 258 - return 0; 259 - 260 - /* 261 - * shrink the free list first, no point in zeroing the memory if we're 262 - * just going to reclaim it. Also, skip any possible page pooling. 263 - */ 264 - if (heap->flags & ION_HEAP_FLAG_DEFER_FREE) 265 - freed = ion_heap_freelist_shrink(heap, to_scan * PAGE_SIZE) / 266 - PAGE_SIZE; 267 - 268 - to_scan -= freed; 269 - if (to_scan <= 0) 270 - return freed; 271 - 272 - if (heap->ops->shrink) 273 - freed += heap->ops->shrink(heap, sc->gfp_mask, to_scan); 274 - 275 - return freed; 276 - } 277 - 278 - int ion_heap_init_shrinker(struct ion_heap *heap) 279 - { 280 - heap->shrinker.count_objects = ion_heap_shrink_count; 281 - heap->shrinker.scan_objects = ion_heap_shrink_scan; 282 - heap->shrinker.seeks = DEFAULT_SEEKS; 283 - heap->shrinker.batch = 0; 284 - 285 - return register_shrinker(&heap->shrinker); 286 - }
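ion_heap_sglist_zero() above shows a pattern worth noting: rather than mapping the whole buffer at once, pages are gathered into a fixed 32-entry array and zeroed a batch at a time, bounding how much address space is ever mapped. A minimal userspace sketch of the same batching, where "pages" are plain heap buffers and the vmap/memset/vunmap cycle collapses to a memset:

```c
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

/* Sketch of the batching in ion_heap_sglist_zero(): collect pages into
 * a small fixed array and clear them BATCH at a time. The batch size
 * and toy page size are arbitrary choices for the sketch. */
#define BATCH 32
#define PG    64

static int clear_batch(char **pages, int num)
{
    /* stands in for vmap() + memset() + vunmap() over @num pages */
    for (int i = 0; i < num; i++)
        memset(pages[i], 0, PG);
    return 0;
}

static int clear_all(char **pages, int total)
{
    char *batch[BATCH];
    int p = 0, ret = 0;

    for (int i = 0; i < total; i++) {
        batch[p++] = pages[i];
        if (p == BATCH) {        /* flush a full batch */
            ret = clear_batch(batch, p);
            if (ret)
                return ret;
            p = 0;
        }
    }
    if (p)                        /* flush the partial tail */
        ret = clear_batch(batch, p);
    return ret;
}
```

The tail flush matters: 70 pages means two full batches plus a 6-page remainder, and the removed kernel helper handled that remainder with the same trailing `if (p)` check.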
-155
drivers/staging/android/ion/ion_page_pool.c
··· 1 - // SPDX-License-Identifier: GPL-2.0 2 - /* 3 - * ION Memory Allocator page pool helpers 4 - * 5 - * Copyright (C) 2011 Google, Inc. 6 - */ 7 - 8 - #include <linux/list.h> 9 - #include <linux/slab.h> 10 - #include <linux/swap.h> 11 - #include <linux/sched/signal.h> 12 - 13 - #include "ion.h" 14 - 15 - static inline struct page *ion_page_pool_alloc_pages(struct ion_page_pool *pool) 16 - { 17 - if (fatal_signal_pending(current)) 18 - return NULL; 19 - return alloc_pages(pool->gfp_mask, pool->order); 20 - } 21 - 22 - static void ion_page_pool_free_pages(struct ion_page_pool *pool, 23 - struct page *page) 24 - { 25 - __free_pages(page, pool->order); 26 - } 27 - 28 - static void ion_page_pool_add(struct ion_page_pool *pool, struct page *page) 29 - { 30 - mutex_lock(&pool->mutex); 31 - if (PageHighMem(page)) { 32 - list_add_tail(&page->lru, &pool->high_items); 33 - pool->high_count++; 34 - } else { 35 - list_add_tail(&page->lru, &pool->low_items); 36 - pool->low_count++; 37 - } 38 - 39 - mod_node_page_state(page_pgdat(page), NR_KERNEL_MISC_RECLAIMABLE, 40 - 1 << pool->order); 41 - mutex_unlock(&pool->mutex); 42 - } 43 - 44 - static struct page *ion_page_pool_remove(struct ion_page_pool *pool, bool high) 45 - { 46 - struct page *page; 47 - 48 - if (high) { 49 - BUG_ON(!pool->high_count); 50 - page = list_first_entry(&pool->high_items, struct page, lru); 51 - pool->high_count--; 52 - } else { 53 - BUG_ON(!pool->low_count); 54 - page = list_first_entry(&pool->low_items, struct page, lru); 55 - pool->low_count--; 56 - } 57 - 58 - list_del(&page->lru); 59 - mod_node_page_state(page_pgdat(page), NR_KERNEL_MISC_RECLAIMABLE, 60 - -(1 << pool->order)); 61 - return page; 62 - } 63 - 64 - struct page *ion_page_pool_alloc(struct ion_page_pool *pool) 65 - { 66 - struct page *page = NULL; 67 - 68 - BUG_ON(!pool); 69 - 70 - mutex_lock(&pool->mutex); 71 - if (pool->high_count) 72 - page = ion_page_pool_remove(pool, true); 73 - else if (pool->low_count) 74 - page = 
ion_page_pool_remove(pool, false); 75 - mutex_unlock(&pool->mutex); 76 - 77 - if (!page) 78 - page = ion_page_pool_alloc_pages(pool); 79 - 80 - return page; 81 - } 82 - 83 - void ion_page_pool_free(struct ion_page_pool *pool, struct page *page) 84 - { 85 - BUG_ON(pool->order != compound_order(page)); 86 - 87 - ion_page_pool_add(pool, page); 88 - } 89 - 90 - static int ion_page_pool_total(struct ion_page_pool *pool, bool high) 91 - { 92 - int count = pool->low_count; 93 - 94 - if (high) 95 - count += pool->high_count; 96 - 97 - return count << pool->order; 98 - } 99 - 100 - int ion_page_pool_shrink(struct ion_page_pool *pool, gfp_t gfp_mask, 101 - int nr_to_scan) 102 - { 103 - int freed = 0; 104 - bool high; 105 - 106 - if (current_is_kswapd()) 107 - high = true; 108 - else 109 - high = !!(gfp_mask & __GFP_HIGHMEM); 110 - 111 - if (nr_to_scan == 0) 112 - return ion_page_pool_total(pool, high); 113 - 114 - while (freed < nr_to_scan) { 115 - struct page *page; 116 - 117 - mutex_lock(&pool->mutex); 118 - if (pool->low_count) { 119 - page = ion_page_pool_remove(pool, false); 120 - } else if (high && pool->high_count) { 121 - page = ion_page_pool_remove(pool, true); 122 - } else { 123 - mutex_unlock(&pool->mutex); 124 - break; 125 - } 126 - mutex_unlock(&pool->mutex); 127 - ion_page_pool_free_pages(pool, page); 128 - freed += (1 << pool->order); 129 - } 130 - 131 - return freed; 132 - } 133 - 134 - struct ion_page_pool *ion_page_pool_create(gfp_t gfp_mask, unsigned int order) 135 - { 136 - struct ion_page_pool *pool = kmalloc(sizeof(*pool), GFP_KERNEL); 137 - 138 - if (!pool) 139 - return NULL; 140 - pool->high_count = 0; 141 - pool->low_count = 0; 142 - INIT_LIST_HEAD(&pool->low_items); 143 - INIT_LIST_HEAD(&pool->high_items); 144 - pool->gfp_mask = gfp_mask | __GFP_COMP; 145 - pool->order = order; 146 - mutex_init(&pool->mutex); 147 - plist_node_init(&pool->list, order); 148 - 149 - return pool; 150 - } 151 - 152 - void ion_page_pool_destroy(struct ion_page_pool *pool) 
153 - { 154 - kfree(pool); 155 - }
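The page pool above kept separate highmem/lowmem lists and had one entry point doing double duty: ion_page_pool_shrink() with nr_to_scan == 0 only reports the pool size in pages, otherwise it frees lowmem blocks first and touches highmem only when the caller may reclaim it (kswapd, or a __GFP_HIGHMEM request). A userspace sketch of just that accounting, with counts in place of the real page lists:

```c
/* Userspace analog of the ion page pool accounting: a pool of
 * order-N blocks with separate high/low counts. shrink(..., 0)
 * reports the pool size in pages; otherwise low blocks go first,
 * then high ones if @allow_high is set. */
struct pool { int order; int high_count; int low_count; };

static int pool_shrink(struct pool *p, int allow_high, int nr_to_scan)
{
    int freed = 0;

    if (nr_to_scan == 0)    /* count-only query, as in the kernel code */
        return (p->low_count + (allow_high ? p->high_count : 0))
                << p->order;

    while (freed < nr_to_scan) {
        if (p->low_count)
            p->low_count--;
        else if (allow_high && p->high_count)
            p->high_count--;
        else
            break;          /* nothing reclaimable left */
        freed += 1 << p->order;
    }
    return freed;
}
```

Because whole order-N blocks are freed, the return value is always a multiple of `1 << order` pages, which is why the shrinker callers above divide and multiply by PAGE_SIZE around these calls.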
-377
drivers/staging/android/ion/ion_system_heap.c
··· 1 - // SPDX-License-Identifier: GPL-2.0 2 - /* 3 - * ION Memory Allocator system heap exporter 4 - * 5 - * Copyright (C) 2011 Google, Inc. 6 - */ 7 - 8 - #include <asm/page.h> 9 - #include <linux/dma-mapping.h> 10 - #include <linux/err.h> 11 - #include <linux/highmem.h> 12 - #include <linux/mm.h> 13 - #include <linux/scatterlist.h> 14 - #include <linux/slab.h> 15 - #include <linux/vmalloc.h> 16 - 17 - #include "ion.h" 18 - 19 - #define NUM_ORDERS ARRAY_SIZE(orders) 20 - 21 - static gfp_t high_order_gfp_flags = (GFP_HIGHUSER | __GFP_ZERO | __GFP_NOWARN | 22 - __GFP_NORETRY) & ~__GFP_RECLAIM; 23 - static gfp_t low_order_gfp_flags = GFP_HIGHUSER | __GFP_ZERO; 24 - static const unsigned int orders[] = {8, 4, 0}; 25 - 26 - static int order_to_index(unsigned int order) 27 - { 28 - int i; 29 - 30 - for (i = 0; i < NUM_ORDERS; i++) 31 - if (order == orders[i]) 32 - return i; 33 - BUG(); 34 - return -1; 35 - } 36 - 37 - static inline unsigned int order_to_size(int order) 38 - { 39 - return PAGE_SIZE << order; 40 - } 41 - 42 - struct ion_system_heap { 43 - struct ion_heap heap; 44 - struct ion_page_pool *pools[NUM_ORDERS]; 45 - }; 46 - 47 - static struct page *alloc_buffer_page(struct ion_system_heap *heap, 48 - struct ion_buffer *buffer, 49 - unsigned long order) 50 - { 51 - struct ion_page_pool *pool = heap->pools[order_to_index(order)]; 52 - 53 - return ion_page_pool_alloc(pool); 54 - } 55 - 56 - static void free_buffer_page(struct ion_system_heap *heap, 57 - struct ion_buffer *buffer, struct page *page) 58 - { 59 - struct ion_page_pool *pool; 60 - unsigned int order = compound_order(page); 61 - 62 - /* go to system */ 63 - if (buffer->private_flags & ION_PRIV_FLAG_SHRINKER_FREE) { 64 - __free_pages(page, order); 65 - return; 66 - } 67 - 68 - pool = heap->pools[order_to_index(order)]; 69 - 70 - ion_page_pool_free(pool, page); 71 - } 72 - 73 - static struct page *alloc_largest_available(struct ion_system_heap *heap, 74 - struct ion_buffer *buffer, 75 - unsigned long 
size, 76 - unsigned int max_order) 77 - { 78 - struct page *page; 79 - int i; 80 - 81 - for (i = 0; i < NUM_ORDERS; i++) { 82 - if (size < order_to_size(orders[i])) 83 - continue; 84 - if (max_order < orders[i]) 85 - continue; 86 - 87 - page = alloc_buffer_page(heap, buffer, orders[i]); 88 - if (!page) 89 - continue; 90 - 91 - return page; 92 - } 93 - 94 - return NULL; 95 - } 96 - 97 - static int ion_system_heap_allocate(struct ion_heap *heap, 98 - struct ion_buffer *buffer, 99 - unsigned long size, 100 - unsigned long flags) 101 - { 102 - struct ion_system_heap *sys_heap = container_of(heap, 103 - struct ion_system_heap, 104 - heap); 105 - struct sg_table *table; 106 - struct scatterlist *sg; 107 - struct list_head pages; 108 - struct page *page, *tmp_page; 109 - int i = 0; 110 - unsigned long size_remaining = PAGE_ALIGN(size); 111 - unsigned int max_order = orders[0]; 112 - 113 - if (size / PAGE_SIZE > totalram_pages() / 2) 114 - return -ENOMEM; 115 - 116 - INIT_LIST_HEAD(&pages); 117 - while (size_remaining > 0) { 118 - page = alloc_largest_available(sys_heap, buffer, size_remaining, 119 - max_order); 120 - if (!page) 121 - goto free_pages; 122 - list_add_tail(&page->lru, &pages); 123 - size_remaining -= page_size(page); 124 - max_order = compound_order(page); 125 - i++; 126 - } 127 - table = kmalloc(sizeof(*table), GFP_KERNEL); 128 - if (!table) 129 - goto free_pages; 130 - 131 - if (sg_alloc_table(table, i, GFP_KERNEL)) 132 - goto free_table; 133 - 134 - sg = table->sgl; 135 - list_for_each_entry_safe(page, tmp_page, &pages, lru) { 136 - sg_set_page(sg, page, page_size(page), 0); 137 - sg = sg_next(sg); 138 - list_del(&page->lru); 139 - } 140 - 141 - buffer->sg_table = table; 142 - return 0; 143 - 144 - free_table: 145 - kfree(table); 146 - free_pages: 147 - list_for_each_entry_safe(page, tmp_page, &pages, lru) 148 - free_buffer_page(sys_heap, buffer, page); 149 - return -ENOMEM; 150 - } 151 - 152 - static void ion_system_heap_free(struct ion_buffer *buffer) 
153 - { 154 - struct ion_system_heap *sys_heap = container_of(buffer->heap, 155 - struct ion_system_heap, 156 - heap); 157 - struct sg_table *table = buffer->sg_table; 158 - struct scatterlist *sg; 159 - int i; 160 - 161 - /* zero the buffer before goto page pool */ 162 - if (!(buffer->private_flags & ION_PRIV_FLAG_SHRINKER_FREE)) 163 - ion_heap_buffer_zero(buffer); 164 - 165 - for_each_sgtable_sg(table, sg, i) 166 - free_buffer_page(sys_heap, buffer, sg_page(sg)); 167 - sg_free_table(table); 168 - kfree(table); 169 - } 170 - 171 - static int ion_system_heap_shrink(struct ion_heap *heap, gfp_t gfp_mask, 172 - int nr_to_scan) 173 - { 174 - struct ion_page_pool *pool; 175 - struct ion_system_heap *sys_heap; 176 - int nr_total = 0; 177 - int i, nr_freed; 178 - int only_scan = 0; 179 - 180 - sys_heap = container_of(heap, struct ion_system_heap, heap); 181 - 182 - if (!nr_to_scan) 183 - only_scan = 1; 184 - 185 - for (i = 0; i < NUM_ORDERS; i++) { 186 - pool = sys_heap->pools[i]; 187 - 188 - if (only_scan) { 189 - nr_total += ion_page_pool_shrink(pool, 190 - gfp_mask, 191 - nr_to_scan); 192 - 193 - } else { 194 - nr_freed = ion_page_pool_shrink(pool, 195 - gfp_mask, 196 - nr_to_scan); 197 - nr_to_scan -= nr_freed; 198 - nr_total += nr_freed; 199 - if (nr_to_scan <= 0) 200 - break; 201 - } 202 - } 203 - return nr_total; 204 - } 205 - 206 - static struct ion_heap_ops system_heap_ops = { 207 - .allocate = ion_system_heap_allocate, 208 - .free = ion_system_heap_free, 209 - .map_kernel = ion_heap_map_kernel, 210 - .unmap_kernel = ion_heap_unmap_kernel, 211 - .map_user = ion_heap_map_user, 212 - .shrink = ion_system_heap_shrink, 213 - }; 214 - 215 - static void ion_system_heap_destroy_pools(struct ion_page_pool **pools) 216 - { 217 - int i; 218 - 219 - for (i = 0; i < NUM_ORDERS; i++) 220 - if (pools[i]) 221 - ion_page_pool_destroy(pools[i]); 222 - } 223 - 224 - static int ion_system_heap_create_pools(struct ion_page_pool **pools) 225 - { 226 - int i; 227 - 228 - for (i = 0; 
i < NUM_ORDERS; i++) { 229 - struct ion_page_pool *pool; 230 - gfp_t gfp_flags = low_order_gfp_flags; 231 - 232 - if (orders[i] > 4) 233 - gfp_flags = high_order_gfp_flags; 234 - 235 - pool = ion_page_pool_create(gfp_flags, orders[i]); 236 - if (!pool) 237 - goto err_create_pool; 238 - pools[i] = pool; 239 - } 240 - 241 - return 0; 242 - 243 - err_create_pool: 244 - ion_system_heap_destroy_pools(pools); 245 - return -ENOMEM; 246 - } 247 - 248 - static struct ion_heap *__ion_system_heap_create(void) 249 - { 250 - struct ion_system_heap *heap; 251 - 252 - heap = kzalloc(sizeof(*heap), GFP_KERNEL); 253 - if (!heap) 254 - return ERR_PTR(-ENOMEM); 255 - heap->heap.ops = &system_heap_ops; 256 - heap->heap.type = ION_HEAP_TYPE_SYSTEM; 257 - heap->heap.flags = ION_HEAP_FLAG_DEFER_FREE; 258 - 259 - if (ion_system_heap_create_pools(heap->pools)) 260 - goto free_heap; 261 - 262 - return &heap->heap; 263 - 264 - free_heap: 265 - kfree(heap); 266 - return ERR_PTR(-ENOMEM); 267 - } 268 - 269 - static int ion_system_heap_create(void) 270 - { 271 - struct ion_heap *heap; 272 - 273 - heap = __ion_system_heap_create(); 274 - if (IS_ERR(heap)) 275 - return PTR_ERR(heap); 276 - heap->name = "ion_system_heap"; 277 - 278 - ion_device_add_heap(heap); 279 - 280 - return 0; 281 - } 282 - device_initcall(ion_system_heap_create); 283 - 284 - static int ion_system_contig_heap_allocate(struct ion_heap *heap, 285 - struct ion_buffer *buffer, 286 - unsigned long len, 287 - unsigned long flags) 288 - { 289 - int order = get_order(len); 290 - struct page *page; 291 - struct sg_table *table; 292 - unsigned long i; 293 - int ret; 294 - 295 - page = alloc_pages(low_order_gfp_flags | __GFP_NOWARN, order); 296 - if (!page) 297 - return -ENOMEM; 298 - 299 - split_page(page, order); 300 - 301 - len = PAGE_ALIGN(len); 302 - for (i = len >> PAGE_SHIFT; i < (1 << order); i++) 303 - __free_page(page + i); 304 - 305 - table = kmalloc(sizeof(*table), GFP_KERNEL); 306 - if (!table) { 307 - ret = -ENOMEM; 308 - 
goto free_pages; 309 - } 310 - 311 - ret = sg_alloc_table(table, 1, GFP_KERNEL); 312 - if (ret) 313 - goto free_table; 314 - 315 - sg_set_page(table->sgl, page, len, 0); 316 - 317 - buffer->sg_table = table; 318 - 319 - return 0; 320 - 321 - free_table: 322 - kfree(table); 323 - free_pages: 324 - for (i = 0; i < len >> PAGE_SHIFT; i++) 325 - __free_page(page + i); 326 - 327 - return ret; 328 - } 329 - 330 - static void ion_system_contig_heap_free(struct ion_buffer *buffer) 331 - { 332 - struct sg_table *table = buffer->sg_table; 333 - struct page *page = sg_page(table->sgl); 334 - unsigned long pages = PAGE_ALIGN(buffer->size) >> PAGE_SHIFT; 335 - unsigned long i; 336 - 337 - for (i = 0; i < pages; i++) 338 - __free_page(page + i); 339 - sg_free_table(table); 340 - kfree(table); 341 - } 342 - 343 - static struct ion_heap_ops kmalloc_ops = { 344 - .allocate = ion_system_contig_heap_allocate, 345 - .free = ion_system_contig_heap_free, 346 - .map_kernel = ion_heap_map_kernel, 347 - .unmap_kernel = ion_heap_unmap_kernel, 348 - .map_user = ion_heap_map_user, 349 - }; 350 - 351 - static struct ion_heap *__ion_system_contig_heap_create(void) 352 - { 353 - struct ion_heap *heap; 354 - 355 - heap = kzalloc(sizeof(*heap), GFP_KERNEL); 356 - if (!heap) 357 - return ERR_PTR(-ENOMEM); 358 - heap->ops = &kmalloc_ops; 359 - heap->type = ION_HEAP_TYPE_SYSTEM_CONTIG; 360 - heap->name = "ion_system_contig_heap"; 361 - 362 - return heap; 363 - } 364 - 365 - static int ion_system_contig_heap_create(void) 366 - { 367 - struct ion_heap *heap; 368 - 369 - heap = __ion_system_contig_heap_create(); 370 - if (IS_ERR(heap)) 371 - return PTR_ERR(heap); 372 - 373 - ion_device_add_heap(heap); 374 - 375 - return 0; 376 - } 377 - device_initcall(ion_system_contig_heap_create);
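alloc_largest_available() in the system heap above walks the orders table {8, 4, 0} and takes the largest chunk that both fits the remaining size and does not exceed max_order (which ratchets down as allocations succeed, keeping the scatterlist sorted largest-first). The selection logic alone, sketched in userspace C assuming 4 KiB pages and that allocation always succeeds:

```c
/* Order selection from alloc_largest_available(): largest of
 * {8, 4, 0} (1 MiB / 64 KiB / 4 KiB with 4 KiB pages) that fits
 * @remaining and respects @max_order. Returns -1 when less than a
 * page is left. */
#define PAGE_SZ 4096UL

static const unsigned int orders_us[] = {8, 4, 0};

static int pick_order(unsigned long remaining, unsigned int max_order)
{
    for (int i = 0; i < 3; i++) {
        if (remaining < (PAGE_SZ << orders_us[i]))
            continue;               /* chunk larger than what's left */
        if (max_order < orders_us[i])
            continue;               /* previous allocation capped us */
        return (int)orders_us[i];
    }
    return -1;
}
```

For a 2 MiB buffer this picks order 8 first; a 100-page remainder falls through to order 4; the last partial page is order 0, mirroring how the removed allocate loop decomposed a request.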
-127
drivers/staging/android/uapi/ion.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ 2 - /* 3 - * drivers/staging/android/uapi/ion.h 4 - * 5 - * Copyright (C) 2011 Google, Inc. 6 - */ 7 - 8 - #ifndef _UAPI_LINUX_ION_H 9 - #define _UAPI_LINUX_ION_H 10 - 11 - #include <linux/ioctl.h> 12 - #include <linux/types.h> 13 - 14 - /** 15 - * enum ion_heap_types - list of all possible types of heaps 16 - * @ION_HEAP_TYPE_SYSTEM: memory allocated via vmalloc 17 - * @ION_HEAP_TYPE_SYSTEM_CONTIG: memory allocated via kmalloc 18 - * @ION_HEAP_TYPE_CARVEOUT: memory allocated from a prereserved 19 - * carveout heap, allocations are physically 20 - * contiguous 21 - * @ION_HEAP_TYPE_DMA: memory allocated via DMA API 22 - * @ION_NUM_HEAPS: helper for iterating over heaps, a bit mask 23 - * is used to identify the heaps, so only 32 24 - * total heap types are supported 25 - */ 26 - enum ion_heap_type { 27 - ION_HEAP_TYPE_SYSTEM, 28 - ION_HEAP_TYPE_SYSTEM_CONTIG, 29 - ION_HEAP_TYPE_CARVEOUT, 30 - ION_HEAP_TYPE_CHUNK, 31 - ION_HEAP_TYPE_DMA, 32 - ION_HEAP_TYPE_CUSTOM, /* 33 - * must be last so device specific heaps always 34 - * are at the end of this enum 35 - */ 36 - }; 37 - 38 - #define ION_NUM_HEAP_IDS (sizeof(unsigned int) * 8) 39 - 40 - /** 41 - * allocation flags - the lower 16 bits are used by core ion, the upper 16 42 - * bits are reserved for use by the heaps themselves. 
43 - */ 44 - 45 - /* 46 - * mappings of this buffer should be cached, ion will do cache maintenance 47 - * when the buffer is mapped for dma 48 - */ 49 - #define ION_FLAG_CACHED 1 50 - 51 - /** 52 - * DOC: Ion Userspace API 53 - * 54 - * create a client by opening /dev/ion 55 - * most operations handled via following ioctls 56 - * 57 - */ 58 - 59 - /** 60 - * struct ion_allocation_data - metadata passed from userspace for allocations 61 - * @len: size of the allocation 62 - * @heap_id_mask: mask of heap ids to allocate from 63 - * @flags: flags passed to heap 64 - * @handle: pointer that will be populated with a cookie to use to 65 - * refer to this allocation 66 - * 67 - * Provided by userspace as an argument to the ioctl 68 - */ 69 - struct ion_allocation_data { 70 - __u64 len; 71 - __u32 heap_id_mask; 72 - __u32 flags; 73 - __u32 fd; 74 - __u32 unused; 75 - }; 76 - 77 - #define MAX_HEAP_NAME 32 78 - 79 - /** 80 - * struct ion_heap_data - data about a heap 81 - * @name - first 32 characters of the heap name 82 - * @type - heap type 83 - * @heap_id - heap id for the heap 84 - */ 85 - struct ion_heap_data { 86 - char name[MAX_HEAP_NAME]; 87 - __u32 type; 88 - __u32 heap_id; 89 - __u32 reserved0; 90 - __u32 reserved1; 91 - __u32 reserved2; 92 - }; 93 - 94 - /** 95 - * struct ion_heap_query - collection of data about all heaps 96 - * @cnt - total number of heaps to be copied 97 - * @heaps - buffer to copy heap data 98 - */ 99 - struct ion_heap_query { 100 - __u32 cnt; /* Total number of heaps to be copied */ 101 - __u32 reserved0; /* align to 64bits */ 102 - __u64 heaps; /* buffer to be populated */ 103 - __u32 reserved1; 104 - __u32 reserved2; 105 - }; 106 - 107 - #define ION_IOC_MAGIC 'I' 108 - 109 - /** 110 - * DOC: ION_IOC_ALLOC - allocate memory 111 - * 112 - * Takes an ion_allocation_data struct and returns it with the handle field 113 - * populated with the opaque handle for the allocation. 
114 - */ 115 - #define ION_IOC_ALLOC _IOWR(ION_IOC_MAGIC, 0, \ 116 - struct ion_allocation_data) 117 - 118 - /** 119 - * DOC: ION_IOC_HEAP_QUERY - information about available heaps 120 - * 121 - * Takes an ion_heap_query structure and populates information about 122 - * available Ion heaps. 123 - */ 124 - #define ION_IOC_HEAP_QUERY _IOWR(ION_IOC_MAGIC, 8, \ 125 - struct ion_heap_query) 126 - 127 - #endif /* _UAPI_LINUX_ION_H */
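The ioctl numbers being removed above were built with the standard asm-generic `_IOWR` encoding: 8 number bits, 8 type bits, 14 size bits, 2 direction bits. Replicating that encoding in plain C shows how ION_IOC_ALLOC decomposes without any kernel headers (the `_us` struct mirrors the removed `ion_allocation_data` layout):

```c
#include <stdint.h>

/* asm-generic ioctl layout: | dir:2 | size:14 | type:8 | nr:8 | */
#define IOC_NRBITS    8
#define IOC_TYPEBITS  8
#define IOC_SIZEBITS  14
#define IOC_NRSHIFT   0
#define IOC_TYPESHIFT (IOC_NRSHIFT + IOC_NRBITS)       /* 8 */
#define IOC_SIZESHIFT (IOC_TYPESHIFT + IOC_TYPEBITS)   /* 16 */
#define IOC_DIRSHIFT  (IOC_SIZESHIFT + IOC_SIZEBITS)   /* 30 */
#define IOC_WRITE     1U
#define IOC_READ      2U

#define IOC(dir, type, nr, size)                         \
    (((uint32_t)(dir)  << IOC_DIRSHIFT)  |               \
     ((uint32_t)(type) << IOC_TYPESHIFT) |               \
     ((uint32_t)(nr)   << IOC_NRSHIFT)   |               \
     ((uint32_t)(size) << IOC_SIZESHIFT))

/* Same field layout as the removed struct ion_allocation_data:
 * one __u64 followed by four __u32s, 24 bytes with no padding. */
struct ion_allocation_data_us {
    uint64_t len;
    uint32_t heap_id_mask;
    uint32_t flags;
    uint32_t fd;
    uint32_t unused;
};

static uint32_t ion_ioc_alloc(void)
{
    return IOC(IOC_READ | IOC_WRITE, 'I', 0,
               (uint32_t)sizeof(struct ion_allocation_data_us));
}
```

Decoding the result gives magic 'I', number 0, a 24-byte payload, and read/write direction, i.e. the 0xc0184900 that strace used to show for ION_IOC_ALLOC calls.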
+1 -2
tools/testing/selftests/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 - TARGETS = android 3 - TARGETS += arm64 2 + TARGETS = arm64 4 3 TARGETS += bpf 5 4 TARGETS += breakpoints 6 5 TARGETS += capabilities
-39
tools/testing/selftests/android/Makefile
··· 1 - # SPDX-License-Identifier: GPL-2.0-only 2 - SUBDIRS := ion 3 - 4 - TEST_PROGS := run.sh 5 - 6 - .PHONY: all clean 7 - 8 - include ../lib.mk 9 - 10 - all: 11 - @for DIR in $(SUBDIRS); do \ 12 - BUILD_TARGET=$(OUTPUT)/$$DIR; \ 13 - mkdir $$BUILD_TARGET -p; \ 14 - make OUTPUT=$$BUILD_TARGET -C $$DIR $@;\ 15 - #SUBDIR test prog name should be in the form: SUBDIR_test.sh \ 16 - TEST=$$DIR"_test.sh"; \ 17 - if [ -e $$DIR/$$TEST ]; then \ 18 - rsync -a $$DIR/$$TEST $$BUILD_TARGET/; \ 19 - fi \ 20 - done 21 - 22 - override define INSTALL_RULE 23 - mkdir -p $(INSTALL_PATH) 24 - install -t $(INSTALL_PATH) $(TEST_PROGS) $(TEST_PROGS_EXTENDED) $(TEST_FILES) $(TEST_GEN_PROGS) $(TEST_CUSTOM_PROGS) $(TEST_GEN_PROGS_EXTENDED) $(TEST_GEN_FILES) 25 - 26 - @for SUBDIR in $(SUBDIRS); do \ 27 - BUILD_TARGET=$(OUTPUT)/$$SUBDIR; \ 28 - mkdir $$BUILD_TARGET -p; \ 29 - $(MAKE) OUTPUT=$$BUILD_TARGET -C $$SUBDIR INSTALL_PATH=$(INSTALL_PATH)/$$SUBDIR install; \ 30 - done; 31 - endef 32 - 33 - override define CLEAN 34 - @for DIR in $(SUBDIRS); do \ 35 - BUILD_TARGET=$(OUTPUT)/$$DIR; \ 36 - mkdir $$BUILD_TARGET -p; \ 37 - make OUTPUT=$$BUILD_TARGET -C $$DIR $@;\ 38 - done 39 - endef
-5
tools/testing/selftests/android/config
··· 1 - CONFIG_ANDROID=y 2 - CONFIG_STAGING=y 3 - CONFIG_ION=y 4 - CONFIG_ION_SYSTEM_HEAP=y 5 - CONFIG_DRM_VGEM=y
-4
tools/testing/selftests/android/ion/.gitignore
··· 1 - # SPDX-License-Identifier: GPL-2.0-only 2 - ionapp_export 3 - ionapp_import 4 - ionmap_test
-20
tools/testing/selftests/android/ion/Makefile
··· 1 - # SPDX-License-Identifier: GPL-2.0-only 2 - 3 - INCLUDEDIR := -I. -I../../../../../drivers/staging/android/uapi/ -I../../../../../usr/include/ 4 - CFLAGS := $(CFLAGS) $(INCLUDEDIR) -Wall -O2 -g 5 - 6 - TEST_GEN_FILES := ionapp_export ionapp_import ionmap_test 7 - 8 - all: $(TEST_GEN_FILES) 9 - 10 - $(TEST_GEN_FILES): ipcsocket.c ionutils.c 11 - 12 - TEST_PROGS := ion_test.sh 13 - 14 - KSFT_KHDR_INSTALL := 1 15 - top_srcdir = ../../../../.. 16 - include ../../lib.mk 17 - 18 - $(OUTPUT)/ionapp_export: ionapp_export.c ipcsocket.c ionutils.c 19 - $(OUTPUT)/ionapp_import: ionapp_import.c ipcsocket.c ionutils.c 20 - $(OUTPUT)/ionmap_test: ionmap_test.c ionutils.c ipcsocket.c
-101
tools/testing/selftests/android/ion/README
··· 1 - ION BUFFER SHARING UTILITY 2 - ========================== 3 - File: ion_test.sh : Utility to test ION driver buffer sharing mechanism. 4 - Author: Pintu Kumar <pintu.ping@gmail.com> 5 - 6 - Introduction: 7 - ------------- 8 - This is a test utility to verify ION buffer sharing in user space 9 - between 2 independent processes. 10 - It uses unix domain socket (with SCM_RIGHTS) as IPC to transfer an FD to 11 - another process to share the same buffer. 12 - This utility demonstrates how ION buffer sharing can be implemented between 13 - two user space processes, using various heap types. 14 - The following heap types are supported by ION driver. 15 - ION_HEAP_TYPE_SYSTEM (0) 16 - ION_HEAP_TYPE_SYSTEM_CONTIG (1) 17 - ION_HEAP_TYPE_CARVEOUT (2) 18 - ION_HEAP_TYPE_CHUNK (3) 19 - ION_HEAP_TYPE_DMA (4) 20 - 21 - By default only the SYSTEM and SYSTEM_CONTIG heaps are supported. 22 - Each heap is associated with the respective heap id. 23 - This utility is designed in the form of client/server program. 24 - The server part (ionapp_export) is the exporter of the buffer. 25 - It is responsible for creating an ION client, allocating the buffer based on 26 - the heap id, writing some data to this buffer and then exporting the FD 27 - (associated with this buffer) to another process using socket IPC. 28 - This FD is called as buffer FD (which is different than the ION client FD). 29 - 30 - The client part (ionapp_import) is the importer of the buffer. 31 - It retrives the FD from the socket data and installs into its address space. 32 - This new FD internally points to the same kernel buffer. 33 - So first it reads the data that is stored in this buffer and prints it. 34 - Then it writes the different size of data (it could be different data) to the 35 - same buffer. 36 - Finally the buffer FD must be closed by both the exporter and importer. 37 - Thus the same kernel buffer is shared among two user space processes using 38 - ION driver and only one time allocation. 
39 - 40 - Prerequisite: 41 - ------------- 42 - This utility works only if /dev/ion interface is present. 43 - The following configs needs to be enabled in kernel to include ion driver. 44 - CONFIG_ANDROID=y 45 - CONFIG_STAGING=y 46 - CONFIG_ION=y 47 - CONFIG_ION_SYSTEM_HEAP=y 48 - 49 - This utility requires to be run as root user. 50 - 51 - 52 - Compile and test: 53 - ----------------- 54 - This utility is made to be run as part of kselftest framework in kernel. 55 - To compile and run using kselftest you can simply do the following from the 56 - kernel top directory. 57 - linux$ make TARGETS=android kselftest 58 - Or you can also use: 59 - linux$ make -C tools/testing/selftests TARGETS=android run_tests 60 - Using the selftest it can directly execute the ion_test.sh script to test the 61 - buffer sharing using ion system heap. 62 - Currently the heap size is hard coded as just 10 bytes inside this script. 63 - You need to be a root user to run under selftest. 64 - 65 - You can also compile and test manually using the following steps: 66 - ion$ make 67 - These will generate 2 executable: ionapp_export, ionapp_import 68 - Now you can run the export and import manually by specifying the heap type 69 - and the heap size. 70 - You can also directly execute the shell script to run the test automatically. 71 - Simply use the following command to run the test. 72 - ion$ sudo ./ion_test.sh 73 - 74 - Test Results: 75 - ------------- 76 - The utility is verified on Ubuntu-32 bit system with Linux Kernel 4.14. 77 - Here is the snapshot of the test result using kselftest. 78 - 79 - linux# make TARGETS=android kselftest 80 - heap_type: 0, heap_size: 10 81 - -------------------------------------- 82 - heap type: 0 83 - heap id: 1 84 - heap name: ion_system_heap 85 - -------------------------------------- 86 - Fill buffer content: 87 - 0xfd 0xfd 0xfd 0xfd 0xfd 0xfd 0xfd 0xfd 0xfd 0xfd 88 - Sharing fd: 6, Client fd: 5 89 - <ion_close_buffer_fd>: buffer release successfully.... 
90 - Received buffer fd: 4 91 - Read buffer content: 92 - 0xfd 0xfd 0xfd 0xfd 0xfd 0xfd 0xfd 0xfd 0xfd 0xfd 0x0 0x0 0x0 0x0 0x0 0x0 93 - 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 94 - Fill buffer content: 95 - 0xfd 0xfd 0xfd 0xfd 0xfd 0xfd 0xfd 0xfd 0xfd 0xfd 0xfd 0xfd 0xfd 0xfd 0xfd 96 - 0xfd 0xfd 0xfd 0xfd 0xfd 0xfd 0xfd 0xfd 0xfd 0xfd 0xfd 0xfd 0xfd 0xfd 0xfd 97 - 0xfd 0xfd 98 - <ion_close_buffer_fd>: buffer release successfully.... 99 - ion_test.sh: heap_type: 0 - [PASS] 100 - 101 - ion_test.sh: done
-134
tools/testing/selftests/android/ion/ion.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0-only */ 2 - /* 3 - * ion.h 4 - * 5 - * Copyright (C) 2011 Google, Inc. 6 - */ 7 - 8 - /* This file is copied from drivers/staging/android/uapi/ion.h 9 - * This local copy is required for the selftest to pass, when build 10 - * outside the kernel source tree. 11 - * Please keep this file in sync with its original file until the 12 - * ion driver is moved outside the staging tree. 13 - */ 14 - 15 - #ifndef _UAPI_LINUX_ION_H 16 - #define _UAPI_LINUX_ION_H 17 - 18 - #include <linux/ioctl.h> 19 - #include <linux/types.h> 20 - 21 - /** 22 - * enum ion_heap_types - list of all possible types of heaps 23 - * @ION_HEAP_TYPE_SYSTEM: memory allocated via vmalloc 24 - * @ION_HEAP_TYPE_SYSTEM_CONTIG: memory allocated via kmalloc 25 - * @ION_HEAP_TYPE_CARVEOUT: memory allocated from a prereserved 26 - * carveout heap, allocations are physically 27 - * contiguous 28 - * @ION_HEAP_TYPE_DMA: memory allocated via DMA API 29 - * @ION_NUM_HEAPS: helper for iterating over heaps, a bit mask 30 - * is used to identify the heaps, so only 32 31 - * total heap types are supported 32 - */ 33 - enum ion_heap_type { 34 - ION_HEAP_TYPE_SYSTEM, 35 - ION_HEAP_TYPE_SYSTEM_CONTIG, 36 - ION_HEAP_TYPE_CARVEOUT, 37 - ION_HEAP_TYPE_CHUNK, 38 - ION_HEAP_TYPE_DMA, 39 - ION_HEAP_TYPE_CUSTOM, /* 40 - * must be last so device specific heaps always 41 - * are at the end of this enum 42 - */ 43 - }; 44 - 45 - #define ION_NUM_HEAP_IDS (sizeof(unsigned int) * 8) 46 - 47 - /** 48 - * allocation flags - the lower 16 bits are used by core ion, the upper 16 49 - * bits are reserved for use by the heaps themselves. 
50 - */ 51 - 52 - /* 53 - * mappings of this buffer should be cached, ion will do cache maintenance 54 - * when the buffer is mapped for dma 55 - */ 56 - #define ION_FLAG_CACHED 1 57 - 58 - /** 59 - * DOC: Ion Userspace API 60 - * 61 - * create a client by opening /dev/ion 62 - * most operations handled via following ioctls 63 - * 64 - */ 65 - 66 - /** 67 - * struct ion_allocation_data - metadata passed from userspace for allocations 68 - * @len: size of the allocation 69 - * @heap_id_mask: mask of heap ids to allocate from 70 - * @flags: flags passed to heap 71 - * @handle: pointer that will be populated with a cookie to use to 72 - * refer to this allocation 73 - * 74 - * Provided by userspace as an argument to the ioctl 75 - */ 76 - struct ion_allocation_data { 77 - __u64 len; 78 - __u32 heap_id_mask; 79 - __u32 flags; 80 - __u32 fd; 81 - __u32 unused; 82 - }; 83 - 84 - #define MAX_HEAP_NAME 32 85 - 86 - /** 87 - * struct ion_heap_data - data about a heap 88 - * @name - first 32 characters of the heap name 89 - * @type - heap type 90 - * @heap_id - heap id for the heap 91 - */ 92 - struct ion_heap_data { 93 - char name[MAX_HEAP_NAME]; 94 - __u32 type; 95 - __u32 heap_id; 96 - __u32 reserved0; 97 - __u32 reserved1; 98 - __u32 reserved2; 99 - }; 100 - 101 - /** 102 - * struct ion_heap_query - collection of data about all heaps 103 - * @cnt - total number of heaps to be copied 104 - * @heaps - buffer to copy heap data 105 - */ 106 - struct ion_heap_query { 107 - __u32 cnt; /* Total number of heaps to be copied */ 108 - __u32 reserved0; /* align to 64bits */ 109 - __u64 heaps; /* buffer to be populated */ 110 - __u32 reserved1; 111 - __u32 reserved2; 112 - }; 113 - 114 - #define ION_IOC_MAGIC 'I' 115 - 116 - /** 117 - * DOC: ION_IOC_ALLOC - allocate memory 118 - * 119 - * Takes an ion_allocation_data struct and returns it with the handle field 120 - * populated with the opaque handle for the allocation. 
121 - */ 122 - #define ION_IOC_ALLOC _IOWR(ION_IOC_MAGIC, 0, \ 123 - struct ion_allocation_data) 124 - 125 - /** 126 - * DOC: ION_IOC_HEAP_QUERY - information about available heaps 127 - * 128 - * Takes an ion_heap_query structure and populates information about 129 - * available Ion heaps. 130 - */ 131 - #define ION_IOC_HEAP_QUERY _IOWR(ION_IOC_MAGIC, 8, \ 132 - struct ion_heap_query) 133 - 134 - #endif /* _UAPI_LINUX_ION_H */
-58
tools/testing/selftests/android/ion/ion_test.sh
··· 1 - #!/bin/bash 2 - 3 - heapsize=4096 4 - TCID="ion_test.sh" 5 - errcode=0 6 - 7 - # Kselftest framework requirement - SKIP code is 4. 8 - ksft_skip=4 9 - 10 - run_test() 11 - { 12 - heaptype=$1 13 - ./ionapp_export -i $heaptype -s $heapsize & 14 - sleep 1 15 - ./ionapp_import 16 - if [ $? -ne 0 ]; then 17 - echo "$TCID: heap_type: $heaptype - [FAIL]" 18 - errcode=1 19 - else 20 - echo "$TCID: heap_type: $heaptype - [PASS]" 21 - fi 22 - sleep 1 23 - echo "" 24 - } 25 - 26 - check_root() 27 - { 28 - uid=$(id -u) 29 - if [ $uid -ne 0 ]; then 30 - echo $TCID: must be run as root >&2 31 - exit $ksft_skip 32 - fi 33 - } 34 - 35 - check_device() 36 - { 37 - DEVICE=/dev/ion 38 - if [ ! -e $DEVICE ]; then 39 - echo $TCID: No $DEVICE device found >&2 40 - echo $TCID: May be CONFIG_ION is not set >&2 41 - exit $ksft_skip 42 - fi 43 - } 44 - 45 - main_function() 46 - { 47 - check_device 48 - check_root 49 - 50 - # ION_SYSTEM_HEAP TEST 51 - run_test 0 52 - # ION_SYSTEM_CONTIG_HEAP TEST 53 - run_test 1 54 - } 55 - 56 - main_function 57 - echo "$TCID: done" 58 - exit $errcode
-127
tools/testing/selftests/android/ion/ionapp_export.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-only 2 - /* 3 - * ionapp_export.c 4 - * 5 - * It is a user space utility to create and export android 6 - * ion memory buffer fd to another process using unix domain socket as IPC. 7 - * This acts like a server for ionapp_import(client). 8 - * So, this server has to be started first before the client. 9 - * 10 - * Copyright (C) 2017 Pintu Kumar <pintu.ping@gmail.com> 11 - */ 12 - 13 - #include <stdio.h> 14 - #include <stdlib.h> 15 - #include <string.h> 16 - #include <unistd.h> 17 - #include <errno.h> 18 - #include <sys/time.h> 19 - #include "ionutils.h" 20 - #include "ipcsocket.h" 21 - 22 - 23 - void print_usage(int argc, char *argv[]) 24 - { 25 - printf("Usage: %s [-h <help>] [-i <heap id>] [-s <size in bytes>]\n", 26 - argv[0]); 27 - } 28 - 29 - int main(int argc, char *argv[]) 30 - { 31 - int opt, ret, status, heapid; 32 - int sockfd, client_fd, shared_fd; 33 - unsigned char *map_buf; 34 - unsigned long map_len, heap_type, heap_size, flags; 35 - struct ion_buffer_info info; 36 - struct socket_info skinfo; 37 - 38 - if (argc < 2) { 39 - print_usage(argc, argv); 40 - return -1; 41 - } 42 - 43 - heap_size = 0; 44 - flags = 0; 45 - heap_type = ION_HEAP_TYPE_SYSTEM; 46 - 47 - while ((opt = getopt(argc, argv, "hi:s:")) != -1) { 48 - switch (opt) { 49 - case 'h': 50 - print_usage(argc, argv); 51 - exit(0); 52 - break; 53 - case 'i': 54 - heapid = atoi(optarg); 55 - switch (heapid) { 56 - case 0: 57 - heap_type = ION_HEAP_TYPE_SYSTEM; 58 - break; 59 - case 1: 60 - heap_type = ION_HEAP_TYPE_SYSTEM_CONTIG; 61 - break; 62 - default: 63 - printf("ERROR: heap type not supported\n"); 64 - exit(1); 65 - } 66 - break; 67 - case 's': 68 - heap_size = atoi(optarg); 69 - break; 70 - default: 71 - print_usage(argc, argv); 72 - exit(1); 73 - break; 74 - } 75 - } 76 - 77 - if (heap_size <= 0) { 78 - printf("heap_size cannot be 0\n"); 79 - print_usage(argc, argv); 80 - exit(1); 81 - } 82 - 83 - printf("heap_type: %ld, heap_size: %ld\n", 
heap_type, heap_size); 84 - info.heap_type = heap_type; 85 - info.heap_size = heap_size; 86 - info.flag_type = flags; 87 - 88 - /* This is server: open the socket connection first */ 89 - /* Here; 1 indicates server or exporter */ 90 - status = opensocket(&sockfd, SOCKET_NAME, 1); 91 - if (status < 0) { 92 - fprintf(stderr, "<%s>: Failed opensocket.\n", __func__); 93 - goto err_socket; 94 - } 95 - skinfo.sockfd = sockfd; 96 - 97 - ret = ion_export_buffer_fd(&info); 98 - if (ret < 0) { 99 - fprintf(stderr, "FAILED: ion_get_buffer_fd\n"); 100 - goto err_export; 101 - } 102 - client_fd = info.ionfd; 103 - shared_fd = info.buffd; 104 - map_buf = info.buffer; 105 - map_len = info.buflen; 106 - write_buffer(map_buf, map_len); 107 - 108 - /* share ion buf fd with other user process */ 109 - printf("Sharing fd: %d, Client fd: %d\n", shared_fd, client_fd); 110 - skinfo.datafd = shared_fd; 111 - skinfo.buflen = map_len; 112 - 113 - ret = socket_send_fd(&skinfo); 114 - if (ret < 0) { 115 - fprintf(stderr, "FAILED: socket_send_fd\n"); 116 - goto err_send; 117 - } 118 - 119 - err_send: 120 - err_export: 121 - ion_close_buffer_fd(&info); 122 - 123 - err_socket: 124 - closesocket(sockfd, SOCKET_NAME); 125 - 126 - return 0; 127 - }
-79
tools/testing/selftests/android/ion/ionapp_import.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-only 2 - /* 3 - * ionapp_import.c 4 - * 5 - * It is a user space utility to receive android ion memory buffer fd 6 - * over unix domain socket IPC that can be exported by ionapp_export. 7 - * This acts like a client for ionapp_export. 8 - * 9 - * Copyright (C) 2017 Pintu Kumar <pintu.ping@gmail.com> 10 - */ 11 - 12 - #include <stdio.h> 13 - #include <stdlib.h> 14 - #include <unistd.h> 15 - #include <string.h> 16 - #include "ionutils.h" 17 - #include "ipcsocket.h" 18 - 19 - 20 - int main(void) 21 - { 22 - int ret, status; 23 - int sockfd, shared_fd; 24 - unsigned char *map_buf; 25 - unsigned long map_len; 26 - struct ion_buffer_info info; 27 - struct socket_info skinfo; 28 - 29 - /* This is the client part. Here 0 means client or importer */ 30 - status = opensocket(&sockfd, SOCKET_NAME, 0); 31 - if (status < 0) { 32 - fprintf(stderr, "No exporter exists...\n"); 33 - ret = status; 34 - goto err_socket; 35 - } 36 - 37 - skinfo.sockfd = sockfd; 38 - 39 - ret = socket_receive_fd(&skinfo); 40 - if (ret < 0) { 41 - fprintf(stderr, "Failed: socket_receive_fd\n"); 42 - goto err_recv; 43 - } 44 - 45 - shared_fd = skinfo.datafd; 46 - printf("Received buffer fd: %d\n", shared_fd); 47 - if (shared_fd <= 0) { 48 - fprintf(stderr, "ERROR: improper buf fd\n"); 49 - ret = -1; 50 - goto err_fd; 51 - } 52 - 53 - memset(&info, 0, sizeof(info)); 54 - info.buffd = shared_fd; 55 - info.buflen = ION_BUFFER_LEN; 56 - 57 - ret = ion_import_buffer_fd(&info); 58 - if (ret < 0) { 59 - fprintf(stderr, "Failed: ion_use_buffer_fd\n"); 60 - goto err_import; 61 - } 62 - 63 - map_buf = info.buffer; 64 - map_len = info.buflen; 65 - read_buffer(map_buf, map_len); 66 - 67 - /* Write probably new data to the same buffer again */ 68 - map_len = ION_BUFFER_LEN; 69 - write_buffer(map_buf, map_len); 70 - 71 - err_import: 72 - ion_close_buffer_fd(&info); 73 - err_fd: 74 - err_recv: 75 - err_socket: 76 - closesocket(sockfd, SOCKET_NAME); 77 - 78 - return ret; 79 - }
-136
tools/testing/selftests/android/ion/ionmap_test.c
··· 1 - #include <errno.h> 2 - #include <fcntl.h> 3 - #include <stdio.h> 4 - #include <stdint.h> 5 - #include <string.h> 6 - #include <unistd.h> 7 - 8 - #include <sys/ioctl.h> 9 - #include <sys/types.h> 10 - #include <sys/stat.h> 11 - 12 - #include <linux/dma-buf.h> 13 - 14 - #include <drm/drm.h> 15 - 16 - #include "ion.h" 17 - #include "ionutils.h" 18 - 19 - int check_vgem(int fd) 20 - { 21 - drm_version_t version = { 0 }; 22 - char name[5]; 23 - int ret; 24 - 25 - version.name_len = 4; 26 - version.name = name; 27 - 28 - ret = ioctl(fd, DRM_IOCTL_VERSION, &version); 29 - if (ret) 30 - return 1; 31 - 32 - return strcmp(name, "vgem"); 33 - } 34 - 35 - int open_vgem(void) 36 - { 37 - int i, fd; 38 - const char *drmstr = "/dev/dri/card"; 39 - 40 - fd = -1; 41 - for (i = 0; i < 16; i++) { 42 - char name[80]; 43 - 44 - sprintf(name, "%s%u", drmstr, i); 45 - 46 - fd = open(name, O_RDWR); 47 - if (fd < 0) 48 - continue; 49 - 50 - if (check_vgem(fd)) { 51 - close(fd); 52 - continue; 53 - } else { 54 - break; 55 - } 56 - 57 - } 58 - return fd; 59 - } 60 - 61 - int import_vgem_fd(int vgem_fd, int dma_buf_fd, uint32_t *handle) 62 - { 63 - struct drm_prime_handle import_handle = { 0 }; 64 - int ret; 65 - 66 - import_handle.fd = dma_buf_fd; 67 - import_handle.flags = 0; 68 - import_handle.handle = 0; 69 - 70 - ret = ioctl(vgem_fd, DRM_IOCTL_PRIME_FD_TO_HANDLE, &import_handle); 71 - if (ret == 0) 72 - *handle = import_handle.handle; 73 - return ret; 74 - } 75 - 76 - void close_handle(int vgem_fd, uint32_t handle) 77 - { 78 - struct drm_gem_close close = { 0 }; 79 - 80 - close.handle = handle; 81 - ioctl(vgem_fd, DRM_IOCTL_GEM_CLOSE, &close); 82 - } 83 - 84 - int main() 85 - { 86 - int ret, vgem_fd; 87 - struct ion_buffer_info info; 88 - uint32_t handle = 0; 89 - struct dma_buf_sync sync = { 0 }; 90 - 91 - info.heap_type = ION_HEAP_TYPE_SYSTEM; 92 - info.heap_size = 4096; 93 - info.flag_type = ION_FLAG_CACHED; 94 - 95 - ret = ion_export_buffer_fd(&info); 96 - if (ret < 0) { 97 - 
printf("ion buffer alloc failed\n"); 98 - return -1; 99 - } 100 - 101 - vgem_fd = open_vgem(); 102 - if (vgem_fd < 0) { 103 - ret = vgem_fd; 104 - printf("Failed to open vgem\n"); 105 - goto out_ion; 106 - } 107 - 108 - ret = import_vgem_fd(vgem_fd, info.buffd, &handle); 109 - 110 - if (ret < 0) { 111 - printf("Failed to import buffer\n"); 112 - goto out_vgem; 113 - } 114 - 115 - sync.flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_RW; 116 - ret = ioctl(info.buffd, DMA_BUF_IOCTL_SYNC, &sync); 117 - if (ret) 118 - printf("sync start failed %d\n", errno); 119 - 120 - memset(info.buffer, 0xff, 4096); 121 - 122 - sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_RW; 123 - ret = ioctl(info.buffd, DMA_BUF_IOCTL_SYNC, &sync); 124 - if (ret) 125 - printf("sync end failed %d\n", errno); 126 - 127 - close_handle(vgem_fd, handle); 128 - ret = 0; 129 - 130 - out_vgem: 131 - close(vgem_fd); 132 - out_ion: 133 - ion_close_buffer_fd(&info); 134 - printf("done.\n"); 135 - return ret; 136 - }
-253
tools/testing/selftests/android/ion/ionutils.c
··· 1 - #include <stdio.h> 2 - #include <string.h> 3 - #include <unistd.h> 4 - #include <fcntl.h> 5 - #include <errno.h> 6 - //#include <stdint.h> 7 - #include <sys/ioctl.h> 8 - #include <sys/mman.h> 9 - #include "ionutils.h" 10 - #include "ipcsocket.h" 11 - 12 - 13 - void write_buffer(void *buffer, unsigned long len) 14 - { 15 - int i; 16 - unsigned char *ptr = (unsigned char *)buffer; 17 - 18 - if (!ptr) { 19 - fprintf(stderr, "<%s>: Invalid buffer...\n", __func__); 20 - return; 21 - } 22 - 23 - printf("Fill buffer content:\n"); 24 - memset(ptr, 0xfd, len); 25 - for (i = 0; i < len; i++) 26 - printf("0x%x ", ptr[i]); 27 - printf("\n"); 28 - } 29 - 30 - void read_buffer(void *buffer, unsigned long len) 31 - { 32 - int i; 33 - unsigned char *ptr = (unsigned char *)buffer; 34 - 35 - if (!ptr) { 36 - fprintf(stderr, "<%s>: Invalid buffer...\n", __func__); 37 - return; 38 - } 39 - 40 - printf("Read buffer content:\n"); 41 - for (i = 0; i < len; i++) 42 - printf("0x%x ", ptr[i]); 43 - printf("\n"); 44 - } 45 - 46 - int ion_export_buffer_fd(struct ion_buffer_info *ion_info) 47 - { 48 - int i, ret, ionfd, buffer_fd; 49 - unsigned int heap_id; 50 - unsigned long maplen; 51 - unsigned char *map_buffer; 52 - struct ion_allocation_data alloc_data; 53 - struct ion_heap_query query; 54 - struct ion_heap_data heap_data[MAX_HEAP_COUNT]; 55 - 56 - if (!ion_info) { 57 - fprintf(stderr, "<%s>: Invalid ion info\n", __func__); 58 - return -1; 59 - } 60 - 61 - /* Create an ION client */ 62 - ionfd = open(ION_DEVICE, O_RDWR); 63 - if (ionfd < 0) { 64 - fprintf(stderr, "<%s>: Failed to open ion client: %s\n", 65 - __func__, strerror(errno)); 66 - return -1; 67 - } 68 - 69 - memset(&query, 0, sizeof(query)); 70 - query.cnt = MAX_HEAP_COUNT; 71 - query.heaps = (unsigned long int)&heap_data[0]; 72 - /* Query ION heap_id_mask from ION heap */ 73 - ret = ioctl(ionfd, ION_IOC_HEAP_QUERY, &query); 74 - if (ret < 0) { 75 - fprintf(stderr, "<%s>: Failed: ION_IOC_HEAP_QUERY: %s\n", 76 - __func__, 
strerror(errno)); 77 - goto err_query; 78 - } 79 - 80 - heap_id = MAX_HEAP_COUNT + 1; 81 - for (i = 0; i < query.cnt; i++) { 82 - if (heap_data[i].type == ion_info->heap_type) { 83 - heap_id = heap_data[i].heap_id; 84 - break; 85 - } 86 - } 87 - 88 - if (heap_id > MAX_HEAP_COUNT) { 89 - fprintf(stderr, "<%s>: ERROR: heap type does not exists\n", 90 - __func__); 91 - goto err_heap; 92 - } 93 - 94 - alloc_data.len = ion_info->heap_size; 95 - alloc_data.heap_id_mask = 1 << heap_id; 96 - alloc_data.flags = ion_info->flag_type; 97 - 98 - /* Allocate memory for this ION client as per heap_type */ 99 - ret = ioctl(ionfd, ION_IOC_ALLOC, &alloc_data); 100 - if (ret < 0) { 101 - fprintf(stderr, "<%s>: Failed: ION_IOC_ALLOC: %s\n", 102 - __func__, strerror(errno)); 103 - goto err_alloc; 104 - } 105 - 106 - /* This will return a valid buffer fd */ 107 - buffer_fd = alloc_data.fd; 108 - maplen = alloc_data.len; 109 - 110 - if (buffer_fd < 0 || maplen <= 0) { 111 - fprintf(stderr, "<%s>: Invalid map data, fd: %d, len: %ld\n", 112 - __func__, buffer_fd, maplen); 113 - goto err_fd_data; 114 - } 115 - 116 - /* Create memory mapped buffer for the buffer fd */ 117 - map_buffer = (unsigned char *)mmap(NULL, maplen, PROT_READ|PROT_WRITE, 118 - MAP_SHARED, buffer_fd, 0); 119 - if (map_buffer == MAP_FAILED) { 120 - fprintf(stderr, "<%s>: Failed: mmap: %s\n", 121 - __func__, strerror(errno)); 122 - goto err_mmap; 123 - } 124 - 125 - ion_info->ionfd = ionfd; 126 - ion_info->buffd = buffer_fd; 127 - ion_info->buffer = map_buffer; 128 - ion_info->buflen = maplen; 129 - 130 - return 0; 131 - 132 - munmap(map_buffer, maplen); 133 - 134 - err_fd_data: 135 - err_mmap: 136 - /* in case of error: close the buffer fd */ 137 - if (buffer_fd) 138 - close(buffer_fd); 139 - 140 - err_query: 141 - err_heap: 142 - err_alloc: 143 - /* In case of error: close the ion client fd */ 144 - if (ionfd) 145 - close(ionfd); 146 - 147 - return -1; 148 - } 149 - 150 - int ion_import_buffer_fd(struct ion_buffer_info 
*ion_info) 151 - { 152 - int buffd; 153 - unsigned char *map_buf; 154 - unsigned long map_len; 155 - 156 - if (!ion_info) { 157 - fprintf(stderr, "<%s>: Invalid ion info\n", __func__); 158 - return -1; 159 - } 160 - 161 - map_len = ion_info->buflen; 162 - buffd = ion_info->buffd; 163 - 164 - if (buffd < 0 || map_len <= 0) { 165 - fprintf(stderr, "<%s>: Invalid map data, fd: %d, len: %ld\n", 166 - __func__, buffd, map_len); 167 - goto err_buffd; 168 - } 169 - 170 - map_buf = (unsigned char *)mmap(NULL, map_len, PROT_READ|PROT_WRITE, 171 - MAP_SHARED, buffd, 0); 172 - if (map_buf == MAP_FAILED) { 173 - printf("<%s>: Failed - mmap: %s\n", 174 - __func__, strerror(errno)); 175 - goto err_mmap; 176 - } 177 - 178 - ion_info->buffer = map_buf; 179 - ion_info->buflen = map_len; 180 - 181 - return 0; 182 - 183 - err_mmap: 184 - if (buffd) 185 - close(buffd); 186 - 187 - err_buffd: 188 - return -1; 189 - } 190 - 191 - void ion_close_buffer_fd(struct ion_buffer_info *ion_info) 192 - { 193 - if (ion_info) { 194 - /* unmap the buffer properly in the end */ 195 - munmap(ion_info->buffer, ion_info->buflen); 196 - /* close the buffer fd */ 197 - if (ion_info->buffd > 0) 198 - close(ion_info->buffd); 199 - /* Finally, close the client fd */ 200 - if (ion_info->ionfd > 0) 201 - close(ion_info->ionfd); 202 - } 203 - } 204 - 205 - int socket_send_fd(struct socket_info *info) 206 - { 207 - int status; 208 - int fd, sockfd; 209 - struct socketdata skdata; 210 - 211 - if (!info) { 212 - fprintf(stderr, "<%s>: Invalid socket info\n", __func__); 213 - return -1; 214 - } 215 - 216 - sockfd = info->sockfd; 217 - fd = info->datafd; 218 - memset(&skdata, 0, sizeof(skdata)); 219 - skdata.data = fd; 220 - skdata.len = sizeof(skdata.data); 221 - status = sendtosocket(sockfd, &skdata); 222 - if (status < 0) { 223 - fprintf(stderr, "<%s>: Failed: sendtosocket\n", __func__); 224 - return -1; 225 - } 226 - 227 - return 0; 228 - } 229 - 230 - int socket_receive_fd(struct socket_info *info) 231 - { 232 
- int status; 233 - int fd, sockfd; 234 - struct socketdata skdata; 235 - 236 - if (!info) { 237 - fprintf(stderr, "<%s>: Invalid socket info\n", __func__); 238 - return -1; 239 - } 240 - 241 - sockfd = info->sockfd; 242 - memset(&skdata, 0, sizeof(skdata)); 243 - status = receivefromsocket(sockfd, &skdata); 244 - if (status < 0) { 245 - fprintf(stderr, "<%s>: Failed: receivefromsocket\n", __func__); 246 - return -1; 247 - } 248 - 249 - fd = (int)skdata.data; 250 - info->datafd = fd; 251 - 252 - return status; 253 - }
-55
tools/testing/selftests/android/ion/ionutils.h
··· 1 - #ifndef __ION_UTILS_H 2 - #define __ION_UTILS_H 3 - 4 - #include "ion.h" 5 - 6 - #define SOCKET_NAME "ion_socket" 7 - #define ION_DEVICE "/dev/ion" 8 - 9 - #define ION_BUFFER_LEN 4096 10 - #define MAX_HEAP_COUNT ION_HEAP_TYPE_CUSTOM 11 - 12 - struct socket_info { 13 - int sockfd; 14 - int datafd; 15 - unsigned long buflen; 16 - }; 17 - 18 - struct ion_buffer_info { 19 - int ionfd; 20 - int buffd; 21 - unsigned int heap_type; 22 - unsigned int flag_type; 23 - unsigned long heap_size; 24 - unsigned long buflen; 25 - unsigned char *buffer; 26 - }; 27 - 28 - 29 - /* This is used to fill the data into the mapped buffer */ 30 - void write_buffer(void *buffer, unsigned long len); 31 - 32 - /* This is used to read the data from the exported buffer */ 33 - void read_buffer(void *buffer, unsigned long len); 34 - 35 - /* This is used to create an ION buffer FD for the kernel buffer 36 - * So you can export this same buffer to others in the form of FD 37 - */ 38 - int ion_export_buffer_fd(struct ion_buffer_info *ion_info); 39 - 40 - /* This is used to import or map an exported FD. 41 - * So we point to same buffer without making a copy. Hence zero-copy. 42 - */ 43 - int ion_import_buffer_fd(struct ion_buffer_info *ion_info); 44 - 45 - /* This is used to close all references for the ION client */ 46 - void ion_close_buffer_fd(struct ion_buffer_info *ion_info); 47 - 48 - /* This is used to send FD to another process using socket IPC */ 49 - int socket_send_fd(struct socket_info *skinfo); 50 - 51 - /* This is used to receive FD from another process using socket IPC */ 52 - int socket_receive_fd(struct socket_info *skinfo); 53 - 54 - 55 - #endif
-227
tools/testing/selftests/android/ion/ipcsocket.c
··· 1 - #include <stdio.h> 2 - #include <stdlib.h> 3 - #include <string.h> 4 - #include <unistd.h> 5 - #include <sys/types.h> 6 - #include <sys/socket.h> 7 - #include <sys/time.h> 8 - #include <sys/un.h> 9 - #include <errno.h> 10 - 11 - #include "ipcsocket.h" 12 - 13 - 14 - int opensocket(int *sockfd, const char *name, int connecttype) 15 - { 16 - int ret, temp = 1; 17 - 18 - if (!name || strlen(name) > MAX_SOCK_NAME_LEN) { 19 - fprintf(stderr, "<%s>: Invalid socket name.\n", __func__); 20 - return -1; 21 - } 22 - 23 - ret = socket(PF_LOCAL, SOCK_STREAM, 0); 24 - if (ret < 0) { 25 - fprintf(stderr, "<%s>: Failed socket: <%s>\n", 26 - __func__, strerror(errno)); 27 - return ret; 28 - } 29 - 30 - *sockfd = ret; 31 - if (setsockopt(*sockfd, SOL_SOCKET, SO_REUSEADDR, 32 - (char *)&temp, sizeof(int)) < 0) { 33 - fprintf(stderr, "<%s>: Failed setsockopt: <%s>\n", 34 - __func__, strerror(errno)); 35 - goto err; 36 - } 37 - 38 - sprintf(sock_name, "/tmp/%s", name); 39 - 40 - if (connecttype == 1) { 41 - /* This is for Server connection */ 42 - struct sockaddr_un skaddr; 43 - int clientfd; 44 - socklen_t sklen; 45 - 46 - unlink(sock_name); 47 - memset(&skaddr, 0, sizeof(skaddr)); 48 - skaddr.sun_family = AF_LOCAL; 49 - strcpy(skaddr.sun_path, sock_name); 50 - 51 - ret = bind(*sockfd, (struct sockaddr *)&skaddr, 52 - SUN_LEN(&skaddr)); 53 - if (ret < 0) { 54 - fprintf(stderr, "<%s>: Failed bind: <%s>\n", 55 - __func__, strerror(errno)); 56 - goto err; 57 - } 58 - 59 - ret = listen(*sockfd, 5); 60 - if (ret < 0) { 61 - fprintf(stderr, "<%s>: Failed listen: <%s>\n", 62 - __func__, strerror(errno)); 63 - goto err; 64 - } 65 - 66 - memset(&skaddr, 0, sizeof(skaddr)); 67 - sklen = sizeof(skaddr); 68 - 69 - ret = accept(*sockfd, (struct sockaddr *)&skaddr, 70 - (socklen_t *)&sklen); 71 - if (ret < 0) { 72 - fprintf(stderr, "<%s>: Failed accept: <%s>\n", 73 - __func__, strerror(errno)); 74 - goto err; 75 - } 76 - 77 - clientfd = ret; 78 - *sockfd = clientfd; 79 - } else { 80 - /* 
This is for client connection */ 81 - struct sockaddr_un skaddr; 82 - 83 - memset(&skaddr, 0, sizeof(skaddr)); 84 - skaddr.sun_family = AF_LOCAL; 85 - strcpy(skaddr.sun_path, sock_name); 86 - 87 - ret = connect(*sockfd, (struct sockaddr *)&skaddr, 88 - SUN_LEN(&skaddr)); 89 - if (ret < 0) { 90 - fprintf(stderr, "<%s>: Failed connect: <%s>\n", 91 - __func__, strerror(errno)); 92 - goto err; 93 - } 94 - } 95 - 96 - return 0; 97 - 98 - err: 99 - if (*sockfd) 100 - close(*sockfd); 101 - 102 - return ret; 103 - } 104 - 105 - int sendtosocket(int sockfd, struct socketdata *skdata) 106 - { 107 - int ret, buffd; 108 - unsigned int len; 109 - char cmsg_b[CMSG_SPACE(sizeof(int))]; 110 - struct cmsghdr *cmsg; 111 - struct msghdr msgh; 112 - struct iovec iov; 113 - struct timeval timeout; 114 - fd_set selFDs; 115 - 116 - if (!skdata) { 117 - fprintf(stderr, "<%s>: socketdata is NULL\n", __func__); 118 - return -1; 119 - } 120 - 121 - FD_ZERO(&selFDs); 122 - FD_SET(0, &selFDs); 123 - FD_SET(sockfd, &selFDs); 124 - timeout.tv_sec = 20; 125 - timeout.tv_usec = 0; 126 - 127 - ret = select(sockfd+1, NULL, &selFDs, NULL, &timeout); 128 - if (ret < 0) { 129 - fprintf(stderr, "<%s>: Failed select: <%s>\n", 130 - __func__, strerror(errno)); 131 - return -1; 132 - } 133 - 134 - if (FD_ISSET(sockfd, &selFDs)) { 135 - buffd = skdata->data; 136 - len = skdata->len; 137 - memset(&msgh, 0, sizeof(msgh)); 138 - msgh.msg_control = &cmsg_b; 139 - msgh.msg_controllen = CMSG_LEN(len); 140 - iov.iov_base = "OK"; 141 - iov.iov_len = 2; 142 - msgh.msg_iov = &iov; 143 - msgh.msg_iovlen = 1; 144 - cmsg = CMSG_FIRSTHDR(&msgh); 145 - cmsg->cmsg_level = SOL_SOCKET; 146 - cmsg->cmsg_type = SCM_RIGHTS; 147 - cmsg->cmsg_len = CMSG_LEN(len); 148 - memcpy(CMSG_DATA(cmsg), &buffd, len); 149 - 150 - ret = sendmsg(sockfd, &msgh, MSG_DONTWAIT); 151 - if (ret < 0) { 152 - fprintf(stderr, "<%s>: Failed sendmsg: <%s>\n", 153 - __func__, strerror(errno)); 154 - return -1; 155 - } 156 - } 157 - 158 - return 0; 159 - } 
160 - 161 - int receivefromsocket(int sockfd, struct socketdata *skdata) 162 - { 163 - int ret, buffd; 164 - unsigned int len = 0; 165 - char cmsg_b[CMSG_SPACE(sizeof(int))]; 166 - struct cmsghdr *cmsg; 167 - struct msghdr msgh; 168 - struct iovec iov; 169 - fd_set recvFDs; 170 - char data[32]; 171 - 172 - if (!skdata) { 173 - fprintf(stderr, "<%s>: socketdata is NULL\n", __func__); 174 - return -1; 175 - } 176 - 177 - FD_ZERO(&recvFDs); 178 - FD_SET(0, &recvFDs); 179 - FD_SET(sockfd, &recvFDs); 180 - 181 - ret = select(sockfd+1, &recvFDs, NULL, NULL, NULL); 182 - if (ret < 0) { 183 - fprintf(stderr, "<%s>: Failed select: <%s>\n", 184 - __func__, strerror(errno)); 185 - return -1; 186 - } 187 - 188 - if (FD_ISSET(sockfd, &recvFDs)) { 189 - len = sizeof(buffd); 190 - memset(&msgh, 0, sizeof(msgh)); 191 - msgh.msg_control = &cmsg_b; 192 - msgh.msg_controllen = CMSG_LEN(len); 193 - iov.iov_base = data; 194 - iov.iov_len = sizeof(data)-1; 195 - msgh.msg_iov = &iov; 196 - msgh.msg_iovlen = 1; 197 - cmsg = CMSG_FIRSTHDR(&msgh); 198 - cmsg->cmsg_level = SOL_SOCKET; 199 - cmsg->cmsg_type = SCM_RIGHTS; 200 - cmsg->cmsg_len = CMSG_LEN(len); 201 - 202 - ret = recvmsg(sockfd, &msgh, MSG_DONTWAIT); 203 - if (ret < 0) { 204 - fprintf(stderr, "<%s>: Failed recvmsg: <%s>\n", 205 - __func__, strerror(errno)); 206 - return -1; 207 - } 208 - 209 - memcpy(&buffd, CMSG_DATA(cmsg), len); 210 - skdata->data = buffd; 211 - skdata->len = len; 212 - } 213 - return 0; 214 - } 215 - 216 - int closesocket(int sockfd, char *name) 217 - { 218 - char sockname[MAX_SOCK_NAME_LEN]; 219 - 220 - if (sockfd) 221 - close(sockfd); 222 - sprintf(sockname, "/tmp/%s", name); 223 - unlink(sockname); 224 - shutdown(sockfd, 2); 225 - 226 - return 0; 227 - }
-35
tools/testing/selftests/android/ion/ipcsocket.h
··· 1 - 2 - #ifndef _IPCSOCKET_H 3 - #define _IPCSOCKET_H 4 - 5 - 6 - #define MAX_SOCK_NAME_LEN 64 7 - 8 - char sock_name[MAX_SOCK_NAME_LEN]; 9 - 10 - /* This structure is responsible for holding the IPC data 11 - * data: hold the buffer fd 12 - * len: just the length of 32-bit integer fd 13 - */ 14 - struct socketdata { 15 - int data; 16 - unsigned int len; 17 - }; 18 - 19 - /* This API is used to open the IPC socket connection 20 - * name: implies a unique socket name in the system 21 - * connecttype: implies server(0) or client(1) 22 - */ 23 - int opensocket(int *sockfd, const char *name, int connecttype); 24 - 25 - /* This is the API to send socket data over IPC socket */ 26 - int sendtosocket(int sockfd, struct socketdata *data); 27 - 28 - /* This is the API to receive socket data over IPC socket */ 29 - int receivefromsocket(int sockfd, struct socketdata *data); 30 - 31 - /* This is the API to close the socket connection */ 32 - int closesocket(int sockfd, char *name); 33 - 34 - 35 - #endif
-3
tools/testing/selftests/android/run.sh
··· 1 - #!/bin/sh 2 - 3 - (cd ion; ./ion_test.sh)