Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

dma-buf-map: Rename to iosys-map

Rename struct dma_buf_map to struct iosys_map and corresponding APIs.
Over time dma-buf-map grew to provide more functionality than what dma-buf
uses: in fact it is just a shim layer that abstracts system memory, which
can be accessed via regular loads and stores, from I/O memory, which needs
to be accessed via arch helpers.

The idea is to extend this API so it can fulfill other needs internal to a
single driver. Example: the i915 driver wants to share the implementation
for integrated graphics, which mostly uses system memory, with discrete
graphics, which may need to access I/O memory.
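The abstraction at the heart of the rename is deliberately small. A userspace
sketch of the core type and a few of its accessors (a simplified model of
include/linux/iosys-map.h; the real definition tags the I/O pointer with
__iomem):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified model of struct iosys_map: one handle that records
 * whether it points at system memory (plain loads and stores) or
 * at I/O memory (arch accessors required). */
struct iosys_map {
	union {
		void *vaddr_iomem;	/* I/O memory */
		void *vaddr;		/* system memory */
	};
	bool is_iomem;
};

#define IOSYS_MAP_INIT_VADDR(vaddr_)	\
	{ .vaddr = (vaddr_), .is_iomem = false, }

static void iosys_map_set_vaddr(struct iosys_map *map, void *vaddr)
{
	map->vaddr = vaddr;
	map->is_iomem = false;
}

static void iosys_map_clear(struct iosys_map *map)
{
	map->vaddr = NULL;
	map->is_iomem = false;
}

static bool iosys_map_is_null(const struct iosys_map *map)
{
	return map->is_iomem ? !map->vaddr_iomem : !map->vaddr;
}
```

Callers test is_iomem (or call the iosys_map_* helpers) instead of carrying
separate raw pointers for the two memory types.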

The conversion was mostly done with the following semantic patch:

@r1@
@@
- struct dma_buf_map
+ struct iosys_map

@r2@
@@
(
- DMA_BUF_MAP_INIT_VADDR
+ IOSYS_MAP_INIT_VADDR
|
- dma_buf_map_set_vaddr
+ iosys_map_set_vaddr
|
- dma_buf_map_set_vaddr_iomem
+ iosys_map_set_vaddr_iomem
|
- dma_buf_map_is_equal
+ iosys_map_is_equal
|
- dma_buf_map_is_null
+ iosys_map_is_null
|
- dma_buf_map_is_set
+ iosys_map_is_set
|
- dma_buf_map_clear
+ iosys_map_clear
|
- dma_buf_map_memcpy_to
+ iosys_map_memcpy_to
|
- dma_buf_map_incr
+ iosys_map_incr
)

@@
@@
- #include <linux/dma-buf-map.h>
+ #include <linux/iosys-map.h>
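The renamed helpers keep the same call-site semantics. For instance, the
copy-and-advance pattern used by drm_fb_helper_damage_blit_real() in the diff
below can be sketched in userspace like this (a simplified model: the
__iomem annotation and the memcpy_toio() arch helper of the real API are
elided):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Self-contained userspace model of struct iosys_map; the kernel
 * version in include/linux/iosys-map.h tags I/O pointers __iomem. */
struct iosys_map {
	union {
		void *vaddr_iomem;	/* I/O memory */
		void *vaddr;		/* system memory */
	};
	bool is_iomem;
};

/* Advance the mapping by incr bytes, whatever its type. */
static void iosys_map_incr(struct iosys_map *map, size_t incr)
{
	if (map->is_iomem)
		map->vaddr_iomem = (char *)map->vaddr_iomem + incr;
	else
		map->vaddr = (char *)map->vaddr + incr;
}

/* Copy into the mapping; the kernel uses memcpy_toio() for the
 * I/O branch, plain memcpy() here since this is a userspace model. */
static void iosys_map_memcpy_to(struct iosys_map *dst, const void *src,
				size_t len)
{
	if (dst->is_iomem)
		memcpy(dst->vaddr_iomem, src, len);
	else
		memcpy(dst->vaddr, src, len);
}

/* Blit a clip rectangle line by line, as the fbdev damage helper does:
 * copy line_len bytes per scanline, stepping both sides by the pitch. */
static void blit_lines(struct iosys_map *dst, const char *src,
		       size_t pitch, size_t line_len, unsigned int lines)
{
	while (lines--) {
		iosys_map_memcpy_to(dst, src, line_len);
		iosys_map_incr(dst, pitch);
		src += pitch;
	}
}
```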

Then some files had their includes adjusted and some comments were
updated to remove mentions of dma-buf-map.

Since this is not specific to dma-buf anymore, move the documentation to
the "Bus-Independent Device Accesses" section.

v2:
- Squash patches

v3:
- Fix wrong removal of dma-buf.h from MAINTAINERS
- Move documentation from dma-buf.rst to device-io.rst

v4:
- Change documentation title and level

Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Acked-by: Christian König <christian.koenig@amd.com>
Acked-by: Sumit Semwal <sumit.semwal@linaro.org>
Acked-by: Thomas Zimmermann <tzimmermann@suse.de>
Link: https://patchwork.freedesktop.org/patch/msgid/20220204170541.829227-1-lucas.demarchi@intel.com

+599 -568
+9
Documentation/driver-api/device-io.rst
··· 502 502 Not using these wrappers may make drivers unusable on certain platforms with 503 503 stricter rules for mapping I/O memory. 504 504 505 + Generalizing Access to System and I/O Memory 506 + ============================================ 507 + 508 + .. kernel-doc:: include/linux/iosys-map.h 509 + :doc: overview 510 + 511 + .. kernel-doc:: include/linux/iosys-map.h 512 + :internal: 513 + 505 514 Public Functions Provided 506 515 ========================= 507 516
-9
Documentation/driver-api/dma-buf.rst
··· 128 128 .. kernel-doc:: include/linux/dma-buf.h 129 129 :internal: 130 130 131 - Buffer Mapping Helpers 132 - ~~~~~~~~~~~~~~~~~~~~~~ 133 - 134 - .. kernel-doc:: include/linux/dma-buf-map.h 135 - :doc: overview 136 - 137 - .. kernel-doc:: include/linux/dma-buf-map.h 138 - :internal: 139 - 140 131 Reservation Objects 141 132 ------------------- 142 133
+10 -10
Documentation/gpu/todo.rst
··· 222 222 Most drivers can use drm_fbdev_generic_setup(). Driver have to implement 223 223 atomic modesetting and GEM vmap support. Historically, generic fbdev emulation 224 224 expected the framebuffer in system memory or system-like memory. By employing 225 - struct dma_buf_map, drivers with frambuffers in I/O memory can be supported 225 + struct iosys_map, drivers with frambuffers in I/O memory can be supported 226 226 as well. 227 227 228 228 Contact: Maintainer of the driver you plan to convert ··· 234 234 235 235 A number of callback functions in drm_fbdev_fb_ops could benefit from 236 236 being rewritten without dependencies on the fbdev module. Some of the 237 - helpers could further benefit from using struct dma_buf_map instead of 237 + helpers could further benefit from using struct iosys_map instead of 238 238 raw pointers. 239 239 240 240 Contact: Thomas Zimmermann <tzimmermann@suse.de>, Daniel Vetter ··· 434 434 435 435 Level: Intermediate 436 436 437 - Use struct dma_buf_map throughout codebase 438 - ------------------------------------------ 437 + Use struct iosys_map throughout codebase 438 + ---------------------------------------- 439 439 440 - Pointers to shared device memory are stored in struct dma_buf_map. Each 440 + Pointers to shared device memory are stored in struct iosys_map. Each 441 441 instance knows whether it refers to system or I/O memory. Most of the DRM-wide 442 - interface have been converted to use struct dma_buf_map, but implementations 442 + interface have been converted to use struct iosys_map, but implementations 443 443 often still use raw pointers. 444 444 445 - The task is to use struct dma_buf_map where it makes sense. 445 + The task is to use struct iosys_map where it makes sense. 446 446 447 - * Memory managers should use struct dma_buf_map for dma-buf-imported buffers. 448 - * TTM might benefit from using struct dma_buf_map internally. 
449 - * Framebuffer copying and blitting helpers should operate on struct dma_buf_map. 447 + * Memory managers should use struct iosys_map for dma-buf-imported buffers. 448 + * TTM might benefit from using struct iosys_map internally. 449 + * Framebuffer copying and blitting helpers should operate on struct iosys_map. 450 450 451 451 Contact: Thomas Zimmermann <tzimmermann@suse.de>, Christian König, Daniel Vetter 452 452
+8 -1
MAINTAINERS
··· 5734 5734 F: Documentation/driver-api/dma-buf.rst 5735 5735 F: drivers/dma-buf/ 5736 5736 F: include/linux/*fence.h 5737 - F: include/linux/dma-buf* 5737 + F: include/linux/dma-buf.h 5738 5738 F: include/linux/dma-resv.h 5739 5739 K: \bdma_(?:buf|fence|resv)\b 5740 5740 ··· 10049 10049 F: include/linux/iova.h 10050 10050 F: include/linux/of_iommu.h 10051 10051 F: include/uapi/linux/iommu.h 10052 + 10053 + IOSYS-MAP HELPERS 10054 + M: Thomas Zimmermann <tzimmermann@suse.de> 10055 + L: dri-devel@lists.freedesktop.org 10056 + S: Maintained 10057 + T: git git://anongit.freedesktop.org/drm/drm-misc 10058 + F: include/linux/iosys-map.h 10052 10059 10053 10060 IO_URING 10054 10061 M: Jens Axboe <axboe@kernel.dk>
+11 -11
drivers/dma-buf/dma-buf.c
··· 1047 1047 * 1048 1048 * Interfaces:: 1049 1049 * 1050 - * void \*dma_buf_vmap(struct dma_buf \*dmabuf, struct dma_buf_map \*map) 1051 - * void dma_buf_vunmap(struct dma_buf \*dmabuf, struct dma_buf_map \*map) 1050 + * void \*dma_buf_vmap(struct dma_buf \*dmabuf, struct iosys_map \*map) 1051 + * void dma_buf_vunmap(struct dma_buf \*dmabuf, struct iosys_map \*map) 1052 1052 * 1053 1053 * The vmap call can fail if there is no vmap support in the exporter, or if 1054 1054 * it runs out of vmalloc space. Note that the dma-buf layer keeps a reference ··· 1260 1260 * 1261 1261 * Returns 0 on success, or a negative errno code otherwise. 1262 1262 */ 1263 - int dma_buf_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map) 1263 + int dma_buf_vmap(struct dma_buf *dmabuf, struct iosys_map *map) 1264 1264 { 1265 - struct dma_buf_map ptr; 1265 + struct iosys_map ptr; 1266 1266 int ret = 0; 1267 1267 1268 - dma_buf_map_clear(map); 1268 + iosys_map_clear(map); 1269 1269 1270 1270 if (WARN_ON(!dmabuf)) 1271 1271 return -EINVAL; ··· 1276 1276 mutex_lock(&dmabuf->lock); 1277 1277 if (dmabuf->vmapping_counter) { 1278 1278 dmabuf->vmapping_counter++; 1279 - BUG_ON(dma_buf_map_is_null(&dmabuf->vmap_ptr)); 1279 + BUG_ON(iosys_map_is_null(&dmabuf->vmap_ptr)); 1280 1280 *map = dmabuf->vmap_ptr; 1281 1281 goto out_unlock; 1282 1282 } 1283 1283 1284 - BUG_ON(dma_buf_map_is_set(&dmabuf->vmap_ptr)); 1284 + BUG_ON(iosys_map_is_set(&dmabuf->vmap_ptr)); 1285 1285 1286 1286 ret = dmabuf->ops->vmap(dmabuf, &ptr); 1287 1287 if (WARN_ON_ONCE(ret)) ··· 1303 1303 * @dmabuf: [in] buffer to vunmap 1304 1304 * @map: [in] vmap pointer to vunmap 1305 1305 */ 1306 - void dma_buf_vunmap(struct dma_buf *dmabuf, struct dma_buf_map *map) 1306 + void dma_buf_vunmap(struct dma_buf *dmabuf, struct iosys_map *map) 1307 1307 { 1308 1308 if (WARN_ON(!dmabuf)) 1309 1309 return; 1310 1310 1311 - BUG_ON(dma_buf_map_is_null(&dmabuf->vmap_ptr)); 1311 + BUG_ON(iosys_map_is_null(&dmabuf->vmap_ptr)); 1312 1312 
BUG_ON(dmabuf->vmapping_counter == 0); 1313 - BUG_ON(!dma_buf_map_is_equal(&dmabuf->vmap_ptr, map)); 1313 + BUG_ON(!iosys_map_is_equal(&dmabuf->vmap_ptr, map)); 1314 1314 1315 1315 mutex_lock(&dmabuf->lock); 1316 1316 if (--dmabuf->vmapping_counter == 0) { 1317 1317 if (dmabuf->ops->vunmap) 1318 1318 dmabuf->ops->vunmap(dmabuf, map); 1319 - dma_buf_map_clear(&dmabuf->vmap_ptr); 1319 + iosys_map_clear(&dmabuf->vmap_ptr); 1320 1320 } 1321 1321 mutex_unlock(&dmabuf->lock); 1322 1322 }
+5 -5
drivers/dma-buf/heaps/cma_heap.c
··· 202 202 return vaddr; 203 203 } 204 204 205 - static int cma_heap_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map) 205 + static int cma_heap_vmap(struct dma_buf *dmabuf, struct iosys_map *map) 206 206 { 207 207 struct cma_heap_buffer *buffer = dmabuf->priv; 208 208 void *vaddr; ··· 211 211 mutex_lock(&buffer->lock); 212 212 if (buffer->vmap_cnt) { 213 213 buffer->vmap_cnt++; 214 - dma_buf_map_set_vaddr(map, buffer->vaddr); 214 + iosys_map_set_vaddr(map, buffer->vaddr); 215 215 goto out; 216 216 } 217 217 ··· 222 222 } 223 223 buffer->vaddr = vaddr; 224 224 buffer->vmap_cnt++; 225 - dma_buf_map_set_vaddr(map, buffer->vaddr); 225 + iosys_map_set_vaddr(map, buffer->vaddr); 226 226 out: 227 227 mutex_unlock(&buffer->lock); 228 228 229 229 return ret; 230 230 } 231 231 232 - static void cma_heap_vunmap(struct dma_buf *dmabuf, struct dma_buf_map *map) 232 + static void cma_heap_vunmap(struct dma_buf *dmabuf, struct iosys_map *map) 233 233 { 234 234 struct cma_heap_buffer *buffer = dmabuf->priv; 235 235 ··· 239 239 buffer->vaddr = NULL; 240 240 } 241 241 mutex_unlock(&buffer->lock); 242 - dma_buf_map_clear(map); 242 + iosys_map_clear(map); 243 243 } 244 244 245 245 static void cma_heap_dma_buf_release(struct dma_buf *dmabuf)
+5 -5
drivers/dma-buf/heaps/system_heap.c
··· 241 241 return vaddr; 242 242 } 243 243 244 - static int system_heap_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map) 244 + static int system_heap_vmap(struct dma_buf *dmabuf, struct iosys_map *map) 245 245 { 246 246 struct system_heap_buffer *buffer = dmabuf->priv; 247 247 void *vaddr; ··· 250 250 mutex_lock(&buffer->lock); 251 251 if (buffer->vmap_cnt) { 252 252 buffer->vmap_cnt++; 253 - dma_buf_map_set_vaddr(map, buffer->vaddr); 253 + iosys_map_set_vaddr(map, buffer->vaddr); 254 254 goto out; 255 255 } 256 256 ··· 262 262 263 263 buffer->vaddr = vaddr; 264 264 buffer->vmap_cnt++; 265 - dma_buf_map_set_vaddr(map, buffer->vaddr); 265 + iosys_map_set_vaddr(map, buffer->vaddr); 266 266 out: 267 267 mutex_unlock(&buffer->lock); 268 268 269 269 return ret; 270 270 } 271 271 272 - static void system_heap_vunmap(struct dma_buf *dmabuf, struct dma_buf_map *map) 272 + static void system_heap_vunmap(struct dma_buf *dmabuf, struct iosys_map *map) 273 273 { 274 274 struct system_heap_buffer *buffer = dmabuf->priv; 275 275 ··· 279 279 buffer->vaddr = NULL; 280 280 } 281 281 mutex_unlock(&buffer->lock); 282 - dma_buf_map_clear(map); 282 + iosys_map_clear(map); 283 283 } 284 284 285 285 static void system_heap_dma_buf_release(struct dma_buf *dmabuf)
+1 -1
drivers/gpu/drm/ast/ast_drv.h
··· 107 107 108 108 struct { 109 109 struct drm_gem_vram_object *gbo; 110 - struct dma_buf_map map; 110 + struct iosys_map map; 111 111 u64 off; 112 112 } hwc[AST_DEFAULT_HWC_NUM]; 113 113
+4 -4
drivers/gpu/drm/ast/ast_mode.c
··· 801 801 struct drm_shadow_plane_state *shadow_plane_state = to_drm_shadow_plane_state(new_state); 802 802 struct drm_framebuffer *fb = new_state->fb; 803 803 struct ast_private *ast = to_ast_private(plane->dev); 804 - struct dma_buf_map dst_map = 804 + struct iosys_map dst_map = 805 805 ast_cursor_plane->hwc[ast_cursor_plane->next_hwc_index].map; 806 806 u64 dst_off = 807 807 ast_cursor_plane->hwc[ast_cursor_plane->next_hwc_index].off; 808 - struct dma_buf_map src_map = shadow_plane_state->data[0]; 808 + struct iosys_map src_map = shadow_plane_state->data[0]; 809 809 unsigned int offset_x, offset_y; 810 810 u16 x, y; 811 811 u8 x_offset, y_offset; ··· 883 883 struct ast_cursor_plane *ast_cursor_plane = to_ast_cursor_plane(plane); 884 884 size_t i; 885 885 struct drm_gem_vram_object *gbo; 886 - struct dma_buf_map map; 886 + struct iosys_map map; 887 887 888 888 for (i = 0; i < ARRAY_SIZE(ast_cursor_plane->hwc); ++i) { 889 889 gbo = ast_cursor_plane->hwc[i].gbo; ··· 910 910 struct drm_plane *cursor_plane = &ast_cursor_plane->base; 911 911 size_t size, i; 912 912 struct drm_gem_vram_object *gbo; 913 - struct dma_buf_map map; 913 + struct iosys_map map; 914 914 int ret; 915 915 s64 off; 916 916
+9 -9
drivers/gpu/drm/drm_cache.c
··· 28 28 * Authors: Thomas Hellström <thomas-at-tungstengraphics-dot-com> 29 29 */ 30 30 31 - #include <linux/dma-buf-map.h> 31 + #include <linux/cc_platform.h> 32 32 #include <linux/export.h> 33 33 #include <linux/highmem.h> 34 - #include <linux/cc_platform.h> 34 + #include <linux/iosys-map.h> 35 35 #include <xen/xen.h> 36 36 37 37 #include <drm/drm_cache.h> ··· 214 214 } 215 215 EXPORT_SYMBOL(drm_need_swiotlb); 216 216 217 - static void memcpy_fallback(struct dma_buf_map *dst, 218 - const struct dma_buf_map *src, 217 + static void memcpy_fallback(struct iosys_map *dst, 218 + const struct iosys_map *src, 219 219 unsigned long len) 220 220 { 221 221 if (!dst->is_iomem && !src->is_iomem) { 222 222 memcpy(dst->vaddr, src->vaddr, len); 223 223 } else if (!src->is_iomem) { 224 - dma_buf_map_memcpy_to(dst, src->vaddr, len); 224 + iosys_map_memcpy_to(dst, src->vaddr, len); 225 225 } else if (!dst->is_iomem) { 226 226 memcpy_fromio(dst->vaddr, src->vaddr_iomem, len); 227 227 } else { ··· 305 305 * Tries an arch optimized memcpy for prefetching reading out of a WC region, 306 306 * and if no such beast is available, falls back to a normal memcpy. 307 307 */ 308 - void drm_memcpy_from_wc(struct dma_buf_map *dst, 309 - const struct dma_buf_map *src, 308 + void drm_memcpy_from_wc(struct iosys_map *dst, 309 + const struct iosys_map *src, 310 310 unsigned long len) 311 311 { 312 312 if (WARN_ON(in_interrupt())) { ··· 343 343 static_branch_enable(&has_movntdqa); 344 344 } 345 345 #else 346 - void drm_memcpy_from_wc(struct dma_buf_map *dst, 347 - const struct dma_buf_map *src, 346 + void drm_memcpy_from_wc(struct iosys_map *dst, 347 + const struct iosys_map *src, 348 348 unsigned long len) 349 349 { 350 350 WARN_ON(in_interrupt());
+5 -4
drivers/gpu/drm/drm_client.c
··· 3 3 * Copyright 2018 Noralf Trønnes 4 4 */ 5 5 6 - #include <linux/dma-buf-map.h> 6 + #include <linux/iosys-map.h> 7 7 #include <linux/list.h> 8 8 #include <linux/module.h> 9 9 #include <linux/mutex.h> ··· 309 309 * 0 on success, or a negative errno code otherwise. 310 310 */ 311 311 int 312 - drm_client_buffer_vmap(struct drm_client_buffer *buffer, struct dma_buf_map *map_copy) 312 + drm_client_buffer_vmap(struct drm_client_buffer *buffer, 313 + struct iosys_map *map_copy) 313 314 { 314 - struct dma_buf_map *map = &buffer->map; 315 + struct iosys_map *map = &buffer->map; 315 316 int ret; 316 317 317 318 /* ··· 343 342 */ 344 343 void drm_client_buffer_vunmap(struct drm_client_buffer *buffer) 345 344 { 346 - struct dma_buf_map *map = &buffer->map; 345 + struct iosys_map *map = &buffer->map; 347 346 348 347 drm_gem_vunmap(buffer->gem, map); 349 348 }
+6 -6
drivers/gpu/drm/drm_fb_helper.c
··· 373 373 374 374 static void drm_fb_helper_damage_blit_real(struct drm_fb_helper *fb_helper, 375 375 struct drm_clip_rect *clip, 376 - struct dma_buf_map *dst) 376 + struct iosys_map *dst) 377 377 { 378 378 struct drm_framebuffer *fb = fb_helper->fb; 379 379 unsigned int cpp = fb->format->cpp[0]; ··· 382 382 size_t len = (clip->x2 - clip->x1) * cpp; 383 383 unsigned int y; 384 384 385 - dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */ 385 + iosys_map_incr(dst, offset); /* go to first pixel within clip rect */ 386 386 387 387 for (y = clip->y1; y < clip->y2; y++) { 388 - dma_buf_map_memcpy_to(dst, src, len); 389 - dma_buf_map_incr(dst, fb->pitches[0]); 388 + iosys_map_memcpy_to(dst, src, len); 389 + iosys_map_incr(dst, fb->pitches[0]); 390 390 src += fb->pitches[0]; 391 391 } 392 392 } ··· 395 395 struct drm_clip_rect *clip) 396 396 { 397 397 struct drm_client_buffer *buffer = fb_helper->buffer; 398 - struct dma_buf_map map, dst; 398 + struct iosys_map map, dst; 399 399 int ret; 400 400 401 401 /* ··· 2322 2322 struct drm_framebuffer *fb; 2323 2323 struct fb_info *fbi; 2324 2324 u32 format; 2325 - struct dma_buf_map map; 2325 + struct iosys_map map; 2326 2326 int ret; 2327 2327 2328 2328 drm_dbg_kms(dev, "surface width(%d), height(%d) and bpp(%d)\n",
+6 -6
drivers/gpu/drm/drm_gem.c
··· 36 36 #include <linux/pagemap.h> 37 37 #include <linux/shmem_fs.h> 38 38 #include <linux/dma-buf.h> 39 - #include <linux/dma-buf-map.h> 39 + #include <linux/iosys-map.h> 40 40 #include <linux/mem_encrypt.h> 41 41 #include <linux/pagevec.h> 42 42 ··· 1165 1165 obj->funcs->unpin(obj); 1166 1166 } 1167 1167 1168 - int drm_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) 1168 + int drm_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map) 1169 1169 { 1170 1170 int ret; 1171 1171 ··· 1175 1175 ret = obj->funcs->vmap(obj, map); 1176 1176 if (ret) 1177 1177 return ret; 1178 - else if (dma_buf_map_is_null(map)) 1178 + else if (iosys_map_is_null(map)) 1179 1179 return -ENOMEM; 1180 1180 1181 1181 return 0; 1182 1182 } 1183 1183 EXPORT_SYMBOL(drm_gem_vmap); 1184 1184 1185 - void drm_gem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map) 1185 + void drm_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map) 1186 1186 { 1187 - if (dma_buf_map_is_null(map)) 1187 + if (iosys_map_is_null(map)) 1188 1188 return; 1189 1189 1190 1190 if (obj->funcs->vunmap) 1191 1191 obj->funcs->vunmap(obj, map); 1192 1192 1193 1193 /* Always set the mapping to NULL. Callers may rely on this. */ 1194 - dma_buf_map_clear(map); 1194 + iosys_map_clear(map); 1195 1195 } 1196 1196 EXPORT_SYMBOL(drm_gem_vunmap); 1197 1197
+5 -4
drivers/gpu/drm/drm_gem_cma_helper.c
··· 209 209 void drm_gem_cma_free(struct drm_gem_cma_object *cma_obj) 210 210 { 211 211 struct drm_gem_object *gem_obj = &cma_obj->base; 212 - struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(cma_obj->vaddr); 212 + struct iosys_map map = IOSYS_MAP_INIT_VADDR(cma_obj->vaddr); 213 213 214 214 if (gem_obj->import_attach) { 215 215 if (cma_obj->vaddr) ··· 480 480 * Returns: 481 481 * 0 on success, or a negative error code otherwise. 482 482 */ 483 - int drm_gem_cma_vmap(struct drm_gem_cma_object *cma_obj, struct dma_buf_map *map) 483 + int drm_gem_cma_vmap(struct drm_gem_cma_object *cma_obj, 484 + struct iosys_map *map) 484 485 { 485 - dma_buf_map_set_vaddr(map, cma_obj->vaddr); 486 + iosys_map_set_vaddr(map, cma_obj->vaddr); 486 487 487 488 return 0; 488 489 } ··· 558 557 { 559 558 struct drm_gem_cma_object *cma_obj; 560 559 struct drm_gem_object *obj; 561 - struct dma_buf_map map; 560 + struct iosys_map map; 562 561 int ret; 563 562 564 563 ret = dma_buf_vmap(attach->dmabuf, &map);
+8 -8
drivers/gpu/drm/drm_gem_framebuffer_helper.c
··· 321 321 * @data: returns the data address for each BO, can be NULL 322 322 * 323 323 * This function maps all buffer objects of the given framebuffer into 324 - * kernel address space and stores them in struct dma_buf_map. If the 324 + * kernel address space and stores them in struct iosys_map. If the 325 325 * mapping operation fails for one of the BOs, the function unmaps the 326 326 * already established mappings automatically. 327 327 * ··· 335 335 * 0 on success, or a negative errno code otherwise. 336 336 */ 337 337 int drm_gem_fb_vmap(struct drm_framebuffer *fb, 338 - struct dma_buf_map map[static DRM_FORMAT_MAX_PLANES], 339 - struct dma_buf_map data[DRM_FORMAT_MAX_PLANES]) 338 + struct iosys_map map[static DRM_FORMAT_MAX_PLANES], 339 + struct iosys_map data[DRM_FORMAT_MAX_PLANES]) 340 340 { 341 341 struct drm_gem_object *obj; 342 342 unsigned int i; ··· 345 345 for (i = 0; i < DRM_FORMAT_MAX_PLANES; ++i) { 346 346 obj = drm_gem_fb_get_obj(fb, i); 347 347 if (!obj) { 348 - dma_buf_map_clear(&map[i]); 348 + iosys_map_clear(&map[i]); 349 349 continue; 350 350 } 351 351 ret = drm_gem_vmap(obj, &map[i]); ··· 356 356 if (data) { 357 357 for (i = 0; i < DRM_FORMAT_MAX_PLANES; ++i) { 358 358 memcpy(&data[i], &map[i], sizeof(data[i])); 359 - if (dma_buf_map_is_null(&data[i])) 359 + if (iosys_map_is_null(&data[i])) 360 360 continue; 361 - dma_buf_map_incr(&data[i], fb->offsets[i]); 361 + iosys_map_incr(&data[i], fb->offsets[i]); 362 362 } 363 363 } 364 364 ··· 386 386 * See drm_gem_fb_vmap() for more information. 
387 387 */ 388 388 void drm_gem_fb_vunmap(struct drm_framebuffer *fb, 389 - struct dma_buf_map map[static DRM_FORMAT_MAX_PLANES]) 389 + struct iosys_map map[static DRM_FORMAT_MAX_PLANES]) 390 390 { 391 391 unsigned int i = DRM_FORMAT_MAX_PLANES; 392 392 struct drm_gem_object *obj; ··· 396 396 obj = drm_gem_fb_get_obj(fb, i); 397 397 if (!obj) 398 398 continue; 399 - if (dma_buf_map_is_null(&map[i])) 399 + if (iosys_map_is_null(&map[i])) 400 400 continue; 401 401 drm_gem_vunmap(obj, &map[i]); 402 402 }
+9 -6
drivers/gpu/drm/drm_gem_shmem_helper.c
··· 286 286 } 287 287 EXPORT_SYMBOL(drm_gem_shmem_unpin); 288 288 289 - static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, struct dma_buf_map *map) 289 + static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, 290 + struct iosys_map *map) 290 291 { 291 292 struct drm_gem_object *obj = &shmem->base; 292 293 int ret = 0; 293 294 294 295 if (shmem->vmap_use_count++ > 0) { 295 - dma_buf_map_set_vaddr(map, shmem->vaddr); 296 + iosys_map_set_vaddr(map, shmem->vaddr); 296 297 return 0; 297 298 } 298 299 ··· 320 319 if (!shmem->vaddr) 321 320 ret = -ENOMEM; 322 321 else 323 - dma_buf_map_set_vaddr(map, shmem->vaddr); 322 + iosys_map_set_vaddr(map, shmem->vaddr); 324 323 } 325 324 326 325 if (ret) { ··· 354 353 * Returns: 355 354 * 0 on success or a negative error code on failure. 356 355 */ 357 - int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem, struct dma_buf_map *map) 356 + int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem, 357 + struct iosys_map *map) 358 358 { 359 359 int ret; 360 360 ··· 370 368 EXPORT_SYMBOL(drm_gem_shmem_vmap); 371 369 372 370 static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem, 373 - struct dma_buf_map *map) 371 + struct iosys_map *map) 374 372 { 375 373 struct drm_gem_object *obj = &shmem->base; 376 374 ··· 402 400 * This function hides the differences between dma-buf imported and natively 403 401 * allocated objects. 404 402 */ 405 - void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem, struct dma_buf_map *map) 403 + void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem, 404 + struct iosys_map *map) 406 405 { 407 406 mutex_lock(&shmem->vmap_lock); 408 407 drm_gem_shmem_vunmap_locked(shmem, map);
+2 -2
drivers/gpu/drm/drm_gem_ttm_helper.c
··· 61 61 * 0 on success, or a negative errno code otherwise. 62 62 */ 63 63 int drm_gem_ttm_vmap(struct drm_gem_object *gem, 64 - struct dma_buf_map *map) 64 + struct iosys_map *map) 65 65 { 66 66 struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem); 67 67 ··· 78 78 * &drm_gem_object_funcs.vmap callback. 79 79 */ 80 80 void drm_gem_ttm_vunmap(struct drm_gem_object *gem, 81 - struct dma_buf_map *map) 81 + struct iosys_map *map) 82 82 { 83 83 struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem); 84 84
+14 -11
drivers/gpu/drm/drm_gem_vram_helper.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-or-later 2 2 3 - #include <linux/dma-buf-map.h> 3 + #include <linux/iosys-map.h> 4 4 #include <linux/module.h> 5 5 6 6 #include <drm/drm_debugfs.h> ··· 116 116 */ 117 117 118 118 WARN_ON(gbo->vmap_use_count); 119 - WARN_ON(dma_buf_map_is_set(&gbo->map)); 119 + WARN_ON(iosys_map_is_set(&gbo->map)); 120 120 121 121 drm_gem_object_release(&gbo->bo.base); 122 122 } ··· 365 365 EXPORT_SYMBOL(drm_gem_vram_unpin); 366 366 367 367 static int drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo, 368 - struct dma_buf_map *map) 368 + struct iosys_map *map) 369 369 { 370 370 int ret; 371 371 ··· 377 377 * page mapping might still be around. Only vmap if the there's 378 378 * no mapping present. 379 379 */ 380 - if (dma_buf_map_is_null(&gbo->map)) { 380 + if (iosys_map_is_null(&gbo->map)) { 381 381 ret = ttm_bo_vmap(&gbo->bo, &gbo->map); 382 382 if (ret) 383 383 return ret; ··· 391 391 } 392 392 393 393 static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo, 394 - struct dma_buf_map *map) 394 + struct iosys_map *map) 395 395 { 396 396 struct drm_device *dev = gbo->bo.base.dev; 397 397 398 398 if (drm_WARN_ON_ONCE(dev, !gbo->vmap_use_count)) 399 399 return; 400 400 401 - if (drm_WARN_ON_ONCE(dev, !dma_buf_map_is_equal(&gbo->map, map))) 401 + if (drm_WARN_ON_ONCE(dev, !iosys_map_is_equal(&gbo->map, map))) 402 402 return; /* BUG: map not mapped from this BO */ 403 403 404 404 if (--gbo->vmap_use_count > 0) ··· 428 428 * Returns: 429 429 * 0 on success, or a negative error code otherwise. 430 430 */ 431 - int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map) 431 + int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct iosys_map *map) 432 432 { 433 433 int ret; 434 434 ··· 463 463 * A call to drm_gem_vram_vunmap() unmaps and unpins a GEM VRAM buffer. See 464 464 * the documentation for drm_gem_vram_vmap() for more information. 
465 465 */ 466 - void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map) 466 + void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, 467 + struct iosys_map *map) 467 468 { 468 469 int ret; 469 470 ··· 568 567 return; 569 568 570 569 ttm_bo_vunmap(bo, &gbo->map); 571 - dma_buf_map_clear(&gbo->map); /* explicitly clear mapping for next vmap call */ 570 + iosys_map_clear(&gbo->map); /* explicitly clear mapping for next vmap call */ 572 571 } 573 572 574 573 static int drm_gem_vram_bo_driver_move(struct drm_gem_vram_object *gbo, ··· 803 802 * Returns: 804 803 * 0 on success, or a negative error code otherwise. 805 804 */ 806 - static int drm_gem_vram_object_vmap(struct drm_gem_object *gem, struct dma_buf_map *map) 805 + static int drm_gem_vram_object_vmap(struct drm_gem_object *gem, 806 + struct iosys_map *map) 807 807 { 808 808 struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem); 809 809 ··· 817 815 * @gem: The GEM object to unmap 818 816 * @map: Kernel virtual address where the VRAM GEM object was mapped 819 817 */ 820 - static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem, struct dma_buf_map *map) 818 + static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem, 819 + struct iosys_map *map) 821 820 { 822 821 struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem); 823 822
+3 -3
drivers/gpu/drm/drm_internal.h
··· 33 33 34 34 struct dentry; 35 35 struct dma_buf; 36 - struct dma_buf_map; 36 + struct iosys_map; 37 37 struct drm_connector; 38 38 struct drm_crtc; 39 39 struct drm_framebuffer; ··· 174 174 175 175 int drm_gem_pin(struct drm_gem_object *obj); 176 176 void drm_gem_unpin(struct drm_gem_object *obj); 177 - int drm_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map); 178 - void drm_gem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map); 177 + int drm_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map); 178 + void drm_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map); 179 179 180 180 int drm_gem_dumb_destroy(struct drm_file *file, struct drm_device *dev, 181 181 u32 handle);
+4 -4
drivers/gpu/drm/drm_mipi_dbi.c
··· 201 201 struct drm_rect *clip, bool swap) 202 202 { 203 203 struct drm_gem_object *gem = drm_gem_fb_get_obj(fb, 0); 204 - struct dma_buf_map map[DRM_FORMAT_MAX_PLANES]; 205 - struct dma_buf_map data[DRM_FORMAT_MAX_PLANES]; 204 + struct iosys_map map[DRM_FORMAT_MAX_PLANES]; 205 + struct iosys_map data[DRM_FORMAT_MAX_PLANES]; 206 206 void *src; 207 207 int ret; 208 208 ··· 258 258 259 259 static void mipi_dbi_fb_dirty(struct drm_framebuffer *fb, struct drm_rect *rect) 260 260 { 261 - struct dma_buf_map map[DRM_FORMAT_MAX_PLANES]; 262 - struct dma_buf_map data[DRM_FORMAT_MAX_PLANES]; 261 + struct iosys_map map[DRM_FORMAT_MAX_PLANES]; 262 + struct iosys_map data[DRM_FORMAT_MAX_PLANES]; 263 263 struct mipi_dbi_dev *dbidev = drm_to_mipi_dbi_dev(fb->dev); 264 264 unsigned int height = rect->y2 - rect->y1; 265 265 unsigned int width = rect->x2 - rect->x1;
+2 -2
drivers/gpu/drm/drm_prime.c
··· 674 674 * 675 675 * Returns 0 on success or a negative errno code otherwise. 676 676 */ 677 - int drm_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct dma_buf_map *map) 677 + int drm_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct iosys_map *map) 678 678 { 679 679 struct drm_gem_object *obj = dma_buf->priv; 680 680 ··· 690 690 * Releases a kernel virtual mapping. This can be used as the 691 691 * &dma_buf_ops.vunmap callback. Calls into &drm_gem_object_funcs.vunmap for device specific handling. 692 692 */ 693 - void drm_gem_dmabuf_vunmap(struct dma_buf *dma_buf, struct dma_buf_map *map) 693 + void drm_gem_dmabuf_vunmap(struct dma_buf *dma_buf, struct iosys_map *map) 694 694 { 695 695 struct drm_gem_object *obj = dma_buf->priv; 696 696
+1 -1
drivers/gpu/drm/etnaviv/etnaviv_drv.h
··· 49 49 50 50 int etnaviv_gem_mmap_offset(struct drm_gem_object *obj, u64 *offset); 51 51 struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj); 52 - int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map); 52 + int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct iosys_map *map); 53 53 struct drm_gem_object *etnaviv_gem_prime_import_sg_table(struct drm_device *dev, 54 54 struct dma_buf_attachment *attach, struct sg_table *sg); 55 55 int etnaviv_gem_prime_pin(struct drm_gem_object *obj);
+4 -4
drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
··· 25 25 return drm_prime_pages_to_sg(obj->dev, etnaviv_obj->pages, npages); 26 26 } 27 27 28 - int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) 28 + int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct iosys_map *map) 29 29 { 30 30 void *vaddr; 31 31 32 32 vaddr = etnaviv_gem_vmap(obj); 33 33 if (!vaddr) 34 34 return -ENOMEM; 35 - dma_buf_map_set_vaddr(map, vaddr); 35 + iosys_map_set_vaddr(map, vaddr); 36 36 37 37 return 0; 38 38 } ··· 62 62 63 63 static void etnaviv_gem_prime_release(struct etnaviv_gem_object *etnaviv_obj) 64 64 { 65 - struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(etnaviv_obj->vaddr); 65 + struct iosys_map map = IOSYS_MAP_INIT_VADDR(etnaviv_obj->vaddr); 66 66 67 67 if (etnaviv_obj->vaddr) 68 68 dma_buf_vunmap(etnaviv_obj->base.import_attach->dmabuf, &map); ··· 77 77 78 78 static void *etnaviv_gem_prime_vmap_impl(struct etnaviv_gem_object *etnaviv_obj) 79 79 { 80 - struct dma_buf_map map; 80 + struct iosys_map map; 81 81 int ret; 82 82 83 83 lockdep_assert_held(&etnaviv_obj->lock);
+2 -2
drivers/gpu/drm/gud/gud_pipe.c
··· 152 152 { 153 153 struct dma_buf_attachment *import_attach = fb->obj[0]->import_attach; 154 154 u8 compression = gdrm->compression; 155 - struct dma_buf_map map[DRM_FORMAT_MAX_PLANES]; 156 - struct dma_buf_map map_data[DRM_FORMAT_MAX_PLANES]; 155 + struct iosys_map map[DRM_FORMAT_MAX_PLANES]; 156 + struct iosys_map map_data[DRM_FORMAT_MAX_PLANES]; 157 157 void *vaddr, *buf; 158 158 size_t pitch, len; 159 159 int ret = 0;
+3 -2
drivers/gpu/drm/hyperv/hyperv_drm_modeset.c
··· 19 19 #include "hyperv_drm.h" 20 20 21 21 static int hyperv_blit_to_vram_rect(struct drm_framebuffer *fb, 22 - const struct dma_buf_map *map, 22 + const struct iosys_map *map, 23 23 struct drm_rect *rect) 24 24 { 25 25 struct hyperv_drm_device *hv = to_hv(fb->dev); ··· 38 38 return 0; 39 39 } 40 40 41 - static int hyperv_blit_to_vram_fullscreen(struct drm_framebuffer *fb, const struct dma_buf_map *map) 41 + static int hyperv_blit_to_vram_fullscreen(struct drm_framebuffer *fb, 42 + const struct iosys_map *map) 42 43 { 43 44 struct drm_rect fullscreen = { 44 45 .x1 = 0,
+5 -3
drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
··· 74 74 kfree(sg); 75 75 } 76 76 77 - static int i915_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct dma_buf_map *map) 77 + static int i915_gem_dmabuf_vmap(struct dma_buf *dma_buf, 78 + struct iosys_map *map) 78 79 { 79 80 struct drm_i915_gem_object *obj = dma_buf_to_obj(dma_buf); 80 81 void *vaddr; ··· 84 83 if (IS_ERR(vaddr)) 85 84 return PTR_ERR(vaddr); 86 85 87 - dma_buf_map_set_vaddr(map, vaddr); 86 + iosys_map_set_vaddr(map, vaddr); 88 87 89 88 return 0; 90 89 } 91 90 92 - static void i915_gem_dmabuf_vunmap(struct dma_buf *dma_buf, struct dma_buf_map *map) 91 + static void i915_gem_dmabuf_vunmap(struct dma_buf *dma_buf, 92 + struct iosys_map *map) 93 93 { 94 94 struct drm_i915_gem_object *obj = dma_buf_to_obj(dma_buf); 95 95
+3 -3
drivers/gpu/drm/i915/gem/selftests/i915_gem_dmabuf.c
··· 266 266 struct drm_i915_gem_object *obj; 267 267 struct dma_buf *dmabuf; 268 268 void *obj_map, *dma_map; 269 - struct dma_buf_map map; 269 + struct iosys_map map; 270 270 u32 pattern[] = { 0, 0xaa, 0xcc, 0x55, 0xff }; 271 271 int err, i; 272 272 ··· 349 349 struct drm_i915_private *i915 = arg; 350 350 struct drm_i915_gem_object *obj; 351 351 struct dma_buf *dmabuf; 352 - struct dma_buf_map map; 352 + struct iosys_map map; 353 353 void *ptr; 354 354 int err; 355 355 ··· 400 400 struct drm_i915_private *i915 = arg; 401 401 struct drm_i915_gem_object *obj; 402 402 struct dma_buf *dmabuf; 403 - struct dma_buf_map map; 403 + struct iosys_map map; 404 404 void *ptr; 405 405 int err; 406 406
+3 -3
drivers/gpu/drm/i915/gem/selftests/mock_dmabuf.c
··· 61 61 kfree(mock); 62 62 } 63 63 64 - static int mock_dmabuf_vmap(struct dma_buf *dma_buf, struct dma_buf_map *map) 64 + static int mock_dmabuf_vmap(struct dma_buf *dma_buf, struct iosys_map *map) 65 65 { 66 66 struct mock_dmabuf *mock = to_mock(dma_buf); 67 67 void *vaddr; ··· 69 69 vaddr = vm_map_ram(mock->pages, mock->npages, 0); 70 70 if (!vaddr) 71 71 return -ENOMEM; 72 - dma_buf_map_set_vaddr(map, vaddr); 72 + iosys_map_set_vaddr(map, vaddr); 73 73 74 74 return 0; 75 75 } 76 76 77 - static void mock_dmabuf_vunmap(struct dma_buf *dma_buf, struct dma_buf_map *map) 77 + static void mock_dmabuf_vunmap(struct dma_buf *dma_buf, struct iosys_map *map) 78 78 { 79 79 struct mock_dmabuf *mock = to_mock(dma_buf); 80 80
+2 -1
drivers/gpu/drm/lima/lima_gem.c
··· 2 2 /* Copyright 2017-2019 Qiang Yu <yuq825@gmail.com> */ 3 3 4 4 #include <linux/mm.h> 5 + #include <linux/iosys-map.h> 5 6 #include <linux/sync_file.h> 6 7 #include <linux/pagemap.h> 7 8 #include <linux/shmem_fs.h> ··· 183 182 return drm_gem_shmem_pin(&bo->base); 184 183 } 185 184 186 - static int lima_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) 185 + static int lima_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map) 187 186 { 188 187 struct lima_bo *bo = to_lima_bo(obj); 189 188
+2 -2
drivers/gpu/drm/lima/lima_sched.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 OR MIT 2 2 /* Copyright 2017-2019 Qiang Yu <yuq825@gmail.com> */ 3 3 4 - #include <linux/dma-buf-map.h> 4 + #include <linux/iosys-map.h> 5 5 #include <linux/kthread.h> 6 6 #include <linux/slab.h> 7 7 #include <linux/vmalloc.h> ··· 284 284 struct lima_dump_chunk_buffer *buffer_chunk; 285 285 u32 size, task_size, mem_size; 286 286 int i; 287 - struct dma_buf_map map; 287 + struct iosys_map map; 288 288 int ret; 289 289 290 290 mutex_lock(&dev->error_task_list_lock);
+4 -3
drivers/gpu/drm/mediatek/mtk_drm_gem.c
··· 220 220 return &mtk_gem->base; 221 221 } 222 222 223 - int mtk_drm_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) 223 + int mtk_drm_gem_prime_vmap(struct drm_gem_object *obj, struct iosys_map *map) 224 224 { 225 225 struct mtk_drm_gem_obj *mtk_gem = to_mtk_gem_obj(obj); 226 226 struct sg_table *sgt = NULL; ··· 247 247 248 248 out: 249 249 kfree(sgt); 250 - dma_buf_map_set_vaddr(map, mtk_gem->kvaddr); 250 + iosys_map_set_vaddr(map, mtk_gem->kvaddr); 251 251 252 252 return 0; 253 253 } 254 254 255 - void mtk_drm_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map) 255 + void mtk_drm_gem_prime_vunmap(struct drm_gem_object *obj, 256 + struct iosys_map *map) 256 257 { 257 258 struct mtk_drm_gem_obj *mtk_gem = to_mtk_gem_obj(obj); 258 259 void *vaddr = map->vaddr;
+3 -2
drivers/gpu/drm/mediatek/mtk_drm_gem.h
··· 42 42 struct sg_table *mtk_gem_prime_get_sg_table(struct drm_gem_object *obj); 43 43 struct drm_gem_object *mtk_gem_prime_import_sg_table(struct drm_device *dev, 44 44 struct dma_buf_attachment *attach, struct sg_table *sg); 45 - int mtk_drm_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map); 46 - void mtk_drm_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map); 45 + int mtk_drm_gem_prime_vmap(struct drm_gem_object *obj, struct iosys_map *map); 46 + void mtk_drm_gem_prime_vunmap(struct drm_gem_object *obj, 47 + struct iosys_map *map); 47 48 48 49 #endif
+2 -2
drivers/gpu/drm/mgag200/mgag200_mode.c
··· 9 9 */ 10 10 11 11 #include <linux/delay.h> 12 - #include <linux/dma-buf-map.h> 12 + #include <linux/iosys-map.h> 13 13 14 14 #include <drm/drm_atomic_helper.h> 15 15 #include <drm/drm_atomic_state_helper.h> ··· 845 845 846 846 static void 847 847 mgag200_handle_damage(struct mga_device *mdev, struct drm_framebuffer *fb, 848 - struct drm_rect *clip, const struct dma_buf_map *map) 848 + struct drm_rect *clip, const struct iosys_map *map) 849 849 { 850 850 void __iomem *dst = mdev->vram; 851 851 void *vmap = map->vaddr; /* TODO: Use mapping abstraction properly */
+2 -2
drivers/gpu/drm/msm/msm_drv.h
··· 309 309 void msm_gem_shrinker_cleanup(struct drm_device *dev); 310 310 311 311 struct sg_table *msm_gem_prime_get_sg_table(struct drm_gem_object *obj); 312 - int msm_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map); 313 - void msm_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map); 312 + int msm_gem_prime_vmap(struct drm_gem_object *obj, struct iosys_map *map); 313 + void msm_gem_prime_vunmap(struct drm_gem_object *obj, struct iosys_map *map); 314 314 struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev, 315 315 struct dma_buf_attachment *attach, struct sg_table *sg); 316 316 int msm_gem_prime_pin(struct drm_gem_object *obj);
+3 -3
drivers/gpu/drm/msm/msm_gem_prime.c
··· 22 22 return drm_prime_pages_to_sg(obj->dev, msm_obj->pages, npages); 23 23 } 24 24 25 - int msm_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) 25 + int msm_gem_prime_vmap(struct drm_gem_object *obj, struct iosys_map *map) 26 26 { 27 27 void *vaddr; 28 28 29 29 vaddr = msm_gem_get_vaddr(obj); 30 30 if (IS_ERR(vaddr)) 31 31 return PTR_ERR(vaddr); 32 - dma_buf_map_set_vaddr(map, vaddr); 32 + iosys_map_set_vaddr(map, vaddr); 33 33 34 34 return 0; 35 35 } 36 36 37 - void msm_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map) 37 + void msm_gem_prime_vunmap(struct drm_gem_object *obj, struct iosys_map *map) 38 38 { 39 39 msm_gem_put_vaddr(obj); 40 40 }
+7 -6
drivers/gpu/drm/panfrost/panfrost_perfcnt.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 /* Copyright 2019 Collabora Ltd */ 3 3 4 - #include <drm/drm_file.h> 5 - #include <drm/drm_gem_shmem_helper.h> 6 - #include <drm/panfrost_drm.h> 7 4 #include <linux/completion.h> 8 - #include <linux/dma-buf-map.h> 9 5 #include <linux/iopoll.h> 6 + #include <linux/iosys-map.h> 10 7 #include <linux/pm_runtime.h> 11 8 #include <linux/slab.h> 12 9 #include <linux/uaccess.h> 10 + 11 + #include <drm/drm_file.h> 12 + #include <drm/drm_gem_shmem_helper.h> 13 + #include <drm/panfrost_drm.h> 13 14 14 15 #include "panfrost_device.h" 15 16 #include "panfrost_features.h" ··· 74 73 { 75 74 struct panfrost_file_priv *user = file_priv->driver_priv; 76 75 struct panfrost_perfcnt *perfcnt = pfdev->perfcnt; 77 - struct dma_buf_map map; 76 + struct iosys_map map; 78 77 struct drm_gem_shmem_object *bo; 79 78 u32 cfg, as; 80 79 int ret; ··· 182 181 { 183 182 struct panfrost_file_priv *user = file_priv->driver_priv; 184 183 struct panfrost_perfcnt *perfcnt = pfdev->perfcnt; 185 - struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(perfcnt->buf); 184 + struct iosys_map map = IOSYS_MAP_INIT_VADDR(perfcnt->buf); 186 185 187 186 if (user != perfcnt->user) 188 187 return -EINVAL;
+4 -4
drivers/gpu/drm/qxl/qxl_display.c
··· 25 25 26 26 #include <linux/crc32.h> 27 27 #include <linux/delay.h> 28 - #include <linux/dma-buf-map.h> 28 + #include <linux/iosys-map.h> 29 29 30 30 #include <drm/drm_drv.h> 31 31 #include <drm/drm_atomic.h> ··· 566 566 { 567 567 static const u32 size = 64 * 64 * 4; 568 568 struct qxl_bo *cursor_bo; 569 - struct dma_buf_map cursor_map; 570 - struct dma_buf_map user_map; 569 + struct iosys_map cursor_map; 570 + struct iosys_map user_map; 571 571 struct qxl_cursor cursor; 572 572 int ret; 573 573 ··· 1183 1183 { 1184 1184 int ret; 1185 1185 struct drm_gem_object *gobj; 1186 - struct dma_buf_map map; 1186 + struct iosys_map map; 1187 1187 int monitors_config_size = sizeof(struct qxl_monitors_config) + 1188 1188 qxl_num_crtc * sizeof(struct qxl_head); 1189 1189
+3 -3
drivers/gpu/drm/qxl/qxl_draw.c
··· 20 20 * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 21 21 */ 22 22 23 - #include <linux/dma-buf-map.h> 23 + #include <linux/iosys-map.h> 24 24 25 25 #include <drm/drm_fourcc.h> 26 26 ··· 44 44 unsigned int num_clips, 45 45 struct qxl_bo *clips_bo) 46 46 { 47 - struct dma_buf_map map; 47 + struct iosys_map map; 48 48 struct qxl_clip_rects *dev_clips; 49 49 int ret; 50 50 ··· 146 146 int stride = fb->pitches[0]; 147 147 /* depth is not actually interesting, we don't mask with it */ 148 148 int depth = fb->format->cpp[0] * 8; 149 - struct dma_buf_map surface_map; 149 + struct iosys_map surface_map; 150 150 uint8_t *surface_base; 151 151 struct qxl_release *release; 152 152 struct qxl_bo *clips_bo;
+5 -5
drivers/gpu/drm/qxl/qxl_drv.h
··· 30 30 * Definitions taken from spice-protocol, plus kernel driver specific bits. 31 31 */ 32 32 33 - #include <linux/dma-buf-map.h> 33 + #include <linux/iosys-map.h> 34 34 #include <linux/dma-fence.h> 35 35 #include <linux/firmware.h> 36 36 #include <linux/platform_device.h> ··· 50 50 51 51 #include "qxl_dev.h" 52 52 53 - struct dma_buf_map; 53 + struct iosys_map; 54 54 55 55 #define DRIVER_AUTHOR "Dave Airlie" 56 56 ··· 81 81 /* Protected by tbo.reserved */ 82 82 struct ttm_place placements[3]; 83 83 struct ttm_placement placement; 84 - struct dma_buf_map map; 84 + struct iosys_map map; 85 85 void *kptr; 86 86 unsigned int map_count; 87 87 int type; ··· 431 431 struct drm_gem_object *qxl_gem_prime_import_sg_table( 432 432 struct drm_device *dev, struct dma_buf_attachment *attach, 433 433 struct sg_table *sgt); 434 - int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map); 434 + int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct iosys_map *map); 435 435 void qxl_gem_prime_vunmap(struct drm_gem_object *obj, 436 - struct dma_buf_map *map); 436 + struct iosys_map *map); 437 437 438 438 /* qxl_irq.c */ 439 439 int qxl_irq_init(struct qxl_device *qdev);
+4 -4
drivers/gpu/drm/qxl/qxl_object.c
··· 23 23 * Alon Levy 24 24 */ 25 25 26 - #include <linux/dma-buf-map.h> 26 + #include <linux/iosys-map.h> 27 27 #include <linux/io-mapping.h> 28 28 29 29 #include "qxl_drv.h" ··· 158 158 return 0; 159 159 } 160 160 161 - int qxl_bo_vmap_locked(struct qxl_bo *bo, struct dma_buf_map *map) 161 + int qxl_bo_vmap_locked(struct qxl_bo *bo, struct iosys_map *map) 162 162 { 163 163 int r; 164 164 ··· 184 184 return 0; 185 185 } 186 186 187 - int qxl_bo_vmap(struct qxl_bo *bo, struct dma_buf_map *map) 187 + int qxl_bo_vmap(struct qxl_bo *bo, struct iosys_map *map) 188 188 { 189 189 int r; 190 190 ··· 210 210 void *rptr; 211 211 int ret; 212 212 struct io_mapping *map; 213 - struct dma_buf_map bo_map; 213 + struct iosys_map bo_map; 214 214 215 215 if (bo->tbo.resource->mem_type == TTM_PL_VRAM) 216 216 map = qdev->vram_mapping;
+2 -2
drivers/gpu/drm/qxl/qxl_object.h
··· 59 59 u32 priority, 60 60 struct qxl_surface *surf, 61 61 struct qxl_bo **bo_ptr); 62 - int qxl_bo_vmap(struct qxl_bo *bo, struct dma_buf_map *map); 63 - int qxl_bo_vmap_locked(struct qxl_bo *bo, struct dma_buf_map *map); 62 + int qxl_bo_vmap(struct qxl_bo *bo, struct iosys_map *map); 63 + int qxl_bo_vmap_locked(struct qxl_bo *bo, struct iosys_map *map); 64 64 int qxl_bo_vunmap(struct qxl_bo *bo); 65 65 void qxl_bo_vunmap_locked(struct qxl_bo *bo); 66 66 void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, int page_offset);
+2 -2
drivers/gpu/drm/qxl/qxl_prime.c
··· 54 54 return ERR_PTR(-ENOSYS); 55 55 } 56 56 57 - int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) 57 + int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct iosys_map *map) 58 58 { 59 59 struct qxl_bo *bo = gem_to_qxl_bo(obj); 60 60 int ret; ··· 67 67 } 68 68 69 69 void qxl_gem_prime_vunmap(struct drm_gem_object *obj, 70 - struct dma_buf_map *map) 70 + struct iosys_map *map) 71 71 { 72 72 struct qxl_bo *bo = gem_to_qxl_bo(obj); 73 73
+1
drivers/gpu/drm/radeon/radeon_gem.c
··· 26 26 * Jerome Glisse 27 27 */ 28 28 29 + #include <linux/iosys-map.h> 29 30 #include <linux/pci.h> 30 31 31 32 #include <drm/drm_device.h>
+5 -4
drivers/gpu/drm/rockchip/rockchip_drm_gem.c
··· 510 510 return ERR_PTR(ret); 511 511 } 512 512 513 - int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) 513 + int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct iosys_map *map) 514 514 { 515 515 struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj); 516 516 ··· 519 519 pgprot_writecombine(PAGE_KERNEL)); 520 520 if (!vaddr) 521 521 return -ENOMEM; 522 - dma_buf_map_set_vaddr(map, vaddr); 522 + iosys_map_set_vaddr(map, vaddr); 523 523 return 0; 524 524 } 525 525 526 526 if (rk_obj->dma_attrs & DMA_ATTR_NO_KERNEL_MAPPING) 527 527 return -ENOMEM; 528 - dma_buf_map_set_vaddr(map, rk_obj->kvaddr); 528 + iosys_map_set_vaddr(map, rk_obj->kvaddr); 529 529 530 530 return 0; 531 531 } 532 532 533 - void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map) 533 + void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, 534 + struct iosys_map *map) 534 535 { 535 536 struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj); 536 537
+3 -2
drivers/gpu/drm/rockchip/rockchip_drm_gem.h
··· 31 31 rockchip_gem_prime_import_sg_table(struct drm_device *dev, 32 32 struct dma_buf_attachment *attach, 33 33 struct sg_table *sg); 34 - int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map); 35 - void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map); 34 + int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct iosys_map *map); 35 + void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, 36 + struct iosys_map *map); 36 37 37 38 struct rockchip_gem_object * 38 39 rockchip_gem_create_object(struct drm_device *drm, unsigned int size,
+5 -5
drivers/gpu/drm/tegra/gem.c
··· 174 174 static void *tegra_bo_mmap(struct host1x_bo *bo) 175 175 { 176 176 struct tegra_bo *obj = host1x_to_tegra_bo(bo); 177 - struct dma_buf_map map; 177 + struct iosys_map map; 178 178 int ret; 179 179 180 180 if (obj->vaddr) { ··· 191 191 static void tegra_bo_munmap(struct host1x_bo *bo, void *addr) 192 192 { 193 193 struct tegra_bo *obj = host1x_to_tegra_bo(bo); 194 - struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(addr); 194 + struct iosys_map map = IOSYS_MAP_INIT_VADDR(addr); 195 195 196 196 if (obj->vaddr) 197 197 return; ··· 699 699 return __tegra_gem_mmap(gem, vma); 700 700 } 701 701 702 - static int tegra_gem_prime_vmap(struct dma_buf *buf, struct dma_buf_map *map) 702 + static int tegra_gem_prime_vmap(struct dma_buf *buf, struct iosys_map *map) 703 703 { 704 704 struct drm_gem_object *gem = buf->priv; 705 705 struct tegra_bo *bo = to_tegra_bo(gem); 706 706 707 - dma_buf_map_set_vaddr(map, bo->vaddr); 707 + iosys_map_set_vaddr(map, bo->vaddr); 708 708 709 709 return 0; 710 710 } 711 711 712 - static void tegra_gem_prime_vunmap(struct dma_buf *buf, struct dma_buf_map *map) 712 + static void tegra_gem_prime_vunmap(struct dma_buf *buf, struct iosys_map *map) 713 713 { 714 714 } 715 715
+5 -3
drivers/gpu/drm/tiny/cirrus.c
··· 16 16 * Copyright 1999-2001 Jeff Garzik <jgarzik@pobox.com> 17 17 */ 18 18 19 - #include <linux/dma-buf-map.h> 19 + #include <linux/iosys-map.h> 20 20 #include <linux/module.h> 21 21 #include <linux/pci.h> 22 22 ··· 312 312 return 0; 313 313 } 314 314 315 - static int cirrus_fb_blit_rect(struct drm_framebuffer *fb, const struct dma_buf_map *map, 315 + static int cirrus_fb_blit_rect(struct drm_framebuffer *fb, 316 + const struct iosys_map *map, 316 317 struct drm_rect *rect) 317 318 { 318 319 struct cirrus_device *cirrus = to_cirrus(fb->dev); ··· 345 344 return 0; 346 345 } 347 346 348 - static int cirrus_fb_blit_fullscreen(struct drm_framebuffer *fb, const struct dma_buf_map *map) 347 + static int cirrus_fb_blit_fullscreen(struct drm_framebuffer *fb, 348 + const struct iosys_map *map) 349 349 { 350 350 struct drm_rect fullscreen = { 351 351 .x1 = 0,
+4 -3
drivers/gpu/drm/tiny/gm12u320.c
··· 95 95 struct drm_rect rect; 96 96 int frame; 97 97 int draw_status_timeout; 98 - struct dma_buf_map src_map; 98 + struct iosys_map src_map; 99 99 } fb_update; 100 100 }; 101 101 ··· 395 395 GM12U320_ERR("Frame update error: %d\n", ret); 396 396 } 397 397 398 - static void gm12u320_fb_mark_dirty(struct drm_framebuffer *fb, const struct dma_buf_map *map, 398 + static void gm12u320_fb_mark_dirty(struct drm_framebuffer *fb, 399 + const struct iosys_map *map, 399 400 struct drm_rect *dirty) 400 401 { 401 402 struct gm12u320_device *gm12u320 = to_gm12u320(fb->dev); ··· 439 438 mutex_lock(&gm12u320->fb_update.lock); 440 439 old_fb = gm12u320->fb_update.fb; 441 440 gm12u320->fb_update.fb = NULL; 442 - dma_buf_map_clear(&gm12u320->fb_update.src_map); 441 + iosys_map_clear(&gm12u320->fb_update.src_map); 443 442 mutex_unlock(&gm12u320->fb_update.lock); 444 443 445 444 drm_framebuffer_put(old_fb);
+8 -8
drivers/gpu/drm/ttm/ttm_bo_util.c
··· 33 33 #include <drm/ttm/ttm_placement.h> 34 34 #include <drm/drm_cache.h> 35 35 #include <drm/drm_vma_manager.h> 36 - #include <linux/dma-buf-map.h> 36 + #include <linux/iosys-map.h> 37 37 #include <linux/io.h> 38 38 #include <linux/highmem.h> 39 39 #include <linux/wait.h> ··· 93 93 { 94 94 const struct ttm_kmap_iter_ops *dst_ops = dst_iter->ops; 95 95 const struct ttm_kmap_iter_ops *src_ops = src_iter->ops; 96 - struct dma_buf_map src_map, dst_map; 96 + struct iosys_map src_map, dst_map; 97 97 pgoff_t i; 98 98 99 99 /* Single TTM move. NOP */ ··· 385 385 } 386 386 EXPORT_SYMBOL(ttm_bo_kunmap); 387 387 388 - int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map) 388 + int ttm_bo_vmap(struct ttm_buffer_object *bo, struct iosys_map *map) 389 389 { 390 390 struct ttm_resource *mem = bo->resource; 391 391 int ret; ··· 413 413 if (!vaddr_iomem) 414 414 return -ENOMEM; 415 415 416 - dma_buf_map_set_vaddr_iomem(map, vaddr_iomem); 416 + iosys_map_set_vaddr_iomem(map, vaddr_iomem); 417 417 418 418 } else { 419 419 struct ttm_operation_ctx ctx = { ··· 437 437 if (!vaddr) 438 438 return -ENOMEM; 439 439 440 - dma_buf_map_set_vaddr(map, vaddr); 440 + iosys_map_set_vaddr(map, vaddr); 441 441 } 442 442 443 443 return 0; 444 444 } 445 445 EXPORT_SYMBOL(ttm_bo_vmap); 446 446 447 - void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map) 447 + void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct iosys_map *map) 448 448 { 449 449 struct ttm_resource *mem = bo->resource; 450 450 451 - if (dma_buf_map_is_null(map)) 451 + if (iosys_map_is_null(map)) 452 452 return; 453 453 454 454 if (!map->is_iomem) 455 455 vunmap(map->vaddr); 456 456 else if (!mem->bus.addr) 457 457 iounmap(map->vaddr_iomem); 458 - dma_buf_map_clear(map); 458 + iosys_map_clear(map); 459 459 460 460 ttm_mem_io_free(bo->bdev, bo->resource); 461 461 }
+21 -21
drivers/gpu/drm/ttm/ttm_resource.c
··· 22 22 * Authors: Christian König 23 23 */ 24 24 25 - #include <linux/dma-buf-map.h> 25 + #include <linux/iosys-map.h> 26 26 #include <linux/io-mapping.h> 27 27 #include <linux/scatterlist.h> 28 28 ··· 209 209 EXPORT_SYMBOL(ttm_resource_manager_debug); 210 210 211 211 static void ttm_kmap_iter_iomap_map_local(struct ttm_kmap_iter *iter, 212 - struct dma_buf_map *dmap, 212 + struct iosys_map *dmap, 213 213 pgoff_t i) 214 214 { 215 215 struct ttm_kmap_iter_iomap *iter_io = ··· 236 236 addr = io_mapping_map_local_wc(iter_io->iomap, iter_io->cache.offs + 237 237 (((resource_size_t)i - iter_io->cache.i) 238 238 << PAGE_SHIFT)); 239 - dma_buf_map_set_vaddr_iomem(dmap, addr); 239 + iosys_map_set_vaddr_iomem(dmap, addr); 240 240 } 241 241 242 242 static void ttm_kmap_iter_iomap_unmap_local(struct ttm_kmap_iter *iter, 243 - struct dma_buf_map *map) 243 + struct iosys_map *map) 244 244 { 245 245 io_mapping_unmap_local(map->vaddr_iomem); 246 246 } ··· 291 291 */ 292 292 293 293 static void ttm_kmap_iter_linear_io_map_local(struct ttm_kmap_iter *iter, 294 - struct dma_buf_map *dmap, 294 + struct iosys_map *dmap, 295 295 pgoff_t i) 296 296 { 297 297 struct ttm_kmap_iter_linear_io *iter_io = 298 298 container_of(iter, typeof(*iter_io), base); 299 299 300 300 *dmap = iter_io->dmap; 301 - dma_buf_map_incr(dmap, i * PAGE_SIZE); 301 + iosys_map_incr(dmap, i * PAGE_SIZE); 302 302 } 303 303 304 304 static const struct ttm_kmap_iter_ops ttm_kmap_iter_linear_io_ops = { ··· 334 334 } 335 335 336 336 if (mem->bus.addr) { 337 - dma_buf_map_set_vaddr(&iter_io->dmap, mem->bus.addr); 337 + iosys_map_set_vaddr(&iter_io->dmap, mem->bus.addr); 338 338 iter_io->needs_unmap = false; 339 339 } else { 340 340 size_t bus_size = (size_t)mem->num_pages << PAGE_SHIFT; ··· 342 342 iter_io->needs_unmap = true; 343 343 memset(&iter_io->dmap, 0, sizeof(iter_io->dmap)); 344 344 if (mem->bus.caching == ttm_write_combined) 345 - dma_buf_map_set_vaddr_iomem(&iter_io->dmap, 346 - ioremap_wc(mem->bus.offset, 347 - bus_size)); 345 + iosys_map_set_vaddr_iomem(&iter_io->dmap, 346 + ioremap_wc(mem->bus.offset, 347 + bus_size)); 348 348 else if (mem->bus.caching == ttm_cached) 349 - dma_buf_map_set_vaddr(&iter_io->dmap, 350 - memremap(mem->bus.offset, bus_size, 351 - MEMREMAP_WB | 352 - MEMREMAP_WT | 353 - MEMREMAP_WC)); 349 + iosys_map_set_vaddr(&iter_io->dmap, 350 + memremap(mem->bus.offset, bus_size, 351 + MEMREMAP_WB | 352 + MEMREMAP_WT | 353 + MEMREMAP_WC)); 354 354 355 355 /* If uncached requested or if mapping cached or wc failed */ 356 - if (dma_buf_map_is_null(&iter_io->dmap)) 357 - dma_buf_map_set_vaddr_iomem(&iter_io->dmap, 358 - ioremap(mem->bus.offset, 359 - bus_size)); 356 + if (iosys_map_is_null(&iter_io->dmap)) 357 + iosys_map_set_vaddr_iomem(&iter_io->dmap, 358 + ioremap(mem->bus.offset, 359 + bus_size)); 360 360 361 - if (dma_buf_map_is_null(&iter_io->dmap)) { 361 + if (iosys_map_is_null(&iter_io->dmap)) { 362 362 ret = -ENOMEM; 363 363 goto out_io_free; 364 364 } ··· 387 387 struct ttm_device *bdev, 388 388 struct ttm_resource *mem) 389 389 { 390 - if (iter_io->needs_unmap && dma_buf_map_is_set(&iter_io->dmap)) { 390 + if (iter_io->needs_unmap && iosys_map_is_set(&iter_io->dmap)) { 391 391 if (iter_io->dmap.is_iomem) 392 392 iounmap(iter_io->dmap.vaddr_iomem) 393 393 else
+4 -4
drivers/gpu/drm/ttm/ttm_tt.c
··· 406 406 } 407 407 408 408 static void ttm_kmap_iter_tt_map_local(struct ttm_kmap_iter *iter, 409 - struct dma_buf_map *dmap, 409 + struct iosys_map *dmap, 410 410 pgoff_t i) 411 411 { 412 412 struct ttm_kmap_iter_tt *iter_tt = 413 413 container_of(iter, typeof(*iter_tt), base); 414 414 415 - dma_buf_map_set_vaddr(dmap, kmap_local_page_prot(iter_tt->tt->pages[i], 416 - iter_tt->prot)); 415 + iosys_map_set_vaddr(dmap, kmap_local_page_prot(iter_tt->tt->pages[i], 416 + iter_tt->prot)); 417 417 } 418 418 419 419 static void ttm_kmap_iter_tt_unmap_local(struct ttm_kmap_iter *iter, 420 - struct dma_buf_map *map) 420 + struct iosys_map *map) 421 421 { 422 422 kunmap_local(map->vaddr); 423 423 }
+2 -1
drivers/gpu/drm/udl/udl_modeset.c
··· 264 264 return 0; 265 265 } 266 266 267 - static int udl_handle_damage(struct drm_framebuffer *fb, const struct dma_buf_map *map, 267 + static int udl_handle_damage(struct drm_framebuffer *fb, 268 + const struct iosys_map *map, 268 269 int x, int y, int width, int height) 269 270 { 270 271 struct drm_device *dev = fb->dev;
+2 -2
drivers/gpu/drm/vboxvideo/vbox_mode.c
··· 10 10 * Hans de Goede <hdegoede@redhat.com> 11 11 */ 12 12 13 - #include <linux/dma-buf-map.h> 13 + #include <linux/iosys-map.h> 14 14 #include <linux/export.h> 15 15 16 16 #include <drm/drm_atomic.h> ··· 398 398 u32 height = new_state->crtc_h; 399 399 struct drm_shadow_plane_state *shadow_plane_state = 400 400 to_drm_shadow_plane_state(new_state); 401 - struct dma_buf_map map = shadow_plane_state->data[0]; 401 + struct iosys_map map = shadow_plane_state->data[0]; 402 402 u8 *src = map.vaddr; /* TODO: Use mapping abstraction properly */ 403 403 size_t data_size, mask_size; 404 404 u32 flags;
+2 -2
drivers/gpu/drm/vkms/vkms_composer.c
··· 157 157 void *vaddr; 158 158 void (*pixel_blend)(const u8 *p_src, u8 *p_dst); 159 159 160 - if (WARN_ON(dma_buf_map_is_null(&primary_composer->map[0]))) 160 + if (WARN_ON(iosys_map_is_null(&primary_composer->map[0]))) 161 161 return; 162 162 163 163 vaddr = plane_composer->map[0].vaddr; ··· 187 187 } 188 188 } 189 189 190 - if (WARN_ON(dma_buf_map_is_null(&primary_composer->map[0]))) 190 + if (WARN_ON(iosys_map_is_null(&primary_composer->map[0]))) 191 191 return -EINVAL; 192 192 193 193 vaddr = primary_composer->map[0].vaddr;
+3 -3
drivers/gpu/drm/vkms/vkms_drv.h
··· 21 21 #define YRES_MAX 8192 22 22 23 23 struct vkms_writeback_job { 24 - struct dma_buf_map map[DRM_FORMAT_MAX_PLANES]; 25 - struct dma_buf_map data[DRM_FORMAT_MAX_PLANES]; 24 + struct iosys_map map[DRM_FORMAT_MAX_PLANES]; 25 + struct iosys_map data[DRM_FORMAT_MAX_PLANES]; 26 26 }; 27 27 28 28 struct vkms_composer { 29 29 struct drm_framebuffer fb; 30 30 struct drm_rect src, dst; 31 - struct dma_buf_map map[4]; 31 + struct iosys_map map[4]; 32 32 unsigned int offset; 33 33 unsigned int pitch; 34 34 unsigned int cpp;
+1 -1
drivers/gpu/drm/vkms/vkms_plane.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0+ 2 2 3 - #include <linux/dma-buf-map.h> 3 + #include <linux/iosys-map.h> 4 4 5 5 #include <drm/drm_atomic.h> 6 6 #include <drm/drm_atomic_helper.h>
+1 -1
drivers/gpu/drm/vkms/vkms_writeback.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0+ 2 2 3 - #include <linux/dma-buf-map.h> 3 + #include <linux/iosys-map.h> 4 4 5 5 #include <drm/drm_atomic.h> 6 6 #include <drm/drm_fourcc.h>
+4 -3
drivers/gpu/drm/xen/xen_drm_front_gem.c
··· 280 280 return &xen_obj->base; 281 281 } 282 282 283 - int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj, struct dma_buf_map *map) 283 + int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj, 284 + struct iosys_map *map) 284 285 { 285 286 struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj); 286 287 void *vaddr; ··· 294 293 VM_MAP, PAGE_KERNEL); 295 294 if (!vaddr) 296 295 return -ENOMEM; 297 - dma_buf_map_set_vaddr(map, vaddr); 296 + iosys_map_set_vaddr(map, vaddr); 298 297 299 298 return 0; 300 299 } 301 300 302 301 void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj, 303 - struct dma_buf_map *map) 302 + struct iosys_map *map) 304 303 { 305 304 vunmap(map->vaddr); 306 305 }
+3 -3
drivers/gpu/drm/xen/xen_drm_front_gem.h
··· 12 12 #define __XEN_DRM_FRONT_GEM_H 13 13 14 14 struct dma_buf_attachment; 15 - struct dma_buf_map; 15 + struct iosys_map; 16 16 struct drm_device; 17 17 struct drm_gem_object; 18 18 struct sg_table; ··· 32 32 void xen_drm_front_gem_free_object_unlocked(struct drm_gem_object *gem_obj); 33 33 34 34 int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj, 35 - struct dma_buf_map *map); 35 + struct iosys_map *map); 36 36 37 37 void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj, 38 - struct dma_buf_map *map); 38 + struct iosys_map *map); 39 39 40 40 #endif /* __XEN_DRM_FRONT_GEM_H */
+4 -4
drivers/media/common/videobuf2/videobuf2-dma-contig.c
··· 99 99 return buf->vaddr; 100 100 101 101 if (buf->db_attach) { 102 - struct dma_buf_map map; 102 + struct iosys_map map; 103 103 104 104 if (!dma_buf_vmap(buf->db_attach->dmabuf, &map)) 105 105 buf->vaddr = map.vaddr; ··· 446 446 return 0; 447 447 } 448 448 449 - static int vb2_dc_dmabuf_ops_vmap(struct dma_buf *dbuf, struct dma_buf_map *map) 449 + static int vb2_dc_dmabuf_ops_vmap(struct dma_buf *dbuf, struct iosys_map *map) 450 450 { 451 451 struct vb2_dc_buf *buf; 452 452 void *vaddr; ··· 456 456 if (!vaddr) 457 457 return -EINVAL; 458 458 459 - dma_buf_map_set_vaddr(map, vaddr); 459 + iosys_map_set_vaddr(map, vaddr); 460 460 461 461 return 0; 462 462 } ··· 737 737 { 738 738 struct vb2_dc_buf *buf = mem_priv; 739 739 struct sg_table *sgt = buf->dma_sgt; 740 - struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(buf->vaddr); 740 + struct iosys_map map = IOSYS_MAP_INIT_VADDR(buf->vaddr); 741 741 742 742 if (WARN_ON(!buf->db_attach)) { 743 743 pr_err("trying to unpin a not attached buffer\n");
+5 -4
drivers/media/common/videobuf2/videobuf2-dma-sg.c
··· 303 303 static void *vb2_dma_sg_vaddr(struct vb2_buffer *vb, void *buf_priv) 304 304 { 305 305 struct vb2_dma_sg_buf *buf = buf_priv; 306 - struct dma_buf_map map; 306 + struct iosys_map map; 307 307 int ret; 308 308 309 309 BUG_ON(!buf); ··· 492 492 return 0; 493 493 } 494 494 495 - static int vb2_dma_sg_dmabuf_ops_vmap(struct dma_buf *dbuf, struct dma_buf_map *map) 495 + static int vb2_dma_sg_dmabuf_ops_vmap(struct dma_buf *dbuf, 496 + struct iosys_map *map) 496 497 { 497 498 struct vb2_dma_sg_buf *buf = dbuf->priv; 498 499 499 - dma_buf_map_set_vaddr(map, buf->vaddr); 500 + iosys_map_set_vaddr(map, buf->vaddr); 500 501 501 502 return 0; 502 503 } ··· 582 581 { 583 582 struct vb2_dma_sg_buf *buf = mem_priv; 584 583 struct sg_table *sgt = buf->dma_sgt; 585 - struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(buf->vaddr); 584 + struct iosys_map map = IOSYS_MAP_INIT_VADDR(buf->vaddr); 586 585 587 586 if (WARN_ON(!buf->db_attach)) { 588 587 pr_err("trying to unpin a not attached buffer\n");
+6 -5
drivers/media/common/videobuf2/videobuf2-vmalloc.c
··· 312 312 vb2_vmalloc_put(dbuf->priv); 313 313 } 314 314 315 - static int vb2_vmalloc_dmabuf_ops_vmap(struct dma_buf *dbuf, struct dma_buf_map *map) 315 + static int vb2_vmalloc_dmabuf_ops_vmap(struct dma_buf *dbuf, 316 + struct iosys_map *map) 316 317 { 317 318 struct vb2_vmalloc_buf *buf = dbuf->priv; 318 319 319 - dma_buf_map_set_vaddr(map, buf->vaddr); 320 + iosys_map_set_vaddr(map, buf->vaddr); 320 321 321 322 return 0; 322 323 } ··· 373 372 static int vb2_vmalloc_map_dmabuf(void *mem_priv) 374 373 { 375 374 struct vb2_vmalloc_buf *buf = mem_priv; 376 - struct dma_buf_map map; 375 + struct iosys_map map; 377 376 int ret; 378 377 379 378 ret = dma_buf_vmap(buf->dbuf, &map); ··· 387 386 static void vb2_vmalloc_unmap_dmabuf(void *mem_priv) 388 387 { 389 388 struct vb2_vmalloc_buf *buf = mem_priv; 390 - struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(buf->vaddr); 389 + struct iosys_map map = IOSYS_MAP_INIT_VADDR(buf->vaddr); 391 390 392 391 dma_buf_vunmap(buf->dbuf, &map); 393 392 buf->vaddr = NULL; ··· 396 395 static void vb2_vmalloc_detach_dmabuf(void *mem_priv) 397 396 { 398 397 struct vb2_vmalloc_buf *buf = mem_priv; 399 - struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(buf->vaddr); 398 + struct iosys_map map = IOSYS_MAP_INIT_VADDR(buf->vaddr); 400 399 401 400 if (buf->vaddr) 402 401 dma_buf_vunmap(buf->dbuf, &map);
+2 -2
drivers/misc/fastrpc.c
··· 587 587 kfree(a); 588 588 } 589 589 590 - static int fastrpc_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map) 590 + static int fastrpc_vmap(struct dma_buf *dmabuf, struct iosys_map *map) 591 591 { 592 592 struct fastrpc_buf *buf = dmabuf->priv; 593 593 594 - dma_buf_map_set_vaddr(map, buf->virt); 594 + iosys_map_set_vaddr(map, buf->virt); 595 595 596 596 return 0; 597 597 }
+3 -3
include/drm/drm_cache.h
··· 35 35 36 36 #include <linux/scatterlist.h> 37 37 38 - struct dma_buf_map; 38 + struct iosys_map; 39 39 40 40 void drm_clflush_pages(struct page *pages[], unsigned long num_pages); 41 41 void drm_clflush_sg(struct sg_table *st); ··· 74 74 75 75 void drm_memcpy_init_early(void); 76 76 77 - void drm_memcpy_from_wc(struct dma_buf_map *dst, 78 - const struct dma_buf_map *src, 77 + void drm_memcpy_from_wc(struct iosys_map *dst, 78 + const struct iosys_map *src, 79 79 unsigned long len); 80 80 #endif
+4 -3
include/drm/drm_client.h
··· 3 3 #ifndef _DRM_CLIENT_H_ 4 4 #define _DRM_CLIENT_H_ 5 5 6 - #include <linux/dma-buf-map.h> 6 + #include <linux/iosys-map.h> 7 7 #include <linux/lockdep.h> 8 8 #include <linux/mutex.h> 9 9 #include <linux/types.h> ··· 144 144 /** 145 145 * @map: Virtual address for the buffer 146 146 */ 147 - struct dma_buf_map map; 147 + struct iosys_map map; 148 148 149 149 /** 150 150 * @fb: DRM framebuffer ··· 156 156 drm_client_framebuffer_create(struct drm_client_dev *client, u32 width, u32 height, u32 format); 157 157 void drm_client_framebuffer_delete(struct drm_client_buffer *buffer); 158 158 int drm_client_framebuffer_flush(struct drm_client_buffer *buffer, struct drm_rect *rect); 159 - int drm_client_buffer_vmap(struct drm_client_buffer *buffer, struct dma_buf_map *map); 159 + int drm_client_buffer_vmap(struct drm_client_buffer *buffer, 160 + struct iosys_map *map); 160 161 void drm_client_buffer_vunmap(struct drm_client_buffer *buffer); 161 162 162 163 int drm_client_modeset_create(struct drm_client_dev *client);
+3 -3
include/drm/drm_gem.h
··· 39 39 40 40 #include <drm/drm_vma_manager.h> 41 41 42 - struct dma_buf_map; 42 + struct iosys_map; 43 43 struct drm_gem_object; 44 44 45 45 /** ··· 139 139 * 140 140 * This callback is optional. 141 141 */ 142 - int (*vmap)(struct drm_gem_object *obj, struct dma_buf_map *map); 142 + int (*vmap)(struct drm_gem_object *obj, struct iosys_map *map); 143 143 144 144 /** 145 145 * @vunmap: ··· 149 149 * 150 150 * This callback is optional. 151 151 */ 152 - void (*vunmap)(struct drm_gem_object *obj, struct dma_buf_map *map); 152 + void (*vunmap)(struct drm_gem_object *obj, struct iosys_map *map); 153 153 154 154 /** 155 155 * @mmap:
+3 -3
include/drm/drm_gem_atomic_helper.h
··· 3 3 #ifndef __DRM_GEM_ATOMIC_HELPER_H__ 4 4 #define __DRM_GEM_ATOMIC_HELPER_H__ 5 5 6 - #include <linux/dma-buf-map.h> 6 + #include <linux/iosys-map.h> 7 7 8 8 #include <drm/drm_fourcc.h> 9 9 #include <drm/drm_plane.h> ··· 59 59 * The memory mappings stored in map should be established in the plane's 60 60 * prepare_fb callback and removed in the cleanup_fb callback. 61 61 */ 62 - struct dma_buf_map map[DRM_FORMAT_MAX_PLANES]; 62 + struct iosys_map map[DRM_FORMAT_MAX_PLANES]; 63 63 64 64 /** 65 65 * @data: Address of each framebuffer BO's data ··· 67 67 * The address of the data stored in each mapping. This is different 68 68 * for framebuffers with non-zero offset fields. 69 69 */ 70 - struct dma_buf_map data[DRM_FORMAT_MAX_PLANES]; 70 + struct iosys_map data[DRM_FORMAT_MAX_PLANES]; 71 71 }; 72 72 73 73 /**
+4 -2
include/drm/drm_gem_cma_helper.h
··· 38 38 void drm_gem_cma_print_info(const struct drm_gem_cma_object *cma_obj, 39 39 struct drm_printer *p, unsigned int indent); 40 40 struct sg_table *drm_gem_cma_get_sg_table(struct drm_gem_cma_object *cma_obj); 41 - int drm_gem_cma_vmap(struct drm_gem_cma_object *cma_obj, struct dma_buf_map *map); 41 + int drm_gem_cma_vmap(struct drm_gem_cma_object *cma_obj, 42 + struct iosys_map *map); 42 43 int drm_gem_cma_mmap(struct drm_gem_cma_object *cma_obj, struct vm_area_struct *vma); 43 44 44 45 extern const struct vm_operations_struct drm_gem_cma_vm_ops; ··· 107 106 * Returns: 108 107 * 0 on success or a negative error code on failure. 109 108 */ 110 - static inline int drm_gem_cma_object_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) 109 + static inline int drm_gem_cma_object_vmap(struct drm_gem_object *obj, 110 + struct iosys_map *map) 111 111 { 112 112 struct drm_gem_cma_object *cma_obj = to_drm_gem_cma_obj(obj); 113 113
+4 -4
include/drm/drm_gem_framebuffer_helper.h
··· 2 2 #define __DRM_GEM_FB_HELPER_H__ 3 3 4 4 #include <linux/dma-buf.h> 5 - #include <linux/dma-buf-map.h> 5 + #include <linux/iosys-map.h> 6 6 7 7 #include <drm/drm_fourcc.h> 8 8 ··· 40 40 const struct drm_mode_fb_cmd2 *mode_cmd); 41 41 42 42 int drm_gem_fb_vmap(struct drm_framebuffer *fb, 43 - struct dma_buf_map map[static DRM_FORMAT_MAX_PLANES], 44 - struct dma_buf_map data[DRM_FORMAT_MAX_PLANES]); 43 + struct iosys_map map[static DRM_FORMAT_MAX_PLANES], 44 + struct iosys_map data[DRM_FORMAT_MAX_PLANES]); 45 45 void drm_gem_fb_vunmap(struct drm_framebuffer *fb, 46 - struct dma_buf_map map[static DRM_FORMAT_MAX_PLANES]); 46 + struct iosys_map map[static DRM_FORMAT_MAX_PLANES]); 47 47 int drm_gem_fb_begin_cpu_access(struct drm_framebuffer *fb, enum dma_data_direction dir); 48 48 void drm_gem_fb_end_cpu_access(struct drm_framebuffer *fb, enum dma_data_direction dir); 49 49
+8 -4
include/drm/drm_gem_shmem_helper.h
··· 113 113 void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem); 114 114 int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem); 115 115 void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem); 116 - int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem, struct dma_buf_map *map); 117 - void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem, struct dma_buf_map *map); 116 + int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem, 117 + struct iosys_map *map); 118 + void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem, 119 + struct iosys_map *map); 118 120 int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct *vma); 119 121 120 122 int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv); ··· 228 226 * Returns: 229 227 * 0 on success or a negative error code on failure. 230 228 */ 231 - static inline int drm_gem_shmem_object_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) 229 + static inline int drm_gem_shmem_object_vmap(struct drm_gem_object *obj, 230 + struct iosys_map *map) 232 231 { 233 232 struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj); 234 233 ··· 244 241 * This function wraps drm_gem_shmem_vunmap(). Drivers that employ the shmem helpers should 245 242 * use it as their &drm_gem_object_funcs.vunmap handler. 246 243 */ 247 - static inline void drm_gem_shmem_object_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map) 244 + static inline void drm_gem_shmem_object_vunmap(struct drm_gem_object *obj, 245 + struct iosys_map *map) 248 246 { 249 247 struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj); 250 248
+3 -3
include/drm/drm_gem_ttm_helper.h
··· 10 10 #include <drm/ttm/ttm_bo_api.h> 11 11 #include <drm/ttm/ttm_bo_driver.h> 12 12 13 - struct dma_buf_map; 13 + struct iosys_map; 14 14 15 15 #define drm_gem_ttm_of_gem(gem_obj) \ 16 16 container_of(gem_obj, struct ttm_buffer_object, base) ··· 18 18 void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent, 19 19 const struct drm_gem_object *gem); 20 20 int drm_gem_ttm_vmap(struct drm_gem_object *gem, 21 - struct dma_buf_map *map); 21 + struct iosys_map *map); 22 22 void drm_gem_ttm_vunmap(struct drm_gem_object *gem, 23 - struct dma_buf_map *map); 23 + struct iosys_map *map); 24 24 int drm_gem_ttm_mmap(struct drm_gem_object *gem, 25 25 struct vm_area_struct *vma); 26 26
+5 -4
include/drm/drm_gem_vram_helper.h
··· 12 12 #include <drm/ttm/ttm_bo_driver.h> 13 13 14 14 #include <linux/container_of.h> 15 - #include <linux/dma-buf-map.h> 15 + #include <linux/iosys-map.h> 16 16 17 17 struct drm_mode_create_dumb; 18 18 struct drm_plane; ··· 51 51 */ 52 52 struct drm_gem_vram_object { 53 53 struct ttm_buffer_object bo; 54 - struct dma_buf_map map; 54 + struct iosys_map map; 55 55 56 56 /** 57 57 * @vmap_use_count: ··· 97 97 s64 drm_gem_vram_offset(struct drm_gem_vram_object *gbo); 98 98 int drm_gem_vram_pin(struct drm_gem_vram_object *gbo, unsigned long pl_flag); 99 99 int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo); 100 - int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map); 101 - void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map); 100 + int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct iosys_map *map); 101 + void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, 102 + struct iosys_map *map); 102 103 103 104 int drm_gem_vram_fill_create_dumb(struct drm_file *file, 104 105 struct drm_device *dev,
+3 -3
include/drm/drm_prime.h
··· 54 54 struct dma_buf_export_info; 55 55 struct dma_buf; 56 56 struct dma_buf_attachment; 57 - struct dma_buf_map; 57 + struct iosys_map; 58 58 59 59 enum dma_data_direction; 60 60 ··· 83 83 void drm_gem_unmap_dma_buf(struct dma_buf_attachment *attach, 84 84 struct sg_table *sgt, 85 85 enum dma_data_direction dir); 86 - int drm_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct dma_buf_map *map); 87 - void drm_gem_dmabuf_vunmap(struct dma_buf *dma_buf, struct dma_buf_map *map); 86 + int drm_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct iosys_map *map); 87 + void drm_gem_dmabuf_vunmap(struct dma_buf *dma_buf, struct iosys_map *map); 88 88 89 89 int drm_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma); 90 90 int drm_gem_dmabuf_mmap(struct dma_buf *dma_buf, struct vm_area_struct *vma);
+5 -5
include/drm/ttm/ttm_bo_api.h
··· 47 47 48 48 struct ttm_device; 49 49 50 - struct dma_buf_map; 50 + struct iosys_map; 51 51 52 52 struct drm_mm_node; 53 53 ··· 481 481 * ttm_bo_vmap 482 482 * 483 483 * @bo: The buffer object. 484 - * @map: pointer to a struct dma_buf_map representing the map. 484 + * @map: pointer to a struct iosys_map representing the map. 485 485 * 486 486 * Sets up a kernel virtual mapping, using ioremap or vmap to the 487 487 * data in the buffer object. The parameter @map returns the virtual 488 - * address as struct dma_buf_map. Unmap the buffer with ttm_bo_vunmap(). 488 + * address as struct iosys_map. Unmap the buffer with ttm_bo_vunmap(). 489 489 * 490 490 * Returns 491 491 * -ENOMEM: Out of memory. 492 492 * -EINVAL: Invalid range. 493 493 */ 494 - int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map); 494 + int ttm_bo_vmap(struct ttm_buffer_object *bo, struct iosys_map *map); 495 495 496 496 /** 497 497 * ttm_bo_vunmap ··· 501 501 * 502 502 * Unmaps a kernel map set up by ttm_bo_vmap(). 503 503 */ 504 - void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map); 504 + void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct iosys_map *map); 505 505 506 506 /** 507 507 * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object.
+5 -5
include/drm/ttm/ttm_kmap_iter.h
··· 8 8 #include <linux/types.h> 9 9 10 10 struct ttm_kmap_iter; 11 - struct dma_buf_map; 11 + struct iosys_map; 12 12 13 13 /** 14 14 * struct ttm_kmap_iter_ops - Ops structure for a struct ··· 24 24 * kmap_local semantics. 25 25 * @res_iter: Pointer to the struct ttm_kmap_iter representing 26 26 * the resource. 27 - * @dmap: The struct dma_buf_map holding the virtual address after 27 + * @dmap: The struct iosys_map holding the virtual address after 28 28 * the operation. 29 29 * @i: The location within the resource to map. PAGE_SIZE granularity. 30 30 */ 31 31 void (*map_local)(struct ttm_kmap_iter *res_iter, 32 - struct dma_buf_map *dmap, pgoff_t i); 32 + struct iosys_map *dmap, pgoff_t i); 33 33 /** 34 34 * unmap_local() - Unmap a PAGE_SIZE part of the resource previously 35 35 * mapped using kmap_local. 36 36 * @res_iter: Pointer to the struct ttm_kmap_iter representing 37 37 * the resource. 38 - * @dmap: The struct dma_buf_map holding the virtual address after 38 + * @dmap: The struct iosys_map holding the virtual address after 39 39 * the operation. 40 40 */ 41 41 void (*unmap_local)(struct ttm_kmap_iter *res_iter, 42 - struct dma_buf_map *dmap); 42 + struct iosys_map *dmap); 43 43 bool maps_tt; 44 44 }; 45 45
+3 -3
include/drm/ttm/ttm_resource.h
··· 27 27 28 28 #include <linux/types.h> 29 29 #include <linux/mutex.h> 30 - #include <linux/dma-buf-map.h> 30 + #include <linux/iosys-map.h> 31 31 #include <linux/dma-fence.h> 32 32 #include <drm/drm_print.h> 33 33 #include <drm/ttm/ttm_caching.h> ··· 41 41 struct ttm_place; 42 42 struct ttm_buffer_object; 43 43 struct ttm_placement; 44 - struct dma_buf_map; 44 + struct iosys_map; 45 45 struct io_mapping; 46 46 struct sg_table; 47 47 struct scatterlist; ··· 207 207 */ 208 208 struct ttm_kmap_iter_linear_io { 209 209 struct ttm_kmap_iter base; 210 - struct dma_buf_map dmap; 210 + struct iosys_map dmap; 211 211 bool needs_unmap; 212 212 }; 213 213
-266
include/linux/dma-buf-map.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0-only */ 2 - /* 3 - * Pointer to dma-buf-mapped memory, plus helpers. 4 - */ 5 - 6 - #ifndef __DMA_BUF_MAP_H__ 7 - #define __DMA_BUF_MAP_H__ 8 - 9 - #include <linux/io.h> 10 - #include <linux/string.h> 11 - 12 - /** 13 - * DOC: overview 14 - * 15 - * Calling dma-buf's vmap operation returns a pointer to the buffer's memory. 16 - * Depending on the location of the buffer, users may have to access it with 17 - * I/O operations or memory load/store operations. For example, copying to 18 - * system memory could be done with memcpy(), copying to I/O memory would be 19 - * done with memcpy_toio(). 20 - * 21 - * .. code-block:: c 22 - * 23 - * void *vaddr = ...; // pointer to system memory 24 - * memcpy(vaddr, src, len); 25 - * 26 - * void *vaddr_iomem = ...; // pointer to I/O memory 27 - * memcpy_toio(vaddr, _iomem, src, len); 28 - * 29 - * When using dma-buf's vmap operation, the returned pointer is encoded as 30 - * :c:type:`struct dma_buf_map <dma_buf_map>`. 31 - * :c:type:`struct dma_buf_map <dma_buf_map>` stores the buffer's address in 32 - * system or I/O memory and a flag that signals the required method of 33 - * accessing the buffer. Use the returned instance and the helper functions 34 - * to access the buffer's memory in the correct way. 35 - * 36 - * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers are 37 - * actually independent from the dma-buf infrastructure. When sharing buffers 38 - * among devices, drivers have to know the location of the memory to access 39 - * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>` 40 - * solves this problem for dma-buf and its users. If other drivers or 41 - * sub-systems require similar functionality, the type could be generalized 42 - * and moved to a more prominent header file. 43 - * 44 - * Open-coding access to :c:type:`struct dma_buf_map <dma_buf_map>` is 45 - * considered bad style. 
Rather then accessing its fields directly, use one 46 - * of the provided helper functions, or implement your own. For example, 47 - * instances of :c:type:`struct dma_buf_map <dma_buf_map>` can be initialized 48 - * statically with DMA_BUF_MAP_INIT_VADDR(), or at runtime with 49 - * dma_buf_map_set_vaddr(). These helpers will set an address in system memory. 50 - * 51 - * .. code-block:: c 52 - * 53 - * struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(0xdeadbeaf); 54 - * 55 - * dma_buf_map_set_vaddr(&map. 0xdeadbeaf); 56 - * 57 - * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem(). 58 - * 59 - * .. code-block:: c 60 - * 61 - * dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf); 62 - * 63 - * Instances of struct dma_buf_map do not have to be cleaned up, but 64 - * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings 65 - * always refer to system memory. 66 - * 67 - * .. code-block:: c 68 - * 69 - * dma_buf_map_clear(&map); 70 - * 71 - * Test if a mapping is valid with either dma_buf_map_is_set() or 72 - * dma_buf_map_is_null(). 73 - * 74 - * .. code-block:: c 75 - * 76 - * if (dma_buf_map_is_set(&map) != dma_buf_map_is_null(&map)) 77 - * // always true 78 - * 79 - * Instances of :c:type:`struct dma_buf_map <dma_buf_map>` can be compared 80 - * for equality with dma_buf_map_is_equal(). Mappings the point to different 81 - * memory spaces, system or I/O, are never equal. That's even true if both 82 - * spaces are located in the same address space, both mappings contain the 83 - * same address value, or both mappings refer to NULL. 84 - * 85 - * .. code-block:: c 86 - * 87 - * struct dma_buf_map sys_map; // refers to system memory 88 - * struct dma_buf_map io_map; // refers to I/O memory 89 - * 90 - * if (dma_buf_map_is_equal(&sys_map, &io_map)) 91 - * // always false 92 - * 93 - * A set up instance of struct dma_buf_map can be used to access or manipulate 94 - * the buffer memory. 
Depending on the location of the memory, the provided 95 - * helpers will pick the correct operations. Data can be copied into the memory 96 - * with dma_buf_map_memcpy_to(). The address can be manipulated with 97 - * dma_buf_map_incr(). 98 - * 99 - * .. code-block:: c 100 - * 101 - * const void *src = ...; // source buffer 102 - * size_t len = ...; // length of src 103 - * 104 - * dma_buf_map_memcpy_to(&map, src, len); 105 - * dma_buf_map_incr(&map, len); // go to first byte after the memcpy 106 - */ 107 - 108 - /** 109 - * struct dma_buf_map - Pointer to vmap'ed dma-buf memory. 110 - * @vaddr_iomem: The buffer's address if in I/O memory 111 - * @vaddr: The buffer's address if in system memory 112 - * @is_iomem: True if the dma-buf memory is located in I/O 113 - * memory, or false otherwise. 114 - */ 115 - struct dma_buf_map { 116 - union { 117 - void __iomem *vaddr_iomem; 118 - void *vaddr; 119 - }; 120 - bool is_iomem; 121 - }; 122 - 123 - /** 124 - * DMA_BUF_MAP_INIT_VADDR - Initializes struct dma_buf_map to an address in system memory 125 - * @vaddr_: A system-memory address 126 - */ 127 - #define DMA_BUF_MAP_INIT_VADDR(vaddr_) \ 128 - { \ 129 - .vaddr = (vaddr_), \ 130 - .is_iomem = false, \ 131 - } 132 - 133 - /** 134 - * dma_buf_map_set_vaddr - Sets a dma-buf mapping structure to an address in system memory 135 - * @map: The dma-buf mapping structure 136 - * @vaddr: A system-memory address 137 - * 138 - * Sets the address and clears the I/O-memory flag. 139 - */ 140 - static inline void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr) 141 - { 142 - map->vaddr = vaddr; 143 - map->is_iomem = false; 144 - } 145 - 146 - /** 147 - * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to an address in I/O memory 148 - * @map: The dma-buf mapping structure 149 - * @vaddr_iomem: An I/O-memory address 150 - * 151 - * Sets the address and the I/O-memory flag. 
152 - */ 153 - static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map, 154 - void __iomem *vaddr_iomem) 155 - { 156 - map->vaddr_iomem = vaddr_iomem; 157 - map->is_iomem = true; 158 - } 159 - 160 - /** 161 - * dma_buf_map_is_equal - Compares two dma-buf mapping structures for equality 162 - * @lhs: The dma-buf mapping structure 163 - * @rhs: A dma-buf mapping structure to compare with 164 - * 165 - * Two dma-buf mapping structures are equal if they both refer to the same type of memory 166 - * and to the same address within that memory. 167 - * 168 - * Returns: 169 - * True is both structures are equal, or false otherwise. 170 - */ 171 - static inline bool dma_buf_map_is_equal(const struct dma_buf_map *lhs, 172 - const struct dma_buf_map *rhs) 173 - { 174 - if (lhs->is_iomem != rhs->is_iomem) 175 - return false; 176 - else if (lhs->is_iomem) 177 - return lhs->vaddr_iomem == rhs->vaddr_iomem; 178 - else 179 - return lhs->vaddr == rhs->vaddr; 180 - } 181 - 182 - /** 183 - * dma_buf_map_is_null - Tests for a dma-buf mapping to be NULL 184 - * @map: The dma-buf mapping structure 185 - * 186 - * Depending on the state of struct dma_buf_map.is_iomem, tests if the 187 - * mapping is NULL. 188 - * 189 - * Returns: 190 - * True if the mapping is NULL, or false otherwise. 191 - */ 192 - static inline bool dma_buf_map_is_null(const struct dma_buf_map *map) 193 - { 194 - if (map->is_iomem) 195 - return !map->vaddr_iomem; 196 - return !map->vaddr; 197 - } 198 - 199 - /** 200 - * dma_buf_map_is_set - Tests is the dma-buf mapping has been set 201 - * @map: The dma-buf mapping structure 202 - * 203 - * Depending on the state of struct dma_buf_map.is_iomem, tests if the 204 - * mapping has been set. 205 - * 206 - * Returns: 207 - * True if the mapping is been set, or false otherwise. 
208 - */ 209 - static inline bool dma_buf_map_is_set(const struct dma_buf_map *map) 210 - { 211 - return !dma_buf_map_is_null(map); 212 - } 213 - 214 - /** 215 - * dma_buf_map_clear - Clears a dma-buf mapping structure 216 - * @map: The dma-buf mapping structure 217 - * 218 - * Clears all fields to zero; including struct dma_buf_map.is_iomem. So 219 - * mapping structures that were set to point to I/O memory are reset for 220 - * system memory. Pointers are cleared to NULL. This is the default. 221 - */ 222 - static inline void dma_buf_map_clear(struct dma_buf_map *map) 223 - { 224 - if (map->is_iomem) { 225 - map->vaddr_iomem = NULL; 226 - map->is_iomem = false; 227 - } else { 228 - map->vaddr = NULL; 229 - } 230 - } 231 - 232 - /** 233 - * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping 234 - * @dst: The dma-buf mapping structure 235 - * @src: The source buffer 236 - * @len: The number of byte in src 237 - * 238 - * Copies data into a dma-buf mapping. The source buffer is in system 239 - * memory. Depending on the buffer's location, the helper picks the correct 240 - * method of accessing the memory. 241 - */ 242 - static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const void *src, size_t len) 243 - { 244 - if (dst->is_iomem) 245 - memcpy_toio(dst->vaddr_iomem, src, len); 246 - else 247 - memcpy(dst->vaddr, src, len); 248 - } 249 - 250 - /** 251 - * dma_buf_map_incr - Increments the address stored in a dma-buf mapping 252 - * @map: The dma-buf mapping structure 253 - * @incr: The number of bytes to increment 254 - * 255 - * Increments the address stored in a dma-buf mapping. Depending on the 256 - * buffer's location, the correct value will be updated. 257 - */ 258 - static inline void dma_buf_map_incr(struct dma_buf_map *map, size_t incr) 259 - { 260 - if (map->is_iomem) 261 - map->vaddr_iomem += incr; 262 - else 263 - map->vaddr += incr; 264 - } 265 - 266 - #endif /* __DMA_BUF_MAP_H__ */
+6 -6
include/linux/dma-buf.h
··· 13 13 #ifndef __DMA_BUF_H__ 14 14 #define __DMA_BUF_H__ 15 15 16 - #include <linux/dma-buf-map.h> 16 + #include <linux/iosys-map.h> 17 17 #include <linux/file.h> 18 18 #include <linux/err.h> 19 19 #include <linux/scatterlist.h> ··· 283 283 */ 284 284 int (*mmap)(struct dma_buf *, struct vm_area_struct *vma); 285 285 286 - int (*vmap)(struct dma_buf *dmabuf, struct dma_buf_map *map); 287 - void (*vunmap)(struct dma_buf *dmabuf, struct dma_buf_map *map); 286 + int (*vmap)(struct dma_buf *dmabuf, struct iosys_map *map); 287 + void (*vunmap)(struct dma_buf *dmabuf, struct iosys_map *map); 288 288 }; 289 289 290 290 /** ··· 347 347 * @vmap_ptr: 348 348 * The current vmap ptr if @vmapping_counter > 0. Protected by @lock. 349 349 */ 350 - struct dma_buf_map vmap_ptr; 350 + struct iosys_map vmap_ptr; 351 351 352 352 /** 353 353 * @exp_name: ··· 628 628 629 629 int dma_buf_mmap(struct dma_buf *, struct vm_area_struct *, 630 630 unsigned long); 631 - int dma_buf_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map); 632 - void dma_buf_vunmap(struct dma_buf *dmabuf, struct dma_buf_map *map); 631 + int dma_buf_vmap(struct dma_buf *dmabuf, struct iosys_map *map); 632 + void dma_buf_vunmap(struct dma_buf *dmabuf, struct iosys_map *map); 633 633 #endif /* __DMA_BUF_H__ */
+257
include/linux/iosys-map.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0-only */ 2 + /* 3 + * Pointer abstraction for IO/system memory 4 + */ 5 + 6 + #ifndef __IOSYS_MAP_H__ 7 + #define __IOSYS_MAP_H__ 8 + 9 + #include <linux/io.h> 10 + #include <linux/string.h> 11 + 12 + /** 13 + * DOC: overview 14 + * 15 + * When accessing a memory region, depending on its location, users may have to 16 + * access it with I/O operations or memory load/store operations. For example, 17 + * copying to system memory could be done with memcpy(), copying to I/O memory 18 + * would be done with memcpy_toio(). 19 + * 20 + * .. code-block:: c 21 + * 22 + * void *vaddr = ...; // pointer to system memory 23 + * memcpy(vaddr, src, len); 24 + * 25 + * void __iomem *vaddr_iomem = ...; // pointer to I/O memory 26 + * memcpy_toio(vaddr_iomem, src, len); 27 + * 28 + * The user of such a pointer may not have information about the mapping of that 29 + * region or may want to have a single code path to handle operations on that 30 + * buffer, regardless of whether it's located in system or IO memory. The type 31 + * :c:type:`struct iosys_map <iosys_map>` and its helpers abstract that so the 32 + * buffer can be passed around to other drivers or have separate duties inside 33 + * the same driver for allocation, read and write operations. 34 + * 35 + * Open-coding access to :c:type:`struct iosys_map <iosys_map>` is considered 36 + * bad style. Rather than accessing its fields directly, use one of the provided 37 + * helper functions, or implement your own. For example, instances of 38 + * :c:type:`struct iosys_map <iosys_map>` can be initialized statically with 39 + * IOSYS_MAP_INIT_VADDR(), or at runtime with iosys_map_set_vaddr(). These 40 + * helpers will set an address in system memory. 41 + * 42 + * .. code-block:: c 43 + * 44 + * struct iosys_map map = IOSYS_MAP_INIT_VADDR(0xdeadbeaf); 45 + * 46 + * iosys_map_set_vaddr(&map, 0xdeadbeaf); 47 + * 48 + * To set an address in I/O memory, use iosys_map_set_vaddr_iomem(). 49 + * 50 + * .. 
code-block:: c 51 + * 52 + * iosys_map_set_vaddr_iomem(&map, 0xdeadbeaf); 53 + * 54 + * Instances of struct iosys_map do not have to be cleaned up, but 55 + * can be cleared to NULL with iosys_map_clear(). Cleared mappings 56 + * always refer to system memory. 57 + * 58 + * .. code-block:: c 59 + * 60 + * iosys_map_clear(&map); 61 + * 62 + * Test if a mapping is valid with either iosys_map_is_set() or 63 + * iosys_map_is_null(). 64 + * 65 + * .. code-block:: c 66 + * 67 + * if (iosys_map_is_set(&map) != iosys_map_is_null(&map)) 68 + * // always true 69 + * 70 + * Instances of :c:type:`struct iosys_map <iosys_map>` can be compared for 71 + * equality with iosys_map_is_equal(). Mappings that point to different memory 72 + * spaces, system or I/O, are never equal. That's even true if both spaces are 73 + * located in the same address space, both mappings contain the same address 74 + * value, or both mappings refer to NULL. 75 + * 76 + * .. code-block:: c 77 + * 78 + * struct iosys_map sys_map; // refers to system memory 79 + * struct iosys_map io_map; // refers to I/O memory 80 + * 81 + * if (iosys_map_is_equal(&sys_map, &io_map)) 82 + * // always false 83 + * 84 + * A set-up instance of struct iosys_map can be used to access or manipulate the 85 + * buffer memory. Depending on the location of the memory, the provided helpers 86 + * will pick the correct operations. Data can be copied into the memory with 87 + * iosys_map_memcpy_to(). The address can be manipulated with iosys_map_incr(). 88 + * 89 + * .. 
code-block:: c 90 + * 91 + * const void *src = ...; // source buffer 92 + * size_t len = ...; // length of src 93 + * 94 + * iosys_map_memcpy_to(&map, src, len); 95 + * iosys_map_incr(&map, len); // go to first byte after the memcpy 96 + */ 97 + 98 + /** 99 + * struct iosys_map - Pointer to IO/system memory 100 + * @vaddr_iomem: The buffer's address if in I/O memory 101 + * @vaddr: The buffer's address if in system memory 102 + * @is_iomem: True if the buffer is located in I/O memory, or false 103 + * otherwise. 104 + */ 105 + struct iosys_map { 106 + union { 107 + void __iomem *vaddr_iomem; 108 + void *vaddr; 109 + }; 110 + bool is_iomem; 111 + }; 112 + 113 + /** 114 + * IOSYS_MAP_INIT_VADDR - Initializes struct iosys_map to an address in system memory 115 + * @vaddr_: A system-memory address 116 + */ 117 + #define IOSYS_MAP_INIT_VADDR(vaddr_) \ 118 + { \ 119 + .vaddr = (vaddr_), \ 120 + .is_iomem = false, \ 121 + } 122 + 123 + /** 124 + * iosys_map_set_vaddr - Sets an iosys mapping structure to an address in system memory 125 + * @map: The iosys_map structure 126 + * @vaddr: A system-memory address 127 + * 128 + * Sets the address and clears the I/O-memory flag. 129 + */ 130 + static inline void iosys_map_set_vaddr(struct iosys_map *map, void *vaddr) 131 + { 132 + map->vaddr = vaddr; 133 + map->is_iomem = false; 134 + } 135 + 136 + /** 137 + * iosys_map_set_vaddr_iomem - Sets an iosys mapping structure to an address in I/O memory 138 + * @map: The iosys_map structure 139 + * @vaddr_iomem: An I/O-memory address 140 + * 141 + * Sets the address and the I/O-memory flag. 
142 + */ 143 + static inline void iosys_map_set_vaddr_iomem(struct iosys_map *map, 144 + void __iomem *vaddr_iomem) 145 + { 146 + map->vaddr_iomem = vaddr_iomem; 147 + map->is_iomem = true; 148 + } 149 + 150 + /** 151 + * iosys_map_is_equal - Compares two iosys mapping structures for equality 152 + * @lhs: The iosys_map structure 153 + * @rhs: An iosys_map structure to compare with 154 + * 155 + * Two iosys mapping structures are equal if they both refer to the same type of memory 156 + * and to the same address within that memory. 157 + * 158 + * Returns: 159 + * True if both structures are equal, or false otherwise. 160 + */ 161 + static inline bool iosys_map_is_equal(const struct iosys_map *lhs, 162 + const struct iosys_map *rhs) 163 + { 164 + if (lhs->is_iomem != rhs->is_iomem) 165 + return false; 166 + else if (lhs->is_iomem) 167 + return lhs->vaddr_iomem == rhs->vaddr_iomem; 168 + else 169 + return lhs->vaddr == rhs->vaddr; 170 + } 171 + 172 + /** 173 + * iosys_map_is_null - Tests whether an iosys mapping is NULL 174 + * @map: The iosys_map structure 175 + * 176 + * Depending on the state of struct iosys_map.is_iomem, tests if the 177 + * mapping is NULL. 178 + * 179 + * Returns: 180 + * True if the mapping is NULL, or false otherwise. 181 + */ 182 + static inline bool iosys_map_is_null(const struct iosys_map *map) 183 + { 184 + if (map->is_iomem) 185 + return !map->vaddr_iomem; 186 + return !map->vaddr; 187 + } 188 + 189 + /** 190 + * iosys_map_is_set - Tests if the iosys mapping has been set 191 + * @map: The iosys_map structure 192 + * 193 + * Depending on the state of struct iosys_map.is_iomem, tests if the 194 + * mapping has been set. 195 + * 196 + * Returns: 197 + * True if the mapping has been set, or false otherwise. 
198 + */ 199 + static inline bool iosys_map_is_set(const struct iosys_map *map) 200 + { 201 + return !iosys_map_is_null(map); 202 + } 203 + 204 + /** 205 + * iosys_map_clear - Clears an iosys mapping structure 206 + * @map: The iosys_map structure 207 + * 208 + * Clears all fields to zero, including struct iosys_map.is_iomem, so 209 + * mapping structures that were set to point to I/O memory are reset for 210 + * system memory. Pointers are cleared to NULL. This is the default. 211 + */ 212 + static inline void iosys_map_clear(struct iosys_map *map) 213 + { 214 + if (map->is_iomem) { 215 + map->vaddr_iomem = NULL; 216 + map->is_iomem = false; 217 + } else { 218 + map->vaddr = NULL; 219 + } 220 + } 221 + 222 + /** 223 + * iosys_map_memcpy_to - Memcpy into an iosys mapping 224 + * @dst: The iosys_map structure 225 + * @src: The source buffer 226 + * @len: The number of bytes in src 227 + * 228 + * Copies data into an iosys mapping. The source buffer is in system 229 + * memory. Depending on the buffer's location, the helper picks the correct 230 + * method of accessing the memory. 231 + */ 232 + static inline void iosys_map_memcpy_to(struct iosys_map *dst, const void *src, 233 + size_t len) 234 + { 235 + if (dst->is_iomem) 236 + memcpy_toio(dst->vaddr_iomem, src, len); 237 + else 238 + memcpy(dst->vaddr, src, len); 239 + } 240 + 241 + /** 242 + * iosys_map_incr - Increments the address stored in an iosys mapping 243 + * @map: The iosys_map structure 244 + * @incr: The number of bytes to increment 245 + * 246 + * Increments the address stored in an iosys mapping. Depending on the 247 + * buffer's location, the correct value will be updated. 248 + */ 249 + static inline void iosys_map_incr(struct iosys_map *map, size_t incr) 250 + { 251 + if (map->is_iomem) 252 + map->vaddr_iomem += incr; 253 + else 254 + map->vaddr += incr; 255 + } 256 + 257 + #endif /* __IOSYS_MAP_H__ */
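The usage pattern the rename preserves can be exercised outside the kernel. The following is a hedged userspace sketch, not the real header: the struct and helper names mirror the new iosys-map API above, but `__iomem`, `memcpy_toio()` and the kernel includes are stubbed out with plain pointers and `memcpy()`, so it only illustrates the set/test/copy/advance flow callers such as the vmap implementations in this patch rely on.

```c
/*
 * Userspace sketch of the iosys_map idiom. In the kernel,
 * vaddr_iomem is void __iomem * and the iomem branch of the
 * memcpy helper uses memcpy_toio(); both are stubbed here.
 */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

struct iosys_map {
	union {
		void *vaddr_iomem;	/* kernel: void __iomem * */
		void *vaddr;
	};
	bool is_iomem;
};

static inline void iosys_map_set_vaddr(struct iosys_map *map, void *vaddr)
{
	/* Point the mapping at system memory and clear the iomem flag. */
	map->vaddr = vaddr;
	map->is_iomem = false;
}

static inline bool iosys_map_is_null(const struct iosys_map *map)
{
	/* Check whichever union member the flag says is active. */
	return map->is_iomem ? !map->vaddr_iomem : !map->vaddr;
}

static inline void iosys_map_memcpy_to(struct iosys_map *dst, const void *src,
				       size_t len)
{
	if (dst->is_iomem)
		memcpy(dst->vaddr_iomem, src, len); /* kernel: memcpy_toio() */
	else
		memcpy(dst->vaddr, src, len);
}

static inline void iosys_map_incr(struct iosys_map *map, size_t incr)
{
	/* Advance the active pointer; cast since void * arithmetic is a GNU extension. */
	if (map->is_iomem)
		map->vaddr_iomem = (char *)map->vaddr_iomem + incr;
	else
		map->vaddr = (char *)map->vaddr + incr;
}
```

A caller fills the map once (here via iosys_map_set_vaddr(); a vmap implementation would do this for it) and then copies and advances through the single code path, regardless of where the memory lives.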