Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'topic/drm-misc-2015-11-26' of git://anongit.freedesktop.org/drm-intel into drm-next

Here's the first drm-misc pull, with mostly misc stuff all over.
The only somewhat invasive change is Ville's patch to mark the arg struct for
fb_create const - that might conflict with a new driver pull, so better to
get it in fast.

* tag 'topic/drm-misc-2015-11-26' of git://anongit.freedesktop.org/drm-intel:
drm/mm: use list_next_entry
drm/i915: fix potential dangling else problems in for_each_ macros
drm: fix potential dangling else problems in for_each_ macros
drm/sysfs: Send out uevent when connector->force changes
drm/atomic: Small documentation fix.
drm/mm: rewrite drm_mm_for_each_hole
drm/sysfs: Grab lock for edid/modes_show
drm: Print the src/dst/clip rectangles in error cases in drm_plane_helper
drm: Add "prefix" parameter to drm_rect_debug_print()
drm: Keep coordinates in the typical x, y, w, h order instead of x, y, h, w
drm: Pass the user drm_mode_fb_cmd2 as const to .fb_create()
drm: modes: replace simple_strtoul by kstrtouint
drm: Describe the Rotation property bits.
drm: Remove unused fbdev_list members
GPU-DRM: Delete unnecessary checks before drm_property_unreference_blob()
drm/dp: add eDP DPCD backlight control bit definitions
drm/tegra: Remove local fbdev emulation Kconfig option
drm/imx: Remove local fbdev emulation Kconfig option
drm/gem: Update/Polish docs
drm: Update GEM refcounting docs

+378 -291
+12 -36
Documentation/DocBook/gpu.tmpl
··· 615 615 <function>drm_gem_object_init</function>. Storage for private GEM 616 616 objects must be managed by drivers. 617 617 </para> 618 - <para> 619 - Drivers that do not need to extend GEM objects with private information 620 - can call the <function>drm_gem_object_alloc</function> function to 621 - allocate and initialize a struct <structname>drm_gem_object</structname> 622 - instance. The GEM core will call the optional driver 623 - <methodname>gem_init_object</methodname> operation after initializing 624 - the GEM object with <function>drm_gem_object_init</function>. 625 - <synopsis>int (*gem_init_object) (struct drm_gem_object *obj);</synopsis> 626 - </para> 627 - <para> 628 - No alloc-and-init function exists for private GEM objects. 629 - </para> 630 618 </sect3> 631 619 <sect3> 632 620 <title>GEM Objects Lifetime</title> ··· 623 635 acquired and release by calling <function>drm_gem_object_reference</function> 624 636 and <function>drm_gem_object_unreference</function> respectively. The 625 637 caller must hold the <structname>drm_device</structname> 626 - <structfield>struct_mutex</structfield> lock. As a convenience, GEM 627 - provides the <function>drm_gem_object_reference_unlocked</function> and 628 - <function>drm_gem_object_unreference_unlocked</function> functions that 629 - can be called without holding the lock. 638 + <structfield>struct_mutex</structfield> lock when calling 639 + <function>drm_gem_object_reference</function>. As a convenience, GEM 640 + provides <function>drm_gem_object_unreference_unlocked</function> 641 + functions that can be called without holding the lock. 630 642 </para> 631 643 <para> 632 644 When the last reference to a GEM object is released the GEM core calls ··· 637 649 </para> 638 650 <para> 639 651 <synopsis>void (*gem_free_object) (struct drm_gem_object *obj);</synopsis> 640 - Drivers are responsible for freeing all GEM object resources, including 641 - the resources created by the GEM core. If an mmap offset has been 642 - created for the object (in which case 643 - <structname>drm_gem_object</structname>::<structfield>map_list</structfield>::<structfield>map</structfield> 644 - is not NULL) it must be freed by a call to 645 - <function>drm_gem_free_mmap_offset</function>. The shmfs backing store 646 - must be released by calling <function>drm_gem_object_release</function> 647 - (that function can safely be called if no shmfs backing store has been 648 - created). 652 + Drivers are responsible for freeing all GEM object resources. This includes 653 + the resources created by the GEM core, which need to be released with 654 + <function>drm_gem_object_release</function>. 649 655 </para> 650 656 </sect3> 651 657 <sect3> ··· 722 740 DRM identifies the GEM object to be mapped by a fake offset passed 723 741 through the mmap offset argument. Prior to being mapped, a GEM object 724 742 must thus be associated with a fake offset. To do so, drivers must call 725 - <function>drm_gem_create_mmap_offset</function> on the object. The 726 - function allocates a fake offset range from a pool and stores the 727 - offset divided by PAGE_SIZE in 728 - <literal>obj-&gt;map_list.hash.key</literal>. Care must be taken not to 729 - call <function>drm_gem_create_mmap_offset</function> if a fake offset 730 - has already been allocated for the object. This can be tested by 731 - <literal>obj-&gt;map_list.map</literal> being non-NULL. 743 + <function>drm_gem_create_mmap_offset</function> on the object. 732 744 </para> 733 745 <para> 734 746 Once allocated, the fake offset value 735 - (<literal>obj-&gt;map_list.hash.key &lt;&lt; PAGE_SHIFT</literal>) 736 747 must be passed to the application in a driver-specific way and can then 737 748 be used as the mmap offset argument. 738 749 </para> ··· 811 836 abstracted from the client in libdrm. 812 837 </para> 813 838 </sect3> 814 - <sect3> 815 - <title>GEM Function Reference</title> 839 + </sect2> 840 + <sect2> 841 + <title>GEM Function Reference</title> 816 842 !Edrivers/gpu/drm/drm_gem.c 817 - </sect3> 843 + !Iinclude/drm/drm_gem.h 818 844 </sect2> 819 845 <sect2> 820 846 <title>VMA Offset Manager</title>
+2 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
··· 481 481 int 482 482 amdgpu_framebuffer_init(struct drm_device *dev, 483 483 struct amdgpu_framebuffer *rfb, 484 - struct drm_mode_fb_cmd2 *mode_cmd, 484 + const struct drm_mode_fb_cmd2 *mode_cmd, 485 485 struct drm_gem_object *obj) 486 486 { 487 487 int ret; ··· 498 498 static struct drm_framebuffer * 499 499 amdgpu_user_framebuffer_create(struct drm_device *dev, 500 500 struct drm_file *file_priv, 501 - struct drm_mode_fb_cmd2 *mode_cmd) 501 + const struct drm_mode_fb_cmd2 *mode_cmd) 502 502 { 503 503 struct drm_gem_object *obj; 504 504 struct amdgpu_framebuffer *amdgpu_fb;
-1
drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
··· 45 45 struct amdgpu_fbdev { 46 46 struct drm_fb_helper helper; 47 47 struct amdgpu_framebuffer rfb; 48 - struct list_head fbdev_list; 49 48 struct amdgpu_device *adev; 50 49 }; 51 50
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h
··· 551 551 552 552 int amdgpu_framebuffer_init(struct drm_device *dev, 553 553 struct amdgpu_framebuffer *rfb, 554 - struct drm_mode_fb_cmd2 *mode_cmd, 554 + const struct drm_mode_fb_cmd2 *mode_cmd, 555 555 struct drm_gem_object *obj); 556 556 557 557 int amdgpufb_remove(struct drm_device *dev, struct drm_framebuffer *fb);
+2 -2
drivers/gpu/drm/armada/armada_fb.c
··· 35 35 }; 36 36 37 37 struct armada_framebuffer *armada_framebuffer_create(struct drm_device *dev, 38 - struct drm_mode_fb_cmd2 *mode, struct armada_gem_object *obj) 38 + const struct drm_mode_fb_cmd2 *mode, struct armada_gem_object *obj) 39 39 { 40 40 struct armada_framebuffer *dfb; 41 41 uint8_t format, config; ··· 101 101 } 102 102 103 103 static struct drm_framebuffer *armada_fb_create(struct drm_device *dev, 104 - struct drm_file *dfile, struct drm_mode_fb_cmd2 *mode) 104 + struct drm_file *dfile, const struct drm_mode_fb_cmd2 *mode) 105 105 { 106 106 struct armada_gem_object *obj; 107 107 struct armada_framebuffer *dfb;
+1 -1
drivers/gpu/drm/armada/armada_fb.h
··· 19 19 #define drm_fb_obj(fb) drm_fb_to_armada_fb(fb)->obj 20 20 21 21 struct armada_framebuffer *armada_framebuffer_create(struct drm_device *, 22 - struct drm_mode_fb_cmd2 *, struct armada_gem_object *); 22 + const struct drm_mode_fb_cmd2 *, struct armada_gem_object *); 23 23 24 24 #endif
+1 -2
drivers/gpu/drm/ast/ast_drv.h
··· 256 256 struct ast_fbdev { 257 257 struct drm_fb_helper helper; 258 258 struct ast_framebuffer afb; 259 - struct list_head fbdev_list; 260 259 void *sysram; 261 260 int size; 262 261 struct ttm_bo_kmap_obj mapping; ··· 308 309 309 310 int ast_framebuffer_init(struct drm_device *dev, 310 311 struct ast_framebuffer *ast_fb, 311 - struct drm_mode_fb_cmd2 *mode_cmd, 312 + const struct drm_mode_fb_cmd2 *mode_cmd, 312 313 struct drm_gem_object *obj); 313 314 314 315 int ast_fbdev_init(struct drm_device *dev);
+1 -1
drivers/gpu/drm/ast/ast_fb.c
··· 163 163 }; 164 164 165 165 static int astfb_create_object(struct ast_fbdev *afbdev, 166 - struct drm_mode_fb_cmd2 *mode_cmd, 166 + const struct drm_mode_fb_cmd2 *mode_cmd, 167 167 struct drm_gem_object **gobj_p) 168 168 { 169 169 struct drm_device *dev = afbdev->helper.dev;
+2 -2
drivers/gpu/drm/ast/ast_main.c
··· 309 309 310 310 int ast_framebuffer_init(struct drm_device *dev, 311 311 struct ast_framebuffer *ast_fb, 312 - struct drm_mode_fb_cmd2 *mode_cmd, 312 + const struct drm_mode_fb_cmd2 *mode_cmd, 313 313 struct drm_gem_object *obj) 314 314 { 315 315 int ret; ··· 327 327 static struct drm_framebuffer * 328 328 ast_user_framebuffer_create(struct drm_device *dev, 329 329 struct drm_file *filp, 330 - struct drm_mode_fb_cmd2 *mode_cmd) 330 + const struct drm_mode_fb_cmd2 *mode_cmd) 331 331 { 332 332 struct drm_gem_object *obj; 333 333 struct ast_framebuffer *ast_fb;
+1 -1
drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_dc.c
··· 402 402 } 403 403 404 404 static struct drm_framebuffer *atmel_hlcdc_fb_create(struct drm_device *dev, 405 - struct drm_file *file_priv, struct drm_mode_fb_cmd2 *mode_cmd) 405 + struct drm_file *file_priv, const struct drm_mode_fb_cmd2 *mode_cmd) 406 406 { 407 407 return drm_fb_cma_create(dev, file_priv, mode_cmd); 408 408 }
+1 -1
drivers/gpu/drm/bochs/bochs.h
··· 149 149 150 150 int bochs_framebuffer_init(struct drm_device *dev, 151 151 struct bochs_framebuffer *gfb, 152 - struct drm_mode_fb_cmd2 *mode_cmd, 152 + const struct drm_mode_fb_cmd2 *mode_cmd, 153 153 struct drm_gem_object *obj); 154 154 int bochs_bo_pin(struct bochs_bo *bo, u32 pl_flag, u64 *gpu_addr); 155 155 int bochs_bo_unpin(struct bochs_bo *bo);
+1 -1
drivers/gpu/drm/bochs/bochs_fbdev.c
··· 34 34 }; 35 35 36 36 static int bochsfb_create_object(struct bochs_device *bochs, 37 - struct drm_mode_fb_cmd2 *mode_cmd, 37 + const struct drm_mode_fb_cmd2 *mode_cmd, 38 38 struct drm_gem_object **gobj_p) 39 39 { 40 40 struct drm_device *dev = bochs->dev;
+2 -2
drivers/gpu/drm/bochs/bochs_mm.c
··· 484 484 485 485 int bochs_framebuffer_init(struct drm_device *dev, 486 486 struct bochs_framebuffer *gfb, 487 - struct drm_mode_fb_cmd2 *mode_cmd, 487 + const struct drm_mode_fb_cmd2 *mode_cmd, 488 488 struct drm_gem_object *obj) 489 489 { 490 490 int ret; ··· 502 502 static struct drm_framebuffer * 503 503 bochs_user_framebuffer_create(struct drm_device *dev, 504 504 struct drm_file *filp, 505 - struct drm_mode_fb_cmd2 *mode_cmd) 505 + const struct drm_mode_fb_cmd2 *mode_cmd) 506 506 { 507 507 struct drm_gem_object *obj; 508 508 struct bochs_framebuffer *bochs_fb;
+1 -2
drivers/gpu/drm/cirrus/cirrus_drv.h
··· 153 153 struct cirrus_fbdev { 154 154 struct drm_fb_helper helper; 155 155 struct cirrus_framebuffer gfb; 156 - struct list_head fbdev_list; 157 156 void *sysram; 158 157 int size; 159 158 int x1, y1, x2, y2; /* dirty rect */ ··· 206 207 207 208 int cirrus_framebuffer_init(struct drm_device *dev, 208 209 struct cirrus_framebuffer *gfb, 209 - struct drm_mode_fb_cmd2 *mode_cmd, 210 + const struct drm_mode_fb_cmd2 *mode_cmd, 210 211 struct drm_gem_object *obj); 211 212 212 213 bool cirrus_check_framebuffer(struct cirrus_device *cdev, int width, int height,
+1 -1
drivers/gpu/drm/cirrus/cirrus_fbdev.c
··· 135 135 }; 136 136 137 137 static int cirrusfb_create_object(struct cirrus_fbdev *afbdev, 138 - struct drm_mode_fb_cmd2 *mode_cmd, 138 + const struct drm_mode_fb_cmd2 *mode_cmd, 139 139 struct drm_gem_object **gobj_p) 140 140 { 141 141 struct drm_device *dev = afbdev->helper.dev;
+2 -2
drivers/gpu/drm/cirrus/cirrus_main.c
··· 29 29 30 30 int cirrus_framebuffer_init(struct drm_device *dev, 31 31 struct cirrus_framebuffer *gfb, 32 - struct drm_mode_fb_cmd2 *mode_cmd, 32 + const struct drm_mode_fb_cmd2 *mode_cmd, 33 33 struct drm_gem_object *obj) 34 34 { 35 35 int ret; ··· 47 47 static struct drm_framebuffer * 48 48 cirrus_user_framebuffer_create(struct drm_device *dev, 49 49 struct drm_file *filp, 50 - struct drm_mode_fb_cmd2 *mode_cmd) 50 + const struct drm_mode_fb_cmd2 *mode_cmd) 51 51 { 52 52 struct cirrus_device *cdev = dev->dev_private; 53 53 struct drm_gem_object *obj;
+4 -7
drivers/gpu/drm/drm_atomic.c
··· 316 316 if (mode && memcmp(&state->mode, mode, sizeof(*mode)) == 0) 317 317 return 0; 318 318 319 - if (state->mode_blob) 320 - drm_property_unreference_blob(state->mode_blob); 319 + drm_property_unreference_blob(state->mode_blob); 321 320 state->mode_blob = NULL; 322 321 323 322 if (mode) { ··· 362 363 if (blob == state->mode_blob) 363 364 return 0; 364 365 365 - if (state->mode_blob) 366 - drm_property_unreference_blob(state->mode_blob); 366 + drm_property_unreference_blob(state->mode_blob); 367 367 state->mode_blob = NULL; 368 368 369 369 if (blob) { ··· 417 419 struct drm_property_blob *mode = 418 420 drm_property_lookup_blob(dev, val); 419 421 ret = drm_atomic_set_mode_prop_for_crtc(state, mode); 420 - if (mode) 421 - drm_property_unreference_blob(mode); 422 + drm_property_unreference_blob(mode); 422 423 return ret; 423 424 } 424 425 else if (crtc->funcs->atomic_set_property) ··· 1430 1433 } 1431 1434 1432 1435 /** 1433 - * drm_atomic_update_old_fb -- Unset old_fb pointers and set plane->fb pointers. 1436 + * drm_atomic_clean_old_fb -- Unset old_fb pointers and set plane->fb pointers. 1434 1437 * 1435 1438 * @dev: drm device to check. 1436 1439 * @plane_mask: plane mask for planes that were updated.
+9 -10
drivers/gpu/drm/drm_atomic_helper.c
··· 1485 1485 drm_atomic_set_fb_for_plane(plane_state, fb); 1486 1486 plane_state->crtc_x = crtc_x; 1487 1487 plane_state->crtc_y = crtc_y; 1488 - plane_state->crtc_h = crtc_h; 1489 1488 plane_state->crtc_w = crtc_w; 1489 + plane_state->crtc_h = crtc_h; 1490 1490 plane_state->src_x = src_x; 1491 1491 plane_state->src_y = src_y; 1492 - plane_state->src_h = src_h; 1493 1492 plane_state->src_w = src_w; 1493 + plane_state->src_h = src_h; 1494 1494 1495 1495 if (plane == crtc->cursor) 1496 1496 state->legacy_cursor_update = true; ··· 1609 1609 drm_atomic_set_fb_for_plane(plane_state, NULL); 1610 1610 plane_state->crtc_x = 0; 1611 1611 plane_state->crtc_y = 0; 1612 - plane_state->crtc_h = 0; 1613 1612 plane_state->crtc_w = 0; 1613 + plane_state->crtc_h = 0; 1614 1614 plane_state->src_x = 0; 1615 1615 plane_state->src_y = 0; 1616 - plane_state->src_h = 0; 1617 1616 plane_state->src_w = 0; 1617 + plane_state->src_h = 0; 1618 1618 1619 1619 return 0; 1620 1620 } ··· 1797 1797 drm_atomic_set_fb_for_plane(primary_state, set->fb); 1798 1798 primary_state->crtc_x = 0; 1799 1799 primary_state->crtc_y = 0; 1800 - primary_state->crtc_h = vdisplay; 1801 1800 primary_state->crtc_w = hdisplay; 1801 + primary_state->crtc_h = vdisplay; 1802 1802 primary_state->src_x = set->x << 16; 1803 1803 primary_state->src_y = set->y << 16; 1804 1804 if (primary_state->rotation & (BIT(DRM_ROTATE_90) | BIT(DRM_ROTATE_270))) { 1805 - primary_state->src_h = hdisplay << 16; 1806 1805 primary_state->src_w = vdisplay << 16; 1806 + primary_state->src_h = hdisplay << 16; 1807 1807 } else { 1808 - primary_state->src_h = vdisplay << 16; 1809 1808 primary_state->src_w = hdisplay << 16; 1809 + primary_state->src_h = vdisplay << 16; 1810 1810 } 1811 1811 1812 1812 commit: ··· 2184 2184 */ 2185 2185 void drm_atomic_helper_crtc_reset(struct drm_crtc *crtc) 2186 2186 { 2187 - if (crtc->state && crtc->state->mode_blob) 2187 + if (crtc->state) 2188 2188 drm_property_unreference_blob(crtc->state->mode_blob); 2189 2189 kfree(crtc->state); 2190 2190 crtc->state = kzalloc(sizeof(*crtc->state), GFP_KERNEL); ··· 2252 2252 void __drm_atomic_helper_crtc_destroy_state(struct drm_crtc *crtc, 2253 2253 struct drm_crtc_state *state) 2254 2254 { 2255 - if (state->mode_blob) 2256 - drm_property_unreference_blob(state->mode_blob); 2255 + drm_property_unreference_blob(state->mode_blob); 2257 2256 } 2258 2257 EXPORT_SYMBOL(__drm_atomic_helper_crtc_destroy_state); 2259 2258
+2 -2
drivers/gpu/drm/drm_crtc.c
··· 45 45 46 46 static struct drm_framebuffer * 47 47 internal_framebuffer_create(struct drm_device *dev, 48 - struct drm_mode_fb_cmd2 *r, 48 + const struct drm_mode_fb_cmd2 *r, 49 49 struct drm_file *file_priv); 50 50 51 51 /* Avoid boilerplate. I'm tired of typing. */ ··· 3235 3235 3236 3236 static struct drm_framebuffer * 3237 3237 internal_framebuffer_create(struct drm_device *dev, 3238 - struct drm_mode_fb_cmd2 *r, 3238 + const struct drm_mode_fb_cmd2 *r, 3239 3239 struct drm_file *file_priv) 3240 3240 { 3241 3241 struct drm_mode_config *config = &dev->mode_config;
+1 -1
drivers/gpu/drm/drm_crtc_helper.c
··· 818 818 * metadata fields. 819 819 */ 820 820 void drm_helper_mode_fill_fb_struct(struct drm_framebuffer *fb, 821 - struct drm_mode_fb_cmd2 *mode_cmd) 821 + const struct drm_mode_fb_cmd2 *mode_cmd) 822 822 { 823 823 int i; 824 824
+2 -2
drivers/gpu/drm/drm_fb_cma_helper.c
··· 74 74 }; 75 75 76 76 static struct drm_fb_cma *drm_fb_cma_alloc(struct drm_device *dev, 77 - struct drm_mode_fb_cmd2 *mode_cmd, struct drm_gem_cma_object **obj, 77 + const struct drm_mode_fb_cmd2 *mode_cmd, struct drm_gem_cma_object **obj, 78 78 unsigned int num_planes) 79 79 { 80 80 struct drm_fb_cma *fb_cma; ··· 107 107 * checked before calling this function. 108 108 */ 109 109 struct drm_framebuffer *drm_fb_cma_create(struct drm_device *dev, 110 - struct drm_file *file_priv, struct drm_mode_fb_cmd2 *mode_cmd) 110 + struct drm_file *file_priv, const struct drm_mode_fb_cmd2 *mode_cmd) 111 111 { 112 112 struct drm_fb_cma *fb_cma; 113 113 struct drm_gem_cma_object *objs[4];
+32 -3
drivers/gpu/drm/drm_gem.c
··· 244 244 * @filp: drm file-private structure to use for the handle look up 245 245 * @handle: userspace handle to delete 246 246 * 247 - * Removes the GEM handle from the @filp lookup table and if this is the last 248 - * handle also cleans up linked resources like GEM names. 247 + * Removes the GEM handle from the @filp lookup table which has been added with 248 + * drm_gem_handle_create(). If this is the last handle also cleans up linked 249 + * resources like GEM names. 249 250 */ 250 251 int 251 252 drm_gem_handle_delete(struct drm_file *filp, u32 handle) ··· 315 314 * This expects the dev->object_name_lock to be held already and will drop it 316 315 * before returning. Used to avoid races in establishing new handles when 317 316 * importing an object from either an flink name or a dma-buf. 317 + * 318 + * Handles must be release again through drm_gem_handle_delete(). This is done 319 + * when userspace closes @file_priv for all attached handles, or through the 320 + * GEM_CLOSE ioctl for individual handles. 318 321 */ 319 322 int 320 323 drm_gem_handle_create_tail(struct drm_file *file_priv, ··· 546 541 } 547 542 EXPORT_SYMBOL(drm_gem_put_pages); 548 543 549 - /** Returns a reference to the object named by the handle. */ 544 + /** 545 + * drm_gem_object_lookup - look up a GEM object from it's handle 546 + * @dev: DRM device 547 + * @filp: DRM file private date 548 + * @handle: userspace handle 549 + * 550 + * Returns: 551 + * 552 + * A reference to the object named by the handle if such exists on @filp, NULL 553 + * otherwise. 554 + */ 550 555 struct drm_gem_object * 551 556 drm_gem_object_lookup(struct drm_device *dev, struct drm_file *filp, 552 557 u32 handle) ··· 789 774 } 790 775 EXPORT_SYMBOL(drm_gem_object_free); 791 776 777 + /** 778 + * drm_gem_vm_open - vma->ops->open implementation for GEM 779 + * @vma: VM area structure 780 + * 781 + * This function implements the #vm_operations_struct open() callback for GEM 782 + * drivers. This must be used together with drm_gem_vm_close(). 783 + */ 792 784 void drm_gem_vm_open(struct vm_area_struct *vma) 793 785 { 794 786 struct drm_gem_object *obj = vma->vm_private_data; ··· 804 782 } 805 783 EXPORT_SYMBOL(drm_gem_vm_open); 806 784 785 + /** 786 + * drm_gem_vm_close - vma->ops->close implementation for GEM 787 + * @vma: VM area structure 788 + * 789 + * This function implements the #vm_operations_struct close() callback for GEM 790 + * drivers. This must be used together with drm_gem_vm_open(). 791 + */ 807 792 void drm_gem_vm_close(struct vm_area_struct *vma) 808 793 { 809 794 struct drm_gem_object *obj = vma->vm_private_data;
+10 -4
drivers/gpu/drm/drm_modes.c
··· 1230 1230 unsigned int xres = 0, yres = 0, bpp = 32, refresh = 0; 1231 1231 bool yres_specified = false, cvt = false, rb = false; 1232 1232 bool interlace = false, margins = false, was_digit = false; 1233 - int i; 1233 + int i, err; 1234 1234 enum drm_connector_force force = DRM_FORCE_UNSPECIFIED; 1235 1235 1236 1236 #ifdef CONFIG_FB ··· 1250 1250 case '@': 1251 1251 if (!refresh_specified && !bpp_specified && 1252 1252 !yres_specified && !cvt && !rb && was_digit) { 1253 - refresh = simple_strtol(&name[i+1], NULL, 10); 1253 + err = kstrtouint(&name[i + 1], 10, &refresh); 1254 + if (err) 1255 + return false; 1254 1256 refresh_specified = true; 1255 1257 was_digit = false; 1256 1258 } else ··· 1261 1259 case '-': 1262 1260 if (!bpp_specified && !yres_specified && !cvt && 1263 1261 !rb && was_digit) { 1264 - bpp = simple_strtol(&name[i+1], NULL, 10); 1262 + err = kstrtouint(&name[i + 1], 10, &bpp); 1263 + if (err) 1264 + return false; 1265 1265 bpp_specified = true; 1266 1266 was_digit = false; 1267 1267 } else ··· 1271 1267 break; 1272 1268 case 'x': 1273 1269 if (!yres_specified && was_digit) { 1274 - yres = simple_strtol(&name[i+1], NULL, 10); 1270 + err = kstrtouint(&name[i + 1], 10, &yres); 1271 + if (err) 1272 + return false; 1275 1273 yres_specified = true; 1276 1274 was_digit = false; 1277 1275 } else
+4
drivers/gpu/drm/drm_plane_helper.c
··· 164 164 vscale = drm_rect_calc_vscale(src, dest, min_scale, max_scale); 165 165 if (hscale < 0 || vscale < 0) { 166 166 DRM_DEBUG_KMS("Invalid scaling of plane\n"); 167 + drm_rect_debug_print("src: ", src, true); 168 + drm_rect_debug_print("dst: ", dest, false); 167 169 return -ERANGE; 168 170 } 169 171 ··· 182 180 183 181 if (!can_position && !drm_rect_equals(dest, clip)) { 184 182 DRM_DEBUG_KMS("Plane must cover entire CRTC\n"); 183 + drm_rect_debug_print("dst: ", dest, false); 184 + drm_rect_debug_print("clip: ", clip, false); 185 185 return -EINVAL; 186 186 } 187 187
+23 -23
drivers/gpu/drm/drm_probe_helper.c
··· 147 147 list_for_each_entry(mode, &connector->modes, head) 148 148 mode->status = MODE_UNVERIFIED; 149 149 150 + old_status = connector->status; 151 + 150 152 if (connector->force) { 151 153 if (connector->force == DRM_FORCE_ON || 152 154 connector->force == DRM_FORCE_ON_DIGITAL) ··· 158 156 if (connector->funcs->force) 159 157 connector->funcs->force(connector); 160 158 } else { 161 - old_status = connector->status; 162 - 163 159 connector->status = connector->funcs->detect(connector, true); 160 + } 161 + 162 + /* 163 + * Normally either the driver's hpd code or the poll loop should 164 + * pick up any changes and fire the hotplug event. But if 165 + * userspace sneaks in a probe, we might miss a change. Hence 166 + * check here, and if anything changed start the hotplug code. 167 + */ 168 + if (old_status != connector->status) { 169 + DRM_DEBUG_KMS("[CONNECTOR:%d:%s] status updated from %d to %d\n", 170 + connector->base.id, 171 + connector->name, 172 + old_status, connector->status); 164 173 165 174 /* 166 - * Normally either the driver's hpd code or the poll loop should 167 - * pick up any changes and fire the hotplug event. But if 168 - * userspace sneaks in a probe, we might miss a change. Hence 169 - * check here, and if anything changed start the hotplug code. 175 + * The hotplug event code might call into the fb 176 + * helpers, and so expects that we do not hold any 177 + * locks. Fire up the poll struct instead, it will 178 + * disable itself again. 170 179 */ 171 - if (old_status != connector->status) { 172 - DRM_DEBUG_KMS("[CONNECTOR:%d:%s] status updated from %d to %d\n", 173 - connector->base.id, 174 - connector->name, 175 - old_status, connector->status); 176 - 177 - /* 178 - * The hotplug event code might call into the fb 179 - * helpers, and so expects that we do not hold any 180 - * locks. Fire up the poll struct instead, it will 181 - * disable itself again. 182 - */ 183 - dev->mode_config.delayed_event = true; 184 - if (dev->mode_config.poll_enabled) 185 - schedule_delayed_work(&dev->mode_config.output_poll_work, 186 - 0); 187 - } 180 + dev->mode_config.delayed_event = true; 181 + if (dev->mode_config.poll_enabled) 182 + schedule_delayed_work(&dev->mode_config.output_poll_work, 183 + 0); 188 184 } 189 185 190 186 /* Re-enable polling in case the global poll config changed. */
+4 -3
drivers/gpu/drm/drm_rect.c
··· 275 275 276 276 /** 277 277 * drm_rect_debug_print - print the rectangle information 278 + * @prefix: prefix string 278 279 * @r: rectangle to print 279 280 * @fixed_point: rectangle is in 16.16 fixed point format 280 281 */ 281 - void drm_rect_debug_print(const struct drm_rect *r, bool fixed_point) 282 + void drm_rect_debug_print(const char *prefix, const struct drm_rect *r, bool fixed_point) 282 283 { 283 284 int w = drm_rect_width(r); 284 285 int h = drm_rect_height(r); 285 286 286 287 if (fixed_point) 287 - DRM_DEBUG_KMS("%d.%06ux%d.%06u%+d.%06u%+d.%06u\n", 288 + DRM_DEBUG_KMS("%s%d.%06ux%d.%06u%+d.%06u%+d.%06u\n", prefix, 288 289 w >> 16, ((w & 0xffff) * 15625) >> 10, 289 290 h >> 16, ((h & 0xffff) * 15625) >> 10, 290 291 r->x1 >> 16, ((r->x1 & 0xffff) * 15625) >> 10, 291 292 r->y1 >> 16, ((r->y1 & 0xffff) * 15625) >> 10); 292 293 else 293 - DRM_DEBUG_KMS("%dx%d%+d%+d\n", w, h, r->x1, r->y1); 294 + DRM_DEBUG_KMS("%s%dx%d%+d%+d\n", prefix, w, h, r->x1, r->y1); 294 295 } 295 296 EXPORT_SYMBOL(drm_rect_debug_print); 296 297
+25 -29
drivers/gpu/drm/drm_sysfs.c
··· 167 167 { 168 168 struct drm_connector *connector = to_drm_connector(device); 169 169 struct drm_device *dev = connector->dev; 170 - enum drm_connector_status old_status; 170 + enum drm_connector_force old_force; 171 171 int ret; 172 172 173 173 ret = mutex_lock_interruptible(&dev->mode_config.mutex); 174 174 if (ret) 175 175 return ret; 176 176 177 - old_status = connector->status; 177 + old_force = connector->force; 178 178 179 - if (sysfs_streq(buf, "detect")) { 179 + if (sysfs_streq(buf, "detect")) 180 180 connector->force = 0; 181 - connector->status = connector->funcs->detect(connector, true); 182 - } else if (sysfs_streq(buf, "on")) { 181 + else if (sysfs_streq(buf, "on")) 183 182 connector->force = DRM_FORCE_ON; 184 - } else if (sysfs_streq(buf, "on-digital")) { 183 + else if (sysfs_streq(buf, "on-digital")) 185 184 connector->force = DRM_FORCE_ON_DIGITAL; 186 - } else if (sysfs_streq(buf, "off")) { 185 + else if (sysfs_streq(buf, "off")) 187 186 connector->force = DRM_FORCE_OFF; 188 - } else 187 + else 189 188 ret = -EINVAL; 190 189 191 - if (ret == 0 && connector->force) { 192 - if (connector->force == DRM_FORCE_ON || 193 - connector->force == DRM_FORCE_ON_DIGITAL) 194 - connector->status = connector_status_connected; 195 - else 196 - connector->status = connector_status_disconnected; 197 - if (connector->funcs->force) 198 - connector->funcs->force(connector); 199 - } 200 - 201 - if (old_status != connector->status) { 202 - DRM_DEBUG_KMS("[CONNECTOR:%d:%s] status updated from %d to %d\n", 190 + if (old_force != connector->force || !connector->force) { 191 + DRM_DEBUG_KMS("[CONNECTOR:%d:%s] force updated from %d to %d or reprobing\n", 203 192 connector->base.id, 204 193 connector->name, 205 - old_status, connector->status); 194 + old_force, connector->force); 206 195 207 - dev->mode_config.delayed_event = true; 208 - if (dev->mode_config.poll_enabled) 209 - schedule_delayed_work(&dev->mode_config.output_poll_work, 210 - 0); 196 + connector->funcs->fill_modes(connector, 197 + dev->mode_config.max_width, 198 + dev->mode_config.max_height); 211 199 } 212 200 213 201 mutex_unlock(&dev->mode_config.mutex); ··· 244 256 struct drm_connector *connector = to_drm_connector(connector_dev); 245 257 unsigned char *edid; 246 258 size_t size; 259 + ssize_t ret = 0; 247 260 261 + mutex_lock(&connector->dev->mode_config.mutex); 248 262 if (!connector->edid_blob_ptr) 249 - return 0; 263 + goto unlock; 250 264 251 265 edid = connector->edid_blob_ptr->data; 252 266 size = connector->edid_blob_ptr->length; 253 267 if (!edid) 254 - return 0; 268 + goto unlock; 255 269 256 270 if (off >= size) 257 - return 0; 271 + goto unlock; 258 272 259 273 if (off + count > size) 260 274 count = size - off; 261 275 memcpy(buf, edid + off, count); 262 276 263 - return count; 277 + ret = count; 278 + unlock: 279 + mutex_unlock(&connector->dev->mode_config.mutex); 280 + 281 + return ret; 264 282 } 265 283 266 284 static ssize_t modes_show(struct device *device, ··· 277 283 struct drm_display_mode *mode; 278 284 int written = 0; 279 285 286 + mutex_lock(&connector->dev->mode_config.mutex); 280 287 list_for_each_entry(mode, &connector->modes, head) { 281 288 written += snprintf(buf + written, PAGE_SIZE - written, "%s\n", 282 289 mode->name); 283 290 } 291 + mutex_unlock(&connector->dev->mode_config.mutex); 284 292 285 293 return written; 286 294 }
+2 -2
drivers/gpu/drm/exynos/exynos_drm_fb.c
··· 117 117 118 118 struct drm_framebuffer * 119 119 exynos_drm_framebuffer_init(struct drm_device *dev, 120 - struct drm_mode_fb_cmd2 *mode_cmd, 120 + const struct drm_mode_fb_cmd2 *mode_cmd, 121 121 struct exynos_drm_gem **exynos_gem, 122 122 int count) 123 123 { ··· 154 154 155 155 static struct drm_framebuffer * 156 156 exynos_user_fb_create(struct drm_device *dev, struct drm_file *file_priv, 157 - struct drm_mode_fb_cmd2 *mode_cmd) 157 + const struct drm_mode_fb_cmd2 *mode_cmd) 158 158 { 159 159 struct exynos_drm_gem *exynos_gem[MAX_FB_BUFFER]; 160 160 struct drm_gem_object *obj;
+1 -1
drivers/gpu/drm/exynos/exynos_drm_fb.h
··· 18 18 19 19 struct drm_framebuffer * 20 20 exynos_drm_framebuffer_init(struct drm_device *dev, 21 - struct drm_mode_fb_cmd2 *mode_cmd, 21 + const struct drm_mode_fb_cmd2 *mode_cmd, 22 22 struct exynos_drm_gem **exynos_gem, 23 23 int count); 24 24
+3 -3
drivers/gpu/drm/gma500/framebuffer.c
··· 241 241 */ 242 242 static int psb_framebuffer_init(struct drm_device *dev, 243 243 struct psb_framebuffer *fb, 244 - struct drm_mode_fb_cmd2 *mode_cmd, 244 + const struct drm_mode_fb_cmd2 *mode_cmd, 245 245 struct gtt_range *gt) 246 246 { 247 247 u32 bpp, depth; ··· 284 284 285 285 static struct drm_framebuffer *psb_framebuffer_create 286 286 (struct drm_device *dev, 287 - struct drm_mode_fb_cmd2 *mode_cmd, 287 + const struct drm_mode_fb_cmd2 *mode_cmd, 288 288 struct gtt_range *gt) 289 289 { 290 290 struct psb_framebuffer *fb; ··· 488 488 */ 489 489 static struct drm_framebuffer *psb_user_framebuffer_create 490 490 (struct drm_device *dev, struct drm_file *filp, 491 - struct drm_mode_fb_cmd2 *cmd) 491 + const struct drm_mode_fb_cmd2 *cmd) 492 492 { 493 493 struct gtt_range *r; 494 494 struct drm_gem_object *obj;
+6 -6
drivers/gpu/drm/i915/i915_drv.h
··· 288 288 list_for_each_entry(intel_plane, \ 289 289 &(dev)->mode_config.plane_list, \ 290 290 base.head) \ 291 - if ((intel_plane)->pipe == (intel_crtc)->pipe) 291 + for_each_if ((intel_plane)->pipe == (intel_crtc)->pipe) 292 292 293 293 #define for_each_intel_crtc(dev, intel_crtc) \ 294 294 list_for_each_entry(intel_crtc, &dev->mode_config.crtc_list, base.head) ··· 305 305 306 306 #define for_each_encoder_on_crtc(dev, __crtc, intel_encoder) \ 307 307 list_for_each_entry((intel_encoder), &(dev)->mode_config.encoder_list, base.head) \ 308 - if ((intel_encoder)->base.crtc == (__crtc)) 308 + for_each_if ((intel_encoder)->base.crtc == (__crtc)) 309 309 310 310 #define for_each_connector_on_encoder(dev, __encoder, intel_connector) \ 311 311 list_for_each_entry((intel_connector), &(dev)->mode_config.connector_list, base.head) \ 312 - if ((intel_connector)->base.encoder == (__encoder)) 312 + for_each_if ((intel_connector)->base.encoder == (__encoder)) 313 313 314 314 #define for_each_power_domain(domain, mask) \ 315 315 for ((domain) = 0; (domain) < POWER_DOMAIN_NUM; (domain)++) \ 316 - if ((1 << (domain)) & (mask)) 316 + for_each_if ((1 << (domain)) & (mask)) 317 317 318 318 struct drm_i915_private; 319 319 struct i915_mm_struct; ··· 734 734 for ((i__) = 0, (domain__) = &(dev_priv__)->uncore.fw_domain[0]; \ 735 735 (i__) < FW_DOMAIN_ID_COUNT; \ 736 736 (i__)++, (domain__) = &(dev_priv__)->uncore.fw_domain[i__]) \ 737 - if (((mask__) & (dev_priv__)->uncore.fw_domains) & (1 << (i__))) 737 + for_each_if (((mask__) & (dev_priv__)->uncore.fw_domains) & (1 << (i__))) 738 738 739 739 #define for_each_fw_domain(domain__, dev_priv__, i__) \ 740 740 for_each_fw_domain_mask(domain__, FORCEWAKE_ALL, dev_priv__, i__) ··· 1979 1979 /* Iterate over initialised rings */ 1980 1980 #define for_each_ring(ring__, dev_priv__, i__) \ 1981 1981 for ((i__) = 0; (i__) < I915_NUM_RINGS; (i__)++) \ 1982 - if (((ring__) = &(dev_priv__)->ring[(i__)]), intel_ring_initialized((ring__))) 1982 + for_each_if ((((ring__) = &(dev_priv__)->ring[(i__)]), intel_ring_initialized((ring__)))) 1983 1983 1984 1984 enum hdmi_force_audio { 1985 1985 HDMI_AUDIO_OFF_DVI = -2, /* no aux data for HDMI-DVI converter */
+2 -2
drivers/gpu/drm/i915/intel_display.c
··· 12281 12281 list_for_each_entry((intel_crtc), \ 12282 12282 &(dev)->mode_config.crtc_list, \ 12283 12283 base.head) \ 12284 - if (mask & (1 <<(intel_crtc)->pipe)) 12284 + for_each_if (mask & (1 <<(intel_crtc)->pipe)) 12285 12285 12286 12286 static bool 12287 12287 intel_compare_m_n(unsigned int m, unsigned int n, ··· 14377 14377 static struct drm_framebuffer * 14378 14378 intel_user_framebuffer_create(struct drm_device *dev, 14379 14379 struct drm_file *filp, 14380 - struct drm_mode_fb_cmd2 *user_mode_cmd) 14380 + const struct drm_mode_fb_cmd2 *user_mode_cmd) 14381 14381 { 14382 14382 struct drm_i915_gem_object *obj; 14383 14383 struct drm_mode_fb_cmd2 mode_cmd = *user_mode_cmd;
-2
drivers/gpu/drm/i915/intel_drv.h
··· 123 123 struct intel_fbdev { 124 124 struct drm_fb_helper helper; 125 125 struct intel_framebuffer *fb; 126 - struct list_head fbdev_list; 127 - struct drm_display_mode *our_mode; 128 126 int preferred_bpp; 129 127 }; 130 128
+1 -1
drivers/gpu/drm/i915/intel_dsi.h
··· 117 117 118 118 #define for_each_dsi_port(__port, __ports_mask) \ 119 119 for ((__port) = PORT_A; (__port) < I915_MAX_PORTS; (__port)++) \ 120 - if ((__ports_mask) & (1 << (__port))) 120 + for_each_if ((__ports_mask) & (1 << (__port))) 121 121 122 122 static inline struct intel_dsi *enc_to_intel_dsi(struct drm_encoder *encoder) 123 123 {
+2 -2
drivers/gpu/drm/i915/intel_runtime_pm.c
··· 57 57 i < (power_domains)->power_well_count && \ 58 58 ((power_well) = &(power_domains)->power_wells[i]); \ 59 59 i++) \ 60 - if ((power_well)->domains & (domain_mask)) 60 + for_each_if ((power_well)->domains & (domain_mask)) 61 61 62 62 #define for_each_power_well_rev(i, power_well, domain_mask, power_domains) \ 63 63 for (i = (power_domains)->power_well_count - 1; \ 64 64 i >= 0 && ((power_well) = &(power_domains)->power_wells[i]);\ 65 65 i--) \ 66 - if ((power_well)->domains & (domain_mask)) 66 + for_each_if ((power_well)->domains & (domain_mask)) 67 67 68 68 bool intel_display_power_well_is_enabled(struct drm_i915_private *dev_priv, 69 69 int power_well_id);
+4 -4
drivers/gpu/drm/i915/intel_sprite.c
··· 832 832 hscale = drm_rect_calc_hscale(src, dst, min_scale, max_scale); 833 833 if (hscale < 0) { 834 834 DRM_DEBUG_KMS("Horizontal scaling factor out of limits\n"); 835 - drm_rect_debug_print(src, true); 836 - drm_rect_debug_print(dst, false); 835 + drm_rect_debug_print("src: ", src, true); 836 + drm_rect_debug_print("dst: ", dst, false); 837 837 838 838 return hscale; 839 839 } ··· 841 841 vscale = drm_rect_calc_vscale(src, dst, min_scale, max_scale); 842 842 if (vscale < 0) { 843 843 DRM_DEBUG_KMS("Vertical scaling factor out of limits\n"); 844 - drm_rect_debug_print(src, true); 845 - drm_rect_debug_print(dst, false); 844 + drm_rect_debug_print("src: ", src, true); 845 + drm_rect_debug_print("dst: ", dst, false); 846 846 847 847 return vscale; 848 848 }
-9
drivers/gpu/drm/imx/Kconfig
··· 10 10 help 11 11 enable i.MX graphics support 12 12 13 - config DRM_IMX_FB_HELPER 14 - tristate "provide legacy framebuffer /dev/fb0" 15 - select DRM_KMS_CMA_HELPER 16 - depends on DRM_IMX 17 - help 18 - The DRM framework can provide a legacy /dev/fb0 framebuffer 19 - for your device. This is necessary to get a framebuffer console 20 - and also for applications using the legacy framebuffer API 21 - 22 13 config DRM_IMX_PARALLEL_DISPLAY 23 14 tristate "Support for parallel displays" 24 15 select DRM_PANEL
+3 -9
drivers/gpu/drm/imx/imx-drm-core.c
··· 49 49 struct imx_drm_crtc_helper_funcs imx_drm_helper_funcs; 50 50 }; 51 51 52 + #if IS_ENABLED(CONFIG_DRM_FBDEV_EMULATION) 52 53 static int legacyfb_depth = 16; 53 54 module_param(legacyfb_depth, int, 0444); 55 + #endif 54 56 55 57 int imx_drm_crtc_id(struct imx_drm_crtc *crtc) 56 58 { ··· 62 60 63 61 static void imx_drm_driver_lastclose(struct drm_device *drm) 64 62 { 65 - #if IS_ENABLED(CONFIG_DRM_IMX_FB_HELPER) 66 63 struct imx_drm_device *imxdrm = drm->dev_private; 67 64 68 65 if (imxdrm->fbhelper) 69 66 drm_fbdev_cma_restore_mode(imxdrm->fbhelper); 70 - #endif 71 67 } 72 68 73 69 static int imx_drm_driver_unload(struct drm_device *drm) 74 70 { 75 - #if IS_ENABLED(CONFIG_DRM_IMX_FB_HELPER) 76 71 struct imx_drm_device *imxdrm = drm->dev_private; 77 - #endif 78 72 79 73 drm_kms_helper_poll_fini(drm); 80 74 81 - #if IS_ENABLED(CONFIG_DRM_IMX_FB_HELPER) 82 75 if (imxdrm->fbhelper) 83 76 drm_fbdev_cma_fini(imxdrm->fbhelper); 84 - #endif 85 77 86 78 component_unbind_all(drm->dev, drm); 87 79 ··· 211 215 212 216 static void imx_drm_output_poll_changed(struct drm_device *drm) 213 217 { 214 - #if IS_ENABLED(CONFIG_DRM_IMX_FB_HELPER) 215 218 struct imx_drm_device *imxdrm = drm->dev_private; 216 219 217 220 drm_fbdev_cma_hotplug_event(imxdrm->fbhelper); 218 - #endif 219 221 } 220 222 221 223 static struct drm_mode_config_funcs imx_drm_mode_config_funcs = { ··· 302 308 * The fb helper takes copies of key hardware information, so the 303 309 * crtcs/connectors/encoders must not change after this point. 304 310 */ 305 - #if IS_ENABLED(CONFIG_DRM_IMX_FB_HELPER) 311 + #if IS_ENABLED(CONFIG_DRM_FBDEV_EMULATION) 306 312 if (legacyfb_depth != 16 && legacyfb_depth != 32) { 307 313 dev_warn(drm->dev, "Invalid legacyfb_depth. Defaulting to 16bpp\n"); 308 314 legacyfb_depth = 16;
+1 -1
drivers/gpu/drm/mgag200/mgag200_drv.h
··· 252 252 /* mgag200_main.c */ 253 253 int mgag200_framebuffer_init(struct drm_device *dev, 254 254 struct mga_framebuffer *mfb, 255 - struct drm_mode_fb_cmd2 *mode_cmd, 255 + const struct drm_mode_fb_cmd2 *mode_cmd, 256 256 struct drm_gem_object *obj); 257 257 258 258
+1 -1
drivers/gpu/drm/mgag200/mgag200_fb.c
··· 138 138 }; 139 139 140 140 static int mgag200fb_create_object(struct mga_fbdev *afbdev, 141 - struct drm_mode_fb_cmd2 *mode_cmd, 141 + const struct drm_mode_fb_cmd2 *mode_cmd, 142 142 struct drm_gem_object **gobj_p) 143 143 { 144 144 struct drm_device *dev = afbdev->helper.dev;
+2 -2
drivers/gpu/drm/mgag200/mgag200_main.c
··· 29 29 30 30 int mgag200_framebuffer_init(struct drm_device *dev, 31 31 struct mga_framebuffer *gfb, 32 - struct drm_mode_fb_cmd2 *mode_cmd, 32 + const struct drm_mode_fb_cmd2 *mode_cmd, 33 33 struct drm_gem_object *obj) 34 34 { 35 35 int ret; ··· 47 47 static struct drm_framebuffer * 48 48 mgag200_user_framebuffer_create(struct drm_device *dev, 49 49 struct drm_file *filp, 50 - struct drm_mode_fb_cmd2 *mode_cmd) 50 + const struct drm_mode_fb_cmd2 *mode_cmd) 51 51 { 52 52 struct drm_gem_object *obj; 53 53 struct mga_framebuffer *mga_fb;
+2 -2
drivers/gpu/drm/msm/msm_drv.h
··· 240 240 struct drm_gem_object *msm_framebuffer_bo(struct drm_framebuffer *fb, int plane); 241 241 const struct msm_format *msm_framebuffer_format(struct drm_framebuffer *fb); 242 242 struct drm_framebuffer *msm_framebuffer_init(struct drm_device *dev, 243 - struct drm_mode_fb_cmd2 *mode_cmd, struct drm_gem_object **bos); 243 + const struct drm_mode_fb_cmd2 *mode_cmd, struct drm_gem_object **bos); 244 244 struct drm_framebuffer *msm_framebuffer_create(struct drm_device *dev, 245 - struct drm_file *file, struct drm_mode_fb_cmd2 *mode_cmd); 245 + struct drm_file *file, const struct drm_mode_fb_cmd2 *mode_cmd); 246 246 247 247 struct drm_fb_helper *msm_fbdev_init(struct drm_device *dev); 248 248
+2 -2
drivers/gpu/drm/msm/msm_fb.c
··· 138 138 } 139 139 140 140 struct drm_framebuffer *msm_framebuffer_create(struct drm_device *dev, 141 - struct drm_file *file, struct drm_mode_fb_cmd2 *mode_cmd) 141 + struct drm_file *file, const struct drm_mode_fb_cmd2 *mode_cmd) 142 142 { 143 143 struct drm_gem_object *bos[4] = {0}; 144 144 struct drm_framebuffer *fb; ··· 168 168 } 169 169 170 170 struct drm_framebuffer *msm_framebuffer_init(struct drm_device *dev, 171 - struct drm_mode_fb_cmd2 *mode_cmd, struct drm_gem_object **bos) 171 + const struct drm_mode_fb_cmd2 *mode_cmd, struct drm_gem_object **bos) 172 172 { 173 173 struct msm_drm_private *priv = dev->dev_private; 174 174 struct msm_kms *kms = priv->kms;
+2 -2
drivers/gpu/drm/nouveau/nouveau_display.c
··· 246 246 int 247 247 nouveau_framebuffer_init(struct drm_device *dev, 248 248 struct nouveau_framebuffer *nv_fb, 249 - struct drm_mode_fb_cmd2 *mode_cmd, 249 + const struct drm_mode_fb_cmd2 *mode_cmd, 250 250 struct nouveau_bo *nvbo) 251 251 { 252 252 struct nouveau_display *disp = nouveau_display(dev); ··· 272 272 static struct drm_framebuffer * 273 273 nouveau_user_framebuffer_create(struct drm_device *dev, 274 274 struct drm_file *file_priv, 275 - struct drm_mode_fb_cmd2 *mode_cmd) 275 + const struct drm_mode_fb_cmd2 *mode_cmd) 276 276 { 277 277 struct nouveau_framebuffer *nouveau_fb; 278 278 struct drm_gem_object *gem;
+1 -1
drivers/gpu/drm/nouveau/nouveau_display.h
··· 23 23 } 24 24 25 25 int nouveau_framebuffer_init(struct drm_device *, struct nouveau_framebuffer *, 26 - struct drm_mode_fb_cmd2 *, struct nouveau_bo *); 26 + const struct drm_mode_fb_cmd2 *, struct nouveau_bo *); 27 27 28 28 struct nouveau_page_flip_state { 29 29 struct list_head head;
-1
drivers/gpu/drm/nouveau/nouveau_fbcon.h
··· 34 34 struct nouveau_fbdev { 35 35 struct drm_fb_helper helper; 36 36 struct nouveau_framebuffer nouveau_fb; 37 - struct list_head fbdev_list; 38 37 struct drm_device *dev; 39 38 unsigned int saved_flags; 40 39 struct nvif_object surf2d;
+3 -3
drivers/gpu/drm/omapdrm/omap_drv.h
··· 172 172 uint32_t omap_framebuffer_get_formats(uint32_t *pixel_formats, 173 173 uint32_t max_formats, enum omap_color_mode supported_modes); 174 174 struct drm_framebuffer *omap_framebuffer_create(struct drm_device *dev, 175 - struct drm_file *file, struct drm_mode_fb_cmd2 *mode_cmd); 175 + struct drm_file *file, const struct drm_mode_fb_cmd2 *mode_cmd); 176 176 struct drm_framebuffer *omap_framebuffer_init(struct drm_device *dev, 177 - struct drm_mode_fb_cmd2 *mode_cmd, struct drm_gem_object **bos); 177 + const struct drm_mode_fb_cmd2 *mode_cmd, struct drm_gem_object **bos); 178 178 struct drm_gem_object *omap_framebuffer_bo(struct drm_framebuffer *fb, int p); 179 179 int omap_framebuffer_pin(struct drm_framebuffer *fb); 180 180 void omap_framebuffer_unpin(struct drm_framebuffer *fb); ··· 248 248 249 249 static inline int objects_lookup(struct drm_device *dev, 250 250 struct drm_file *filp, uint32_t pixel_format, 251 - struct drm_gem_object **bos, uint32_t *handles) 251 + struct drm_gem_object **bos, const uint32_t *handles) 252 252 { 253 253 int i, n = drm_format_num_planes(pixel_format); 254 254
+2 -2
drivers/gpu/drm/omapdrm/omap_fb.c
··· 364 364 #endif 365 365 366 366 struct drm_framebuffer *omap_framebuffer_create(struct drm_device *dev, 367 - struct drm_file *file, struct drm_mode_fb_cmd2 *mode_cmd) 367 + struct drm_file *file, const struct drm_mode_fb_cmd2 *mode_cmd) 368 368 { 369 369 struct drm_gem_object *bos[4]; 370 370 struct drm_framebuffer *fb; ··· 386 386 } 387 387 388 388 struct drm_framebuffer *omap_framebuffer_init(struct drm_device *dev, 389 - struct drm_mode_fb_cmd2 *mode_cmd, struct drm_gem_object **bos) 389 + const struct drm_mode_fb_cmd2 *mode_cmd, struct drm_gem_object **bos) 390 390 { 391 391 struct omap_framebuffer *omap_fb = NULL; 392 392 struct drm_framebuffer *fb = NULL;
+2 -2
drivers/gpu/drm/qxl/qxl_display.c
··· 521 521 int 522 522 qxl_framebuffer_init(struct drm_device *dev, 523 523 struct qxl_framebuffer *qfb, 524 - struct drm_mode_fb_cmd2 *mode_cmd, 524 + const struct drm_mode_fb_cmd2 *mode_cmd, 525 525 struct drm_gem_object *obj) 526 526 { 527 527 int ret; ··· 1003 1003 static struct drm_framebuffer * 1004 1004 qxl_user_framebuffer_create(struct drm_device *dev, 1005 1005 struct drm_file *file_priv, 1006 - struct drm_mode_fb_cmd2 *mode_cmd) 1006 + const struct drm_mode_fb_cmd2 *mode_cmd) 1007 1007 { 1008 1008 struct drm_gem_object *obj; 1009 1009 struct qxl_framebuffer *qxl_fb;
+1 -1
drivers/gpu/drm/qxl/qxl_drv.h
··· 390 390 int 391 391 qxl_framebuffer_init(struct drm_device *dev, 392 392 struct qxl_framebuffer *rfb, 393 - struct drm_mode_fb_cmd2 *mode_cmd, 393 + const struct drm_mode_fb_cmd2 *mode_cmd, 394 394 struct drm_gem_object *obj); 395 395 void qxl_display_read_client_monitors_config(struct qxl_device *qdev); 396 396 void qxl_send_monitors_config(struct qxl_device *qdev);
+1 -2
drivers/gpu/drm/qxl/qxl_fb.c
··· 40 40 struct qxl_fbdev { 41 41 struct drm_fb_helper helper; 42 42 struct qxl_framebuffer qfb; 43 - struct list_head fbdev_list; 44 43 struct qxl_device *qdev; 45 44 46 45 spinlock_t delayed_ops_lock; ··· 282 283 } 283 284 284 285 static int qxlfb_create_pinned_object(struct qxl_fbdev *qfbdev, 285 - struct drm_mode_fb_cmd2 *mode_cmd, 286 + const struct drm_mode_fb_cmd2 *mode_cmd, 286 287 struct drm_gem_object **gobj_p) 287 288 { 288 289 struct qxl_device *qdev = qfbdev->qdev;
+2 -2
drivers/gpu/drm/radeon/radeon_display.c
··· 1292 1292 int 1293 1293 radeon_framebuffer_init(struct drm_device *dev, 1294 1294 struct radeon_framebuffer *rfb, 1295 - struct drm_mode_fb_cmd2 *mode_cmd, 1295 + const struct drm_mode_fb_cmd2 *mode_cmd, 1296 1296 struct drm_gem_object *obj) 1297 1297 { 1298 1298 int ret; ··· 1309 1309 static struct drm_framebuffer * 1310 1310 radeon_user_framebuffer_create(struct drm_device *dev, 1311 1311 struct drm_file *file_priv, 1312 - struct drm_mode_fb_cmd2 *mode_cmd) 1312 + const struct drm_mode_fb_cmd2 *mode_cmd) 1313 1313 { 1314 1314 struct drm_gem_object *obj; 1315 1315 struct radeon_framebuffer *radeon_fb;
-1
drivers/gpu/drm/radeon/radeon_fb.c
··· 44 44 struct radeon_fbdev { 45 45 struct drm_fb_helper helper; 46 46 struct radeon_framebuffer rfb; 47 - struct list_head fbdev_list; 48 47 struct radeon_device *rdev; 49 48 }; 50 49
+1 -1
drivers/gpu/drm/radeon/radeon_mode.h
··· 929 929 u16 *blue, int regno); 930 930 int radeon_framebuffer_init(struct drm_device *dev, 931 931 struct radeon_framebuffer *rfb, 932 - struct drm_mode_fb_cmd2 *mode_cmd, 932 + const struct drm_mode_fb_cmd2 *mode_cmd, 933 933 struct drm_gem_object *obj); 934 934 935 935 int radeonfb_remove(struct drm_device *dev, struct drm_framebuffer *fb);
+1 -1
drivers/gpu/drm/rcar-du/rcar_du_kms.c
··· 136 136 137 137 static struct drm_framebuffer * 138 138 rcar_du_fb_create(struct drm_device *dev, struct drm_file *file_priv, 139 - struct drm_mode_fb_cmd2 *mode_cmd) 139 + const struct drm_mode_fb_cmd2 *mode_cmd) 140 140 { 141 141 struct rcar_du_device *rcdu = dev->dev_private; 142 142 const struct rcar_du_format_info *format;
+3 -3
drivers/gpu/drm/rockchip/rockchip_drm_fb.c
··· 72 72 }; 73 73 74 74 static struct rockchip_drm_fb * 75 - rockchip_fb_alloc(struct drm_device *dev, struct drm_mode_fb_cmd2 *mode_cmd, 75 + rockchip_fb_alloc(struct drm_device *dev, const struct drm_mode_fb_cmd2 *mode_cmd, 76 76 struct drm_gem_object **obj, unsigned int num_planes) 77 77 { 78 78 struct rockchip_drm_fb *rockchip_fb; ··· 102 102 103 103 static struct drm_framebuffer * 104 104 rockchip_user_fb_create(struct drm_device *dev, struct drm_file *file_priv, 105 - struct drm_mode_fb_cmd2 *mode_cmd) 105 + const struct drm_mode_fb_cmd2 *mode_cmd) 106 106 { 107 107 struct rockchip_drm_fb *rockchip_fb; 108 108 struct drm_gem_object *objs[ROCKCHIP_MAX_FB_BUFFER]; ··· 173 173 174 174 struct drm_framebuffer * 175 175 rockchip_drm_framebuffer_init(struct drm_device *dev, 176 - struct drm_mode_fb_cmd2 *mode_cmd, 176 + const struct drm_mode_fb_cmd2 *mode_cmd, 177 177 struct drm_gem_object *obj) 178 178 { 179 179 struct rockchip_drm_fb *rockchip_fb;
+1 -1
drivers/gpu/drm/rockchip/rockchip_drm_fb.h
··· 17 17 18 18 struct drm_framebuffer * 19 19 rockchip_drm_framebuffer_init(struct drm_device *dev, 20 - struct drm_mode_fb_cmd2 *mode_cmd, 20 + const struct drm_mode_fb_cmd2 *mode_cmd, 21 21 struct drm_gem_object *obj); 22 22 void rockchip_drm_framebuffer_fini(struct drm_framebuffer *fb); 23 23
+1 -1
drivers/gpu/drm/shmobile/shmob_drm_kms.c
··· 104 104 105 105 static struct drm_framebuffer * 106 106 shmob_drm_fb_create(struct drm_device *dev, struct drm_file *file_priv, 107 - struct drm_mode_fb_cmd2 *mode_cmd) 107 + const struct drm_mode_fb_cmd2 *mode_cmd) 108 108 { 109 109 const struct shmob_drm_format_info *format; 110 110
-12
drivers/gpu/drm/tegra/Kconfig
··· 16 16 17 17 if DRM_TEGRA 18 18 19 - config DRM_TEGRA_FBDEV 20 - bool "Enable legacy fbdev support" 21 - select DRM_KMS_FB_HELPER 22 - select FB_SYS_FILLRECT 23 - select FB_SYS_COPYAREA 24 - select FB_SYS_IMAGEBLIT 25 - default y 26 - help 27 - Choose this option if you have a need for the legacy fbdev support. 28 - Note that this support also provides the Linux console on top of 29 - the Tegra modesetting driver. 30 - 31 19 config DRM_TEGRA_DEBUG 32 20 bool "NVIDIA Tegra DRM debug support" 33 21 help
+2 -2
drivers/gpu/drm/tegra/drm.c
··· 106 106 107 107 static const struct drm_mode_config_funcs tegra_drm_mode_funcs = { 108 108 .fb_create = tegra_fb_create, 109 - #ifdef CONFIG_DRM_TEGRA_FBDEV 109 + #ifdef CONFIG_DRM_FBDEV_EMULATION 110 110 .output_poll_changed = tegra_fb_output_poll_changed, 111 111 #endif 112 112 .atomic_check = drm_atomic_helper_check, ··· 260 260 261 261 static void tegra_drm_lastclose(struct drm_device *drm) 262 262 { 263 - #ifdef CONFIG_DRM_TEGRA_FBDEV 263 + #ifdef CONFIG_DRM_FBDEV_EMULATION 264 264 struct tegra_drm *tegra = drm->dev_private; 265 265 266 266 tegra_fbdev_restore_mode(tegra->fbdev);
+4 -4
drivers/gpu/drm/tegra/drm.h
··· 30 30 unsigned int num_planes; 31 31 }; 32 32 33 - #ifdef CONFIG_DRM_TEGRA_FBDEV 33 + #ifdef CONFIG_DRM_FBDEV_EMULATION 34 34 struct tegra_fbdev { 35 35 struct drm_fb_helper base; 36 36 struct tegra_fb *fb; ··· 46 46 struct mutex clients_lock; 47 47 struct list_head clients; 48 48 49 - #ifdef CONFIG_DRM_TEGRA_FBDEV 49 + #ifdef CONFIG_DRM_FBDEV_EMULATION 50 50 struct tegra_fbdev *fbdev; 51 51 #endif 52 52 ··· 268 268 struct tegra_bo_tiling *tiling); 269 269 struct drm_framebuffer *tegra_fb_create(struct drm_device *drm, 270 270 struct drm_file *file, 271 - struct drm_mode_fb_cmd2 *cmd); 271 + const struct drm_mode_fb_cmd2 *cmd); 272 272 int tegra_drm_fb_prepare(struct drm_device *drm); 273 273 void tegra_drm_fb_free(struct drm_device *drm); 274 274 int tegra_drm_fb_init(struct drm_device *drm); 275 275 void tegra_drm_fb_exit(struct drm_device *drm); 276 - #ifdef CONFIG_DRM_TEGRA_FBDEV 276 + #ifdef CONFIG_DRM_FBDEV_EMULATION 277 277 void tegra_fbdev_restore_mode(struct tegra_fbdev *fbdev); 278 278 void tegra_fb_output_poll_changed(struct drm_device *drm); 279 279 #endif
+8 -8
drivers/gpu/drm/tegra/fb.c
··· 18 18 return container_of(fb, struct tegra_fb, base); 19 19 } 20 20 21 - #ifdef CONFIG_DRM_TEGRA_FBDEV 21 + #ifdef CONFIG_DRM_FBDEV_EMULATION 22 22 static inline struct tegra_fbdev *to_tegra_fbdev(struct drm_fb_helper *helper) 23 23 { 24 24 return container_of(helper, struct tegra_fbdev, base); ··· 92 92 }; 93 93 94 94 static struct tegra_fb *tegra_fb_alloc(struct drm_device *drm, 95 - struct drm_mode_fb_cmd2 *mode_cmd, 95 + const struct drm_mode_fb_cmd2 *mode_cmd, 96 96 struct tegra_bo **planes, 97 97 unsigned int num_planes) 98 98 { ··· 131 131 132 132 struct drm_framebuffer *tegra_fb_create(struct drm_device *drm, 133 133 struct drm_file *file, 134 - struct drm_mode_fb_cmd2 *cmd) 134 + const struct drm_mode_fb_cmd2 *cmd) 135 135 { 136 136 unsigned int hsub, vsub, i; 137 137 struct tegra_bo *planes[4]; ··· 181 181 return ERR_PTR(err); 182 182 } 183 183 184 - #ifdef CONFIG_DRM_TEGRA_FBDEV 184 + #ifdef CONFIG_DRM_FBDEV_EMULATION 185 185 static struct fb_ops tegra_fb_ops = { 186 186 .owner = THIS_MODULE, 187 187 .fb_fillrect = drm_fb_helper_sys_fillrect, ··· 370 370 371 371 int tegra_drm_fb_prepare(struct drm_device *drm) 372 372 { 373 - #ifdef CONFIG_DRM_TEGRA_FBDEV 373 + #ifdef CONFIG_DRM_FBDEV_EMULATION 374 374 struct tegra_drm *tegra = drm->dev_private; 375 375 376 376 tegra->fbdev = tegra_fbdev_create(drm); ··· 383 383 384 384 void tegra_drm_fb_free(struct drm_device *drm) 385 385 { 386 - #ifdef CONFIG_DRM_TEGRA_FBDEV 386 + #ifdef CONFIG_DRM_FBDEV_EMULATION 387 387 struct tegra_drm *tegra = drm->dev_private; 388 388 389 389 tegra_fbdev_free(tegra->fbdev); ··· 392 392 393 393 int tegra_drm_fb_init(struct drm_device *drm) 394 394 { 395 - #ifdef CONFIG_DRM_TEGRA_FBDEV 395 + #ifdef CONFIG_DRM_FBDEV_EMULATION 396 396 struct tegra_drm *tegra = drm->dev_private; 397 397 int err; 398 398 ··· 407 407 408 408 void tegra_drm_fb_exit(struct drm_device *drm) 409 409 { 410 - #ifdef CONFIG_DRM_TEGRA_FBDEV 410 + #ifdef CONFIG_DRM_FBDEV_EMULATION 411 411 struct tegra_drm *tegra = drm->dev_private; 412 412 413 413 tegra_fbdev_exit(tegra->fbdev);
+1 -1
drivers/gpu/drm/tilcdc/tilcdc_drv.c
··· 46 46 static struct of_device_id tilcdc_of_match[]; 47 47 48 48 static struct drm_framebuffer *tilcdc_fb_create(struct drm_device *dev, 49 - struct drm_file *file_priv, struct drm_mode_fb_cmd2 *mode_cmd) 49 + struct drm_file *file_priv, const struct drm_mode_fb_cmd2 *mode_cmd) 50 50 { 51 51 return drm_fb_cma_create(dev, file_priv, mode_cmd); 52 52 }
+1 -1
drivers/gpu/drm/udl/udl_drv.h
··· 108 108 struct drm_framebuffer * 109 109 udl_fb_user_fb_create(struct drm_device *dev, 110 110 struct drm_file *file, 111 - struct drm_mode_fb_cmd2 *mode_cmd); 111 + const struct drm_mode_fb_cmd2 *mode_cmd); 112 112 113 113 int udl_render_hline(struct drm_device *dev, int bpp, struct urb **urb_ptr, 114 114 const char *front, char **urb_buf_ptr,
+2 -3
drivers/gpu/drm/udl/udl_fb.c
··· 33 33 struct udl_fbdev { 34 34 struct drm_fb_helper helper; 35 35 struct udl_framebuffer ufb; 36 - struct list_head fbdev_list; 37 36 int fb_count; 38 37 }; 39 38 ··· 455 456 static int 456 457 udl_framebuffer_init(struct drm_device *dev, 457 458 struct udl_framebuffer *ufb, 458 - struct drm_mode_fb_cmd2 *mode_cmd, 459 + const struct drm_mode_fb_cmd2 *mode_cmd, 459 460 struct udl_gem_object *obj) 460 461 { 461 462 int ret; ··· 623 624 struct drm_framebuffer * 624 625 udl_fb_user_fb_create(struct drm_device *dev, 625 626 struct drm_file *file, 626 - struct drm_mode_fb_cmd2 *mode_cmd) 627 + const struct drm_mode_fb_cmd2 *mode_cmd) 627 628 { 628 629 struct drm_gem_object *obj; 629 630 struct udl_framebuffer *ufb;
+2 -2
drivers/gpu/drm/virtio/virtgpu_display.c
··· 215 215 int 216 216 virtio_gpu_framebuffer_init(struct drm_device *dev, 217 217 struct virtio_gpu_framebuffer *vgfb, 218 - struct drm_mode_fb_cmd2 *mode_cmd, 218 + const struct drm_mode_fb_cmd2 *mode_cmd, 219 219 struct drm_gem_object *obj) 220 220 { 221 221 int ret; ··· 465 465 static struct drm_framebuffer * 466 466 virtio_gpu_user_framebuffer_create(struct drm_device *dev, 467 467 struct drm_file *file_priv, 468 - struct drm_mode_fb_cmd2 *mode_cmd) 468 + const struct drm_mode_fb_cmd2 *mode_cmd) 469 469 { 470 470 struct drm_gem_object *obj = NULL; 471 471 struct virtio_gpu_framebuffer *virtio_gpu_fb;
+1 -1
drivers/gpu/drm/virtio/virtgpu_drv.h
··· 328 328 /* virtio_gpu_display.c */ 329 329 int virtio_gpu_framebuffer_init(struct drm_device *dev, 330 330 struct virtio_gpu_framebuffer *vgfb, 331 - struct drm_mode_fb_cmd2 *mode_cmd, 331 + const struct drm_mode_fb_cmd2 *mode_cmd, 332 332 struct drm_gem_object *obj); 333 333 int virtio_gpu_modeset_init(struct virtio_gpu_device *vgdev); 334 334 void virtio_gpu_modeset_fini(struct virtio_gpu_device *vgdev);
-1
drivers/gpu/drm/virtio/virtgpu_fb.c
··· 32 32 struct virtio_gpu_fbdev { 33 33 struct drm_fb_helper helper; 34 34 struct virtio_gpu_framebuffer vgfb; 35 - struct list_head fbdev_list; 36 35 struct virtio_gpu_device *vgdev; 37 36 struct delayed_work work; 38 37 };
+1 -1
drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
··· 930 930 931 931 static struct drm_framebuffer *vmw_kms_fb_create(struct drm_device *dev, 932 932 struct drm_file *file_priv, 933 - struct drm_mode_fb_cmd2 *mode_cmd2) 933 + const struct drm_mode_fb_cmd2 *mode_cmd2) 934 934 { 935 935 struct vmw_private *dev_priv = vmw_priv(dev); 936 936 struct ttm_object_file *tfile = vmw_fpriv(file_priv)->tfile;
+3
include/drm/drmP.h
··· 1111 1111 return true; 1112 1112 } 1113 1113 1114 + /* helper for handling conditionals in various for_each macros */ 1115 + #define for_each_if(condition) if (!(condition)) {} else 1116 + 1114 1117 #endif
+3 -3
include/drm/drm_atomic.h
··· 149 149 ((connector) = (state)->connectors[__i], \ 150 150 (connector_state) = (state)->connector_states[__i], 1); \ 151 151 (__i)++) \ 152 - if (connector) 152 + for_each_if (connector) 153 153 154 154 #define for_each_crtc_in_state(state, crtc, crtc_state, __i) \ 155 155 for ((__i) = 0; \ ··· 157 157 ((crtc) = (state)->crtcs[__i], \ 158 158 (crtc_state) = (state)->crtc_states[__i], 1); \ 159 159 (__i)++) \ 160 - if (crtc_state) 160 + for_each_if (crtc_state) 161 161 162 162 #define for_each_plane_in_state(state, plane, plane_state, __i) \ 163 163 for ((__i) = 0; \ ··· 165 165 ((plane) = (state)->planes[__i], \ 166 166 (plane_state) = (state)->plane_states[__i], 1); \ 167 167 (__i)++) \ 168 - if (plane_state) 168 + for_each_if (plane_state) 169 169 static inline bool 170 170 drm_atomic_crtc_needs_modeset(struct drm_crtc_state *state) 171 171 {
+8 -4
include/drm/drm_crtc.h
··· 85 85 return (uint64_t)*((uint64_t *)&val); 86 86 } 87 87 88 - /* rotation property bits */ 88 + /* 89 + * Rotation property bits. DRM_ROTATE_<degrees> rotates the image by the 90 + * specified amount in degrees in counter clockwise direction. DRM_REFLECT_X and 91 + * DRM_REFLECT_Y reflects the image along the specified axis prior to rotation 92 + */ 89 93 #define DRM_ROTATE_MASK 0x0f 90 94 #define DRM_ROTATE_0 0 91 95 #define DRM_ROTATE_90 1 ··· 996 992 struct drm_mode_config_funcs { 997 993 struct drm_framebuffer *(*fb_create)(struct drm_device *dev, 998 994 struct drm_file *file_priv, 999 - struct drm_mode_fb_cmd2 *mode_cmd); 995 + const struct drm_mode_fb_cmd2 *mode_cmd); 1000 996 void (*output_poll_changed)(struct drm_device *dev); 1001 997 1002 998 int (*atomic_check)(struct drm_device *dev, ··· 1170 1166 */ 1171 1167 #define drm_for_each_plane_mask(plane, dev, plane_mask) \ 1172 1168 list_for_each_entry((plane), &(dev)->mode_config.plane_list, head) \ 1173 - if ((plane_mask) & (1 << drm_plane_index(plane))) 1169 + for_each_if ((plane_mask) & (1 << drm_plane_index(plane))) 1174 1170 1175 1171 1176 1172 #define obj_to_crtc(x) container_of(x, struct drm_crtc, base) ··· 1547 1543 /* Plane list iterator for legacy (overlay only) planes. */ 1548 1544 #define drm_for_each_legacy_plane(plane, dev) \ 1549 1545 list_for_each_entry(plane, &(dev)->mode_config.plane_list, head) \ 1550 - if (plane->type == DRM_PLANE_TYPE_OVERLAY) 1546 + for_each_if (plane->type == DRM_PLANE_TYPE_OVERLAY) 1551 1547 1552 1548 #define drm_for_each_plane(plane, dev) \ 1553 1549 list_for_each_entry(plane, &(dev)->mode_config.plane_list, head)
+1 -1
include/drm/drm_crtc_helper.h
··· 197 197 extern void drm_helper_move_panel_connectors_to_head(struct drm_device *); 198 198 199 199 extern void drm_helper_mode_fill_fb_struct(struct drm_framebuffer *fb, 200 - struct drm_mode_fb_cmd2 *mode_cmd); 200 + const struct drm_mode_fb_cmd2 *mode_cmd); 201 201 202 202 static inline void drm_crtc_helper_add(struct drm_crtc *crtc, 203 203 const struct drm_crtc_helper_funcs *funcs)
+36
include/drm/drm_dp_helper.h
··· 455 455 # define DP_EDP_14 0x03 456 456 457 457 #define DP_EDP_GENERAL_CAP_1 0x701 458 + # define DP_EDP_TCON_BACKLIGHT_ADJUSTMENT_CAP (1 << 0) 459 + # define DP_EDP_BACKLIGHT_PIN_ENABLE_CAP (1 << 1) 460 + # define DP_EDP_BACKLIGHT_AUX_ENABLE_CAP (1 << 2) 461 + # define DP_EDP_PANEL_SELF_TEST_PIN_ENABLE_CAP (1 << 3) 462 + # define DP_EDP_PANEL_SELF_TEST_AUX_ENABLE_CAP (1 << 4) 463 + # define DP_EDP_FRC_ENABLE_CAP (1 << 5) 464 + # define DP_EDP_COLOR_ENGINE_CAP (1 << 6) 465 + # define DP_EDP_SET_POWER_CAP (1 << 7) 458 466 459 467 #define DP_EDP_BACKLIGHT_ADJUSTMENT_CAP 0x702 468 + # define DP_EDP_BACKLIGHT_BRIGHTNESS_PWM_PIN_CAP (1 << 0) 469 + # define DP_EDP_BACKLIGHT_BRIGHTNESS_AUX_SET_CAP (1 << 1) 470 + # define DP_EDP_BACKLIGHT_BRIGHTNESS_BYTE_COUNT (1 << 2) 471 + # define DP_EDP_BACKLIGHT_AUX_PWM_PRODUCT_CAP (1 << 3) 472 + # define DP_EDP_BACKLIGHT_FREQ_PWM_PIN_PASSTHRU_CAP (1 << 4) 473 + # define DP_EDP_BACKLIGHT_FREQ_AUX_SET_CAP (1 << 5) 474 + # define DP_EDP_DYNAMIC_BACKLIGHT_CAP (1 << 6) 475 + # define DP_EDP_VBLANK_BACKLIGHT_UPDATE_CAP (1 << 7) 460 476 461 477 #define DP_EDP_GENERAL_CAP_2 0x703 478 + # define DP_EDP_OVERDRIVE_ENGINE_ENABLED (1 << 0) 462 479 463 480 #define DP_EDP_GENERAL_CAP_3 0x704 /* eDP 1.4 */ 481 + # define DP_EDP_X_REGION_CAP_MASK (0xf << 0) 482 + # define DP_EDP_X_REGION_CAP_SHIFT 0 483 + # define DP_EDP_Y_REGION_CAP_MASK (0xf << 4) 484 + # define DP_EDP_Y_REGION_CAP_SHIFT 4 464 485 465 486 #define DP_EDP_DISPLAY_CONTROL_REGISTER 0x720 487 + # define DP_EDP_BACKLIGHT_ENABLE (1 << 0) 488 + # define DP_EDP_BLACK_VIDEO_ENABLE (1 << 1) 489 + # define DP_EDP_FRC_ENABLE (1 << 2) 490 + # define DP_EDP_COLOR_ENGINE_ENABLE (1 << 3) 491 + # define DP_EDP_VBLANK_BACKLIGHT_UPDATE_ENABLE (1 << 7) 466 492 467 493 #define DP_EDP_BACKLIGHT_MODE_SET_REGISTER 0x721 494 + # define DP_EDP_BACKLIGHT_CONTROL_MODE_MASK (3 << 0) 495 + # define DP_EDP_BACKLIGHT_CONTROL_MODE_PWM (0 << 0) 496 + # define DP_EDP_BACKLIGHT_CONTROL_MODE_PRESET (1 << 0) 497 + # define DP_EDP_BACKLIGHT_CONTROL_MODE_DPCD (2 << 0) 498 + # define DP_EDP_BACKLIGHT_CONTROL_MODE_PRODUCT (3 << 0) 499 + # define DP_EDP_BACKLIGHT_FREQ_PWM_PIN_PASSTHRU_ENABLE (1 << 2) 500 + # define DP_EDP_BACKLIGHT_FREQ_AUX_SET_ENABLE (1 << 3) 501 + # define DP_EDP_DYNAMIC_BACKLIGHT_ENABLE (1 << 4) 502 + # define DP_EDP_REGIONAL_BACKLIGHT_ENABLE (1 << 5) 503 + # define DP_EDP_UPDATE_REGION_BRIGHTNESS (1 << 6) /* eDP 1.4 */ 468 504 469 505 #define DP_EDP_BACKLIGHT_BRIGHTNESS_MSB 0x722 470 506 #define DP_EDP_BACKLIGHT_BRIGHTNESS_LSB 0x723
+1 -1
include/drm/drm_fb_cma_helper.h
··· 18 18 void drm_fbdev_cma_hotplug_event(struct drm_fbdev_cma *fbdev_cma); 19 19 20 20 struct drm_framebuffer *drm_fb_cma_create(struct drm_device *dev, 21 - struct drm_file *file_priv, struct drm_mode_fb_cmd2 *mode_cmd); 21 + struct drm_file *file_priv, const struct drm_mode_fb_cmd2 *mode_cmd); 22 22 23 23 struct drm_gem_cma_object *drm_fb_cma_get_gem_obj(struct drm_framebuffer *fb, 24 24 unsigned int plane);
+92 -14
include/drm/drm_gem.h
··· 35 35 */ 36 36 37 37 /** 38 - * This structure defines the drm_mm memory object, which will be used by the 39 - * DRM for its buffer objects. 38 + * struct drm_gem_object - GEM buffer object 39 + * 40 + * This structure defines the generic parts for GEM buffer objects, which are 41 + * mostly around handling mmap and userspace handles. 42 + * 43 + * Buffer objects are often abbreviated to BO. 40 44 */ 41 45 struct drm_gem_object { 42 - /** Reference count of this object */ 46 + /** 47 + * @refcount: 48 + * 49 + * Reference count of this object 50 + * 51 + * Please use drm_gem_object_reference() to acquire and 52 + * drm_gem_object_unreference() or drm_gem_object_unreference_unlocked() 53 + * to release a reference to a GEM buffer object. 54 + */ 43 55 struct kref refcount; 44 56 45 57 /** 46 - * handle_count - gem file_priv handle count of this object 58 + * @handle_count: 59 + * 60 + * This is the GEM file_priv handle count of this object. 47 61 * 48 62 * Each handle also holds a reference. Note that when the handle_count 49 63 * drops to 0 any global names (e.g. the id in the flink namespace) will 50 64 * be cleared. 51 65 * 52 66 * Protected by dev->object_name_lock. 53 - * */ 67 + */ 54 68 unsigned handle_count; 55 69 56 - /** Related drm device */ 70 + /** 71 + * @dev: DRM dev this object belongs to. 72 + */ 57 73 struct drm_device *dev; 58 74 59 - /** File representing the shmem storage */ 75 + /** 76 + * @filp: 77 + * 78 + * SHMEM file node used as backing storage for swappable buffer objects. 79 + * GEM also supports driver private objects with driver-specific backing 80 + * storage (contiguous CMA memory, special reserved blocks). In this 81 + * case @filp is NULL. 82 + */ 60 83 struct file *filp; 61 84 62 - /* Mapping info for this object */ 85 + /** 86 + * @vma_node: 87 + * 88 + * Mapping info for this object to support mmap. Drivers are supposed to 89 + * allocate the mmap offset using drm_gem_create_mmap_offset(). The 90 + * offset itself can be retrieved using drm_vma_node_offset_addr(). 91 + * 92 + * Memory mapping itself is handled by drm_gem_mmap(), which also checks 93 + * that userspace is allowed to access the object. 94 + */ 63 95 struct drm_vma_offset_node vma_node; 64 96 65 97 /** 98 + * @size: 99 + * 66 100 * Size of the object, in bytes. Immutable over the object's 67 101 * lifetime. 68 102 */ 69 103 size_t size; 70 104 71 105 /** 106 + * @name: 107 + * 72 108 * Global name for this object, starts at 1. 0 means unnamed. 73 - * Access is covered by the object_name_lock in the related drm_device 109 + * Access is covered by dev->object_name_lock. This is used by the GEM_FLINK 110 + * and GEM_OPEN ioctls. 74 111 */ 75 112 int name; 76 113 77 114 /** 78 - * Memory domains. These monitor which caches contain read/write data 115 + * @read_domains: 116 + * 117 + * Read memory domains. These monitor which caches contain read/write data 79 118 * related to the object. When transitioning from one set of domains 80 119 * to another, the driver is called to ensure that caches are suitably 81 - * flushed and invalidated 120 + * flushed and invalidated. 82 121 */ 83 122 uint32_t read_domains; 123 + 124 + /** 125 + * @write_domain: Corresponding unique write memory domain. 126 + */ 84 127 uint32_t write_domain; 85 128 86 129 /** 130 + * @pending_read_domains: 131 + * 87 132 * While validating an exec operation, the 88 133 * new read/write domain values are computed here. 89 134 * They will be transferred to the above values 90 135 * at the point that any cache flushing occurs 91 136 */ 92 137 uint32_t pending_read_domains; 138 + 139 + /** 140 + * @pending_write_domain: Write domain similar to @pending_read_domains. 141 + */ 93 142 uint32_t pending_write_domain; 94 143 95 144 /** 96 - * dma_buf - dma buf associated with this GEM object 145 + * @dma_buf: 146 + * 147 + * dma-buf associated with this GEM object.
97 148 * 98 149 * Pointer to the dma-buf associated with this gem object (either 99 150 * through importing or exporting). We break the resulting reference 100 151 * loop when the last gem handle for this object is released. 101 152 * 102 - * Protected by obj->object_name_lock 153 + * Protected by obj->object_name_lock. 103 154 */ 104 155 struct dma_buf *dma_buf; 105 156 106 157 /** 107 - * import_attach - dma buf attachment backing this object 158 + * @import_attach: 159 + * 160 + * dma-buf attachment backing this object. 108 161 * 109 162 * Any foreign dma_buf imported as a gem object has this set to the 110 163 * attachment point for the device. This is invariant over the lifetime ··· 186 133 struct vm_area_struct *vma); 187 134 int drm_gem_mmap(struct file *filp, struct vm_area_struct *vma); 188 135 136 + /** 137 + * drm_gem_object_reference - acquire a GEM BO reference 138 + * @obj: GEM buffer object 139 + * 140 + * This acquires additional reference to @obj. It is illegal to call this 141 + * without already holding a reference. No locks required. 142 + */ 189 143 static inline void 190 144 drm_gem_object_reference(struct drm_gem_object *obj) 191 145 { 192 146 kref_get(&obj->refcount); 193 147 } 194 148 149 + /** 150 + * drm_gem_object_unreference - release a GEM BO reference 151 + * @obj: GEM buffer object 152 + * 153 + * This releases a reference to @obj. Callers must hold the dev->struct_mutex 154 + * lock when calling this function, even when the driver doesn't use 155 + * dev->struct_mutex for anything. 156 + * 157 + * For drivers not encumbered with legacy locking use 158 + * drm_gem_object_unreference_unlocked() instead. 159 + */ 195 160 static inline void 196 161 drm_gem_object_unreference(struct drm_gem_object *obj) 197 162 { ··· 220 149 } 221 150 } 222 151 152 + /** 153 + * drm_gem_object_unreference_unlocked - release a GEM BO reference 154 + * @obj: GEM buffer object 155 + * 156 + * This releases a reference to @obj. 
157 + * Callers must not hold the dev->struct_mutex lock when calling this function. 158 + */ 223 159 static inline void 224 160 drm_gem_object_unreference_unlocked(struct drm_gem_object *obj) 225 161 {
+10 -16
include/drm/drm_mm.h
··· 148 148 149 149 static inline u64 __drm_mm_hole_node_end(struct drm_mm_node *hole_node) 150 150 { 151 - return list_entry(hole_node->node_list.next, 152 - struct drm_mm_node, node_list)->start; 151 + return list_next_entry(hole_node, node_list)->start; 153 152 } 154 153 155 154 /** ··· 179 180 &(mm)->head_node.node_list, \ 180 181 node_list) 181 182 183 + #define __drm_mm_for_each_hole(entry, mm, hole_start, hole_end, backwards) \ 184 + for (entry = list_entry((backwards) ? (mm)->hole_stack.prev : (mm)->hole_stack.next, struct drm_mm_node, hole_stack); \ 185 + &entry->hole_stack != &(mm)->hole_stack ? \ 186 + hole_start = drm_mm_hole_node_start(entry), \ 187 + hole_end = drm_mm_hole_node_end(entry), \ 188 + 1 : 0; \ 189 + entry = list_entry((backwards) ? entry->hole_stack.prev : entry->hole_stack.next, struct drm_mm_node, hole_stack)) 190 + 182 191 /** 183 192 * drm_mm_for_each_hole - iterator to walk over all holes 184 193 * @entry: drm_mm_node used internally to track progress ··· 207 200 * going backwards. 208 201 */ 209 202 #define drm_mm_for_each_hole(entry, mm, hole_start, hole_end) \ 210 - for (entry = list_entry((mm)->hole_stack.next, struct drm_mm_node, hole_stack); \ 211 - &entry->hole_stack != &(mm)->hole_stack ? \ 212 - hole_start = drm_mm_hole_node_start(entry), \ 213 - hole_end = drm_mm_hole_node_end(entry), \ 214 - 1 : 0; \ 215 - entry = list_entry(entry->hole_stack.next, struct drm_mm_node, hole_stack)) 216 - 217 - #define __drm_mm_for_each_hole(entry, mm, hole_start, hole_end, backwards) \ 218 - for (entry = list_entry((backwards) ? (mm)->hole_stack.prev : (mm)->hole_stack.next, struct drm_mm_node, hole_stack); \ 219 - &entry->hole_stack != &(mm)->hole_stack ? \ 220 - hole_start = drm_mm_hole_node_start(entry), \ 221 - hole_end = drm_mm_hole_node_end(entry), \ 222 - 1 : 0; \ 223 - entry = list_entry((backwards) ? 
entry->hole_stack.prev : entry->hole_stack.next, struct drm_mm_node, hole_stack)) 203 + __drm_mm_for_each_hole(entry, mm, hole_start, hole_end, 0) 224 204 225 205 /* 226 206 * Basic range manager support (drm_mm.c)
+2 -1
include/drm/drm_rect.h
··· 162 162 int drm_rect_calc_vscale_relaxed(struct drm_rect *src, 163 163 struct drm_rect *dst, 164 164 int min_vscale, int max_vscale); 165 - void drm_rect_debug_print(const struct drm_rect *r, bool fixed_point); 165 + void drm_rect_debug_print(const char *prefix, 166 + const struct drm_rect *r, bool fixed_point); 166 167 void drm_rect_rotate(struct drm_rect *r, 167 168 int width, int height, 168 169 unsigned int rotation);