Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'topic/drm-misc-2015-10-19' of git://anongit.freedesktop.org/drm-intel into drm-next

More drm-misc for 4.4.
- fb refcount fix in atomic fbdev
- various locking reworks to reduce drm_global_mutex and dev->struct_mutex
- rename docbook to gpu.tmpl and include vga_switcheroo stuff, plus more
vga_switcheroo (Lukas Wunner)
- viewport check fixes for atomic drivers from Ville
- DRM_DEBUG_VBL from Ville
- non-contentious header fixes from Mikko Rapeli
- small things all over

* tag 'topic/drm-misc-2015-10-19' of git://anongit.freedesktop.org/drm-intel: (31 commits)
drm/fb-helper: Fix fb refcounting in pan_display_atomic
drm/fb-helper: Set plane rotation directly
drm: fix mutex leak in drm_dp_get_mst_branch_device
drm: Check plane src coordinates correctly during page flip for atomic drivers
drm: Check crtc viewport correctly with rotated primary plane on atomic drivers
drm: Refactor plane src coordinate checks
drm: Swap w/h when converting the mode to src coordinates for a rotated primary plane
drm: Don't leak fb when plane crtc coordinates are bad
ALSA: hda - Spell vga_switcheroo consistently
drm/gem: Use kref_get_unless_zero for the weak mmap references
drm/vgem: Drop vgem_drm_gem_mmap
drm: Fix return value of drm_framebuffer_init()
drm/gem: Use container_of in drm_gem_object_free
drm/gem: Check locking in drm_gem_object_unreference
drm/gem: Drop struct_mutex requirement from drm_gem_mmap_obj
drm/i810_drm.h: include drm/drm.h
r128_drm.h: include drm/drm.h
savage_drm.h: include <drm/drm.h>
gpu/doc: Convert to markdown harder
gpu/doc: Add vga_switcheroo documentation
...

+444 -402
+1 -1
Documentation/DocBook/Makefile
··· 14 14 genericirq.xml s390-drivers.xml uio-howto.xml scsi.xml \ 15 15 80211.xml debugobjects.xml sh.xml regulator.xml \ 16 16 alsa-driver-api.xml writing-an-alsa-driver.xml \ 17 - tracepoint.xml drm.xml media_api.xml w1.xml \ 17 + tracepoint.xml gpu.xml media_api.xml w1.xml \ 18 18 writing_musb_glue_layer.xml crypto-API.xml iio.xml 19 19 20 20 include Documentation/DocBook/media/Makefile
+82 -12
Documentation/DocBook/drm.tmpl Documentation/DocBook/gpu.tmpl
··· 2 2 <!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.1.2//EN" 3 3 "http://www.oasis-open.org/docbook/xml/4.1.2/docbookx.dtd" []> 4 4 5 - <book id="drmDevelopersGuide"> 5 + <book id="gpuDevelopersGuide"> 6 6 <bookinfo> 7 - <title>Linux DRM Developer's Guide</title> 7 + <title>Linux GPU Driver Developer's Guide</title> 8 8 9 9 <authorgroup> 10 10 <author> ··· 40 40 </address> 41 41 </affiliation> 42 42 </author> 43 + <author> 44 + <firstname>Lukas</firstname> 45 + <surname>Wunner</surname> 46 + <contrib>vga_switcheroo documentation</contrib> 47 + <affiliation> 48 + <address> 49 + <email>lukas@wunner.de</email> 50 + </address> 51 + </affiliation> 52 + </author> 43 53 </authorgroup> 44 54 45 55 <copyright> ··· 60 50 <copyright> 61 51 <year>2012</year> 62 52 <holder>Laurent Pinchart</holder> 53 + </copyright> 54 + <copyright> 55 + <year>2015</year> 56 + <holder>Lukas Wunner</holder> 63 57 </copyright> 64 58 65 59 <legalnotice> ··· 83 69 <revremark>Added extensive documentation about driver internals. 84 70 </revremark> 85 71 </revision> 72 + <revision> 73 + <revnumber>1.1</revnumber> 74 + <date>2015-10-11</date> 75 + <authorinitials>LW</authorinitials> 76 + <revremark>Added vga_switcheroo documentation. 77 + </revremark> 78 + </revision> 86 79 </revhistory> 87 80 </bookinfo> 88 81 ··· 99 78 <title>DRM Core</title> 100 79 <partintro> 101 80 <para> 102 - This first part of the DRM Developer's Guide documents core DRM code, 103 - helper libraries for writing drivers and generic userspace interfaces 104 - exposed by DRM drivers. 81 + This first part of the GPU Driver Developer's Guide documents core DRM 82 + code, helper libraries for writing drivers and generic userspace 83 + interfaces exposed by DRM drivers. 105 84 </para> 106 85 </partintro> 107 86 ··· 3604 3583 plane properties to default value, so that a subsequent open of the 3605 3584 device will not inherit state from the previous user. 
It can also be 3606 3585 used to execute delayed power switching state changes, e.g. in 3607 - conjunction with the vga_switcheroo infrastructure. Beyond that KMS 3608 - drivers should not do any further cleanup. Only legacy UMS drivers might 3609 - need to clean up device state so that the vga console or an independent 3610 - fbdev driver could take over. 3586 + conjunction with the vga_switcheroo infrastructure (see 3587 + <xref linkend="vga_switcheroo"/>). Beyond that KMS drivers should not 3588 + do any further cleanup. Only legacy UMS drivers might need to clean up 3589 + device state so that the vga console or an independent fbdev driver 3590 + could take over. 3611 3591 </para> 3612 3592 </sect2> 3613 3593 <sect2> ··· 3706 3684 </para></listitem> 3707 3685 <listitem><para> 3708 3686 DRM_UNLOCKED - The ioctl handler will be called without locking 3709 - the DRM global mutex 3687 + the DRM global mutex. This is the enforced default for kms drivers 3688 + (i.e. using the DRIVER_MODESET flag) and hence shouldn't be used 3689 + any more for new drivers. 3710 3690 </para></listitem> 3711 3691 </itemizedlist> 3712 3692 </para> ··· 3911 3887 3912 3888 <partintro> 3913 3889 <para> 3914 - This second part of the DRM Developer's Guide documents driver code, 3915 - implementation details and also all the driver-specific userspace 3890 + This second part of the GPU Driver Developer's Guide documents driver 3891 + code, implementation details and also all the driver-specific userspace 3916 3892 interfaces. Especially since all hardware-acceleration interfaces to 3917 3893 userspace are driver specific for efficiency and other reasons these 3918 3894 interfaces can be rather substantial. 
Hence every driver has its own ··· 4237 4213 </chapter> 4238 4214 !Cdrivers/gpu/drm/i915/i915_irq.c 4239 4215 </part> 4216 + 4217 + <part id="vga_switcheroo"> 4218 + <title>vga_switcheroo</title> 4219 + <partintro> 4220 + !Pdrivers/gpu/vga/vga_switcheroo.c Overview 4221 + </partintro> 4222 + 4223 + <chapter id="modes_of_use"> 4224 + <title>Modes of Use</title> 4225 + <sect1> 4226 + <title>Manual switching and manual power control</title> 4227 + !Pdrivers/gpu/vga/vga_switcheroo.c Manual switching and manual power control 4228 + </sect1> 4229 + <sect1> 4230 + <title>Driver power control</title> 4231 + !Pdrivers/gpu/vga/vga_switcheroo.c Driver power control 4232 + </sect1> 4233 + </chapter> 4234 + 4235 + <chapter id="pubfunctions"> 4236 + <title>Public functions</title> 4237 + !Edrivers/gpu/vga/vga_switcheroo.c 4238 + </chapter> 4239 + 4240 + <chapter id="pubstructures"> 4241 + <title>Public structures</title> 4242 + !Finclude/linux/vga_switcheroo.h vga_switcheroo_handler 4243 + !Finclude/linux/vga_switcheroo.h vga_switcheroo_client_ops 4244 + </chapter> 4245 + 4246 + <chapter id="pubconstants"> 4247 + <title>Public constants</title> 4248 + !Finclude/linux/vga_switcheroo.h vga_switcheroo_client_id 4249 + !Finclude/linux/vga_switcheroo.h vga_switcheroo_state 4250 + </chapter> 4251 + 4252 + <chapter id="privstructures"> 4253 + <title>Private structures</title> 4254 + !Fdrivers/gpu/vga/vga_switcheroo.c vgasr_priv 4255 + !Fdrivers/gpu/vga/vga_switcheroo.c vga_switcheroo_client 4256 + </chapter> 4257 + 4258 + !Cdrivers/gpu/vga/vga_switcheroo.c 4259 + !Cinclude/linux/vga_switcheroo.h 4260 + </part> 4261 + 4240 4262 </book>
+12 -12
drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
··· 689 689 } 690 690 691 691 const struct drm_ioctl_desc amdgpu_ioctls_kms[] = { 692 - DRM_IOCTL_DEF_DRV(AMDGPU_GEM_CREATE, amdgpu_gem_create_ioctl, DRM_AUTH|DRM_UNLOCKED|DRM_RENDER_ALLOW), 693 - DRM_IOCTL_DEF_DRV(AMDGPU_CTX, amdgpu_ctx_ioctl, DRM_AUTH|DRM_UNLOCKED|DRM_RENDER_ALLOW), 694 - DRM_IOCTL_DEF_DRV(AMDGPU_BO_LIST, amdgpu_bo_list_ioctl, DRM_AUTH|DRM_UNLOCKED|DRM_RENDER_ALLOW), 692 + DRM_IOCTL_DEF_DRV(AMDGPU_GEM_CREATE, amdgpu_gem_create_ioctl, DRM_AUTH|DRM_RENDER_ALLOW), 693 + DRM_IOCTL_DEF_DRV(AMDGPU_CTX, amdgpu_ctx_ioctl, DRM_AUTH|DRM_RENDER_ALLOW), 694 + DRM_IOCTL_DEF_DRV(AMDGPU_BO_LIST, amdgpu_bo_list_ioctl, DRM_AUTH|DRM_RENDER_ALLOW), 695 695 /* KMS */ 696 - DRM_IOCTL_DEF_DRV(AMDGPU_GEM_MMAP, amdgpu_gem_mmap_ioctl, DRM_AUTH|DRM_UNLOCKED|DRM_RENDER_ALLOW), 697 - DRM_IOCTL_DEF_DRV(AMDGPU_GEM_WAIT_IDLE, amdgpu_gem_wait_idle_ioctl, DRM_AUTH|DRM_UNLOCKED|DRM_RENDER_ALLOW), 698 - DRM_IOCTL_DEF_DRV(AMDGPU_CS, amdgpu_cs_ioctl, DRM_AUTH|DRM_UNLOCKED|DRM_RENDER_ALLOW), 699 - DRM_IOCTL_DEF_DRV(AMDGPU_INFO, amdgpu_info_ioctl, DRM_AUTH|DRM_UNLOCKED|DRM_RENDER_ALLOW), 700 - DRM_IOCTL_DEF_DRV(AMDGPU_WAIT_CS, amdgpu_cs_wait_ioctl, DRM_AUTH|DRM_UNLOCKED|DRM_RENDER_ALLOW), 701 - DRM_IOCTL_DEF_DRV(AMDGPU_GEM_METADATA, amdgpu_gem_metadata_ioctl, DRM_AUTH|DRM_UNLOCKED|DRM_RENDER_ALLOW), 702 - DRM_IOCTL_DEF_DRV(AMDGPU_GEM_VA, amdgpu_gem_va_ioctl, DRM_AUTH|DRM_UNLOCKED|DRM_RENDER_ALLOW), 703 - DRM_IOCTL_DEF_DRV(AMDGPU_GEM_OP, amdgpu_gem_op_ioctl, DRM_AUTH|DRM_UNLOCKED|DRM_RENDER_ALLOW), 704 - DRM_IOCTL_DEF_DRV(AMDGPU_GEM_USERPTR, amdgpu_gem_userptr_ioctl, DRM_AUTH|DRM_UNLOCKED|DRM_RENDER_ALLOW), 696 + DRM_IOCTL_DEF_DRV(AMDGPU_GEM_MMAP, amdgpu_gem_mmap_ioctl, DRM_AUTH|DRM_RENDER_ALLOW), 697 + DRM_IOCTL_DEF_DRV(AMDGPU_GEM_WAIT_IDLE, amdgpu_gem_wait_idle_ioctl, DRM_AUTH|DRM_RENDER_ALLOW), 698 + DRM_IOCTL_DEF_DRV(AMDGPU_CS, amdgpu_cs_ioctl, DRM_AUTH|DRM_RENDER_ALLOW), 699 + DRM_IOCTL_DEF_DRV(AMDGPU_INFO, amdgpu_info_ioctl, DRM_AUTH|DRM_RENDER_ALLOW), 700 + 
DRM_IOCTL_DEF_DRV(AMDGPU_WAIT_CS, amdgpu_cs_wait_ioctl, DRM_AUTH|DRM_RENDER_ALLOW), 701 + DRM_IOCTL_DEF_DRV(AMDGPU_GEM_METADATA, amdgpu_gem_metadata_ioctl, DRM_AUTH|DRM_RENDER_ALLOW), 702 + DRM_IOCTL_DEF_DRV(AMDGPU_GEM_VA, amdgpu_gem_va_ioctl, DRM_AUTH|DRM_RENDER_ALLOW), 703 + DRM_IOCTL_DEF_DRV(AMDGPU_GEM_OP, amdgpu_gem_op_ioctl, DRM_AUTH|DRM_RENDER_ALLOW), 704 + DRM_IOCTL_DEF_DRV(AMDGPU_GEM_USERPTR, amdgpu_gem_userptr_ioctl, DRM_AUTH|DRM_RENDER_ALLOW), 705 705 }; 706 706 int amdgpu_max_kms_ioctl = ARRAY_SIZE(amdgpu_ioctls_kms);
+3 -6
drivers/gpu/drm/armada/armada_drv.c
··· 163 163 } 164 164 165 165 static struct drm_ioctl_desc armada_ioctls[] = { 166 - DRM_IOCTL_DEF_DRV(ARMADA_GEM_CREATE, armada_gem_create_ioctl, 167 - DRM_UNLOCKED), 168 - DRM_IOCTL_DEF_DRV(ARMADA_GEM_MMAP, armada_gem_mmap_ioctl, 169 - DRM_UNLOCKED), 170 - DRM_IOCTL_DEF_DRV(ARMADA_GEM_PWRITE, armada_gem_pwrite_ioctl, 171 - DRM_UNLOCKED), 166 + DRM_IOCTL_DEF_DRV(ARMADA_GEM_CREATE, armada_gem_create_ioctl, 0), 167 + DRM_IOCTL_DEF_DRV(ARMADA_GEM_MMAP, armada_gem_mmap_ioctl, 0), 168 + DRM_IOCTL_DEF_DRV(ARMADA_GEM_PWRITE, armada_gem_pwrite_ioctl, 0), 172 169 }; 173 170 174 171 static void armada_drm_lastclose(struct drm_device *dev)
+7 -2
drivers/gpu/drm/drm_atomic_helper.c
··· 1790 1790 primary_state->crtc_w = set->mode->hdisplay; 1791 1791 primary_state->src_x = set->x << 16; 1792 1792 primary_state->src_y = set->y << 16; 1793 - primary_state->src_h = set->mode->vdisplay << 16; 1794 - primary_state->src_w = set->mode->hdisplay << 16; 1793 + if (primary_state->rotation & (BIT(DRM_ROTATE_90) | BIT(DRM_ROTATE_270))) { 1794 + primary_state->src_h = set->mode->hdisplay << 16; 1795 + primary_state->src_w = set->mode->vdisplay << 16; 1796 + } else { 1797 + primary_state->src_h = set->mode->vdisplay << 16; 1798 + primary_state->src_w = set->mode->hdisplay << 16; 1799 + } 1795 1800 1796 1801 commit: 1797 1802 ret = update_output_state(state, set);
+46 -37
drivers/gpu/drm/drm_crtc.c
··· 306 306 * reference counted modeset objects like framebuffers. 307 307 * 308 308 * Returns: 309 - * New unique (relative to other objects in @dev) integer identifier for the 310 - * object. 309 + * Zero on success, error code on failure. 311 310 */ 312 311 int drm_mode_object_get(struct drm_device *dev, 313 312 struct drm_mode_object *obj, uint32_t obj_type) ··· 422 423 out: 423 424 mutex_unlock(&dev->mode_config.fb_lock); 424 425 425 - return 0; 426 + return ret; 426 427 } 427 428 EXPORT_SYMBOL(drm_framebuffer_init); 428 429 ··· 676 677 677 678 crtc->dev = dev; 678 679 crtc->funcs = funcs; 679 - crtc->invert_dimensions = false; 680 680 681 681 drm_modeset_lock_init(&crtc->mutex); 682 682 ret = drm_mode_object_get(dev, &crtc->base, DRM_MODE_OBJECT_CRTC); ··· 2284 2286 return -EINVAL; 2285 2287 } 2286 2288 2289 + static int check_src_coords(uint32_t src_x, uint32_t src_y, 2290 + uint32_t src_w, uint32_t src_h, 2291 + const struct drm_framebuffer *fb) 2292 + { 2293 + unsigned int fb_width, fb_height; 2294 + 2295 + fb_width = fb->width << 16; 2296 + fb_height = fb->height << 16; 2297 + 2298 + /* Make sure source coordinates are inside the fb. 
*/ 2299 + if (src_w > fb_width || 2300 + src_x > fb_width - src_w || 2301 + src_h > fb_height || 2302 + src_y > fb_height - src_h) { 2303 + DRM_DEBUG_KMS("Invalid source coordinates " 2304 + "%u.%06ux%u.%06u+%u.%06u+%u.%06u\n", 2305 + src_w >> 16, ((src_w & 0xffff) * 15625) >> 10, 2306 + src_h >> 16, ((src_h & 0xffff) * 15625) >> 10, 2307 + src_x >> 16, ((src_x & 0xffff) * 15625) >> 10, 2308 + src_y >> 16, ((src_y & 0xffff) * 15625) >> 10); 2309 + return -ENOSPC; 2310 + } 2311 + 2312 + return 0; 2313 + } 2314 + 2287 2315 /* 2288 2316 * setplane_internal - setplane handler for internal callers 2289 2317 * ··· 2329 2305 uint32_t src_w, uint32_t src_h) 2330 2306 { 2331 2307 int ret = 0; 2332 - unsigned int fb_width, fb_height; 2333 2308 2334 2309 /* No fb means shut it down */ 2335 2310 if (!fb) { ··· 2365 2342 crtc_y > INT_MAX - (int32_t) crtc_h) { 2366 2343 DRM_DEBUG_KMS("Invalid CRTC coordinates %ux%u+%d+%d\n", 2367 2344 crtc_w, crtc_h, crtc_x, crtc_y); 2368 - return -ERANGE; 2369 - } 2370 - 2371 - 2372 - fb_width = fb->width << 16; 2373 - fb_height = fb->height << 16; 2374 - 2375 - /* Make sure source coordinates are inside the fb. 
*/ 2376 - if (src_w > fb_width || 2377 - src_x > fb_width - src_w || 2378 - src_h > fb_height || 2379 - src_y > fb_height - src_h) { 2380 - DRM_DEBUG_KMS("Invalid source coordinates " 2381 - "%u.%06ux%u.%06u+%u.%06u+%u.%06u\n", 2382 - src_w >> 16, ((src_w & 0xffff) * 15625) >> 10, 2383 - src_h >> 16, ((src_h & 0xffff) * 15625) >> 10, 2384 - src_x >> 16, ((src_x & 0xffff) * 15625) >> 10, 2385 - src_y >> 16, ((src_y & 0xffff) * 15625) >> 10); 2386 - ret = -ENOSPC; 2345 + ret = -ERANGE; 2387 2346 goto out; 2388 2347 } 2348 + 2349 + ret = check_src_coords(src_x, src_y, src_w, src_h, fb); 2350 + if (ret) 2351 + goto out; 2389 2352 2390 2353 plane->old_fb = plane->fb; 2391 2354 ret = plane->funcs->update_plane(plane, crtc, fb, ··· 2562 2553 2563 2554 drm_crtc_get_hv_timing(mode, &hdisplay, &vdisplay); 2564 2555 2565 - if (crtc->invert_dimensions) 2556 + if (crtc->state && 2557 + crtc->primary->state->rotation & (BIT(DRM_ROTATE_90) | 2558 + BIT(DRM_ROTATE_270))) 2566 2559 swap(hdisplay, vdisplay); 2567 2560 2568 - if (hdisplay > fb->width || 2569 - vdisplay > fb->height || 2570 - x > fb->width - hdisplay || 2571 - y > fb->height - vdisplay) { 2572 - DRM_DEBUG_KMS("Invalid fb size %ux%u for CRTC viewport %ux%u+%d+%d%s.\n", 2573 - fb->width, fb->height, hdisplay, vdisplay, x, y, 2574 - crtc->invert_dimensions ? 
" (inverted)" : ""); 2575 - return -ENOSPC; 2576 - } 2577 - 2578 - return 0; 2561 + return check_src_coords(x << 16, y << 16, 2562 + hdisplay << 16, vdisplay << 16, fb); 2579 2563 } 2580 2564 EXPORT_SYMBOL(drm_crtc_check_viewport); 2581 2565 ··· 5183 5181 goto out; 5184 5182 } 5185 5183 5186 - ret = drm_crtc_check_viewport(crtc, crtc->x, crtc->y, &crtc->mode, fb); 5184 + if (crtc->state) { 5185 + const struct drm_plane_state *state = crtc->primary->state; 5186 + 5187 + ret = check_src_coords(state->src_x, state->src_y, 5188 + state->src_w, state->src_h, fb); 5189 + } else { 5190 + ret = drm_crtc_check_viewport(crtc, crtc->x, crtc->y, &crtc->mode, fb); 5191 + } 5187 5192 if (ret) 5188 5193 goto out; 5189 5194
+4 -3
drivers/gpu/drm/drm_dp_mst_topology.c
··· 1194 1194 1195 1195 list_for_each_entry(port, &mstb->ports, next) { 1196 1196 if (port->port_num == port_num) { 1197 - if (!port->mstb) { 1197 + mstb = port->mstb; 1198 + if (!mstb) { 1198 1199 DRM_ERROR("failed to lookup MSTB with lct %d, rad %02x\n", lct, rad[0]); 1199 - return NULL; 1200 + goto out; 1200 1201 } 1201 1202 1202 - mstb = port->mstb; 1203 1203 break; 1204 1204 } 1205 1205 } 1206 1206 } 1207 1207 kref_get(&mstb->kref); 1208 + out: 1208 1209 mutex_unlock(&mgr->lock); 1209 1210 return mstb; 1210 1211 }
+1 -3
drivers/gpu/drm/drm_drv.c
··· 37 37 #include "drm_legacy.h" 38 38 #include "drm_internal.h" 39 39 40 - unsigned int drm_debug = 0; /* 1 to enable debug output */ 40 + unsigned int drm_debug = 0; /* bitmask of DRM_UT_x */ 41 41 EXPORT_SYMBOL(drm_debug); 42 - 43 - bool drm_atomic = 0; 44 42 45 43 MODULE_AUTHOR(CORE_AUTHOR); 46 44 MODULE_DESCRIPTION(CORE_DESC);
+27 -8
drivers/gpu/drm/drm_fb_helper.c
··· 360 360 goto fail; 361 361 } 362 362 363 - ret = drm_atomic_plane_set_property(plane, plane_state, 364 - dev->mode_config.rotation_property, 365 - BIT(DRM_ROTATE_0)); 366 - if (ret != 0) 367 - goto fail; 363 + plane_state->rotation = BIT(DRM_ROTATE_0); 368 364 369 365 /* disable non-primary: */ 370 366 if (plane->type == DRM_PLANE_TYPE_PRIMARY) ··· 1231 1235 EXPORT_SYMBOL(drm_fb_helper_set_par); 1232 1236 1233 1237 static int pan_display_atomic(struct fb_var_screeninfo *var, 1234 - struct fb_info *info) 1238 + struct fb_info *info) 1235 1239 { 1236 1240 struct drm_fb_helper *fb_helper = info->par; 1237 1241 struct drm_device *dev = fb_helper->dev; ··· 1249 1253 1250 1254 mode_set = &fb_helper->crtc_info[i].mode_set; 1251 1255 1256 + mode_set->crtc->primary->old_fb = mode_set->crtc->primary->fb; 1257 + 1252 1258 mode_set->x = var->xoffset; 1253 1259 mode_set->y = var->yoffset; 1254 1260 ··· 1266 1268 info->var.xoffset = var->xoffset; 1267 1269 info->var.yoffset = var->yoffset; 1268 1270 1269 - return 0; 1270 1271 1271 1272 fail: 1273 + for(i = 0; i < fb_helper->crtc_count; i++) { 1274 + struct drm_mode_set *mode_set; 1275 + struct drm_plane *plane; 1276 + 1277 + mode_set = &fb_helper->crtc_info[i].mode_set; 1278 + plane = mode_set->crtc->primary; 1279 + 1280 + if (ret == 0) { 1281 + struct drm_framebuffer *new_fb = plane->state->fb; 1282 + 1283 + if (new_fb) 1284 + drm_framebuffer_reference(new_fb); 1285 + plane->fb = new_fb; 1286 + plane->crtc = plane->state->crtc; 1287 + 1288 + if (plane->old_fb) 1289 + drm_framebuffer_unreference(plane->old_fb); 1290 + } 1291 + plane->old_fb = NULL; 1292 + } 1293 + 1272 1294 if (ret == -EDEADLK) 1273 1295 goto backoff; 1274 1296 1275 - drm_atomic_state_free(state); 1297 + if (ret != 0) 1298 + drm_atomic_state_free(state); 1276 1299 1277 1300 return ret; 1278 1301
+30 -17
drivers/gpu/drm/drm_gem.c
··· 763 763 void 764 764 drm_gem_object_free(struct kref *kref) 765 765 { 766 - struct drm_gem_object *obj = (struct drm_gem_object *) kref; 766 + struct drm_gem_object *obj = 767 + container_of(kref, struct drm_gem_object, refcount); 767 768 struct drm_device *dev = obj->dev; 768 769 769 770 WARN_ON(!mutex_is_locked(&dev->struct_mutex)); ··· 811 810 * drm_gem_mmap() prevents unprivileged users from mapping random objects. So 812 811 * callers must verify access restrictions before calling this helper. 813 812 * 814 - * NOTE: This function has to be protected with dev->struct_mutex 815 - * 816 813 * Return 0 or success or -EINVAL if the object size is smaller than the VMA 817 814 * size, or if no gem_vm_ops are provided. 818 815 */ ··· 818 819 struct vm_area_struct *vma) 819 820 { 820 821 struct drm_device *dev = obj->dev; 821 - 822 - lockdep_assert_held(&dev->struct_mutex); 823 822 824 823 /* Check for valid size. */ 825 824 if (obj_size < vma->vm_end - vma->vm_start) ··· 862 865 { 863 866 struct drm_file *priv = filp->private_data; 864 867 struct drm_device *dev = priv->minor->dev; 865 - struct drm_gem_object *obj; 868 + struct drm_gem_object *obj = NULL; 866 869 struct drm_vma_offset_node *node; 867 870 int ret; 868 871 869 872 if (drm_device_is_unplugged(dev)) 870 873 return -ENODEV; 871 874 872 - mutex_lock(&dev->struct_mutex); 875 + drm_vma_offset_lock_lookup(dev->vma_offset_manager); 876 + node = drm_vma_offset_exact_lookup_locked(dev->vma_offset_manager, 877 + vma->vm_pgoff, 878 + vma_pages(vma)); 879 + if (likely(node)) { 880 + obj = container_of(node, struct drm_gem_object, vma_node); 881 + /* 882 + * When the object is being freed, after it hits 0-refcnt it 883 + * proceeds to tear down the object. In the process it will 884 + * attempt to remove the VMA offset and so acquire this 885 + * mgr->vm_lock. 
Therefore if we find an object with a 0-refcnt 886 + * that matches our range, we know it is in the process of being 887 + * destroyed and will be freed as soon as we release the lock - 888 + * so we have to check for the 0-refcnted object and treat it as 889 + * invalid. 890 + */ 891 + if (!kref_get_unless_zero(&obj->refcount)) 892 + obj = NULL; 893 + } 894 + drm_vma_offset_unlock_lookup(dev->vma_offset_manager); 873 895 874 - node = drm_vma_offset_exact_lookup(dev->vma_offset_manager, 875 - vma->vm_pgoff, 876 - vma_pages(vma)); 877 - if (!node) { 878 - mutex_unlock(&dev->struct_mutex); 896 + if (!obj) 879 897 return -EINVAL; 880 - } else if (!drm_vma_node_is_allowed(node, filp)) { 881 - mutex_unlock(&dev->struct_mutex); 898 + 899 + if (!drm_vma_node_is_allowed(node, filp)) { 900 + drm_gem_object_unreference_unlocked(obj); 882 901 return -EACCES; 883 902 } 884 903 885 - obj = container_of(node, struct drm_gem_object, vma_node); 886 - ret = drm_gem_mmap_obj(obj, drm_vma_node_size(node) << PAGE_SHIFT, vma); 904 + ret = drm_gem_mmap_obj(obj, drm_vma_node_size(node) << PAGE_SHIFT, 905 + vma); 887 906 888 - mutex_unlock(&dev->struct_mutex); 907 + drm_gem_object_unreference_unlocked(obj); 889 908 890 909 return ret; 891 910 }
-2
drivers/gpu/drm/drm_gem_cma_helper.c
··· 484 484 struct drm_device *dev = obj->dev; 485 485 int ret; 486 486 487 - mutex_lock(&dev->struct_mutex); 488 487 ret = drm_gem_mmap_obj(obj, obj->size, vma); 489 - mutex_unlock(&dev->struct_mutex); 490 488 if (ret < 0) 491 489 return ret; 492 490
+8 -2
drivers/gpu/drm/drm_ioctl.c
··· 691 691 char stack_kdata[128]; 692 692 char *kdata = NULL; 693 693 unsigned int usize, asize, drv_size; 694 + bool is_driver_ioctl; 694 695 695 696 dev = file_priv->minor->dev; 696 697 697 698 if (drm_device_is_unplugged(dev)) 698 699 return -ENODEV; 699 700 700 - if (nr >= DRM_COMMAND_BASE && nr < DRM_COMMAND_END) { 701 + is_driver_ioctl = nr >= DRM_COMMAND_BASE && nr < DRM_COMMAND_END; 702 + 703 + if (is_driver_ioctl) { 701 704 /* driver ioctl */ 702 705 if (nr - DRM_COMMAND_BASE >= dev->driver->num_ioctls) 703 706 goto err_i1; ··· 759 756 memset(kdata, 0, usize); 760 757 } 761 758 762 - if (ioctl->flags & DRM_UNLOCKED) 759 + /* Enforce sane locking for kms driver ioctls. Core ioctls are 760 + * too messy still. */ 761 + if ((drm_core_check_feature(dev, DRIVER_MODESET) && is_driver_ioctl) || 762 + (ioctl->flags & DRM_UNLOCKED)) 763 763 retcode = func(dev, kdata, file_priv); 764 764 else { 765 765 mutex_lock(&drm_global_mutex);
+13 -13
drivers/gpu/drm/drm_irq.c
··· 213 213 diff = DIV_ROUND_CLOSEST_ULL(diff_ns, framedur_ns); 214 214 215 215 if (diff == 0 && flags & DRM_CALLED_FROM_VBLIRQ) 216 - DRM_DEBUG("crtc %u: Redundant vblirq ignored." 217 - " diff_ns = %lld, framedur_ns = %d)\n", 218 - pipe, (long long) diff_ns, framedur_ns); 216 + DRM_DEBUG_VBL("crtc %u: Redundant vblirq ignored." 217 + " diff_ns = %lld, framedur_ns = %d)\n", 218 + pipe, (long long) diff_ns, framedur_ns); 219 219 } else { 220 220 /* some kind of default for drivers w/o accurate vbl timestamping */ 221 221 diff = (flags & DRM_CALLED_FROM_VBLIRQ) != 0; 222 222 } 223 223 224 - DRM_DEBUG("updating vblank count on crtc %u:" 225 - " current=%u, diff=%u, hw=%u hw_last=%u\n", 226 - pipe, vblank->count, diff, cur_vblank, vblank->last); 224 + DRM_DEBUG_VBL("updating vblank count on crtc %u:" 225 + " current=%u, diff=%u, hw=%u hw_last=%u\n", 226 + pipe, vblank->count, diff, cur_vblank, vblank->last); 227 227 228 228 if (diff == 0) { 229 229 WARN_ON_ONCE(cur_vblank != vblank->last); ··· 800 800 etime = ktime_sub_ns(etime, delta_ns); 801 801 *vblank_time = ktime_to_timeval(etime); 802 802 803 - DRM_DEBUG("crtc %u : v 0x%x p(%d,%d)@ %ld.%ld -> %ld.%ld [e %d us, %d rep]\n", 804 - pipe, vbl_status, hpos, vpos, 805 - (long)tv_etime.tv_sec, (long)tv_etime.tv_usec, 806 - (long)vblank_time->tv_sec, (long)vblank_time->tv_usec, 807 - duration_ns/1000, i); 803 + DRM_DEBUG_VBL("crtc %u : v 0x%x p(%d,%d)@ %ld.%ld -> %ld.%ld [e %d us, %d rep]\n", 804 + pipe, vbl_status, hpos, vpos, 805 + (long)tv_etime.tv_sec, (long)tv_etime.tv_usec, 806 + (long)vblank_time->tv_sec, (long)vblank_time->tv_usec, 807 + duration_ns/1000, i); 808 808 809 809 return ret; 810 810 } ··· 1272 1272 list_for_each_entry_safe(e, t, &dev->vblank_event_list, base.link) { 1273 1273 if (e->pipe != pipe) 1274 1274 continue; 1275 - DRM_DEBUG("Sending premature vblank event on disable: \ 1276 - wanted %d, current %d\n", 1275 + DRM_DEBUG("Sending premature vblank event on disable: " 1276 + "wanted %d, current 
%d\n", 1277 1277 e->event.sequence, seq); 1278 1278 list_del(&e->base.link); 1279 1279 drm_vblank_put(dev, pipe);
+12 -28
drivers/gpu/drm/drm_vma_manager.c
··· 112 112 EXPORT_SYMBOL(drm_vma_offset_manager_destroy); 113 113 114 114 /** 115 - * drm_vma_offset_lookup() - Find node in offset space 115 + * drm_vma_offset_lookup_locked() - Find node in offset space 116 116 * @mgr: Manager object 117 117 * @start: Start address for object (page-based) 118 118 * @pages: Size of object (page-based) ··· 122 122 * region and the given node will be returned, as long as the node spans the 123 123 * whole requested area (given the size in number of pages as @pages). 124 124 * 125 + * Note that before lookup the vma offset manager lookup lock must be acquired 126 + * with drm_vma_offset_lock_lookup(). See there for an example. This can then be 127 + * used to implement weakly referenced lookups using kref_get_unless_zero(). 128 + * 129 + * Example: 130 + * drm_vma_offset_lock_lookup(mgr); 131 + * node = drm_vma_offset_lookup_locked(mgr); 132 + * if (node) 133 + * kref_get_unless_zero(container_of(node, sth, entr)); 134 + * drm_vma_offset_unlock_lookup(mgr); 135 + * 125 136 * RETURNS: 126 137 * Returns NULL if no suitable node can be found. Otherwise, the best match 127 138 * is returned. It's the caller's responsibility to make sure the node doesn't 128 139 * get destroyed before the caller can access it. 129 - */ 130 - struct drm_vma_offset_node *drm_vma_offset_lookup(struct drm_vma_offset_manager *mgr, 131 - unsigned long start, 132 - unsigned long pages) 133 - { 134 - struct drm_vma_offset_node *node; 135 - 136 - read_lock(&mgr->vm_lock); 137 - node = drm_vma_offset_lookup_locked(mgr, start, pages); 138 - read_unlock(&mgr->vm_lock); 139 - 140 - return node; 141 - } 142 - EXPORT_SYMBOL(drm_vma_offset_lookup); 143 - 144 - /** 145 - * drm_vma_offset_lookup_locked() - Find node in offset space 146 - * @mgr: Manager object 147 - * @start: Start address for object (page-based) 148 - * @pages: Size of object (page-based) 149 - * 150 - * Same as drm_vma_offset_lookup() but requires the caller to lock offset lookup 151 - * manually. 
See drm_vma_offset_lock_lookup() for an example. 152 - * 153 - * RETURNS: 154 - * Returns NULL if no suitable node can be found. Otherwise, the best match 155 - * is returned. 156 140 */ 157 141 struct drm_vma_offset_node *drm_vma_offset_lookup_locked(struct drm_vma_offset_manager *mgr, 158 142 unsigned long start,
+10 -10
drivers/gpu/drm/exynos/exynos_drm_drv.c
··· 405 405 406 406 static const struct drm_ioctl_desc exynos_ioctls[] = { 407 407 DRM_IOCTL_DEF_DRV(EXYNOS_GEM_CREATE, exynos_drm_gem_create_ioctl, 408 - DRM_UNLOCKED | DRM_AUTH | DRM_RENDER_ALLOW), 408 + DRM_AUTH | DRM_RENDER_ALLOW), 409 409 DRM_IOCTL_DEF_DRV(EXYNOS_GEM_GET, exynos_drm_gem_get_ioctl, 410 - DRM_UNLOCKED | DRM_RENDER_ALLOW), 410 + DRM_RENDER_ALLOW), 411 411 DRM_IOCTL_DEF_DRV(EXYNOS_VIDI_CONNECTION, vidi_connection_ioctl, 412 - DRM_UNLOCKED | DRM_AUTH), 412 + DRM_AUTH), 413 413 DRM_IOCTL_DEF_DRV(EXYNOS_G2D_GET_VER, exynos_g2d_get_ver_ioctl, 414 - DRM_UNLOCKED | DRM_AUTH | DRM_RENDER_ALLOW), 414 + DRM_AUTH | DRM_RENDER_ALLOW), 415 415 DRM_IOCTL_DEF_DRV(EXYNOS_G2D_SET_CMDLIST, exynos_g2d_set_cmdlist_ioctl, 416 - DRM_UNLOCKED | DRM_AUTH | DRM_RENDER_ALLOW), 416 + DRM_AUTH | DRM_RENDER_ALLOW), 417 417 DRM_IOCTL_DEF_DRV(EXYNOS_G2D_EXEC, exynos_g2d_exec_ioctl, 418 - DRM_UNLOCKED | DRM_AUTH | DRM_RENDER_ALLOW), 418 + DRM_AUTH | DRM_RENDER_ALLOW), 419 419 DRM_IOCTL_DEF_DRV(EXYNOS_IPP_GET_PROPERTY, exynos_drm_ipp_get_property, 420 - DRM_UNLOCKED | DRM_AUTH | DRM_RENDER_ALLOW), 420 + DRM_AUTH | DRM_RENDER_ALLOW), 421 421 DRM_IOCTL_DEF_DRV(EXYNOS_IPP_SET_PROPERTY, exynos_drm_ipp_set_property, 422 - DRM_UNLOCKED | DRM_AUTH | DRM_RENDER_ALLOW), 422 + DRM_AUTH | DRM_RENDER_ALLOW), 423 423 DRM_IOCTL_DEF_DRV(EXYNOS_IPP_QUEUE_BUF, exynos_drm_ipp_queue_buf, 424 - DRM_UNLOCKED | DRM_AUTH | DRM_RENDER_ALLOW), 424 + DRM_AUTH | DRM_RENDER_ALLOW), 425 425 DRM_IOCTL_DEF_DRV(EXYNOS_IPP_CMD_CTRL, exynos_drm_ipp_cmd_ctrl, 426 - DRM_UNLOCKED | DRM_AUTH | DRM_RENDER_ALLOW), 426 + DRM_AUTH | DRM_RENDER_ALLOW), 427 427 }; 428 428 429 429 static const struct file_operations exynos_drm_driver_fops = {
+35 -35
drivers/gpu/drm/i915/i915_dma.c
··· 1299 1299 DRM_IOCTL_DEF_DRV(I915_GET_VBLANK_PIPE, drm_noop, DRM_AUTH), 1300 1300 DRM_IOCTL_DEF_DRV(I915_VBLANK_SWAP, drm_noop, DRM_AUTH), 1301 1301 DRM_IOCTL_DEF_DRV(I915_HWS_ADDR, drm_noop, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY), 1302 - DRM_IOCTL_DEF_DRV(I915_GEM_INIT, drm_noop, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY|DRM_UNLOCKED), 1303 - DRM_IOCTL_DEF_DRV(I915_GEM_EXECBUFFER, i915_gem_execbuffer, DRM_AUTH|DRM_UNLOCKED), 1304 - DRM_IOCTL_DEF_DRV(I915_GEM_EXECBUFFER2, i915_gem_execbuffer2, DRM_AUTH|DRM_UNLOCKED|DRM_RENDER_ALLOW), 1305 - DRM_IOCTL_DEF_DRV(I915_GEM_PIN, i915_gem_reject_pin_ioctl, DRM_AUTH|DRM_ROOT_ONLY|DRM_UNLOCKED), 1306 - DRM_IOCTL_DEF_DRV(I915_GEM_UNPIN, i915_gem_reject_pin_ioctl, DRM_AUTH|DRM_ROOT_ONLY|DRM_UNLOCKED), 1307 - DRM_IOCTL_DEF_DRV(I915_GEM_BUSY, i915_gem_busy_ioctl, DRM_AUTH|DRM_UNLOCKED|DRM_RENDER_ALLOW), 1308 - DRM_IOCTL_DEF_DRV(I915_GEM_SET_CACHING, i915_gem_set_caching_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW), 1309 - DRM_IOCTL_DEF_DRV(I915_GEM_GET_CACHING, i915_gem_get_caching_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW), 1310 - DRM_IOCTL_DEF_DRV(I915_GEM_THROTTLE, i915_gem_throttle_ioctl, DRM_AUTH|DRM_UNLOCKED|DRM_RENDER_ALLOW), 1311 - DRM_IOCTL_DEF_DRV(I915_GEM_ENTERVT, drm_noop, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY|DRM_UNLOCKED), 1312 - DRM_IOCTL_DEF_DRV(I915_GEM_LEAVEVT, drm_noop, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY|DRM_UNLOCKED), 1313 - DRM_IOCTL_DEF_DRV(I915_GEM_CREATE, i915_gem_create_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW), 1314 - DRM_IOCTL_DEF_DRV(I915_GEM_PREAD, i915_gem_pread_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW), 1315 - DRM_IOCTL_DEF_DRV(I915_GEM_PWRITE, i915_gem_pwrite_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW), 1316 - DRM_IOCTL_DEF_DRV(I915_GEM_MMAP, i915_gem_mmap_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW), 1317 - DRM_IOCTL_DEF_DRV(I915_GEM_MMAP_GTT, i915_gem_mmap_gtt_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW), 1318 - DRM_IOCTL_DEF_DRV(I915_GEM_SET_DOMAIN, i915_gem_set_domain_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW), 1319 - DRM_IOCTL_DEF_DRV(I915_GEM_SW_FINISH, i915_gem_sw_finish_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW), 1320 - DRM_IOCTL_DEF_DRV(I915_GEM_SET_TILING, i915_gem_set_tiling, DRM_UNLOCKED|DRM_RENDER_ALLOW), 1321 - DRM_IOCTL_DEF_DRV(I915_GEM_GET_TILING, i915_gem_get_tiling, DRM_UNLOCKED|DRM_RENDER_ALLOW), 1322 - DRM_IOCTL_DEF_DRV(I915_GEM_GET_APERTURE, i915_gem_get_aperture_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW), 1323 - DRM_IOCTL_DEF_DRV(I915_GET_PIPE_FROM_CRTC_ID, intel_get_pipe_from_crtc_id, DRM_UNLOCKED), 1324 - DRM_IOCTL_DEF_DRV(I915_GEM_MADVISE, i915_gem_madvise_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW), 1325 - DRM_IOCTL_DEF_DRV(I915_OVERLAY_PUT_IMAGE, intel_overlay_put_image, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED), 1326 - DRM_IOCTL_DEF_DRV(I915_OVERLAY_ATTRS, intel_overlay_attrs, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED), 1327 - DRM_IOCTL_DEF_DRV(I915_SET_SPRITE_COLORKEY, intel_sprite_set_colorkey, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED), 1328 - DRM_IOCTL_DEF_DRV(I915_GET_SPRITE_COLORKEY, drm_noop, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED), 1329 - DRM_IOCTL_DEF_DRV(I915_GEM_WAIT, i915_gem_wait_ioctl, DRM_AUTH|DRM_UNLOCKED|DRM_RENDER_ALLOW), 1330 - DRM_IOCTL_DEF_DRV(I915_GEM_CONTEXT_CREATE, i915_gem_context_create_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW), 1331 - DRM_IOCTL_DEF_DRV(I915_GEM_CONTEXT_DESTROY, i915_gem_context_destroy_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW), 1332 - DRM_IOCTL_DEF_DRV(I915_REG_READ, i915_reg_read_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW), 1333 - DRM_IOCTL_DEF_DRV(I915_GET_RESET_STATS, i915_get_reset_stats_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW), 1334 - DRM_IOCTL_DEF_DRV(I915_GEM_USERPTR, i915_gem_userptr_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW), 1335 - DRM_IOCTL_DEF_DRV(I915_GEM_CONTEXT_GETPARAM, i915_gem_context_getparam_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW), 1336 - DRM_IOCTL_DEF_DRV(I915_GEM_CONTEXT_SETPARAM, i915_gem_context_setparam_ioctl, DRM_UNLOCKED|DRM_RENDER_ALLOW), 1302 + DRM_IOCTL_DEF_DRV(I915_GEM_INIT, drm_noop, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY), 1303 + DRM_IOCTL_DEF_DRV(I915_GEM_EXECBUFFER, i915_gem_execbuffer, DRM_AUTH), 1304 + DRM_IOCTL_DEF_DRV(I915_GEM_EXECBUFFER2, i915_gem_execbuffer2, DRM_AUTH|DRM_RENDER_ALLOW), 1305 + DRM_IOCTL_DEF_DRV(I915_GEM_PIN, i915_gem_reject_pin_ioctl, DRM_AUTH|DRM_ROOT_ONLY), 1306 + DRM_IOCTL_DEF_DRV(I915_GEM_UNPIN, i915_gem_reject_pin_ioctl, DRM_AUTH|DRM_ROOT_ONLY), 1307 + DRM_IOCTL_DEF_DRV(I915_GEM_BUSY, i915_gem_busy_ioctl, DRM_AUTH|DRM_RENDER_ALLOW), 1308 + DRM_IOCTL_DEF_DRV(I915_GEM_SET_CACHING, i915_gem_set_caching_ioctl, DRM_RENDER_ALLOW), 1309 + DRM_IOCTL_DEF_DRV(I915_GEM_GET_CACHING, i915_gem_get_caching_ioctl, DRM_RENDER_ALLOW), 1310 + DRM_IOCTL_DEF_DRV(I915_GEM_THROTTLE, i915_gem_throttle_ioctl, DRM_AUTH|DRM_RENDER_ALLOW), 1311 + DRM_IOCTL_DEF_DRV(I915_GEM_ENTERVT, drm_noop, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY), 1312 + DRM_IOCTL_DEF_DRV(I915_GEM_LEAVEVT, drm_noop, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY), 1313 + DRM_IOCTL_DEF_DRV(I915_GEM_CREATE, i915_gem_create_ioctl, DRM_RENDER_ALLOW), 1314 + DRM_IOCTL_DEF_DRV(I915_GEM_PREAD, i915_gem_pread_ioctl, DRM_RENDER_ALLOW), 1315 + DRM_IOCTL_DEF_DRV(I915_GEM_PWRITE, i915_gem_pwrite_ioctl, DRM_RENDER_ALLOW), 1316 + DRM_IOCTL_DEF_DRV(I915_GEM_MMAP, i915_gem_mmap_ioctl, DRM_RENDER_ALLOW), 1317 + DRM_IOCTL_DEF_DRV(I915_GEM_MMAP_GTT, i915_gem_mmap_gtt_ioctl, DRM_RENDER_ALLOW), 1318 + DRM_IOCTL_DEF_DRV(I915_GEM_SET_DOMAIN, i915_gem_set_domain_ioctl, DRM_RENDER_ALLOW), 1319 + DRM_IOCTL_DEF_DRV(I915_GEM_SW_FINISH, i915_gem_sw_finish_ioctl, DRM_RENDER_ALLOW), 1320 + DRM_IOCTL_DEF_DRV(I915_GEM_SET_TILING, i915_gem_set_tiling, DRM_RENDER_ALLOW), 1321 + DRM_IOCTL_DEF_DRV(I915_GEM_GET_TILING, i915_gem_get_tiling, DRM_RENDER_ALLOW), 1322 + DRM_IOCTL_DEF_DRV(I915_GEM_GET_APERTURE, i915_gem_get_aperture_ioctl, DRM_RENDER_ALLOW), 1323 + DRM_IOCTL_DEF_DRV(I915_GET_PIPE_FROM_CRTC_ID, intel_get_pipe_from_crtc_id, 0), 1324 + DRM_IOCTL_DEF_DRV(I915_GEM_MADVISE, i915_gem_madvise_ioctl, DRM_RENDER_ALLOW), 1325 + DRM_IOCTL_DEF_DRV(I915_OVERLAY_PUT_IMAGE, intel_overlay_put_image, DRM_MASTER|DRM_CONTROL_ALLOW), 1326 + DRM_IOCTL_DEF_DRV(I915_OVERLAY_ATTRS, intel_overlay_attrs, DRM_MASTER|DRM_CONTROL_ALLOW), 1327 + DRM_IOCTL_DEF_DRV(I915_SET_SPRITE_COLORKEY, intel_sprite_set_colorkey, DRM_MASTER|DRM_CONTROL_ALLOW), 1328 + DRM_IOCTL_DEF_DRV(I915_GET_SPRITE_COLORKEY, drm_noop, DRM_MASTER|DRM_CONTROL_ALLOW), 1329 + DRM_IOCTL_DEF_DRV(I915_GEM_WAIT, i915_gem_wait_ioctl, DRM_AUTH|DRM_RENDER_ALLOW), 1330 + DRM_IOCTL_DEF_DRV(I915_GEM_CONTEXT_CREATE, i915_gem_context_create_ioctl, DRM_RENDER_ALLOW), 1331 + DRM_IOCTL_DEF_DRV(I915_GEM_CONTEXT_DESTROY, i915_gem_context_destroy_ioctl, DRM_RENDER_ALLOW), 1332 + DRM_IOCTL_DEF_DRV(I915_REG_READ, i915_reg_read_ioctl, DRM_RENDER_ALLOW), 1333 + DRM_IOCTL_DEF_DRV(I915_GET_RESET_STATS, i915_get_reset_stats_ioctl, DRM_RENDER_ALLOW), 1334 + DRM_IOCTL_DEF_DRV(I915_GEM_USERPTR, i915_gem_userptr_ioctl, DRM_RENDER_ALLOW), 1335 + DRM_IOCTL_DEF_DRV(I915_GEM_CONTEXT_GETPARAM, i915_gem_context_getparam_ioctl, DRM_RENDER_ALLOW), 1336 + DRM_IOCTL_DEF_DRV(I915_GEM_CONTEXT_SETPARAM, i915_gem_context_setparam_ioctl, DRM_RENDER_ALLOW), 1337 1337 }; 1338 1338 1339 1339 int i915_max_ioctl = ARRAY_SIZE(i915_ioctls);
+7 -7
drivers/gpu/drm/msm/msm_drv.c
··· 932 932 } 933 933 934 934 static const struct drm_ioctl_desc msm_ioctls[] = { 935 - DRM_IOCTL_DEF_DRV(MSM_GET_PARAM, msm_ioctl_get_param, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW), 936 - DRM_IOCTL_DEF_DRV(MSM_GEM_NEW, msm_ioctl_gem_new, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW), 937 - DRM_IOCTL_DEF_DRV(MSM_GEM_INFO, msm_ioctl_gem_info, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW), 938 - DRM_IOCTL_DEF_DRV(MSM_GEM_CPU_PREP, msm_ioctl_gem_cpu_prep, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW), 939 - DRM_IOCTL_DEF_DRV(MSM_GEM_CPU_FINI, msm_ioctl_gem_cpu_fini, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW), 940 - DRM_IOCTL_DEF_DRV(MSM_GEM_SUBMIT, msm_ioctl_gem_submit, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW), 941 - DRM_IOCTL_DEF_DRV(MSM_WAIT_FENCE, msm_ioctl_wait_fence, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW), 935 + DRM_IOCTL_DEF_DRV(MSM_GET_PARAM, msm_ioctl_get_param, DRM_AUTH|DRM_RENDER_ALLOW), 936 + DRM_IOCTL_DEF_DRV(MSM_GEM_NEW, msm_ioctl_gem_new, DRM_AUTH|DRM_RENDER_ALLOW), 937 + DRM_IOCTL_DEF_DRV(MSM_GEM_INFO, msm_ioctl_gem_info, DRM_AUTH|DRM_RENDER_ALLOW), 938 + DRM_IOCTL_DEF_DRV(MSM_GEM_CPU_PREP, msm_ioctl_gem_cpu_prep, DRM_AUTH|DRM_RENDER_ALLOW), 939 + DRM_IOCTL_DEF_DRV(MSM_GEM_CPU_FINI, msm_ioctl_gem_cpu_fini, DRM_AUTH|DRM_RENDER_ALLOW), 940 + DRM_IOCTL_DEF_DRV(MSM_GEM_SUBMIT, msm_ioctl_gem_submit, DRM_AUTH|DRM_RENDER_ALLOW), 941 + DRM_IOCTL_DEF_DRV(MSM_WAIT_FENCE, msm_ioctl_wait_fence, DRM_AUTH|DRM_RENDER_ALLOW), 942 942 }; 943 943 944 944 static const struct vm_operations_struct vm_ops = {
-5
drivers/gpu/drm/msm/msm_fbdev.c
··· 68 68 if (drm_device_is_unplugged(dev)) 69 69 return -ENODEV; 70 70 71 - mutex_lock(&dev->struct_mutex); 72 - 73 71 ret = drm_gem_mmap_obj(drm_obj, drm_obj->size, vma); 74 - 75 - mutex_unlock(&dev->struct_mutex); 76 - 77 72 if (ret) { 78 73 pr_err("%s:drm_gem_mmap_obj fail\n", __func__); 79 74 return ret;
-2
drivers/gpu/drm/msm/msm_gem_prime.c
··· 45 45 { 46 46 int ret; 47 47 48 - mutex_lock(&obj->dev->struct_mutex); 49 48 ret = drm_gem_mmap_obj(obj, obj->size, vma); 50 - mutex_unlock(&obj->dev->struct_mutex); 51 49 if (ret < 0) 52 50 return ret; 53 51
+12 -12
drivers/gpu/drm/nouveau/nouveau_drm.c
··· 862 862 863 863 static const struct drm_ioctl_desc 864 864 nouveau_ioctls[] = { 865 - DRM_IOCTL_DEF_DRV(NOUVEAU_GETPARAM, nouveau_abi16_ioctl_getparam, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW), 866 - DRM_IOCTL_DEF_DRV(NOUVEAU_SETPARAM, nouveau_abi16_ioctl_setparam, DRM_UNLOCKED|DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY), 867 - DRM_IOCTL_DEF_DRV(NOUVEAU_CHANNEL_ALLOC, nouveau_abi16_ioctl_channel_alloc, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW), 868 - DRM_IOCTL_DEF_DRV(NOUVEAU_CHANNEL_FREE, nouveau_abi16_ioctl_channel_free, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW), 869 - DRM_IOCTL_DEF_DRV(NOUVEAU_GROBJ_ALLOC, nouveau_abi16_ioctl_grobj_alloc, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW), 870 - DRM_IOCTL_DEF_DRV(NOUVEAU_NOTIFIEROBJ_ALLOC, nouveau_abi16_ioctl_notifierobj_alloc, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW), 871 - DRM_IOCTL_DEF_DRV(NOUVEAU_GPUOBJ_FREE, nouveau_abi16_ioctl_gpuobj_free, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW), 872 - DRM_IOCTL_DEF_DRV(NOUVEAU_GEM_NEW, nouveau_gem_ioctl_new, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW), 873 - DRM_IOCTL_DEF_DRV(NOUVEAU_GEM_PUSHBUF, nouveau_gem_ioctl_pushbuf, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW), 874 - DRM_IOCTL_DEF_DRV(NOUVEAU_GEM_CPU_PREP, nouveau_gem_ioctl_cpu_prep, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW), 875 - DRM_IOCTL_DEF_DRV(NOUVEAU_GEM_CPU_FINI, nouveau_gem_ioctl_cpu_fini, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW), 876 - DRM_IOCTL_DEF_DRV(NOUVEAU_GEM_INFO, nouveau_gem_ioctl_info, DRM_UNLOCKED|DRM_AUTH|DRM_RENDER_ALLOW), 865 + DRM_IOCTL_DEF_DRV(NOUVEAU_GETPARAM, nouveau_abi16_ioctl_getparam, DRM_AUTH|DRM_RENDER_ALLOW), 866 + DRM_IOCTL_DEF_DRV(NOUVEAU_SETPARAM, nouveau_abi16_ioctl_setparam, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY), 867 + DRM_IOCTL_DEF_DRV(NOUVEAU_CHANNEL_ALLOC, nouveau_abi16_ioctl_channel_alloc, DRM_AUTH|DRM_RENDER_ALLOW), 868 + DRM_IOCTL_DEF_DRV(NOUVEAU_CHANNEL_FREE, nouveau_abi16_ioctl_channel_free, DRM_AUTH|DRM_RENDER_ALLOW), 869 + DRM_IOCTL_DEF_DRV(NOUVEAU_GROBJ_ALLOC, nouveau_abi16_ioctl_grobj_alloc, DRM_AUTH|DRM_RENDER_ALLOW), 870 + DRM_IOCTL_DEF_DRV(NOUVEAU_NOTIFIEROBJ_ALLOC, nouveau_abi16_ioctl_notifierobj_alloc, DRM_AUTH|DRM_RENDER_ALLOW), 871 + DRM_IOCTL_DEF_DRV(NOUVEAU_GPUOBJ_FREE, nouveau_abi16_ioctl_gpuobj_free, DRM_AUTH|DRM_RENDER_ALLOW), 872 + DRM_IOCTL_DEF_DRV(NOUVEAU_GEM_NEW, nouveau_gem_ioctl_new, DRM_AUTH|DRM_RENDER_ALLOW), 873 + DRM_IOCTL_DEF_DRV(NOUVEAU_GEM_PUSHBUF, nouveau_gem_ioctl_pushbuf, DRM_AUTH|DRM_RENDER_ALLOW), 874 + DRM_IOCTL_DEF_DRV(NOUVEAU_GEM_CPU_PREP, nouveau_gem_ioctl_cpu_prep, DRM_AUTH|DRM_RENDER_ALLOW), 875 + DRM_IOCTL_DEF_DRV(NOUVEAU_GEM_CPU_FINI, nouveau_gem_ioctl_cpu_fini, DRM_AUTH|DRM_RENDER_ALLOW), 876 + DRM_IOCTL_DEF_DRV(NOUVEAU_GEM_INFO, nouveau_gem_ioctl_info, DRM_AUTH|DRM_RENDER_ALLOW), 877 877 }; 878 878 879 879 long
-3
drivers/gpu/drm/omapdrm/omap_crtc.c
··· 412 412 dispc_mgr_go(omap_crtc->channel); 413 413 omap_irq_register(crtc->dev, &omap_crtc->vblank_irq); 414 414 } 415 - 416 - crtc->invert_dimensions = !!(crtc->primary->state->rotation & 417 - (BIT(DRM_ROTATE_90) | BIT(DRM_ROTATE_270))); 418 415 } 419 416 420 417 static int omap_crtc_atomic_set_property(struct drm_crtc *crtc,
+6 -6
drivers/gpu/drm/omapdrm/omap_drv.c
··· 626 626 } 627 627 628 628 static const struct drm_ioctl_desc ioctls[DRM_COMMAND_END - DRM_COMMAND_BASE] = { 629 - DRM_IOCTL_DEF_DRV(OMAP_GET_PARAM, ioctl_get_param, DRM_UNLOCKED|DRM_AUTH), 630 - DRM_IOCTL_DEF_DRV(OMAP_SET_PARAM, ioctl_set_param, DRM_UNLOCKED|DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY), 631 - DRM_IOCTL_DEF_DRV(OMAP_GEM_NEW, ioctl_gem_new, DRM_UNLOCKED|DRM_AUTH), 632 - DRM_IOCTL_DEF_DRV(OMAP_GEM_CPU_PREP, ioctl_gem_cpu_prep, DRM_UNLOCKED|DRM_AUTH), 633 - DRM_IOCTL_DEF_DRV(OMAP_GEM_CPU_FINI, ioctl_gem_cpu_fini, DRM_UNLOCKED|DRM_AUTH), 634 - DRM_IOCTL_DEF_DRV(OMAP_GEM_INFO, ioctl_gem_info, DRM_UNLOCKED|DRM_AUTH), 629 + DRM_IOCTL_DEF_DRV(OMAP_GET_PARAM, ioctl_get_param, DRM_AUTH), 630 + DRM_IOCTL_DEF_DRV(OMAP_SET_PARAM, ioctl_set_param, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY), 631 + DRM_IOCTL_DEF_DRV(OMAP_GEM_NEW, ioctl_gem_new, DRM_AUTH), 632 + DRM_IOCTL_DEF_DRV(OMAP_GEM_CPU_PREP, ioctl_gem_cpu_prep, DRM_AUTH), 633 + DRM_IOCTL_DEF_DRV(OMAP_GEM_CPU_FINI, ioctl_gem_cpu_fini, DRM_AUTH), 634 + DRM_IOCTL_DEF_DRV(OMAP_GEM_INFO, ioctl_gem_info, DRM_AUTH), 635 635 }; 636 636 637 637 /*
-3
drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c
··· 140 140 struct vm_area_struct *vma) 141 141 { 142 142 struct drm_gem_object *obj = buffer->priv; 143 - struct drm_device *dev = obj->dev; 144 143 int ret = 0; 145 144 146 145 if (WARN_ON(!obj->filp)) 147 146 return -EINVAL; 148 147 149 - mutex_lock(&dev->struct_mutex); 150 148 ret = drm_gem_mmap_obj(obj, omap_gem_mmap_size(obj), vma); 151 - mutex_unlock(&dev->struct_mutex); 152 149 if (ret < 0) 153 150 return ret; 154 151
+7 -7
drivers/gpu/drm/qxl/qxl_ioctl.c
··· 422 422 } 423 423 424 424 const struct drm_ioctl_desc qxl_ioctls[] = { 425 - DRM_IOCTL_DEF_DRV(QXL_ALLOC, qxl_alloc_ioctl, DRM_AUTH|DRM_UNLOCKED), 425 + DRM_IOCTL_DEF_DRV(QXL_ALLOC, qxl_alloc_ioctl, DRM_AUTH), 426 426 427 - DRM_IOCTL_DEF_DRV(QXL_MAP, qxl_map_ioctl, DRM_AUTH|DRM_UNLOCKED), 427 + DRM_IOCTL_DEF_DRV(QXL_MAP, qxl_map_ioctl, DRM_AUTH), 428 428 429 429 DRM_IOCTL_DEF_DRV(QXL_EXECBUFFER, qxl_execbuffer_ioctl, 430 - DRM_AUTH|DRM_UNLOCKED), 430 + DRM_AUTH), 431 431 DRM_IOCTL_DEF_DRV(QXL_UPDATE_AREA, qxl_update_area_ioctl, 432 - DRM_AUTH|DRM_UNLOCKED), 432 + DRM_AUTH), 433 433 DRM_IOCTL_DEF_DRV(QXL_GETPARAM, qxl_getparam_ioctl, 434 - DRM_AUTH|DRM_UNLOCKED), 434 + DRM_AUTH), 435 435 DRM_IOCTL_DEF_DRV(QXL_CLIENTCAP, qxl_clientcap_ioctl, 436 - DRM_AUTH|DRM_UNLOCKED), 436 + DRM_AUTH), 437 437 438 438 DRM_IOCTL_DEF_DRV(QXL_ALLOC_SURF, qxl_alloc_surf_ioctl, 439 - DRM_AUTH|DRM_UNLOCKED), 439 + DRM_AUTH), 440 440 }; 441 441 442 442 int qxl_max_ioctls = ARRAY_SIZE(qxl_ioctls);
+15 -15
drivers/gpu/drm/radeon/radeon_kms.c
··· 876 876 DRM_IOCTL_DEF_DRV(RADEON_SURF_ALLOC, drm_invalid_op, DRM_AUTH), 877 877 DRM_IOCTL_DEF_DRV(RADEON_SURF_FREE, drm_invalid_op, DRM_AUTH), 878 878 /* KMS */ 879 - DRM_IOCTL_DEF_DRV(RADEON_GEM_INFO, radeon_gem_info_ioctl, DRM_AUTH|DRM_UNLOCKED|DRM_RENDER_ALLOW), 880 - DRM_IOCTL_DEF_DRV(RADEON_GEM_CREATE, radeon_gem_create_ioctl, DRM_AUTH|DRM_UNLOCKED|DRM_RENDER_ALLOW), 881 - DRM_IOCTL_DEF_DRV(RADEON_GEM_MMAP, radeon_gem_mmap_ioctl, DRM_AUTH|DRM_UNLOCKED|DRM_RENDER_ALLOW), 882 - DRM_IOCTL_DEF_DRV(RADEON_GEM_SET_DOMAIN, radeon_gem_set_domain_ioctl, DRM_AUTH|DRM_UNLOCKED|DRM_RENDER_ALLOW), 883 - DRM_IOCTL_DEF_DRV(RADEON_GEM_PREAD, radeon_gem_pread_ioctl, DRM_AUTH|DRM_UNLOCKED), 884 - DRM_IOCTL_DEF_DRV(RADEON_GEM_PWRITE, radeon_gem_pwrite_ioctl, DRM_AUTH|DRM_UNLOCKED), 885 - DRM_IOCTL_DEF_DRV(RADEON_GEM_WAIT_IDLE, radeon_gem_wait_idle_ioctl, DRM_AUTH|DRM_UNLOCKED|DRM_RENDER_ALLOW), 886 - DRM_IOCTL_DEF_DRV(RADEON_CS, radeon_cs_ioctl, DRM_AUTH|DRM_UNLOCKED|DRM_RENDER_ALLOW), 887 - DRM_IOCTL_DEF_DRV(RADEON_INFO, radeon_info_ioctl, DRM_AUTH|DRM_UNLOCKED|DRM_RENDER_ALLOW), 888 - DRM_IOCTL_DEF_DRV(RADEON_GEM_SET_TILING, radeon_gem_set_tiling_ioctl, DRM_AUTH|DRM_UNLOCKED|DRM_RENDER_ALLOW), 889 - DRM_IOCTL_DEF_DRV(RADEON_GEM_GET_TILING, radeon_gem_get_tiling_ioctl, DRM_AUTH|DRM_UNLOCKED|DRM_RENDER_ALLOW), 890 - DRM_IOCTL_DEF_DRV(RADEON_GEM_BUSY, radeon_gem_busy_ioctl, DRM_AUTH|DRM_UNLOCKED|DRM_RENDER_ALLOW), 891 - DRM_IOCTL_DEF_DRV(RADEON_GEM_VA, radeon_gem_va_ioctl, DRM_AUTH|DRM_UNLOCKED|DRM_RENDER_ALLOW), 892 - DRM_IOCTL_DEF_DRV(RADEON_GEM_OP, radeon_gem_op_ioctl, DRM_AUTH|DRM_UNLOCKED|DRM_RENDER_ALLOW), 893 - DRM_IOCTL_DEF_DRV(RADEON_GEM_USERPTR, radeon_gem_userptr_ioctl, DRM_AUTH|DRM_UNLOCKED|DRM_RENDER_ALLOW), 879 + DRM_IOCTL_DEF_DRV(RADEON_GEM_INFO, radeon_gem_info_ioctl, DRM_AUTH|DRM_RENDER_ALLOW), 880 + DRM_IOCTL_DEF_DRV(RADEON_GEM_CREATE, radeon_gem_create_ioctl, DRM_AUTH|DRM_RENDER_ALLOW), 881 + DRM_IOCTL_DEF_DRV(RADEON_GEM_MMAP, radeon_gem_mmap_ioctl, DRM_AUTH|DRM_RENDER_ALLOW), 882 + DRM_IOCTL_DEF_DRV(RADEON_GEM_SET_DOMAIN, radeon_gem_set_domain_ioctl, DRM_AUTH|DRM_RENDER_ALLOW), 883 + DRM_IOCTL_DEF_DRV(RADEON_GEM_PREAD, radeon_gem_pread_ioctl, DRM_AUTH), 884 + DRM_IOCTL_DEF_DRV(RADEON_GEM_PWRITE, radeon_gem_pwrite_ioctl, DRM_AUTH), 885 + DRM_IOCTL_DEF_DRV(RADEON_GEM_WAIT_IDLE, radeon_gem_wait_idle_ioctl, DRM_AUTH|DRM_RENDER_ALLOW), 886 + DRM_IOCTL_DEF_DRV(RADEON_CS, radeon_cs_ioctl, DRM_AUTH|DRM_RENDER_ALLOW), 887 + DRM_IOCTL_DEF_DRV(RADEON_INFO, radeon_info_ioctl, DRM_AUTH|DRM_RENDER_ALLOW), 888 + DRM_IOCTL_DEF_DRV(RADEON_GEM_SET_TILING, radeon_gem_set_tiling_ioctl, DRM_AUTH|DRM_RENDER_ALLOW), 889 + DRM_IOCTL_DEF_DRV(RADEON_GEM_GET_TILING, radeon_gem_get_tiling_ioctl, DRM_AUTH|DRM_RENDER_ALLOW), 890 + DRM_IOCTL_DEF_DRV(RADEON_GEM_BUSY, radeon_gem_busy_ioctl, DRM_AUTH|DRM_RENDER_ALLOW), 891 + DRM_IOCTL_DEF_DRV(RADEON_GEM_VA, radeon_gem_va_ioctl, DRM_AUTH|DRM_RENDER_ALLOW), 892 + DRM_IOCTL_DEF_DRV(RADEON_GEM_OP, radeon_gem_op_ioctl, DRM_AUTH|DRM_RENDER_ALLOW), 893 + DRM_IOCTL_DEF_DRV(RADEON_GEM_USERPTR, radeon_gem_userptr_ioctl, DRM_AUTH|DRM_RENDER_ALLOW), 894 894 }; 895 895 int radeon_max_kms_ioctl = ARRAY_SIZE(radeon_ioctls_kms);
-3
drivers/gpu/drm/rockchip/rockchip_drm_gem.c
··· 79 79 int rockchip_gem_mmap_buf(struct drm_gem_object *obj, 80 80 struct vm_area_struct *vma) 81 81 { 82 - struct drm_device *drm = obj->dev; 83 82 int ret; 84 83 85 - mutex_lock(&drm->struct_mutex); 86 84 ret = drm_gem_mmap_obj(obj, obj->size, vma); 87 - mutex_unlock(&drm->struct_mutex); 88 85 if (ret) 89 86 return ret; 90 87
+14 -14
drivers/gpu/drm/tegra/drm.c
··· 778 778 779 779 static const struct drm_ioctl_desc tegra_drm_ioctls[] = { 780 780 #ifdef CONFIG_DRM_TEGRA_STAGING 781 - DRM_IOCTL_DEF_DRV(TEGRA_GEM_CREATE, tegra_gem_create, DRM_UNLOCKED), 782 - DRM_IOCTL_DEF_DRV(TEGRA_GEM_MMAP, tegra_gem_mmap, DRM_UNLOCKED), 783 - DRM_IOCTL_DEF_DRV(TEGRA_SYNCPT_READ, tegra_syncpt_read, DRM_UNLOCKED), 784 - DRM_IOCTL_DEF_DRV(TEGRA_SYNCPT_INCR, tegra_syncpt_incr, DRM_UNLOCKED), 785 - DRM_IOCTL_DEF_DRV(TEGRA_SYNCPT_WAIT, tegra_syncpt_wait, DRM_UNLOCKED), 786 - DRM_IOCTL_DEF_DRV(TEGRA_OPEN_CHANNEL, tegra_open_channel, DRM_UNLOCKED), 787 - DRM_IOCTL_DEF_DRV(TEGRA_CLOSE_CHANNEL, tegra_close_channel, DRM_UNLOCKED), 788 - DRM_IOCTL_DEF_DRV(TEGRA_GET_SYNCPT, tegra_get_syncpt, DRM_UNLOCKED), 789 - DRM_IOCTL_DEF_DRV(TEGRA_SUBMIT, tegra_submit, DRM_UNLOCKED), 790 - DRM_IOCTL_DEF_DRV(TEGRA_GET_SYNCPT_BASE, tegra_get_syncpt_base, DRM_UNLOCKED), 791 - DRM_IOCTL_DEF_DRV(TEGRA_GEM_SET_TILING, tegra_gem_set_tiling, DRM_UNLOCKED), 792 - DRM_IOCTL_DEF_DRV(TEGRA_GEM_GET_TILING, tegra_gem_get_tiling, DRM_UNLOCKED), 793 - DRM_IOCTL_DEF_DRV(TEGRA_GEM_SET_FLAGS, tegra_gem_set_flags, DRM_UNLOCKED), 794 - DRM_IOCTL_DEF_DRV(TEGRA_GEM_GET_FLAGS, tegra_gem_get_flags, DRM_UNLOCKED), 781 + DRM_IOCTL_DEF_DRV(TEGRA_GEM_CREATE, tegra_gem_create, 0), 782 + DRM_IOCTL_DEF_DRV(TEGRA_GEM_MMAP, tegra_gem_mmap, 0), 783 + DRM_IOCTL_DEF_DRV(TEGRA_SYNCPT_READ, tegra_syncpt_read, 0), 784 + DRM_IOCTL_DEF_DRV(TEGRA_SYNCPT_INCR, tegra_syncpt_incr, 0), 785 + DRM_IOCTL_DEF_DRV(TEGRA_SYNCPT_WAIT, tegra_syncpt_wait, 0), 786 + DRM_IOCTL_DEF_DRV(TEGRA_OPEN_CHANNEL, tegra_open_channel, 0), 787 + DRM_IOCTL_DEF_DRV(TEGRA_CLOSE_CHANNEL, tegra_close_channel, 0), 788 + DRM_IOCTL_DEF_DRV(TEGRA_GET_SYNCPT, tegra_get_syncpt, 0), 789 + DRM_IOCTL_DEF_DRV(TEGRA_SUBMIT, tegra_submit, 0), 790 + DRM_IOCTL_DEF_DRV(TEGRA_GET_SYNCPT_BASE, tegra_get_syncpt_base, 0), 791 + DRM_IOCTL_DEF_DRV(TEGRA_GEM_SET_TILING, tegra_gem_set_tiling, 0), 792 + DRM_IOCTL_DEF_DRV(TEGRA_GEM_GET_TILING, tegra_gem_get_tiling, 0), 793 + DRM_IOCTL_DEF_DRV(TEGRA_GEM_SET_FLAGS, tegra_gem_set_flags, 0), 794 + DRM_IOCTL_DEF_DRV(TEGRA_GEM_GET_FLAGS, tegra_gem_get_flags, 0), 795 795 #endif 796 796 }; 797 797
+1 -54
drivers/gpu/drm/vgem/vgem_drv.c
··· 235 235 return ret; 236 236 } 237 237 238 - int vgem_drm_gem_mmap(struct file *filp, struct vm_area_struct *vma) 239 - { 240 - struct drm_file *priv = filp->private_data; 241 - struct drm_device *dev = priv->minor->dev; 242 - struct drm_vma_offset_node *node; 243 - struct drm_gem_object *obj; 244 - struct drm_vgem_gem_object *vgem_obj; 245 - int ret = 0; 246 - 247 - mutex_lock(&dev->struct_mutex); 248 - 249 - node = drm_vma_offset_exact_lookup(dev->vma_offset_manager, 250 - vma->vm_pgoff, 251 - vma_pages(vma)); 252 - if (!node) { 253 - ret = -EINVAL; 254 - goto out_unlock; 255 - } else if (!drm_vma_node_is_allowed(node, filp)) { 256 - ret = -EACCES; 257 - goto out_unlock; 258 - } 259 - 260 - obj = container_of(node, struct drm_gem_object, vma_node); 261 - 262 - vgem_obj = to_vgem_bo(obj); 263 - 264 - if (obj->dma_buf && vgem_obj->use_dma_buf) { 265 - ret = dma_buf_mmap(obj->dma_buf, vma, 0); 266 - goto out_unlock; 267 - } 268 - 269 - if (!obj->dev->driver->gem_vm_ops) { 270 - ret = -EINVAL; 271 - goto out_unlock; 272 - } 273 - 274 - vma->vm_flags |= VM_IO | VM_MIXEDMAP | VM_DONTEXPAND | VM_DONTDUMP; 275 - vma->vm_ops = obj->dev->driver->gem_vm_ops; 276 - vma->vm_private_data = vgem_obj; 277 - vma->vm_page_prot = 278 - pgprot_writecombine(vm_get_page_prot(vma->vm_flags)); 279 - 280 - mutex_unlock(&dev->struct_mutex); 281 - drm_gem_vm_open(vma); 282 - return ret; 283 - 284 - out_unlock: 285 - mutex_unlock(&dev->struct_mutex); 286 - 287 - return ret; 288 - } 289 - 290 - 291 238 static struct drm_ioctl_desc vgem_ioctls[] = { 292 239 }; 293 240 294 241 static const struct file_operations vgem_driver_fops = { 295 242 .owner = THIS_MODULE, 296 243 .open = drm_open, 297 - .mmap = vgem_drm_gem_mmap, 244 + .mmap = drm_gem_mmap, 298 245 .poll = drm_poll, 299 246 .read = drm_read, 300 247 .unlocked_ioctl = drm_ioctl,
+27 -27
drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
··· 146 146 147 147 static const struct drm_ioctl_desc vmw_ioctls[] = { 148 148 VMW_IOCTL_DEF(VMW_GET_PARAM, vmw_getparam_ioctl, 149 - DRM_AUTH | DRM_UNLOCKED | DRM_RENDER_ALLOW), 149 + DRM_AUTH | DRM_RENDER_ALLOW), 150 150 VMW_IOCTL_DEF(VMW_ALLOC_DMABUF, vmw_dmabuf_alloc_ioctl, 151 - DRM_AUTH | DRM_UNLOCKED | DRM_RENDER_ALLOW), 151 + DRM_AUTH | DRM_RENDER_ALLOW), 152 152 VMW_IOCTL_DEF(VMW_UNREF_DMABUF, vmw_dmabuf_unref_ioctl, 153 - DRM_UNLOCKED | DRM_RENDER_ALLOW), 153 + DRM_RENDER_ALLOW), 154 154 VMW_IOCTL_DEF(VMW_CURSOR_BYPASS, 155 155 vmw_kms_cursor_bypass_ioctl, 156 - DRM_MASTER | DRM_CONTROL_ALLOW | DRM_UNLOCKED), 156 + DRM_MASTER | DRM_CONTROL_ALLOW), 157 157 158 158 VMW_IOCTL_DEF(VMW_CONTROL_STREAM, vmw_overlay_ioctl, 159 - DRM_MASTER | DRM_CONTROL_ALLOW | DRM_UNLOCKED), 159 + DRM_MASTER | DRM_CONTROL_ALLOW), 160 160 VMW_IOCTL_DEF(VMW_CLAIM_STREAM, vmw_stream_claim_ioctl, 161 - DRM_MASTER | DRM_CONTROL_ALLOW | DRM_UNLOCKED), 161 + DRM_MASTER | DRM_CONTROL_ALLOW), 162 162 VMW_IOCTL_DEF(VMW_UNREF_STREAM, vmw_stream_unref_ioctl, 163 - DRM_MASTER | DRM_CONTROL_ALLOW | DRM_UNLOCKED), 163 + DRM_MASTER | DRM_CONTROL_ALLOW), 164 164 165 165 VMW_IOCTL_DEF(VMW_CREATE_CONTEXT, vmw_context_define_ioctl, 166 - DRM_AUTH | DRM_UNLOCKED | DRM_RENDER_ALLOW), 166 + DRM_AUTH | DRM_RENDER_ALLOW), 167 167 VMW_IOCTL_DEF(VMW_UNREF_CONTEXT, vmw_context_destroy_ioctl, 168 - DRM_UNLOCKED | DRM_RENDER_ALLOW), 168 + DRM_RENDER_ALLOW), 169 169 VMW_IOCTL_DEF(VMW_CREATE_SURFACE, vmw_surface_define_ioctl, 170 - DRM_AUTH | DRM_UNLOCKED | DRM_RENDER_ALLOW), 170 + DRM_AUTH | DRM_RENDER_ALLOW), 171 171 VMW_IOCTL_DEF(VMW_UNREF_SURFACE, vmw_surface_destroy_ioctl, 172 - DRM_UNLOCKED | DRM_RENDER_ALLOW), 172 + DRM_RENDER_ALLOW), 173 173 VMW_IOCTL_DEF(VMW_REF_SURFACE, vmw_surface_reference_ioctl, 174 - DRM_AUTH | DRM_UNLOCKED | DRM_RENDER_ALLOW), 175 - VMW_IOCTL_DEF(VMW_EXECBUF, NULL, DRM_AUTH | DRM_UNLOCKED | 174 + DRM_AUTH | DRM_RENDER_ALLOW), 175 + VMW_IOCTL_DEF(VMW_EXECBUF, NULL, DRM_AUTH | 176 176 DRM_RENDER_ALLOW), 177 177 VMW_IOCTL_DEF(VMW_FENCE_WAIT, vmw_fence_obj_wait_ioctl, 178 - DRM_UNLOCKED | DRM_RENDER_ALLOW), 178 + DRM_RENDER_ALLOW), 179 179 VMW_IOCTL_DEF(VMW_FENCE_SIGNALED, 180 180 vmw_fence_obj_signaled_ioctl, 181 - DRM_UNLOCKED | DRM_RENDER_ALLOW), 181 + DRM_RENDER_ALLOW), 182 182 VMW_IOCTL_DEF(VMW_FENCE_UNREF, vmw_fence_obj_unref_ioctl, 183 - DRM_UNLOCKED | DRM_RENDER_ALLOW), 183 + DRM_RENDER_ALLOW), 184 184 VMW_IOCTL_DEF(VMW_FENCE_EVENT, vmw_fence_event_ioctl, 185 - DRM_AUTH | DRM_UNLOCKED | DRM_RENDER_ALLOW), 185 + DRM_AUTH | DRM_RENDER_ALLOW), 186 186 VMW_IOCTL_DEF(VMW_GET_3D_CAP, vmw_get_cap_3d_ioctl, 187 - DRM_AUTH | DRM_UNLOCKED | DRM_RENDER_ALLOW), 187 + DRM_AUTH | DRM_RENDER_ALLOW), 188 188 189 189 /* these allow direct access to the framebuffers mark as master only */ 190 190 VMW_IOCTL_DEF(VMW_PRESENT, vmw_present_ioctl, 191 - DRM_MASTER | DRM_AUTH | DRM_UNLOCKED), 191 + DRM_MASTER | DRM_AUTH), 192 192 VMW_IOCTL_DEF(VMW_PRESENT_READBACK, 193 193 vmw_present_readback_ioctl, 194 - DRM_MASTER | DRM_AUTH | DRM_UNLOCKED), 194 + DRM_MASTER | DRM_AUTH), 195 195 VMW_IOCTL_DEF(VMW_UPDATE_LAYOUT, 196 196 vmw_kms_update_layout_ioctl, 197 - DRM_MASTER | DRM_UNLOCKED), 197 + DRM_MASTER), 198 198 VMW_IOCTL_DEF(VMW_CREATE_SHADER, 199 199 vmw_shader_define_ioctl, 200 - DRM_AUTH | DRM_UNLOCKED | DRM_RENDER_ALLOW), 200 + DRM_AUTH | DRM_RENDER_ALLOW), 201 201 VMW_IOCTL_DEF(VMW_UNREF_SHADER, 202 202 vmw_shader_destroy_ioctl, 203 - DRM_UNLOCKED | DRM_RENDER_ALLOW), 203 + DRM_RENDER_ALLOW), 204 204 VMW_IOCTL_DEF(VMW_GB_SURFACE_CREATE, 205 205 vmw_gb_surface_define_ioctl, 206 - DRM_AUTH | DRM_UNLOCKED | DRM_RENDER_ALLOW), 206 + DRM_AUTH | DRM_RENDER_ALLOW), 207 207 VMW_IOCTL_DEF(VMW_GB_SURFACE_REF, 208 208 vmw_gb_surface_reference_ioctl, 209 - DRM_AUTH | DRM_UNLOCKED | DRM_RENDER_ALLOW), 209 + DRM_AUTH | DRM_RENDER_ALLOW), 210 210 VMW_IOCTL_DEF(VMW_SYNCCPU, 211 211 vmw_user_dmabuf_synccpu_ioctl, 212 - DRM_UNLOCKED | DRM_RENDER_ALLOW), 212 + DRM_RENDER_ALLOW), 213 213 VMW_IOCTL_DEF(VMW_CREATE_EXTENDED_CONTEXT, 214 214 vmw_extended_context_define_ioctl, 215 - DRM_AUTH | DRM_UNLOCKED | DRM_RENDER_ALLOW), 215 + DRM_AUTH | DRM_RENDER_ALLOW), 216 216 }; 217 217 218 218 static struct pci_device_id vmw_pci_id_list[] = {
+20 -16
drivers/gpu/vga/vga_switcheroo.c
··· 84 84 * @fb_info: framebuffer to which console is remapped on switching 85 85 * @pwr_state: current power state 86 86 * @ops: client callbacks 87 - * @id: client identifier, see enum vga_switcheroo_client_id. 88 - * Determining the id requires the handler, so GPUs are initially 89 - * assigned -1 and later given their true id in vga_switcheroo_enable() 87 + * @id: client identifier. Determining the id requires the handler, 88 + * so gpus are initially assigned VGA_SWITCHEROO_UNKNOWN_ID 89 + * and later given their true id in vga_switcheroo_enable() 90 90 * @active: whether the outputs are currently switched to this client 91 91 * @driver_power_control: whether power state is controlled by the driver's 92 92 * runtime pm. If true, writing ON and OFF to the vga_switcheroo debugfs ··· 100 100 struct vga_switcheroo_client { 101 101 struct pci_dev *pdev; 102 102 struct fb_info *fb_info; 103 - int pwr_state; 103 + enum vga_switcheroo_state pwr_state; 104 104 const struct vga_switcheroo_client_ops *ops; 105 - int id; 105 + enum vga_switcheroo_client_id id; 106 106 bool active; 107 107 bool driver_power_control; 108 108 struct list_head list; ··· 145 145 146 146 #define ID_BIT_AUDIO 0x100 147 147 #define client_is_audio(c) ((c)->id & ID_BIT_AUDIO) 148 - #define client_is_vga(c) ((c)->id == -1 || !client_is_audio(c)) 148 + #define client_is_vga(c) ((c)->id == VGA_SWITCHEROO_UNKNOWN_ID || \ 149 + !client_is_audio(c)) 149 150 #define client_id(c) ((c)->id & ~ID_BIT_AUDIO) 150 151 151 152 static int vga_switcheroo_debugfs_init(struct vgasr_priv *priv); ··· 174 173 vgasr_priv.handler->init(); 175 174 176 175 list_for_each_entry(client, &vgasr_priv.clients, list) { 177 - if (client->id != -1) 176 + if (client->id != VGA_SWITCHEROO_UNKNOWN_ID) 178 177 continue; 179 178 ret = vgasr_priv.handler->get_client_id(client->pdev); 180 179 if (ret < 0) ··· 233 232 234 233 static int register_client(struct pci_dev *pdev, 235 234 const struct vga_switcheroo_client_ops *ops, 236 - int id, bool active, bool driver_power_control) 235 + enum vga_switcheroo_client_id id, bool active, 236 + bool driver_power_control) 237 237 { 238 238 struct vga_switcheroo_client *client; 239 239 ··· 279 277 const struct vga_switcheroo_client_ops *ops, 280 278 bool driver_power_control) 281 279 { 282 - return register_client(pdev, ops, -1, 280 + return register_client(pdev, ops, VGA_SWITCHEROO_UNKNOWN_ID, 283 281 pdev == vga_default_device(), 284 282 driver_power_control); 285 283 } ··· 289 287 * vga_switcheroo_register_audio_client - register audio client 290 288 * @pdev: client pci device 291 289 * @ops: client callbacks 292 - * @id: client identifier, see enum vga_switcheroo_client_id 290 + * @id: client identifier 293 291 * 294 292 * Register audio client (audio device on a GPU). The power state of the 295 293 * client is assumed to be ON. ··· 298 296 */ 299 297 int vga_switcheroo_register_audio_client(struct pci_dev *pdev, 300 298 const struct vga_switcheroo_client_ops *ops, 301 - int id) 299 + enum vga_switcheroo_client_id id) 302 300 { 303 301 return register_client(pdev, ops, id | ID_BIT_AUDIO, false, false); 304 302 } ··· 316 314 } 317 315 318 316 static struct vga_switcheroo_client * 319 - find_client_from_id(struct list_head *head, int client_id) 317 + find_client_from_id(struct list_head *head, 318 + enum vga_switcheroo_client_id client_id) 320 319 { 321 320 struct vga_switcheroo_client *client; 322 321 ··· 347 344 * 348 345 * Return: Power state. 349 346 */ 350 - int vga_switcheroo_get_client_state(struct pci_dev *pdev) 347 + enum vga_switcheroo_state vga_switcheroo_get_client_state(struct pci_dev *pdev) 351 348 { 352 349 struct vga_switcheroo_client *client; 353 350 enum vga_switcheroo_state ret; ··· 499 496 return 0; 500 497 } 501 498 502 - static void set_audio_state(int id, int state) 499 + static void set_audio_state(enum vga_switcheroo_client_id id, 500 + enum vga_switcheroo_state state) 503 501 { 504 502 struct vga_switcheroo_client *client; 505 503 ··· 587 583 int ret; 588 584 bool delay = false, can_switch; 589 585 bool just_mux = false; 590 - int client_id = -1; 586 + enum vga_switcheroo_client_id client_id = VGA_SWITCHEROO_UNKNOWN_ID; 591 587 struct vga_switcheroo_client *client = NULL; 592 588 593 589 if (cnt > 63) ··· 656 652 client_id = VGA_SWITCHEROO_DIS; 657 653 } 658 654 659 - if (client_id == -1) 655 + if (client_id == VGA_SWITCHEROO_UNKNOWN_ID) 660 656 goto out; 661 657 client = find_client_from_id(&vgasr_priv.clients, client_id); 662 658 if (!client)
+10 -1
include/drm/drmP.h
··· 107 107 * ATOMIC: used in the atomic code. 108 108 * This is the category used by the DRM_DEBUG_ATOMIC() macro. 109 109 * 110 + * VBL: used for verbose debug message in the vblank code 111 + * This is the category used by the DRM_DEBUG_VBL() macro. 112 + * 110 113 * Enabling verbose debug messages is done through the drm.debug parameter, 111 114 * each category being enabled by a bit. 112 115 * ··· 117 114 * drm.debug=0x2 will enable DRIVER messages 118 115 * drm.debug=0x3 will enable CORE and DRIVER messages 119 116 * ... 120 - * drm.debug=0xf will enable all messages 117 + * drm.debug=0x3f will enable all messages 121 118 * 122 119 * An interesting feature is that it's possible to enable verbose logging at 123 120 * run-time by echoing the debug value in its sysfs node: ··· 128 125 #define DRM_UT_KMS 0x04 129 126 #define DRM_UT_PRIME 0x08 130 127 #define DRM_UT_ATOMIC 0x10 128 + #define DRM_UT_VBL 0x20 131 129 132 130 extern __printf(2, 3) 133 131 void drm_ut_debug_printk(const char *function_name, ··· 219 215 #define DRM_DEBUG_ATOMIC(fmt, args...) \ 220 216 do { \ 221 217 if (unlikely(drm_debug & DRM_UT_ATOMIC)) \ 218 + drm_ut_debug_printk(__func__, fmt, ##args); \ 219 + } while (0) 220 + #define DRM_DEBUG_VBL(fmt, args...) \ 221 + do { \ 222 + if (unlikely(drm_debug & DRM_UT_VBL)) \ 222 223 drm_ut_debug_printk(__func__, fmt, ##args); \ 223 224 } while (0) 224 225
-5
include/drm/drm_crtc.h
··· 407 407 * @enabled: is this CRTC enabled? 408 408 * @mode: current mode timings 409 409 * @hwmode: mode timings as programmed to hw regs 410 - * @invert_dimensions: for purposes of error checking crtc vs fb sizes, 411 - * invert the width/height of the crtc. This is used if the driver 412 - * is performing 90 or 270 degree rotated scanout 413 410 * @x: x position on screen 414 411 * @y: y position on screen 415 412 * @funcs: CRTC control functions ··· 454 457 * crtc, panel scaling etc. Needed for timestamping etc. 455 458 */ 456 459 struct drm_display_mode hwmode; 457 - 458 - bool invert_dimensions; 459 460 460 461 int x, y; 461 462 const struct drm_crtc_funcs *funcs;
+4 -1
include/drm/drm_gem.h
··· 142 142 static inline void 143 143 drm_gem_object_unreference(struct drm_gem_object *obj) 144 144 { 145 - if (obj != NULL) 145 + if (obj != NULL) { 146 + WARN_ON(!mutex_is_locked(&obj->dev->struct_mutex)); 147 + 146 148 kref_put(&obj->refcount, drm_gem_object_free); 149 + } 147 150 } 148 151 149 152 static inline void
+7 -17
include/drm/drm_vma_manager.h
··· 54 54 unsigned long page_offset, unsigned long size); 55 55 void drm_vma_offset_manager_destroy(struct drm_vma_offset_manager *mgr); 56 56 57 - struct drm_vma_offset_node *drm_vma_offset_lookup(struct drm_vma_offset_manager *mgr, 58 - unsigned long start, 59 - unsigned long pages); 60 57 struct drm_vma_offset_node *drm_vma_offset_lookup_locked(struct drm_vma_offset_manager *mgr, 61 58 unsigned long start, 62 59 unsigned long pages); ··· 68 71 struct file *filp); 69 72 70 73 /** 71 - * drm_vma_offset_exact_lookup() - Look up node by exact address 74 + * drm_vma_offset_exact_lookup_locked() - Look up node by exact address 72 75 * @mgr: Manager object 73 76 * @start: Start address (page-based, not byte-based) 74 77 * @pages: Size of object (page-based) 75 78 * 76 - * Same as drm_vma_offset_lookup() but does not allow any offset into the node. 79 + * Same as drm_vma_offset_lookup_locked() but does not allow any offset into the node. 77 80 * It only returns the exact object with the given start address. 78 81 * 79 82 * RETURNS: 80 83 * Node at exact start address @start. 81 84 */ 82 85 static inline struct drm_vma_offset_node * 83 - drm_vma_offset_exact_lookup(struct drm_vma_offset_manager *mgr, 84 - unsigned long start, 85 - unsigned long pages) 86 + drm_vma_offset_exact_lookup_locked(struct drm_vma_offset_manager *mgr, 87 + unsigned long start, 88 + unsigned long pages) 86 89 { 87 90 struct drm_vma_offset_node *node; 88 91 89 - node = drm_vma_offset_lookup(mgr, start, pages); 92 + node = drm_vma_offset_lookup_locked(mgr, start, pages); 90 93 return (node && node->vm_node.start == start) ? node : NULL; 91 94 } 92 95 ··· 94 97 * drm_vma_offset_lock_lookup() - Lock lookup for extended private use 95 98 * @mgr: Manager object 96 99 * 97 - * Lock VMA manager for extended lookups. Only *_locked() VMA function calls 100 + * Lock VMA manager for extended lookups. Only locked VMA function calls 98 101 * are allowed while holding this lock. All other contexts are blocked from VMA 99 102 * until the lock is released via drm_vma_offset_unlock_lookup(). 100 103 * ··· 105 108 * not call any other VMA helpers while holding this lock. 106 109 * 107 110 * Note: You're in atomic-context while holding this lock! 108 - * 109 - * Example: 110 - * drm_vma_offset_lock_lookup(mgr); 111 - * node = drm_vma_offset_lookup_locked(mgr); 112 - * if (node) 113 - * kref_get_unless_zero(container_of(node, sth, entr)); 114 - * drm_vma_offset_unlock_lookup(mgr); 115 111 */ 116 112 static inline void drm_vma_offset_lock_lookup(struct drm_vma_offset_manager *mgr) 117 113 {
+9 -5
include/linux/vga_switcheroo.h
···
 
 /**
  * enum vga_switcheroo_client_id - client identifier
+ * @VGA_SWITCHEROO_UNKNOWN_ID: initial identifier assigned to vga clients.
+ *	Determining the id requires the handler, so GPUs are given their
+ *	true id in a delayed fashion in vga_switcheroo_enable()
  * @VGA_SWITCHEROO_IGD: integrated graphics device
  * @VGA_SWITCHEROO_DIS: discrete graphics device
  * @VGA_SWITCHEROO_MAX_CLIENTS: currently no more than two GPUs are supported
···
  * Client identifier. Audio clients use the same identifier & 0x100.
  */
 enum vga_switcheroo_client_id {
+	VGA_SWITCHEROO_UNKNOWN_ID = -1,
 	VGA_SWITCHEROO_IGD,
 	VGA_SWITCHEROO_DIS,
 	VGA_SWITCHEROO_MAX_CLIENTS,
···
 	int (*switchto)(enum vga_switcheroo_client_id id);
 	int (*power_state)(enum vga_switcheroo_client_id id,
 			   enum vga_switcheroo_state state);
-	int (*get_client_id)(struct pci_dev *pdev);
+	enum vga_switcheroo_client_id (*get_client_id)(struct pci_dev *pdev);
 };
 
 /**
···
			      bool driver_power_control);
 int vga_switcheroo_register_audio_client(struct pci_dev *pdev,
					 const struct vga_switcheroo_client_ops *ops,
-					 int id);
+					 enum vga_switcheroo_client_id id);
 
 void vga_switcheroo_client_fb_set(struct pci_dev *dev,
				   struct fb_info *info);
···
 
 int vga_switcheroo_process_delayed_switch(void);
 
-int vga_switcheroo_get_client_state(struct pci_dev *dev);
+enum vga_switcheroo_state vga_switcheroo_get_client_state(struct pci_dev *dev);
 
 void vga_switcheroo_set_dynamic_switch(struct pci_dev *pdev, enum vga_switcheroo_state dynamic);
 
···
 static inline int vga_switcheroo_register_handler(struct vga_switcheroo_handler *handler) { return 0; }
 static inline int vga_switcheroo_register_audio_client(struct pci_dev *pdev,
						       const struct vga_switcheroo_client_ops *ops,
-						       int id) { return 0; }
+						       enum vga_switcheroo_client_id id) { return 0; }
 static inline void vga_switcheroo_unregister_handler(void) {}
 static inline int vga_switcheroo_process_delayed_switch(void) { return 0; }
-static inline int vga_switcheroo_get_client_state(struct pci_dev *dev) { return VGA_SWITCHEROO_ON; }
+static inline enum vga_switcheroo_state vga_switcheroo_get_client_state(struct pci_dev *dev) { return VGA_SWITCHEROO_ON; }
 
 static inline void vga_switcheroo_set_dynamic_switch(struct pci_dev *pdev, enum vga_switcheroo_state dynamic) {}
+2
include/uapi/drm/i810_drm.h
···
 #ifndef _I810_DRM_H_
 #define _I810_DRM_H_
 
+#include <drm/drm.h>
+
 /* WARNING: These defines must be the same as what the Xserver uses.
  * if you change them, you must change the defines in the Xserver.
  */
+2
include/uapi/drm/r128_drm.h
···
 #ifndef __R128_DRM_H__
 #define __R128_DRM_H__
 
+#include <drm/drm.h>
+
 /* WARNING: If you change any of these defines, make sure to change the
  * defines in the X server file (r128_sarea.h)
  */
+2
include/uapi/drm/savage_drm.h
···
 #ifndef __SAVAGE_DRM_H__
 #define __SAVAGE_DRM_H__
 
+#include <drm/drm.h>
+
 #ifndef __SAVAGE_SAREA_DEFINES__
 #define __SAVAGE_SAREA_DEFINES__
 
+1 -1
sound/pci/hda/hda_controller.h
···
 	unsigned int snoop:1;
 	unsigned int align_buffer_size:1;
 	unsigned int region_requested:1;
-	unsigned int disabled:1; /* disabled by VGA-switcher */
+	unsigned int disabled:1; /* disabled by vga_switcheroo */
 
 #ifdef CONFIG_SND_HDA_DSP_LOADER
 	struct azx_dev saved_azx_dev;
+6 -6
sound/pci/hda/hda_intel.c
···
	 AZX_DCAPS_4K_BDLE_BOUNDARY | AZX_DCAPS_SNOOP_OFF)
 
 /*
- * VGA-switcher support
+ * vga_switcheroo support
 */
 #ifdef SUPPORT_VGA_SWITCHEROO
 #define use_vga_switcheroo(chip)	((chip)->use_vga_switcheroo)
···
 		}
 		}
 	} else {
-		dev_info(chip->card->dev, "%s via VGA-switcheroo\n",
+		dev_info(chip->card->dev, "%s via vga_switcheroo\n",
 			 disabled ? "Disabling" : "Enabling");
 		if (disabled) {
 			pm_runtime_put_sync_suspend(card->dev);
 			azx_suspend(card->dev);
-			/* when we get suspended by vga switcheroo we end up in D3cold,
+			/* when we get suspended by vga_switcheroo we end up in D3cold,
 			 * however we have no ACPI handle, so pci/acpi can't put us there,
 			 * put ourselves there */
 			pci->current_state = PCI_D3cold;
···
 	struct pci_dev *p = get_bound_vga(chip->pci);
 	if (p) {
 		dev_info(chip->card->dev,
-			 "Handle VGA-switcheroo audio client\n");
+			 "Handle vga_switcheroo audio client\n");
 		hda->use_vga_switcheroo = 1;
 		pci_dev_put(p);
 	}
···
 
 #ifdef SUPPORT_VGA_SWITCHEROO
 /*
- * Check of disabled HDMI controller by vga-switcheroo
+ * Check of disabled HDMI controller by vga_switcheroo
 */
 static struct pci_dev *get_bound_vga(struct pci_dev *pci)
 {
···
 
 	err = register_vga_switcheroo(chip);
 	if (err < 0) {
-		dev_err(card->dev, "Error registering VGA-switcheroo client\n");
+		dev_err(card->dev, "Error registering vga_switcheroo client\n");
 		goto out_free;
 	}
 
+1 -1
sound/pci/hda/hda_intel.h
···
 	unsigned int irq_pending_warned:1;
 	unsigned int probe_continued:1;
 
-	/* VGA-switcheroo setup */
+	/* vga_switcheroo setup */
 	unsigned int use_vga_switcheroo:1;
 	unsigned int vga_switcheroo_registered:1;
 	unsigned int init_failed:1; /* delayed init failed */