
Merge tag 'drm-misc-next-2019-10-24-2' of git://anongit.freedesktop.org/drm/drm-misc into drm-next

drm-misc-next for 5.5:

UAPI Changes:
-syncobj: allow querying the last submitted timeline value (David)
-fourcc: explicitly define DRM_FORMAT_BIG_ENDIAN as unsigned (Adam)
-omap: revert the OMAP_BO_* flags that were added -- no userspace (Sean)

Cross-subsystem Changes:
-MAINTAINERS: add Mihail as komeda co-maintainer (Mihail)

Core Changes:
-edid: a few cleanups, add AVI infoframe bar info (Ville)
-todo: remove i915 device_link item and add difficulty levels (Daniel)
-dp_helpers: add a few new helpers to parse dpcd (Thierry)

Driver Changes:
-gma500: fix a few memory disclosure leaks (Kangjie)
-qxl: convert to use the new drm_gem_object_funcs.mmap (Gerd)
-various: open code dp_link helpers in preparation for helper removal (Thierry)

Cc: Chunming Zhou <david1.zhou@amd.com>
Cc: Adam Jackson <ajax@redhat.com>
Cc: Sean Paul <seanpaul@chromium.org>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Kangjie Lu <kjlu@umn.edu>
Cc: Mihail Atanassov <mihail.atanassov@arm.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Thierry Reding <treding@nvidia.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>

From: Sean Paul <sean@poorly.run>
Link: https://patchwork.freedesktop.org/patch/msgid/20191024155535.GA10294@art_vandelay

+1856 -2016
+3
Documentation/devicetree/bindings/display/arm,malidp.txt
··· 37 37 Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt) 38 38 to be used for the framebuffer; if not present, the framebuffer may 39 39 be located anywhere in memory. 40 + - arm,malidp-arqos-high-level: integer of u32 value describing the ARQoS 41 + levels of DP500's QoS signaling. 40 42 41 43 42 44 Example: ··· 56 54 clocks = <&oscclk2>, <&fpgaosc0>, <&fpgaosc1>, <&fpgaosc1>; 57 55 clock-names = "pxlclk", "mclk", "aclk", "pclk"; 58 56 arm,malidp-output-port-lines = /bits/ 8 <8 8 8>; 57 + arm,malidp-arqos-high-level = <0xd000d000>; 59 58 port { 60 59 dp0_output: endpoint { 61 60 remote-endpoint = <&tda998x_2_input>;
+5 -1
Documentation/devicetree/bindings/display/rockchip/rockchip-vop.txt
··· 20 20 "rockchip,rk3228-vop"; 21 21 "rockchip,rk3328-vop"; 22 22 23 + - reg: Must contain one entry corresponding to the base address and length 24 + of the register space. Can optionally contain a second entry 25 + corresponding to the CRTC gamma LUT address. 26 + 23 27 - interrupts: should contain a list of all VOP IP block interrupts in the 24 28 order: VSYNC, LCD_SYSTEM. The interrupt specifier 25 29 format depends on the interrupt controller used. ··· 52 48 SoC specific DT entry: 53 49 vopb: vopb@ff930000 { 54 50 compatible = "rockchip,rk3288-vop"; 55 - reg = <0xff930000 0x19c>; 51 + reg = <0x0 0xff930000 0x0 0x19c>, <0x0 0xff931000 0x0 0x1000>; 56 52 interrupts = <GIC_SPI 15 IRQ_TYPE_LEVEL_HIGH>; 57 53 clocks = <&cru ACLK_VOP0>, <&cru DCLK_VOP0>, <&cru HCLK_VOP0>; 58 54 clock-names = "aclk_vop", "dclk_vop", "hclk_vop";
-3
Documentation/gpu/drm-kms-helpers.rst
··· 77 77 Atomic State Helper Reference 78 78 ----------------------------- 79 79 80 - .. kernel-doc:: include/drm/drm_atomic_state_helper.h 81 - :internal: 82 - 83 80 .. kernel-doc:: drivers/gpu/drm/drm_atomic_state_helper.c 84 81 :export: 85 82
+100 -19
Documentation/gpu/todo.rst
··· 7 7 This section contains a list of smaller janitorial tasks in the kernel DRM 8 8 graphics subsystem useful as newbie projects. Or for slow rainy days. 9 9 10 + Difficulty 11 + ---------- 12 + 13 + To make it easier task are categorized into different levels: 14 + 15 + Starter: Good tasks to get started with the DRM subsystem. 16 + 17 + Intermediate: Tasks which need some experience with working in the DRM 18 + subsystem, or some specific GPU/display graphics knowledge. For debugging issue 19 + it's good to have the relevant hardware (or a virtual driver set up) available 20 + for testing. 21 + 22 + Advanced: Tricky tasks that need fairly good understanding of the DRM subsystem 23 + and graphics topics. Generally need the relevant hardware for development and 24 + testing. 25 + 10 26 Subsystem-wide refactorings 11 27 =========================== 12 28 ··· 35 19 implementations), and then remove it. 36 20 37 21 Contact: Daniel Vetter, respective driver maintainers 22 + 23 + Level: Intermediate 38 24 39 25 Convert existing KMS drivers to atomic modesetting 40 26 -------------------------------------------------- ··· 56 38 57 39 Contact: Daniel Vetter, respective driver maintainers 58 40 41 + Level: Advanced 42 + 59 43 Clean up the clipped coordination confusion around planes 60 44 --------------------------------------------------------- 61 45 ··· 69 49 helpers. 70 50 71 51 Contact: Ville Syrjälä, Daniel Vetter, driver maintainers 52 + 53 + Level: Advanced 72 54 73 55 Convert early atomic drivers to async commit helpers 74 56 ---------------------------------------------------- ··· 84 62 events for atomic commits correctly. But fixing these bugs is good anyway. 85 63 86 64 Contact: Daniel Vetter, respective driver maintainers 65 + 66 + Level: Advanced 87 67 88 68 Fallout from atomic KMS 89 69 ----------------------- ··· 115 91 116 92 Contact: Daniel Vetter 117 93 94 + Level: Intermediate 95 + 118 96 Get rid of dev->struct_mutex from GEM drivers 119 97 --------------------------------------------- 120 98 ··· 140 114 141 115 Contact: Daniel Vetter, respective driver maintainers 142 116 117 + Level: Advanced 118 + 143 119 Convert instances of dev_info/dev_err/dev_warn to their DRM_DEV_* equivalent 144 120 ---------------------------------------------------------------------------- 145 121 ··· 157 129 158 130 Contact: Sean Paul, Maintainer of the driver you plan to convert 159 131 132 + Level: Starter 133 + 160 134 Convert drivers to use simple modeset suspend/resume 161 135 ---------------------------------------------------- 162 136 ··· 168 138 of the atomic suspend/resume code in older atomic modeset drivers. 169 139 170 140 Contact: Maintainer of the driver you plan to convert 141 + 142 + Level: Intermediate 171 143 172 144 Convert drivers to use drm_fb_helper_fbdev_setup/teardown() 173 145 ----------------------------------------------------------- ··· 189 157 190 158 Contact: Maintainer of the driver you plan to convert 191 159 160 + Level: Intermediate 161 + 192 162 Clean up mmap forwarding 193 163 ------------------------ 194 164 ··· 199 165 There's drm_gem_prime_mmap() for this now, but still needs to be rolled out. 
200 166 201 167 Contact: Daniel Vetter 168 + 169 + Level: Intermediate 202 170 203 171 Generic fbdev defio support 204 172 --------------------------- ··· 232 196 233 197 Contact: Daniel Vetter, Noralf Tronnes 234 198 199 + Level: Advanced 200 + 235 201 idr_init_base() 236 202 --------------- 237 203 ··· 244 206 245 207 Contact: Daniel Vetter 246 208 209 + Level: Starter 210 + 247 211 struct drm_gem_object_funcs 248 212 --------------------------- 249 213 ··· 255 215 We also need a 2nd version of the CMA define that doesn't require the 256 216 vmapping to be present (different hook for prime importing). Plus this needs to 257 217 be rolled out to all drivers using their own implementations, too. 218 + 219 + Level: Intermediate 258 220 259 221 Use DRM_MODESET_LOCK_ALL_* helpers instead of boilerplate 260 222 --------------------------------------------------------- ··· 273 231 274 232 Contact: Sean Paul, respective driver maintainers 275 233 234 + Level: Starter 235 + 276 236 Rename CMA helpers to DMA helpers 277 237 --------------------------------- 278 238 ··· 284 240 no one knows what that means) since underneath they just use dma_alloc_coherent. 285 241 286 242 Contact: Laurent Pinchart, Daniel Vetter 243 + 244 + Level: Intermediate (mostly because it is a huge tasks without good partial 245 + milestones, not technically itself that challenging) 287 246 288 247 Convert direct mode.vrefresh accesses to use drm_mode_vrefresh() 289 248 ---------------------------------------------------------------- ··· 306 259 307 260 Contact: Sean Paul 308 261 262 + Level: Starter 263 + 309 264 Remove drm_display_mode.hsync 310 265 ----------------------------- 311 266 ··· 317 268 it to use drm_mode_hsync() instead. 318 269 319 270 Contact: Sean Paul 271 + 272 + Level: Starter 320 273 321 274 drm_fb_helper tasks 322 275 ------------------- ··· 335 284 removed: drm_fb_helper_single_add_all_connectors(), 336 285 drm_fb_helper_add_one_connector() and drm_fb_helper_remove_one_connector(). 337 286 287 + Level: Intermediate 288 + 338 289 connector register/unregister fixes 339 290 ----------------------------------- 340 291 ··· 349 296 drm_dp_aux_init, and moving the actual registering into a late_register 350 297 callback as recommended in the kerneldoc. 351 298 299 + Level: Intermediate 300 + 352 301 Core refactorings 353 302 ================= 354 - 355 - Clean up the DRM header mess 356 - ---------------------------- 357 - 358 - The DRM subsystem originally had only one huge global header, ``drmP.h``. This 359 - is now split up, but many source files still include it. The remaining part of 360 - the cleanup work here is to replace any ``#include <drm/drmP.h>`` by only the 361 - headers needed (and fixing up any missing pre-declarations in the headers). 362 - 363 - In the end no .c file should need to include ``drmP.h`` anymore. 364 - 365 - Contact: Daniel Vetter 366 303 367 304 Make panic handling work 368 305 ------------------------ ··· 393 350 394 351 Contact: Daniel Vetter 395 352 353 + Level: Advanced 354 + 396 355 Clean up the debugfs support 397 356 ---------------------------- 398 357 ··· 424 379 425 380 Contact: Daniel Vetter 426 381 382 + Level: Intermediate 383 + 427 384 KMS cleanups 428 385 ------------ 429 386 ··· 441 394 end, for which we could add drm_*_cleanup_kfree(). And then there's the (for 442 395 historical reasons) misnamed drm_primary_helper_destroy() function. 
443 396 397 + Level: Intermediate 398 + 444 399 Better Testing 445 400 ============== 446 401 ··· 450 401 ---------------------- 451 402 452 403 And fix up the fallout. Should be really interesting ... 404 + 405 + Level: Advanced 453 406 454 407 Make KMS tests in i-g-t generic 455 408 ------------------------------- ··· 466 415 infrastructure to use dumb buffers for untiled buffers, to be able to run all 467 416 the non-i915 specific modeset tests. 468 417 418 + Level: Advanced 419 + 469 420 Extend virtual test driver (VKMS) 470 421 --------------------------------- 471 422 ··· 476 423 fit the available time. 477 424 478 425 Contact: Daniel Vetter 426 + 427 + Level: See details 479 428 480 429 Backlight Refactoring 481 430 --------------------- ··· 492 437 493 438 Contact: Daniel Vetter 494 439 440 + Level: Intermediate 441 + 495 442 Driver Specific 496 443 =============== 497 444 ··· 506 449 See drivers/gpu/drm/amd/display/TODO for tasks. 507 450 508 451 Contact: Harry Wentland, Alex Deucher 509 - 510 - i915 511 - ---- 512 - 513 - - Our early/late pm callbacks could be removed in favour of using 514 - device_link_add to model the dependency between i915 and snd_had. See 515 - https://dri.freedesktop.org/docs/drm/driver-api/device_link.html 516 452 517 453 Bootsplash 518 454 ========== ··· 522 472 523 473 Contact: Sam Ravnborg 524 474 475 + Level: Advanced 476 + 525 477 Outside DRM 526 478 =========== 479 + 480 + Convert fbdev drivers to DRM 481 + ---------------------------- 482 + 483 + There are plenty of fbdev drivers for older hardware. Some hwardware has 484 + become obsolete, but some still provides good(-enough) framebuffers. The 485 + drivers that are still useful should be converted to DRM and afterwards 486 + removed from fbdev. 487 + 488 + Very simple fbdev drivers can best be converted by starting with a new 489 + DRM driver. Simple KMS helpers and SHMEM should be able to handle any 490 + existing hardware. The new driver's call-back functions are filled from 491 + existing fbdev code. 492 + 493 + More complex fbdev drivers can be refactored step-by-step into a DRM 494 + driver with the help of the DRM fbconv helpers. [1] These helpers provide 495 + the transition layer between the DRM core infrastructure and the fbdev 496 + driver interface. Create a new DRM driver on top of the fbconv helpers, 497 + copy over the fbdev driver, and hook it up to the DRM code. Examples for 498 + several fbdev drivers are available at [1] and a tutorial of this process 499 + available at [2]. The result is a primitive DRM driver that can run X11 500 + and Weston. 501 + 502 + - [1] https://gitlab.freedesktop.org/tzimmermann/linux/tree/fbconv 503 + - [2] https://gitlab.freedesktop.org/tzimmermann/linux/blob/fbconv/drivers/gpu/drm/drm_fbconv_helper.c 504 + 505 + Contact: Thomas Zimmermann <tzimmermann@suse.de> 506 + 507 + Level: Advanced
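
One of the "Starter" entries added above, converting dev_info/dev_err/dev_warn calls to their DRM_DEV_* equivalents, is mechanical enough to sketch in a few lines. The snippet below is illustrative only: the function name and message are invented for the example, and only the DRM_DEV_ERROR() macro from <drm/drm_print.h> is taken from the tree.

#include <linux/device.h>
#include <drm/drm_print.h>

/* Hypothetical helper, not part of any driver in this pull. */
static int example_enable_pixel_clock(struct device *dev)
{
	/* before: dev_err(dev, "failed to enable pixel clock\n"); */
	DRM_DEV_ERROR(dev, "failed to enable pixel clock\n");
	return -ENODEV;
}
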
+1
MAINTAINERS
··· 1251 1251 ARM KOMEDA DRM-KMS DRIVER 1252 1252 M: James (Qian) Wang <james.qian.wang@arm.com> 1253 1253 M: Liviu Dudau <liviu.dudau@arm.com> 1254 + M: Mihail Atanassov <mihail.atanassov@arm.com> 1254 1255 L: Mali DP Maintainers <malidp@foss.arm.com> 1255 1256 S: Supported 1256 1257 T: git git://anongit.freedesktop.org/drm/drm-misc
+2 -1
drivers/gpu/drm/Kconfig
··· 263 263 tristate "Virtual KMS (EXPERIMENTAL)" 264 264 depends on DRM 265 265 select DRM_KMS_HELPER 266 + select CRC32 266 267 default n 267 268 help 268 269 Virtual Kernel Mode-Setting (VKMS) is used for testing or for ··· 404 403 405 404 config DRM_I810 406 405 tristate "Intel I810" 407 - # !PREEMPT because of missing ioctl locking 406 + # !PREEMPTION because of missing ioctl locking 408 407 depends on DRM && AGP && AGP_INTEL && (!PREEMPTION || BROKEN) 409 408 help 410 409 Choose this option if you have an Intel I810 graphics card. If M is
+4 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
··· 1123 1123 int amdgpu_bo_fbdev_mmap(struct amdgpu_bo *bo, 1124 1124 struct vm_area_struct *vma) 1125 1125 { 1126 - return ttm_fbdev_mmap(vma, &bo->tbo); 1126 + if (vma->vm_pgoff != 0) 1127 + return -EACCES; 1128 + 1129 + return ttm_bo_mmap_obj(vma, &bo->tbo); 1127 1130 } 1128 1131 1129 1132 /**
+125 -10
drivers/gpu/drm/arm/display/komeda/d71/d71_component.c
··· 106 106 i, hdr.output_ids[i]); 107 107 } 108 108 109 + /* On D71, we are using the global line size. From D32, every component have 110 + * a line size register to indicate the fifo size. 111 + */ 112 + static u32 __get_blk_line_size(struct d71_dev *d71, u32 __iomem *reg, 113 + u32 max_default) 114 + { 115 + if (!d71->periph_addr) 116 + max_default = malidp_read32(reg, BLK_MAX_LINE_SIZE); 117 + 118 + return max_default; 119 + } 120 + 121 + static u32 get_blk_line_size(struct d71_dev *d71, u32 __iomem *reg) 122 + { 123 + return __get_blk_line_size(d71, reg, d71->max_line_size); 124 + } 125 + 109 126 static u32 to_rot_ctrl(u32 rot) 110 127 { 111 128 u32 lr_ctrl = 0; ··· 349 332 seq_printf(sf, "%sAD_V_CROP:\t\t0x%X\n", prefix, v[2]); 350 333 } 351 334 335 + static int d71_layer_validate(struct komeda_component *c, 336 + struct komeda_component_state *state) 337 + { 338 + struct komeda_layer_state *st = to_layer_st(state); 339 + struct komeda_layer *layer = to_layer(c); 340 + struct drm_plane_state *plane_st; 341 + struct drm_framebuffer *fb; 342 + u32 fourcc, line_sz, max_line_sz; 343 + 344 + plane_st = drm_atomic_get_new_plane_state(state->obj.state, 345 + state->plane); 346 + fb = plane_st->fb; 347 + fourcc = fb->format->format; 348 + 349 + if (drm_rotation_90_or_270(st->rot)) 350 + line_sz = st->vsize - st->afbc_crop_t - st->afbc_crop_b; 351 + else 352 + line_sz = st->hsize - st->afbc_crop_l - st->afbc_crop_r; 353 + 354 + if (fb->modifier) { 355 + if ((fb->modifier & AFBC_FORMAT_MOD_BLOCK_SIZE_MASK) == 356 + AFBC_FORMAT_MOD_BLOCK_SIZE_32x8) 357 + max_line_sz = layer->line_sz; 358 + else 359 + max_line_sz = layer->line_sz / 2; 360 + 361 + if (line_sz > max_line_sz) { 362 + DRM_DEBUG_ATOMIC("afbc request line_sz: %d exceed the max afbc line_sz: %d.\n", 363 + line_sz, max_line_sz); 364 + return -EINVAL; 365 + } 366 + } 367 + 368 + if (fourcc == DRM_FORMAT_YUV420_10BIT && line_sz > 2046 && (st->afbc_crop_l % 4)) { 369 + DRM_DEBUG_ATOMIC("YUV420_10BIT input_hsize: %d exceed the max size 2046.\n", 370 + line_sz); 371 + return -EINVAL; 372 + } 373 + 374 + if (fourcc == DRM_FORMAT_X0L2 && line_sz > 2046 && (st->addr[0] % 16)) { 375 + DRM_DEBUG_ATOMIC("X0L2 input_hsize: %d exceed the max size 2046.\n", 376 + line_sz); 377 + return -EINVAL; 378 + } 379 + 380 + return 0; 381 + } 382 + 352 383 static const struct komeda_component_funcs d71_layer_funcs = { 384 + .validate = d71_layer_validate, 353 385 .update = d71_layer_update, 354 386 .disable = d71_layer_disable, 355 387 .dump_register = d71_layer_dump, ··· 431 365 else 432 366 layer->layer_type = KOMEDA_FMT_SIMPLE_LAYER; 433 367 434 - set_range(&layer->hsize_in, 4, d71->max_line_size); 368 + if (!d71->periph_addr) { 369 + /* D32 or newer product */ 370 + layer->line_sz = malidp_read32(reg, BLK_MAX_LINE_SIZE); 371 + layer->yuv_line_sz = L_INFO_YUV_MAX_LINESZ(layer_info); 372 + } else if (d71->max_line_size > 2048) { 373 + /* D71 4K */ 374 + layer->line_sz = d71->max_line_size; 375 + layer->yuv_line_sz = layer->line_sz / 2; 376 + } else { 377 + /* D71 2K */ 378 + if (layer->layer_type == KOMEDA_FMT_RICH_LAYER) { 379 + /* rich layer is 4K configuration */ 380 + layer->line_sz = d71->max_line_size * 2; 381 + layer->yuv_line_sz = layer->line_sz / 2; 382 + } else { 383 + layer->line_sz = d71->max_line_size; 384 + layer->yuv_line_sz = 0; 385 + } 386 + } 387 + 388 + set_range(&layer->hsize_in, 4, layer->line_sz); 389 + 435 390 set_range(&layer->vsize_in, 4, d71->max_vsize); 436 391 437 392 malidp_write32(reg, LAYER_PALPHA, D71_PALPHA_DEF_MAP); ··· 543 
456 544 457 wb_layer = to_layer(c); 545 458 wb_layer->layer_type = KOMEDA_FMT_WB_LAYER; 459 + wb_layer->line_sz = get_blk_line_size(d71, reg); 460 + wb_layer->yuv_line_sz = wb_layer->line_sz; 546 461 547 - set_range(&wb_layer->hsize_in, D71_MIN_LINE_SIZE, d71->max_line_size); 548 - set_range(&wb_layer->vsize_in, D71_MIN_VERTICAL_SIZE, d71->max_vsize); 462 + set_range(&wb_layer->hsize_in, 64, wb_layer->line_sz); 463 + set_range(&wb_layer->vsize_in, 64, d71->max_vsize); 549 464 550 465 return 0; 551 466 } ··· 684 595 685 596 compiz = to_compiz(c); 686 597 687 - set_range(&compiz->hsize, D71_MIN_LINE_SIZE, d71->max_line_size); 688 - set_range(&compiz->vsize, D71_MIN_VERTICAL_SIZE, d71->max_vsize); 598 + set_range(&compiz->hsize, 64, get_blk_line_size(d71, reg)); 599 + set_range(&compiz->vsize, 64, d71->max_vsize); 689 600 690 601 return 0; 691 602 } ··· 792 703 793 704 static void d71_scaler_dump(struct komeda_component *c, struct seq_file *sf) 794 705 { 795 - u32 v[9]; 706 + u32 v[10]; 796 707 797 708 dump_block_header(sf, c->reg); 798 709 ··· 812 723 seq_printf(sf, "SC_H_DELTA_PH:\t\t0x%X\n", v[6]); 813 724 seq_printf(sf, "SC_V_INIT_PH:\t\t0x%X\n", v[7]); 814 725 seq_printf(sf, "SC_V_DELTA_PH:\t\t0x%X\n", v[8]); 726 + 727 + get_values_from_reg(c->reg, 0x130, 10, v); 728 + seq_printf(sf, "SC_ENH_LIMITS:\t\t0x%X\n", v[0]); 729 + seq_printf(sf, "SC_ENH_COEFF0:\t\t0x%X\n", v[1]); 730 + seq_printf(sf, "SC_ENH_COEFF1:\t\t0x%X\n", v[2]); 731 + seq_printf(sf, "SC_ENH_COEFF2:\t\t0x%X\n", v[3]); 732 + seq_printf(sf, "SC_ENH_COEFF3:\t\t0x%X\n", v[4]); 733 + seq_printf(sf, "SC_ENH_COEFF4:\t\t0x%X\n", v[5]); 734 + seq_printf(sf, "SC_ENH_COEFF5:\t\t0x%X\n", v[6]); 735 + seq_printf(sf, "SC_ENH_COEFF6:\t\t0x%X\n", v[7]); 736 + seq_printf(sf, "SC_ENH_COEFF7:\t\t0x%X\n", v[8]); 737 + seq_printf(sf, "SC_ENH_COEFF8:\t\t0x%X\n", v[9]); 815 738 } 816 739 817 740 static const struct komeda_component_funcs d71_scaler_funcs = { ··· 854 753 } 855 754 856 755 scaler = to_scaler(c); 857 - set_range(&scaler->hsize, 4, 2048); 756 + set_range(&scaler->hsize, 4, __get_blk_line_size(d71, reg, 2048)); 858 757 set_range(&scaler->vsize, 4, 4096); 859 758 scaler->max_downscaling = 6; 860 759 scaler->max_upscaling = 64; ··· 963 862 964 863 splitter = to_splitter(c); 965 864 966 - set_range(&splitter->hsize, 4, d71->max_line_size); 865 + set_range(&splitter->hsize, 4, get_blk_line_size(d71, reg)); 967 866 set_range(&splitter->vsize, 4, d71->max_vsize); 968 867 969 868 return 0; ··· 1034 933 1035 934 merger = to_merger(c); 1036 935 1037 - set_range(&merger->hsize_merged, 4, 4032); 936 + set_range(&merger->hsize_merged, 4, 937 + __get_blk_line_size(d71, reg, 4032)); 1038 938 set_range(&merger->vsize_merged, 4, 4096); 1039 939 1040 940 return 0; ··· 1046 944 { 1047 945 struct komeda_improc_state *st = to_improc_st(state); 1048 946 u32 __iomem *reg = c->reg; 1049 - u32 index; 947 + u32 index, mask = 0, ctrl = 0; 1050 948 1051 949 for_each_changed_input(state, index) 1052 950 malidp_write32(reg, BLK_INPUT_ID0 + index * 4, 1053 951 to_d71_input_id(state, index)); 1054 952 1055 953 malidp_write32(reg, BLK_SIZE, HV_SIZE(st->hsize, st->vsize)); 954 + malidp_write32(reg, IPS_DEPTH, st->color_depth); 955 + 956 + mask |= IPS_CTRL_YUV | IPS_CTRL_CHD422 | IPS_CTRL_CHD420; 957 + 958 + /* config color format */ 959 + if (st->color_format == DRM_COLOR_FORMAT_YCRCB420) 960 + ctrl |= IPS_CTRL_YUV | IPS_CTRL_CHD422 | IPS_CTRL_CHD420; 961 + else if (st->color_format == DRM_COLOR_FORMAT_YCRCB422) 962 + ctrl |= IPS_CTRL_YUV | IPS_CTRL_CHD422; 963 + 
else if (st->color_format == DRM_COLOR_FORMAT_YCRCB444) 964 + ctrl |= IPS_CTRL_YUV; 965 + 966 + malidp_write32_mask(reg, BLK_CONTROL, mask, ctrl); 1056 967 } 1057 968 1058 969 static void d71_improc_dump(struct komeda_component *c, struct seq_file *sf)
+2 -7
drivers/gpu/drm/arm/display/komeda/d71/d71_regs.h
··· 10 10 /* Common block registers offset */ 11 11 #define BLK_BLOCK_INFO 0x000 12 12 #define BLK_PIPELINE_INFO 0x004 13 + #define BLK_MAX_LINE_SIZE 0x008 13 14 #define BLK_VALID_INPUT_ID0 0x020 14 15 #define BLK_OUTPUT_ID0 0x060 15 16 #define BLK_INPUT_ID0 0x080 ··· 322 321 #define L_INFO_RF BIT(0) 323 322 #define L_INFO_CM BIT(1) 324 323 #define L_INFO_ABUF_SIZE(x) (((x) >> 4) & 0x7) 324 + #define L_INFO_YUV_MAX_LINESZ(x) (((x) >> 16) & 0xFFFF) 325 325 326 326 /* Scaler registers */ 327 327 #define SC_COEFFTAB 0x0DC ··· 495 493 496 494 #define D71_DEFAULT_PREPRETCH_LINE 5 497 495 #define D71_BUS_WIDTH_16_BYTES 16 498 - 499 - #define D71_MIN_LINE_SIZE 64 500 - #define D71_MIN_VERTICAL_SIZE 64 501 - #define D71_SC_MIN_LIN_SIZE 4 502 - #define D71_SC_MIN_VERTICAL_SIZE 4 503 - #define D71_SC_MAX_LIN_SIZE 2048 504 - #define D71_SC_MAX_VERTICAL_SIZE 4096 505 496 506 497 #define D71_SC_MAX_UPSCALING 64 507 498 #define D71_SC_MAX_DOWNSCALING 6
+28 -1
drivers/gpu/drm/arm/display/komeda/komeda_crtc.c
··· 17 17 #include "komeda_dev.h" 18 18 #include "komeda_kms.h" 19 19 20 + void komeda_crtc_get_color_config(struct drm_crtc_state *crtc_st, 21 + u32 *color_depths, u32 *color_formats) 22 + { 23 + struct drm_connector *conn; 24 + struct drm_connector_state *conn_st; 25 + u32 conn_color_formats = ~0u; 26 + int i, min_bpc = 31, conn_bpc = 0; 27 + 28 + for_each_new_connector_in_state(crtc_st->state, conn, conn_st, i) { 29 + if (conn_st->crtc != crtc_st->crtc) 30 + continue; 31 + 32 + conn_bpc = conn->display_info.bpc ? conn->display_info.bpc : 8; 33 + conn_color_formats &= conn->display_info.color_formats; 34 + 35 + if (conn_bpc < min_bpc) 36 + min_bpc = conn_bpc; 37 + } 38 + 39 + /* connector doesn't config any color_format, use RGB444 as default */ 40 + if (!conn_color_formats) 41 + conn_color_formats = DRM_COLOR_FORMAT_RGB444; 42 + 43 + *color_depths = GENMASK(min_bpc, 0); 44 + *color_formats = conn_color_formats; 45 + } 46 + 20 47 static void komeda_crtc_update_clock_ratio(struct komeda_crtc_state *kcrtc_st) 21 48 { 22 49 u64 pxlclk, aclk; ··· 323 296 struct komeda_crtc_state *old_st = to_kcrtc_st(old); 324 297 struct komeda_pipeline *master = kcrtc->master; 325 298 struct komeda_pipeline *slave = kcrtc->slave; 326 - struct completion *disable_done = &crtc->state->commit->flip_done; 299 + struct completion *disable_done; 327 300 bool needs_phase2 = false; 328 301 329 302 DRM_DEBUG_ATOMIC("CRTC%d_DISABLE: active_pipes: 0x%x, affected: 0x%x\n",
+2
drivers/gpu/drm/arm/display/komeda/komeda_kms.h
··· 166 166 return !!(rotation & DRM_MODE_REFLECT_X); 167 167 } 168 168 169 + void komeda_crtc_get_color_config(struct drm_crtc_state *crtc_st, 170 + u32 *color_depths, u32 *color_formats); 169 171 unsigned long komeda_crtc_get_aclk(struct komeda_crtc_state *kcrtc_st); 170 172 171 173 int komeda_kms_setup_crtcs(struct komeda_kms_dev *kms, struct komeda_dev *mdev);
+3
drivers/gpu/drm/arm/display/komeda/komeda_pipeline.h
··· 227 227 /* accepted h/v input range before rotation */ 228 228 struct malidp_range hsize_in, vsize_in; 229 229 u32 layer_type; /* RICH, SIMPLE or WB */ 230 + u32 line_sz; 231 + u32 yuv_line_sz; /* maximum line size for YUV422 and YUV420 */ 230 232 u32 supported_rots; 231 233 /* komeda supports layer split which splits a whole image to two parts 232 234 * left and right and handle them by two individual layer processors ··· 325 323 326 324 struct komeda_improc_state { 327 325 struct komeda_component_state base; 326 + u8 color_format, color_depth; 328 327 u16 hsize, vsize; 329 328 }; 330 329
+46
drivers/gpu/drm/arm/display/komeda/komeda_pipeline_state.c
··· 285 285 struct komeda_data_flow_cfg *dflow) 286 286 { 287 287 u32 src_x, src_y, src_w, src_h; 288 + u32 line_sz, max_line_sz; 288 289 289 290 if (!komeda_fb_is_layer_supported(kfb, layer->layer_type, dflow->rot)) 290 291 return -EINVAL; ··· 312 311 313 312 if (!in_range(&layer->vsize_in, src_h)) { 314 313 DRM_DEBUG_ATOMIC("invalidate src_h %d.\n", src_h); 314 + return -EINVAL; 315 + } 316 + 317 + if (drm_rotation_90_or_270(dflow->rot)) 318 + line_sz = dflow->in_h; 319 + else 320 + line_sz = dflow->in_w; 321 + 322 + if (kfb->base.format->hsub > 1) 323 + max_line_sz = layer->yuv_line_sz; 324 + else 325 + max_line_sz = layer->line_sz; 326 + 327 + if (line_sz > max_line_sz) { 328 + DRM_DEBUG_ATOMIC("Required line_sz: %d exceeds the max size %d\n", 329 + line_sz, max_line_sz); 315 330 return -EINVAL; 316 331 } 317 332 ··· 760 743 struct komeda_data_flow_cfg *dflow) 761 744 { 762 745 struct drm_crtc *crtc = kcrtc_st->base.crtc; 746 + struct drm_crtc_state *crtc_st = &kcrtc_st->base; 763 747 struct komeda_component_state *c_st; 764 748 struct komeda_improc_state *st; 765 749 ··· 773 755 774 756 st->hsize = dflow->in_w; 775 757 st->vsize = dflow->in_h; 758 + 759 + if (drm_atomic_crtc_needs_modeset(crtc_st)) { 760 + u32 output_depths, output_formats; 761 + u32 avail_depths, avail_formats; 762 + 763 + komeda_crtc_get_color_config(crtc_st, &output_depths, 764 + &output_formats); 765 + 766 + avail_depths = output_depths & improc->supported_color_depths; 767 + if (avail_depths == 0) { 768 + DRM_DEBUG_ATOMIC("No available color depths, conn depths: 0x%x & display: 0x%x\n", 769 + output_depths, 770 + improc->supported_color_depths); 771 + return -EINVAL; 772 + } 773 + 774 + avail_formats = output_formats & 775 + improc->supported_color_formats; 776 + if (!avail_formats) { 777 + DRM_DEBUG_ATOMIC("No available color_formats, conn formats 0x%x & display: 0x%x\n", 778 + output_formats, 779 + improc->supported_color_formats); 780 + return -EINVAL; 781 + } 782 + 783 + st->color_depth = __fls(avail_depths); 784 + st->color_format = BIT(__ffs(avail_formats)); 785 + } 776 786 777 787 komeda_component_add_input(&st->base, &dflow->input, 0); 778 788 komeda_component_set_output(&dflow->input, &improc->base, 0);
+5
drivers/gpu/drm/arm/display/komeda/komeda_wb_connector.c
··· 141 141 struct komeda_dev *mdev = kms->base.dev_private; 142 142 struct komeda_wb_connector *kwb_conn; 143 143 struct drm_writeback_connector *wb_conn; 144 + struct drm_display_info *info; 144 145 u32 *formats, n_formats = 0; 145 146 int err; 146 147 ··· 172 171 } 173 172 174 173 drm_connector_helper_add(&wb_conn->base, &komeda_wb_conn_helper_funcs); 174 + 175 + info = &kwb_conn->base.base.display_info; 176 + info->bpc = __fls(kcrtc->master->improc->supported_color_depths); 177 + info->color_formats = kcrtc->master->improc->supported_color_formats; 175 178 176 179 kcrtc->wb_conn = kwb_conn; 177 180
+11 -5
drivers/gpu/drm/arm/malidp_drv.c
··· 368 368 return false; 369 369 } 370 370 371 - struct drm_framebuffer * 371 + static struct drm_framebuffer * 372 372 malidp_fb_create(struct drm_device *dev, struct drm_file *file, 373 373 const struct drm_mode_fb_cmd2 *mode_cmd) 374 374 { ··· 491 491 spin_unlock_irqrestore(&malidp->errors_lock, irqflags); 492 492 } 493 493 494 - void malidp_error_stats_dump(const char *prefix, 495 - struct malidp_error_stats error_stats, 496 - struct seq_file *m) 494 + static void malidp_error_stats_dump(const char *prefix, 495 + struct malidp_error_stats error_stats, 496 + struct seq_file *m) 497 497 { 498 498 seq_printf(m, "[%s] num_errors : %d\n", prefix, 499 499 error_stats.num_errors); ··· 665 665 return snprintf(buf, PAGE_SIZE, "%08x\n", malidp->core_id); 666 666 } 667 667 668 - DEVICE_ATTR_RO(core_id); 668 + static DEVICE_ATTR_RO(core_id); 669 669 670 670 static int malidp_init_sysfs(struct device *dev) 671 671 { ··· 816 816 (version >> 12) & 0xf, (version >> 8) & 0xf); 817 817 818 818 malidp->core_id = version; 819 + 820 + ret = of_property_read_u32(dev->of_node, 821 + "arm,malidp-arqos-value", 822 + &hwdev->arqos_value); 823 + if (ret) 824 + hwdev->arqos_value = 0x0; 819 825 820 826 /* set the number of lines used for output of RGB data */ 821 827 ret = of_property_read_u8_array(dev->of_node,
+9
drivers/gpu/drm/arm/malidp_hw.c
··· 379 379 malidp_hw_setbits(hwdev, MALIDP_DISP_FUNC_ILACED, MALIDP_DE_DISPLAY_FUNC); 380 380 else 381 381 malidp_hw_clearbits(hwdev, MALIDP_DISP_FUNC_ILACED, MALIDP_DE_DISPLAY_FUNC); 382 + 383 + /* 384 + * Program the RQoS register to avoid high resolutions flicker 385 + * issue on the LS1028A. 386 + */ 387 + if (hwdev->arqos_value) { 388 + val = hwdev->arqos_value; 389 + malidp_hw_setbits(hwdev, val, MALIDP500_RQOS_QUALITY); 390 + } 382 391 } 383 392 384 393 int malidp_format_get_bpp(u32 fmt)
+3
drivers/gpu/drm/arm/malidp_hw.h
··· 251 251 252 252 /* size of memory used for rotating layers, up to two banks available */ 253 253 u32 rotation_memory[2]; 254 + 255 + /* priority level of RQOS register used for driven the ARQOS signal */ 256 + u32 arqos_value; 254 257 }; 255 258 256 259 static inline u32 malidp_hw_read(struct malidp_hw_device *hwdev, u32 reg)
+10
drivers/gpu/drm/arm/malidp_regs.h
··· 210 210 #define MALIDP500_CONFIG_VALID 0x00f00 211 211 #define MALIDP500_CONFIG_ID 0x00fd4 212 212 213 + /* 214 + * The quality of service (QoS) register on the DP500. RQOS register values 215 + * are driven by the ARQOS signal, using AXI transacations, dependent on the 216 + * FIFO input level. 217 + * The RQOS register can also set QoS levels for: 218 + * - RED_ARQOS @ A 4-bit signal value for close to underflow conditions 219 + * - GREEN_ARQOS @ A 4-bit signal value for normal conditions 220 + */ 221 + #define MALIDP500_RQOS_QUALITY 0x00500 222 + 213 223 /* register offsets and bits specific to DP550/DP650 */ 214 224 #define MALIDP550_ADDR_SPACE_SIZE 0x10000 215 225 #define MALIDP550_DE_CONTROL 0x00010
+1 -4
drivers/gpu/drm/ast/ast_drv.c
··· 200 200 .driver.pm = &ast_pm_ops, 201 201 }; 202 202 203 - static const struct file_operations ast_fops = { 204 - .owner = THIS_MODULE, 205 - DRM_VRAM_MM_FILE_OPERATIONS 206 - }; 203 + DEFINE_DRM_GEM_FOPS(ast_fops); 207 204 208 205 static struct drm_driver driver = { 209 206 .driver_features = DRIVER_MODESET | DRIVER_GEM,
+1 -4
drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_plane.c
··· 601 601 struct drm_framebuffer *fb = state->base.fb; 602 602 const struct drm_display_mode *mode; 603 603 struct drm_crtc_state *crtc_state; 604 - unsigned int tmp; 605 604 int ret; 606 605 int i; 607 606 ··· 693 694 * Swap width and size in case of 90 or 270 degrees rotation 694 695 */ 695 696 if (drm_rotation_90_or_270(state->base.rotation)) { 696 - tmp = state->src_w; 697 - state->src_w = state->src_h; 698 - state->src_h = tmp; 697 + swap(state->src_w, state->src_h); 699 698 } 700 699 701 700 if (!desc->layout.size &&
+1 -4
drivers/gpu/drm/bochs/bochs_drv.c
··· 58 58 return ret; 59 59 } 60 60 61 - static const struct file_operations bochs_fops = { 62 - .owner = THIS_MODULE, 63 - DRM_VRAM_MM_FILE_OPERATIONS 64 - }; 61 + DEFINE_DRM_GEM_FOPS(bochs_fops); 65 62 66 63 static struct drm_driver bochs_driver = { 67 64 .driver_features = DRIVER_GEM | DRIVER_MODESET | DRIVER_ATOMIC,
+1 -2
drivers/gpu/drm/bridge/Kconfig
··· 87 87 depends on OF 88 88 select DRM_KMS_HELPER 89 89 imply EXTCON 90 - select INPUT 91 - select RC_CORE 90 + depends on RC_CORE || !RC_CORE 92 91 help 93 92 Silicon Image SII8620 HDMI/MHL bridge chip driver. 94 93
+62 -30
drivers/gpu/drm/bridge/analogix-anx78xx.c
··· 39 39 #define AUX_CH_BUFFER_SIZE 16 40 40 #define AUX_WAIT_TIMEOUT_MS 15 41 41 42 - static const u8 anx78xx_i2c_addresses[] = { 43 - [I2C_IDX_TX_P0] = TX_P0, 44 - [I2C_IDX_TX_P1] = TX_P1, 45 - [I2C_IDX_TX_P2] = TX_P2, 46 - [I2C_IDX_RX_P0] = RX_P0, 47 - [I2C_IDX_RX_P1] = RX_P1, 42 + static const u8 anx7808_i2c_addresses[] = { 43 + [I2C_IDX_TX_P0] = 0x78, 44 + [I2C_IDX_TX_P1] = 0x7a, 45 + [I2C_IDX_TX_P2] = 0x72, 46 + [I2C_IDX_RX_P0] = 0x7e, 47 + [I2C_IDX_RX_P1] = 0x80, 48 + }; 49 + 50 + static const u8 anx781x_i2c_addresses[] = { 51 + [I2C_IDX_TX_P0] = 0x70, 52 + [I2C_IDX_TX_P1] = 0x7a, 53 + [I2C_IDX_TX_P2] = 0x72, 54 + [I2C_IDX_RX_P0] = 0x7e, 55 + [I2C_IDX_RX_P1] = 0x80, 48 56 }; 49 57 50 58 struct anx78xx_platform_data { ··· 71 63 struct i2c_client *client; 72 64 struct edid *edid; 73 65 struct drm_connector connector; 74 - struct drm_dp_link link; 75 66 struct anx78xx_platform_data pdata; 76 67 struct mutex lock; 77 68 ··· 747 740 748 741 static int anx78xx_dp_link_training(struct anx78xx *anx78xx) 749 742 { 750 - u8 dp_bw, value; 743 + u8 dp_bw, dpcd[2]; 751 744 int err; 752 745 753 746 err = regmap_write(anx78xx->map[I2C_IDX_RX_P0], SP_HDMI_MUTE_CTRL_REG, ··· 800 793 if (err) 801 794 return err; 802 795 803 - /* Check link capabilities */ 804 - err = drm_dp_link_probe(&anx78xx->aux, &anx78xx->link); 805 - if (err < 0) { 806 - DRM_ERROR("Failed to probe link capabilities: %d\n", err); 807 - return err; 808 - } 796 + /* 797 + * Power up the sink (DP_SET_POWER register is only available on DPCD 798 + * v1.1 and later). 799 + */ 800 + if (anx78xx->dpcd[DP_DPCD_REV] >= 0x11) { 801 + err = drm_dp_dpcd_readb(&anx78xx->aux, DP_SET_POWER, &dpcd[0]); 802 + if (err < 0) { 803 + DRM_ERROR("Failed to read DP_SET_POWER register: %d\n", 804 + err); 805 + return err; 806 + } 809 807 810 - /* Power up the sink */ 811 - err = drm_dp_link_power_up(&anx78xx->aux, &anx78xx->link); 812 - if (err < 0) { 813 - DRM_ERROR("Failed to power up DisplayPort link: %d\n", err); 814 - return err; 808 + dpcd[0] &= ~DP_SET_POWER_MASK; 809 + dpcd[0] |= DP_SET_POWER_D0; 810 + 811 + err = drm_dp_dpcd_writeb(&anx78xx->aux, DP_SET_POWER, dpcd[0]); 812 + if (err < 0) { 813 + DRM_ERROR("Failed to power up DisplayPort link: %d\n", 814 + err); 815 + return err; 816 + } 817 + 818 + /* 819 + * According to the DP 1.1 specification, a "Sink Device must 820 + * exit the power saving state within 1 ms" (Section 2.5.3.1, 821 + * Table 5-52, "Sink Control Field" (register 0x600). 
822 + */ 823 + usleep_range(1000, 2000); 815 824 } 816 825 817 826 /* Possibly enable downspread on the sink */ ··· 866 843 if (err) 867 844 return err; 868 845 869 - value = drm_dp_link_rate_to_bw_code(anx78xx->link.rate); 846 + dpcd[0] = drm_dp_max_link_rate(anx78xx->dpcd); 847 + dpcd[0] = drm_dp_link_rate_to_bw_code(dpcd[0]); 870 848 err = regmap_write(anx78xx->map[I2C_IDX_TX_P0], 871 - SP_DP_MAIN_LINK_BW_SET_REG, value); 849 + SP_DP_MAIN_LINK_BW_SET_REG, dpcd[0]); 872 850 if (err) 873 851 return err; 874 852 875 - err = drm_dp_link_configure(&anx78xx->aux, &anx78xx->link); 853 + dpcd[1] = drm_dp_max_lane_count(anx78xx->dpcd); 854 + 855 + if (drm_dp_enhanced_frame_cap(anx78xx->dpcd)) 856 + dpcd[1] |= DP_LANE_COUNT_ENHANCED_FRAME_EN; 857 + 858 + err = drm_dp_dpcd_write(&anx78xx->aux, DP_LINK_BW_SET, dpcd, 859 + sizeof(dpcd)); 876 860 if (err < 0) { 877 - DRM_ERROR("Failed to configure DisplayPort link: %d\n", err); 861 + DRM_ERROR("Failed to configure link: %d\n", err); 878 862 return err; 879 863 } 880 864 ··· 1346 1316 struct anx78xx *anx78xx; 1347 1317 struct anx78xx_platform_data *pdata; 1348 1318 unsigned int i, idl, idh, version; 1319 + const u8 *i2c_addresses; 1349 1320 bool found = false; 1350 1321 int err; 1351 1322 ··· 1386 1355 } 1387 1356 1388 1357 /* Map slave addresses of ANX7814 */ 1358 + i2c_addresses = device_get_match_data(&client->dev); 1389 1359 for (i = 0; i < I2C_NUM_ADDRESSES; i++) { 1390 1360 struct i2c_client *i2c_dummy; 1391 1361 1392 1362 i2c_dummy = i2c_new_dummy_device(client->adapter, 1393 - anx78xx_i2c_addresses[i] >> 1); 1363 + i2c_addresses[i] >> 1); 1394 1364 if (IS_ERR(i2c_dummy)) { 1395 1365 err = PTR_ERR(i2c_dummy); 1396 1366 DRM_ERROR("Failed to reserve I2C bus %02x: %d\n", 1397 - anx78xx_i2c_addresses[i], err); 1367 + i2c_addresses[i], err); 1398 1368 goto err_unregister_i2c; 1399 1369 } 1400 1370 ··· 1405 1373 if (IS_ERR(anx78xx->map[i])) { 1406 1374 err = PTR_ERR(anx78xx->map[i]); 1407 1375 DRM_ERROR("Failed regmap initialization %02x\n", 1408 - anx78xx_i2c_addresses[i]); 1376 + i2c_addresses[i]); 1409 1377 goto err_unregister_i2c; 1410 1378 } 1411 1379 } ··· 1504 1472 1505 1473 #if IS_ENABLED(CONFIG_OF) 1506 1474 static const struct of_device_id anx78xx_match_table[] = { 1507 - { .compatible = "analogix,anx7808", }, 1508 - { .compatible = "analogix,anx7812", }, 1509 - { .compatible = "analogix,anx7814", }, 1510 - { .compatible = "analogix,anx7818", }, 1475 + { .compatible = "analogix,anx7808", .data = anx7808_i2c_addresses }, 1476 + { .compatible = "analogix,anx7812", .data = anx781x_i2c_addresses }, 1477 + { .compatible = "analogix,anx7814", .data = anx781x_i2c_addresses }, 1478 + { .compatible = "analogix,anx7818", .data = anx781x_i2c_addresses }, 1511 1479 { /* sentinel */ }, 1512 1480 }; 1513 1481 MODULE_DEVICE_TABLE(of, anx78xx_match_table);
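
The anx78xx hunks above open code what drm_dp_link_probe() and drm_dp_link_power_up() used to do, matching the "open code dp_link helpers" item in the pull description. Below is a minimal sketch of the same sink power-up sequence, assuming the caller already has a struct drm_dp_aux and a cached DPCD receiver-capability block; the function name is made up and error handling is trimmed for brevity.

#include <linux/delay.h>
#include <drm/drm_dp_helper.h>

static int example_dp_sink_power_up(struct drm_dp_aux *aux,
				    const u8 dpcd[DP_RECEIVER_CAP_SIZE])
{
	u8 value;
	int err;

	/* DP_SET_POWER only exists on DPCD 1.1 and later sinks. */
	if (dpcd[DP_DPCD_REV] < 0x11)
		return 0;

	err = drm_dp_dpcd_readb(aux, DP_SET_POWER, &value);
	if (err < 0)
		return err;

	value &= ~DP_SET_POWER_MASK;
	value |= DP_SET_POWER_D0;

	err = drm_dp_dpcd_writeb(aux, DP_SET_POWER, value);
	if (err < 0)
		return err;

	/* Per DP 1.1, the sink must leave the power-saving state within 1 ms. */
	usleep_range(1000, 2000);

	return 0;
}
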
+5 -12
drivers/gpu/drm/bridge/analogix-anx78xx.h
··· 6 6 #ifndef __ANX78xx_H 7 7 #define __ANX78xx_H 8 8 9 - #define TX_P0 0x70 10 - #define TX_P1 0x7a 11 - #define TX_P2 0x72 12 - 13 - #define RX_P0 0x7e 14 - #define RX_P1 0x80 15 - 16 9 /***************************************************************/ 17 - /* Register definition of device address 0x7e */ 10 + /* Register definitions for RX_PO */ 18 11 /***************************************************************/ 19 12 20 13 /* ··· 164 171 #define SP_VSI_RCVD BIT(1) 165 172 166 173 /***************************************************************/ 167 - /* Register definition of device address 0x80 */ 174 + /* Register definitions for RX_P1 */ 168 175 /***************************************************************/ 169 176 170 177 /* HDCP BCAPS Shadow Register */ ··· 210 217 #define SP_SET_AVMUTE BIT(0) 211 218 212 219 /***************************************************************/ 213 - /* Register definition of device address 0x70 */ 220 + /* Register definitions for TX_P0 */ 214 221 /***************************************************************/ 215 222 216 223 /* HDCP Status Register */ ··· 444 451 #define SP_DP_BUF_DATA0_REG 0xf0 445 452 446 453 /***************************************************************/ 447 - /* Register definition of device address 0x72 */ 454 + /* Register definitions for TX_P2 */ 448 455 /***************************************************************/ 449 456 450 457 /* ··· 667 674 #define SP_INT_CTRL_REG 0xff 668 675 669 676 /***************************************************************/ 670 - /* Register definition of device address 0x7a */ 677 + /* Register definitions for TX_P1 */ 671 678 /***************************************************************/ 672 679 673 680 /* DP TX Link Training Control Register */
+12 -24
drivers/gpu/drm/bridge/sii9234.c
··· 842 842 843 843 ctx->client[I2C_MHL] = client; 844 844 845 - ctx->client[I2C_TPI] = i2c_new_dummy(adapter, I2C_TPI_ADDR); 846 - if (!ctx->client[I2C_TPI]) { 845 + ctx->client[I2C_TPI] = devm_i2c_new_dummy_device(&client->dev, adapter, 846 + I2C_TPI_ADDR); 847 + if (IS_ERR(ctx->client[I2C_TPI])) { 847 848 dev_err(ctx->dev, "failed to create TPI client\n"); 848 - return -ENODEV; 849 + return PTR_ERR(ctx->client[I2C_TPI]); 849 850 } 850 851 851 - ctx->client[I2C_HDMI] = i2c_new_dummy(adapter, I2C_HDMI_ADDR); 852 - if (!ctx->client[I2C_HDMI]) { 852 + ctx->client[I2C_HDMI] = devm_i2c_new_dummy_device(&client->dev, adapter, 853 + I2C_HDMI_ADDR); 854 + if (IS_ERR(ctx->client[I2C_HDMI])) { 853 855 dev_err(ctx->dev, "failed to create HDMI RX client\n"); 854 - goto fail_tpi; 856 + return PTR_ERR(ctx->client[I2C_HDMI]); 855 857 } 856 858 857 - ctx->client[I2C_CBUS] = i2c_new_dummy(adapter, I2C_CBUS_ADDR); 858 - if (!ctx->client[I2C_CBUS]) { 859 + ctx->client[I2C_CBUS] = devm_i2c_new_dummy_device(&client->dev, adapter, 860 + I2C_CBUS_ADDR); 861 + if (IS_ERR(ctx->client[I2C_CBUS])) { 859 862 dev_err(ctx->dev, "failed to create CBUS client\n"); 860 - goto fail_hdmi; 863 + return PTR_ERR(ctx->client[I2C_CBUS]); 861 864 } 862 865 863 866 return 0; 864 - 865 - fail_hdmi: 866 - i2c_unregister_device(ctx->client[I2C_HDMI]); 867 - fail_tpi: 868 - i2c_unregister_device(ctx->client[I2C_TPI]); 869 - 870 - return -ENODEV; 871 - } 872 - 873 - static void sii9234_deinit_resources(struct sii9234 *ctx) 874 - { 875 - i2c_unregister_device(ctx->client[I2C_CBUS]); 876 - i2c_unregister_device(ctx->client[I2C_HDMI]); 877 - i2c_unregister_device(ctx->client[I2C_TPI]); 878 867 } 879 868 880 869 static inline struct sii9234 *bridge_to_sii9234(struct drm_bridge *bridge) ··· 940 951 941 952 sii9234_cable_out(ctx); 942 953 drm_bridge_remove(&ctx->bridge); 943 - sii9234_deinit_resources(ctx); 944 954 945 955 return 0; 946 956 }
+7 -3
drivers/gpu/drm/bridge/sil-sii8620.c
··· 1760 1760 1761 1761 scancode &= MHL_RCP_KEY_ID_MASK; 1762 1762 1763 - if (!ctx->rc_dev) { 1764 - dev_dbg(ctx->dev, "RCP input device not initialized\n"); 1763 + if (!IS_ENABLED(CONFIG_RC_CORE) || !ctx->rc_dev) 1765 1764 return false; 1766 - } 1767 1765 1768 1766 if (pressed) 1769 1767 rc_keydown(ctx->rc_dev, RC_PROTO_CEC, scancode, 0); ··· 2098 2100 struct rc_dev *rc_dev; 2099 2101 int ret; 2100 2102 2103 + if (!IS_ENABLED(CONFIG_RC_CORE)) 2104 + return; 2105 + 2101 2106 rc_dev = rc_allocate_device(RC_DRIVER_SCANCODE); 2102 2107 if (!rc_dev) { 2103 2108 dev_err(ctx->dev, "Failed to allocate RC device\n"); ··· 2214 2213 static void sii8620_detach(struct drm_bridge *bridge) 2215 2214 { 2216 2215 struct sii8620 *ctx = bridge_to_sii8620(bridge); 2216 + 2217 + if (!IS_ENABLED(CONFIG_RC_CORE)) 2218 + return; 2217 2219 2218 2220 rc_unregister_device(ctx->rc_dev); 2219 2221 }
+82 -1
drivers/gpu/drm/bridge/synopsys/dw-hdmi.c
··· 25 25 #include <uapi/linux/videodev2.h> 26 26 27 27 #include <drm/bridge/dw_hdmi.h> 28 + #include <drm/drm_atomic.h> 28 29 #include <drm/drm_atomic_helper.h> 29 30 #include <drm/drm_bridge.h> 30 31 #include <drm/drm_edid.h> ··· 1744 1743 HDMI_FC_DATAUTO0_VSD_MASK); 1745 1744 } 1746 1745 1746 + static void hdmi_config_drm_infoframe(struct dw_hdmi *hdmi) 1747 + { 1748 + const struct drm_connector_state *conn_state = hdmi->connector.state; 1749 + struct hdmi_drm_infoframe frame; 1750 + u8 buffer[30]; 1751 + ssize_t err; 1752 + int i; 1753 + 1754 + if (!hdmi->plat_data->use_drm_infoframe) 1755 + return; 1756 + 1757 + hdmi_modb(hdmi, HDMI_FC_PACKET_TX_EN_DRM_DISABLE, 1758 + HDMI_FC_PACKET_TX_EN_DRM_MASK, HDMI_FC_PACKET_TX_EN); 1759 + 1760 + err = drm_hdmi_infoframe_set_hdr_metadata(&frame, conn_state); 1761 + if (err < 0) 1762 + return; 1763 + 1764 + err = hdmi_drm_infoframe_pack(&frame, buffer, sizeof(buffer)); 1765 + if (err < 0) { 1766 + dev_err(hdmi->dev, "Failed to pack drm infoframe: %zd\n", err); 1767 + return; 1768 + } 1769 + 1770 + hdmi_writeb(hdmi, frame.version, HDMI_FC_DRM_HB0); 1771 + hdmi_writeb(hdmi, frame.length, HDMI_FC_DRM_HB1); 1772 + 1773 + for (i = 0; i < frame.length; i++) 1774 + hdmi_writeb(hdmi, buffer[4 + i], HDMI_FC_DRM_PB0 + i); 1775 + 1776 + hdmi_writeb(hdmi, 1, HDMI_FC_DRM_UP); 1777 + hdmi_modb(hdmi, HDMI_FC_PACKET_TX_EN_DRM_ENABLE, 1778 + HDMI_FC_PACKET_TX_EN_DRM_MASK, HDMI_FC_PACKET_TX_EN); 1779 + } 1780 + 1747 1781 static void hdmi_av_composer(struct dw_hdmi *hdmi, 1748 1782 const struct drm_display_mode *mode) 1749 1783 { ··· 2090 2054 2091 2055 /* HDMI Initialization Step E - Configure audio */ 2092 2056 hdmi_clk_regenerator_update_pixel_clock(hdmi); 2093 - hdmi_enable_audio_clk(hdmi, true); 2057 + hdmi_enable_audio_clk(hdmi, hdmi->audio_enable); 2094 2058 } 2095 2059 2096 2060 /* not for DVI mode */ ··· 2100 2064 /* HDMI Initialization Step F - Configure AVI InfoFrame */ 2101 2065 hdmi_config_AVI(hdmi, mode); 2102 2066 hdmi_config_vendor_specific_infoframe(hdmi, mode); 2067 + hdmi_config_drm_infoframe(hdmi); 2103 2068 } else { 2104 2069 dev_dbg(hdmi->dev, "%s DVI mode\n", __func__); 2105 2070 } ··· 2267 2230 return ret; 2268 2231 } 2269 2232 2233 + static bool hdr_metadata_equal(const struct drm_connector_state *old_state, 2234 + const struct drm_connector_state *new_state) 2235 + { 2236 + struct drm_property_blob *old_blob = old_state->hdr_output_metadata; 2237 + struct drm_property_blob *new_blob = new_state->hdr_output_metadata; 2238 + 2239 + if (!old_blob || !new_blob) 2240 + return old_blob == new_blob; 2241 + 2242 + if (old_blob->length != new_blob->length) 2243 + return false; 2244 + 2245 + return !memcmp(old_blob->data, new_blob->data, old_blob->length); 2246 + } 2247 + 2248 + static int dw_hdmi_connector_atomic_check(struct drm_connector *connector, 2249 + struct drm_atomic_state *state) 2250 + { 2251 + struct drm_connector_state *old_state = 2252 + drm_atomic_get_old_connector_state(state, connector); 2253 + struct drm_connector_state *new_state = 2254 + drm_atomic_get_new_connector_state(state, connector); 2255 + struct drm_crtc *crtc = new_state->crtc; 2256 + struct drm_crtc_state *crtc_state; 2257 + 2258 + if (!crtc) 2259 + return 0; 2260 + 2261 + if (!hdr_metadata_equal(old_state, new_state)) { 2262 + crtc_state = drm_atomic_get_crtc_state(state, crtc); 2263 + if (IS_ERR(crtc_state)) 2264 + return PTR_ERR(crtc_state); 2265 + 2266 + crtc_state->mode_changed = true; 2267 + } 2268 + 2269 + return 0; 2270 + } 2271 + 2270 2272 static void 
dw_hdmi_connector_force(struct drm_connector *connector) 2271 2273 { 2272 2274 struct dw_hdmi *hdmi = container_of(connector, struct dw_hdmi, ··· 2330 2254 2331 2255 static const struct drm_connector_helper_funcs dw_hdmi_connector_helper_funcs = { 2332 2256 .get_modes = dw_hdmi_connector_get_modes, 2257 + .atomic_check = dw_hdmi_connector_atomic_check, 2333 2258 }; 2334 2259 2335 2260 static int dw_hdmi_bridge_attach(struct drm_bridge *bridge) ··· 2350 2273 &dw_hdmi_connector_funcs, 2351 2274 DRM_MODE_CONNECTOR_HDMIA, 2352 2275 hdmi->ddc); 2276 + 2277 + if (hdmi->version >= 0x200a && hdmi->plat_data->use_drm_infoframe) 2278 + drm_object_attach_property(&connector->base, 2279 + connector->dev->mode_config.hdr_output_metadata_property, 0); 2353 2280 2354 2281 drm_connector_attach_encoder(connector, encoder); 2355 2282
+37
drivers/gpu/drm/bridge/synopsys/dw-hdmi.h
··· 254 254 #define HDMI_FC_POL2 0x10DB 255 255 #define HDMI_FC_PRCONF 0x10E0 256 256 #define HDMI_FC_SCRAMBLER_CTRL 0x10E1 257 + #define HDMI_FC_PACKET_TX_EN 0x10E3 257 258 258 259 #define HDMI_FC_GMD_STAT 0x1100 259 260 #define HDMI_FC_GMD_EN 0x1101 ··· 289 288 #define HDMI_FC_GMD_PB25 0x111E 290 289 #define HDMI_FC_GMD_PB26 0x111F 291 290 #define HDMI_FC_GMD_PB27 0x1120 291 + 292 + #define HDMI_FC_DRM_UP 0x1167 293 + #define HDMI_FC_DRM_HB0 0x1168 294 + #define HDMI_FC_DRM_HB1 0x1169 295 + #define HDMI_FC_DRM_PB0 0x116A 296 + #define HDMI_FC_DRM_PB1 0x116B 297 + #define HDMI_FC_DRM_PB2 0x116C 298 + #define HDMI_FC_DRM_PB3 0x116D 299 + #define HDMI_FC_DRM_PB4 0x116E 300 + #define HDMI_FC_DRM_PB5 0x116F 301 + #define HDMI_FC_DRM_PB6 0x1170 302 + #define HDMI_FC_DRM_PB7 0x1171 303 + #define HDMI_FC_DRM_PB8 0x1172 304 + #define HDMI_FC_DRM_PB9 0x1173 305 + #define HDMI_FC_DRM_PB10 0x1174 306 + #define HDMI_FC_DRM_PB11 0x1175 307 + #define HDMI_FC_DRM_PB12 0x1176 308 + #define HDMI_FC_DRM_PB13 0x1177 309 + #define HDMI_FC_DRM_PB14 0x1178 310 + #define HDMI_FC_DRM_PB15 0x1179 311 + #define HDMI_FC_DRM_PB16 0x117A 312 + #define HDMI_FC_DRM_PB17 0x117B 313 + #define HDMI_FC_DRM_PB18 0x117C 314 + #define HDMI_FC_DRM_PB19 0x117D 315 + #define HDMI_FC_DRM_PB20 0x117E 316 + #define HDMI_FC_DRM_PB21 0x117F 317 + #define HDMI_FC_DRM_PB22 0x1180 318 + #define HDMI_FC_DRM_PB23 0x1181 319 + #define HDMI_FC_DRM_PB24 0x1182 320 + #define HDMI_FC_DRM_PB25 0x1183 321 + #define HDMI_FC_DRM_PB26 0x1184 292 322 293 323 #define HDMI_FC_DBGFORCE 0x1200 294 324 #define HDMI_FC_DBGAUD0CH0 0x1201 ··· 775 743 HDMI_FC_PRCONF_INCOMING_PR_FACTOR_OFFSET = 4, 776 744 HDMI_FC_PRCONF_OUTPUT_PR_FACTOR_MASK = 0x0F, 777 745 HDMI_FC_PRCONF_OUTPUT_PR_FACTOR_OFFSET = 0, 746 + 747 + /* FC_PACKET_TX_EN field values */ 748 + HDMI_FC_PACKET_TX_EN_DRM_MASK = 0x80, 749 + HDMI_FC_PACKET_TX_EN_DRM_ENABLE = 0x80, 750 + HDMI_FC_PACKET_TX_EN_DRM_DISABLE = 0x00, 778 751 779 752 /* FC_AVICONF0-FC_AVICONF3 field values */ 780 753 HDMI_FC_AVICONF0_PIX_FMT_MASK = 0x03,
+42 -23
drivers/gpu/drm/bridge/tc358767.c
··· 229 229 module_param_named(test, tc_test_pattern, bool, 0644); 230 230 231 231 struct tc_edp_link { 232 - struct drm_dp_link base; 232 + u8 dpcd[DP_RECEIVER_CAP_SIZE]; 233 + unsigned int rate; 234 + u8 num_lanes; 233 235 u8 assr; 234 236 bool scrambler_dis; 235 237 bool spread; ··· 440 438 reg |= DP0_SRCCTRL_SCRMBLDIS; /* Scrambler Disabled */ 441 439 if (tc->link.spread) 442 440 reg |= DP0_SRCCTRL_SSCG; /* Spread Spectrum Enable */ 443 - if (tc->link.base.num_lanes == 2) 441 + if (tc->link.num_lanes == 2) 444 442 reg |= DP0_SRCCTRL_LANES_2; /* Two Main Channel Lanes */ 445 - if (tc->link.base.rate != 162000) 443 + if (tc->link.rate != 162000) 446 444 reg |= DP0_SRCCTRL_BW27; /* 2.7 Gbps link */ 447 445 return reg; 448 446 } ··· 665 663 666 664 static int tc_get_display_props(struct tc_data *tc) 667 665 { 666 + u8 revision, num_lanes; 667 + unsigned int rate; 668 668 int ret; 669 669 u8 reg; 670 670 671 671 /* Read DP Rx Link Capability */ 672 - ret = drm_dp_link_probe(&tc->aux, &tc->link.base); 672 + ret = drm_dp_dpcd_read(&tc->aux, DP_DPCD_REV, tc->link.dpcd, 673 + DP_RECEIVER_CAP_SIZE); 673 674 if (ret < 0) 674 675 goto err_dpcd_read; 675 - if (tc->link.base.rate != 162000 && tc->link.base.rate != 270000) { 676 + 677 + revision = tc->link.dpcd[DP_DPCD_REV]; 678 + rate = drm_dp_max_link_rate(tc->link.dpcd); 679 + num_lanes = drm_dp_max_lane_count(tc->link.dpcd); 680 + 681 + if (rate != 162000 && rate != 270000) { 676 682 dev_dbg(tc->dev, "Falling to 2.7 Gbps rate\n"); 677 - tc->link.base.rate = 270000; 683 + rate = 270000; 678 684 } 679 685 680 - if (tc->link.base.num_lanes > 2) { 686 + tc->link.rate = rate; 687 + 688 + if (num_lanes > 2) { 681 689 dev_dbg(tc->dev, "Falling to 2 lanes\n"); 682 - tc->link.base.num_lanes = 2; 690 + num_lanes = 2; 683 691 } 692 + 693 + tc->link.num_lanes = num_lanes; 684 694 685 695 ret = drm_dp_dpcd_readb(&tc->aux, DP_MAX_DOWNSPREAD, &reg); 686 696 if (ret < 0) ··· 711 697 tc->link.assr = reg & DP_ALTERNATE_SCRAMBLER_RESET_ENABLE; 712 698 713 699 dev_dbg(tc->dev, "DPCD rev: %d.%d, rate: %s, lanes: %d, framing: %s\n", 714 - tc->link.base.revision >> 4, tc->link.base.revision & 0x0f, 715 - (tc->link.base.rate == 162000) ? "1.62Gbps" : "2.7Gbps", 716 - tc->link.base.num_lanes, 717 - (tc->link.base.capabilities & DP_LINK_CAP_ENHANCED_FRAMING) ? 718 - "enhanced" : "non-enhanced"); 700 + revision >> 4, revision & 0x0f, 701 + (tc->link.rate == 162000) ? "1.62Gbps" : "2.7Gbps", 702 + tc->link.num_lanes, 703 + drm_dp_enhanced_frame_cap(tc->link.dpcd) ? 704 + "enhanced" : "default"); 719 705 dev_dbg(tc->dev, "Downspread: %s, scrambler: %s\n", 720 706 tc->link.spread ? "0.5%" : "0.0%", 721 707 tc->link.scrambler_dis ? "disabled" : "enabled"); ··· 754 740 */ 755 741 756 742 in_bw = mode->clock * bits_per_pixel / 8; 757 - out_bw = tc->link.base.num_lanes * tc->link.base.rate; 743 + out_bw = tc->link.num_lanes * tc->link.rate; 758 744 max_tu_symbol = DIV_ROUND_UP(in_bw * TU_SIZE_RECOMMENDED, out_bw); 759 745 760 746 dev_dbg(tc->dev, "set mode %dx%d\n", ··· 916 902 /* SSCG and BW27 on DP1 must be set to the same as on DP0 */ 917 903 ret = regmap_write(tc->regmap, DP1_SRCCTRL, 918 904 (tc->link.spread ? DP0_SRCCTRL_SSCG : 0) | 919 - ((tc->link.base.rate != 162000) ? DP0_SRCCTRL_BW27 : 0)); 905 + ((tc->link.rate != 162000) ? 
DP0_SRCCTRL_BW27 : 0)); 920 906 if (ret) 921 907 return ret; 922 908 ··· 926 912 927 913 /* Setup Main Link */ 928 914 dp_phy_ctrl = BGREN | PWR_SW_EN | PHY_A0_EN | PHY_M0_EN; 929 - if (tc->link.base.num_lanes == 2) 915 + if (tc->link.num_lanes == 2) 930 916 dp_phy_ctrl |= PHY_2LANE; 931 917 932 918 ret = regmap_write(tc->regmap, DP_PHY_CTRL, dp_phy_ctrl); ··· 989 975 } 990 976 991 977 /* Setup Link & DPRx Config for Training */ 992 - ret = drm_dp_link_configure(aux, &tc->link.base); 978 + tmp[0] = drm_dp_link_rate_to_bw_code(tc->link.rate); 979 + tmp[1] = tc->link.num_lanes; 980 + 981 + if (drm_dp_enhanced_frame_cap(tc->link.dpcd)) 982 + tmp[1] |= DP_LANE_COUNT_ENHANCED_FRAME_EN; 983 + 984 + ret = drm_dp_dpcd_write(aux, DP_LINK_BW_SET, tmp, 2); 993 985 if (ret < 0) 994 986 goto err_dpcd_write; 995 987 ··· 1039 1019 1040 1020 /* Enable DP0 to start Link Training */ 1041 1021 ret = regmap_write(tc->regmap, DP0CTL, 1042 - ((tc->link.base.capabilities & 1043 - DP_LINK_CAP_ENHANCED_FRAMING) ? EF_EN : 0) | 1044 - DP_EN); 1022 + (drm_dp_enhanced_frame_cap(tc->link.dpcd) ? 1023 + EF_EN : 0) | DP_EN); 1045 1024 if (ret) 1046 1025 return ret; 1047 1026 ··· 1119 1100 ret = -ENODEV; 1120 1101 } 1121 1102 1122 - if (tc->link.base.num_lanes == 2) { 1103 + if (tc->link.num_lanes == 2) { 1123 1104 value = (tmp[0] >> 4) & DP_CHANNEL_EQ_BITS; 1124 1105 1125 1106 if (value != DP_CHANNEL_EQ_BITS) { ··· 1190 1171 return ret; 1191 1172 1192 1173 value = VID_MN_GEN | DP_EN; 1193 - if (tc->link.base.capabilities & DP_LINK_CAP_ENHANCED_FRAMING) 1174 + if (drm_dp_enhanced_frame_cap(tc->link.dpcd)) 1194 1175 value |= EF_EN; 1195 1176 ret = regmap_write(tc->regmap, DP0CTL, value); 1196 1177 if (ret) ··· 1316 1297 return MODE_CLOCK_HIGH; 1317 1298 1318 1299 req = mode->clock * bits_per_pixel / 8; 1319 - avail = tc->link.base.num_lanes * tc->link.base.rate; 1300 + avail = tc->link.num_lanes * tc->link.rate; 1320 1301 1321 1302 if (req > avail) 1322 1303 return MODE_BAD;
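
The tc358767 conversion above likewise replaces drm_dp_link_configure() with direct DPCD writes. A companion sketch of that pattern, again assuming a caller-provided aux channel and DPCD cache; the helper name here is invented for illustration.

#include <drm/drm_dp_helper.h>

static int example_dp_link_configure(struct drm_dp_aux *aux,
				     const u8 dpcd[DP_RECEIVER_CAP_SIZE])
{
	u8 values[2];
	int err;

	/*
	 * DP_LINK_BW_SET and DP_LANE_COUNT_SET are adjacent registers,
	 * so both can be programmed with a single two-byte write.
	 */
	values[0] = drm_dp_link_rate_to_bw_code(drm_dp_max_link_rate(dpcd));
	values[1] = drm_dp_max_lane_count(dpcd);

	if (drm_dp_enhanced_frame_cap(dpcd))
		values[1] |= DP_LANE_COUNT_ENHANCED_FRAME_EN;

	err = drm_dp_dpcd_write(aux, DP_LINK_BW_SET, values, sizeof(values));
	return err < 0 ? err : 0;
}
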
+1 -1
drivers/gpu/drm/cirrus/cirrus.c
··· 510 510 511 511 /* ------------------------------------------------------------------ */ 512 512 513 - DEFINE_DRM_GEM_SHMEM_FOPS(cirrus_fops); 513 + DEFINE_DRM_GEM_FOPS(cirrus_fops); 514 514 515 515 static struct drm_driver cirrus_driver = { 516 516 .driver_features = DRIVER_MODESET | DRIVER_GEM | DRIVER_ATOMIC,
-247
drivers/gpu/drm/cirrus/cirrus_drv.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0-only */ 2 - /* 3 - * Copyright 2012 Red Hat 4 - * 5 - * Authors: Matthew Garrett 6 - * Dave Airlie 7 - */ 8 - #ifndef __CIRRUS_DRV_H__ 9 - #define __CIRRUS_DRV_H__ 10 - 11 - #include <video/vga.h> 12 - 13 - #include <drm/drm_encoder.h> 14 - #include <drm/drm_fb_helper.h> 15 - 16 - #include <drm/ttm/ttm_bo_api.h> 17 - #include <drm/ttm/ttm_bo_driver.h> 18 - #include <drm/ttm/ttm_placement.h> 19 - #include <drm/ttm/ttm_memory.h> 20 - #include <drm/ttm/ttm_module.h> 21 - 22 - #include <drm/drm_gem.h> 23 - 24 - #define DRIVER_AUTHOR "Matthew Garrett" 25 - 26 - #define DRIVER_NAME "cirrus" 27 - #define DRIVER_DESC "qemu Cirrus emulation" 28 - #define DRIVER_DATE "20110418" 29 - 30 - #define DRIVER_MAJOR 1 31 - #define DRIVER_MINOR 0 32 - #define DRIVER_PATCHLEVEL 0 33 - 34 - #define CIRRUSFB_CONN_LIMIT 1 35 - 36 - #define RREG8(reg) ioread8(((void __iomem *)cdev->rmmio) + (reg)) 37 - #define WREG8(reg, v) iowrite8(v, ((void __iomem *)cdev->rmmio) + (reg)) 38 - #define RREG32(reg) ioread32(((void __iomem *)cdev->rmmio) + (reg)) 39 - #define WREG32(reg, v) iowrite32(v, ((void __iomem *)cdev->rmmio) + (reg)) 40 - 41 - #define SEQ_INDEX 4 42 - #define SEQ_DATA 5 43 - 44 - #define WREG_SEQ(reg, v) \ 45 - do { \ 46 - WREG8(SEQ_INDEX, reg); \ 47 - WREG8(SEQ_DATA, v); \ 48 - } while (0) \ 49 - 50 - #define CRT_INDEX 0x14 51 - #define CRT_DATA 0x15 52 - 53 - #define WREG_CRT(reg, v) \ 54 - do { \ 55 - WREG8(CRT_INDEX, reg); \ 56 - WREG8(CRT_DATA, v); \ 57 - } while (0) \ 58 - 59 - #define GFX_INDEX 0xe 60 - #define GFX_DATA 0xf 61 - 62 - #define WREG_GFX(reg, v) \ 63 - do { \ 64 - WREG8(GFX_INDEX, reg); \ 65 - WREG8(GFX_DATA, v); \ 66 - } while (0) \ 67 - 68 - /* 69 - * Cirrus has a "hidden" DAC register that can be accessed by writing to 70 - * the pixel mask register to reset the state, then reading from the register 71 - * four times. 
The next write will then pass to the DAC 72 - */ 73 - #define VGA_DAC_MASK 0x6 74 - 75 - #define WREG_HDR(v) \ 76 - do { \ 77 - RREG8(VGA_DAC_MASK); \ 78 - RREG8(VGA_DAC_MASK); \ 79 - RREG8(VGA_DAC_MASK); \ 80 - RREG8(VGA_DAC_MASK); \ 81 - WREG8(VGA_DAC_MASK, v); \ 82 - } while (0) \ 83 - 84 - 85 - #define CIRRUS_MAX_FB_HEIGHT 4096 86 - #define CIRRUS_MAX_FB_WIDTH 4096 87 - 88 - #define CIRRUS_DPMS_CLEARED (-1) 89 - 90 - #define to_cirrus_crtc(x) container_of(x, struct cirrus_crtc, base) 91 - #define to_cirrus_encoder(x) container_of(x, struct cirrus_encoder, base) 92 - 93 - struct cirrus_crtc { 94 - struct drm_crtc base; 95 - int last_dpms; 96 - bool enabled; 97 - }; 98 - 99 - struct cirrus_fbdev; 100 - struct cirrus_mode_info { 101 - struct cirrus_crtc *crtc; 102 - /* pointer to fbdev info structure */ 103 - struct cirrus_fbdev *gfbdev; 104 - }; 105 - 106 - struct cirrus_encoder { 107 - struct drm_encoder base; 108 - int last_dpms; 109 - }; 110 - 111 - struct cirrus_connector { 112 - struct drm_connector base; 113 - }; 114 - 115 - struct cirrus_mc { 116 - resource_size_t vram_size; 117 - resource_size_t vram_base; 118 - }; 119 - 120 - struct cirrus_device { 121 - struct drm_device *dev; 122 - unsigned long flags; 123 - 124 - resource_size_t rmmio_base; 125 - resource_size_t rmmio_size; 126 - void __iomem *rmmio; 127 - 128 - struct cirrus_mc mc; 129 - struct cirrus_mode_info mode_info; 130 - 131 - int num_crtc; 132 - int fb_mtrr; 133 - 134 - struct { 135 - struct ttm_bo_device bdev; 136 - } ttm; 137 - bool mm_inited; 138 - }; 139 - 140 - 141 - struct cirrus_fbdev { 142 - struct drm_fb_helper helper; /* must be first */ 143 - struct drm_framebuffer *gfb; 144 - void *sysram; 145 - int size; 146 - int x1, y1, x2, y2; /* dirty rect */ 147 - spinlock_t dirty_lock; 148 - }; 149 - 150 - struct cirrus_bo { 151 - struct ttm_buffer_object bo; 152 - struct ttm_placement placement; 153 - struct ttm_bo_kmap_obj kmap; 154 - struct drm_gem_object gem; 155 - struct ttm_place placements[3]; 156 - int pin_count; 157 - }; 158 - #define gem_to_cirrus_bo(gobj) container_of((gobj), struct cirrus_bo, gem) 159 - 160 - static inline struct cirrus_bo * 161 - cirrus_bo(struct ttm_buffer_object *bo) 162 - { 163 - return container_of(bo, struct cirrus_bo, bo); 164 - } 165 - 166 - 167 - #define to_cirrus_obj(x) container_of(x, struct cirrus_gem_object, base) 168 - 169 - /* cirrus_main.c */ 170 - int cirrus_device_init(struct cirrus_device *cdev, 171 - struct drm_device *ddev, 172 - struct pci_dev *pdev, 173 - uint32_t flags); 174 - void cirrus_device_fini(struct cirrus_device *cdev); 175 - void cirrus_gem_free_object(struct drm_gem_object *obj); 176 - int cirrus_dumb_mmap_offset(struct drm_file *file, 177 - struct drm_device *dev, 178 - uint32_t handle, 179 - uint64_t *offset); 180 - int cirrus_gem_create(struct drm_device *dev, 181 - u32 size, bool iskernel, 182 - struct drm_gem_object **obj); 183 - int cirrus_dumb_create(struct drm_file *file, 184 - struct drm_device *dev, 185 - struct drm_mode_create_dumb *args); 186 - 187 - int cirrus_framebuffer_init(struct drm_device *dev, 188 - struct drm_framebuffer *gfb, 189 - const struct drm_mode_fb_cmd2 *mode_cmd, 190 - struct drm_gem_object *obj); 191 - 192 - bool cirrus_check_framebuffer(struct cirrus_device *cdev, int width, int height, 193 - int bpp, int pitch); 194 - 195 - /* cirrus_display.c */ 196 - int cirrus_modeset_init(struct cirrus_device *cdev); 197 - void cirrus_modeset_fini(struct cirrus_device *cdev); 198 - 199 - /* cirrus_fbdev.c */ 200 - int 
cirrus_fbdev_init(struct cirrus_device *cdev); 201 - void cirrus_fbdev_fini(struct cirrus_device *cdev); 202 - 203 - 204 - 205 - /* cirrus_irq.c */ 206 - void cirrus_driver_irq_preinstall(struct drm_device *dev); 207 - int cirrus_driver_irq_postinstall(struct drm_device *dev); 208 - void cirrus_driver_irq_uninstall(struct drm_device *dev); 209 - irqreturn_t cirrus_driver_irq_handler(int irq, void *arg); 210 - 211 - /* cirrus_kms.c */ 212 - int cirrus_driver_load(struct drm_device *dev, unsigned long flags); 213 - void cirrus_driver_unload(struct drm_device *dev); 214 - extern struct drm_ioctl_desc cirrus_ioctls[]; 215 - extern int cirrus_max_ioctl; 216 - 217 - int cirrus_mm_init(struct cirrus_device *cirrus); 218 - void cirrus_mm_fini(struct cirrus_device *cirrus); 219 - void cirrus_ttm_placement(struct cirrus_bo *bo, int domain); 220 - int cirrus_bo_create(struct drm_device *dev, int size, int align, 221 - uint32_t flags, struct cirrus_bo **pcirrusbo); 222 - int cirrus_mmap(struct file *filp, struct vm_area_struct *vma); 223 - 224 - static inline int cirrus_bo_reserve(struct cirrus_bo *bo, bool no_wait) 225 - { 226 - int ret; 227 - 228 - ret = ttm_bo_reserve(&bo->bo, true, no_wait, NULL); 229 - if (ret) { 230 - if (ret != -ERESTARTSYS && ret != -EBUSY) 231 - DRM_ERROR("reserve failed %p\n", bo); 232 - return ret; 233 - } 234 - return 0; 235 - } 236 - 237 - static inline void cirrus_bo_unreserve(struct cirrus_bo *bo) 238 - { 239 - ttm_bo_unreserve(&bo->bo); 240 - } 241 - 242 - int cirrus_bo_push_sysram(struct cirrus_bo *bo); 243 - int cirrus_bo_pin(struct cirrus_bo *bo, u32 pl_flag, u64 *gpu_addr); 244 - 245 - extern int cirrus_bpp; 246 - 247 - #endif /* __CIRRUS_DRV_H__ */
+4 -4
drivers/gpu/drm/drm_blend.c
··· 132 132 * planes. Without this property the primary plane is always below the cursor 133 133 * plane, and ordering between all other planes is undefined. The positive 134 134 * Z axis points towards the user, i.e. planes with lower Z position values 135 - * are underneath planes with higher Z position values. Note that the Z 136 - * position value can also be immutable, to inform userspace about the 137 - * hard-coded stacking of overlay planes, see 138 - * drm_plane_create_zpos_immutable_property(). 135 + * are underneath planes with higher Z position values. Two planes with the 136 + * same Z position value have undefined ordering. Note that the Z position 137 + * value can also be immutable, to inform userspace about the hard-coded 138 + * stacking of planes, see drm_plane_create_zpos_immutable_property(). 139 139 * 140 140 * pixel blend mode: 141 141 * Pixel blend mode is set up with drm_plane_create_blend_mode_property().
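The clarified zpos text above documents the properties created by drm_plane_create_zpos_property() and drm_plane_create_zpos_immutable_property(). As a hedged illustration (plane variables and ranges are made up), a driver with hard-wired plane stacking versus userspace-controlled stacking would do:

    /* Illustrative only. */
    ret = drm_plane_create_zpos_immutable_property(overlay_plane, 1);
    if (ret)
            return ret;

    /* or, when userspace may reorder planes between 0 and 2: */
    ret = drm_plane_create_zpos_property(primary_plane, 0, 0, 2);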
+5 -3
drivers/gpu/drm/drm_dp_cec.c
··· 8 8 #include <linux/kernel.h> 9 9 #include <linux/module.h> 10 10 #include <linux/slab.h> 11 - #include <drm/drm_connector.h> 12 - #include <drm/drm_dp_helper.h> 13 - #include <drm/drmP.h> 11 + 14 12 #include <media/cec.h> 13 + 14 + #include <drm/drm_connector.h> 15 + #include <drm/drm_device.h> 16 + #include <drm/drm_dp_helper.h> 15 17 16 18 /* 17 19 * Unfortunately it turns out that we have a chicken-and-egg situation
+28 -141
drivers/gpu/drm/drm_dp_helper.c
··· 120 120 } 121 121 EXPORT_SYMBOL(drm_dp_get_adjust_request_pre_emphasis); 122 122 123 - void drm_dp_link_train_clock_recovery_delay(const u8 dpcd[DP_RECEIVER_CAP_SIZE]) { 124 - int rd_interval = dpcd[DP_TRAINING_AUX_RD_INTERVAL] & 125 - DP_TRAINING_AUX_RD_MASK; 123 + u8 drm_dp_get_adjust_request_post_cursor(const u8 link_status[DP_LINK_STATUS_SIZE], 124 + unsigned int lane) 125 + { 126 + unsigned int offset = DP_ADJUST_REQUEST_POST_CURSOR2; 127 + u8 value = dp_link_status(link_status, offset); 128 + 129 + return (value >> (lane << 1)) & 0x3; 130 + } 131 + EXPORT_SYMBOL(drm_dp_get_adjust_request_post_cursor); 132 + 133 + void drm_dp_link_train_clock_recovery_delay(const u8 dpcd[DP_RECEIVER_CAP_SIZE]) 134 + { 135 + unsigned long rd_interval = dpcd[DP_TRAINING_AUX_RD_INTERVAL] & 136 + DP_TRAINING_AUX_RD_MASK; 126 137 127 138 if (rd_interval > 4) 128 - DRM_DEBUG_KMS("AUX interval %d, out of range (max 4)\n", 139 + DRM_DEBUG_KMS("AUX interval %lu, out of range (max 4)\n", 129 140 rd_interval); 130 141 131 142 if (rd_interval == 0 || dpcd[DP_DPCD_REV] >= DP_DPCD_REV_14) 132 - udelay(100); 143 + rd_interval = 100; 133 144 else 134 - mdelay(rd_interval * 4); 145 + rd_interval *= 4 * USEC_PER_MSEC; 146 + 147 + usleep_range(rd_interval, rd_interval * 2); 135 148 } 136 149 EXPORT_SYMBOL(drm_dp_link_train_clock_recovery_delay); 137 150 138 - void drm_dp_link_train_channel_eq_delay(const u8 dpcd[DP_RECEIVER_CAP_SIZE]) { 139 - int rd_interval = dpcd[DP_TRAINING_AUX_RD_INTERVAL] & 140 - DP_TRAINING_AUX_RD_MASK; 151 + void drm_dp_link_train_channel_eq_delay(const u8 dpcd[DP_RECEIVER_CAP_SIZE]) 152 + { 153 + unsigned long rd_interval = dpcd[DP_TRAINING_AUX_RD_INTERVAL] & 154 + DP_TRAINING_AUX_RD_MASK; 141 155 142 156 if (rd_interval > 4) 143 - DRM_DEBUG_KMS("AUX interval %d, out of range (max 4)\n", 157 + DRM_DEBUG_KMS("AUX interval %lu, out of range (max 4)\n", 144 158 rd_interval); 145 159 146 160 if (rd_interval == 0) 147 - udelay(400); 161 + rd_interval = 400; 148 162 else 149 - mdelay(rd_interval * 4); 163 + rd_interval *= 4 * USEC_PER_MSEC; 164 + 165 + usleep_range(rd_interval, rd_interval * 2); 150 166 } 151 167 EXPORT_SYMBOL(drm_dp_link_train_channel_eq_delay); 152 168 ··· 236 220 } 237 221 238 222 ret = aux->transfer(aux, &msg); 239 - 240 223 if (ret >= 0) { 241 224 native_reply = msg.reply & DP_AUX_NATIVE_REPLY_MASK; 242 225 if (native_reply == DP_AUX_NATIVE_REPLY_ACK) { ··· 350 335 DP_LINK_STATUS_SIZE); 351 336 } 352 337 EXPORT_SYMBOL(drm_dp_dpcd_read_link_status); 353 - 354 - /** 355 - * drm_dp_link_probe() - probe a DisplayPort link for capabilities 356 - * @aux: DisplayPort AUX channel 357 - * @link: pointer to structure in which to return link capabilities 358 - * 359 - * The structure filled in by this function can usually be passed directly 360 - * into drm_dp_link_power_up() and drm_dp_link_configure() to power up and 361 - * configure the link based on the link's capabilities. 362 - * 363 - * Returns 0 on success or a negative error code on failure. 
364 - */ 365 - int drm_dp_link_probe(struct drm_dp_aux *aux, struct drm_dp_link *link) 366 - { 367 - u8 values[3]; 368 - int err; 369 - 370 - memset(link, 0, sizeof(*link)); 371 - 372 - err = drm_dp_dpcd_read(aux, DP_DPCD_REV, values, sizeof(values)); 373 - if (err < 0) 374 - return err; 375 - 376 - link->revision = values[0]; 377 - link->rate = drm_dp_bw_code_to_link_rate(values[1]); 378 - link->num_lanes = values[2] & DP_MAX_LANE_COUNT_MASK; 379 - 380 - if (values[2] & DP_ENHANCED_FRAME_CAP) 381 - link->capabilities |= DP_LINK_CAP_ENHANCED_FRAMING; 382 - 383 - return 0; 384 - } 385 - EXPORT_SYMBOL(drm_dp_link_probe); 386 - 387 - /** 388 - * drm_dp_link_power_up() - power up a DisplayPort link 389 - * @aux: DisplayPort AUX channel 390 - * @link: pointer to a structure containing the link configuration 391 - * 392 - * Returns 0 on success or a negative error code on failure. 393 - */ 394 - int drm_dp_link_power_up(struct drm_dp_aux *aux, struct drm_dp_link *link) 395 - { 396 - u8 value; 397 - int err; 398 - 399 - /* DP_SET_POWER register is only available on DPCD v1.1 and later */ 400 - if (link->revision < 0x11) 401 - return 0; 402 - 403 - err = drm_dp_dpcd_readb(aux, DP_SET_POWER, &value); 404 - if (err < 0) 405 - return err; 406 - 407 - value &= ~DP_SET_POWER_MASK; 408 - value |= DP_SET_POWER_D0; 409 - 410 - err = drm_dp_dpcd_writeb(aux, DP_SET_POWER, value); 411 - if (err < 0) 412 - return err; 413 - 414 - /* 415 - * According to the DP 1.1 specification, a "Sink Device must exit the 416 - * power saving state within 1 ms" (Section 2.5.3.1, Table 5-52, "Sink 417 - * Control Field" (register 0x600). 418 - */ 419 - usleep_range(1000, 2000); 420 - 421 - return 0; 422 - } 423 - EXPORT_SYMBOL(drm_dp_link_power_up); 424 - 425 - /** 426 - * drm_dp_link_power_down() - power down a DisplayPort link 427 - * @aux: DisplayPort AUX channel 428 - * @link: pointer to a structure containing the link configuration 429 - * 430 - * Returns 0 on success or a negative error code on failure. 431 - */ 432 - int drm_dp_link_power_down(struct drm_dp_aux *aux, struct drm_dp_link *link) 433 - { 434 - u8 value; 435 - int err; 436 - 437 - /* DP_SET_POWER register is only available on DPCD v1.1 and later */ 438 - if (link->revision < 0x11) 439 - return 0; 440 - 441 - err = drm_dp_dpcd_readb(aux, DP_SET_POWER, &value); 442 - if (err < 0) 443 - return err; 444 - 445 - value &= ~DP_SET_POWER_MASK; 446 - value |= DP_SET_POWER_D3; 447 - 448 - err = drm_dp_dpcd_writeb(aux, DP_SET_POWER, value); 449 - if (err < 0) 450 - return err; 451 - 452 - return 0; 453 - } 454 - EXPORT_SYMBOL(drm_dp_link_power_down); 455 - 456 - /** 457 - * drm_dp_link_configure() - configure a DisplayPort link 458 - * @aux: DisplayPort AUX channel 459 - * @link: pointer to a structure containing the link configuration 460 - * 461 - * Returns 0 on success or a negative error code on failure. 462 - */ 463 - int drm_dp_link_configure(struct drm_dp_aux *aux, struct drm_dp_link *link) 464 - { 465 - u8 values[2]; 466 - int err; 467 - 468 - values[0] = drm_dp_link_rate_to_bw_code(link->rate); 469 - values[1] = link->num_lanes; 470 - 471 - if (link->capabilities & DP_LINK_CAP_ENHANCED_FRAMING) 472 - values[1] |= DP_LANE_COUNT_ENHANCED_FRAME_EN; 473 - 474 - err = drm_dp_dpcd_write(aux, DP_LINK_BW_SET, values, sizeof(values)); 475 - if (err < 0) 476 - return err; 477 - 478 - return 0; 479 - } 480 - EXPORT_SYMBOL(drm_dp_link_configure); 481 338 482 339 /** 483 340 * drm_dp_downstream_max_clock() - extract branch device max
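With the hunk above, the training delay helpers sleep with usleep_range() instead of busy-waiting: a DP_TRAINING_AUX_RD_INTERVAL value of 2, for instance, now becomes usleep_range(8000, 16000) rather than mdelay(8), so callers need to be in sleepable context (which link-training paths normally are). The helpers are still used from the usual training loop; a hedged sketch of the clock-recovery phase, with adjustment and error handling trimmed:

    /* Sketch: clock-recovery loop using the (now sleeping) delay helper. */
    u8 link_status[DP_LINK_STATUS_SIZE];
    int tries;

    for (tries = 0; tries < 5; tries++) {
            drm_dp_link_train_clock_recovery_delay(dpcd);

            if (drm_dp_dpcd_read_link_status(aux, link_status) < 0)
                    break;

            if (drm_dp_clock_recovery_ok(link_status, lane_count))
                    return 0;

            /* adjust vswing/pre-emphasis from the request fields, retry */
    }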
+2 -6
drivers/gpu/drm/drm_dp_mst_topology.c
··· 3540 3540 { 3541 3541 struct drm_dp_mst_topology_state *topology_state; 3542 3542 struct drm_dp_vcpi_allocation *pos, *vcpi = NULL; 3543 - int prev_slots, req_slots, ret; 3543 + int prev_slots, req_slots; 3544 3544 3545 3545 topology_state = drm_atomic_get_mst_topology_state(state, mgr); 3546 3546 if (IS_ERR(topology_state)) ··· 3587 3587 } 3588 3588 vcpi->vcpi = req_slots; 3589 3589 3590 - ret = req_slots; 3591 - return ret; 3590 + return req_slots; 3592 3591 } 3593 3592 EXPORT_SYMBOL(drm_dp_atomic_find_vcpi_slots); 3594 3593 ··· 4183 4184 struct drm_dp_mst_topology_state *drm_atomic_get_mst_topology_state(struct drm_atomic_state *state, 4184 4185 struct drm_dp_mst_topology_mgr *mgr) 4185 4186 { 4186 - struct drm_device *dev = mgr->dev; 4187 - 4188 - WARN_ON(!drm_modeset_is_locked(&dev->mode_config.connection_mutex)); 4189 4187 return to_dp_mst_topology_state(drm_atomic_get_private_obj_state(state, &mgr->base)); 4190 4188 } 4191 4189 EXPORT_SYMBOL(drm_atomic_get_mst_topology_state);
+68 -46
drivers/gpu/drm/drm_edid.c
··· 2192 2192 return LEVEL_CVT; 2193 2193 if (drm_gtf2_hbreak(edid)) 2194 2194 return LEVEL_GTF2; 2195 - return LEVEL_GTF; 2195 + if (edid->features & DRM_EDID_FEATURE_DEFAULT_GTF) 2196 + return LEVEL_GTF; 2196 2197 } 2197 2198 return LEVEL_DMT; 2198 2199 } ··· 3209 3208 return vic > 0 && vic < ARRAY_SIZE(edid_cea_modes); 3210 3209 } 3211 3210 3212 - /** 3213 - * drm_get_cea_aspect_ratio - get the picture aspect ratio corresponding to 3214 - * the input VIC from the CEA mode list 3215 - * @video_code: ID given to each of the CEA modes 3216 - * 3217 - * Returns picture aspect ratio 3218 - */ 3219 - enum hdmi_picture_aspect drm_get_cea_aspect_ratio(const u8 video_code) 3211 + static enum hdmi_picture_aspect drm_get_cea_aspect_ratio(const u8 video_code) 3220 3212 { 3221 3213 return edid_cea_modes[video_code].picture_aspect_ratio; 3222 3214 } 3223 - EXPORT_SYMBOL(drm_get_cea_aspect_ratio); 3224 3215 3225 3216 /* 3226 3217 * Calculate the alternate clock for HDMI modes (those from the HDMI vendor ··· 5164 5171 } 5165 5172 EXPORT_SYMBOL(drm_hdmi_infoframe_set_hdr_metadata); 5166 5173 5174 + static u8 drm_mode_hdmi_vic(struct drm_connector *connector, 5175 + const struct drm_display_mode *mode) 5176 + { 5177 + bool has_hdmi_infoframe = connector ? 5178 + connector->display_info.has_hdmi_infoframe : false; 5179 + 5180 + if (!has_hdmi_infoframe) 5181 + return 0; 5182 + 5183 + /* No HDMI VIC when signalling 3D video format */ 5184 + if (mode->flags & DRM_MODE_FLAG_3D_MASK) 5185 + return 0; 5186 + 5187 + return drm_match_hdmi_mode(mode); 5188 + } 5189 + 5190 + static u8 drm_mode_cea_vic(struct drm_connector *connector, 5191 + const struct drm_display_mode *mode) 5192 + { 5193 + u8 vic; 5194 + 5195 + /* 5196 + * HDMI spec says if a mode is found in HDMI 1.4b 4K modes 5197 + * we should send its VIC in vendor infoframes, else send the 5198 + * VIC in AVI infoframes. Lets check if this mode is present in 5199 + * HDMI 1.4b 4K modes 5200 + */ 5201 + if (drm_mode_hdmi_vic(connector, mode)) 5202 + return 0; 5203 + 5204 + vic = drm_match_cea_mode(mode); 5205 + 5206 + /* 5207 + * HDMI 1.4 VIC range: 1 <= VIC <= 64 (CEA-861-D) but 5208 + * HDMI 2.0 VIC range: 1 <= VIC <= 107 (CEA-861-F). So we 5209 + * have to make sure we dont break HDMI 1.4 sinks. 5210 + */ 5211 + if (!is_hdmi2_sink(connector) && vic > 64) 5212 + return 0; 5213 + 5214 + return vic; 5215 + } 5216 + 5167 5217 /** 5168 5218 * drm_hdmi_avi_infoframe_from_display_mode() - fill an HDMI AVI infoframe with 5169 5219 * data from a DRM display mode ··· 5234 5198 if (mode->flags & DRM_MODE_FLAG_DBLCLK) 5235 5199 frame->pixel_repeat = 1; 5236 5200 5237 - frame->video_code = drm_match_cea_mode(mode); 5238 - 5239 - /* 5240 - * HDMI 1.4 VIC range: 1 <= VIC <= 64 (CEA-861-D) but 5241 - * HDMI 2.0 VIC range: 1 <= VIC <= 107 (CEA-861-F). So we 5242 - * have to make sure we dont break HDMI 1.4 sinks. 5243 - */ 5244 - if (!is_hdmi2_sink(connector) && frame->video_code > 64) 5245 - frame->video_code = 0; 5246 - 5247 - /* 5248 - * HDMI spec says if a mode is found in HDMI 1.4b 4K modes 5249 - * we should send its VIC in vendor infoframes, else send the 5250 - * VIC in AVI infoframes. 
Lets check if this mode is present in 5251 - * HDMI 1.4b 4K modes 5252 - */ 5253 - if (frame->video_code) { 5254 - u8 vendor_if_vic = drm_match_hdmi_mode(mode); 5255 - bool is_s3d = mode->flags & DRM_MODE_FLAG_3D_MASK; 5256 - 5257 - if (drm_valid_hdmi_vic(vendor_if_vic) && !is_s3d) 5258 - frame->video_code = 0; 5259 - } 5201 + frame->video_code = drm_mode_cea_vic(connector, mode); 5260 5202 5261 5203 frame->picture_aspect = HDMI_PICTURE_ASPECT_NONE; 5262 5204 ··· 5399 5385 } 5400 5386 EXPORT_SYMBOL(drm_hdmi_avi_infoframe_quant_range); 5401 5387 5388 + /** 5389 + * drm_hdmi_avi_infoframe_bars() - fill the HDMI AVI infoframe 5390 + * bar information 5391 + * @frame: HDMI AVI infoframe 5392 + * @conn_state: connector state 5393 + */ 5394 + void 5395 + drm_hdmi_avi_infoframe_bars(struct hdmi_avi_infoframe *frame, 5396 + const struct drm_connector_state *conn_state) 5397 + { 5398 + frame->right_bar = conn_state->tv.margins.right; 5399 + frame->left_bar = conn_state->tv.margins.left; 5400 + frame->top_bar = conn_state->tv.margins.top; 5401 + frame->bottom_bar = conn_state->tv.margins.bottom; 5402 + } 5403 + EXPORT_SYMBOL(drm_hdmi_avi_infoframe_bars); 5404 + 5402 5405 static enum hdmi_3d_structure 5403 5406 s3d_structure_from_display_mode(const struct drm_display_mode *mode) 5404 5407 { ··· 5468 5437 bool has_hdmi_infoframe = connector ? 5469 5438 connector->display_info.has_hdmi_infoframe : false; 5470 5439 int err; 5471 - u32 s3d_flags; 5472 - u8 vic; 5473 5440 5474 5441 if (!frame || !mode) 5475 5442 return -EINVAL; ··· 5475 5446 if (!has_hdmi_infoframe) 5476 5447 return -EINVAL; 5477 5448 5478 - vic = drm_match_hdmi_mode(mode); 5479 - s3d_flags = mode->flags & DRM_MODE_FLAG_3D_MASK; 5449 + err = hdmi_vendor_infoframe_init(frame); 5450 + if (err < 0) 5451 + return err; 5480 5452 5481 5453 /* 5482 5454 * Even if it's not absolutely necessary to send the infoframe ··· 5488 5458 * mode if the source simply stops sending the infoframe when 5489 5459 * it wants to switch from 3D to 2D. 5490 5460 */ 5491 - 5492 - if (vic && s3d_flags) 5493 - return -EINVAL; 5494 - 5495 - err = hdmi_vendor_infoframe_init(frame); 5496 - if (err < 0) 5497 - return err; 5498 - 5499 - frame->vic = vic; 5461 + frame->vic = drm_mode_hdmi_vic(connector, mode); 5500 5462 frame->s3d_struct = s3d_structure_from_display_mode(mode); 5501 5463 5502 5464 return 0;
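The new drm_hdmi_avi_infoframe_bars() simply copies the connector state's TV margins into the AVI infoframe bar fields. A hedged sketch of how an HDMI encoder driver might combine it with the existing AVI infoframe helpers (the variables are assumed to come from the driver's atomic enable path):

    /* Sketch: fill and pack an AVI infoframe including bar information. */
    struct hdmi_avi_infoframe frame;
    u8 buf[HDMI_INFOFRAME_SIZE(AVI)];
    int ret;

    ret = drm_hdmi_avi_infoframe_from_display_mode(&frame, connector, mode);
    if (ret < 0)
            return ret;

    drm_hdmi_avi_infoframe_bars(&frame, conn_state);

    ret = hdmi_avi_infoframe_pack(&frame, buf, sizeof(buf));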
+18 -9
drivers/gpu/drm/drm_gem.c
··· 1099 1099 struct vm_area_struct *vma) 1100 1100 { 1101 1101 struct drm_device *dev = obj->dev; 1102 + int ret; 1102 1103 1103 1104 /* Check for valid size. */ 1104 1105 if (obj_size < vma->vm_end - vma->vm_start) 1105 1106 return -EINVAL; 1106 1107 1107 - if (obj->funcs && obj->funcs->vm_ops) 1108 - vma->vm_ops = obj->funcs->vm_ops; 1109 - else if (dev->driver->gem_vm_ops) 1110 - vma->vm_ops = dev->driver->gem_vm_ops; 1111 - else 1112 - return -EINVAL; 1108 + if (obj->funcs && obj->funcs->mmap) { 1109 + ret = obj->funcs->mmap(obj, vma); 1110 + if (ret) 1111 + return ret; 1112 + WARN_ON(!(vma->vm_flags & VM_DONTEXPAND)); 1113 + } else { 1114 + if (obj->funcs && obj->funcs->vm_ops) 1115 + vma->vm_ops = obj->funcs->vm_ops; 1116 + else if (dev->driver->gem_vm_ops) 1117 + vma->vm_ops = dev->driver->gem_vm_ops; 1118 + else 1119 + return -EINVAL; 1113 1120 1114 - vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP; 1121 + vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP; 1122 + vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags)); 1123 + vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot); 1124 + } 1125 + 1115 1126 vma->vm_private_data = obj; 1116 - vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags)); 1117 - vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot); 1118 1127 1119 1128 /* Take a ref for this mapping of the object, so that the fault 1120 1129 * handler can dereference the mmap offset's pointer to the object.
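With the drm_gem_mmap_obj() change above, an object that provides &drm_gem_object_funcs.mmap is fully responsible for the VMA setup: the core only installs vm_private_data and warns if the callback forgot VM_DONTEXPAND. An illustrative skeleton of such a callback, mirroring what the shmem helper below does (foo_gem_vm_ops is hypothetical):

    static int foo_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
    {
            /* The callback sets up flags, protection and vm_ops itself. */
            vma->vm_flags |= VM_MIXEDMAP | VM_DONTEXPAND;
            vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
            vma->vm_ops = &foo_gem_vm_ops;

            /* Drop the fake offset so faults see object-relative offsets. */
            vma->vm_pgoff -= drm_vma_node_start(&obj->vma_node);

            return 0;
    }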
+10 -18
drivers/gpu/drm/drm_gem_shmem_helper.c
··· 32 32 .get_sg_table = drm_gem_shmem_get_sg_table, 33 33 .vmap = drm_gem_shmem_vmap, 34 34 .vunmap = drm_gem_shmem_vunmap, 35 - .vm_ops = &drm_gem_shmem_vm_ops, 35 + .mmap = drm_gem_shmem_mmap, 36 36 }; 37 37 38 38 /** ··· 505 505 drm_gem_vm_close(vma); 506 506 } 507 507 508 - const struct vm_operations_struct drm_gem_shmem_vm_ops = { 508 + static const struct vm_operations_struct drm_gem_shmem_vm_ops = { 509 509 .fault = drm_gem_shmem_fault, 510 510 .open = drm_gem_shmem_vm_open, 511 511 .close = drm_gem_shmem_vm_close, 512 512 }; 513 - EXPORT_SYMBOL_GPL(drm_gem_shmem_vm_ops); 514 513 515 514 /** 516 515 * drm_gem_shmem_mmap - Memory-map a shmem GEM object 517 - * @filp: File object 516 + * @obj: gem object 518 517 * @vma: VMA for the area to be mapped 519 518 * 520 519 * This function implements an augmented version of the GEM DRM file mmap 521 520 * operation for shmem objects. Drivers which employ the shmem helpers should 522 - * use this function as their &file_operations.mmap handler in the DRM device file's 523 - * file_operations structure. 524 - * 525 - * Instead of directly referencing this function, drivers should use the 526 - * DEFINE_DRM_GEM_SHMEM_FOPS() macro. 521 + * use this function as their &drm_gem_object_funcs.mmap handler. 527 522 * 528 523 * Returns: 529 524 * 0 on success or a negative error code on failure. 530 525 */ 531 - int drm_gem_shmem_mmap(struct file *filp, struct vm_area_struct *vma) 526 + int drm_gem_shmem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma) 532 527 { 533 528 struct drm_gem_shmem_object *shmem; 534 529 int ret; 535 530 536 - ret = drm_gem_mmap(filp, vma); 537 - if (ret) 538 - return ret; 539 - 540 - shmem = to_drm_gem_shmem_obj(vma->vm_private_data); 531 + shmem = to_drm_gem_shmem_obj(obj); 541 532 542 533 ret = drm_gem_shmem_get_pages(shmem); 543 534 if (ret) { ··· 536 545 return ret; 537 546 } 538 547 539 - /* VM_PFNMAP was set by drm_gem_mmap() */ 540 - vma->vm_flags &= ~VM_PFNMAP; 541 - vma->vm_flags |= VM_MIXEDMAP; 548 + vma->vm_flags |= VM_MIXEDMAP | VM_DONTEXPAND; 549 + vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags)); 550 + vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot); 551 + vma->vm_ops = &drm_gem_shmem_vm_ops; 542 552 543 553 /* Remove the fake offset */ 544 554 vma->vm_pgoff -= drm_vma_node_start(&shmem->base.vma_node);
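Since drm_gem_shmem_mmap() now takes the GEM object and is reached through &drm_gem_object_funcs.mmap, shmem-based drivers no longer need a dedicated file_operations mmap handler; the cirrus switch to DEFINE_DRM_GEM_FOPS() above is the driver-side half of this. A hedged sketch of the minimal wiring (driver name is a placeholder; the table mirrors the one lima installs later in this series):

    static const struct drm_gem_object_funcs foo_gem_funcs = {
            .free         = drm_gem_shmem_free_object,
            .print_info   = drm_gem_shmem_print_info,
            .pin          = drm_gem_shmem_pin,
            .unpin        = drm_gem_shmem_unpin,
            .get_sg_table = drm_gem_shmem_get_sg_table,
            .vmap         = drm_gem_shmem_vmap,
            .vunmap       = drm_gem_shmem_vunmap,
            .mmap         = drm_gem_shmem_mmap,     /* new object-level mmap */
    };

    DEFINE_DRM_GEM_FOPS(foo_fops);  /* replaces DEFINE_DRM_GEM_SHMEM_FOPS */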
+17
drivers/gpu/drm/drm_gem_ttm_helper.c
··· 52 52 } 53 53 EXPORT_SYMBOL(drm_gem_ttm_print_info); 54 54 55 + /** 56 + * drm_gem_ttm_mmap() - mmap &ttm_buffer_object 57 + * @gem: GEM object. 58 + * @vma: vm area. 59 + * 60 + * This function can be used as &drm_gem_object_funcs.mmap 61 + * callback. 62 + */ 63 + int drm_gem_ttm_mmap(struct drm_gem_object *gem, 64 + struct vm_area_struct *vma) 65 + { 66 + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem); 67 + 68 + return ttm_bo_mmap_obj(vma, bo); 69 + } 70 + EXPORT_SYMBOL(drm_gem_ttm_mmap); 71 + 55 72 MODULE_DESCRIPTION("DRM gem ttm helpers"); 56 73 MODULE_LICENSE("GPL");
+1 -55
drivers/gpu/drm/drm_gem_vram_helper.c
··· 552 552 *pl = gbo->placement; 553 553 } 554 554 555 - static int drm_gem_vram_bo_driver_verify_access(struct drm_gem_vram_object *gbo, 556 - struct file *filp) 557 - { 558 - return drm_vma_node_verify_access(&gbo->bo.base.vma_node, 559 - filp->private_data); 560 - } 561 - 562 555 static void drm_gem_vram_bo_driver_move_notify(struct drm_gem_vram_object *gbo, 563 556 bool evict, 564 557 struct ttm_mem_reg *new_mem) ··· 730 737 .unpin = drm_gem_vram_object_unpin, 731 738 .vmap = drm_gem_vram_object_vmap, 732 739 .vunmap = drm_gem_vram_object_vunmap, 740 + .mmap = drm_gem_ttm_mmap, 733 741 .print_info = drm_gem_ttm_print_info, 734 742 }; 735 743 ··· 816 822 drm_gem_vram_bo_driver_evict_flags(gbo, placement); 817 823 } 818 824 819 - static int bo_driver_verify_access(struct ttm_buffer_object *bo, 820 - struct file *filp) 821 - { 822 - struct drm_gem_vram_object *gbo; 823 - 824 - /* TTM may pass BOs that are not GEM VRAM BOs. */ 825 - if (!drm_is_gem_vram(bo)) 826 - return -EINVAL; 827 - 828 - gbo = drm_gem_vram_of_bo(bo); 829 - 830 - return drm_gem_vram_bo_driver_verify_access(gbo, filp); 831 - } 832 - 833 825 static void bo_driver_move_notify(struct ttm_buffer_object *bo, 834 826 bool evict, 835 827 struct ttm_mem_reg *new_mem) ··· 872 892 .init_mem_type = bo_driver_init_mem_type, 873 893 .eviction_valuable = ttm_bo_eviction_valuable, 874 894 .evict_flags = bo_driver_evict_flags, 875 - .verify_access = bo_driver_verify_access, 876 895 .move_notify = bo_driver_move_notify, 877 896 .io_mem_reserve = bo_driver_io_mem_reserve, 878 897 .io_mem_free = bo_driver_io_mem_free, ··· 950 971 ttm_bo_device_release(&vmm->bdev); 951 972 } 952 973 953 - static int drm_vram_mm_mmap(struct file *filp, struct vm_area_struct *vma, 954 - struct drm_vram_mm *vmm) 955 - { 956 - return ttm_bo_mmap(filp, vma, &vmm->bdev); 957 - } 958 - 959 974 /* 960 975 * Helpers for integration with struct drm_device 961 976 */ ··· 1005 1032 dev->vram_mm = NULL; 1006 1033 } 1007 1034 EXPORT_SYMBOL(drm_vram_helper_release_mm); 1008 - 1009 - /* 1010 - * Helpers for &struct file_operations 1011 - */ 1012 - 1013 - /** 1014 - * drm_vram_mm_file_operations_mmap() - \ 1015 - Implements &struct file_operations.mmap() 1016 - * @filp: the mapping's file structure 1017 - * @vma: the mapping's memory area 1018 - * 1019 - * Returns: 1020 - * 0 on success, or 1021 - * a negative error code otherwise. 1022 - */ 1023 - int drm_vram_mm_file_operations_mmap( 1024 - struct file *filp, struct vm_area_struct *vma) 1025 - { 1026 - struct drm_file *file_priv = filp->private_data; 1027 - struct drm_device *dev = file_priv->minor->dev; 1028 - 1029 - if (WARN_ONCE(!dev->vram_mm, "VRAM MM not initialized")) 1030 - return -EINVAL; 1031 - 1032 - return drm_vram_mm_mmap(filp, vma, dev->vram_mm); 1033 - } 1034 - EXPORT_SYMBOL(drm_vram_mm_file_operations_mmap);
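The new drm_gem_ttm_mmap() (two hunks up) forwards mmap to ttm_bo_mmap_obj(), which is what allows the VRAM helper here to drop its verify_access hook and drm_vram_mm_file_operations_mmap(), and lets hibmc and mgag200 below switch to the generic fops. The driver-side conversion is then just (names illustrative):

    /* Sketch: a VRAM-helper driver after this series. */
    DEFINE_DRM_GEM_FOPS(bar_fops);  /* replaces the DRM_VRAM_MM_FILE_OPERATIONS fops;
                                     * mmap reaches drm_gem_ttm_mmap() through the
                                     * VRAM helper's object funcs */

    static struct drm_driver bar_driver = {
            .driver_features = DRIVER_GEM | DRIVER_MODESET,
            .fops            = &bar_fops,
            /* ... */
    };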
+1 -1
drivers/gpu/drm/drm_mipi_dbi.c
··· 1021 1021 unsigned int i; 1022 1022 1023 1023 for (i = 0; i < len; i++) 1024 - data[i] = (buf[i] << 1) | !!(buf[i + 1] & BIT(7)); 1024 + data[i] = (buf[i] << 1) | (buf[i + 1] >> 7); 1025 1025 } 1026 1026 1027 1027 MIPI_DBI_DEBUG_COMMAND(*cmd, data, len);
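The mipi-dbi change is behaviour-neutral: for an unsigned byte, (buf[i + 1] >> 7) and !!(buf[i + 1] & BIT(7)) both yield the top bit. A quick worked check with made-up values:

    /* buf[i] = 0x12, buf[i + 1] = 0x9a */
    (0x12 << 1) | !!(0x9a & BIT(7))   /* old: 0x24 | 1 = 0x25 */
    (0x12 << 1) | (0x9a >> 7)         /* new: 0x24 | 1 = 0x25 */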
-2
drivers/gpu/drm/drm_mode_config.c
··· 428 428 * Note that since this /should/ happen single-threaded at driver/device 429 429 * teardown time, no locking is required. It's the driver's job to ensure that 430 430 * this guarantee actually holds true. 431 - * 432 - * FIXME: cleanup any dangling user buffer objects too 433 431 */ 434 432 void drm_mode_config_cleanup(struct drm_device *dev) 435 433 {
+9
drivers/gpu/drm/drm_prime.c
··· 713 713 struct file *fil; 714 714 int ret; 715 715 716 + if (obj->funcs && obj->funcs->mmap) { 717 + ret = obj->funcs->mmap(obj, vma); 718 + if (ret) 719 + return ret; 720 + vma->vm_private_data = obj; 721 + drm_gem_object_get(obj); 722 + return 0; 723 + } 724 + 716 725 priv = kzalloc(sizeof(*priv), GFP_KERNEL); 717 726 fil = kzalloc(sizeof(*fil), GFP_KERNEL); 718 727 if (!priv || !fil) {
+21 -14
drivers/gpu/drm/drm_syncobj.c
··· 1280 1280 if (!drm_core_check_feature(dev, DRIVER_SYNCOBJ_TIMELINE)) 1281 1281 return -EOPNOTSUPP; 1282 1282 1283 - if (args->pad != 0) 1283 + if (args->flags != 0) 1284 1284 return -EINVAL; 1285 1285 1286 1286 if (args->count_handles == 0) ··· 1351 1351 if (!drm_core_check_feature(dev, DRIVER_SYNCOBJ_TIMELINE)) 1352 1352 return -EOPNOTSUPP; 1353 1353 1354 - if (args->pad != 0) 1354 + if (args->flags & ~DRM_SYNCOBJ_QUERY_FLAGS_LAST_SUBMITTED) 1355 1355 return -EINVAL; 1356 1356 1357 1357 if (args->count_handles == 0) ··· 1372 1372 fence = drm_syncobj_fence_get(syncobjs[i]); 1373 1373 chain = to_dma_fence_chain(fence); 1374 1374 if (chain) { 1375 - struct dma_fence *iter, *last_signaled = NULL; 1375 + struct dma_fence *iter, *last_signaled = 1376 + dma_fence_get(fence); 1376 1377 1377 - dma_fence_chain_for_each(iter, fence) { 1378 - if (iter->context != fence->context) { 1379 - dma_fence_put(iter); 1380 - /* It is most likely that timeline has 1381 - * unorder points. */ 1382 - break; 1378 + if (args->flags & 1379 + DRM_SYNCOBJ_QUERY_FLAGS_LAST_SUBMITTED) { 1380 + point = fence->seqno; 1381 + } else { 1382 + dma_fence_chain_for_each(iter, fence) { 1383 + if (iter->context != fence->context) { 1384 + dma_fence_put(iter); 1385 + /* It is most likely that timeline has 1386 + * unorder points. */ 1387 + break; 1388 + } 1389 + dma_fence_put(last_signaled); 1390 + last_signaled = dma_fence_get(iter); 1383 1391 } 1384 - dma_fence_put(last_signaled); 1385 - last_signaled = dma_fence_get(iter); 1392 + point = dma_fence_is_signaled(last_signaled) ? 1393 + last_signaled->seqno : 1394 + to_dma_fence_chain(last_signaled)->prev_seqno; 1386 1395 } 1387 - point = dma_fence_is_signaled(last_signaled) ? 1388 - last_signaled->seqno : 1389 - to_dma_fence_chain(last_signaled)->prev_seqno; 1390 1396 dma_fence_put(last_signaled); 1391 1397 } else { 1392 1398 point = 0; 1393 1399 } 1400 + dma_fence_put(fence); 1394 1401 ret = copy_to_user(&points[i], &point, sizeof(uint64_t)); 1395 1402 ret = ret ? -EFAULT : 0; 1396 1403 if (ret)
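The query ioctl reuses the former pad field as flags; with DRM_SYNCOBJ_QUERY_FLAGS_LAST_SUBMITTED set it reports the top seqno of the fence chain (the last submitted point) instead of walking back to the last signaled one. A hedged userspace sketch, assuming the struct drm_syncobj_timeline_array layout from the DRM uAPI header and libdrm's drmIoctl():

    /* Sketch: query the last submitted timeline point of one syncobj. */
    uint64_t point = 0;
    struct drm_syncobj_timeline_array args = {
            .handles       = (uintptr_t)&handle,
            .points        = (uintptr_t)&point,
            .count_handles = 1,
            .flags         = DRM_SYNCOBJ_QUERY_FLAGS_LAST_SUBMITTED,
    };

    ret = drmIoctl(fd, DRM_IOCTL_SYNCOBJ_QUERY, &args);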
+3 -3
drivers/gpu/drm/drm_vblank.c
··· 1610 1610 unsigned int flags, pipe, high_pipe; 1611 1611 1612 1612 if (!dev->irq_enabled) 1613 - return -EINVAL; 1613 + return -EOPNOTSUPP; 1614 1614 1615 1615 if (vblwait->request.type & _DRM_VBLANK_SIGNAL) 1616 1616 return -EINVAL; ··· 1876 1876 return -EOPNOTSUPP; 1877 1877 1878 1878 if (!dev->irq_enabled) 1879 - return -EINVAL; 1879 + return -EOPNOTSUPP; 1880 1880 1881 1881 crtc = drm_crtc_find(dev, file_priv, get_seq->crtc_id); 1882 1882 if (!crtc) ··· 1934 1934 return -EOPNOTSUPP; 1935 1935 1936 1936 if (!dev->irq_enabled) 1937 - return -EINVAL; 1937 + return -EOPNOTSUPP; 1938 1938 1939 1939 crtc = drm_crtc_find(dev, file_priv, queue_seq->crtc_id); 1940 1940 if (!crtc)
+2
drivers/gpu/drm/gma500/cdv_intel_display.c
··· 405 405 struct gma_crtc *gma_crtc = to_gma_crtc(crtc); 406 406 struct gma_clock_t clock; 407 407 408 + memset(&clock, 0, sizeof(clock)); 409 + 408 410 switch (refclk) { 409 411 case 27000: 410 412 if (target < 200000) {
+2
drivers/gpu/drm/gma500/oaktrail_crtc.c
··· 129 129 s32 freq_error, min_error = 100000; 130 130 131 131 memset(best_clock, 0, sizeof(*best_clock)); 132 + memset(&clock, 0, sizeof(clock)); 132 133 133 134 for (clock.m = limit->m.min; clock.m <= limit->m.max; clock.m++) { 134 135 for (clock.n = limit->n.min; clock.n <= limit->n.max; ··· 186 185 int err = target; 187 186 188 187 memset(best_clock, 0, sizeof(*best_clock)); 188 + memset(&clock, 0, sizeof(clock)); 189 189 190 190 for (clock.m = limit->m.min; clock.m <= limit->m.max; clock.m++) { 191 191 for (clock.p1 = limit->p1.min; clock.p1 <= limit->p1.max;
+1 -4
drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_drv.c
··· 26 26 #include "hibmc_drm_drv.h" 27 27 #include "hibmc_drm_regs.h" 28 28 29 - static const struct file_operations hibmc_fops = { 30 - .owner = THIS_MODULE, 31 - DRM_VRAM_MM_FILE_OPERATIONS 32 - }; 29 + DEFINE_DRM_GEM_FOPS(hibmc_fops); 33 30 34 31 static irqreturn_t hibmc_drm_interrupt(int irq, void *arg) 35 32 {
+4 -5
drivers/gpu/drm/i2c/tda998x_drv.c
··· 806 806 tda998x_edid_delay_start(priv); 807 807 } else { 808 808 schedule_work(&priv->detect_work); 809 - cec_notifier_set_phys_addr(priv->cec_notify, 810 - CEC_PHYS_ADDR_INVALID); 809 + cec_notifier_phys_addr_invalidate( 810 + priv->cec_notify); 811 811 } 812 812 813 813 handled = true; ··· 1791 1791 1792 1792 i2c_unregister_device(priv->cec); 1793 1793 1794 - if (priv->cec_notify) 1795 - cec_notifier_put(priv->cec_notify); 1794 + cec_notifier_conn_unregister(priv->cec_notify); 1796 1795 } 1797 1796 1798 1797 static int tda998x_create(struct device *dev) ··· 1916 1917 cec_write(priv, REG_CEC_RXSHPDINTENA, CEC_RXSHPDLEV_HPD); 1917 1918 } 1918 1919 1919 - priv->cec_notify = cec_notifier_get(dev); 1920 + priv->cec_notify = cec_notifier_conn_register(dev, NULL, NULL); 1920 1921 if (!priv->cec_notify) { 1921 1922 ret = -ENOMEM; 1922 1923 goto fail;
+1
drivers/gpu/drm/lima/Kconfig
··· 9 9 depends on COMMON_CLK 10 10 depends on OF 11 11 select DRM_SCHED 12 + select DRM_GEM_SHMEM_HELPER 12 13 help 13 14 DRM driver for ARM Mali 400/450 GPUs.
+1 -3
drivers/gpu/drm/lima/Makefile
··· 13 13 lima_vm.o \ 14 14 lima_sched.o \ 15 15 lima_ctx.o \ 16 - lima_gem_prime.o \ 17 16 lima_dlbu.o \ 18 - lima_bcast.o \ 19 - lima_object.o 17 + lima_bcast.o 20 18 21 19 obj-$(CONFIG_DRM_LIMA) += lima.o
+1 -1
drivers/gpu/drm/lima/lima_device.c
··· 314 314 ldev->va_end = LIMA_VA_RESERVE_START; 315 315 ldev->dlbu_cpu = dma_alloc_wc( 316 316 ldev->dev, LIMA_PAGE_SIZE, 317 - &ldev->dlbu_dma, GFP_KERNEL); 317 + &ldev->dlbu_dma, GFP_KERNEL | __GFP_NOWARN); 318 318 if (!ldev->dlbu_cpu) { 319 319 err = -ENOMEM; 320 320 goto err_out2;
+4 -18
drivers/gpu/drm/lima/lima_drv.c
··· 12 12 13 13 #include "lima_drv.h" 14 14 #include "lima_gem.h" 15 - #include "lima_gem_prime.h" 16 15 #include "lima_vm.h" 17 16 18 17 int lima_sched_timeout_ms; ··· 239 240 DRM_IOCTL_DEF_DRV(LIMA_CTX_FREE, lima_ioctl_ctx_free, DRM_RENDER_ALLOW), 240 241 }; 241 242 242 - static const struct file_operations lima_drm_driver_fops = { 243 - .owner = THIS_MODULE, 244 - .open = drm_open, 245 - .release = drm_release, 246 - .unlocked_ioctl = drm_ioctl, 247 - #ifdef CONFIG_COMPAT 248 - .compat_ioctl = drm_compat_ioctl, 249 - #endif 250 - .mmap = lima_gem_mmap, 251 - }; 243 + DEFINE_DRM_GEM_FOPS(lima_drm_driver_fops); 252 244 253 245 static struct drm_driver lima_drm_driver = { 254 246 .driver_features = DRIVER_RENDER | DRIVER_GEM | DRIVER_SYNCOBJ, ··· 248 258 .ioctls = lima_drm_driver_ioctls, 249 259 .num_ioctls = ARRAY_SIZE(lima_drm_driver_ioctls), 250 260 .fops = &lima_drm_driver_fops, 251 - .gem_free_object_unlocked = lima_gem_free_object, 252 - .gem_open_object = lima_gem_object_open, 253 - .gem_close_object = lima_gem_object_close, 254 - .gem_vm_ops = &lima_gem_vm_ops, 255 261 .name = "lima", 256 262 .desc = "lima DRM", 257 263 .date = "20190217", ··· 255 269 .minor = 0, 256 270 .patchlevel = 0, 257 271 272 + .gem_create_object = lima_gem_create_object, 258 273 .prime_fd_to_handle = drm_gem_prime_fd_to_handle, 259 - .gem_prime_import_sg_table = lima_gem_prime_import_sg_table, 274 + .gem_prime_import_sg_table = drm_gem_shmem_prime_import_sg_table, 260 275 .prime_handle_to_fd = drm_gem_prime_handle_to_fd, 261 - .gem_prime_get_sg_table = lima_gem_prime_get_sg_table, 262 - .gem_prime_mmap = lima_gem_prime_mmap, 276 + .gem_prime_mmap = drm_gem_prime_mmap, 263 277 }; 264 278 265 279 static int lima_pdev_probe(struct platform_device *pdev)
+71 -124
drivers/gpu/drm/lima/lima_gem.c
··· 3 3 4 4 #include <linux/mm.h> 5 5 #include <linux/sync_file.h> 6 - #include <linux/pfn_t.h> 6 + #include <linux/pagemap.h> 7 7 8 8 #include <drm/drm_file.h> 9 9 #include <drm/drm_syncobj.h> ··· 13 13 14 14 #include "lima_drv.h" 15 15 #include "lima_gem.h" 16 - #include "lima_gem_prime.h" 17 16 #include "lima_vm.h" 18 - #include "lima_object.h" 19 17 20 18 int lima_gem_create_handle(struct drm_device *dev, struct drm_file *file, 21 19 u32 size, u32 flags, u32 *handle) 22 20 { 23 21 int err; 24 - struct lima_bo *bo; 25 - struct lima_device *ldev = to_lima_dev(dev); 22 + gfp_t mask; 23 + struct drm_gem_shmem_object *shmem; 24 + struct drm_gem_object *obj; 25 + struct sg_table *sgt; 26 26 27 - bo = lima_bo_create(ldev, size, flags, NULL); 28 - if (IS_ERR(bo)) 29 - return PTR_ERR(bo); 27 + shmem = drm_gem_shmem_create(dev, size); 28 + if (IS_ERR(shmem)) 29 + return PTR_ERR(shmem); 30 30 31 - err = drm_gem_handle_create(file, &bo->gem, handle); 31 + obj = &shmem->base; 32 32 33 + /* Mali Utgard GPU can only support 32bit address space */ 34 + mask = mapping_gfp_mask(obj->filp->f_mapping); 35 + mask &= ~__GFP_HIGHMEM; 36 + mask |= __GFP_DMA32; 37 + mapping_set_gfp_mask(obj->filp->f_mapping, mask); 38 + 39 + sgt = drm_gem_shmem_get_pages_sgt(obj); 40 + if (IS_ERR(sgt)) { 41 + err = PTR_ERR(sgt); 42 + goto out; 43 + } 44 + 45 + err = drm_gem_handle_create(file, obj, handle); 46 + 47 + out: 33 48 /* drop reference from allocate - handle holds it now */ 34 - drm_gem_object_put_unlocked(&bo->gem); 49 + drm_gem_object_put_unlocked(obj); 35 50 36 51 return err; 37 52 } 38 53 39 - void lima_gem_free_object(struct drm_gem_object *obj) 54 + static void lima_gem_free_object(struct drm_gem_object *obj) 40 55 { 41 56 struct lima_bo *bo = to_lima_bo(obj); 42 57 43 58 if (!list_empty(&bo->va)) 44 59 dev_err(obj->dev->dev, "lima gem free bo still has va\n"); 45 60 46 - lima_bo_destroy(bo); 61 + drm_gem_shmem_free_object(obj); 47 62 } 48 63 49 - int lima_gem_object_open(struct drm_gem_object *obj, struct drm_file *file) 64 + static int lima_gem_object_open(struct drm_gem_object *obj, struct drm_file *file) 50 65 { 51 66 struct lima_bo *bo = to_lima_bo(obj); 52 67 struct lima_drm_priv *priv = to_lima_drm_priv(file); ··· 70 55 return lima_vm_bo_add(vm, bo, true); 71 56 } 72 57 73 - void lima_gem_object_close(struct drm_gem_object *obj, struct drm_file *file) 58 + static void lima_gem_object_close(struct drm_gem_object *obj, struct drm_file *file) 74 59 { 75 60 struct lima_bo *bo = to_lima_bo(obj); 76 61 struct lima_drm_priv *priv = to_lima_drm_priv(file); ··· 79 64 lima_vm_bo_del(vm, bo); 80 65 } 81 66 67 + static const struct drm_gem_object_funcs lima_gem_funcs = { 68 + .free = lima_gem_free_object, 69 + .open = lima_gem_object_open, 70 + .close = lima_gem_object_close, 71 + .print_info = drm_gem_shmem_print_info, 72 + .pin = drm_gem_shmem_pin, 73 + .unpin = drm_gem_shmem_unpin, 74 + .get_sg_table = drm_gem_shmem_get_sg_table, 75 + .vmap = drm_gem_shmem_vmap, 76 + .vunmap = drm_gem_shmem_vunmap, 77 + .mmap = drm_gem_shmem_mmap, 78 + }; 79 + 80 + struct drm_gem_object *lima_gem_create_object(struct drm_device *dev, size_t size) 81 + { 82 + struct lima_bo *bo; 83 + 84 + bo = kzalloc(sizeof(*bo), GFP_KERNEL); 85 + if (!bo) 86 + return NULL; 87 + 88 + mutex_init(&bo->lock); 89 + INIT_LIST_HEAD(&bo->va); 90 + 91 + bo->base.base.funcs = &lima_gem_funcs; 92 + 93 + return &bo->base.base; 94 + } 95 + 82 96 int lima_gem_get_info(struct drm_file *file, u32 handle, u32 *va, u64 *offset) 83 97 { 84 98 struct drm_gem_object 
*obj; 85 99 struct lima_bo *bo; 86 100 struct lima_drm_priv *priv = to_lima_drm_priv(file); 87 101 struct lima_vm *vm = priv->vm; 88 - int err; 89 102 90 103 obj = drm_gem_object_lookup(file, handle); 91 104 if (!obj) ··· 123 80 124 81 *va = lima_vm_get_va(vm, bo); 125 82 126 - err = drm_gem_create_mmap_offset(obj); 127 - if (!err) 128 - *offset = drm_vma_node_offset_addr(&obj->vma_node); 83 + *offset = drm_vma_node_offset_addr(&obj->vma_node); 129 84 130 85 drm_gem_object_put_unlocked(obj); 131 - return err; 132 - } 133 - 134 - static vm_fault_t lima_gem_fault(struct vm_fault *vmf) 135 - { 136 - struct vm_area_struct *vma = vmf->vma; 137 - struct drm_gem_object *obj = vma->vm_private_data; 138 - struct lima_bo *bo = to_lima_bo(obj); 139 - pfn_t pfn; 140 - pgoff_t pgoff; 141 - 142 - /* We don't use vmf->pgoff since that has the fake offset: */ 143 - pgoff = (vmf->address - vma->vm_start) >> PAGE_SHIFT; 144 - pfn = __pfn_to_pfn_t(page_to_pfn(bo->pages[pgoff]), PFN_DEV); 145 - 146 - return vmf_insert_mixed(vma, vmf->address, pfn); 147 - } 148 - 149 - const struct vm_operations_struct lima_gem_vm_ops = { 150 - .fault = lima_gem_fault, 151 - .open = drm_gem_vm_open, 152 - .close = drm_gem_vm_close, 153 - }; 154 - 155 - void lima_set_vma_flags(struct vm_area_struct *vma) 156 - { 157 - pgprot_t prot = vm_get_page_prot(vma->vm_flags); 158 - 159 - vma->vm_flags |= VM_MIXEDMAP; 160 - vma->vm_flags &= ~VM_PFNMAP; 161 - vma->vm_page_prot = pgprot_writecombine(prot); 162 - } 163 - 164 - int lima_gem_mmap(struct file *filp, struct vm_area_struct *vma) 165 - { 166 - int ret; 167 - 168 - ret = drm_gem_mmap(filp, vma); 169 - if (ret) 170 - return ret; 171 - 172 - lima_set_vma_flags(vma); 173 86 return 0; 174 87 } 175 88 ··· 135 136 int err = 0; 136 137 137 138 if (!write) { 138 - err = dma_resv_reserve_shared(bo->gem.resv, 1); 139 + err = dma_resv_reserve_shared(lima_bo_resv(bo), 1); 139 140 if (err) 140 141 return err; 141 142 } ··· 144 145 if (explicit) 145 146 return 0; 146 147 147 - return drm_gem_fence_array_add_implicit(&task->deps, &bo->gem, write); 148 - } 149 - 150 - static int lima_gem_lock_bos(struct lima_bo **bos, u32 nr_bos, 151 - struct ww_acquire_ctx *ctx) 152 - { 153 - int i, ret = 0, contended, slow_locked = -1; 154 - 155 - ww_acquire_init(ctx, &reservation_ww_class); 156 - 157 - retry: 158 - for (i = 0; i < nr_bos; i++) { 159 - if (i == slow_locked) { 160 - slow_locked = -1; 161 - continue; 162 - } 163 - 164 - ret = ww_mutex_lock_interruptible(&bos[i]->gem.resv->lock, ctx); 165 - if (ret < 0) { 166 - contended = i; 167 - goto err; 168 - } 169 - } 170 - 171 - ww_acquire_done(ctx); 172 - return 0; 173 - 174 - err: 175 - for (i--; i >= 0; i--) 176 - ww_mutex_unlock(&bos[i]->gem.resv->lock); 177 - 178 - if (slow_locked >= 0) 179 - ww_mutex_unlock(&bos[slow_locked]->gem.resv->lock); 180 - 181 - if (ret == -EDEADLK) { 182 - /* we lost out in a seqno race, lock and retry.. 
*/ 183 - ret = ww_mutex_lock_slow_interruptible( 184 - &bos[contended]->gem.resv->lock, ctx); 185 - if (!ret) { 186 - slow_locked = contended; 187 - goto retry; 188 - } 189 - } 190 - ww_acquire_fini(ctx); 191 - 192 - return ret; 193 - } 194 - 195 - static void lima_gem_unlock_bos(struct lima_bo **bos, u32 nr_bos, 196 - struct ww_acquire_ctx *ctx) 197 - { 198 - int i; 199 - 200 - for (i = 0; i < nr_bos; i++) 201 - ww_mutex_unlock(&bos[i]->gem.resv->lock); 202 - ww_acquire_fini(ctx); 148 + return drm_gem_fence_array_add_implicit(&task->deps, &bo->base.base, write); 203 149 } 204 150 205 151 static int lima_gem_add_deps(struct drm_file *file, struct lima_submit *submit) ··· 212 268 bos[i] = bo; 213 269 } 214 270 215 - err = lima_gem_lock_bos(bos, submit->nr_bos, &ctx); 271 + err = drm_gem_lock_reservations((struct drm_gem_object **)bos, 272 + submit->nr_bos, &ctx); 216 273 if (err) 217 274 goto err_out0; 218 275 ··· 241 296 242 297 for (i = 0; i < submit->nr_bos; i++) { 243 298 if (submit->bos[i].flags & LIMA_SUBMIT_BO_WRITE) 244 - dma_resv_add_excl_fence(bos[i]->gem.resv, fence); 299 + dma_resv_add_excl_fence(lima_bo_resv(bos[i]), fence); 245 300 else 246 - dma_resv_add_shared_fence(bos[i]->gem.resv, fence); 301 + dma_resv_add_shared_fence(lima_bo_resv(bos[i]), fence); 247 302 } 248 303 249 - lima_gem_unlock_bos(bos, submit->nr_bos, &ctx); 304 + drm_gem_unlock_reservations((struct drm_gem_object **)bos, 305 + submit->nr_bos, &ctx); 250 306 251 307 for (i = 0; i < submit->nr_bos; i++) 252 - drm_gem_object_put_unlocked(&bos[i]->gem); 308 + drm_gem_object_put_unlocked(&bos[i]->base.base); 253 309 254 310 if (out_sync) { 255 311 drm_syncobj_replace_fence(out_sync, fence); ··· 264 318 err_out2: 265 319 lima_sched_task_fini(submit->task); 266 320 err_out1: 267 - lima_gem_unlock_bos(bos, submit->nr_bos, &ctx); 321 + drm_gem_unlock_reservations((struct drm_gem_object **)bos, 322 + submit->nr_bos, &ctx); 268 323 err_out0: 269 324 for (i = 0; i < submit->nr_bos; i++) { 270 325 if (!bos[i]) 271 326 break; 272 327 lima_vm_bo_del(vm, bos[i]); 273 - drm_gem_object_put_unlocked(&bos[i]->gem); 328 + drm_gem_object_put_unlocked(&bos[i]->base.base); 274 329 } 275 330 if (out_sync) 276 331 drm_syncobj_put(out_sync);
+25 -7
drivers/gpu/drm/lima/lima_gem.h
··· 4 4 #ifndef __LIMA_GEM_H__ 5 5 #define __LIMA_GEM_H__ 6 6 7 - struct lima_bo; 7 + #include <drm/drm_gem_shmem_helper.h> 8 + 8 9 struct lima_submit; 9 10 10 - extern const struct vm_operations_struct lima_gem_vm_ops; 11 + struct lima_bo { 12 + struct drm_gem_shmem_object base; 11 13 12 - struct lima_bo *lima_gem_create_bo(struct drm_device *dev, u32 size, u32 flags); 14 + struct mutex lock; 15 + struct list_head va; 16 + }; 17 + 18 + static inline struct lima_bo * 19 + to_lima_bo(struct drm_gem_object *obj) 20 + { 21 + return container_of(to_drm_gem_shmem_obj(obj), struct lima_bo, base); 22 + } 23 + 24 + static inline size_t lima_bo_size(struct lima_bo *bo) 25 + { 26 + return bo->base.base.size; 27 + } 28 + 29 + static inline struct dma_resv *lima_bo_resv(struct lima_bo *bo) 30 + { 31 + return bo->base.base.resv; 32 + } 33 + 34 + struct drm_gem_object *lima_gem_create_object(struct drm_device *dev, size_t size); 13 35 int lima_gem_create_handle(struct drm_device *dev, struct drm_file *file, 14 36 u32 size, u32 flags, u32 *handle); 15 - void lima_gem_free_object(struct drm_gem_object *obj); 16 - int lima_gem_object_open(struct drm_gem_object *obj, struct drm_file *file); 17 - void lima_gem_object_close(struct drm_gem_object *obj, struct drm_file *file); 18 37 int lima_gem_get_info(struct drm_file *file, u32 handle, u32 *va, u64 *offset); 19 - int lima_gem_mmap(struct file *filp, struct vm_area_struct *vma); 20 38 int lima_gem_submit(struct drm_file *file, struct lima_submit *submit); 21 39 int lima_gem_wait(struct drm_file *file, u32 handle, u32 op, s64 timeout_ns); 22 40
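The lima conversion above amounts to embedding struct drm_gem_shmem_object and letting the shmem helpers own page allocation, prime import/export and mmap; lima keeps only its VA list and lock and provides &drm_driver.gem_create_object so the core allocates the wrapper type. A condensed sketch of the resulting allocation path, taken from the hunks above with error handling trimmed:

    struct drm_gem_shmem_object *shmem = drm_gem_shmem_create(dev, size);
    struct drm_gem_object *obj = &shmem->base;
    struct sg_table *sgt;
    gfp_t mask;

    /* Mali Utgard only addresses 32 bits, so restrict the backing pages. */
    mask = mapping_gfp_mask(obj->filp->f_mapping);
    mask &= ~__GFP_HIGHMEM;
    mask |= __GFP_DMA32;
    mapping_set_gfp_mask(obj->filp->f_mapping, mask);

    sgt = drm_gem_shmem_get_pages_sgt(obj); /* pinned, dma-mapped pages */
    /* lima_vm then walks the sgt with for_each_sg_dma_page() to fill its MMU. */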
-46
drivers/gpu/drm/lima/lima_gem_prime.c
··· 1 - // SPDX-License-Identifier: GPL-2.0 OR MIT 2 - /* Copyright 2018-2019 Qiang Yu <yuq825@gmail.com> */ 3 - 4 - #include <linux/dma-buf.h> 5 - #include <drm/drm_prime.h> 6 - #include <drm/drm_drv.h> 7 - #include <drm/drm_file.h> 8 - 9 - #include "lima_device.h" 10 - #include "lima_object.h" 11 - #include "lima_gem.h" 12 - #include "lima_gem_prime.h" 13 - 14 - struct drm_gem_object *lima_gem_prime_import_sg_table( 15 - struct drm_device *dev, struct dma_buf_attachment *attach, 16 - struct sg_table *sgt) 17 - { 18 - struct lima_device *ldev = to_lima_dev(dev); 19 - struct lima_bo *bo; 20 - 21 - bo = lima_bo_create(ldev, attach->dmabuf->size, 0, sgt); 22 - if (IS_ERR(bo)) 23 - return ERR_CAST(bo); 24 - 25 - return &bo->gem; 26 - } 27 - 28 - struct sg_table *lima_gem_prime_get_sg_table(struct drm_gem_object *obj) 29 - { 30 - struct lima_bo *bo = to_lima_bo(obj); 31 - int npages = obj->size >> PAGE_SHIFT; 32 - 33 - return drm_prime_pages_to_sg(bo->pages, npages); 34 - } 35 - 36 - int lima_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma) 37 - { 38 - int ret; 39 - 40 - ret = drm_gem_mmap_obj(obj, obj->size, vma); 41 - if (ret) 42 - return ret; 43 - 44 - lima_set_vma_flags(vma); 45 - return 0; 46 - }
-13
drivers/gpu/drm/lima/lima_gem_prime.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 OR MIT */ 2 - /* Copyright 2018-2019 Qiang Yu <yuq825@gmail.com> */ 3 - 4 - #ifndef __LIMA_GEM_PRIME_H__ 5 - #define __LIMA_GEM_PRIME_H__ 6 - 7 - struct drm_gem_object *lima_gem_prime_import_sg_table( 8 - struct drm_device *dev, struct dma_buf_attachment *attach, 9 - struct sg_table *sgt); 10 - struct sg_table *lima_gem_prime_get_sg_table(struct drm_gem_object *obj); 11 - int lima_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma); 12 - 13 - #endif
-1
drivers/gpu/drm/lima/lima_mmu.c
··· 8 8 #include "lima_device.h" 9 9 #include "lima_mmu.h" 10 10 #include "lima_vm.h" 11 - #include "lima_object.h" 12 11 #include "lima_regs.h" 13 12 14 13 #define mmu_write(reg, data) writel(data, ip->iomem + reg)
-119
drivers/gpu/drm/lima/lima_object.c
··· 1 - // SPDX-License-Identifier: GPL-2.0 OR MIT 2 - /* Copyright 2018-2019 Qiang Yu <yuq825@gmail.com> */ 3 - 4 - #include <drm/drm_prime.h> 5 - #include <linux/pagemap.h> 6 - #include <linux/dma-mapping.h> 7 - 8 - #include "lima_object.h" 9 - 10 - void lima_bo_destroy(struct lima_bo *bo) 11 - { 12 - if (bo->sgt) { 13 - kfree(bo->pages); 14 - drm_prime_gem_destroy(&bo->gem, bo->sgt); 15 - } else { 16 - if (bo->pages_dma_addr) { 17 - int i, npages = bo->gem.size >> PAGE_SHIFT; 18 - 19 - for (i = 0; i < npages; i++) { 20 - if (bo->pages_dma_addr[i]) 21 - dma_unmap_page(bo->gem.dev->dev, 22 - bo->pages_dma_addr[i], 23 - PAGE_SIZE, DMA_BIDIRECTIONAL); 24 - } 25 - } 26 - 27 - if (bo->pages) 28 - drm_gem_put_pages(&bo->gem, bo->pages, true, true); 29 - } 30 - 31 - kfree(bo->pages_dma_addr); 32 - drm_gem_object_release(&bo->gem); 33 - kfree(bo); 34 - } 35 - 36 - static struct lima_bo *lima_bo_create_struct(struct lima_device *dev, u32 size, u32 flags) 37 - { 38 - struct lima_bo *bo; 39 - int err; 40 - 41 - size = PAGE_ALIGN(size); 42 - 43 - bo = kzalloc(sizeof(*bo), GFP_KERNEL); 44 - if (!bo) 45 - return ERR_PTR(-ENOMEM); 46 - 47 - mutex_init(&bo->lock); 48 - INIT_LIST_HEAD(&bo->va); 49 - 50 - err = drm_gem_object_init(dev->ddev, &bo->gem, size); 51 - if (err) { 52 - kfree(bo); 53 - return ERR_PTR(err); 54 - } 55 - 56 - return bo; 57 - } 58 - 59 - struct lima_bo *lima_bo_create(struct lima_device *dev, u32 size, 60 - u32 flags, struct sg_table *sgt) 61 - { 62 - int i, err; 63 - size_t npages; 64 - struct lima_bo *bo, *ret; 65 - 66 - bo = lima_bo_create_struct(dev, size, flags); 67 - if (IS_ERR(bo)) 68 - return bo; 69 - 70 - npages = bo->gem.size >> PAGE_SHIFT; 71 - 72 - bo->pages_dma_addr = kcalloc(npages, sizeof(dma_addr_t), GFP_KERNEL); 73 - if (!bo->pages_dma_addr) { 74 - ret = ERR_PTR(-ENOMEM); 75 - goto err_out; 76 - } 77 - 78 - if (sgt) { 79 - bo->sgt = sgt; 80 - 81 - bo->pages = kcalloc(npages, sizeof(*bo->pages), GFP_KERNEL); 82 - if (!bo->pages) { 83 - ret = ERR_PTR(-ENOMEM); 84 - goto err_out; 85 - } 86 - 87 - err = drm_prime_sg_to_page_addr_arrays( 88 - sgt, bo->pages, bo->pages_dma_addr, npages); 89 - if (err) { 90 - ret = ERR_PTR(err); 91 - goto err_out; 92 - } 93 - } else { 94 - mapping_set_gfp_mask(bo->gem.filp->f_mapping, GFP_DMA32); 95 - bo->pages = drm_gem_get_pages(&bo->gem); 96 - if (IS_ERR(bo->pages)) { 97 - ret = ERR_CAST(bo->pages); 98 - bo->pages = NULL; 99 - goto err_out; 100 - } 101 - 102 - for (i = 0; i < npages; i++) { 103 - dma_addr_t addr = dma_map_page(dev->dev, bo->pages[i], 0, 104 - PAGE_SIZE, DMA_BIDIRECTIONAL); 105 - if (dma_mapping_error(dev->dev, addr)) { 106 - ret = ERR_PTR(-EFAULT); 107 - goto err_out; 108 - } 109 - bo->pages_dma_addr[i] = addr; 110 - } 111 - 112 - } 113 - 114 - return bo; 115 - 116 - err_out: 117 - lima_bo_destroy(bo); 118 - return ret; 119 - }
-35
drivers/gpu/drm/lima/lima_object.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 OR MIT */ 2 - /* Copyright 2018-2019 Qiang Yu <yuq825@gmail.com> */ 3 - 4 - #ifndef __LIMA_OBJECT_H__ 5 - #define __LIMA_OBJECT_H__ 6 - 7 - #include <drm/drm_gem.h> 8 - 9 - #include "lima_device.h" 10 - 11 - struct lima_bo { 12 - struct drm_gem_object gem; 13 - 14 - struct page **pages; 15 - dma_addr_t *pages_dma_addr; 16 - struct sg_table *sgt; 17 - void *vaddr; 18 - 19 - struct mutex lock; 20 - struct list_head va; 21 - }; 22 - 23 - static inline struct lima_bo * 24 - to_lima_bo(struct drm_gem_object *obj) 25 - { 26 - return container_of(obj, struct lima_bo, gem); 27 - } 28 - 29 - struct lima_bo *lima_bo_create(struct lima_device *dev, u32 size, 30 - u32 flags, struct sg_table *sgt); 31 - void lima_bo_destroy(struct lima_bo *bo); 32 - void *lima_bo_vmap(struct lima_bo *bo); 33 - void lima_bo_vunmap(struct lima_bo *bo); 34 - 35 - #endif
+3 -3
drivers/gpu/drm/lima/lima_sched.c
··· 10 10 #include "lima_vm.h" 11 11 #include "lima_mmu.h" 12 12 #include "lima_l2_cache.h" 13 - #include "lima_object.h" 13 + #include "lima_gem.h" 14 14 15 15 struct lima_fence { 16 16 struct dma_fence base; ··· 117 117 return -ENOMEM; 118 118 119 119 for (i = 0; i < num_bos; i++) 120 - drm_gem_object_get(&bos[i]->gem); 120 + drm_gem_object_get(&bos[i]->base.base); 121 121 122 122 err = drm_sched_job_init(&task->base, &context->base, vm); 123 123 if (err) { ··· 148 148 149 149 if (task->bos) { 150 150 for (i = 0; i < task->num_bos; i++) 151 - drm_gem_object_put_unlocked(&task->bos[i]->gem); 151 + drm_gem_object_put_unlocked(&task->bos[i]->base.base); 152 152 kfree(task->bos); 153 153 } 154 154
+39 -42
drivers/gpu/drm/lima/lima_vm.c
··· 6 6 7 7 #include "lima_device.h" 8 8 #include "lima_vm.h" 9 - #include "lima_object.h" 9 + #include "lima_gem.h" 10 10 #include "lima_regs.h" 11 11 12 12 struct lima_bo_va { ··· 32 32 #define LIMA_BTE(va) ((va & LIMA_VM_BT_MASK) >> LIMA_VM_BT_SHIFT) 33 33 34 34 35 - static void lima_vm_unmap_page_table(struct lima_vm *vm, u32 start, u32 end) 35 + static void lima_vm_unmap_range(struct lima_vm *vm, u32 start, u32 end) 36 36 { 37 37 u32 addr; 38 38 ··· 44 44 } 45 45 } 46 46 47 - static int lima_vm_map_page_table(struct lima_vm *vm, dma_addr_t *dma, 48 - u32 start, u32 end) 47 + static int lima_vm_map_page(struct lima_vm *vm, dma_addr_t pa, u32 va) 49 48 { 50 - u64 addr; 51 - int i = 0; 49 + u32 pbe = LIMA_PBE(va); 50 + u32 bte = LIMA_BTE(va); 52 51 53 - for (addr = start; addr <= end; addr += LIMA_PAGE_SIZE) { 54 - u32 pbe = LIMA_PBE(addr); 55 - u32 bte = LIMA_BTE(addr); 52 + if (!vm->bts[pbe].cpu) { 53 + dma_addr_t pts; 54 + u32 *pd; 55 + int j; 56 56 57 - if (!vm->bts[pbe].cpu) { 58 - dma_addr_t pts; 59 - u32 *pd; 60 - int j; 57 + vm->bts[pbe].cpu = dma_alloc_wc( 58 + vm->dev->dev, LIMA_PAGE_SIZE << LIMA_VM_NUM_PT_PER_BT_SHIFT, 59 + &vm->bts[pbe].dma, GFP_KERNEL | __GFP_NOWARN | __GFP_ZERO); 60 + if (!vm->bts[pbe].cpu) 61 + return -ENOMEM; 61 62 62 - vm->bts[pbe].cpu = dma_alloc_wc( 63 - vm->dev->dev, LIMA_PAGE_SIZE << LIMA_VM_NUM_PT_PER_BT_SHIFT, 64 - &vm->bts[pbe].dma, GFP_KERNEL | __GFP_ZERO); 65 - if (!vm->bts[pbe].cpu) { 66 - if (addr != start) 67 - lima_vm_unmap_page_table(vm, start, addr - 1); 68 - return -ENOMEM; 69 - } 70 - 71 - pts = vm->bts[pbe].dma; 72 - pd = vm->pd.cpu + (pbe << LIMA_VM_NUM_PT_PER_BT_SHIFT); 73 - for (j = 0; j < LIMA_VM_NUM_PT_PER_BT; j++) { 74 - pd[j] = pts | LIMA_VM_FLAG_PRESENT; 75 - pts += LIMA_PAGE_SIZE; 76 - } 63 + pts = vm->bts[pbe].dma; 64 + pd = vm->pd.cpu + (pbe << LIMA_VM_NUM_PT_PER_BT_SHIFT); 65 + for (j = 0; j < LIMA_VM_NUM_PT_PER_BT; j++) { 66 + pd[j] = pts | LIMA_VM_FLAG_PRESENT; 67 + pts += LIMA_PAGE_SIZE; 77 68 } 78 - 79 - vm->bts[pbe].cpu[bte] = dma[i++] | LIMA_VM_FLAGS_CACHE; 80 69 } 70 + 71 + vm->bts[pbe].cpu[bte] = pa | LIMA_VM_FLAGS_CACHE; 81 72 82 73 return 0; 83 74 } ··· 91 100 int lima_vm_bo_add(struct lima_vm *vm, struct lima_bo *bo, bool create) 92 101 { 93 102 struct lima_bo_va *bo_va; 94 - int err; 103 + struct sg_dma_page_iter sg_iter; 104 + int offset = 0, err; 95 105 96 106 mutex_lock(&bo->lock); 97 107 ··· 120 128 121 129 mutex_lock(&vm->lock); 122 130 123 - err = drm_mm_insert_node(&vm->mm, &bo_va->node, bo->gem.size); 131 + err = drm_mm_insert_node(&vm->mm, &bo_va->node, lima_bo_size(bo)); 124 132 if (err) 125 133 goto err_out1; 126 134 127 - err = lima_vm_map_page_table(vm, bo->pages_dma_addr, bo_va->node.start, 128 - bo_va->node.start + bo_va->node.size - 1); 129 - if (err) 130 - goto err_out2; 135 + for_each_sg_dma_page(bo->base.sgt->sgl, &sg_iter, bo->base.sgt->nents, 0) { 136 + err = lima_vm_map_page(vm, sg_page_iter_dma_address(&sg_iter), 137 + bo_va->node.start + offset); 138 + if (err) 139 + goto err_out2; 140 + 141 + offset += PAGE_SIZE; 142 + } 131 143 132 144 mutex_unlock(&vm->lock); 133 145 ··· 141 145 return 0; 142 146 143 147 err_out2: 148 + if (offset) 149 + lima_vm_unmap_range(vm, bo_va->node.start, bo_va->node.start + offset - 1); 144 150 drm_mm_remove_node(&bo_va->node); 145 151 err_out1: 146 152 mutex_unlock(&vm->lock); ··· 166 168 167 169 mutex_lock(&vm->lock); 168 170 169 - lima_vm_unmap_page_table(vm, bo_va->node.start, 170 - bo_va->node.start + bo_va->node.size - 1); 171 + lima_vm_unmap_range(vm, 
bo_va->node.start, 172 + bo_va->node.start + bo_va->node.size - 1); 171 173 172 174 drm_mm_remove_node(&bo_va->node); 173 175 ··· 208 210 kref_init(&vm->refcount); 209 211 210 212 vm->pd.cpu = dma_alloc_wc(dev->dev, LIMA_PAGE_SIZE, &vm->pd.dma, 211 - GFP_KERNEL | __GFP_ZERO); 213 + GFP_KERNEL | __GFP_NOWARN | __GFP_ZERO); 212 214 if (!vm->pd.cpu) 213 215 goto err_out0; 214 216 215 217 if (dev->dlbu_cpu) { 216 - int err = lima_vm_map_page_table( 217 - vm, &dev->dlbu_dma, LIMA_VA_RESERVE_DLBU, 218 - LIMA_VA_RESERVE_DLBU + LIMA_PAGE_SIZE - 1); 218 + int err = lima_vm_map_page( 219 + vm, dev->dlbu_dma, LIMA_VA_RESERVE_DLBU); 219 220 if (err) 220 221 goto err_out1; 221 222 }
+5
drivers/gpu/drm/meson/meson_dw_hdmi.c
··· 977 977 dw_plat_data->input_bus_format = MEDIA_BUS_FMT_YUV8_1X24; 978 978 dw_plat_data->input_bus_encoding = V4L2_YCBCR_ENC_709; 979 979 980 + if (dw_hdmi_is_compatible(meson_dw_hdmi, "amlogic,meson-gxl-dw-hdmi") || 981 + dw_hdmi_is_compatible(meson_dw_hdmi, "amlogic,meson-gxm-dw-hdmi") || 982 + dw_hdmi_is_compatible(meson_dw_hdmi, "amlogic,meson-g12a-dw-hdmi")) 983 + dw_plat_data->use_drm_infoframe = true; 984 + 980 985 platform_set_drvdata(pdev, meson_dw_hdmi); 981 986 982 987 meson_dw_hdmi->hdmi = dw_hdmi_bind(pdev, encoder,
+1 -4
drivers/gpu/drm/mgag200/mgag200_drv.c
··· 58 58 drm_put_dev(dev); 59 59 } 60 60 61 - static const struct file_operations mgag200_driver_fops = { 62 - .owner = THIS_MODULE, 63 - DRM_VRAM_MM_FILE_OPERATIONS 64 - }; 61 + DEFINE_DRM_GEM_FOPS(mgag200_driver_fops); 65 62 66 63 static struct drm_driver driver = { 67 64 .driver_features = DRIVER_GEM | DRIVER_MODESET,
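The mgag200 hunk swaps the hand-rolled file_operations (built from the removed DRM_VRAM_MM_FILE_OPERATIONS block) for DEFINE_DRM_GEM_FOPS(). For readers who have not looked inside the macro, it expands to roughly the structure below (see include/drm/drm_gem.h for the authoritative definition), so mmap is routed through drm_gem_mmap() and from there to the GEM object's own mmap handling:

    /* Approximate expansion of DEFINE_DRM_GEM_FOPS(mgag200_driver_fops) */
    static const struct file_operations mgag200_driver_fops = {
            .owner          = THIS_MODULE,
            .open           = drm_open,
            .release        = drm_release,
            .unlocked_ioctl = drm_ioctl,
            .compat_ioctl   = drm_compat_ioctl,
            .poll           = drm_poll,
            .read           = drm_read,
            .llseek         = noop_llseek,
            .mmap           = drm_gem_mmap,
    };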
+49 -21
drivers/gpu/drm/msm/edp/edp_ctrl.c
··· 89 89 /* edid raw data */ 90 90 struct edid *edid; 91 91 92 - struct drm_dp_link dp_link; 93 92 struct drm_dp_aux *drm_aux; 94 93 95 94 /* dpcd raw data */ ··· 402 403 u32 prate; 403 404 u32 lrate; 404 405 u32 bpp; 405 - u8 max_lane = ctrl->dp_link.num_lanes; 406 + u8 max_lane = drm_dp_max_lane_count(ctrl->dpcd); 406 407 u8 lane; 407 408 408 409 prate = ctrl->pixel_rate; ··· 412 413 * By default, use the maximum link rate and minimum lane count, 413 414 * so that we can do rate down shift during link training. 414 415 */ 415 - ctrl->link_rate = drm_dp_link_rate_to_bw_code(ctrl->dp_link.rate); 416 + ctrl->link_rate = ctrl->dpcd[DP_MAX_LINK_RATE]; 416 417 417 418 prate *= bpp; 418 419 prate /= 8; /* in kByte */ ··· 438 439 439 440 data = EDP_CONFIGURATION_CTRL_LANES(ctrl->lane_cnt - 1); 440 441 441 - if (ctrl->dp_link.capabilities & DP_LINK_CAP_ENHANCED_FRAMING) 442 + if (drm_dp_enhanced_frame_cap(ctrl->dpcd)) 442 443 data |= EDP_CONFIGURATION_CTRL_ENHANCED_FRAMING; 443 444 444 445 depth = EDP_6BIT; ··· 700 701 701 702 rate = ctrl->link_rate; 702 703 lane = ctrl->lane_cnt; 703 - max_lane = ctrl->dp_link.num_lanes; 704 + max_lane = drm_dp_max_lane_count(ctrl->dpcd); 704 705 705 706 bpp = ctrl->color_depth * 3; 706 707 prate = ctrl->pixel_rate; ··· 750 751 751 752 static int edp_do_link_train(struct edp_ctrl *ctrl) 752 753 { 754 + u8 values[2]; 753 755 int ret; 754 - struct drm_dp_link dp_link; 755 756 756 757 DBG(""); 757 758 /* 758 759 * Set the current link rate and lane cnt to panel. They may have been 759 760 * adjusted and the values are different from them in DPCD CAP 760 761 */ 761 - dp_link.num_lanes = ctrl->lane_cnt; 762 - dp_link.rate = drm_dp_bw_code_to_link_rate(ctrl->link_rate); 763 - dp_link.capabilities = ctrl->dp_link.capabilities; 764 - if (drm_dp_link_configure(ctrl->drm_aux, &dp_link) < 0) 762 + values[0] = ctrl->lane_cnt; 763 + values[1] = ctrl->link_rate; 764 + 765 + if (drm_dp_enhanced_frame_cap(ctrl->dpcd)) 766 + values[1] |= DP_LANE_COUNT_ENHANCED_FRAME_EN; 767 + 768 + if (drm_dp_dpcd_write(ctrl->drm_aux, DP_LINK_BW_SET, values, 769 + sizeof(values)) < 0) 765 770 return EDP_TRAIN_FAIL; 766 771 767 772 ctrl->v_level = 0; /* start from default level */ ··· 955 952 { 956 953 struct edp_ctrl *ctrl = container_of( 957 954 work, struct edp_ctrl, on_work); 955 + u8 value; 958 956 int ret; 959 957 960 958 mutex_lock(&ctrl->dev_mutex); ··· 969 965 edp_ctrl_link_enable(ctrl, 1); 970 966 971 967 edp_ctrl_irq_enable(ctrl, 1); 972 - ret = drm_dp_link_power_up(ctrl->drm_aux, &ctrl->dp_link); 973 - if (ret) 974 - goto fail; 968 + 969 + /* DP_SET_POWER register is only available on DPCD v1.1 and later */ 970 + if (ctrl->dpcd[DP_DPCD_REV] >= 0x11) { 971 + ret = drm_dp_dpcd_readb(ctrl->drm_aux, DP_SET_POWER, &value); 972 + if (ret < 0) 973 + goto fail; 974 + 975 + value &= ~DP_SET_POWER_MASK; 976 + value |= DP_SET_POWER_D0; 977 + 978 + ret = drm_dp_dpcd_writeb(ctrl->drm_aux, DP_SET_POWER, value); 979 + if (ret < 0) 980 + goto fail; 981 + 982 + /* 983 + * According to the DP 1.1 specification, a "Sink Device must 984 + * exit the power saving state within 1 ms" (Section 2.5.3.1, 985 + * Table 5-52, "Sink Control Field" (register 0x600). 
986 + */ 987 + usleep_range(1000, 2000); 988 + } 975 989 976 990 ctrl->power_on = true; 977 991 ··· 1033 1011 1034 1012 edp_state_ctrl(ctrl, 0); 1035 1013 1036 - drm_dp_link_power_down(ctrl->drm_aux, &ctrl->dp_link); 1014 + /* DP_SET_POWER register is only available on DPCD v1.1 and later */ 1015 + if (ctrl->dpcd[DP_DPCD_REV] >= 0x11) { 1016 + u8 value; 1017 + int ret; 1018 + 1019 + ret = drm_dp_dpcd_readb(ctrl->drm_aux, DP_SET_POWER, &value); 1020 + if (ret > 0) { 1021 + value &= ~DP_SET_POWER_MASK; 1022 + value |= DP_SET_POWER_D3; 1023 + 1024 + drm_dp_dpcd_writeb(ctrl->drm_aux, DP_SET_POWER, value); 1025 + } 1026 + } 1037 1027 1038 1028 edp_ctrl_irq_enable(ctrl, 0); 1039 1029 ··· 1259 1225 edp_ctrl_irq_enable(ctrl, 1); 1260 1226 } 1261 1227 1262 - ret = drm_dp_link_probe(ctrl->drm_aux, &ctrl->dp_link); 1263 - if (ret) { 1264 - pr_err("%s: read dpcd cap failed, %d\n", __func__, ret); 1265 - goto disable_ret; 1266 - } 1267 - 1268 1228 /* Initialize link rate as panel max link rate */ 1269 - ctrl->link_rate = drm_dp_link_rate_to_bw_code(ctrl->dp_link.rate); 1229 + ctrl->link_rate = ctrl->dpcd[DP_MAX_LINK_RATE]; 1270 1230 1271 1231 ctrl->edid = drm_get_edid(connector, &ctrl->drm_aux->ddc); 1272 1232 if (!ctrl->edid) {
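The edp_ctrl.c rework above open-codes what drm_dp_link_configure() and drm_dp_link_power_up() used to do. The single two-byte drm_dp_dpcd_write() works because DP_LANE_COUNT_SET (DPCD 0x101) directly follows DP_LINK_BW_SET (DPCD 0x100), so link rate and lane count are programmed in one AUX transfer. A standalone sketch of that pattern with illustrative values:

    #include <drm/drm_dp_helper.h>

    /* Program link rate and lane count with one two-byte DPCD write. */
    static int example_configure_link(struct drm_dp_aux *aux, u8 bw_code,
                                      u8 lane_count, bool enhanced_framing)
    {
            u8 values[2];

            values[0] = bw_code;            /* e.g. DP_LINK_BW_2_7 */
            values[1] = lane_count;         /* e.g. 2 */
            if (enhanced_framing)
                    values[1] |= DP_LANE_COUNT_ENHANCED_FRAME_EN;

            /* drm_dp_dpcd_write() returns bytes written or a negative error. */
            if (drm_dp_dpcd_write(aux, DP_LINK_BW_SET, values,
                                  sizeof(values)) < 0)
                    return -EIO;

            return 0;
    }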
+17 -3
drivers/gpu/drm/mxsfb/mxsfb_crtc.c
··· 95 95 96 96 reg = readl(mxsfb->base + LCDC_CTRL); 97 97 98 - if (mxsfb->connector.display_info.num_bus_formats) 99 - bus_format = mxsfb->connector.display_info.bus_formats[0]; 98 + if (mxsfb->connector->display_info.num_bus_formats) 99 + bus_format = mxsfb->connector->display_info.bus_formats[0]; 100 + 101 + DRM_DEV_DEBUG_DRIVER(drm->dev, "Using bus_format: 0x%08X\n", 102 + bus_format); 100 103 101 104 reg &= ~CTRL_BUS_WIDTH_MASK; 102 105 switch (bus_format) { ··· 207 204 208 205 static void mxsfb_crtc_mode_set_nofb(struct mxsfb_drm_private *mxsfb) 209 206 { 207 + struct drm_device *drm = mxsfb->pipe.crtc.dev; 210 208 struct drm_display_mode *m = &mxsfb->pipe.crtc.state->adjusted_mode; 211 - const u32 bus_flags = mxsfb->connector.display_info.bus_flags; 209 + u32 bus_flags = mxsfb->connector->display_info.bus_flags; 212 210 u32 vdctrl0, vsync_pulse_len, hsync_pulse_len; 213 211 int err; 214 212 ··· 232 228 return; 233 229 234 230 clk_set_rate(mxsfb->clk, m->crtc_clock * 1000); 231 + 232 + if (mxsfb->bridge && mxsfb->bridge->timings) 233 + bus_flags = mxsfb->bridge->timings->input_bus_flags; 234 + 235 + DRM_DEV_DEBUG_DRIVER(drm->dev, "Pixel clock: %dkHz (actual: %dkHz)\n", 236 + m->crtc_clock, 237 + (int)(clk_get_rate(mxsfb->clk) / 1000)); 238 + DRM_DEV_DEBUG_DRIVER(drm->dev, "Connector bus_flags: 0x%08X\n", 239 + bus_flags); 240 + DRM_DEV_DEBUG_DRIVER(drm->dev, "Mode flags: 0x%08X\n", m->flags); 235 241 236 242 writel(TRANSFER_COUNT_SET_VCOUNT(m->crtc_vdisplay) | 237 243 TRANSFER_COUNT_SET_HCOUNT(m->crtc_hdisplay),
+41 -5
drivers/gpu/drm/mxsfb/mxsfb_drv.c
··· 101 101 struct drm_crtc_state *crtc_state, 102 102 struct drm_plane_state *plane_state) 103 103 { 104 + struct drm_connector *connector; 104 105 struct mxsfb_drm_private *mxsfb = drm_pipe_to_mxsfb_drm_private(pipe); 105 106 struct drm_device *drm = pipe->plane.dev; 107 + 108 + if (!mxsfb->connector) { 109 + list_for_each_entry(connector, 110 + &drm->mode_config.connector_list, 111 + head) 112 + if (connector->encoder == &mxsfb->pipe.encoder) { 113 + mxsfb->connector = connector; 114 + break; 115 + } 116 + } 117 + 118 + if (!mxsfb->connector) { 119 + dev_warn(drm->dev, "No connector attached, using default\n"); 120 + mxsfb->connector = &mxsfb->panel_connector; 121 + } 106 122 107 123 pm_runtime_get_sync(drm->dev); 108 124 drm_panel_prepare(mxsfb->panel); ··· 145 129 drm_crtc_send_vblank_event(crtc, event); 146 130 } 147 131 spin_unlock_irq(&drm->event_lock); 132 + 133 + if (mxsfb->connector != &mxsfb->panel_connector) 134 + mxsfb->connector = NULL; 148 135 } 149 136 150 137 static void mxsfb_pipe_update(struct drm_simple_display_pipe *pipe, ··· 245 226 246 227 ret = drm_simple_display_pipe_init(drm, &mxsfb->pipe, &mxsfb_funcs, 247 228 mxsfb_formats, ARRAY_SIZE(mxsfb_formats), NULL, 248 - &mxsfb->connector); 229 + mxsfb->connector); 249 230 if (ret < 0) { 250 231 dev_err(drm->dev, "Cannot setup simple display pipe\n"); 251 232 goto err_vblank; 252 233 } 253 234 254 - ret = drm_panel_attach(mxsfb->panel, &mxsfb->connector); 255 - if (ret) { 256 - dev_err(drm->dev, "Cannot connect panel\n"); 257 - goto err_vblank; 235 + /* 236 + * Attach panel only if there is one. 237 + * If there is no panel attach, it must be a bridge. In this case, we 238 + * need a reference to its connector for a proper initialization. 239 + * We will do this check in pipe->enable(), since the connector won't 240 + * be attached to an encoder until then. 241 + */ 242 + 243 + if (mxsfb->panel) { 244 + ret = drm_panel_attach(mxsfb->panel, mxsfb->connector); 245 + if (ret) { 246 + dev_err(drm->dev, "Cannot connect panel: %d\n", ret); 247 + goto err_vblank; 248 + } 249 + } else if (mxsfb->bridge) { 250 + ret = drm_simple_display_pipe_attach_bridge(&mxsfb->pipe, 251 + mxsfb->bridge); 252 + if (ret) { 253 + dev_err(drm->dev, "Cannot connect bridge: %d\n", ret); 254 + goto err_vblank; 255 + } 258 256 } 259 257 260 258 drm->mode_config.min_width = MXSFB_MIN_XRES;
+3 -1
drivers/gpu/drm/mxsfb/mxsfb_drv.h
··· 27 27 struct clk *clk_disp_axi; 28 28 29 29 struct drm_simple_display_pipe pipe; 30 - struct drm_connector connector; 30 + struct drm_connector panel_connector; 31 + struct drm_connector *connector; 31 32 struct drm_panel *panel; 33 + struct drm_bridge *bridge; 32 34 }; 33 35 34 36 int mxsfb_setup_crtc(struct drm_device *dev);
+14 -12
drivers/gpu/drm/mxsfb/mxsfb_out.c
··· 21 21 static struct mxsfb_drm_private * 22 22 drm_connector_to_mxsfb_drm_private(struct drm_connector *connector) 23 23 { 24 - return container_of(connector, struct mxsfb_drm_private, connector); 24 + return container_of(connector, struct mxsfb_drm_private, 25 + panel_connector); 25 26 } 26 27 27 28 static int mxsfb_panel_get_modes(struct drm_connector *connector) ··· 77 76 int mxsfb_create_output(struct drm_device *drm) 78 77 { 79 78 struct mxsfb_drm_private *mxsfb = drm->dev_private; 80 - struct drm_panel *panel; 81 79 int ret; 82 80 83 - ret = drm_of_find_panel_or_bridge(drm->dev->of_node, 0, 0, &panel, NULL); 81 + ret = drm_of_find_panel_or_bridge(drm->dev->of_node, 0, 0, 82 + &mxsfb->panel, &mxsfb->bridge); 84 83 if (ret) 85 84 return ret; 86 85 87 - mxsfb->connector.dpms = DRM_MODE_DPMS_OFF; 88 - mxsfb->connector.polled = 0; 89 - drm_connector_helper_add(&mxsfb->connector, 90 - &mxsfb_panel_connector_helper_funcs); 91 - ret = drm_connector_init(drm, &mxsfb->connector, 92 - &mxsfb_panel_connector_funcs, 93 - DRM_MODE_CONNECTOR_Unknown); 94 - if (!ret) 95 - mxsfb->panel = panel; 86 + if (mxsfb->panel) { 87 + mxsfb->connector = &mxsfb->panel_connector; 88 + mxsfb->connector->dpms = DRM_MODE_DPMS_OFF; 89 + mxsfb->connector->polled = 0; 90 + drm_connector_helper_add(mxsfb->connector, 91 + &mxsfb_panel_connector_helper_funcs); 92 + ret = drm_connector_init(drm, mxsfb->connector, 93 + &mxsfb_panel_connector_funcs, 94 + DRM_MODE_CONNECTOR_Unknown); 95 + } 96 96 97 97 return ret; 98 98 }
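Taken together, the four mxsfb hunks let the driver drive either a panel (with its own panel_connector) or an external bridge; in the bridge case the connector is created by the bridge and only becomes reachable once the encoder is attached, hence the lookup in pipe->enable(). A condensed sketch of the probe-time branching, assuming drm_of_find_panel_or_bridge() returns exactly one of panel/bridge (error handling trimmed, condensed from the hunks above rather than copied verbatim):

    ret = drm_of_find_panel_or_bridge(drm->dev->of_node, 0, 0,
                                      &mxsfb->panel, &mxsfb->bridge);
    if (ret)
            return ret;

    if (mxsfb->panel)
            ret = drm_panel_attach(mxsfb->panel, mxsfb->connector);
    else if (mxsfb->bridge)
            ret = drm_simple_display_pipe_attach_bridge(&mxsfb->pipe,
                                                        mxsfb->bridge);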
+1 -2
drivers/gpu/drm/omapdrm/dss/dsi.c
··· 3548 3548 3549 3549 static void dsi_proto_timings(struct dsi_data *dsi) 3550 3550 { 3551 - unsigned int tlpx, tclk_zero, tclk_prepare, tclk_trail; 3551 + unsigned int tlpx, tclk_zero, tclk_prepare; 3552 3552 unsigned int tclk_pre, tclk_post; 3553 3553 unsigned int ths_prepare, ths_prepare_ths_zero, ths_zero; 3554 3554 unsigned int ths_trail, ths_exit; ··· 3567 3567 3568 3568 r = dsi_read_reg(dsi, DSI_DSIPHY_CFG1); 3569 3569 tlpx = FLD_GET(r, 20, 16) * 2; 3570 - tclk_trail = FLD_GET(r, 15, 8); 3571 3570 tclk_zero = FLD_GET(r, 7, 0); 3572 3571 3573 3572 r = dsi_read_reg(dsi, DSI_DSIPHY_CFG2);
+2 -2
drivers/gpu/drm/omapdrm/dss/hdmi4_core.c
··· 676 676 struct hdmi_audio_format audio_format; 677 677 struct hdmi_audio_dma audio_dma; 678 678 struct hdmi_core_audio_config acore; 679 - int err, n, cts, channel_count; 679 + int n, cts, channel_count; 680 680 unsigned int fs_nr; 681 681 bool word_length_16b = false; 682 682 ··· 738 738 return -EINVAL; 739 739 } 740 740 741 - err = hdmi_compute_acr(pclk, fs_nr, &n, &cts); 741 + hdmi_compute_acr(pclk, fs_nr, &n, &cts); 742 742 743 743 /* Audio clock regeneration settings */ 744 744 acore.n = n;
+2 -2
drivers/gpu/drm/omapdrm/dss/hdmi5_core.c
··· 807 807 struct hdmi_audio_format audio_format; 808 808 struct hdmi_audio_dma audio_dma; 809 809 struct hdmi_core_audio_config core_cfg; 810 - int err, n, cts, channel_count; 810 + int n, cts, channel_count; 811 811 unsigned int fs_nr; 812 812 bool word_length_16b = false; 813 813 ··· 850 850 return -EINVAL; 851 851 } 852 852 853 - err = hdmi_compute_acr(pclk, fs_nr, &n, &cts); 853 + hdmi_compute_acr(pclk, fs_nr, &n, &cts); 854 854 core_cfg.n = n; 855 855 core_cfg.cts = cts; 856 856
+1 -1
drivers/gpu/drm/omapdrm/omap_dmm_tiler.h
··· 113 113 /* GEM bo flags -> tiler fmt */ 114 114 static inline enum tiler_fmt gem2fmt(u32 flags) 115 115 { 116 - switch (flags & OMAP_BO_TILED) { 116 + switch (flags & OMAP_BO_TILED_MASK) { 117 117 case OMAP_BO_TILED_8: 118 118 return TILFMT_8BIT; 119 119 case OMAP_BO_TILED_16:
+3 -6
drivers/gpu/drm/omapdrm/omap_fb.c
··· 95 95 96 96 bool omap_framebuffer_supports_rotation(struct drm_framebuffer *fb) 97 97 { 98 - return omap_gem_flags(fb->obj[0]) & OMAP_BO_TILED; 98 + return omap_gem_flags(fb->obj[0]) & OMAP_BO_TILED_MASK; 99 99 } 100 100 101 101 /* Note: DRM rotates counter-clockwise, TILER & DSS rotates clockwise */ ··· 135 135 { 136 136 struct omap_framebuffer *omap_fb = to_omap_framebuffer(fb); 137 137 const struct drm_format_info *format = omap_fb->format; 138 - struct plane *plane = &omap_fb->planes[0]; 139 138 u32 x, y, orient = 0; 140 139 141 140 info->fourcc = fb->format->format; ··· 153 154 x = state->src_x >> 16; 154 155 y = state->src_y >> 16; 155 156 156 - if (omap_gem_flags(fb->obj[0]) & OMAP_BO_TILED) { 157 + if (omap_gem_flags(fb->obj[0]) & OMAP_BO_TILED_MASK) { 157 158 u32 w = state->src_w >> 16; 158 159 u32 h = state->src_h >> 16; 159 160 ··· 208 209 info->screen_width /= format->cpp[0]; 209 210 210 211 if (fb->format->format == DRM_FORMAT_NV12) { 211 - plane = &omap_fb->planes[1]; 212 - 213 212 if (info->rotation_type == OMAP_DSS_ROT_TILER) { 214 - WARN_ON(!(omap_gem_flags(fb->obj[1]) & OMAP_BO_TILED)); 213 + WARN_ON(!(omap_gem_flags(fb->obj[1]) & OMAP_BO_TILED_MASK)); 215 214 omap_gem_rotated_dma_addr(fb->obj[1], orient, x/2, y/2, 216 215 &info->p_uv_addr); 217 216 } else {
+89 -48
drivers/gpu/drm/omapdrm/omap_gem.c
··· 67 67 /** 68 68 * # of users of dma_addr 69 69 */ 70 - u32 dma_addr_cnt; 70 + refcount_t dma_addr_cnt; 71 71 72 72 /** 73 73 * If the buffer has been imported from a dmabuf the OMAP_DB_DMABUF flag ··· 196 196 struct omap_gem_object *omap_obj = to_omap_bo(obj); 197 197 struct omap_drm_private *priv = obj->dev->dev_private; 198 198 199 - if (omap_obj->flags & OMAP_BO_TILED) { 199 + if (omap_obj->flags & OMAP_BO_TILED_MASK) { 200 200 enum tiler_fmt fmt = gem2fmt(omap_obj->flags); 201 201 int i; 202 202 ··· 324 324 struct omap_gem_object *omap_obj = to_omap_bo(obj); 325 325 size_t size = obj->size; 326 326 327 - if (omap_obj->flags & OMAP_BO_TILED) { 327 + if (omap_obj->flags & OMAP_BO_TILED_MASK) { 328 328 /* for tiled buffers, the virtual size has stride rounded up 329 329 * to 4kb.. (to hide the fact that row n+1 might start 16kb or 330 330 * 32kb later!). But we don't back the entire buffer with ··· 513 513 * probably trigger put_pages()? 514 514 */ 515 515 516 - if (omap_obj->flags & OMAP_BO_TILED) 516 + if (omap_obj->flags & OMAP_BO_TILED_MASK) 517 517 ret = omap_gem_fault_2d(obj, vma, vmf); 518 518 else 519 519 ret = omap_gem_fault_1d(obj, vma, vmf); ··· 773 773 mutex_lock(&omap_obj->lock); 774 774 775 775 if (!omap_gem_is_contiguous(omap_obj) && priv->has_dmm) { 776 - if (omap_obj->dma_addr_cnt == 0) { 776 + if (refcount_read(&omap_obj->dma_addr_cnt) == 0) { 777 777 u32 npages = obj->size >> PAGE_SHIFT; 778 778 enum tiler_fmt fmt = gem2fmt(omap_obj->flags); 779 779 struct tiler_block *block; 780 780 781 781 BUG_ON(omap_obj->block); 782 782 783 + refcount_set(&omap_obj->dma_addr_cnt, 1); 784 + 783 785 ret = omap_gem_attach_pages(obj); 784 786 if (ret) 785 787 goto fail; 786 788 787 - if (omap_obj->flags & OMAP_BO_TILED) { 789 + if (omap_obj->flags & OMAP_BO_TILED_MASK) { 788 790 block = tiler_reserve_2d(fmt, 789 791 omap_obj->width, 790 792 omap_obj->height, 0); ··· 815 813 omap_obj->block = block; 816 814 817 815 DBG("got dma address: %pad", &omap_obj->dma_addr); 816 + } else { 817 + refcount_inc(&omap_obj->dma_addr_cnt); 818 818 } 819 819 820 - omap_obj->dma_addr_cnt++; 821 - 822 - *dma_addr = omap_obj->dma_addr; 820 + if (dma_addr) 821 + *dma_addr = omap_obj->dma_addr; 823 822 } else if (omap_gem_is_contiguous(omap_obj)) { 824 - *dma_addr = omap_obj->dma_addr; 823 + if (dma_addr) 824 + *dma_addr = omap_obj->dma_addr; 825 825 } else { 826 826 ret = -EINVAL; 827 827 goto fail; ··· 836 832 } 837 833 838 834 /** 835 + * omap_gem_unpin_locked() - Unpin a GEM object from memory 836 + * @obj: the GEM object 837 + * 838 + * omap_gem_unpin() without locking. 839 + */ 840 + static void omap_gem_unpin_locked(struct drm_gem_object *obj) 841 + { 842 + struct omap_gem_object *omap_obj = to_omap_bo(obj); 843 + int ret; 844 + 845 + if (refcount_dec_and_test(&omap_obj->dma_addr_cnt)) { 846 + ret = tiler_unpin(omap_obj->block); 847 + if (ret) { 848 + dev_err(obj->dev->dev, 849 + "could not unpin pages: %d\n", ret); 850 + } 851 + ret = tiler_release(omap_obj->block); 852 + if (ret) { 853 + dev_err(obj->dev->dev, 854 + "could not release unmap: %d\n", ret); 855 + } 856 + omap_obj->dma_addr = 0; 857 + omap_obj->block = NULL; 858 + } 859 + } 860 + 861 + /** 839 862 * omap_gem_unpin() - Unpin a GEM object from memory 840 863 * @obj: the GEM object 841 864 * 842 865 * Unpin the given GEM object previously pinned with omap_gem_pin(). 
Pins are 843 - * reference-counted, the actualy unpin will only be performed when the number 866 + * reference-counted, the actual unpin will only be performed when the number 844 867 * of calls to this function matches the number of calls to omap_gem_pin(). 845 868 */ 846 869 void omap_gem_unpin(struct drm_gem_object *obj) 847 870 { 848 871 struct omap_gem_object *omap_obj = to_omap_bo(obj); 849 - int ret; 850 872 851 873 mutex_lock(&omap_obj->lock); 852 - 853 - if (omap_obj->dma_addr_cnt > 0) { 854 - omap_obj->dma_addr_cnt--; 855 - if (omap_obj->dma_addr_cnt == 0) { 856 - ret = tiler_unpin(omap_obj->block); 857 - if (ret) { 858 - dev_err(obj->dev->dev, 859 - "could not unpin pages: %d\n", ret); 860 - } 861 - ret = tiler_release(omap_obj->block); 862 - if (ret) { 863 - dev_err(obj->dev->dev, 864 - "could not release unmap: %d\n", ret); 865 - } 866 - omap_obj->dma_addr = 0; 867 - omap_obj->block = NULL; 868 - } 869 - } 870 - 874 + omap_gem_unpin_locked(obj); 871 875 mutex_unlock(&omap_obj->lock); 872 876 } 873 877 ··· 891 879 892 880 mutex_lock(&omap_obj->lock); 893 881 894 - if ((omap_obj->dma_addr_cnt > 0) && omap_obj->block && 895 - (omap_obj->flags & OMAP_BO_TILED)) { 882 + if ((refcount_read(&omap_obj->dma_addr_cnt) > 0) && omap_obj->block && 883 + (omap_obj->flags & OMAP_BO_TILED_MASK)) { 896 884 *dma_addr = tiler_tsptr(omap_obj->block, orient, x, y); 897 885 ret = 0; 898 886 } ··· 907 895 { 908 896 struct omap_gem_object *omap_obj = to_omap_bo(obj); 909 897 int ret = -EINVAL; 910 - if (omap_obj->flags & OMAP_BO_TILED) 898 + if (omap_obj->flags & OMAP_BO_TILED_MASK) 911 899 ret = tiler_stride(gem2fmt(omap_obj->flags), orient); 912 900 return ret; 913 901 } ··· 1042 1030 1043 1031 seq_printf(m, "%08x: %2d (%2d) %08llx %pad (%2d) %p %4d", 1044 1032 omap_obj->flags, obj->name, kref_read(&obj->refcount), 1045 - off, &omap_obj->dma_addr, omap_obj->dma_addr_cnt, 1033 + off, &omap_obj->dma_addr, 1034 + refcount_read(&omap_obj->dma_addr_cnt), 1046 1035 omap_obj->vaddr, omap_obj->roll); 1047 1036 1048 - if (omap_obj->flags & OMAP_BO_TILED) { 1037 + if (omap_obj->flags & OMAP_BO_TILED_MASK) { 1049 1038 seq_printf(m, " %dx%d", omap_obj->width, omap_obj->height); 1050 1039 if (omap_obj->block) { 1051 1040 struct tcm_area *area = &omap_obj->block->area; ··· 1106 1093 mutex_lock(&omap_obj->lock); 1107 1094 1108 1095 /* The object should not be pinned. 
*/ 1109 - WARN_ON(omap_obj->dma_addr_cnt > 0); 1096 + WARN_ON(refcount_read(&omap_obj->dma_addr_cnt) > 0); 1110 1097 1111 1098 if (omap_obj->pages) { 1112 1099 if (omap_obj->flags & OMAP_BO_MEM_DMABUF) ··· 1133 1120 kfree(omap_obj); 1134 1121 } 1135 1122 1123 + static bool omap_gem_validate_flags(struct drm_device *dev, u32 flags) 1124 + { 1125 + struct omap_drm_private *priv = dev->dev_private; 1126 + 1127 + switch (flags & OMAP_BO_CACHE_MASK) { 1128 + case OMAP_BO_CACHED: 1129 + case OMAP_BO_WC: 1130 + case OMAP_BO_CACHE_MASK: 1131 + break; 1132 + 1133 + default: 1134 + return false; 1135 + } 1136 + 1137 + if (flags & OMAP_BO_TILED_MASK) { 1138 + if (!priv->usergart) 1139 + return false; 1140 + 1141 + switch (flags & OMAP_BO_TILED_MASK) { 1142 + case OMAP_BO_TILED_8: 1143 + case OMAP_BO_TILED_16: 1144 + case OMAP_BO_TILED_32: 1145 + break; 1146 + 1147 + default: 1148 + return false; 1149 + } 1150 + } 1151 + 1152 + return true; 1153 + } 1154 + 1136 1155 /* GEM buffer object constructor */ 1137 1156 struct drm_gem_object *omap_gem_new(struct drm_device *dev, 1138 1157 union omap_gem_size gsize, u32 flags) ··· 1176 1131 size_t size; 1177 1132 int ret; 1178 1133 1179 - /* Validate the flags and compute the memory and cache flags. */ 1180 - if (flags & OMAP_BO_TILED) { 1181 - if (!priv->usergart) { 1182 - dev_err(dev->dev, "Tiled buffers require DMM\n"); 1183 - return NULL; 1184 - } 1134 + if (!omap_gem_validate_flags(dev, flags)) 1135 + return NULL; 1185 1136 1137 + /* Validate the flags and compute the memory and cache flags. */ 1138 + if (flags & OMAP_BO_TILED_MASK) { 1186 1139 /* 1187 1140 * Tiled buffers are always shmem paged backed. When they are 1188 1141 * scanned out, they are remapped into DMM/TILER. 1189 1142 */ 1190 - flags &= ~OMAP_BO_SCANOUT; 1191 1143 flags |= OMAP_BO_MEM_SHMEM; 1192 1144 1193 1145 /* ··· 1195 1153 flags |= tiler_get_cpu_cache_flags(); 1196 1154 } else if ((flags & OMAP_BO_SCANOUT) && !priv->has_dmm) { 1197 1155 /* 1198 - * OMAP_BO_SCANOUT hints that the buffer doesn't need to be 1199 - * tiled. However, to lower the pressure on memory allocation, 1200 - * use contiguous memory only if no TILER is available. 1156 + * If we don't have DMM, we must allocate scanout buffers 1157 + * from contiguous DMA memory. 1201 1158 */ 1202 1159 flags |= OMAP_BO_MEM_DMA_API; 1203 1160 } else if (!(flags & OMAP_BO_MEM_DMABUF)) { ··· 1215 1174 omap_obj->flags = flags; 1216 1175 mutex_init(&omap_obj->lock); 1217 1176 1218 - if (flags & OMAP_BO_TILED) { 1177 + if (flags & OMAP_BO_TILED_MASK) { 1219 1178 /* 1220 1179 * For tiled buffers align dimensions to slot boundaries and 1221 1180 * calculate size based on aligned dimensions.
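The omap_gem.c changes convert the pin count from a plain u32 into a refcount_t and move the actual release work into omap_gem_unpin_locked(). The acquire/release shape, as a generic sketch with placeholder names (note that, like the driver, it relies on an outer lock: refcount_read() == 0 followed by refcount_set(..., 1) is not atomic on its own):

    #include <linux/refcount.h>

    struct example_pin {
            refcount_t count;       /* 0 == currently unpinned */
    };

    /* Returns true when the caller must perform the actual (expensive) pin. */
    static bool example_pin_get(struct example_pin *p)
    {
            if (refcount_read(&p->count) == 0) {
                    refcount_set(&p->count, 1);
                    return true;
            }
            refcount_inc(&p->count);
            return false;
    }

    /* Returns true when the last user dropped off and the pin must be undone. */
    static bool example_pin_put(struct example_pin *p)
    {
            return refcount_dec_and_test(&p->count);
    }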
+1 -1
drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c
··· 67 67 { 68 68 struct drm_gem_object *obj = buffer->priv; 69 69 struct page **pages; 70 - if (omap_gem_flags(obj) & OMAP_BO_TILED) { 70 + if (omap_gem_flags(obj) & OMAP_BO_TILED_MASK) { 71 71 /* TODO we would need to pin at least part of the buffer to 72 72 * get de-tiled view. For now just reject it. 73 73 */
+2
drivers/gpu/drm/panfrost/TODO
··· 10 10 11 11 - Compute job support. So called 'compute only' jobs need to be plumbed up to 12 12 userspace. 13 + 14 + - Support core dump on job failure
+2 -4
drivers/gpu/drm/panfrost/panfrost_devfreq.c
··· 53 53 if (err) { 54 54 dev_err(dev, "Cannot set frequency %lu (%d)\n", target_rate, 55 55 err); 56 - if (pfdev->regulator) 57 - regulator_set_voltage(pfdev->regulator, 58 - pfdev->devfreq.cur_volt, 59 - pfdev->devfreq.cur_volt); 56 + regulator_set_voltage(pfdev->regulator, pfdev->devfreq.cur_volt, 57 + pfdev->devfreq.cur_volt); 60 58 return err; 61 59 } 62 60
+1 -1
drivers/gpu/drm/panfrost/panfrost_drv.c
··· 470 470 PANFROST_IOCTL(MADVISE, madvise, DRM_RENDER_ALLOW), 471 471 }; 472 472 473 - DEFINE_DRM_GEM_SHMEM_FOPS(panfrost_drm_driver_fops); 473 + DEFINE_DRM_GEM_FOPS(panfrost_drm_driver_fops); 474 474 475 475 /* 476 476 * Panfrost driver version:
+1 -1
drivers/gpu/drm/panfrost/panfrost_gem.c
··· 112 112 .get_sg_table = drm_gem_shmem_get_sg_table, 113 113 .vmap = drm_gem_shmem_vmap, 114 114 .vunmap = drm_gem_shmem_vunmap, 115 - .vm_ops = &drm_gem_shmem_vm_ops, 115 + .mmap = drm_gem_shmem_mmap, 116 116 }; 117 117 118 118 /**
-2
drivers/gpu/drm/panfrost/panfrost_job.c
··· 404 404 } 405 405 spin_unlock_irqrestore(&pfdev->js->job_lock, flags); 406 406 407 - /* panfrost_core_dump(pfdev); */ 408 - 409 407 panfrost_devfreq_record_transition(pfdev, js); 410 408 panfrost_device_reset(pfdev); 411 409
+1
drivers/gpu/drm/qxl/Kconfig
··· 4 4 depends on DRM && PCI && MMU 5 5 select DRM_KMS_HELPER 6 6 select DRM_TTM 7 + select DRM_TTM_HELPER 7 8 select CRC32 8 9 help 9 10 QXL virtual GPU for Spice virtualization desktop integration.
+1 -9
drivers/gpu/drm/qxl/qxl_drv.c
··· 150 150 drm_dev_put(dev); 151 151 } 152 152 153 - static const struct file_operations qxl_fops = { 154 - .owner = THIS_MODULE, 155 - .open = drm_open, 156 - .release = drm_release, 157 - .unlocked_ioctl = drm_ioctl, 158 - .poll = drm_poll, 159 - .read = drm_read, 160 - .mmap = qxl_mmap, 161 - }; 153 + DEFINE_DRM_GEM_FOPS(qxl_fops); 162 154 163 155 static int qxl_drm_freeze(struct drm_device *dev) 164 156 {
-1
drivers/gpu/drm/qxl/qxl_drv.h
··· 355 355 /* qxl ttm */ 356 356 int qxl_ttm_init(struct qxl_device *qdev); 357 357 void qxl_ttm_fini(struct qxl_device *qdev); 358 - int qxl_mmap(struct file *filp, struct vm_area_struct *vma); 359 358 360 359 /* qxl image */ 361 360
+7 -1
drivers/gpu/drm/qxl/qxl_object.c
··· 54 54 void qxl_ttm_placement_from_domain(struct qxl_bo *qbo, u32 domain, bool pinned) 55 55 { 56 56 u32 c = 0; 57 - u32 pflag = pinned ? TTM_PL_FLAG_NO_EVICT : 0; 57 + u32 pflag = 0; 58 58 unsigned int i; 59 + 60 + if (pinned) 61 + pflag |= TTM_PL_FLAG_NO_EVICT; 62 + if (qbo->tbo.base.size <= PAGE_SIZE) 63 + pflag |= TTM_PL_FLAG_TOPDOWN; 59 64 60 65 qbo->placement.placement = qbo->placements; 61 66 qbo->placement.busy_placement = qbo->placements; ··· 91 86 .get_sg_table = qxl_gem_prime_get_sg_table, 92 87 .vmap = qxl_gem_prime_vmap, 93 88 .vunmap = qxl_gem_prime_vunmap, 89 + .mmap = drm_gem_ttm_mmap, 94 90 .print_info = drm_gem_ttm_print_info, 95 91 }; 96 92
-50
drivers/gpu/drm/qxl/qxl_ttm.c
··· 48 48 return qdev; 49 49 } 50 50 51 - static struct vm_operations_struct qxl_ttm_vm_ops; 52 - static const struct vm_operations_struct *ttm_vm_ops; 53 - 54 - static vm_fault_t qxl_ttm_fault(struct vm_fault *vmf) 55 - { 56 - struct ttm_buffer_object *bo; 57 - vm_fault_t ret; 58 - 59 - bo = (struct ttm_buffer_object *)vmf->vma->vm_private_data; 60 - if (bo == NULL) 61 - return VM_FAULT_NOPAGE; 62 - ret = ttm_vm_ops->fault(vmf); 63 - return ret; 64 - } 65 - 66 - int qxl_mmap(struct file *filp, struct vm_area_struct *vma) 67 - { 68 - int r; 69 - struct drm_file *file_priv = filp->private_data; 70 - struct qxl_device *qdev = file_priv->minor->dev->dev_private; 71 - 72 - if (qdev == NULL) { 73 - DRM_ERROR( 74 - "filp->private_data->minor->dev->dev_private == NULL\n"); 75 - return -EINVAL; 76 - } 77 - DRM_DEBUG_DRIVER("filp->private_data = 0x%p, vma->vm_pgoff = %lx\n", 78 - filp->private_data, vma->vm_pgoff); 79 - 80 - r = ttm_bo_mmap(filp, vma, &qdev->mman.bdev); 81 - if (unlikely(r != 0)) 82 - return r; 83 - if (unlikely(ttm_vm_ops == NULL)) { 84 - ttm_vm_ops = vma->vm_ops; 85 - qxl_ttm_vm_ops = *ttm_vm_ops; 86 - qxl_ttm_vm_ops.fault = &qxl_ttm_fault; 87 - } 88 - vma->vm_ops = &qxl_ttm_vm_ops; 89 - return 0; 90 - } 91 - 92 51 static int qxl_invalidate_caches(struct ttm_bo_device *bdev, uint32_t flags) 93 52 { 94 53 return 0; ··· 108 149 qbo = to_qxl_bo(bo); 109 150 qxl_ttm_placement_from_domain(qbo, QXL_GEM_DOMAIN_CPU, false); 110 151 *placement = qbo->placement; 111 - } 112 - 113 - static int qxl_verify_access(struct ttm_buffer_object *bo, struct file *filp) 114 - { 115 - struct qxl_bo *qbo = to_qxl_bo(bo); 116 - 117 - return drm_vma_node_verify_access(&qbo->tbo.base.vma_node, 118 - filp->private_data); 119 152 } 120 153 121 154 static int qxl_ttm_io_mem_reserve(struct ttm_bo_device *bdev, ··· 261 310 .eviction_valuable = ttm_bo_eviction_valuable, 262 311 .evict_flags = &qxl_evict_flags, 263 312 .move = &qxl_bo_move, 264 - .verify_access = &qxl_verify_access, 265 313 .io_mem_reserve = &qxl_ttm_io_mem_reserve, 266 314 .io_mem_free = &qxl_ttm_io_mem_free, 267 315 .move_notify = &qxl_bo_move_notify,
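The roughly fifty removed lines above (the wrapped vm_ops, qxl_ttm_fault() and qxl_mmap()) are what the new object-level mmap path replaces: with .mmap = drm_gem_ttm_mmap in qxl_gem_object_funcs and DEFINE_DRM_GEM_FOPS() providing drm_gem_mmap() at the file level, the core looks up the GEM object from the mmap offset and forwards to the TTM helper. A hypothetical driver wiring, shown only to illustrate the shape, not qxl's exact code:

    #include <drm/drm_gem.h>
    #include <drm/drm_gem_ttm_helper.h>

    static const struct drm_gem_object_funcs example_gem_funcs = {
            /* ...free/pin/unpin/etc. elided... */
            .mmap = drm_gem_ttm_mmap,   /* forwards to the TTM mmap helpers */
    };

    DEFINE_DRM_GEM_FOPS(example_fops);  /* .mmap = drm_gem_mmap */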
+6 -6
drivers/gpu/drm/rockchip/cdn-dp-core.c
··· 477 477 cdn_dp_set_firmware_active(dp, false); 478 478 cdn_dp_clk_disable(dp); 479 479 dp->active = false; 480 - dp->link.rate = 0; 481 - dp->link.num_lanes = 0; 480 + dp->max_lanes = 0; 481 + dp->max_rate = 0; 482 482 if (!dp->connected) { 483 483 kfree(dp->edid); 484 484 dp->edid = NULL; ··· 570 570 struct cdn_dp_port *port = cdn_dp_connected_port(dp); 571 571 u8 sink_lanes = drm_dp_max_lane_count(dp->dpcd); 572 572 573 - if (!port || !dp->link.rate || !dp->link.num_lanes) 573 + if (!port || !dp->max_rate || !dp->max_lanes) 574 574 return false; 575 575 576 576 if (cdn_dp_dpcd_read(dp, DP_LANE0_1_STATUS, link_status, ··· 952 952 953 953 /* Enabled and connected with a sink, re-train if requested */ 954 954 } else if (!cdn_dp_check_link_status(dp)) { 955 - unsigned int rate = dp->link.rate; 956 - unsigned int lanes = dp->link.num_lanes; 955 + unsigned int rate = dp->max_rate; 956 + unsigned int lanes = dp->max_lanes; 957 957 struct drm_display_mode *mode = &dp->mode; 958 958 959 959 DRM_DEV_INFO(dp->dev, "Connected with sink. Re-train link\n"); ··· 966 966 967 967 /* If training result is changed, update the video config */ 968 968 if (mode->clock && 969 - (rate != dp->link.rate || lanes != dp->link.num_lanes)) { 969 + (rate != dp->max_rate || lanes != dp->max_lanes)) { 970 970 ret = cdn_dp_config_video(dp); 971 971 if (ret) { 972 972 dp->connected = false;
+2 -1
drivers/gpu/drm/rockchip/cdn-dp-core.h
··· 92 92 struct reset_control *core_rst; 93 93 struct audio_info audio_info; 94 94 struct video_info video_info; 95 - struct drm_dp_link link; 96 95 struct cdn_dp_port *port[MAX_PHY]; 97 96 u8 ports; 97 + u8 max_lanes; 98 + u8 max_rate; 98 99 u8 lanes; 99 100 int active_port; 100 101
+9 -10
drivers/gpu/drm/rockchip/cdn-dp-reg.c
··· 535 535 if (ret) 536 536 goto err_get_training_status; 537 537 538 - dp->link.rate = drm_dp_bw_code_to_link_rate(status[0]); 539 - dp->link.num_lanes = status[1]; 538 + dp->max_rate = drm_dp_bw_code_to_link_rate(status[0]); 539 + dp->max_lanes = status[1]; 540 540 541 541 err_get_training_status: 542 542 if (ret) ··· 560 560 return ret; 561 561 } 562 562 563 - DRM_DEV_DEBUG_KMS(dp->dev, "rate:0x%x, lanes:%d\n", dp->link.rate, 564 - dp->link.num_lanes); 563 + DRM_DEV_DEBUG_KMS(dp->dev, "rate:0x%x, lanes:%d\n", dp->max_rate, 564 + dp->max_lanes); 565 565 return ret; 566 566 } 567 567 ··· 639 639 bit_per_pix = (video->color_fmt == YCBCR_4_2_2) ? 640 640 (video->color_depth * 2) : (video->color_depth * 3); 641 641 642 - link_rate = dp->link.rate / 1000; 642 + link_rate = dp->max_rate / 1000; 643 643 644 644 ret = cdn_dp_reg_write(dp, BND_HSYNC2VSYNC, VIF_BYPASS_INTERLACE); 645 645 if (ret) ··· 659 659 do { 660 660 tu_size_reg += 2; 661 661 symbol = tu_size_reg * mode->clock * bit_per_pix; 662 - do_div(symbol, dp->link.num_lanes * link_rate * 8); 662 + do_div(symbol, dp->max_lanes * link_rate * 8); 663 663 rem = do_div(symbol, 1000); 664 664 if (tu_size_reg > 64) { 665 665 ret = -EINVAL; 666 666 DRM_DEV_ERROR(dp->dev, 667 667 "tu error, clk:%d, lanes:%d, rate:%d\n", 668 - mode->clock, dp->link.num_lanes, 669 - link_rate); 668 + mode->clock, dp->max_lanes, link_rate); 670 669 goto err_config_video; 671 670 } 672 671 } while ((symbol <= 1) || (tu_size_reg - symbol < 4) || ··· 679 680 680 681 /* set the FIFO Buffer size */ 681 682 val = div_u64(mode->clock * (symbol + 1), 1000) + link_rate; 682 - val /= (dp->link.num_lanes * link_rate); 683 + val /= (dp->max_lanes * link_rate); 683 684 val = div_u64(8 * (symbol + 1), bit_per_pix) - val; 684 685 val += 2; 685 686 ret = cdn_dp_reg_write(dp, DP_VC_TABLE(15), val); ··· 832 833 u32 val; 833 834 834 835 if (audio->channels == 2) { 835 - if (dp->link.num_lanes == 1) 836 + if (dp->max_lanes == 1) 836 837 sub_pckt_num = 2; 837 838 else 838 839 sub_pckt_num = 4;
+2
drivers/gpu/drm/rockchip/dw_hdmi-rockchip.c
··· 450 450 .phy_ops = &rk3328_hdmi_phy_ops, 451 451 .phy_name = "inno_dw_hdmi_phy2", 452 452 .phy_force_vendor = true, 453 + .use_drm_infoframe = true, 453 454 }; 454 455 455 456 static struct rockchip_hdmi_chip_data rk3399_chip_data = { ··· 465 464 .cur_ctr = rockchip_cur_ctr, 466 465 .phy_config = rockchip_phy_config, 467 466 .phy_data = &rk3399_chip_data, 467 + .use_drm_infoframe = true, 468 468 }; 469 469 470 470 static const struct of_device_id dw_hdmi_rockchip_dt_ids[] = {
+1 -7
drivers/gpu/drm/rockchip/rk3066_hdmi.c
··· 743 743 struct platform_device *pdev = to_platform_device(dev); 744 744 struct drm_device *drm = data; 745 745 struct rk3066_hdmi *hdmi; 746 - struct resource *iores; 747 746 int irq; 748 747 int ret; 749 748 ··· 752 753 753 754 hdmi->dev = dev; 754 755 hdmi->drm_dev = drm; 755 - 756 - iores = platform_get_resource(pdev, IORESOURCE_MEM, 0); 757 - if (!iores) 758 - return -ENXIO; 759 - 760 - hdmi->regs = devm_ioremap_resource(dev, iores); 756 + hdmi->regs = devm_platform_ioremap_resource(pdev, 0); 761 757 if (IS_ERR(hdmi->regs)) 762 758 return PTR_ERR(hdmi->regs); 763 759
+1 -1
drivers/gpu/drm/rockchip/rockchip_drm_gem.c
··· 294 294 kfree(rk_obj); 295 295 } 296 296 297 - struct rockchip_gem_object * 297 + static struct rockchip_gem_object * 298 298 rockchip_gem_alloc_object(struct drm_device *drm, unsigned int size) 299 299 { 300 300 struct rockchip_gem_object *rk_obj;
+161 -8
drivers/gpu/drm/rockchip/rockchip_drm_vop.c
··· 139 139 140 140 uint32_t *regsbak; 141 141 void __iomem *regs; 142 + void __iomem *lut_regs; 142 143 143 144 /* physical map length of vop register */ 144 145 uint32_t len; ··· 1041 1040 struct drm_display_mode *adjusted_mode) 1042 1041 { 1043 1042 struct vop *vop = to_vop(crtc); 1043 + unsigned long rate; 1044 1044 1045 - adjusted_mode->clock = 1046 - DIV_ROUND_UP(clk_round_rate(vop->dclk, 1047 - adjusted_mode->clock * 1000), 1000); 1045 + /* 1046 + * Clock craziness. 1047 + * 1048 + * Key points: 1049 + * 1050 + * - DRM works in in kHz. 1051 + * - Clock framework works in Hz. 1052 + * - Rockchip's clock driver picks the clock rate that is the 1053 + * same _OR LOWER_ than the one requested. 1054 + * 1055 + * Action plan: 1056 + * 1057 + * 1. When DRM gives us a mode, we should add 999 Hz to it. That way 1058 + * if the clock we need is 60000001 Hz (~60 MHz) and DRM tells us to 1059 + * make 60000 kHz then the clock framework will actually give us 1060 + * the right clock. 1061 + * 1062 + * NOTE: if the PLL (maybe through a divider) could actually make 1063 + * a clock rate 999 Hz higher instead of the one we want then this 1064 + * could be a problem. Unfortunately there's not much we can do 1065 + * since it's baked into DRM to use kHz. It shouldn't matter in 1066 + * practice since Rockchip PLLs are controlled by tables and 1067 + * even if there is a divider in the middle I wouldn't expect PLL 1068 + * rates in the table that are just a few kHz different. 1069 + * 1070 + * 2. Get the clock framework to round the rate for us to tell us 1071 + * what it will actually make. 1072 + * 1073 + * 3. Store the rounded up rate so that we don't need to worry about 1074 + * this in the actual clk_set_rate(). 1075 + */ 1076 + rate = clk_round_rate(vop->dclk, adjusted_mode->clock * 1000 + 999); 1077 + adjusted_mode->clock = DIV_ROUND_UP(rate, 1000); 1048 1078 1049 1079 return true; 1080 + } 1081 + 1082 + static bool vop_dsp_lut_is_enabled(struct vop *vop) 1083 + { 1084 + return vop_read_reg(vop, 0, &vop->data->common->dsp_lut_en); 1085 + } 1086 + 1087 + static void vop_crtc_write_gamma_lut(struct vop *vop, struct drm_crtc *crtc) 1088 + { 1089 + struct drm_color_lut *lut = crtc->state->gamma_lut->data; 1090 + unsigned int i; 1091 + 1092 + for (i = 0; i < crtc->gamma_size; i++) { 1093 + u32 word; 1094 + 1095 + word = (drm_color_lut_extract(lut[i].red, 10) << 20) | 1096 + (drm_color_lut_extract(lut[i].green, 10) << 10) | 1097 + drm_color_lut_extract(lut[i].blue, 10); 1098 + writel(word, vop->lut_regs + i * 4); 1099 + } 1100 + } 1101 + 1102 + static void vop_crtc_gamma_set(struct vop *vop, struct drm_crtc *crtc, 1103 + struct drm_crtc_state *old_state) 1104 + { 1105 + struct drm_crtc_state *state = crtc->state; 1106 + unsigned int idle; 1107 + int ret; 1108 + 1109 + if (!vop->lut_regs) 1110 + return; 1111 + /* 1112 + * To disable gamma (gamma_lut is null) or to write 1113 + * an update to the LUT, clear dsp_lut_en. 1114 + */ 1115 + spin_lock(&vop->reg_lock); 1116 + VOP_REG_SET(vop, common, dsp_lut_en, 0); 1117 + vop_cfg_done(vop); 1118 + spin_unlock(&vop->reg_lock); 1119 + 1120 + /* 1121 + * In order to write the LUT to the internal memory, 1122 + * we need to first make sure the dsp_lut_en bit is cleared. 
1123 + */ 1124 + ret = readx_poll_timeout(vop_dsp_lut_is_enabled, vop, 1125 + idle, !idle, 5, 30 * 1000); 1126 + if (ret) { 1127 + DRM_DEV_ERROR(vop->dev, "display LUT RAM enable timeout!\n"); 1128 + return; 1129 + } 1130 + 1131 + if (!state->gamma_lut) 1132 + return; 1133 + 1134 + spin_lock(&vop->reg_lock); 1135 + vop_crtc_write_gamma_lut(vop, crtc); 1136 + VOP_REG_SET(vop, common, dsp_lut_en, 1); 1137 + vop_cfg_done(vop); 1138 + spin_unlock(&vop->reg_lock); 1139 + } 1140 + 1141 + static void vop_crtc_atomic_begin(struct drm_crtc *crtc, 1142 + struct drm_crtc_state *old_crtc_state) 1143 + { 1144 + struct vop *vop = to_vop(crtc); 1145 + 1146 + /* 1147 + * Only update GAMMA if the 'active' flag is not changed, 1148 + * otherwise it's updated by .atomic_enable. 1149 + */ 1150 + if (crtc->state->color_mgmt_changed && 1151 + !crtc->state->active_changed) 1152 + vop_crtc_gamma_set(vop, crtc, old_crtc_state); 1050 1153 } 1051 1154 1052 1155 static void vop_crtc_atomic_enable(struct drm_crtc *crtc, ··· 1180 1075 return; 1181 1076 } 1182 1077 1078 + /* 1079 + * If we have a GAMMA LUT in the state, then let's make sure 1080 + * it's updated. We might be coming out of suspend, 1081 + * which means the LUT internal memory needs to be re-written. 1082 + */ 1083 + if (crtc->state->gamma_lut) 1084 + vop_crtc_gamma_set(vop, crtc, old_state); 1085 + 1183 1086 mutex_lock(&vop->vop_lock); 1184 1087 1185 1088 WARN_ON(vop->event); ··· 1198 1085 DRM_DEV_ERROR(vop->dev, "Failed to enable vop (%d)\n", ret); 1199 1086 return; 1200 1087 } 1201 - 1202 - pin_pol = BIT(DCLK_INVERT); 1203 - pin_pol |= (adjusted_mode->flags & DRM_MODE_FLAG_PHSYNC) ? 1088 + pin_pol = (adjusted_mode->flags & DRM_MODE_FLAG_PHSYNC) ? 1204 1089 BIT(HSYNC_POSITIVE) : 0; 1205 1090 pin_pol |= (adjusted_mode->flags & DRM_MODE_FLAG_PVSYNC) ? 
1206 1091 BIT(VSYNC_POSITIVE) : 0; ··· 1207 1096 1208 1097 switch (s->output_type) { 1209 1098 case DRM_MODE_CONNECTOR_LVDS: 1210 - VOP_REG_SET(vop, output, rgb_en, 1); 1099 + VOP_REG_SET(vop, output, rgb_dclk_pol, 1); 1211 1100 VOP_REG_SET(vop, output, rgb_pin_pol, pin_pol); 1101 + VOP_REG_SET(vop, output, rgb_en, 1); 1212 1102 break; 1213 1103 case DRM_MODE_CONNECTOR_eDP: 1104 + VOP_REG_SET(vop, output, edp_dclk_pol, 1); 1214 1105 VOP_REG_SET(vop, output, edp_pin_pol, pin_pol); 1215 1106 VOP_REG_SET(vop, output, edp_en, 1); 1216 1107 break; 1217 1108 case DRM_MODE_CONNECTOR_HDMIA: 1109 + VOP_REG_SET(vop, output, hdmi_dclk_pol, 1); 1218 1110 VOP_REG_SET(vop, output, hdmi_pin_pol, pin_pol); 1219 1111 VOP_REG_SET(vop, output, hdmi_en, 1); 1220 1112 break; 1221 1113 case DRM_MODE_CONNECTOR_DSI: 1114 + VOP_REG_SET(vop, output, mipi_dclk_pol, 1); 1222 1115 VOP_REG_SET(vop, output, mipi_pin_pol, pin_pol); 1223 1116 VOP_REG_SET(vop, output, mipi_en, 1); 1224 1117 VOP_REG_SET(vop, output, mipi_dual_channel_en, 1225 1118 !!(s->output_flags & ROCKCHIP_OUTPUT_DSI_DUAL)); 1226 1119 break; 1227 1120 case DRM_MODE_CONNECTOR_DisplayPort: 1228 - pin_pol &= ~BIT(DCLK_INVERT); 1121 + VOP_REG_SET(vop, output, dp_dclk_pol, 0); 1229 1122 VOP_REG_SET(vop, output, dp_pin_pol, pin_pol); 1230 1123 VOP_REG_SET(vop, output, dp_en, 1); 1231 1124 break; ··· 1306 1191 synchronize_irq(vop->irq); 1307 1192 } 1308 1193 1194 + static int vop_crtc_atomic_check(struct drm_crtc *crtc, 1195 + struct drm_crtc_state *crtc_state) 1196 + { 1197 + struct vop *vop = to_vop(crtc); 1198 + 1199 + if (vop->lut_regs && crtc_state->color_mgmt_changed && 1200 + crtc_state->gamma_lut) { 1201 + unsigned int len; 1202 + 1203 + len = drm_color_lut_size(crtc_state->gamma_lut); 1204 + if (len != crtc->gamma_size) { 1205 + DRM_DEBUG_KMS("Invalid LUT size; got %d, expected %d\n", 1206 + len, crtc->gamma_size); 1207 + return -EINVAL; 1208 + } 1209 + } 1210 + 1211 + return 0; 1212 + } 1213 + 1309 1214 static void vop_crtc_atomic_flush(struct drm_crtc *crtc, 1310 1215 struct drm_crtc_state *old_crtc_state) 1311 1216 { ··· 1378 1243 1379 1244 static const struct drm_crtc_helper_funcs vop_crtc_helper_funcs = { 1380 1245 .mode_fixup = vop_crtc_mode_fixup, 1246 + .atomic_check = vop_crtc_atomic_check, 1247 + .atomic_begin = vop_crtc_atomic_begin, 1381 1248 .atomic_flush = vop_crtc_atomic_flush, 1382 1249 .atomic_enable = vop_crtc_atomic_enable, 1383 1250 .atomic_disable = vop_crtc_atomic_disable, ··· 1498 1361 .disable_vblank = vop_crtc_disable_vblank, 1499 1362 .set_crc_source = vop_crtc_set_crc_source, 1500 1363 .verify_crc_source = vop_crtc_verify_crc_source, 1364 + .gamma_set = drm_atomic_helper_legacy_gamma_set, 1501 1365 }; 1502 1366 1503 1367 static void vop_fb_unref_worker(struct drm_flip_work *work, void *val) ··· 1656 1518 goto err_cleanup_planes; 1657 1519 1658 1520 drm_crtc_helper_add(crtc, &vop_crtc_helper_funcs); 1521 + if (vop->lut_regs) { 1522 + drm_mode_crtc_set_gamma_size(crtc, vop_data->lut_size); 1523 + drm_crtc_enable_color_mgmt(crtc, 0, false, vop_data->lut_size); 1524 + } 1659 1525 1660 1526 /* 1661 1527 * Create drm_planes for overlay windows with possible_crtcs restricted ··· 1963 1821 vop->regs = devm_ioremap_resource(dev, res); 1964 1822 if (IS_ERR(vop->regs)) 1965 1823 return PTR_ERR(vop->regs); 1824 + 1825 + res = platform_get_resource(pdev, IORESOURCE_MEM, 1); 1826 + if (res) { 1827 + if (!vop_data->lut_size) { 1828 + DRM_DEV_ERROR(dev, "no gamma LUT size defined\n"); 1829 + return -EINVAL; 1830 + } 1831 + vop->lut_regs = 
devm_ioremap_resource(dev, res); 1832 + if (IS_ERR(vop->lut_regs)) 1833 + return PTR_ERR(vop->lut_regs); 1834 + } 1966 1835 1967 1836 vop->regsbak = devm_kzalloc(dev, vop->len, GFP_KERNEL); 1968 1837 if (!vop->regsbak)
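To make the "clock craziness" comment above concrete, assume the PLL can generate 60,000,001 Hz and DRM asks for a 60000 kHz mode: the driver requests clk_round_rate(dclk, 60000 * 1000 + 999) = clk_round_rate(dclk, 60000999), the Rockchip clock driver rounds down to 60,000,001 Hz, and DIV_ROUND_UP(60000001, 1000) stores 60001 kHz in the adjusted mode, so the later clk_set_rate(dclk, 60001 * 1000) rounds back down to the same 60,000,001 Hz. As a tiny sketch with those hypothetical rates:

    #include <linux/clk.h>
    #include <linux/kernel.h>
    #include <drm/drm_modes.h>

    /* DRM asked for 60000 kHz; assume the PLL can only make 60000001 Hz. */
    static int example_pick_dclk(struct clk *dclk,
                                 struct drm_display_mode *adjusted_mode)
    {
            long rate = clk_round_rate(dclk,
                            adjusted_mode->clock * 1000 + 999); /* -> 60000001 */

            adjusted_mode->clock = DIV_ROUND_UP(rate, 1000);    /* -> 60001 kHz */
            return clk_set_rate(dclk, adjusted_mode->clock * 1000);
                                              /* 60001000 rounds down to 60000001 */
    }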
+8 -2
drivers/gpu/drm/rockchip/rockchip_drm_vop.h
··· 46 46 struct vop_output { 47 47 struct vop_reg pin_pol; 48 48 struct vop_reg dp_pin_pol; 49 + struct vop_reg dp_dclk_pol; 49 50 struct vop_reg edp_pin_pol; 51 + struct vop_reg edp_dclk_pol; 50 52 struct vop_reg hdmi_pin_pol; 53 + struct vop_reg hdmi_dclk_pol; 51 54 struct vop_reg mipi_pin_pol; 55 + struct vop_reg mipi_dclk_pol; 52 56 struct vop_reg rgb_pin_pol; 57 + struct vop_reg rgb_dclk_pol; 53 58 struct vop_reg dp_en; 54 59 struct vop_reg edp_en; 55 60 struct vop_reg hdmi_en; ··· 72 67 struct vop_reg dither_down_mode; 73 68 struct vop_reg dither_down_en; 74 69 struct vop_reg dither_up; 70 + struct vop_reg dsp_lut_en; 75 71 struct vop_reg gate_en; 76 72 struct vop_reg mmu_en; 77 73 struct vop_reg out_mode; ··· 176 170 const struct vop_win_yuv2yuv_data *win_yuv2yuv; 177 171 const struct vop_win_data *win; 178 172 unsigned int win_size; 173 + unsigned int lut_size; 179 174 180 175 #define VOP_FEATURE_OUTPUT_RGB10 BIT(0) 181 176 #define VOP_FEATURE_INTERNAL_RGB BIT(1) ··· 301 294 enum vop_pol { 302 295 HSYNC_POSITIVE = 0, 303 296 VSYNC_POSITIVE = 1, 304 - DEN_NEGATIVE = 2, 305 - DCLK_INVERT = 3 297 + DEN_NEGATIVE = 2 306 298 }; 307 299 308 300 #define FRAC_16_16(mult, div) (((mult) << 16) / (div))
+33 -15
drivers/gpu/drm/rockchip/rockchip_vop_reg.c
··· 16 16 17 17 #include "rockchip_drm_vop.h" 18 18 #include "rockchip_vop_reg.h" 19 + #include "rockchip_drm_drv.h" 19 20 20 21 #define _VOP_REG(off, _mask, _shift, _write_mask, _relaxed) \ 21 22 { \ ··· 215 214 }; 216 215 217 216 static const struct vop_output px30_output = { 218 - .rgb_pin_pol = VOP_REG(PX30_DSP_CTRL0, 0xf, 1), 219 - .mipi_pin_pol = VOP_REG(PX30_DSP_CTRL0, 0xf, 25), 217 + .rgb_dclk_pol = VOP_REG(PX30_DSP_CTRL0, 0x1, 1), 218 + .rgb_pin_pol = VOP_REG(PX30_DSP_CTRL0, 0x7, 2), 220 219 .rgb_en = VOP_REG(PX30_DSP_CTRL0, 0x1, 0), 220 + .mipi_dclk_pol = VOP_REG(PX30_DSP_CTRL0, 0x1, 25), 221 + .mipi_pin_pol = VOP_REG(PX30_DSP_CTRL0, 0x7, 26), 221 222 .mipi_en = VOP_REG(PX30_DSP_CTRL0, 0x1, 24), 222 223 }; 223 224 ··· 601 598 .dither_down_en = VOP_REG(RK3288_DSP_CTRL1, 0x1, 2), 602 599 .pre_dither_down = VOP_REG(RK3288_DSP_CTRL1, 0x1, 1), 603 600 .dither_up = VOP_REG(RK3288_DSP_CTRL1, 0x1, 6), 601 + .dsp_lut_en = VOP_REG(RK3288_DSP_CTRL1, 0x1, 0), 604 602 .data_blank = VOP_REG(RK3288_DSP_CTRL0, 0x1, 19), 605 603 .dsp_blank = VOP_REG(RK3288_DSP_CTRL0, 0x3, 18), 606 604 .out_mode = VOP_REG(RK3288_DSP_CTRL0, 0xf, 0), ··· 650 646 .output = &rk3288_output, 651 647 .win = rk3288_vop_win_data, 652 648 .win_size = ARRAY_SIZE(rk3288_vop_win_data), 649 + .lut_size = 1024, 653 650 }; 654 651 655 652 static const int rk3368_vop_intrs[] = { ··· 722 717 }; 723 718 724 719 static const struct vop_output rk3368_output = { 725 - .rgb_pin_pol = VOP_REG(RK3368_DSP_CTRL1, 0xf, 16), 726 - .hdmi_pin_pol = VOP_REG(RK3368_DSP_CTRL1, 0xf, 20), 727 - .edp_pin_pol = VOP_REG(RK3368_DSP_CTRL1, 0xf, 24), 728 - .mipi_pin_pol = VOP_REG(RK3368_DSP_CTRL1, 0xf, 28), 720 + .rgb_dclk_pol = VOP_REG(RK3368_DSP_CTRL1, 0x1, 19), 721 + .hdmi_dclk_pol = VOP_REG(RK3368_DSP_CTRL1, 0x1, 23), 722 + .edp_dclk_pol = VOP_REG(RK3368_DSP_CTRL1, 0x1, 27), 723 + .mipi_dclk_pol = VOP_REG(RK3368_DSP_CTRL1, 0x1, 31), 724 + .rgb_pin_pol = VOP_REG(RK3368_DSP_CTRL1, 0x7, 16), 725 + .hdmi_pin_pol = VOP_REG(RK3368_DSP_CTRL1, 0x7, 20), 726 + .edp_pin_pol = VOP_REG(RK3368_DSP_CTRL1, 0x7, 24), 727 + .mipi_pin_pol = VOP_REG(RK3368_DSP_CTRL1, 0x7, 28), 729 728 .rgb_en = VOP_REG(RK3288_SYS_CTRL, 0x1, 12), 730 729 .hdmi_en = VOP_REG(RK3288_SYS_CTRL, 0x1, 13), 731 730 .edp_en = VOP_REG(RK3288_SYS_CTRL, 0x1, 14), ··· 773 764 }; 774 765 775 766 static const struct vop_output rk3399_output = { 776 - .dp_pin_pol = VOP_REG(RK3399_DSP_CTRL1, 0xf, 16), 777 - .rgb_pin_pol = VOP_REG(RK3368_DSP_CTRL1, 0xf, 16), 778 - .hdmi_pin_pol = VOP_REG(RK3368_DSP_CTRL1, 0xf, 20), 779 - .edp_pin_pol = VOP_REG(RK3368_DSP_CTRL1, 0xf, 24), 780 - .mipi_pin_pol = VOP_REG(RK3368_DSP_CTRL1, 0xf, 28), 767 + .dp_dclk_pol = VOP_REG(RK3399_DSP_CTRL1, 0x1, 19), 768 + .rgb_dclk_pol = VOP_REG(RK3368_DSP_CTRL1, 0x1, 19), 769 + .hdmi_dclk_pol = VOP_REG(RK3368_DSP_CTRL1, 0x1, 23), 770 + .edp_dclk_pol = VOP_REG(RK3368_DSP_CTRL1, 0x1, 27), 771 + .mipi_dclk_pol = VOP_REG(RK3368_DSP_CTRL1, 0x1, 31), 772 + .dp_pin_pol = VOP_REG(RK3399_DSP_CTRL1, 0x7, 16), 773 + .rgb_pin_pol = VOP_REG(RK3368_DSP_CTRL1, 0x7, 16), 774 + .hdmi_pin_pol = VOP_REG(RK3368_DSP_CTRL1, 0x7, 20), 775 + .edp_pin_pol = VOP_REG(RK3368_DSP_CTRL1, 0x7, 24), 776 + .mipi_pin_pol = VOP_REG(RK3368_DSP_CTRL1, 0x7, 28), 781 777 .dp_en = VOP_REG(RK3399_SYS_CTRL, 0x1, 11), 782 778 .rgb_en = VOP_REG(RK3288_SYS_CTRL, 0x1, 12), 783 779 .hdmi_en = VOP_REG(RK3288_SYS_CTRL, 0x1, 13), ··· 886 872 }; 887 873 888 874 static const struct vop_output rk3328_output = { 875 + .rgb_dclk_pol = VOP_REG(RK3328_DSP_CTRL1, 0x1, 19), 876 + .hdmi_dclk_pol 
= VOP_REG(RK3328_DSP_CTRL1, 0x1, 23), 877 + .edp_dclk_pol = VOP_REG(RK3328_DSP_CTRL1, 0x1, 27), 878 + .mipi_dclk_pol = VOP_REG(RK3328_DSP_CTRL1, 0x1, 31), 889 879 .rgb_en = VOP_REG(RK3328_SYS_CTRL, 0x1, 12), 890 880 .hdmi_en = VOP_REG(RK3328_SYS_CTRL, 0x1, 13), 891 881 .edp_en = VOP_REG(RK3328_SYS_CTRL, 0x1, 14), 892 882 .mipi_en = VOP_REG(RK3328_SYS_CTRL, 0x1, 15), 893 - .rgb_pin_pol = VOP_REG(RK3328_DSP_CTRL1, 0xf, 16), 894 - .hdmi_pin_pol = VOP_REG(RK3328_DSP_CTRL1, 0xf, 20), 895 - .edp_pin_pol = VOP_REG(RK3328_DSP_CTRL1, 0xf, 24), 896 - .mipi_pin_pol = VOP_REG(RK3328_DSP_CTRL1, 0xf, 28), 883 + .rgb_pin_pol = VOP_REG(RK3328_DSP_CTRL1, 0x7, 16), 884 + .hdmi_pin_pol = VOP_REG(RK3328_DSP_CTRL1, 0x7, 20), 885 + .edp_pin_pol = VOP_REG(RK3328_DSP_CTRL1, 0x7, 24), 886 + .mipi_pin_pol = VOP_REG(RK3328_DSP_CTRL1, 0x7, 28), 897 887 }; 898 888 899 889 static const struct vop_misc rk3328_misc = {
+2 -2
drivers/gpu/drm/scheduler/sched_fence.c
··· 128 128 dma_fence_put(&fence->scheduled); 129 129 } 130 130 131 - const struct dma_fence_ops drm_sched_fence_ops_scheduled = { 131 + static const struct dma_fence_ops drm_sched_fence_ops_scheduled = { 132 132 .get_driver_name = drm_sched_fence_get_driver_name, 133 133 .get_timeline_name = drm_sched_fence_get_timeline_name, 134 134 .release = drm_sched_fence_release_scheduled, 135 135 }; 136 136 137 - const struct dma_fence_ops drm_sched_fence_ops_finished = { 137 + static const struct dma_fence_ops drm_sched_fence_ops_finished = { 138 138 .get_driver_name = drm_sched_fence_get_driver_name, 139 139 .get_timeline_name = drm_sched_fence_get_timeline_name, 140 140 .release = drm_sched_fence_release_finished,
+2
drivers/gpu/drm/sun4i/sun8i_dw_hdmi.c
··· 226 226 sun8i_hdmi_phy_init(hdmi->phy); 227 227 228 228 plat_data->mode_valid = hdmi->quirks->mode_valid; 229 + plat_data->use_drm_infoframe = hdmi->quirks->use_drm_infoframe; 229 230 sun8i_hdmi_phy_set_ops(hdmi->phy, plat_data); 230 231 231 232 platform_set_drvdata(pdev, hdmi); ··· 301 300 302 301 static const struct sun8i_dw_hdmi_quirks sun50i_h6_quirks = { 303 302 .mode_valid = sun8i_dw_hdmi_mode_valid_h6, 303 + .use_drm_infoframe = true, 304 304 }; 305 305 306 306 static const struct of_device_id sun8i_dw_hdmi_dt_ids[] = {
+1
drivers/gpu/drm/sun4i/sun8i_dw_hdmi.h
··· 179 179 enum drm_mode_status (*mode_valid)(struct drm_connector *connector, 180 180 const struct drm_display_mode *mode); 181 181 unsigned int set_rate : 1; 182 + unsigned int use_drm_infoframe : 1; 182 183 }; 183 184 184 185 struct sun8i_dw_hdmi {
+1
drivers/gpu/drm/tegra/Makefile
··· 5 5 drm.o \ 6 6 gem.o \ 7 7 fb.o \ 8 + dp.o \ 8 9 hub.o \ 9 10 plane.o \ 10 11 dc.o \
+133
drivers/gpu/drm/tegra/dp.c
··· 1 + // SPDX-License-Identifier: MIT 2 + /* 3 + * Copyright (C) 2013-2019 NVIDIA Corporation 4 + * Copyright (C) 2015 Rob Clark 5 + */ 6 + 7 + #include <drm/drm_dp_helper.h> 8 + 9 + #include "dp.h" 10 + 11 + /** 12 + * drm_dp_link_probe() - probe a DisplayPort link for capabilities 13 + * @aux: DisplayPort AUX channel 14 + * @link: pointer to structure in which to return link capabilities 15 + * 16 + * The structure filled in by this function can usually be passed directly 17 + * into drm_dp_link_power_up() and drm_dp_link_configure() to power up and 18 + * configure the link based on the link's capabilities. 19 + * 20 + * Returns 0 on success or a negative error code on failure. 21 + */ 22 + int drm_dp_link_probe(struct drm_dp_aux *aux, struct drm_dp_link *link) 23 + { 24 + u8 values[3]; 25 + int err; 26 + 27 + memset(link, 0, sizeof(*link)); 28 + 29 + err = drm_dp_dpcd_read(aux, DP_DPCD_REV, values, sizeof(values)); 30 + if (err < 0) 31 + return err; 32 + 33 + link->revision = values[0]; 34 + link->rate = drm_dp_bw_code_to_link_rate(values[1]); 35 + link->num_lanes = values[2] & DP_MAX_LANE_COUNT_MASK; 36 + 37 + if (values[2] & DP_ENHANCED_FRAME_CAP) 38 + link->capabilities |= DP_LINK_CAP_ENHANCED_FRAMING; 39 + 40 + return 0; 41 + } 42 + 43 + /** 44 + * drm_dp_link_power_up() - power up a DisplayPort link 45 + * @aux: DisplayPort AUX channel 46 + * @link: pointer to a structure containing the link configuration 47 + * 48 + * Returns 0 on success or a negative error code on failure. 49 + */ 50 + int drm_dp_link_power_up(struct drm_dp_aux *aux, struct drm_dp_link *link) 51 + { 52 + u8 value; 53 + int err; 54 + 55 + /* DP_SET_POWER register is only available on DPCD v1.1 and later */ 56 + if (link->revision < 0x11) 57 + return 0; 58 + 59 + err = drm_dp_dpcd_readb(aux, DP_SET_POWER, &value); 60 + if (err < 0) 61 + return err; 62 + 63 + value &= ~DP_SET_POWER_MASK; 64 + value |= DP_SET_POWER_D0; 65 + 66 + err = drm_dp_dpcd_writeb(aux, DP_SET_POWER, value); 67 + if (err < 0) 68 + return err; 69 + 70 + /* 71 + * According to the DP 1.1 specification, a "Sink Device must exit the 72 + * power saving state within 1 ms" (Section 2.5.3.1, Table 5-52, "Sink 73 + * Control Field" (register 0x600). 74 + */ 75 + usleep_range(1000, 2000); 76 + 77 + return 0; 78 + } 79 + 80 + /** 81 + * drm_dp_link_power_down() - power down a DisplayPort link 82 + * @aux: DisplayPort AUX channel 83 + * @link: pointer to a structure containing the link configuration 84 + * 85 + * Returns 0 on success or a negative error code on failure. 86 + */ 87 + int drm_dp_link_power_down(struct drm_dp_aux *aux, struct drm_dp_link *link) 88 + { 89 + u8 value; 90 + int err; 91 + 92 + /* DP_SET_POWER register is only available on DPCD v1.1 and later */ 93 + if (link->revision < 0x11) 94 + return 0; 95 + 96 + err = drm_dp_dpcd_readb(aux, DP_SET_POWER, &value); 97 + if (err < 0) 98 + return err; 99 + 100 + value &= ~DP_SET_POWER_MASK; 101 + value |= DP_SET_POWER_D3; 102 + 103 + err = drm_dp_dpcd_writeb(aux, DP_SET_POWER, value); 104 + if (err < 0) 105 + return err; 106 + 107 + return 0; 108 + } 109 + 110 + /** 111 + * drm_dp_link_configure() - configure a DisplayPort link 112 + * @aux: DisplayPort AUX channel 113 + * @link: pointer to a structure containing the link configuration 114 + * 115 + * Returns 0 on success or a negative error code on failure. 
116 + */ 117 + int drm_dp_link_configure(struct drm_dp_aux *aux, struct drm_dp_link *link) 118 + { 119 + u8 values[2]; 120 + int err; 121 + 122 + values[0] = drm_dp_link_rate_to_bw_code(link->rate); 123 + values[1] = link->num_lanes; 124 + 125 + if (link->capabilities & DP_LINK_CAP_ENHANCED_FRAMING) 126 + values[1] |= DP_LANE_COUNT_ENHANCED_FRAME_EN; 127 + 128 + err = drm_dp_dpcd_write(aux, DP_LINK_BW_SET, values, sizeof(values)); 129 + if (err < 0) 130 + return err; 131 + 132 + return 0; 133 + }
+26
drivers/gpu/drm/tegra/dp.h
··· 1 + /* SPDX-License-Identifier: MIT */ 2 + /* 3 + * Copyright (C) 2013-2019 NVIDIA Corporation. 4 + * Copyright (C) 2015 Rob Clark 5 + */ 6 + 7 + #ifndef DRM_TEGRA_DP_H 8 + #define DRM_TEGRA_DP_H 1 9 + 10 + struct drm_dp_aux; 11 + 12 + #define DP_LINK_CAP_ENHANCED_FRAMING (1 << 0) 13 + 14 + struct drm_dp_link { 15 + unsigned char revision; 16 + unsigned int rate; 17 + unsigned int num_lanes; 18 + unsigned long capabilities; 19 + }; 20 + 21 + int drm_dp_link_probe(struct drm_dp_aux *aux, struct drm_dp_link *link); 22 + int drm_dp_link_power_up(struct drm_dp_aux *aux, struct drm_dp_link *link); 23 + int drm_dp_link_power_down(struct drm_dp_aux *aux, struct drm_dp_link *link); 24 + int drm_dp_link_configure(struct drm_dp_aux *aux, struct drm_dp_link *link); 25 + 26 + #endif
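With dp.c and dp.h added above, the drm_dp_link helpers being retired from the core live on inside the Tegra driver only. Their intended use, per the kernel-doc in dp.c, is the probe / power-up / configure sequence sketched below (illustrative only, assuming a valid AUX channel):

    #include <drm/drm_dp_helper.h>
    #include "dp.h"

    static int example_bring_up_link(struct drm_dp_aux *aux)
    {
            struct drm_dp_link link;
            int err;

            err = drm_dp_link_probe(aux, &link);      /* read rate/lanes/caps */
            if (err < 0)
                    return err;

            err = drm_dp_link_power_up(aux, &link);   /* D0, then >= 1 ms wait */
            if (err < 0)
                    return err;

            return drm_dp_link_configure(aux, &link); /* program BW and lane count */
    }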
+1
drivers/gpu/drm/tegra/dpaux.c
··· 22 22 #include <drm/drm_dp_helper.h> 23 23 #include <drm/drm_panel.h> 24 24 25 + #include "dp.h" 25 26 #include "dpaux.h" 26 27 #include "drm.h" 27 28 #include "trace.h"
+1
drivers/gpu/drm/tegra/sor.c
··· 25 25 #include <drm/drm_scdc_helper.h> 26 26 27 27 #include "dc.h" 28 + #include "dp.h" 28 29 #include "drm.h" 29 30 #include "hda.h" 30 31 #include "sor.h"
+1 -1
drivers/gpu/drm/tiny/gm12u320.c
··· 649 649 kfree(gm12u320); 650 650 } 651 651 652 - DEFINE_DRM_GEM_SHMEM_FOPS(gm12u320_fops); 652 + DEFINE_DRM_GEM_FOPS(gm12u320_fops); 653 653 654 654 static struct drm_driver gm12u320_drm_driver = { 655 655 .driver_features = DRIVER_MODESET | DRIVER_GEM | DRIVER_ATOMIC,
+26 -28
drivers/gpu/drm/ttm/ttm_bo_vm.c
··· 424 424 return bo; 425 425 } 426 426 427 + static void ttm_bo_mmap_vma_setup(struct ttm_buffer_object *bo, struct vm_area_struct *vma) 428 + { 429 + vma->vm_ops = &ttm_bo_vm_ops; 430 + 431 + /* 432 + * Note: We're transferring the bo reference to 433 + * vma->vm_private_data here. 434 + */ 435 + 436 + vma->vm_private_data = bo; 437 + 438 + /* 439 + * We'd like to use VM_PFNMAP on shared mappings, where 440 + * (vma->vm_flags & VM_SHARED) != 0, for performance reasons, 441 + * but for some reason VM_PFNMAP + x86 PAT + write-combine is very 442 + * bad for performance. Until that has been sorted out, use 443 + * VM_MIXEDMAP on all mappings. See freedesktop.org bug #75719 444 + */ 445 + vma->vm_flags |= VM_MIXEDMAP; 446 + vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP; 447 + } 448 + 427 449 int ttm_bo_mmap(struct file *filp, struct vm_area_struct *vma, 428 450 struct ttm_bo_device *bdev) 429 451 { ··· 469 447 if (unlikely(ret != 0)) 470 448 goto out_unref; 471 449 472 - vma->vm_ops = &ttm_bo_vm_ops; 473 - 474 - /* 475 - * Note: We're transferring the bo reference to 476 - * vma->vm_private_data here. 477 - */ 478 - 479 - vma->vm_private_data = bo; 480 - 481 - /* 482 - * We'd like to use VM_PFNMAP on shared mappings, where 483 - * (vma->vm_flags & VM_SHARED) != 0, for performance reasons, 484 - * but for some reason VM_PFNMAP + x86 PAT + write-combine is very 485 - * bad for performance. Until that has been sorted out, use 486 - * VM_MIXEDMAP on all mappings. See freedesktop.org bug #75719 487 - */ 488 - vma->vm_flags |= VM_MIXEDMAP; 489 - vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP; 450 + ttm_bo_mmap_vma_setup(bo, vma); 490 451 return 0; 491 452 out_unref: 492 453 ttm_bo_put(bo); ··· 477 472 } 478 473 EXPORT_SYMBOL(ttm_bo_mmap); 479 474 480 - int ttm_fbdev_mmap(struct vm_area_struct *vma, struct ttm_buffer_object *bo) 475 + int ttm_bo_mmap_obj(struct vm_area_struct *vma, struct ttm_buffer_object *bo) 481 476 { 482 - if (vma->vm_pgoff != 0) 483 - return -EACCES; 484 - 485 477 ttm_bo_get(bo); 486 - 487 - vma->vm_ops = &ttm_bo_vm_ops; 488 - vma->vm_private_data = bo; 489 - vma->vm_flags |= VM_MIXEDMAP; 490 - vma->vm_flags |= VM_IO | VM_DONTEXPAND; 478 + ttm_bo_mmap_vma_setup(bo, vma); 491 479 return 0; 492 480 } 493 - EXPORT_SYMBOL(ttm_fbdev_mmap); 481 + EXPORT_SYMBOL(ttm_bo_mmap_obj);
+1 -1
drivers/gpu/drm/v3d/v3d_bo.c
··· 58 58 .get_sg_table = drm_gem_shmem_get_sg_table, 59 59 .vmap = drm_gem_shmem_vmap, 60 60 .vunmap = drm_gem_shmem_vunmap, 61 - .vm_ops = &drm_gem_shmem_vm_ops, 61 + .mmap = drm_gem_shmem_mmap, 62 62 }; 63 63 64 64 /* gem_create_object function for allocating a BO struct and doing
+1 -1
drivers/gpu/drm/v3d/v3d_drv.c
··· 172 172 kfree(v3d_priv); 173 173 } 174 174 175 - DEFINE_DRM_GEM_SHMEM_FOPS(v3d_drm_fops); 175 + DEFINE_DRM_GEM_FOPS(v3d_drm_fops); 176 176 177 177 /* DRM_AUTH is required on SUBMIT_CL for now, while we don't have GMP 178 178 * protection between clients. Note that render nodes would be
+1 -1
drivers/gpu/drm/vboxvideo/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 2 vboxvideo-y := hgsmi_base.o modesetting.o vbva_base.o \ 3 - vbox_drv.o vbox_fb.o vbox_hgsmi.o vbox_irq.o vbox_main.o \ 3 + vbox_drv.o vbox_hgsmi.o vbox_irq.o vbox_main.o \ 4 4 vbox_mode.o vbox_ttm.o 5 5 6 6 obj-$(CONFIG_DRM_VBOXVIDEO) += vboxvideo.o
+4 -15
drivers/gpu/drm/vboxvideo/vbox_drv.c
··· 14 14 15 15 #include <drm/drm_crtc_helper.h> 16 16 #include <drm/drm_drv.h> 17 + #include <drm/drm_fb_helper.h> 17 18 #include <drm/drm_file.h> 18 19 #include <drm/drm_ioctl.h> 19 20 ··· 32 31 { } 33 32 }; 34 33 MODULE_DEVICE_TABLE(pci, pciidlist); 35 - 36 - static const struct drm_fb_helper_funcs vbox_fb_helper_funcs = { 37 - .fb_probe = vboxfb_create, 38 - }; 39 34 40 35 static int vbox_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent) 41 36 { ··· 76 79 if (ret) 77 80 goto err_mode_fini; 78 81 79 - ret = drm_fb_helper_fbdev_setup(&vbox->ddev, &vbox->fb_helper, 80 - &vbox_fb_helper_funcs, 32, 81 - vbox->num_crtcs); 82 + ret = drm_fbdev_generic_setup(&vbox->ddev, 32); 82 83 if (ret) 83 84 goto err_irq_fini; 84 85 85 86 ret = drm_dev_register(&vbox->ddev, 0); 86 87 if (ret) 87 - goto err_fbdev_fini; 88 + goto err_irq_fini; 88 89 89 90 return 0; 90 91 91 - err_fbdev_fini: 92 - vbox_fbdev_fini(vbox); 93 92 err_irq_fini: 94 93 vbox_irq_fini(vbox); 95 94 err_mode_fini: ··· 106 113 struct vbox_private *vbox = pci_get_drvdata(pdev); 107 114 108 115 drm_dev_unregister(&vbox->ddev); 109 - vbox_fbdev_fini(vbox); 110 116 vbox_irq_fini(vbox); 111 117 vbox_mode_fini(vbox); 112 118 vbox_mm_fini(vbox); ··· 181 189 #endif 182 190 }; 183 191 184 - static const struct file_operations vbox_fops = { 185 - .owner = THIS_MODULE, 186 - DRM_VRAM_MM_FILE_OPERATIONS 187 - }; 192 + DEFINE_DRM_GEM_FOPS(vbox_fops); 188 193 189 194 static struct drm_driver driver = { 190 195 .driver_features =
-25
drivers/gpu/drm/vboxvideo/vbox_drv.h
··· 16 16 #include <linux/string.h> 17 17 18 18 #include <drm/drm_encoder.h> 19 - #include <drm/drm_fb_helper.h> 20 19 #include <drm/drm_gem.h> 21 20 #include <drm/drm_gem_vram_helper.h> 22 21 ··· 45 46 sizeof(struct hgsmi_host_flags)) 46 47 #define HOST_FLAGS_OFFSET GUEST_HEAP_USABLE_SIZE 47 48 48 - struct vbox_framebuffer { 49 - struct drm_framebuffer base; 50 - struct drm_gem_object *obj; 51 - }; 52 - 53 49 struct vbox_private { 54 50 /* Must be first; or we must define our own release callback */ 55 51 struct drm_device ddev; 56 - struct drm_fb_helper fb_helper; 57 - struct vbox_framebuffer afb; 58 52 59 53 u8 __iomem *guest_heap; 60 54 u8 __iomem *vbva_buffers; ··· 127 135 #define to_vbox_crtc(x) container_of(x, struct vbox_crtc, base) 128 136 #define to_vbox_connector(x) container_of(x, struct vbox_connector, base) 129 137 #define to_vbox_encoder(x) container_of(x, struct vbox_encoder, base) 130 - #define to_vbox_framebuffer(x) container_of(x, struct vbox_framebuffer, base) 131 138 132 139 bool vbox_check_supported(u16 id); 133 140 int vbox_hw_init(struct vbox_private *vbox); ··· 137 146 138 147 void vbox_report_caps(struct vbox_private *vbox); 139 148 140 - void vbox_framebuffer_dirty_rectangles(struct drm_framebuffer *fb, 141 - struct drm_clip_rect *rects, 142 - unsigned int num_rects); 143 - 144 - int vbox_framebuffer_init(struct vbox_private *vbox, 145 - struct vbox_framebuffer *vbox_fb, 146 - const struct drm_mode_fb_cmd2 *mode_cmd, 147 - struct drm_gem_object *obj); 148 - 149 - int vboxfb_create(struct drm_fb_helper *helper, 150 - struct drm_fb_helper_surface_size *sizes); 151 - void vbox_fbdev_fini(struct vbox_private *vbox); 152 - 153 149 int vbox_mm_init(struct vbox_private *vbox); 154 150 void vbox_mm_fini(struct vbox_private *vbox); 155 - 156 - int vbox_gem_create(struct vbox_private *vbox, 157 - u32 size, bool iskernel, struct drm_gem_object **obj); 158 151 159 152 /* vbox_irq.c */ 160 153 int vbox_irq_init(struct vbox_private *vbox);
-149
drivers/gpu/drm/vboxvideo/vbox_fb.c
··· 1 - // SPDX-License-Identifier: MIT 2 - /* 3 - * Copyright (C) 2013-2017 Oracle Corporation 4 - * This file is based on ast_fb.c 5 - * Copyright 2012 Red Hat Inc. 6 - * Authors: Dave Airlie <airlied@redhat.com> 7 - * Michael Thayer <michael.thayer@oracle.com, 8 - */ 9 - #include <linux/delay.h> 10 - #include <linux/errno.h> 11 - #include <linux/fb.h> 12 - #include <linux/init.h> 13 - #include <linux/kernel.h> 14 - #include <linux/mm.h> 15 - #include <linux/module.h> 16 - #include <linux/pci.h> 17 - #include <linux/string.h> 18 - #include <linux/sysrq.h> 19 - #include <linux/tty.h> 20 - 21 - #include <drm/drm_crtc.h> 22 - #include <drm/drm_crtc_helper.h> 23 - #include <drm/drm_fb_helper.h> 24 - #include <drm/drm_fourcc.h> 25 - 26 - #include "vbox_drv.h" 27 - #include "vboxvideo.h" 28 - 29 - #ifdef CONFIG_DRM_KMS_FB_HELPER 30 - static struct fb_deferred_io vbox_defio = { 31 - .delay = HZ / 30, 32 - .deferred_io = drm_fb_helper_deferred_io, 33 - }; 34 - #endif 35 - 36 - static struct fb_ops vboxfb_ops = { 37 - .owner = THIS_MODULE, 38 - DRM_FB_HELPER_DEFAULT_OPS, 39 - .fb_fillrect = drm_fb_helper_sys_fillrect, 40 - .fb_copyarea = drm_fb_helper_sys_copyarea, 41 - .fb_imageblit = drm_fb_helper_sys_imageblit, 42 - }; 43 - 44 - int vboxfb_create(struct drm_fb_helper *helper, 45 - struct drm_fb_helper_surface_size *sizes) 46 - { 47 - struct vbox_private *vbox = 48 - container_of(helper, struct vbox_private, fb_helper); 49 - struct pci_dev *pdev = vbox->ddev.pdev; 50 - struct drm_mode_fb_cmd2 mode_cmd; 51 - struct drm_framebuffer *fb; 52 - struct fb_info *info; 53 - struct drm_gem_object *gobj; 54 - struct drm_gem_vram_object *gbo; 55 - int size, ret; 56 - s64 gpu_addr; 57 - u32 pitch; 58 - 59 - mode_cmd.width = sizes->surface_width; 60 - mode_cmd.height = sizes->surface_height; 61 - pitch = mode_cmd.width * ((sizes->surface_bpp + 7) / 8); 62 - mode_cmd.pixel_format = drm_mode_legacy_fb_format(sizes->surface_bpp, 63 - sizes->surface_depth); 64 - mode_cmd.pitches[0] = pitch; 65 - 66 - size = pitch * mode_cmd.height; 67 - 68 - ret = vbox_gem_create(vbox, size, true, &gobj); 69 - if (ret) { 70 - DRM_ERROR("failed to create fbcon backing object %d\n", ret); 71 - return ret; 72 - } 73 - 74 - ret = vbox_framebuffer_init(vbox, &vbox->afb, &mode_cmd, gobj); 75 - if (ret) 76 - return ret; 77 - 78 - gbo = drm_gem_vram_of_gem(gobj); 79 - 80 - ret = drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM); 81 - if (ret) 82 - return ret; 83 - 84 - info = drm_fb_helper_alloc_fbi(helper); 85 - if (IS_ERR(info)) 86 - return PTR_ERR(info); 87 - 88 - info->screen_size = size; 89 - info->screen_base = (char __iomem *)drm_gem_vram_kmap(gbo, true, NULL); 90 - if (IS_ERR(info->screen_base)) 91 - return PTR_ERR(info->screen_base); 92 - 93 - fb = &vbox->afb.base; 94 - helper->fb = fb; 95 - 96 - info->fbops = &vboxfb_ops; 97 - 98 - /* 99 - * This seems to be done for safety checking that the framebuffer 100 - * is not registered twice by different drivers. 
101 - */ 102 - info->apertures->ranges[0].base = pci_resource_start(pdev, 0); 103 - info->apertures->ranges[0].size = pci_resource_len(pdev, 0); 104 - 105 - drm_fb_helper_fill_info(info, helper, sizes); 106 - 107 - gpu_addr = drm_gem_vram_offset(gbo); 108 - if (gpu_addr < 0) 109 - return (int)gpu_addr; 110 - info->fix.smem_start = info->apertures->ranges[0].base + gpu_addr; 111 - info->fix.smem_len = vbox->available_vram_size - gpu_addr; 112 - 113 - #ifdef CONFIG_DRM_KMS_FB_HELPER 114 - info->fbdefio = &vbox_defio; 115 - fb_deferred_io_init(info); 116 - #endif 117 - 118 - info->pixmap.flags = FB_PIXMAP_SYSTEM; 119 - 120 - DRM_DEBUG_KMS("allocated %dx%d\n", fb->width, fb->height); 121 - 122 - return 0; 123 - } 124 - 125 - void vbox_fbdev_fini(struct vbox_private *vbox) 126 - { 127 - struct vbox_framebuffer *afb = &vbox->afb; 128 - 129 - #ifdef CONFIG_DRM_KMS_FB_HELPER 130 - if (vbox->fb_helper.fbdev && vbox->fb_helper.fbdev->fbdefio) 131 - fb_deferred_io_cleanup(vbox->fb_helper.fbdev); 132 - #endif 133 - 134 - drm_fb_helper_unregister_fbi(&vbox->fb_helper); 135 - 136 - if (afb->obj) { 137 - struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(afb->obj); 138 - 139 - drm_gem_vram_kunmap(gbo); 140 - drm_gem_vram_unpin(gbo); 141 - 142 - drm_gem_object_put_unlocked(afb->obj); 143 - afb->obj = NULL; 144 - } 145 - drm_fb_helper_fini(&vbox->fb_helper); 146 - 147 - drm_framebuffer_unregister_private(&afb->base); 148 - drm_framebuffer_cleanup(&afb->base); 149 - }
+1 -118
drivers/gpu/drm/vboxvideo/vbox_main.c
··· 11 11 #include <linux/vbox_err.h> 12 12 #include <drm/drm_fb_helper.h> 13 13 #include <drm/drm_crtc_helper.h> 14 + #include <drm/drm_damage_helper.h> 14 15 15 16 #include "vbox_drv.h" 16 17 #include "vboxvideo_guest.h" 17 18 #include "vboxvideo_vbe.h" 18 - 19 - static void vbox_user_framebuffer_destroy(struct drm_framebuffer *fb) 20 - { 21 - struct vbox_framebuffer *vbox_fb = to_vbox_framebuffer(fb); 22 - 23 - if (vbox_fb->obj) 24 - drm_gem_object_put_unlocked(vbox_fb->obj); 25 - 26 - drm_framebuffer_cleanup(fb); 27 - kfree(fb); 28 - } 29 19 30 20 void vbox_report_caps(struct vbox_private *vbox) 31 21 { ··· 26 36 hgsmi_send_caps_info(vbox->guest_pool, caps); 27 37 caps |= VBVACAPS_VIDEO_MODE_HINTS; 28 38 hgsmi_send_caps_info(vbox->guest_pool, caps); 29 - } 30 - 31 - /* Send information about dirty rectangles to VBVA. */ 32 - void vbox_framebuffer_dirty_rectangles(struct drm_framebuffer *fb, 33 - struct drm_clip_rect *rects, 34 - unsigned int num_rects) 35 - { 36 - struct vbox_private *vbox = fb->dev->dev_private; 37 - struct drm_display_mode *mode; 38 - struct drm_crtc *crtc; 39 - int crtc_x, crtc_y; 40 - unsigned int i; 41 - 42 - mutex_lock(&vbox->hw_mutex); 43 - list_for_each_entry(crtc, &fb->dev->mode_config.crtc_list, head) { 44 - if (crtc->primary->state->fb != fb) 45 - continue; 46 - 47 - mode = &crtc->state->mode; 48 - crtc_x = crtc->primary->state->src_x >> 16; 49 - crtc_y = crtc->primary->state->src_y >> 16; 50 - 51 - for (i = 0; i < num_rects; ++i) { 52 - struct vbva_cmd_hdr cmd_hdr; 53 - unsigned int crtc_id = to_vbox_crtc(crtc)->crtc_id; 54 - 55 - if (rects[i].x1 > crtc_x + mode->hdisplay || 56 - rects[i].y1 > crtc_y + mode->vdisplay || 57 - rects[i].x2 < crtc_x || 58 - rects[i].y2 < crtc_y) 59 - continue; 60 - 61 - cmd_hdr.x = (s16)rects[i].x1; 62 - cmd_hdr.y = (s16)rects[i].y1; 63 - cmd_hdr.w = (u16)rects[i].x2 - rects[i].x1; 64 - cmd_hdr.h = (u16)rects[i].y2 - rects[i].y1; 65 - 66 - if (!vbva_buffer_begin_update(&vbox->vbva_info[crtc_id], 67 - vbox->guest_pool)) 68 - continue; 69 - 70 - vbva_write(&vbox->vbva_info[crtc_id], vbox->guest_pool, 71 - &cmd_hdr, sizeof(cmd_hdr)); 72 - vbva_buffer_end_update(&vbox->vbva_info[crtc_id]); 73 - } 74 - } 75 - mutex_unlock(&vbox->hw_mutex); 76 - } 77 - 78 - static int vbox_user_framebuffer_dirty(struct drm_framebuffer *fb, 79 - struct drm_file *file_priv, 80 - unsigned int flags, unsigned int color, 81 - struct drm_clip_rect *rects, 82 - unsigned int num_rects) 83 - { 84 - vbox_framebuffer_dirty_rectangles(fb, rects, num_rects); 85 - 86 - return 0; 87 - } 88 - 89 - static const struct drm_framebuffer_funcs vbox_fb_funcs = { 90 - .destroy = vbox_user_framebuffer_destroy, 91 - .dirty = vbox_user_framebuffer_dirty, 92 - }; 93 - 94 - int vbox_framebuffer_init(struct vbox_private *vbox, 95 - struct vbox_framebuffer *vbox_fb, 96 - const struct drm_mode_fb_cmd2 *mode_cmd, 97 - struct drm_gem_object *obj) 98 - { 99 - int ret; 100 - 101 - drm_helper_mode_fill_fb_struct(&vbox->ddev, &vbox_fb->base, mode_cmd); 102 - vbox_fb->obj = obj; 103 - ret = drm_framebuffer_init(&vbox->ddev, &vbox_fb->base, &vbox_fb_funcs); 104 - if (ret) { 105 - DRM_ERROR("framebuffer init failed %d\n", ret); 106 - return ret; 107 - } 108 - 109 - return 0; 110 39 } 111 40 112 41 static int vbox_accel_init(struct vbox_private *vbox) ··· 178 269 vbox_accel_fini(vbox); 179 270 gen_pool_destroy(vbox->guest_pool); 180 271 pci_iounmap(vbox->ddev.pdev, vbox->guest_heap); 181 - } 182 - 183 - int vbox_gem_create(struct vbox_private *vbox, 184 - u32 size, bool iskernel, struct 
drm_gem_object **obj) 185 - { 186 - struct drm_gem_vram_object *gbo; 187 - int ret; 188 - 189 - *obj = NULL; 190 - 191 - size = roundup(size, PAGE_SIZE); 192 - if (size == 0) 193 - return -EINVAL; 194 - 195 - gbo = drm_gem_vram_create(&vbox->ddev, &vbox->ddev.vram_mm->bdev, 196 - size, 0, false); 197 - if (IS_ERR(gbo)) { 198 - ret = PTR_ERR(gbo); 199 - if (ret != -ERESTARTSYS) 200 - DRM_ERROR("failed to allocate GEM object\n"); 201 - return ret; 202 - } 203 - 204 - *obj = &gbo->bo.base; 205 - 206 - return 0; 207 272 }
+43 -42
drivers/gpu/drm/vboxvideo/vbox_mode.c
··· 13 13 14 14 #include <drm/drm_atomic.h> 15 15 #include <drm/drm_atomic_helper.h> 16 + #include <drm/drm_fb_helper.h> 16 17 #include <drm/drm_fourcc.h> 18 + #include <drm/drm_gem_framebuffer_helper.h> 17 19 #include <drm/drm_plane_helper.h> 18 20 #include <drm/drm_probe_helper.h> 19 21 #include <drm/drm_vblank.h> ··· 135 133 136 134 if (!fb1) { 137 135 fb1 = fb; 138 - if (to_vbox_framebuffer(fb1) == &vbox->afb) 136 + if (fb1 == vbox->ddev.fb_helper->fb) 139 137 break; 140 138 } else if (fb != fb1) { 141 139 single_framebuffer = false; ··· 174 172 struct drm_framebuffer *fb, 175 173 int x, int y) 176 174 { 177 - struct drm_gem_vram_object *gbo = 178 - drm_gem_vram_of_gem(to_vbox_framebuffer(fb)->obj); 175 + struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(fb->obj[0]); 179 176 struct vbox_private *vbox = crtc->dev->dev_private; 180 177 struct vbox_crtc *vbox_crtc = to_vbox_crtc(crtc); 181 178 bool needs_modeset = drm_atomic_crtc_needs_modeset(crtc->state); ··· 284 283 { 285 284 struct drm_crtc *crtc = plane->state->crtc; 286 285 struct drm_framebuffer *fb = plane->state->fb; 286 + struct vbox_private *vbox = fb->dev->dev_private; 287 + struct drm_mode_rect *clips; 288 + uint32_t num_clips, i; 287 289 288 290 vbox_crtc_set_base_and_mode(crtc, fb, 289 291 plane->state->src_x >> 16, 290 292 plane->state->src_y >> 16); 293 + 294 + /* Send information about dirty rectangles to VBVA. */ 295 + 296 + clips = drm_plane_get_damage_clips(plane->state); 297 + num_clips = drm_plane_get_damage_clips_count(plane->state); 298 + 299 + if (!num_clips) 300 + return; 301 + 302 + mutex_lock(&vbox->hw_mutex); 303 + 304 + for (i = 0; i < num_clips; ++i, ++clips) { 305 + struct vbva_cmd_hdr cmd_hdr; 306 + unsigned int crtc_id = to_vbox_crtc(crtc)->crtc_id; 307 + 308 + cmd_hdr.x = (s16)clips->x1; 309 + cmd_hdr.y = (s16)clips->y1; 310 + cmd_hdr.w = (u16)clips->x2 - clips->x1; 311 + cmd_hdr.h = (u16)clips->y2 - clips->y1; 312 + 313 + if (!vbva_buffer_begin_update(&vbox->vbva_info[crtc_id], 314 + vbox->guest_pool)) 315 + continue; 316 + 317 + vbva_write(&vbox->vbva_info[crtc_id], vbox->guest_pool, 318 + &cmd_hdr, sizeof(cmd_hdr)); 319 + vbva_buffer_end_update(&vbox->vbva_info[crtc_id]); 320 + } 321 + 322 + mutex_unlock(&vbox->hw_mutex); 291 323 } 292 324 293 325 static void vbox_primary_atomic_disable(struct drm_plane *plane, ··· 343 309 if (!new_state->fb) 344 310 return 0; 345 311 346 - gbo = drm_gem_vram_of_gem(to_vbox_framebuffer(new_state->fb)->obj); 312 + gbo = drm_gem_vram_of_gem(new_state->fb->obj[0]); 347 313 ret = drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM); 348 314 if (ret) 349 315 DRM_WARN("Error %d pinning new fb, out of video mem?\n", ret); ··· 359 325 if (!old_state->fb) 360 326 return; 361 327 362 - gbo = drm_gem_vram_of_gem(to_vbox_framebuffer(old_state->fb)->obj); 328 + gbo = drm_gem_vram_of_gem(old_state->fb->obj[0]); 363 329 drm_gem_vram_unpin(gbo); 364 330 } 365 331 ··· 420 386 container_of(plane->dev, struct vbox_private, ddev); 421 387 struct vbox_crtc *vbox_crtc = to_vbox_crtc(plane->state->crtc); 422 388 struct drm_framebuffer *fb = plane->state->fb; 423 - struct drm_gem_vram_object *gbo = 424 - drm_gem_vram_of_gem(to_vbox_framebuffer(fb)->obj); 389 + struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(fb->obj[0]); 425 390 u32 width = plane->state->crtc_w; 426 391 u32 height = plane->state->crtc_h; 427 392 size_t data_size, mask_size; ··· 500 467 if (!new_state->fb) 501 468 return 0; 502 469 503 - gbo = drm_gem_vram_of_gem(to_vbox_framebuffer(new_state->fb)->obj); 470 + gbo = 
drm_gem_vram_of_gem(new_state->fb->obj[0]); 504 471 return drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_SYSTEM); 505 472 } 506 473 ··· 512 479 if (!plane->state->fb) 513 480 return; 514 481 515 - gbo = drm_gem_vram_of_gem(to_vbox_framebuffer(plane->state->fb)->obj); 482 + gbo = drm_gem_vram_of_gem(plane->state->fb->obj[0]); 516 483 drm_gem_vram_unpin(gbo); 517 484 } 518 485 ··· 889 856 return 0; 890 857 } 891 858 892 - static struct drm_framebuffer *vbox_user_framebuffer_create( 893 - struct drm_device *dev, 894 - struct drm_file *filp, 895 - const struct drm_mode_fb_cmd2 *mode_cmd) 896 - { 897 - struct vbox_private *vbox = 898 - container_of(dev, struct vbox_private, ddev); 899 - struct drm_gem_object *obj; 900 - struct vbox_framebuffer *vbox_fb; 901 - int ret = -ENOMEM; 902 - 903 - obj = drm_gem_object_lookup(filp, mode_cmd->handles[0]); 904 - if (!obj) 905 - return ERR_PTR(-ENOENT); 906 - 907 - vbox_fb = kzalloc(sizeof(*vbox_fb), GFP_KERNEL); 908 - if (!vbox_fb) 909 - goto err_unref_obj; 910 - 911 - ret = vbox_framebuffer_init(vbox, vbox_fb, mode_cmd, obj); 912 - if (ret) 913 - goto err_free_vbox_fb; 914 - 915 - return &vbox_fb->base; 916 - 917 - err_free_vbox_fb: 918 - kfree(vbox_fb); 919 - err_unref_obj: 920 - drm_gem_object_put_unlocked(obj); 921 - return ERR_PTR(ret); 922 - } 923 - 924 859 static const struct drm_mode_config_funcs vbox_mode_funcs = { 925 - .fb_create = vbox_user_framebuffer_create, 860 + .fb_create = drm_gem_fb_create, 926 861 .atomic_check = drm_atomic_helper_check, 927 862 .atomic_commit = drm_atomic_helper_commit, 928 863 };
+1 -4
drivers/gpu/drm/vc4/vc4_hdmi.c
··· 398 398 HDMI_QUANTIZATION_RANGE_LIMITED : 399 399 HDMI_QUANTIZATION_RANGE_FULL); 400 400 401 - frame.avi.right_bar = cstate->tv.margins.right; 402 - frame.avi.left_bar = cstate->tv.margins.left; 403 - frame.avi.top_bar = cstate->tv.margins.top; 404 - frame.avi.bottom_bar = cstate->tv.margins.bottom; 401 + drm_hdmi_avi_infoframe_bars(&frame.avi, cstate); 405 402 406 403 vc4_hdmi_write_infoframe(encoder, &frame); 407 404 }
+1 -1
drivers/gpu/drm/virtio/virtgpu_drv.c
··· 184 184 MODULE_AUTHOR("Gerd Hoffmann <kraxel@redhat.com>"); 185 185 MODULE_AUTHOR("Alon Levy"); 186 186 187 - DEFINE_DRM_GEM_SHMEM_FOPS(virtio_gpu_driver_fops); 187 + DEFINE_DRM_GEM_FOPS(virtio_gpu_driver_fops); 188 188 189 189 static struct drm_driver driver = { 190 190 .driver_features = DRIVER_MODESET | DRIVER_GEM | DRIVER_RENDER | DRIVER_ATOMIC,
+2 -2
drivers/gpu/drm/virtio/virtgpu_drv.h
··· 267 267 uint32_t resource_id); 268 268 void virtio_gpu_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev, 269 269 uint64_t offset, 270 - __le32 width, __le32 height, 271 - __le32 x, __le32 y, 270 + uint32_t width, uint32_t height, 271 + uint32_t x, uint32_t y, 272 272 struct virtio_gpu_object_array *objs, 273 273 struct virtio_gpu_fence *fence); 274 274 void virtio_gpu_cmd_resource_flush(struct virtio_gpu_device *vgdev,
+4 -5
drivers/gpu/drm/virtio/virtgpu_kms.c
··· 155 155 #ifdef __LITTLE_ENDIAN 156 156 if (virtio_has_feature(vgdev->vdev, VIRTIO_GPU_F_VIRGL)) 157 157 vgdev->has_virgl_3d = true; 158 - DRM_INFO("virgl 3d acceleration %s\n", 159 - vgdev->has_virgl_3d ? "enabled" : "not supported by host"); 160 - #else 161 - DRM_INFO("virgl 3d acceleration not supported by guest\n"); 162 158 #endif 163 159 if (virtio_has_feature(vgdev->vdev, VIRTIO_GPU_F_EDID)) { 164 160 vgdev->has_edid = true; 165 - DRM_INFO("EDID support available.\n"); 166 161 } 162 + 163 + DRM_INFO("features: %cvirgl %cedid\n", 164 + vgdev->has_virgl_3d ? '+' : '-', 165 + vgdev->has_edid ? '+' : '-'); 167 166 168 167 ret = virtio_find_vqs(vgdev->vdev, 2, vqs, callbacks, names, NULL); 169 168 if (ret) {
+1 -1
drivers/gpu/drm/virtio/virtgpu_object.c
··· 86 86 .get_sg_table = drm_gem_shmem_get_sg_table, 87 87 .vmap = drm_gem_shmem_vmap, 88 88 .vunmap = drm_gem_shmem_vunmap, 89 - .vm_ops = &drm_gem_shmem_vm_ops, 89 + .mmap = &drm_gem_shmem_mmap, 90 90 }; 91 91 92 92 struct drm_gem_object *virtio_gpu_create_object(struct drm_device *dev,
+6 -6
drivers/gpu/drm/virtio/virtgpu_plane.c
··· 132 132 virtio_gpu_array_add_obj(objs, vgfb->base.obj[0]); 133 133 virtio_gpu_cmd_transfer_to_host_2d 134 134 (vgdev, 0, 135 - cpu_to_le32(plane->state->src_w >> 16), 136 - cpu_to_le32(plane->state->src_h >> 16), 137 - cpu_to_le32(plane->state->src_x >> 16), 138 - cpu_to_le32(plane->state->src_y >> 16), 135 + plane->state->src_w >> 16, 136 + plane->state->src_h >> 16, 137 + plane->state->src_x >> 16, 138 + plane->state->src_y >> 16, 139 139 objs, NULL); 140 140 } 141 141 } else { ··· 234 234 virtio_gpu_array_add_obj(objs, vgfb->base.obj[0]); 235 235 virtio_gpu_cmd_transfer_to_host_2d 236 236 (vgdev, 0, 237 - cpu_to_le32(plane->state->crtc_w), 238 - cpu_to_le32(plane->state->crtc_h), 237 + plane->state->crtc_w, 238 + plane->state->crtc_h, 239 239 0, 0, objs, vgfb->fence); 240 240 dma_fence_wait(&vgfb->fence->f, true); 241 241 dma_fence_put(&vgfb->fence->f);
+6 -6
drivers/gpu/drm/virtio/virtgpu_vq.c
··· 549 549 550 550 void virtio_gpu_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev, 551 551 uint64_t offset, 552 - __le32 width, __le32 height, 553 - __le32 x, __le32 y, 552 + uint32_t width, uint32_t height, 553 + uint32_t x, uint32_t y, 554 554 struct virtio_gpu_object_array *objs, 555 555 struct virtio_gpu_fence *fence) 556 556 { ··· 571 571 cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_TRANSFER_TO_HOST_2D); 572 572 cmd_p->resource_id = cpu_to_le32(bo->hw_res_handle); 573 573 cmd_p->offset = cpu_to_le64(offset); 574 - cmd_p->r.width = width; 575 - cmd_p->r.height = height; 576 - cmd_p->r.x = x; 577 - cmd_p->r.y = y; 574 + cmd_p->r.width = cpu_to_le32(width); 575 + cmd_p->r.height = cpu_to_le32(height); 576 + cmd_p->r.x = cpu_to_le32(x); 577 + cmd_p->r.y = cpu_to_le32(y); 578 578 579 579 virtio_gpu_queue_fenced_ctrl_buffer(vgdev, vbuf, &cmd_p->hdr, fence); 580 580 }
+12 -1
drivers/gpu/drm/vkms/vkms_drv.c
··· 11 11 12 12 #include <linux/module.h> 13 13 #include <linux/platform_device.h> 14 + #include <linux/dma-mapping.h> 14 15 16 + #include <drm/drm_gem.h> 15 17 #include <drm/drm_atomic.h> 16 18 #include <drm/drm_atomic_helper.h> 17 19 #include <drm/drm_drv.h> 18 20 #include <drm/drm_fb_helper.h> 19 21 #include <drm/drm_file.h> 20 - #include <drm/drm_gem.h> 21 22 #include <drm/drm_gem_framebuffer_helper.h> 22 23 #include <drm/drm_ioctl.h> 23 24 #include <drm/drm_probe_helper.h> ··· 104 103 .gem_vm_ops = &vkms_gem_vm_ops, 105 104 .gem_free_object_unlocked = vkms_gem_free_object, 106 105 .get_vblank_timestamp = vkms_get_vblank_timestamp, 106 + .prime_fd_to_handle = drm_gem_prime_fd_to_handle, 107 + .gem_prime_import_sg_table = vkms_prime_import_sg_table, 107 108 108 109 .name = DRIVER_NAME, 109 110 .desc = DRIVER_DESC, ··· 159 156 &vkms_device->platform->dev); 160 157 if (ret) 161 158 goto out_unregister; 159 + 160 + ret = dma_coerce_mask_and_coherent(vkms_device->drm.dev, 161 + DMA_BIT_MASK(64)); 162 + 163 + if (ret) { 164 + DRM_ERROR("Could not initialize DMA support\n"); 165 + goto out_fini; 166 + } 162 167 163 168 vkms_device->drm.irq_enabled = true; 164 169
+6
drivers/gpu/drm/vkms/vkms_drv.h
··· 137 137 138 138 void vkms_gem_vunmap(struct drm_gem_object *obj); 139 139 140 + /* Prime */ 141 + struct drm_gem_object * 142 + vkms_prime_import_sg_table(struct drm_device *dev, 143 + struct dma_buf_attachment *attach, 144 + struct sg_table *sg); 145 + 140 146 /* CRC Support */ 141 147 const char *const *vkms_get_crc_sources(struct drm_crtc *crtc, 142 148 size_t *count);
+27
drivers/gpu/drm/vkms/vkms_gem.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0+ 2 2 3 + #include <linux/dma-buf.h> 3 4 #include <linux/shmem_fs.h> 4 5 #include <linux/vmalloc.h> 6 + #include <drm/drm_prime.h> 5 7 6 8 #include "vkms_drv.h" 7 9 ··· 219 217 out: 220 218 mutex_unlock(&vkms_obj->pages_lock); 221 219 return ret; 220 + } 221 + 222 + struct drm_gem_object * 223 + vkms_prime_import_sg_table(struct drm_device *dev, 224 + struct dma_buf_attachment *attach, 225 + struct sg_table *sg) 226 + { 227 + struct vkms_gem_object *obj; 228 + int npages; 229 + 230 + obj = __vkms_gem_create(dev, attach->dmabuf->size); 231 + if (IS_ERR(obj)) 232 + return ERR_CAST(obj); 233 + 234 + npages = PAGE_ALIGN(attach->dmabuf->size) / PAGE_SIZE; 235 + DRM_DEBUG_PRIME("Importing %d pages\n", npages); 236 + 237 + obj->pages = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL); 238 + if (!obj->pages) { 239 + vkms_gem_free_object(&obj->gem); 240 + return ERR_PTR(-ENOMEM); 241 + } 242 + 243 + drm_prime_sg_to_page_addr_arrays(sg, obj->pages, NULL, npages); 244 + return &obj->gem; 222 245 }
+1
include/drm/bridge/dw_hdmi.h
··· 126 126 const struct drm_display_mode *mode); 127 127 unsigned long input_bus_format; 128 128 unsigned long input_bus_encoding; 129 + bool use_drm_infoframe; 129 130 130 131 /* Vendor PHY support */ 131 132 const struct dw_hdmi_phy_ops *phy_ops;
-103
include/drm/drmP.h
··· 1 - /* 2 - * Internal Header for the Direct Rendering Manager 3 - * 4 - * Copyright 1999 Precision Insight, Inc., Cedar Park, Texas. 5 - * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California. 6 - * Copyright (c) 2009-2010, Code Aurora Forum. 7 - * All rights reserved. 8 - * 9 - * Author: Rickard E. (Rik) Faith <faith@valinux.com> 10 - * Author: Gareth Hughes <gareth@valinux.com> 11 - * 12 - * Permission is hereby granted, free of charge, to any person obtaining a 13 - * copy of this software and associated documentation files (the "Software"), 14 - * to deal in the Software without restriction, including without limitation 15 - * the rights to use, copy, modify, merge, publish, distribute, sublicense, 16 - * and/or sell copies of the Software, and to permit persons to whom the 17 - * Software is furnished to do so, subject to the following conditions: 18 - * 19 - * The above copyright notice and this permission notice (including the next 20 - * paragraph) shall be included in all copies or substantial portions of the 21 - * Software. 22 - * 23 - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 24 - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 25 - * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL 26 - * VA LINUX SYSTEMS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR 27 - * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, 28 - * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR 29 - * OTHER DEALINGS IN THE SOFTWARE. 30 - */ 31 - 32 - #ifndef _DRM_P_H_ 33 - #define _DRM_P_H_ 34 - 35 - #include <linux/agp_backend.h> 36 - #include <linux/cdev.h> 37 - #include <linux/dma-mapping.h> 38 - #include <linux/file.h> 39 - #include <linux/fs.h> 40 - #include <linux/highmem.h> 41 - #include <linux/idr.h> 42 - #include <linux/init.h> 43 - #include <linux/io.h> 44 - #include <linux/jiffies.h> 45 - #include <linux/kernel.h> 46 - #include <linux/kref.h> 47 - #include <linux/miscdevice.h> 48 - #include <linux/mm.h> 49 - #include <linux/mutex.h> 50 - #include <linux/platform_device.h> 51 - #include <linux/poll.h> 52 - #include <linux/ratelimit.h> 53 - #include <linux/sched.h> 54 - #include <linux/slab.h> 55 - #include <linux/types.h> 56 - #include <linux/vmalloc.h> 57 - #include <linux/workqueue.h> 58 - #include <linux/dma-fence.h> 59 - #include <linux/module.h> 60 - #include <linux/mman.h> 61 - #include <asm/pgalloc.h> 62 - #include <linux/uaccess.h> 63 - 64 - #include <uapi/drm/drm.h> 65 - #include <uapi/drm/drm_mode.h> 66 - 67 - #include <drm/drm_agpsupport.h> 68 - #include <drm/drm_crtc.h> 69 - #include <drm/drm_fourcc.h> 70 - #include <drm/drm_hashtab.h> 71 - #include <drm/drm_mm.h> 72 - #include <drm/drm_os_linux.h> 73 - #include <drm/drm_sarea.h> 74 - #include <drm/drm_drv.h> 75 - #include <drm/drm_prime.h> 76 - #include <drm/drm_print.h> 77 - #include <drm/drm_pci.h> 78 - #include <drm/drm_file.h> 79 - #include <drm/drm_debugfs.h> 80 - #include <drm/drm_ioctl.h> 81 - #include <drm/drm_sysfs.h> 82 - #include <drm/drm_vblank.h> 83 - #include <drm/drm_irq.h> 84 - #include <drm/drm_device.h> 85 - 86 - struct module; 87 - 88 - struct device_node; 89 - struct videomode; 90 - struct dma_resv; 91 - struct dma_buf_attachment; 92 - 93 - struct pci_dev; 94 - struct pci_controller; 95 - 96 - /* 97 - * NOTE: drmP.h is obsolete - do NOT add anything to this file 98 - * 99 - * Do not include drmP.h in new files. 
100 - * Work is ongoing to remove drmP.h includes from existing files 101 - */ 102 - 103 - #endif
+63 -18
include/drm/drm_dp_helper.h
··· 23 23 #ifndef _DRM_DP_HELPER_H_ 24 24 #define _DRM_DP_HELPER_H_ 25 25 26 - #include <linux/types.h> 27 - #include <linux/i2c.h> 28 26 #include <linux/delay.h> 27 + #include <linux/i2c.h> 28 + #include <linux/types.h> 29 29 30 30 /* 31 31 * Unless otherwise noted, all values are from the DP 1.1a spec. Note that ··· 137 137 # define DP_DETAILED_CAP_INFO_AVAILABLE (1 << 4) /* DPI */ 138 138 139 139 #define DP_MAIN_LINK_CHANNEL_CODING 0x006 140 + # define DP_CAP_ANSI_8B10B (1 << 0) 140 141 141 142 #define DP_DOWN_STREAM_PORT_COUNT 0x007 142 143 # define DP_PORT_COUNT_MASK 0x0f ··· 605 604 # define DP_ADJUST_PRE_EMPHASIS_LANE1_SHIFT 6 606 605 607 606 #define DP_ADJUST_REQUEST_POST_CURSOR2 0x20c 607 + # define DP_ADJUST_POST_CURSOR2_LANE0_MASK 0x03 608 + # define DP_ADJUST_POST_CURSOR2_LANE0_SHIFT 0 609 + # define DP_ADJUST_POST_CURSOR2_LANE1_MASK 0x0c 610 + # define DP_ADJUST_POST_CURSOR2_LANE1_SHIFT 2 611 + # define DP_ADJUST_POST_CURSOR2_LANE2_MASK 0x30 612 + # define DP_ADJUST_POST_CURSOR2_LANE2_SHIFT 4 613 + # define DP_ADJUST_POST_CURSOR2_LANE3_MASK 0xc0 614 + # define DP_ADJUST_POST_CURSOR2_LANE3_SHIFT 6 608 615 609 616 #define DP_TEST_REQUEST 0x218 610 617 # define DP_TEST_LINK_TRAINING (1 << 0) ··· 1017 1008 #define DP_HDCP_2_2_REG_STREAM_TYPE_OFFSET 0x69494 1018 1009 #define DP_HDCP_2_2_REG_DBG_OFFSET 0x69518 1019 1010 1011 + /* Link Training (LT)-tunable PHY Repeaters */ 1012 + #define DP_LT_TUNABLE_PHY_REPEATER_FIELD_DATA_STRUCTURE_REV 0xf0000 /* 1.3 */ 1013 + #define DP_MAX_LINK_RATE_PHY_REPEATER 0xf0001 /* 1.4a */ 1014 + #define DP_PHY_REPEATER_CNT 0xf0002 /* 1.3 */ 1015 + #define DP_PHY_REPEATER_MODE 0xf0003 /* 1.3 */ 1016 + #define DP_MAX_LANE_COUNT_PHY_REPEATER 0xf0004 /* 1.4a */ 1017 + #define DP_Repeater_FEC_CAPABILITY 0xf0004 /* 1.4 */ 1018 + #define DP_PHY_REPEATER_EXTENDED_WAIT_TIMEOUT 0xf0005 /* 1.4a */ 1019 + #define DP_TRAINING_PATTERN_SET_PHY_REPEATER1 0xf0010 /* 1.3 */ 1020 + #define DP_TRAINING_LANE0_SET_PHY_REPEATER1 0xf0011 /* 1.3 */ 1021 + #define DP_TRAINING_LANE1_SET_PHY_REPEATER1 0xf0012 /* 1.3 */ 1022 + #define DP_TRAINING_LANE2_SET_PHY_REPEATER1 0xf0013 /* 1.3 */ 1023 + #define DP_TRAINING_LANE3_SET_PHY_REPEATER1 0xf0014 /* 1.3 */ 1024 + #define DP_TRAINING_AUX_RD_INTERVAL_PHY_REPEATER1 0xf0020 /* 1.4a */ 1025 + #define DP_TRANSMITTER_CAPABILITY_PHY_REPEATER1 0xf0021 /* 1.4a */ 1026 + #define DP_LANE0_1_STATUS_PHY_REPEATER1 0xf0030 /* 1.3 */ 1027 + #define DP_LANE2_3_STATUS_PHY_REPEATER1 0xf0031 /* 1.3 */ 1028 + #define DP_LANE_ALIGN_STATUS_UPDATED_PHY_REPEATER1 0xf0032 /* 1.3 */ 1029 + #define DP_ADJUST_REQUEST_LANE0_1_PHY_REPEATER1 0xf0033 /* 1.3 */ 1030 + #define DP_ADJUST_REQUEST_LANE2_3_PHY_REPEATER1 0xf0034 /* 1.3 */ 1031 + #define DP_SYMBOL_ERROR_COUNT_LANE0_PHY_REPEATER1 0xf0035 /* 1.3 */ 1032 + #define DP_SYMBOL_ERROR_COUNT_LANE1_PHY_REPEATER1 0xf0037 /* 1.3 */ 1033 + #define DP_SYMBOL_ERROR_COUNT_LANE2_PHY_REPEATER1 0xf0039 /* 1.3 */ 1034 + #define DP_SYMBOL_ERROR_COUNT_LANE3_PHY_REPEATER1 0xf003b /* 1.3 */ 1035 + #define DP_FEC_STATUS_PHY_REPEATER1 0xf0290 /* 1.4 */ 1036 + 1037 + /* Repeater modes */ 1038 + #define DP_PHY_REPEATER_MODE_TRANSPARENT 0x55 /* 1.3 */ 1039 + #define DP_PHY_REPEATER_MODE_NON_TRANSPARENT 0xaa /* 1.3 */ 1040 + 1020 1041 /* DP HDCP message start offsets in DPCD address space */ 1021 1042 #define DP_HDCP_2_2_AKE_INIT_OFFSET DP_HDCP_2_2_REG_RTX_OFFSET 1022 1043 #define DP_HDCP_2_2_AKE_SEND_CERT_OFFSET DP_HDCP_2_2_REG_CERT_RX_OFFSET ··· 1130 1091 int lane); 1131 1092 u8 drm_dp_get_adjust_request_pre_emphasis(const u8 
link_status[DP_LINK_STATUS_SIZE], 1132 1093 int lane); 1094 + u8 drm_dp_get_adjust_request_post_cursor(const u8 link_status[DP_LINK_STATUS_SIZE], 1095 + unsigned int lane); 1133 1096 1134 1097 #define DP_BRANCH_OUI_HEADER_SIZE 0xc 1135 1098 #define DP_RECEIVER_CAP_SIZE 0xf ··· 1227 1186 } 1228 1187 1229 1188 static inline bool 1189 + drm_dp_fast_training_cap(const u8 dpcd[DP_RECEIVER_CAP_SIZE]) 1190 + { 1191 + return dpcd[DP_DPCD_REV] >= 0x11 && 1192 + (dpcd[DP_MAX_DOWNSPREAD] & DP_NO_AUX_HANDSHAKE_LINK_TRAINING); 1193 + } 1194 + 1195 + static inline bool 1230 1196 drm_dp_tps3_supported(const u8 dpcd[DP_RECEIVER_CAP_SIZE]) 1231 1197 { 1232 1198 return dpcd[DP_DPCD_REV] >= 0x12 && ··· 1296 1248 drm_dp_sink_supports_fec(const u8 fec_capable) 1297 1249 { 1298 1250 return fec_capable & DP_FEC_CAPABLE; 1251 + } 1252 + 1253 + static inline bool 1254 + drm_dp_channel_coding_supported(const u8 dpcd[DP_RECEIVER_CAP_SIZE]) 1255 + { 1256 + return dpcd[DP_MAIN_LINK_CHANNEL_CODING] & DP_CAP_ANSI_8B10B; 1257 + } 1258 + 1259 + static inline bool 1260 + drm_dp_alternate_scrambler_reset_cap(const u8 dpcd[DP_RECEIVER_CAP_SIZE]) 1261 + { 1262 + return dpcd[DP_EDP_CONFIGURATION_CAP] & 1263 + DP_ALTERNATE_SCRAMBLER_RESET_CAP; 1299 1264 } 1300 1265 1301 1266 /* ··· 1455 1394 int drm_dp_dpcd_read_link_status(struct drm_dp_aux *aux, 1456 1395 u8 status[DP_LINK_STATUS_SIZE]); 1457 1396 1458 - /* 1459 - * DisplayPort link 1460 - */ 1461 - #define DP_LINK_CAP_ENHANCED_FRAMING (1 << 0) 1462 - 1463 - struct drm_dp_link { 1464 - unsigned char revision; 1465 - unsigned int rate; 1466 - unsigned int num_lanes; 1467 - unsigned long capabilities; 1468 - }; 1469 - 1470 - int drm_dp_link_probe(struct drm_dp_aux *aux, struct drm_dp_link *link); 1471 - int drm_dp_link_power_up(struct drm_dp_aux *aux, struct drm_dp_link *link); 1472 - int drm_dp_link_power_down(struct drm_dp_aux *aux, struct drm_dp_link *link); 1473 - int drm_dp_link_configure(struct drm_dp_aux *aux, struct drm_dp_link *link); 1474 1397 int drm_dp_downstream_max_clock(const u8 dpcd[DP_RECEIVER_CAP_SIZE], 1475 1398 const u8 port_cap[4]); 1476 1399 int drm_dp_downstream_max_bpc(const u8 dpcd[DP_RECEIVER_CAP_SIZE],
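Besides the LT-tunable PHY repeater (LTTPR) registers, the hunk above adds small DPCD capability helpers such as drm_dp_fast_training_cap() and drm_dp_channel_coding_supported(). A hedged sketch of how a driver might use them after reading the receiver caps; example_check_sink() is illustrative and error handling is trimmed:

	static int example_check_sink(struct drm_dp_aux *aux)
	{
		u8 dpcd[DP_RECEIVER_CAP_SIZE];
		int err;

		/* Read the receiver capability field starting at DPCD address 0x000. */
		err = drm_dp_dpcd_read(aux, DP_DPCD_REV, dpcd, sizeof(dpcd));
		if (err < 0)
			return err;

		/* ANSI 8B/10B channel coding is mandatory for DP 1.x links. */
		if (!drm_dp_channel_coding_supported(dpcd))
			return -ENODEV;

		/* Sinks that allow skipping the AUX handshake can train faster. */
		if (drm_dp_fast_training_cap(dpcd))
			DRM_DEBUG_KMS("sink supports no-AUX-handshake training\n");

		return 0;
	}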
+4 -1
include/drm/drm_edid.h
··· 368 368 const struct drm_connector_state *conn_state); 369 369 370 370 void 371 + drm_hdmi_avi_infoframe_bars(struct hdmi_avi_infoframe *frame, 372 + const struct drm_connector_state *conn_state); 373 + 374 + void 371 375 drm_hdmi_avi_infoframe_quant_range(struct hdmi_avi_infoframe *frame, 372 376 struct drm_connector *connector, 373 377 const struct drm_display_mode *mode, ··· 485 481 int drm_add_override_edid_modes(struct drm_connector *connector); 486 482 487 483 u8 drm_match_cea_mode(const struct drm_display_mode *to_match); 488 - enum hdmi_picture_aspect drm_get_cea_aspect_ratio(const u8 video_code); 489 484 bool drm_detect_hdmi_monitor(struct edid *edid); 490 485 bool drm_detect_monitor_audio(struct edid *edid); 491 486 enum hdmi_quantization_range
+14
include/drm/drm_gem.h
··· 151 151 void (*vunmap)(struct drm_gem_object *obj, void *vaddr); 152 152 153 153 /** 154 + * @mmap: 155 + * 156 + * Handle mmap() of the gem object, set up the vma accordingly. 157 + * 158 + * This callback is optional. 159 + * 160 + * The callback is used by both drm_gem_mmap_obj() and 161 + * drm_gem_prime_mmap(). When @mmap is present, @vm_ops is not 162 + * used; the @mmap callback must set vma->vm_ops instead. 163 + * 164 + */ 165 + int (*mmap)(struct drm_gem_object *obj, struct vm_area_struct *vma); 166 + 167 + /** 154 168 * @vm_ops: 155 169 * 156 170 * Virtual memory operations used with mmap.
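With the @mmap hook documented above, the shmem-backed drivers in this series (v3d, virtio-gpu, gm12u320) drop their custom file_operations. A minimal sketch of the resulting wiring for a purely shmem-backed driver; the example_* names are placeholders, everything else is existing helper API:

	static const struct drm_gem_object_funcs example_gem_funcs = {
		.free = drm_gem_shmem_free_object,
		.print_info = drm_gem_shmem_print_info,
		.pin = drm_gem_shmem_pin,
		.unpin = drm_gem_shmem_unpin,
		.get_sg_table = drm_gem_shmem_get_sg_table,
		.vmap = drm_gem_shmem_vmap,
		.vunmap = drm_gem_shmem_vunmap,
		.mmap = drm_gem_shmem_mmap,	/* replaces .vm_ops and the shmem fops macro */
	};

	/* The generic fops route mmap() through the object's @mmap callback. */
	DEFINE_DRM_GEM_FOPS(example_fops);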
+1 -29
include/drm/drm_gem_shmem_helper.h
··· 88 88 #define to_drm_gem_shmem_obj(obj) \ 89 89 container_of(obj, struct drm_gem_shmem_object, base) 90 90 91 - /** 92 - * DEFINE_DRM_GEM_SHMEM_FOPS() - Macro to generate file operations for shmem drivers 93 - * @name: name for the generated structure 94 - * 95 - * This macro autogenerates a suitable &struct file_operations for shmem based 96 - * drivers, which can be assigned to &drm_driver.fops. Note that this structure 97 - * cannot be shared between drivers, because it contains a reference to the 98 - * current module using THIS_MODULE. 99 - * 100 - * Note that the declaration is already marked as static - if you need a 101 - * non-static version of this you're probably doing it wrong and will break the 102 - * THIS_MODULE reference by accident. 103 - */ 104 - #define DEFINE_DRM_GEM_SHMEM_FOPS(name) \ 105 - static const struct file_operations name = {\ 106 - .owner = THIS_MODULE,\ 107 - .open = drm_open,\ 108 - .release = drm_release,\ 109 - .unlocked_ioctl = drm_ioctl,\ 110 - .compat_ioctl = drm_compat_ioctl,\ 111 - .poll = drm_poll,\ 112 - .read = drm_read,\ 113 - .llseek = noop_llseek,\ 114 - .mmap = drm_gem_shmem_mmap, \ 115 - } 116 - 117 91 struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t size); 118 92 void drm_gem_shmem_free_object(struct drm_gem_object *obj); 119 93 ··· 117 143 int drm_gem_shmem_dumb_create(struct drm_file *file, struct drm_device *dev, 118 144 struct drm_mode_create_dumb *args); 119 145 120 - int drm_gem_shmem_mmap(struct file *filp, struct vm_area_struct *vma); 121 - 122 - extern const struct vm_operations_struct drm_gem_shmem_vm_ops; 146 + int drm_gem_shmem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma); 123 147 124 148 void drm_gem_shmem_print_info(struct drm_printer *p, unsigned int indent, 125 149 const struct drm_gem_object *obj);
+2
include/drm/drm_gem_ttm_helper.h
··· 15 15 16 16 void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent, 17 17 const struct drm_gem_object *gem); 18 + int drm_gem_ttm_mmap(struct drm_gem_object *gem, 19 + struct vm_area_struct *vma); 18 20 19 21 #endif
-25
include/drm/drm_gem_vram_helper.h
··· 184 184 struct drm_device *dev, uint64_t vram_base, size_t vram_size); 185 185 void drm_vram_helper_release_mm(struct drm_device *dev); 186 186 187 - /* 188 - * Helpers for &struct file_operations 189 - */ 190 - 191 - int drm_vram_mm_file_operations_mmap( 192 - struct file *filp, struct vm_area_struct *vma); 193 - 194 - /** 195 - * define DRM_VRAM_MM_FILE_OPERATIONS - default callback functions for \ 196 - &struct file_operations 197 - * 198 - * Drivers that use VRAM MM can use this macro to initialize 199 - * &struct file_operations with default functions. 200 - */ 201 - #define DRM_VRAM_MM_FILE_OPERATIONS \ 202 - .llseek = no_llseek, \ 203 - .read = drm_read, \ 204 - .poll = drm_poll, \ 205 - .unlocked_ioctl = drm_ioctl, \ 206 - .compat_ioctl = drm_compat_ioctl, \ 207 - .mmap = drm_vram_mm_file_operations_mmap, \ 208 - .open = drm_open, \ 209 - .release = drm_release \ 210 - 211 - 212 187 #endif
-55
include/drm/drm_os_linux.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - /** 3 - * \file drm_os_linux.h 4 - * OS abstraction macros. 5 - */ 6 - 7 - #include <linux/interrupt.h> /* For task queue support */ 8 - #include <linux/sched/signal.h> 9 - #include <linux/delay.h> 10 - #include <linux/io-64-nonatomic-lo-hi.h> 11 - 12 - /** Current process ID */ 13 - #define DRM_CURRENTPID task_pid_nr(current) 14 - #define DRM_UDELAY(d) udelay(d) 15 - /** Read a byte from a MMIO region */ 16 - #define DRM_READ8(map, offset) readb(((void __iomem *)(map)->handle) + (offset)) 17 - /** Read a word from a MMIO region */ 18 - #define DRM_READ16(map, offset) readw(((void __iomem *)(map)->handle) + (offset)) 19 - /** Read a dword from a MMIO region */ 20 - #define DRM_READ32(map, offset) readl(((void __iomem *)(map)->handle) + (offset)) 21 - /** Write a byte into a MMIO region */ 22 - #define DRM_WRITE8(map, offset, val) writeb(val, ((void __iomem *)(map)->handle) + (offset)) 23 - /** Write a word into a MMIO region */ 24 - #define DRM_WRITE16(map, offset, val) writew(val, ((void __iomem *)(map)->handle) + (offset)) 25 - /** Write a dword into a MMIO region */ 26 - #define DRM_WRITE32(map, offset, val) writel(val, ((void __iomem *)(map)->handle) + (offset)) 27 - 28 - /** Read a qword from a MMIO region - be careful using these unless you really understand them */ 29 - #define DRM_READ64(map, offset) readq(((void __iomem *)(map)->handle) + (offset)) 30 - /** Write a qword into a MMIO region */ 31 - #define DRM_WRITE64(map, offset, val) writeq(val, ((void __iomem *)(map)->handle) + (offset)) 32 - 33 - #define DRM_WAIT_ON( ret, queue, timeout, condition ) \ 34 - do { \ 35 - DECLARE_WAITQUEUE(entry, current); \ 36 - unsigned long end = jiffies + (timeout); \ 37 - add_wait_queue(&(queue), &entry); \ 38 - \ 39 - for (;;) { \ 40 - __set_current_state(TASK_INTERRUPTIBLE); \ 41 - if (condition) \ 42 - break; \ 43 - if (time_after_eq(jiffies, end)) { \ 44 - ret = -EBUSY; \ 45 - break; \ 46 - } \ 47 - schedule_timeout((HZ/100 > 1) ? HZ/100 : 1); \ 48 - if (signal_pending(current)) { \ 49 - ret = -EINTR; \ 50 - break; \ 51 - } \ 52 - } \ 53 - __set_current_state(TASK_RUNNING); \ 54 - remove_wait_queue(&(queue), &entry); \ 55 - } while (0)
+25 -6
include/drm/drm_plane.h
··· 140 140 * @zpos: 141 141 * Priority of the given plane on crtc (optional). 142 142 * 143 - * Note that multiple active planes on the same crtc can have an 144 - * identical zpos value. The rule to solving the conflict is to compare 145 - * the plane object IDs; the plane with a higher ID must be stacked on 146 - * top of a plane with a lower ID. 143 + * User-space may set mutable zpos properties so that multiple active 144 + * planes on the same CRTC have identical zpos values. This is a 145 + * user-space bug, but drivers can solve the conflict by comparing the 146 + * plane object IDs; the plane with a higher ID is stacked on top of a 147 + * plane with a lower ID. 147 148 * 148 149 * See drm_plane_create_zpos_property() and 149 150 * drm_plane_create_zpos_immutable_property() for more details. ··· 184 183 */ 185 184 struct drm_property_blob *fb_damage_clips; 186 185 187 - /** @src: clipped source coordinates of the plane (in 16.16) */ 188 - /** @dst: clipped destination coordinates of the plane */ 186 + /** 187 + * @src: 188 + * 189 + * source coordinates of the plane (in 16.16). 190 + * 191 + * When using drm_atomic_helper_check_plane_state(), 192 + * the coordinates are clipped, but the driver may choose 193 + * to use unclipped coordinates instead when the hardware 194 + * performs the clipping automatically. 195 + */ 196 + /** 197 + * @dst: 198 + * 199 + * clipped destination coordinates of the plane. 200 + * 201 + * When using drm_atomic_helper_check_plane_state(), 202 + * the coordinates are clipped, but the driver may choose 203 + * to use unclipped coordinates instead when the hardware 204 + * performs the clipping automatically. 205 + */ 189 206 struct drm_rect src, dst; 190 207 191 208 /**
+4 -6
include/drm/ttm/ttm_bo_api.h
··· 710 710 void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map); 711 711 712 712 /** 713 - * ttm_fbdev_mmap - mmap fbdev memory backed by a ttm buffer object. 713 + * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object. 714 714 * 715 715 * @vma: vma as input from the fbdev mmap method. 716 - * @bo: The bo backing the address space. The address space will 717 - * have the same size as the bo, and start at offset 0. 716 + * @bo: The bo backing the address space. 718 717 * 719 - * This function is intended to be called by the fbdev mmap method 720 - * if the fbdev address space is to be backed by a bo. 718 + * Maps a buffer object. 721 719 */ 722 - int ttm_fbdev_mmap(struct vm_area_struct *vma, struct ttm_buffer_object *bo); 720 + int ttm_bo_mmap_obj(struct vm_area_struct *vma, struct ttm_buffer_object *bo); 723 721 724 722 /** 725 723 * ttm_bo_mmap - mmap out of the ttm device address space.
+2 -1
include/uapi/drm/drm.h
··· 778 778 __u32 pad; 779 779 }; 780 780 781 + #define DRM_SYNCOBJ_QUERY_FLAGS_LAST_SUBMITTED (1 << 0) /* last available point on timeline syncobj */ 781 782 struct drm_syncobj_timeline_array { 782 783 __u64 handles; 783 784 __u64 points; 784 785 __u32 count_handles; 785 - __u32 pad; 786 + __u32 flags; 786 787 }; 787 788 788 789
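The new flag takes over the formerly padded field of struct drm_syncobj_timeline_array, so callers that zeroed the struct keep the old behaviour. A hedged userspace sketch of asking for the last submitted (rather than last signaled) timeline point; 'fd' (an open DRM node), 'handle' (an existing timeline syncobj) and libdrm's drmIoctl() are assumed:

	__u64 point = 0;
	struct drm_syncobj_timeline_array args = {
		.handles = (__u64)(uintptr_t)&handle,
		.points = (__u64)(uintptr_t)&point,
		.count_handles = 1,
		.flags = DRM_SYNCOBJ_QUERY_FLAGS_LAST_SUBMITTED,
	};

	if (drmIoctl(fd, DRM_IOCTL_SYNCOBJ_QUERY, &args) == 0)
		printf("last submitted point: %llu\n", (unsigned long long)point);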
+1 -1
include/uapi/drm/drm_fourcc.h
··· 69 69 #define fourcc_code(a, b, c, d) ((__u32)(a) | ((__u32)(b) << 8) | \ 70 70 ((__u32)(c) << 16) | ((__u32)(d) << 24)) 71 71 72 - #define DRM_FORMAT_BIG_ENDIAN (1<<31) /* format is big endian instead of little endian */ 72 + #define DRM_FORMAT_BIG_ENDIAN (1U<<31) /* format is big endian instead of little endian */ 73 73 74 74 /* Reserve 0 for the invalid format specifier */ 75 75 #define DRM_FORMAT_INVALID 0
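A short illustration (not from the patch) of why the literal is now explicitly unsigned: on common two's-complement targets, shifting 1 into bit 31 of a signed int produces a negative value that sign-extends when mixed with 64-bit or unsigned arithmetic, while 1U<<31 stays a well-defined 0x80000000:

	#include <inttypes.h>
	#include <stdio.h>

	int main(void)
	{
		/* Stand-ins for the old and new spellings of the flag. */
		uint64_t old_flag = 1 << 31;	/* int shift: negative, sign-extends */
		uint64_t new_flag = 1U << 31;	/* unsigned: stays 0x80000000 */

		printf("%" PRIx64 " vs %" PRIx64 "\n", old_flag, new_flag);
		return 0;
	}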
+9 -9
include/uapi/drm/omap_drm.h
··· 38 38 __u64 value; /* in (set_param), out (get_param) */ 39 39 }; 40 40 41 - #define OMAP_BO_SCANOUT 0x00000001 /* scanout capable (phys contiguous) */ 42 - #define OMAP_BO_CACHE_MASK 0x00000006 /* cache type mask, see cache modes */ 43 - #define OMAP_BO_TILED_MASK 0x00000f00 /* tiled mapping mask, see tiled modes */ 41 + /* Scanout buffer, consumable by DSS */ 42 + #define OMAP_BO_SCANOUT 0x00000001 44 43 45 - /* cache modes */ 46 - #define OMAP_BO_CACHED 0x00000000 /* default */ 47 - #define OMAP_BO_WC 0x00000002 /* write-combine */ 48 - #define OMAP_BO_UNCACHED 0x00000004 /* strongly-ordered (uncached) */ 44 + /* Buffer CPU caching mode: cached, write-combining or uncached. */ 45 + #define OMAP_BO_CACHED 0x00000000 46 + #define OMAP_BO_WC 0x00000002 47 + #define OMAP_BO_UNCACHED 0x00000004 48 + #define OMAP_BO_CACHE_MASK 0x00000006 49 49 50 - /* tiled modes */ 50 + /* Use TILER for the buffer. The TILER container unit can be 8, 16 or 32 bits. */ 51 51 #define OMAP_BO_TILED_8 0x00000100 52 52 #define OMAP_BO_TILED_16 0x00000200 53 53 #define OMAP_BO_TILED_32 0x00000300 54 - #define OMAP_BO_TILED (OMAP_BO_TILED_8 | OMAP_BO_TILED_16 | OMAP_BO_TILED_32) 54 + #define OMAP_BO_TILED_MASK 0x00000f00 55 55 56 56 union omap_gem_size { 57 57 __u32 bytes; /* (for non-tiled formats) */