Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'drm-next-2021-11-12' of git://anongit.freedesktop.org/drm/drm

Pull more drm updates from Dave Airlie:
"I missed a drm-misc-next pull for the main pull last week. It wasn't
that major and isn't the bulk of this at all. This has a bunch of
fixes all over, a lot for amdgpu and i915.

bridge:
- HPD improvements for lt9611uxc
- eDP aux-bus support for ps8640
- LVDS data-mapping selection support

ttm:
- remove huge page functionality (needs reworking)
- fix a race condition during BO eviction

panels:
- add some new panels

fbdev:
- fix double-free
- remove unused scrolling acceleration
- CONFIG_FB dep improvements

locking:
- improve contended locking logging
- naming collision fix

dma-buf:
- add dma_resv_for_each_fence iterator
- fix fence refcounting bug
- name locking fixes

prime:
- fix object references during mmap

nouveau:
- various code style changes
- refcount fix
- device removal fixes
- protect client list with a mutex
- fix CE0 address calculation

i915:
- DP rates related fixes
- Revert disabling dual eDP that was causing state readout problems
- put the cdclk vtables in const data
- Fix DVO port type for older platforms
- Fix blankscreen by turning DP++ TMDS output buffers on in encoder->shutdown
- CCS FBs related fixes
- Fix recursive lock in GuC submission
- Revert guc_id from i915_request tracepoint
- Build fix around dmabuf

amdgpu:
- GPU reset fix
- Aldebaran fix
- Yellow Carp fixes
- DCN2.1 DMCUB fix
- IOMMU regression fix for Picasso
- DSC display fixes
- BPC display calculation fixes
- Other misc display fixes
- Don't allow partial copy from user for DC debugfs
- SRIOV fixes
- GFX9 CSB pin count fix
- Various IP version check fixes
- DP 2.0 fixes
- Limit DCN1 MPO fix to DCN1

amdkfd:
- SVM fixes
- Fix gfx version for renoir
- Reset fixes

udl:
- timeout fix

imx:
- circular locking fix

virtio:
- NULL ptr deref fix"

* tag 'drm-next-2021-11-12' of git://anongit.freedesktop.org/drm/drm: (126 commits)
drm/ttm: Double check mem_type of BO while eviction
drm/amdgpu: add missed support for UVD IP_VERSION(3, 0, 64)
drm/amdgpu: drop jpeg IP initialization in SRIOV case
drm/amd/display: reject both non-zero src_x and src_y only for DCN1x
drm/amd/display: Add callbacks for DMUB HPD IRQ notifications
drm/amd/display: Don't lock connection_mutex for DMUB HPD
drm/amd/display: Add comment where CONFIG_DRM_AMD_DC_DCN macro ends
drm/amdkfd: Fix retry fault drain race conditions
drm/amdkfd: lower the VAs base offset to 8KB
drm/amd/display: fix exit from amdgpu_dm_atomic_check() abruptly
drm/amd/amdgpu: fix the kfd pre_reset sequence in sriov
drm/amdgpu: fix uvd crash on Polaris12 during driver unloading
drm/i915/adlp/fb: Prevent the mapping of redundant trailing padding NULL pages
drm/i915/fb: Fix rounding error in subsampled plane size calculation
drm/i915/hdmi: Turn DP++ TMDS output buffers back on in encoder->shutdown()
drm/locking: fix __stack_depot_* name conflict
drm/virtio: Fix NULL dereference error in virtio_gpu_poll
drm/amdgpu: fix SI handling in amdgpu_device_asic_has_dc_support()
drm/amdgpu: Fix dangling kfd_bo pointer for shared BOs
drm/amd/amdkfd: Don't sent command to HWS on kfd reset
...

+1770 -1501
+32 -1
Documentation/devicetree/bindings/display/bridge/lvds-codec.yaml
··· 49 49 50 50 properties: 51 51 port@0: 52 - $ref: /schemas/graph.yaml#/properties/port 52 + $ref: /schemas/graph.yaml#/$defs/port-base 53 53 description: | 54 54 For LVDS encoders, port 0 is the parallel input 55 55 For LVDS decoders, port 0 is the LVDS input 56 + 57 + properties: 58 + endpoint: 59 + $ref: /schemas/media/video-interfaces.yaml# 60 + unevaluatedProperties: false 61 + 62 + properties: 63 + data-mapping: 64 + enum: 65 + - jeida-18 66 + - jeida-24 67 + - vesa-24 68 + description: | 69 + The color signals mapping order. See details in 70 + Documentation/devicetree/bindings/display/panel/lvds.yaml 56 71 57 72 port@1: 58 73 $ref: /schemas/graph.yaml#/properties/port ··· 85 70 maxItems: 1 86 71 87 72 power-supply: true 73 + 74 + if: 75 + not: 76 + properties: 77 + compatible: 78 + contains: 79 + const: lvds-decoder 80 + then: 81 + properties: 82 + ports: 83 + properties: 84 + port@0: 85 + properties: 86 + endpoint: 87 + properties: 88 + data-mapping: false 88 89 89 90 required: 90 91 - compatible
+18 -1
Documentation/devicetree/bindings/display/bridge/ps8640.yaml
··· 40 40 vdd33-supply: 41 41 description: Regulator for 3.3V digital core power. 42 42 43 + aux-bus: 44 + $ref: /schemas/display/dp-aux-bus.yaml# 45 + 43 46 ports: 44 47 $ref: /schemas/graph.yaml#/properties/ports 45 48 ··· 101 98 reg = <1>; 102 99 ps8640_out: endpoint { 103 100 remote-endpoint = <&panel_in>; 104 - }; 101 + }; 102 + }; 103 + }; 104 + 105 + aux-bus { 106 + panel { 107 + compatible = "boe,nv133fhm-n62"; 108 + power-supply = <&pp3300_dx_edp>; 109 + backlight = <&backlight>; 110 + 111 + port { 112 + panel_in: endpoint { 113 + remote-endpoint = <&ps8640_out>; 114 + }; 115 + }; 105 116 }; 106 117 }; 107 118 };
+5
Documentation/devicetree/bindings/display/panel/panel-simple.yaml
··· 166 166 - innolux,at070tn92 167 167 # Innolux G070Y2-L01 7" WVGA (800x480) TFT LCD panel 168 168 - innolux,g070y2-l01 169 + # Innolux G070Y2-T02 7" WVGA (800x480) TFT LCD TTL panel 170 + - innolux,g070y2-t02 169 171 # Innolux Corporation 10.1" G101ICE-L01 WXGA (1280x800) LVDS panel 170 172 - innolux,g101ice-l01 171 173 # Innolux Corporation 12.1" WXGA (1280x800) TFT LCD panel ··· 311 309 - urt,umsh-8596md-11t 312 310 - urt,umsh-8596md-19t 313 311 - urt,umsh-8596md-20t 312 + # Vivax TPC-9150 tablet 9.0" WSVGA TFT LCD panel 313 + - vivax,tpc9150-panel 314 314 # VXT 800x480 color TFT LCD panel 315 315 - vxt,vl050-8048nt-c01 316 316 # Winstar Display Corporation 3.5" QVGA (320x240) TFT LCD panel ··· 321 317 - yes-optoelectronics,ytc700tlag-05-201c 322 318 323 319 backlight: true 320 + ddc-i2c-bus: true 324 321 enable-gpios: true 325 322 port: true 326 323 power-supply: true
+56
Documentation/devicetree/bindings/display/panel/sharp,ls060t1sx01.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/display/panel/sharp,ls060t1sx01.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Sharp Microelectronics 6.0" FullHD TFT LCD panel 8 + 9 + maintainers: 10 + - Dmitry Baryskov <dmitry.baryshkov@linaro.org> 11 + 12 + allOf: 13 + - $ref: panel-common.yaml# 14 + 15 + properties: 16 + compatible: 17 + const: sharp,ls060t1sx01 18 + 19 + reg: true 20 + backlight: true 21 + reset-gpios: true 22 + port: true 23 + 24 + avdd-supply: 25 + description: handle of the regulator that provides the positive supply voltage 26 + avee-supply: 27 + description: handle of the regulator that provides the negative supply voltage 28 + vddi-supply: 29 + description: handle of the regulator that provides the I/O supply voltage 30 + vddh-supply: 31 + description: handle of the regulator that provides the analog supply voltage 32 + 33 + required: 34 + - compatible 35 + - reg 36 + 37 + additionalProperties: false 38 + 39 + examples: 40 + - | 41 + #include <dt-bindings/gpio/gpio.h> 42 + 43 + dsi { 44 + #address-cells = <1>; 45 + #size-cells = <0>; 46 + 47 + panel@0 { 48 + compatible = "sharp,ls060t1sx01"; 49 + reg = <0>; 50 + avdd-supply = <&pm8941_l22>; 51 + backlight = <&backlight>; 52 + reset-gpios = <&pm8916_gpios 25 GPIO_ACTIVE_LOW>; 53 + }; 54 + }; 55 + 56 + ...
+2
Documentation/devicetree/bindings/vendor-prefixes.yaml
··· 1286 1286 description: Vitesse Semiconductor Corporation 1287 1287 "^vivante,.*": 1288 1288 description: Vivante Corporation 1289 + "^vivax,.*": 1290 + description: Vivax brand by M SAN Grupa d.o.o. 1289 1291 "^vocore,.*": 1290 1292 description: VoCore Studio 1291 1293 "^voipac,.*":
+8 -5
Documentation/gpu/todo.rst
··· 314 314 Garbage collect fbdev scrolling acceleration 315 315 -------------------------------------------- 316 316 317 - Scroll acceleration is disabled in fbcon by hard-wiring p->scrollmode = 318 - SCROLL_REDRAW. There's a ton of code this will allow us to remove: 317 + Scroll acceleration has been disabled in fbcon. Now it works as the old 318 + SCROLL_REDRAW mode. A ton of code was removed in fbcon.c and the hook bmove was 319 + removed from fbcon_ops. 320 + Remaining tasks: 319 321 320 - - lots of code in fbcon.c 321 - 322 - - a bunch of the hooks in fbcon_ops, maybe the remaining hooks could be called 322 + - a bunch of the hooks in fbcon_ops could be removed or simplified by calling 323 323 directly instead of the function table (with a switch on p->rotate) 324 324 325 325 - fb_copyarea is unused after this, and can be deleted from all drivers 326 + 327 + - after that, fb_copyarea can be deleted from fb_ops in include/linux/fb.h as 328 + well as cfb_copyarea 326 329 327 330 Note that not all acceleration code can be deleted, since clearing and cursor 328 331 support is still accelerated, which might be good candidates for further
+27 -54
drivers/dma-buf/dma-buf.c
··· 67 67 BUG_ON(dmabuf->vmapping_counter); 68 68 69 69 /* 70 - * Any fences that a dma-buf poll can wait on should be signaled 71 - * before releasing dma-buf. This is the responsibility of each 72 - * driver that uses the reservation objects. 73 - * 74 - * If you hit this BUG() it means someone dropped their ref to the 75 - * dma-buf while still having pending operation to the buffer. 70 + * If you hit this BUG() it could mean: 71 + * * There's a file reference imbalance in dma_buf_poll / dma_buf_poll_cb or somewhere else 72 + * * dmabuf->cb_in/out.active are non-0 despite no pending fence callback 76 73 */ 77 74 BUG_ON(dmabuf->cb_in.active || dmabuf->cb_out.active); 78 75 ··· 197 200 static void dma_buf_poll_cb(struct dma_fence *fence, struct dma_fence_cb *cb) 198 201 { 199 202 struct dma_buf_poll_cb_t *dcb = (struct dma_buf_poll_cb_t *)cb; 203 + struct dma_buf *dmabuf = container_of(dcb->poll, struct dma_buf, poll); 200 204 unsigned long flags; 201 205 202 206 spin_lock_irqsave(&dcb->poll->lock, flags); ··· 205 207 dcb->active = 0; 206 208 spin_unlock_irqrestore(&dcb->poll->lock, flags); 207 209 dma_fence_put(fence); 210 + /* Paired with get_file in dma_buf_poll */ 211 + fput(dmabuf->file); 208 212 } 209 213 210 - static bool dma_buf_poll_shared(struct dma_resv *resv, 214 + static bool dma_buf_poll_add_cb(struct dma_resv *resv, bool write, 211 215 struct dma_buf_poll_cb_t *dcb) 212 216 { 213 - struct dma_resv_list *fobj = dma_resv_shared_list(resv); 217 + struct dma_resv_iter cursor; 214 218 struct dma_fence *fence; 215 - int i, r; 219 + int r; 216 220 217 - if (!fobj) 218 - return false; 219 - 220 - for (i = 0; i < fobj->shared_count; ++i) { 221 - fence = rcu_dereference_protected(fobj->shared[i], 222 - dma_resv_held(resv)); 221 + dma_resv_for_each_fence(&cursor, resv, write, fence) { 223 222 dma_fence_get(fence); 224 223 r = dma_fence_add_callback(fence, &dcb->cb, dma_buf_poll_cb); 225 224 if (!r) 226 225 return true; 227 226 dma_fence_put(fence); 228 227 } 
229 - 230 - return false; 231 - } 232 - 233 - static bool dma_buf_poll_excl(struct dma_resv *resv, 234 - struct dma_buf_poll_cb_t *dcb) 235 - { 236 - struct dma_fence *fence = dma_resv_excl_fence(resv); 237 - int r; 238 - 239 - if (!fence) 240 - return false; 241 - 242 - dma_fence_get(fence); 243 - r = dma_fence_add_callback(fence, &dcb->cb, dma_buf_poll_cb); 244 - if (!r) 245 - return true; 246 - dma_fence_put(fence); 247 228 248 229 return false; 249 230 } ··· 259 282 spin_unlock_irq(&dmabuf->poll.lock); 260 283 261 284 if (events & EPOLLOUT) { 262 - if (!dma_buf_poll_shared(resv, dcb) && 263 - !dma_buf_poll_excl(resv, dcb)) 285 + /* Paired with fput in dma_buf_poll_cb */ 286 + get_file(dmabuf->file); 287 + 288 + if (!dma_buf_poll_add_cb(resv, true, dcb)) 264 289 /* No callback queued, wake up any other waiters */ 265 290 dma_buf_poll_cb(NULL, &dcb->cb); 266 291 else ··· 282 303 spin_unlock_irq(&dmabuf->poll.lock); 283 304 284 305 if (events & EPOLLIN) { 285 - if (!dma_buf_poll_excl(resv, dcb)) 306 + /* Paired with fput in dma_buf_poll_cb */ 307 + get_file(dmabuf->file); 308 + 309 + if (!dma_buf_poll_add_cb(resv, false, dcb)) 286 310 /* No callback queued, wake up any other waiters */ 287 311 dma_buf_poll_cb(NULL, &dcb->cb); 288 312 else ··· 1338 1356 { 1339 1357 struct dma_buf *buf_obj; 1340 1358 struct dma_buf_attachment *attach_obj; 1341 - struct dma_resv *robj; 1342 - struct dma_resv_list *fobj; 1359 + struct dma_resv_iter cursor; 1343 1360 struct dma_fence *fence; 1344 - int count = 0, attach_count, shared_count, i; 1361 + int count = 0, attach_count; 1345 1362 size_t size = 0; 1346 1363 int ret; 1347 1364 ··· 1359 1378 if (ret) 1360 1379 goto error_unlock; 1361 1380 1381 + 1382 + spin_lock(&buf_obj->name_lock); 1362 1383 seq_printf(s, "%08zu\t%08x\t%08x\t%08ld\t%s\t%08lu\t%s\n", 1363 1384 buf_obj->size, 1364 1385 buf_obj->file->f_flags, buf_obj->file->f_mode, ··· 1368 1385 buf_obj->exp_name, 1369 1386 file_inode(buf_obj->file)->i_ino, 1370 1387 
buf_obj->name ?: ""); 1388 + spin_unlock(&buf_obj->name_lock); 1371 1389 1372 - robj = buf_obj->resv; 1373 - fence = dma_resv_excl_fence(robj); 1374 - if (fence) 1375 - seq_printf(s, "\tExclusive fence: %s %s %ssignalled\n", 1376 - fence->ops->get_driver_name(fence), 1377 - fence->ops->get_timeline_name(fence), 1378 - dma_fence_is_signaled(fence) ? "" : "un"); 1379 - 1380 - fobj = rcu_dereference_protected(robj->fence, 1381 - dma_resv_held(robj)); 1382 - shared_count = fobj ? fobj->shared_count : 0; 1383 - for (i = 0; i < shared_count; i++) { 1384 - fence = rcu_dereference_protected(fobj->shared[i], 1385 - dma_resv_held(robj)); 1386 - seq_printf(s, "\tShared fence: %s %s %ssignalled\n", 1390 + dma_resv_for_each_fence(&cursor, buf_obj->resv, true, fence) { 1391 + seq_printf(s, "\t%s fence: %s %s %ssignalled\n", 1392 + dma_resv_iter_is_exclusive(&cursor) ? 1393 + "Exclusive" : "Shared", 1387 1394 fence->ops->get_driver_name(fence), 1388 1395 fence->ops->get_timeline_name(fence), 1389 1396 dma_fence_is_signaled(fence) ? "" : "un");
+61 -8
drivers/dma-buf/dma-resv.c
··· 333 333 { 334 334 cursor->seq = read_seqcount_begin(&cursor->obj->seq); 335 335 cursor->index = -1; 336 - if (cursor->all_fences) 336 + cursor->shared_count = 0; 337 + if (cursor->all_fences) { 337 338 cursor->fences = dma_resv_shared_list(cursor->obj); 338 - else 339 + if (cursor->fences) 340 + cursor->shared_count = cursor->fences->shared_count; 341 + } else { 339 342 cursor->fences = NULL; 343 + } 340 344 cursor->is_restarted = true; 341 345 } 342 346 ··· 367 363 continue; 368 364 369 365 } else if (!cursor->fences || 370 - cursor->index >= cursor->fences->shared_count) { 366 + cursor->index >= cursor->shared_count) { 371 367 cursor->fence = NULL; 372 368 break; 373 369 ··· 428 424 EXPORT_SYMBOL(dma_resv_iter_next_unlocked); 429 425 430 426 /** 427 + * dma_resv_iter_first - first fence from a locked dma_resv object 428 + * @cursor: cursor to record the current position 429 + * 430 + * Return the first fence in the dma_resv object while holding the 431 + * &dma_resv.lock. 432 + */ 433 + struct dma_fence *dma_resv_iter_first(struct dma_resv_iter *cursor) 434 + { 435 + struct dma_fence *fence; 436 + 437 + dma_resv_assert_held(cursor->obj); 438 + 439 + cursor->index = 0; 440 + if (cursor->all_fences) 441 + cursor->fences = dma_resv_shared_list(cursor->obj); 442 + else 443 + cursor->fences = NULL; 444 + 445 + fence = dma_resv_excl_fence(cursor->obj); 446 + if (!fence) 447 + fence = dma_resv_iter_next(cursor); 448 + 449 + cursor->is_restarted = true; 450 + return fence; 451 + } 452 + EXPORT_SYMBOL_GPL(dma_resv_iter_first); 453 + 454 + /** 455 + * dma_resv_iter_next - next fence from a locked dma_resv object 456 + * @cursor: cursor to record the current position 457 + * 458 + * Return the next fences from the dma_resv object while holding the 459 + * &dma_resv.lock. 
460 + */ 461 + struct dma_fence *dma_resv_iter_next(struct dma_resv_iter *cursor) 462 + { 463 + unsigned int idx; 464 + 465 + dma_resv_assert_held(cursor->obj); 466 + 467 + cursor->is_restarted = false; 468 + if (!cursor->fences || cursor->index >= cursor->fences->shared_count) 469 + return NULL; 470 + 471 + idx = cursor->index++; 472 + return rcu_dereference_protected(cursor->fences->shared[idx], 473 + dma_resv_held(cursor->obj)); 474 + } 475 + EXPORT_SYMBOL_GPL(dma_resv_iter_next); 476 + 477 + /** 431 478 * dma_resv_copy_fences - Copy all fences from src to dst. 432 479 * @dst: the destination reservation object 433 480 * @src: the source reservation object ··· 503 448 dma_resv_list_free(list); 504 449 dma_fence_put(excl); 505 450 506 - if (cursor.fences) { 507 - unsigned int cnt = cursor.fences->shared_count; 508 - 509 - list = dma_resv_list_alloc(cnt); 451 + if (cursor.shared_count) { 452 + list = dma_resv_list_alloc(cursor.shared_count); 510 453 if (!list) { 511 454 dma_resv_iter_end(&cursor); 512 455 return -ENOMEM; ··· 575 522 if (fence_excl) 576 523 dma_fence_put(*fence_excl); 577 524 578 - count = cursor.fences ? cursor.fences->shared_count : 0; 525 + count = cursor.shared_count; 579 526 count += fence_excl ? 0 : 1; 580 527 581 528 /* Eventually re-allocate the array */
+17 -3
drivers/gpu/drm/Kconfig
··· 100 100 This has the potential to use a lot of memory and print some very 101 101 large kernel messages. If in doubt, say "N". 102 102 103 + config DRM_DEBUG_MODESET_LOCK 104 + bool "Enable backtrace history for lock contention" 105 + depends on STACKTRACE_SUPPORT 106 + depends on DEBUG_KERNEL 107 + depends on EXPERT 108 + select STACKDEPOT 109 + default y if DEBUG_WW_MUTEX_SLOWPATH 110 + help 111 + Enable debug tracing of failures to gracefully handle drm modeset lock 112 + contention. A history of each drm modeset lock path hitting -EDEADLK 113 + will be saved until gracefully handled, and the backtrace will be 114 + printed when attempting to lock a contended lock. 115 + 116 + If in doubt, say "N". 117 + 103 118 config DRM_FBDEV_EMULATION 104 119 bool "Enable legacy fbdev support for your modesetting driver" 105 - depends on DRM 106 - depends on FB=y || FB=DRM 107 - select DRM_KMS_HELPER 120 + depends on DRM_KMS_HELPER 121 + depends on FB=y || FB=DRM_KMS_HELPER 108 122 select FB_CFB_FILLRECT 109 123 select FB_CFB_COPYAREA 110 124 select FB_CFB_IMAGEBLIT
+2 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
··· 297 297 void amdgpu_amdkfd_gpuvm_init_mem_limits(void); 298 298 void amdgpu_amdkfd_gpuvm_destroy_cb(struct amdgpu_device *adev, 299 299 struct amdgpu_vm *vm); 300 - void amdgpu_amdkfd_unreserve_memory_limit(struct amdgpu_bo *bo); 300 + void amdgpu_amdkfd_release_notify(struct amdgpu_bo *bo); 301 301 void amdgpu_amdkfd_reserve_system_mem(uint64_t size); 302 302 #else 303 303 static inline ··· 312 312 } 313 313 314 314 static inline 315 - void amdgpu_amdkfd_unreserve_memory_limit(struct amdgpu_bo *bo) 315 + void amdgpu_amdkfd_release_notify(struct amdgpu_bo *bo) 316 316 { 317 317 } 318 318 #endif
+19 -6
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
··· 207 207 spin_unlock(&kfd_mem_limit.mem_limit_lock); 208 208 } 209 209 210 - void amdgpu_amdkfd_unreserve_memory_limit(struct amdgpu_bo *bo) 210 + void amdgpu_amdkfd_release_notify(struct amdgpu_bo *bo) 211 211 { 212 212 struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev); 213 213 u32 domain = bo->preferred_domains; ··· 219 219 } 220 220 221 221 unreserve_mem_limit(adev, amdgpu_bo_size(bo), domain, sg); 222 + 223 + kfree(bo->kfd_bo); 222 224 } 223 225 224 226 ··· 736 734 } 737 735 738 736 /* Add BO to VM internal data structures */ 737 + ret = amdgpu_bo_reserve(bo[i], false); 738 + if (ret) { 739 + pr_debug("Unable to reserve BO during memory attach"); 740 + goto unwind; 741 + } 739 742 attachment[i]->bo_va = amdgpu_vm_bo_add(adev, vm, bo[i]); 743 + amdgpu_bo_unreserve(bo[i]); 740 744 if (unlikely(!attachment[i]->bo_va)) { 741 745 ret = -ENOMEM; 742 746 pr_err("Failed to add BO object to VM. ret == %d\n", 743 747 ret); 744 748 goto unwind; 745 749 } 746 - 747 750 attachment[i]->va = va; 748 751 attachment[i]->pte_flags = get_pte_flags(adev, mem); 749 752 attachment[i]->adev = adev; ··· 764 757 if (!attachment[i]) 765 758 continue; 766 759 if (attachment[i]->bo_va) { 760 + amdgpu_bo_reserve(bo[i], true); 767 761 amdgpu_vm_bo_rmv(adev, attachment[i]->bo_va); 762 + amdgpu_bo_unreserve(bo[i]); 768 763 list_del(&attachment[i]->list); 769 764 } 770 765 if (bo[i]) ··· 1577 1568 pr_debug("Release VA 0x%llx - 0x%llx\n", mem->va, 1578 1569 mem->va + bo_size * (1 + mem->aql_queue)); 1579 1570 1580 - ret = unreserve_bo_and_vms(&ctx, false, false); 1581 - 1582 1571 /* Remove from VM internal data structures */ 1583 1572 list_for_each_entry_safe(entry, tmp, &mem->attachments, list) 1584 1573 kfd_mem_detach(entry); 1574 + 1575 + ret = unreserve_bo_and_vms(&ctx, false, false); 1585 1576 1586 1577 /* Free the sync object */ 1587 1578 amdgpu_sync_free(&mem->sync); ··· 1609 1600 drm_vma_node_revoke(&mem->bo->tbo.base.vma_node, drm_priv); 1610 1601 if (mem->dmabuf) 1611 1602 
dma_buf_put(mem->dmabuf); 1612 - drm_gem_object_put(&mem->bo->tbo.base); 1613 1603 mutex_destroy(&mem->lock); 1614 - kfree(mem); 1604 + 1605 + /* If this releases the last reference, it will end up calling 1606 + * amdgpu_amdkfd_release_notify and kfree the mem struct. That's why 1607 + * this needs to be the last call here. 1608 + */ 1609 + drm_gem_object_put(&mem->bo->tbo.base); 1615 1610 1616 1611 return ret; 1617 1612 }
+16 -9
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
··· 2398 2398 if (!adev->gmc.xgmi.pending_reset) 2399 2399 amdgpu_amdkfd_device_init(adev); 2400 2400 2401 - r = amdgpu_amdkfd_resume_iommu(adev); 2402 - if (r) 2403 - goto init_failed; 2404 - 2405 2401 amdgpu_fru_get_product_info(adev); 2406 2402 2407 2403 init_failed: ··· 3167 3171 { 3168 3172 switch (asic_type) { 3169 3173 #if defined(CONFIG_DRM_AMD_DC) 3170 - #if defined(CONFIG_DRM_AMD_DC_SI) 3171 3174 case CHIP_TAHITI: 3172 3175 case CHIP_PITCAIRN: 3173 3176 case CHIP_VERDE: 3174 3177 case CHIP_OLAND: 3178 + /* 3179 + * We have systems in the wild with these ASICs that require 3180 + * LVDS and VGA support which is not supported with DC. 3181 + * 3182 + * Fallback to the non-DC driver here by default so as not to 3183 + * cause regressions. 3184 + */ 3185 + #if defined(CONFIG_DRM_AMD_DC_SI) 3186 + return amdgpu_dc > 0; 3187 + #else 3188 + return false; 3175 3189 #endif 3176 3190 case CHIP_BONAIRE: 3177 3191 case CHIP_KAVERI: ··· 4293 4287 if (r) 4294 4288 return r; 4295 4289 4296 - amdgpu_amdkfd_pre_reset(adev); 4297 - 4298 4290 /* Resume IP prior to SMC */ 4299 4291 r = amdgpu_device_ip_reinit_early_sriov(adev); 4300 4292 if (r) ··· 4854 4850 4855 4851 /* clear job's guilty and depend the folowing step to decide the real one */ 4856 4852 drm_sched_reset_karma(s_job); 4853 + /* for the real bad job, it will be resubmitted twice, adding a dma_fence_get 4854 + * to make sure fence is balanced */ 4855 + dma_fence_get(s_job->s_fence->parent); 4857 4856 drm_sched_resubmit_jobs_ext(&ring->sched, 1); 4858 4857 4859 4858 ret = dma_fence_wait_timeout(s_job->s_fence->parent, false, ring->sched.timeout); ··· 4892 4885 4893 4886 /* got the hw fence, signal finished fence */ 4894 4887 atomic_dec(ring->sched.score); 4888 + dma_fence_put(s_job->s_fence->parent); 4895 4889 dma_fence_get(&s_job->s_fence->finished); 4896 4890 dma_fence_signal(&s_job->s_fence->finished); 4897 4891 dma_fence_put(&s_job->s_fence->finished); ··· 5028 5020 5029 5021 
cancel_delayed_work_sync(&tmp_adev->delayed_init_work); 5030 5022 5031 - if (!amdgpu_sriov_vf(tmp_adev)) 5032 - amdgpu_amdkfd_pre_reset(tmp_adev); 5023 + amdgpu_amdkfd_pre_reset(tmp_adev); 5033 5024 5034 5025 /* 5035 5026 * Mark these ASICs to be reseted as untracked first
+3 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
··· 867 867 case IP_VERSION(2, 0, 2): 868 868 case IP_VERSION(2, 2, 0): 869 869 amdgpu_device_ip_block_add(adev, &vcn_v2_0_ip_block); 870 - amdgpu_device_ip_block_add(adev, &jpeg_v2_0_ip_block); 870 + if (!amdgpu_sriov_vf(adev)) 871 + amdgpu_device_ip_block_add(adev, &jpeg_v2_0_ip_block); 871 872 break; 872 873 case IP_VERSION(2, 0, 3): 873 874 break; ··· 882 881 break; 883 882 case IP_VERSION(3, 0, 0): 884 883 case IP_VERSION(3, 0, 16): 884 + case IP_VERSION(3, 0, 64): 885 885 case IP_VERSION(3, 1, 1): 886 886 case IP_VERSION(3, 0, 2): 887 887 amdgpu_device_ip_block_add(adev, &vcn_v3_0_ip_block);
+4 -3
drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
··· 60 60 goto unlock; 61 61 } 62 62 63 - ret = ttm_bo_vm_fault_reserved(vmf, vmf->vma->vm_page_prot, 64 - TTM_BO_VM_NUM_PREFAULT, 1); 65 - drm_dev_exit(idx); 63 + ret = ttm_bo_vm_fault_reserved(vmf, vmf->vma->vm_page_prot, 64 + TTM_BO_VM_NUM_PREFAULT); 65 + 66 + drm_dev_exit(idx); 66 67 } else { 67 68 ret = ttm_bo_vm_dummy_page(vmf, vmf->vma->vm_page_prot); 68 69 }
+7 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
··· 1423 1423 struct drm_amdgpu_info_firmware fw_info; 1424 1424 struct drm_amdgpu_query_fw query_fw; 1425 1425 struct atom_context *ctx = adev->mode_info.atom_context; 1426 + uint8_t smu_minor, smu_debug; 1427 + uint16_t smu_major; 1426 1428 int ret, i; 1427 1429 1428 1430 static const char *ta_fw_name[TA_FW_TYPE_MAX_INDEX] = { ··· 1570 1568 ret = amdgpu_firmware_info(&fw_info, &query_fw, adev); 1571 1569 if (ret) 1572 1570 return ret; 1573 - seq_printf(m, "SMC feature version: %u, firmware version: 0x%08x\n", 1574 - fw_info.feature, fw_info.ver); 1571 + smu_major = (fw_info.ver >> 16) & 0xffff; 1572 + smu_minor = (fw_info.ver >> 8) & 0xff; 1573 + smu_debug = (fw_info.ver >> 0) & 0xff; 1574 + seq_printf(m, "SMC feature version: %u, firmware version: 0x%08x (%d.%d.%d)\n", 1575 + fw_info.feature, fw_info.ver, smu_major, smu_minor, smu_debug); 1575 1576 1576 1577 /* SDMA */ 1577 1578 query_fw.fw_type = AMDGPU_INFO_FW_SDMA;
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
··· 1274 1274 abo = ttm_to_amdgpu_bo(bo); 1275 1275 1276 1276 if (abo->kfd_bo) 1277 - amdgpu_amdkfd_unreserve_memory_limit(abo); 1277 + amdgpu_amdkfd_release_notify(abo); 1278 1278 1279 1279 /* We only remove the fence if the resv has individualized. */ 1280 1280 WARN_ON_ONCE(bo->type == ttm_bo_type_kernel
+1
drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
··· 134 134 adev->vcn.indirect_sram = true; 135 135 break; 136 136 case IP_VERSION(3, 0, 0): 137 + case IP_VERSION(3, 0, 64): 137 138 if (adev->ip_versions[GC_HWIP][0] == IP_VERSION(10, 3, 0)) 138 139 fw_name = FIRMWARE_SIENNA_CICHLID; 139 140 else
+2 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.c
··· 806 806 for (i = 0; i < ARRAY_SIZE(xgmi23_pcs_err_status_reg_aldebaran); i++) 807 807 pcs_clear_status(adev, 808 808 xgmi23_pcs_err_status_reg_aldebaran[i]); 809 - for (i = 0; i < ARRAY_SIZE(xgmi23_pcs_err_status_reg_aldebaran); i++) 809 + for (i = 0; i < ARRAY_SIZE(xgmi3x16_pcs_err_status_reg_aldebaran); i++) 810 810 pcs_clear_status(adev, 811 - xgmi23_pcs_err_status_reg_aldebaran[i]); 811 + xgmi3x16_pcs_err_status_reg_aldebaran[i]); 812 812 for (i = 0; i < ARRAY_SIZE(walf_pcs_err_status_reg_aldebaran); i++) 813 813 pcs_clear_status(adev, 814 814 walf_pcs_err_status_reg_aldebaran[i]);
+6 -4
drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
··· 8249 8249 static void gfx_v10_0_update_spm_vmid(struct amdgpu_device *adev, unsigned vmid) 8250 8250 { 8251 8251 u32 reg, data; 8252 + 8253 + amdgpu_gfx_off_ctrl(adev, false); 8254 + 8252 8255 /* not for *_SOC15 */ 8253 8256 reg = SOC15_REG_OFFSET(GC, 0, mmRLC_SPM_MC_CNTL); 8254 8257 if (amdgpu_sriov_is_pp_one_vf(adev)) ··· 8266 8263 WREG32_SOC15_NO_KIQ(GC, 0, mmRLC_SPM_MC_CNTL, data); 8267 8264 else 8268 8265 WREG32_SOC15(GC, 0, mmRLC_SPM_MC_CNTL, data); 8266 + 8267 + amdgpu_gfx_off_ctrl(adev, true); 8269 8268 } 8270 8269 8271 8270 static bool gfx_v10_0_check_rlcg_range(struct amdgpu_device *adev, ··· 8321 8316 if (enable && (adev->pg_flags & AMD_PG_SUPPORT_GFX_PG)) { 8322 8317 switch (adev->ip_versions[GC_HWIP][0]) { 8323 8318 case IP_VERSION(10, 3, 1): 8324 - data = 0x4E20 & RLC_PG_DELAY_3__CGCG_ACTIVE_BEFORE_CGPG_MASK_Vangogh; 8325 - WREG32_SOC15(GC, 0, mmRLC_PG_DELAY_3, data); 8326 - break; 8327 8319 case IP_VERSION(10, 3, 3): 8328 - data = 0x1388 & RLC_PG_DELAY_3__CGCG_ACTIVE_BEFORE_CGPG_MASK_Vangogh; 8320 + data = 0x4E20 & RLC_PG_DELAY_3__CGCG_ACTIVE_BEFORE_CGPG_MASK_Vangogh; 8329 8321 WREG32_SOC15(GC, 0, mmRLC_PG_DELAY_3, data); 8330 8322 break; 8331 8323 default:
+4
drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c
··· 3575 3575 { 3576 3576 u32 data; 3577 3577 3578 + amdgpu_gfx_off_ctrl(adev, false); 3579 + 3578 3580 data = RREG32(mmRLC_SPM_VMID); 3579 3581 3580 3582 data &= ~RLC_SPM_VMID__RLC_SPM_VMID_MASK; 3581 3583 data |= (vmid & RLC_SPM_VMID__RLC_SPM_VMID_MASK) << RLC_SPM_VMID__RLC_SPM_VMID__SHIFT; 3582 3584 3583 3585 WREG32(mmRLC_SPM_VMID, data); 3586 + 3587 + amdgpu_gfx_off_ctrl(adev, true); 3584 3588 } 3585 3589 3586 3590 static void gfx_v7_0_enable_cgcg(struct amdgpu_device *adev, bool enable)
+4
drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
··· 5624 5624 { 5625 5625 u32 data; 5626 5626 5627 + amdgpu_gfx_off_ctrl(adev, false); 5628 + 5627 5629 if (amdgpu_sriov_is_pp_one_vf(adev)) 5628 5630 data = RREG32_NO_KIQ(mmRLC_SPM_VMID); 5629 5631 else ··· 5638 5636 WREG32_NO_KIQ(mmRLC_SPM_VMID, data); 5639 5637 else 5640 5638 WREG32(mmRLC_SPM_VMID, data); 5639 + 5640 + amdgpu_gfx_off_ctrl(adev, true); 5641 5641 } 5642 5642 5643 5643 static const struct amdgpu_rlc_funcs iceland_rlc_funcs = {
+7 -1
drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
··· 2462 2462 amdgpu_gfx_kiq_fini(adev); 2463 2463 2464 2464 gfx_v9_0_mec_fini(adev); 2465 - amdgpu_bo_unref(&adev->gfx.rlc.clear_state_obj); 2465 + amdgpu_bo_free_kernel(&adev->gfx.rlc.clear_state_obj, 2466 + &adev->gfx.rlc.clear_state_gpu_addr, 2467 + (void **)&adev->gfx.rlc.cs_ptr); 2466 2468 if (adev->flags & AMD_IS_APU) { 2467 2469 amdgpu_bo_free_kernel(&adev->gfx.rlc.cp_table_obj, 2468 2470 &adev->gfx.rlc.cp_table_gpu_addr, ··· 5104 5102 { 5105 5103 u32 reg, data; 5106 5104 5105 + amdgpu_gfx_off_ctrl(adev, false); 5106 + 5107 5107 reg = SOC15_REG_OFFSET(GC, 0, mmRLC_SPM_MC_CNTL); 5108 5108 if (amdgpu_sriov_is_pp_one_vf(adev)) 5109 5109 data = RREG32_NO_KIQ(reg); ··· 5119 5115 WREG32_SOC15_NO_KIQ(GC, 0, mmRLC_SPM_MC_CNTL, data); 5120 5116 else 5121 5117 WREG32_SOC15(GC, 0, mmRLC_SPM_MC_CNTL, data); 5118 + 5119 + amdgpu_gfx_off_ctrl(adev, true); 5122 5120 } 5123 5121 5124 5122 static bool gfx_v9_0_check_rlcg_range(struct amdgpu_device *adev,
+4
drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
··· 348 348 WREG32_SOC15_OFFSET(GC, 0, mmVM_CONTEXT0_CNTL, 349 349 i * hub->ctx_distance, 0); 350 350 351 + if (amdgpu_sriov_vf(adev)) 352 + /* Avoid write to GMC registers */ 353 + return; 354 + 351 355 /* Setup TLB control */ 352 356 tmp = RREG32_SOC15(GC, 0, mmMC_VM_MX_L1_TLB_CNTL); 353 357 tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, ENABLE_L1_TLB, 0);
+13 -5
drivers/gpu/drm/amd/amdgpu/gfxhub_v1_1.c
··· 54 54 seg_size = REG_GET_FIELD( 55 55 RREG32_SOC15(GC, 0, mmMC_VM_XGMI_LFB_SIZE_ALDE), 56 56 MC_VM_XGMI_LFB_SIZE, PF_LFB_SIZE) << 24; 57 + max_region = 58 + REG_GET_FIELD(xgmi_lfb_cntl, MC_VM_XGMI_LFB_CNTL_ALDE, PF_MAX_REGION); 57 59 } else { 58 60 xgmi_lfb_cntl = RREG32_SOC15(GC, 0, mmMC_VM_XGMI_LFB_CNTL); 59 61 seg_size = REG_GET_FIELD( 60 62 RREG32_SOC15(GC, 0, mmMC_VM_XGMI_LFB_SIZE), 61 63 MC_VM_XGMI_LFB_SIZE, PF_LFB_SIZE) << 24; 64 + max_region = 65 + REG_GET_FIELD(xgmi_lfb_cntl, MC_VM_XGMI_LFB_CNTL, PF_MAX_REGION); 62 66 } 63 67 64 - max_region = 65 - REG_GET_FIELD(xgmi_lfb_cntl, MC_VM_XGMI_LFB_CNTL, PF_MAX_REGION); 66 68 67 69 68 70 switch (adev->asic_type) { ··· 91 89 if (adev->gmc.xgmi.num_physical_nodes > max_num_physical_nodes) 92 90 return -EINVAL; 93 91 94 - adev->gmc.xgmi.physical_node_id = 95 - REG_GET_FIELD(xgmi_lfb_cntl, MC_VM_XGMI_LFB_CNTL, 96 - PF_LFB_REGION); 92 + if (adev->asic_type == CHIP_ALDEBARAN) { 93 + adev->gmc.xgmi.physical_node_id = 94 + REG_GET_FIELD(xgmi_lfb_cntl, MC_VM_XGMI_LFB_CNTL_ALDE, 95 + PF_LFB_REGION); 96 + } else { 97 + adev->gmc.xgmi.physical_node_id = 98 + REG_GET_FIELD(xgmi_lfb_cntl, MC_VM_XGMI_LFB_CNTL, 99 + PF_LFB_REGION); 100 + } 97 101 98 102 if (adev->gmc.xgmi.physical_node_id > max_physical_node_id) 99 103 return -EINVAL;
drivers/gpu/drm/amd/amdgpu/nv.c (+1)
@@ -182 +182 @@
 {
 	switch (adev->ip_versions[UVD_HWIP][0]) {
 	case IP_VERSION(3, 0, 0):
+	case IP_VERSION(3, 0, 64):
 		if (amdgpu_sriov_vf(adev)) {
 			if (encode)
 				*codecs = &sriov_sc_video_codecs_encode;
drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c (+13 -11)
@@ -534 +534 @@
 {
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;

+	cancel_delayed_work_sync(&adev->uvd.idle_work);
+
+	if (RREG32(mmUVD_STATUS) != 0)
+		uvd_v6_0_stop(adev);
+
+	return 0;
+}
+
+static int uvd_v6_0_suspend(void *handle)
+{
+	int r;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
 	/*
 	 * Proper cleanups before halting the HW engine:
 	 * - cancel the delayed idle work
@@ -570 +557 @@
 		amdgpu_device_ip_set_clockgating_state(adev, AMD_IP_BLOCK_TYPE_UVD,
 						       AMD_CG_STATE_GATE);
 	}
-
-	if (RREG32(mmUVD_STATUS) != 0)
-		uvd_v6_0_stop(adev);
-
-	return 0;
-}
-
-static int uvd_v6_0_suspend(void *handle)
-{
-	int r;
-	struct amdgpu_device *adev = (struct amdgpu_device *)handle;

 	r = uvd_v6_0_hw_fini(adev);
 	if (r)
drivers/gpu/drm/amd/amdkfd/kfd_device.c (+1 -1)
@@ -406 +406 @@
 static const struct kfd_device_info renoir_device_info = {
 	.asic_family = CHIP_RENOIR,
 	.asic_name = "renoir",
-	.gfx_target_version = 90002,
+	.gfx_target_version = 90012,
 	.max_pasid_bits = 16,
 	.max_no_of_hqd = 24,
 	.doorbell_size = 8,
drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c (+1 -1)
@@ -1430 +1430 @@

 	if (!dqm->sched_running)
 		return 0;
-	if (dqm->is_hws_hang)
+	if (dqm->is_hws_hang || dqm->is_resetting)
 		return -EIO;
 	if (!dqm->active_runlist)
 		return retval;
drivers/gpu/drm/amd/amdkfd/kfd_flat_memory.c (+1 -1)
@@ -308 +308 @@
  * 16MB are reserved for kernel use (CWSR trap handler and kernel IB
  * for now).
  */
-#define SVM_USER_BASE 0x1000000ull
+#define SVM_USER_BASE (u64)(KFD_CWSR_TBA_TMA_SIZE + 2*PAGE_SIZE)
 #define SVM_CWSR_BASE (SVM_USER_BASE - KFD_CWSR_TBA_TMA_SIZE)
 #define SVM_IB_BASE (SVM_CWSR_BASE - PAGE_SIZE)
drivers/gpu/drm/amd/amdkfd/kfd_migrate.c (+38 -12)
@@ -281 +281 @@
 	return cpages;
 }

+static unsigned long svm_migrate_unsuccessful_pages(struct migrate_vma *migrate)
+{
+	unsigned long upages = 0;
+	unsigned long i;
+
+	for (i = 0; i < migrate->npages; i++) {
+		if (migrate->src[i] & MIGRATE_PFN_VALID &&
+		    !(migrate->src[i] & MIGRATE_PFN_MIGRATE))
+			upages++;
+	}
+	return upages;
+}
+
 static int
 svm_migrate_copy_to_vram(struct amdgpu_device *adev, struct svm_range *prange,
 			 struct migrate_vma *migrate, struct dma_fence **mfence,
@@ -645 +632 @@
 			struct vm_area_struct *vma, uint64_t start, uint64_t end)
 {
 	uint64_t npages = (end - start) >> PAGE_SHIFT;
+	unsigned long upages = npages;
+	unsigned long cpages = 0;
 	struct kfd_process_device *pdd;
 	struct dma_fence *mfence = NULL;
 	struct migrate_vma migrate;
-	unsigned long cpages = 0;
 	dma_addr_t *scratch;
 	size_t size;
 	void *buf;
@@ -683 +669 @@
 	if (!cpages) {
 		pr_debug("failed collect migrate device pages [0x%lx 0x%lx]\n",
 			 prange->start, prange->last);
+		upages = svm_migrate_unsuccessful_pages(&migrate);
 		goto out_free;
 	}
 	if (cpages != npages)
@@ -696 +681 @@
 			    scratch, npages);
 	migrate_vma_pages(&migrate);

-	pr_debug("successful/cpages/npages 0x%lx/0x%lx/0x%lx\n",
-		 svm_migrate_successful_pages(&migrate), cpages, migrate.npages);
+	upages = svm_migrate_unsuccessful_pages(&migrate);
+	pr_debug("unsuccessful/cpages/npages 0x%lx/0x%lx/0x%lx\n",
+		 upages, cpages, migrate.npages);

 	svm_migrate_copy_done(adev, mfence);
 	migrate_vma_finalize(&migrate);
@@ -712 +696 @@
 		if (pdd)
 			WRITE_ONCE(pdd->page_out, pdd->page_out + cpages);

-		return cpages;
+		return upages;
 	}
-	return r;
+	return r ? r : upages;
 }

 /**
@@ -734 +718 @@
 	unsigned long addr;
 	unsigned long start;
 	unsigned long end;
-	unsigned long cpages = 0;
+	unsigned long upages = 0;
 	long r = 0;

 	if (!prange->actual_loc) {
@@ -770 +754 @@
 			pr_debug("failed %ld to migrate\n", r);
 			break;
 		} else {
-			cpages += r;
+			upages += r;
 		}
 		addr = next;
 	}

-	if (cpages) {
+	if (!upages) {
 		svm_range_vram_node_free(prange);
 		prange->actual_loc = 0;
 	}
@@ -798 +782 @@
 svm_migrate_vram_to_vram(struct svm_range *prange, uint32_t best_loc,
 			 struct mm_struct *mm)
 {
-	int r;
+	int r, retries = 3;

 	/*
 	 * TODO: for both devices with PCIe large bar or on same xgmi hive, skip
@@ -807 +791 @@

 	pr_debug("from gpu 0x%x to gpu 0x%x\n", prange->actual_loc, best_loc);

-	r = svm_migrate_vram_to_ram(prange, mm);
-	if (r)
-		return r;
+	do {
+		r = svm_migrate_vram_to_ram(prange, mm);
+		if (r)
+			return r;
+	} while (prange->actual_loc && --retries);
+
+	if (prange->actual_loc)
+		return -EDEADLK;

 	return svm_migrate_ram_to_vram(prange, best_loc, mm);
 }
@@ -858 +837 @@
 	if (!p) {
 		pr_debug("failed find process at fault address 0x%lx\n", addr);
 		return VM_FAULT_SIGBUS;
+	}
+	if (READ_ONCE(p->svms.faulting_task) == current) {
+		pr_debug("skipping ram migration\n");
+		kfd_unref_process(p);
+		return 0;
 	}
 	addr >>= PAGE_SHIFT;
 	pr_debug("CPU page fault svms 0x%p address 0x%lx\n", &p->svms, addr);
drivers/gpu/drm/amd/amdkfd/kfd_priv.h (+2)
@@ -766 +766 @@
 	struct list_head deferred_range_list;
 	spinlock_t deferred_list_lock;
 	atomic_t evicted_ranges;
+	bool drain_pagefaults;
 	struct delayed_work restore_work;
 	DECLARE_BITMAP(bitmap_supported, MAX_GPU_INSTANCE);
+	struct task_struct *faulting_task;
 };

 /* Process data */
drivers/gpu/drm/amd/amdkfd/kfd_process.c (+5 -1)
@@ -1715 +1715 @@

 	r = pdd->dev->dqm->ops.evict_process_queues(pdd->dev->dqm,
 						    &pdd->qpd);
-	if (r) {
+	/* evict return -EIO if HWS is hang or asic is resetting, in this case
+	 * we would like to set all the queues to be in evicted state to prevent
+	 * them been add back since they actually not be saved right now.
+	 */
+	if (r && r != -EIO) {
 		pr_err("Failed to evict process queues\n");
 		goto fail;
 	}
drivers/gpu/drm/amd/amdkfd/kfd_svm.c (+56 -14)
@@ -1496 +1496 @@

 		next = min(vma->vm_end, end);
 		npages = (next - addr) >> PAGE_SHIFT;
+		WRITE_ONCE(p->svms.faulting_task, current);
 		r = amdgpu_hmm_range_get_pages(&prange->notifier, mm, NULL,
 					       addr, npages, &hmm_range,
 					       readonly, true, owner);
+		WRITE_ONCE(p->svms.faulting_task, NULL);
 		if (r) {
 			pr_debug("failed %d to get svm range pages\n", r);
 			goto unreserve_out;
@@ -2002 +2000 @@
 	pr_debug("prange 0x%p [0x%lx 0x%lx] op %d\n", prange,
 		 prange->start, prange->last, prange->work_item.op);

-	/* Make sure no stale retry fault coming after range is freed */
-	if (prange->work_item.op == SVM_OP_UNMAP_RANGE)
-		svm_range_drain_retry_fault(prange->svms);
-
 	mm = prange->work_item.mm;
+retry:
 	mmap_write_lock(mm);
 	mutex_lock(&svms->lock);

-	/* Remove from deferred_list must be inside mmap write lock,
+	/* Checking for the need to drain retry faults must be in
+	 * mmap write lock to serialize with munmap notifiers.
+	 *
+	 * Remove from deferred_list must be inside mmap write lock,
 	 * otherwise, svm_range_list_lock_and_flush_work may hold mmap
 	 * write lock, and continue because deferred_list is empty, then
 	 * deferred_list handle is blocked by mmap write lock.
 	 */
 	spin_lock(&svms->deferred_list_lock);
+	if (unlikely(svms->drain_pagefaults)) {
+		svms->drain_pagefaults = false;
+		spin_unlock(&svms->deferred_list_lock);
+		mutex_unlock(&svms->lock);
+		mmap_write_unlock(mm);
+		svm_range_drain_retry_fault(svms);
+		goto retry;
+	}
 	list_del_init(&prange->deferred_list);
 	spin_unlock(&svms->deferred_list_lock);
@@ -2056 +2046 @@
 		    struct mm_struct *mm, enum svm_work_list_ops op)
 {
 	spin_lock(&svms->deferred_list_lock);
+	/* Make sure pending page faults are drained in the deferred worker
+	 * before the range is freed to avoid straggler interrupts on
+	 * unmapped memory causing "phantom faults".
+	 */
+	if (op == SVM_OP_UNMAP_RANGE)
+		svms->drain_pagefaults = true;
 	/* if prange is on the deferred list */
 	if (!list_empty(&prange->deferred_list)) {
 		pr_debug("update exist prange 0x%p work op %d\n", prange, op);
@@ -2277 +2261 @@
  * migration if actual loc is not best location, then update GPU page table
  * mapping to the best location.
  *
- * If vm fault gpu is range preferred loc, the best_loc is preferred loc.
+ * If the preferred loc is accessible by faulting GPU, use preferred loc.
  * If vm fault gpu idx is on range ACCESSIBLE bitmap, best_loc is vm fault gpu
  * If vm fault gpu idx is on range ACCESSIBLE_IN_PLACE bitmap, then
  *    if range actual loc is cpu, best_loc is cpu
@@ -2294 +2278 @@
 				  struct amdgpu_device *adev,
 				  int32_t *gpuidx)
 {
-	struct amdgpu_device *bo_adev;
+	struct amdgpu_device *bo_adev, *preferred_adev;
 	struct kfd_process *p;
 	uint32_t gpuid;
 	int r;
@@ -2307 +2291 @@
 		return -1;
 	}

-	if (prange->preferred_loc == gpuid)
+	if (prange->preferred_loc == gpuid ||
+	    prange->preferred_loc == KFD_IOCTL_SVM_LOCATION_SYSMEM) {
 		return prange->preferred_loc;
+	} else if (prange->preferred_loc != KFD_IOCTL_SVM_LOCATION_UNDEFINED) {
+		preferred_adev = svm_range_get_adev_by_id(prange,
+					prange->preferred_loc);
+		if (amdgpu_xgmi_same_hive(adev, preferred_adev))
+			return prange->preferred_loc;
+		/* fall through */
+	}

 	if (test_bit(*gpuidx, prange->bitmap_access))
 		return gpuid;
@@ -2337 +2313 @@

 static int
 svm_range_get_range_boundaries(struct kfd_process *p, int64_t addr,
-			       unsigned long *start, unsigned long *last)
+			       unsigned long *start, unsigned long *last,
+			       bool *is_heap_stack)
 {
 	struct vm_area_struct *vma;
 	struct interval_tree_node *node;
@@ -2349 +2324 @@
 		pr_debug("VMA does not exist in address [0x%llx]\n", addr);
 		return -EFAULT;
 	}
+
+	*is_heap_stack = (vma->vm_start <= vma->vm_mm->brk &&
+			  vma->vm_end >= vma->vm_mm->start_brk) ||
+			 (vma->vm_start <= vma->vm_mm->start_stack &&
+			  vma->vm_end >= vma->vm_mm->start_stack);
+
 	start_limit = max(vma->vm_start >> PAGE_SHIFT,
 			  (unsigned long)ALIGN_DOWN(addr, 2UL << 8));
 	end_limit = min(vma->vm_end >> PAGE_SHIFT,
@@ -2384 +2353 @@
 	*start = start_limit;
 	*last = end_limit - 1;

-	pr_debug("vma start: 0x%lx start: 0x%lx vma end: 0x%lx last: 0x%lx\n",
-		 vma->vm_start >> PAGE_SHIFT, *start,
-		 vma->vm_end >> PAGE_SHIFT, *last);
+	pr_debug("vma [0x%lx 0x%lx] range [0x%lx 0x%lx] is_heap_stack %d\n",
+		 vma->vm_start >> PAGE_SHIFT, vma->vm_end >> PAGE_SHIFT,
+		 *start, *last, *is_heap_stack);

 	return 0;
 }
@@ -2451 +2420 @@
 	struct svm_range *prange = NULL;
 	unsigned long start, last;
 	uint32_t gpuid, gpuidx;
+	bool is_heap_stack;
 	uint64_t bo_s = 0;
 	uint64_t bo_l = 0;
 	int r;

-	if (svm_range_get_range_boundaries(p, addr, &start, &last))
+	if (svm_range_get_range_boundaries(p, addr, &start, &last,
+					   &is_heap_stack))
 		return NULL;

 	r = svm_range_check_vm(p, start, last, &bo_s, &bo_l);
@@ -2483 +2450 @@
 		svm_range_free(prange);
 		return NULL;
 	}
+
+	if (is_heap_stack)
+		prange->preferred_loc = KFD_IOCTL_SVM_LOCATION_SYSMEM;

 	svm_range_add_to_svms(prange);
 	svm_range_add_notifier_locked(mm, prange);
@@ -3112 +3076 @@
 		struct svm_range *prange =
 				list_first_entry(&svm_bo->range_list,
 						struct svm_range, svm_bo_list);
+		int retries = 3;
+
 		list_del_init(&prange->svm_bo_list);
 		spin_unlock(&svm_bo->list_lock);
@@ -3121 +3083 @@
 			 prange->start, prange->last);

 		mutex_lock(&prange->migrate_mutex);
-		svm_migrate_vram_to_ram(prange, svm_bo->eviction_fence->mm);
+		do {
+			svm_migrate_vram_to_ram(prange,
+						svm_bo->eviction_fence->mm);
+		} while (prange->actual_loc && --retries);
+		WARN(prange->actual_loc, "Migration failed during eviction");

 		mutex_lock(&prange->lock);
 		prange->svm_bo = NULL;
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c (+59 -33)
@@ -217 +217 @@
 amd_get_format_info(const struct drm_mode_fb_cmd2 *cmd);

 static void handle_hpd_irq_helper(struct amdgpu_dm_connector *aconnector);
+static void handle_hpd_rx_irq(void *param);

 static bool
 is_timing_unchanged_for_freesync(struct drm_crtc_state *old_crtc_state,
@@ -620 +619 @@

 	amdgpu_dm_crtc_handle_crc_window_irq(&acrtc->base);
 }
-#endif
+#endif /* CONFIG_DRM_AMD_SECURE_DISPLAY */

 /**
  * dmub_aux_setconfig_reply_callback - Callback for AUX or SET_CONFIG command.
@@ -670 +669 @@
 		return;
 	}

-	drm_modeset_lock(&dev->mode_config.connection_mutex, NULL);
-
 	link_index = notify->link_index;
-
 	link = adev->dm.dc->links[link_index];

 	drm_connector_list_iter_begin(dev, &iter);
@@ -683 +685 @@
 		}
 	}
 	drm_connector_list_iter_end(&iter);
-	drm_modeset_unlock(&dev->mode_config.connection_mutex);

-	if (hpd_aconnector)
-		handle_hpd_irq_helper(hpd_aconnector);
+	if (hpd_aconnector) {
+		if (notify->type == DMUB_NOTIFICATION_HPD)
+			handle_hpd_irq_helper(hpd_aconnector);
+		else if (notify->type == DMUB_NOTIFICATION_HPD_IRQ)
+			handle_hpd_rx_irq(hpd_aconnector);
+	}
 }

 /**
@@ -765 +764 @@
 			DRM_ERROR("DM: notify type %d invalid!", notify.type);
 			continue;
 		}
+		if (!dm->dmub_callback[notify.type]) {
+			DRM_DEBUG_DRIVER("DMUB notification skipped, no handler: type=%d\n", notify.type);
+			continue;
+		}
 		if (dm->dmub_thread_offload[notify.type] == true) {
 			dmub_hpd_wrk = kzalloc(sizeof(*dmub_hpd_wrk), GFP_ATOMIC);
 			if (!dmub_hpd_wrk) {
@@ -818 +813 @@
 	if (count > DMUB_TRACE_MAX_READ)
 		DRM_DEBUG_DRIVER("Warning : count > DMUB_TRACE_MAX_READ");
 }
-#endif
+#endif /* CONFIG_DRM_AMD_DC_DCN */

 static int dm_set_clockgating_state(void *handle,
 		  enum amd_clockgating_state state)
@@ -1415 +1410 @@
 	switch (adev->ip_versions[DCE_HWIP][0]) {
 	case IP_VERSION(2, 1, 0):
 		init_data.flags.gpu_vm_support = true;
-		init_data.flags.disable_dmcu = true;
+		switch (adev->dm.dmcub_fw_version) {
+		case 0: /* development */
+		case 0x1: /* linux-firmware.git hash 6d9f399 */
+		case 0x01000000: /* linux-firmware.git hash 9a0b0f4 */
+			init_data.flags.disable_dmcu = false;
+			break;
+		default:
+			init_data.flags.disable_dmcu = true;
+		}
 		break;
 	case IP_VERSION(1, 0, 0):
 	case IP_VERSION(1, 0, 1):
@@ -1569 +1556 @@
 			DRM_ERROR("amdgpu: fail to register dmub hpd callback");
 			goto error;
 		}
-#endif
+		if (!register_dmub_notify_callback(adev, DMUB_NOTIFICATION_HPD_IRQ, dmub_hpd_callback, true)) {
+			DRM_ERROR("amdgpu: fail to register dmub hpd callback");
+			goto error;
+		}
+#endif /* CONFIG_DRM_AMD_DC_DCN */
 	}

 	if (amdgpu_dm_initialize_drm_device(adev)) {
@@ -4582 +4565 @@
 }


-static int fill_dc_scaling_info(const struct drm_plane_state *state,
+static int fill_dc_scaling_info(struct amdgpu_device *adev,
+				const struct drm_plane_state *state,
 				struct dc_scaling_info *scaling_info)
 {
 	int scale_w, scale_h, min_downscale, max_upscale;
@@ -4597 +4579 @@
 	/*
 	 * For reasons we don't (yet) fully understand a non-zero
 	 * src_y coordinate into an NV12 buffer can cause a
-	 * system hang. To avoid hangs (and maybe be overly cautious)
+	 * system hang on DCN1x.
+	 * To avoid hangs (and maybe be overly cautious)
 	 * let's reject both non-zero src_x and src_y.
 	 *
 	 * We currently know of only one use-case to reproduce a
@@ -4606 +4587 @@
 	 * is to gesture the YouTube Android app into full screen
 	 * on ChromeOS.
 	 */
-	if (state->fb &&
-	    state->fb->format->format == DRM_FORMAT_NV12 &&
-	    (scaling_info->src_rect.x != 0 ||
-	     scaling_info->src_rect.y != 0))
+	if (((adev->ip_versions[DCE_HWIP][0] == IP_VERSION(1, 0, 0)) ||
+	     (adev->ip_versions[DCE_HWIP][0] == IP_VERSION(1, 0, 1))) &&
+	    (state->fb && state->fb->format->format == DRM_FORMAT_NV12 &&
+	     (scaling_info->src_rect.x != 0 || scaling_info->src_rect.y != 0)))
 		return -EINVAL;

 	scaling_info->src_rect.width = state->src_w >> 16;
@@ -5515 +5496 @@
 	int ret;
 	bool force_disable_dcc = false;

-	ret = fill_dc_scaling_info(plane_state, &scaling_info);
+	ret = fill_dc_scaling_info(adev, plane_state, &scaling_info);
 	if (ret)
 		return ret;
@@ -6089 +6070 @@
 	if (stream->timing.flags.DSC && aconnector->dsc_settings.dsc_bits_per_pixel)
 		stream->timing.dsc_cfg.bits_per_pixel = aconnector->dsc_settings.dsc_bits_per_pixel;
 }
-#endif
+#endif /* CONFIG_DRM_AMD_DC_DCN */

 /**
  * DOC: FreeSync Video
@@ -7260 +7241 @@
 	struct drm_connector_state *new_con_state;
 	struct amdgpu_dm_connector *aconnector;
 	struct dm_connector_state *dm_conn_state;
-	int i, j, clock;
-	int vcpi, pbn_div, pbn = 0;
+	int i, j;
+	int vcpi, pbn_div, pbn, slot_num = 0;

 	for_each_new_connector_in_state(state, connector, new_con_state, i) {
@@ -7289 +7270 @@
 		if (!stream)
 			continue;

-		if (stream->timing.flags.DSC != 1) {
-			drm_dp_mst_atomic_enable_dsc(state,
-						     aconnector->port,
-						     dm_conn_state->pbn,
-						     0,
-						     false);
-			continue;
-		}
-
 		pbn_div = dm_mst_get_pbn_divider(stream->link);
-		clock = stream->timing.pix_clk_100hz / 10;
 		/* pbn is calculated by compute_mst_dsc_configs_for_state*/
 		for (j = 0; j < dc_state->stream_count; j++) {
 			if (vars[j].aconnector == aconnector) {
 				pbn = vars[j].pbn;
 				break;
 			}
+		}
+
+		if (j == dc_state->stream_count)
+			continue;
+
+		slot_num = DIV_ROUND_UP(pbn, pbn_div);
+
+		if (stream->timing.flags.DSC != 1) {
+			dm_conn_state->pbn = pbn;
+			dm_conn_state->vcpi_slots = slot_num;
+
+			drm_dp_mst_atomic_enable_dsc(state,
+						     aconnector->port,
+						     dm_conn_state->pbn,
+						     0,
+						     false);
+			continue;
 		}

 		vcpi = drm_dp_mst_atomic_enable_dsc(state,
@@ -7578 +7552 @@
 	if (ret)
 		return ret;

-	ret = fill_dc_scaling_info(new_plane_state, &scaling_info);
+	ret = fill_dc_scaling_info(adev, new_plane_state, &scaling_info);
 	if (ret)
 		return ret;
@@ -9026 +9000 @@
 			bundle->surface_updates[planes_count].gamut_remap_matrix = &dc_plane->gamut_remap_matrix;
 		}

-		fill_dc_scaling_info(new_plane_state,
+		fill_dc_scaling_info(dm->adev, new_plane_state,
 				     &bundle->scaling_infos[planes_count]);

 		bundle->surface_updates[planes_count].scaling_info =
@@ -10813 +10787 @@

 		ret = drm_atomic_add_affected_connectors(state, crtc);
 		if (ret)
-			return ret;
+			goto fail;

 		ret = drm_atomic_add_affected_planes(state, crtc);
 		if (ret)
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c (+4 -6)
@@ -78 +78 @@

 	wr_buf_ptr = wr_buf;

-	r = copy_from_user(wr_buf_ptr, buf, wr_buf_size);
-
-	/* r is bytes not be copied */
-	if (r >= wr_buf_size) {
-		DRM_DEBUG_DRIVER("user data not be read\n");
-		return -EINVAL;
+	/* r is bytes not be copied */
+	if (copy_from_user(wr_buf_ptr, buf, wr_buf_size)) {
+		DRM_DEBUG_DRIVER("user data could not be read successfully\n");
+		return -EFAULT;
 	}

 	/* check number of parameters. isspace could not differ space and \n */
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c (+119 -31)
@@ -534 +534 @@

 static void set_dsc_configs_from_fairness_vars(struct dsc_mst_fairness_params *params,
 					       struct dsc_mst_fairness_vars *vars,
-					       int count)
+					       int count,
+					       int k)
 {
 	int i;

 	for (i = 0; i < count; i++) {
 		memset(&params[i].timing->dsc_cfg, 0, sizeof(params[i].timing->dsc_cfg));
-		if (vars[i].dsc_enabled && dc_dsc_compute_config(
+		if (vars[i + k].dsc_enabled && dc_dsc_compute_config(
 					params[i].sink->ctx->dc->res_pool->dscs[0],
 					&params[i].sink->dsc_caps.dsc_dec_caps,
 					params[i].sink->ctx->dc->debug.dsc_min_slice_height_override,
@@ -554 +553 @@
 			if (params[i].bpp_overwrite)
 				params[i].timing->dsc_cfg.bits_per_pixel = params[i].bpp_overwrite;
 			else
-				params[i].timing->dsc_cfg.bits_per_pixel = vars[i].bpp_x16;
+				params[i].timing->dsc_cfg.bits_per_pixel = vars[i + k].bpp_x16;

 			if (params[i].num_slices_h)
 				params[i].timing->dsc_cfg.num_slices_h = params[i].num_slices_h;
@@ -587 +586 @@
 			     struct dc_link *dc_link,
 			     struct dsc_mst_fairness_params *params,
 			     struct dsc_mst_fairness_vars *vars,
-			     int count)
+			     int count,
+			     int k)
 {
 	int i;
 	bool bpp_increased[MAX_PIPES];
@@ -603 +601 @@
 	pbn_per_timeslot = dm_mst_get_pbn_divider(dc_link);

 	for (i = 0; i < count; i++) {
-		if (vars[i].dsc_enabled) {
-			initial_slack[i] = kbps_to_peak_pbn(params[i].bw_range.max_kbps) - vars[i].pbn;
+		if (vars[i + k].dsc_enabled) {
+			initial_slack[i] =
+				kbps_to_peak_pbn(params[i].bw_range.max_kbps) - vars[i + k].pbn;
 			bpp_increased[i] = false;
 			remaining_to_increase += 1;
 		} else {
@@ -632 +629 @@
 		link_timeslots_used = 0;

 		for (i = 0; i < count; i++)
-			link_timeslots_used += DIV_ROUND_UP(vars[i].pbn, pbn_per_timeslot);
+			link_timeslots_used += DIV_ROUND_UP(vars[i + k].pbn, pbn_per_timeslot);

 		fair_pbn_alloc = (63 - link_timeslots_used) / remaining_to_increase * pbn_per_timeslot;

@@ -685 +682 @@
 			    struct dc_link *dc_link,
 			    struct dsc_mst_fairness_params *params,
 			    struct dsc_mst_fairness_vars *vars,
-			    int count)
+			    int count,
+			    int k)
 {
 	int i;
 	bool tried[MAX_PIPES];
@@ -696 +692 @@
 	int remaining_to_try = 0;

 	for (i = 0; i < count; i++) {
-		if (vars[i].dsc_enabled
-				&& vars[i].bpp_x16 == params[i].bw_range.max_target_bpp_x16
+		if (vars[i + k].dsc_enabled
+				&& vars[i + k].bpp_x16 == params[i].bw_range.max_target_bpp_x16
 				&& params[i].clock_force_enable == DSC_CLK_FORCE_DEFAULT) {
 			kbps_increase[i] = params[i].bw_range.stream_kbps - params[i].bw_range.max_kbps;
 			tried[i] = false;
@@ -752 +748 @@
 static bool compute_mst_dsc_configs_for_link(struct drm_atomic_state *state,
 					     struct dc_state *dc_state,
 					     struct dc_link *dc_link,
-					     struct dsc_mst_fairness_vars *vars)
+					     struct dsc_mst_fairness_vars *vars,
+					     int *link_vars_start_index)
 {
-	int i;
+	int i, k;
 	struct dc_stream_state *stream;
 	struct dsc_mst_fairness_params params[MAX_PIPES];
 	struct amdgpu_dm_connector *aconnector;
@@ -773 +768 @@
 		if (stream->link != dc_link)
 			continue;

+		aconnector = (struct amdgpu_dm_connector *)stream->dm_stream_context;
+		if (!aconnector)
+			continue;
+
+		if (!aconnector->port)
+			continue;
+
 		stream->timing.flags.DSC = 0;

 		params[count].timing = &stream->timing;
 		params[count].sink = stream->sink;
-		aconnector = (struct amdgpu_dm_connector *)stream->dm_stream_context;
 		params[count].aconnector = aconnector;
 		params[count].port = aconnector->port;
 		params[count].clock_force_enable = aconnector->dsc_settings.dsc_force_enable;
@@ -805 +794 @@

 		count++;
 	}
+
+	if (count == 0) {
+		ASSERT(0);
+		return true;
+	}
+
+	/* k is start index of vars for current phy link used by mst hub */
+	k = *link_vars_start_index;
+	/* set vars start index for next mst hub phy link */
+	*link_vars_start_index += count;
+
 	/* Try no compression */
 	for (i = 0; i < count; i++) {
-		vars[i].aconnector = params[i].aconnector;
-		vars[i].pbn = kbps_to_peak_pbn(params[i].bw_range.stream_kbps);
-		vars[i].dsc_enabled = false;
-		vars[i].bpp_x16 = 0;
+		vars[i + k].aconnector = params[i].aconnector;
+		vars[i + k].pbn = kbps_to_peak_pbn(params[i].bw_range.stream_kbps);
+		vars[i + k].dsc_enabled = false;
+		vars[i + k].bpp_x16 = 0;
 		if (drm_dp_atomic_find_vcpi_slots(state,
 						  params[i].port->mgr,
 						  params[i].port,
-						  vars[i].pbn,
+						  vars[i + k].pbn,
 						  dm_mst_get_pbn_divider(dc_link)) < 0)
 			return false;
 	}
 	if (!drm_dp_mst_atomic_check(state) && !debugfs_overwrite) {
-		set_dsc_configs_from_fairness_vars(params, vars, count);
+		set_dsc_configs_from_fairness_vars(params, vars, count, k);
 		return true;
 	}

 	/* Try max compression */
 	for (i = 0; i < count; i++) {
 		if (params[i].compression_possible && params[i].clock_force_enable != DSC_CLK_FORCE_DISABLE) {
-			vars[i].pbn = kbps_to_peak_pbn(params[i].bw_range.min_kbps);
-			vars[i].dsc_enabled = true;
-			vars[i].bpp_x16 = params[i].bw_range.min_target_bpp_x16;
+			vars[i + k].pbn = kbps_to_peak_pbn(params[i].bw_range.min_kbps);
+			vars[i + k].dsc_enabled = true;
+			vars[i + k].bpp_x16 = params[i].bw_range.min_target_bpp_x16;
 			if (drm_dp_atomic_find_vcpi_slots(state,
 							  params[i].port->mgr,
 							  params[i].port,
-							  vars[i].pbn,
+							  vars[i + k].pbn,
 							  dm_mst_get_pbn_divider(dc_link)) < 0)
 				return false;
 		} else {
-			vars[i].pbn = kbps_to_peak_pbn(params[i].bw_range.stream_kbps);
-			vars[i].dsc_enabled = false;
-			vars[i].bpp_x16 = 0;
+			vars[i + k].pbn = kbps_to_peak_pbn(params[i].bw_range.stream_kbps);
+			vars[i + k].dsc_enabled = false;
+			vars[i + k].bpp_x16 = 0;
 			if (drm_dp_atomic_find_vcpi_slots(state,
 							  params[i].port->mgr,
 							  params[i].port,
-							  vars[i].pbn,
+							  vars[i + k].pbn,
 							  dm_mst_get_pbn_divider(dc_link)) < 0)
 				return false;
 		}
@@ -862 +840 @@
 		return false;

 	/* Optimize degree of compression */
-	increase_dsc_bpp(state, dc_link, params, vars, count);
+	increase_dsc_bpp(state, dc_link, params, vars, count, k);

-	try_disable_dsc(state, dc_link, params, vars, count);
+	try_disable_dsc(state, dc_link, params, vars, count, k);

-	set_dsc_configs_from_fairness_vars(params, vars, count);
+	set_dsc_configs_from_fairness_vars(params, vars, count, k);

 	return true;
+}
+
+static bool is_dsc_need_re_compute(
+	struct drm_atomic_state *state,
+	struct dc_state *dc_state,
+	struct dc_link *dc_link)
+{
+	int i;
+	bool is_dsc_need_re_compute = false;
+
+	/* only check phy used by mst branch */
+	if (dc_link->type != dc_connection_mst_branch)
+		return false;
+
+	/* check if there is mode change in new request */
+	for (i = 0; i < dc_state->stream_count; i++) {
+		struct amdgpu_dm_connector *aconnector;
+		struct dc_stream_state *stream;
+		struct drm_crtc_state *new_crtc_state;
+		struct drm_connector_state *new_conn_state;
+
+		stream = dc_state->streams[i];
+
+		if (!stream)
+			continue;
+
+		/* check if stream using the same link for mst */
+		if (stream->link != dc_link)
+			continue;
+
+		aconnector = (struct amdgpu_dm_connector *) stream->dm_stream_context;
+		if (!aconnector)
+			continue;
+
+		new_conn_state = drm_atomic_get_new_connector_state(state, &aconnector->base);
+
+		if (!new_conn_state)
+			continue;
+
+		if (IS_ERR(new_conn_state))
+			continue;
+
+		if (!new_conn_state->crtc)
+			continue;
+
+		new_crtc_state = drm_atomic_get_new_crtc_state(state, new_conn_state->crtc);
+
+		if (!new_crtc_state)
+			continue;
+
+		if (IS_ERR(new_crtc_state))
+			continue;
+
+		if (new_crtc_state->enable && new_crtc_state->active) {
+			if (new_crtc_state->mode_changed || new_crtc_state->active_changed ||
+			    new_crtc_state->connectors_changed)
+				is_dsc_need_re_compute = true;
+		}
+	}
+
+	return is_dsc_need_re_compute;
 }

 bool compute_mst_dsc_configs_for_state(struct drm_atomic_state *state,
@@ -940 +857 @@
 	struct dc_stream_state *stream;
 	bool computed_streams[MAX_PIPES];
 	struct amdgpu_dm_connector *aconnector;
+	int link_vars_start_index = 0;

 	for (i = 0; i < dc_state->stream_count; i++)
 		computed_streams[i] = false;
@@ -965 +881 @@
 		if (dcn20_remove_stream_from_ctx(stream->ctx->dc, dc_state, stream) != DC_OK)
 			return false;

+		if (!is_dsc_need_re_compute(state, dc_state, stream->link))
+			continue;
+
 		mutex_lock(&aconnector->mst_mgr.lock);
-		if (!compute_mst_dsc_configs_for_link(state, dc_state, stream->link, vars)) {
+		if (!compute_mst_dsc_configs_for_link(state, dc_state, stream->link,
+						      vars, &link_vars_start_index)) {
 			mutex_unlock(&aconnector->mst_mgr.lock);
 			return false;
 		}
drivers/gpu/drm/amd/display/dc/core/dc.c (+11 -3)
@@ -1085 +1085 @@
 		struct dc_stream_state *old_stream =
 				dc->current_state->res_ctx.pipe_ctx[i].stream;
 		bool should_disable = true;
+		bool pipe_split_change =
+			context->res_ctx.pipe_ctx[i].top_pipe != dc->current_state->res_ctx.pipe_ctx[i].top_pipe;

 		for (j = 0; j < context->stream_count; j++) {
 			if (old_stream == context->streams[j]) {
@@ -1094 +1092 @@
 				break;
 			}
 		}
+		if (!should_disable && pipe_split_change)
+			should_disable = true;
+
 		if (should_disable && old_stream) {
 			dc_rem_all_planes_for_stream(dc, old_stream, dangling_context);
 			disable_all_writeback_pipes_for_stream(dc, old_stream, dangling_context);
@@ -1892 +1887 @@
 	return false;
 }

+#ifdef CONFIG_DRM_AMD_DC_DCN
 /* Perform updates here which need to be deferred until next vupdate
  *
  * i.e. blnd lut, 3dlut, and shaper lut bypass regs are double buffered
@@ -1902 +1896 @@
  */
 static void process_deferred_updates(struct dc *dc)
 {
-#ifdef CONFIG_DRM_AMD_DC_DCN
 	int i = 0;

 	if (dc->debug.enable_mem_low_power.bits.cm) {
@@ -1910 +1905 @@
 			if (dc->res_pool->dpps[i]->funcs->dpp_deferred_update)
 				dc->res_pool->dpps[i]->funcs->dpp_deferred_update(dc->res_pool->dpps[i]);
 	}
-#endif
 }
+#endif /* CONFIG_DRM_AMD_DC_DCN */

 void dc_post_update_surfaces_to_stream(struct dc *dc)
 {
@@ -1938 +1933 @@
 			dc->hwss.disable_plane(dc, &context->res_ctx.pipe_ctx[i]);
 	}

+#ifdef CONFIG_DRM_AMD_DC_DCN
 	process_deferred_updates(dc);
+#endif

 	dc->hwss.optimize_bandwidth(dc, context);

@@ -3610 +3603 @@
 #if defined(CONFIG_DRM_AMD_DC_DCN)
 	/* YELLOW_CARP B0 USB4 DPIA needs dmub notifications for interrupts */
 	if (dc->ctx->asic_id.chip_family == FAMILY_YELLOW_CARP &&
-	    dc->ctx->asic_id.hw_internal_rev == YELLOW_CARP_B0)
+	    dc->ctx->asic_id.hw_internal_rev == YELLOW_CARP_B0 &&
+	    !dc->debug.dpia_debug.bits.disable_dpia)
 		return true;
 #endif
 	/* dmub aux needs dmub notifications to be enabled */
+3 -1
drivers/gpu/drm/amd/display/dc/core/dc_link.c
··· 4279 4279 */ 4280 4280 if (status != DC_FAIL_DP_LINK_TRAINING || 4281 4281 pipe_ctx->stream->signal == SIGNAL_TYPE_DISPLAY_PORT_MST) { 4282 + if (false == stream->link->link_status.link_active) 4283 + disable_link(stream->link, pipe_ctx->stream->signal); 4282 4284 BREAK_TO_DEBUGGER(); 4283 4285 return; 4284 4286 } ··· 4770 4768 timing->dsc_cfg.bits_per_pixel, 4771 4769 timing->dsc_cfg.num_slices_h, 4772 4770 timing->dsc_cfg.is_dp); 4773 - #endif 4771 + #endif /* CONFIG_DRM_AMD_DC_DCN */ 4774 4772 4775 4773 switch (timing->display_color_depth) { 4776 4774 case COLOR_DEPTH_666:
+2 -1
drivers/gpu/drm/amd/display/dc/dc.h
··· 47 47 struct set_config_cmd_payload; 48 48 struct dmub_notification; 49 49 50 - #define DC_VER "3.2.159" 50 + #define DC_VER "3.2.160" 51 51 52 52 #define MAX_SURFACES 3 53 53 #define MAX_PLANES 6 ··· 675 675 #endif 676 676 union mem_low_power_enable_options enable_mem_low_power; 677 677 union root_clock_optimization_options root_clock_optimization; 678 + bool hpo_optimization; 678 679 bool force_vblank_alignment; 679 680 680 681 /* Enable dmub aux for legacy ddc */
+3
drivers/gpu/drm/amd/display/dc/dc_dp_types.h
··· 898 898 #ifndef DP_DFP_CAPABILITY_EXTENSION_SUPPORT 899 899 #define DP_DFP_CAPABILITY_EXTENSION_SUPPORT 0x0A3 900 900 #endif 901 + #ifndef DP_LINK_SQUARE_PATTERN 902 + #define DP_LINK_SQUARE_PATTERN 0x10F 903 + #endif 901 904 #ifndef DP_DSC_CONFIGURATION 902 905 #define DP_DSC_CONFIGURATION 0x161 903 906 #endif
+3 -1
drivers/gpu/drm/amd/display/dc/dce/dce_hwseq.h
··· 671 671 uint32_t MC_VM_FB_LOCATION_BASE; 672 672 uint32_t MC_VM_FB_LOCATION_TOP; 673 673 uint32_t MC_VM_FB_OFFSET; 674 + uint32_t HPO_TOP_HW_CONTROL; 674 675 }; 675 676 /* set field name */ 676 677 #define HWS_SF(blk_name, reg_name, field_name, post_fix)\ ··· 1153 1152 type DOMAIN_PGFSM_PWR_STATUS;\ 1154 1153 type HPO_HDMISTREAMCLK_G_GATE_DIS;\ 1155 1154 type DISABLE_HOSTVM_FORCE_ALLOW_PSTATE;\ 1156 - type I2C_LIGHT_SLEEP_FORCE; 1155 + type I2C_LIGHT_SLEEP_FORCE;\ 1156 + type HPO_IO_EN; 1157 1157 1158 1158 struct dce_hwseq_shift { 1159 1159 HWSEQ_REG_FIELD_LIST(uint8_t)
+6
drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
··· 1244 1244 #endif 1245 1245 if (dc_is_dp_signal(pipe_ctx->stream->signal)) 1246 1246 dp_source_sequence_trace(link, DPCD_SOURCE_SEQ_AFTER_DISCONNECT_DIG_FE_BE); 1247 + 1248 + #if defined(CONFIG_DRM_AMD_DC_DCN) 1249 + if (dc->hwseq->funcs.setup_hpo_hw_control && is_dp_128b_132b_signal(pipe_ctx)) 1250 + dc->hwseq->funcs.setup_hpo_hw_control(dc->hwseq, false); 1251 + #endif 1252 + 1247 1253 } 1248 1254 1249 1255 void dce110_unblank_stream(struct pipe_ctx *pipe_ctx,
+1 -1
drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
··· 231 231 232 232 if (!s->blank_en) 233 233 DTN_INFO("[%2d]: %8xh %8xh %8xh %8xh %8xh %8xh %8xh %8xh %8xh %8xh %8xh %8xh %8xh" 234 - "% 8xh %8xh %8xh %8xh %8xh %8xh %8xh %8xh %8xh %8xh %8xh %8xh %8xh %8xh" 234 + " %8xh %8xh %8xh %8xh %8xh %8xh %8xh %8xh %8xh %8xh %8xh %8xh %8xh %8xh" 235 235 " %8xh %8xh %8xh %8xh %8xh %8xh %8xh %8xh %8xh %8xh %8xh %8xh %8xh %8xh %8xh\n", 236 236 pool->hubps[i]->inst, dlg_regs->refcyc_h_blank_end, dlg_regs->dlg_vblank_end, dlg_regs->min_dst_y_next_start, 237 237 dlg_regs->refcyc_per_htotal, dlg_regs->refcyc_x_after_scaler, dlg_regs->dst_y_after_scaler,
+3
drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
··· 2397 2397 * BY this, it is logic clean to separate stream and link 2398 2398 */ 2399 2399 if (is_dp_128b_132b_signal(pipe_ctx)) { 2400 + if (pipe_ctx->stream->ctx->dc->hwseq->funcs.setup_hpo_hw_control) 2401 + pipe_ctx->stream->ctx->dc->hwseq->funcs.setup_hpo_hw_control( 2402 + pipe_ctx->stream->ctx->dc->hwseq, true); 2400 2403 setup_dp_hpo_stream(pipe_ctx, true); 2401 2404 pipe_ctx->stream_res.hpo_dp_stream_enc->funcs->enable_stream( 2402 2405 pipe_ctx->stream_res.hpo_dp_stream_enc);
+3 -4
drivers/gpu/drm/amd/display/dc/dcn30/dcn30_mpc.c
··· 1381 1381 1382 1382 } 1383 1383 1384 - static void mpc3_mpc_init(struct mpc *mpc) 1384 + static void mpc3_set_mpc_mem_lp_mode(struct mpc *mpc) 1385 1385 { 1386 1386 struct dcn30_mpc *mpc30 = TO_DCN30_MPC(mpc); 1387 1387 int mpcc_id; 1388 - 1389 - mpc1_mpc_init(mpc); 1390 1388 1391 1389 if (mpc->ctx->dc->debug.enable_mem_low_power.bits.mpc) { 1392 1390 if (mpc30->mpc_mask->MPC_RMU0_MEM_LOW_PWR_MODE && mpc30->mpc_mask->MPC_RMU1_MEM_LOW_PWR_MODE) { ··· 1403 1405 .read_mpcc_state = mpc1_read_mpcc_state, 1404 1406 .insert_plane = mpc1_insert_plane, 1405 1407 .remove_mpcc = mpc1_remove_mpcc, 1406 - .mpc_init = mpc3_mpc_init, 1408 + .mpc_init = mpc1_mpc_init, 1407 1409 .mpc_init_single_inst = mpc1_mpc_init_single_inst, 1408 1410 .update_blending = mpc2_update_blending, 1409 1411 .cursor_lock = mpc1_cursor_lock, ··· 1430 1432 .power_on_mpc_mem_pwr = mpc3_power_on_ogam_lut, 1431 1433 .get_mpc_out_mux = mpc1_get_mpc_out_mux, 1432 1434 .set_bg_color = mpc1_set_bg_color, 1435 + .set_mpc_mem_lp_mode = mpc3_set_mpc_mem_lp_mode, 1433 1436 }; 1434 1437 1435 1438 void dcn30_mpc_construct(struct dcn30_mpc *mpc30,
+4 -3
drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c
··· 2128 2128 int pipe_cnt, 2129 2129 int vlevel) 2130 2130 { 2131 + int maxMpcComb = context->bw_ctx.dml.vba.maxMpcComb; 2131 2132 int i, pipe_idx; 2132 - double dcfclk = context->bw_ctx.dml.vba.DCFCLKState[vlevel][context->bw_ctx.dml.vba.maxMpcComb]; 2133 - bool pstate_en = context->bw_ctx.dml.vba.DRAMClockChangeSupport[vlevel][context->bw_ctx.dml.vba.maxMpcComb] != 2134 - dm_dram_clock_change_unsupported; 2133 + double dcfclk = context->bw_ctx.dml.vba.DCFCLKState[vlevel][maxMpcComb]; 2134 + bool pstate_en = context->bw_ctx.dml.vba.DRAMClockChangeSupport[vlevel][maxMpcComb] != dm_dram_clock_change_unsupported; 2135 2135 2136 2136 if (context->bw_ctx.dml.soc.min_dcfclk > dcfclk) 2137 2137 dcfclk = context->bw_ctx.dml.soc.min_dcfclk; ··· 2207 2207 context->bw_ctx.dml.soc.sr_enter_plus_exit_time_us = dc->clk_mgr->bw_params->wm_table.nv_entries[WM_C].dml_input.sr_enter_plus_exit_time_us; 2208 2208 context->bw_ctx.dml.soc.sr_exit_time_us = dc->clk_mgr->bw_params->wm_table.nv_entries[WM_C].dml_input.sr_exit_time_us; 2209 2209 } 2210 + 2210 2211 context->bw_ctx.bw.dcn.watermarks.c.urgent_ns = get_wm_urgent(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000; 2211 2212 context->bw_ctx.bw.dcn.watermarks.c.cstate_pstate.cstate_enter_plus_exit_ns = get_wm_stutter_enter_exit(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000; 2212 2213 context->bw_ctx.bw.dcn.watermarks.c.cstate_pstate.cstate_exit_ns = get_wm_stutter_exit(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000;
+49 -29
drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hwseq.c
···
66 66 #define FN(reg_name, field_name) \
67 67 hws->shifts->field_name, hws->masks->field_name
68 68
69 + static void enable_memory_low_power(struct dc *dc)
70 + {
71 + struct dce_hwseq *hws = dc->hwseq;
72 + int i;
73 +
74 + if (dc->debug.enable_mem_low_power.bits.dmcu) {
75 + // Force ERAM to shutdown if DMCU is not enabled
76 + if (dc->debug.disable_dmcu || dc->config.disable_dmcu) {
77 + REG_UPDATE(DMU_MEM_PWR_CNTL, DMCU_ERAM_MEM_PWR_FORCE, 3);
78 + }
79 + }
80 +
81 + // Set default OPTC memory power states
82 + if (dc->debug.enable_mem_low_power.bits.optc) {
83 + // Shutdown when unassigned and light sleep in VBLANK
84 + REG_SET_2(ODM_MEM_PWR_CTRL3, 0, ODM_MEM_UNASSIGNED_PWR_MODE, 3, ODM_MEM_VBLANK_PWR_MODE, 1);
85 + }
86 +
87 + if (dc->debug.enable_mem_low_power.bits.vga) {
88 + // Power down VGA memory
89 + REG_UPDATE(MMHUBBUB_MEM_PWR_CNTL, VGA_MEM_PWR_FORCE, 1);
90 + }
91 +
92 + if (dc->debug.enable_mem_low_power.bits.mpc)
93 + dc->res_pool->mpc->funcs->set_mpc_mem_lp_mode(dc->res_pool->mpc);
94 +
95 +
96 + if (dc->debug.enable_mem_low_power.bits.vpg && dc->res_pool->stream_enc[0]->vpg->funcs->vpg_powerdown) {
97 + // Power down VPGs
98 + for (i = 0; i < dc->res_pool->stream_enc_count; i++)
99 + dc->res_pool->stream_enc[i]->vpg->funcs->vpg_powerdown(dc->res_pool->stream_enc[i]->vpg);
100 + #if defined(CONFIG_DRM_AMD_DC_DCN)
101 + for (i = 0; i < dc->res_pool->hpo_dp_stream_enc_count; i++)
102 + dc->res_pool->hpo_dp_stream_enc[i]->vpg->funcs->vpg_powerdown(dc->res_pool->hpo_dp_stream_enc[i]->vpg);
103 + #endif
104 + }
105 +
106 + }
107 +
69 108 void dcn31_init_hw(struct dc *dc)
70 109 {
71 110 struct abm **abms = dc->res_pool->multiple_abms;
···
147 108 if (res_pool->dccg->funcs->dccg_init)
148 109 res_pool->dccg->funcs->dccg_init(res_pool->dccg);
149 110
150 - if (dc->debug.enable_mem_low_power.bits.dmcu) {
151 - // Force ERAM to shutdown if DMCU is not enabled
152 - if (dc->debug.disable_dmcu || dc->config.disable_dmcu) {
153 - REG_UPDATE(DMU_MEM_PWR_CNTL, DMCU_ERAM_MEM_PWR_FORCE, 3);
154 - }
155 - }
156 -
157 - // Set default OPTC memory power states
158 - if (dc->debug.enable_mem_low_power.bits.optc) {
159 - // Shutdown when unassigned and light sleep in VBLANK
160 - REG_SET_2(ODM_MEM_PWR_CTRL3, 0, ODM_MEM_UNASSIGNED_PWR_MODE, 3, ODM_MEM_VBLANK_PWR_MODE, 1);
161 - }
162 -
163 - if (dc->debug.enable_mem_low_power.bits.vga) {
164 - // Power down VGA memory
165 - REG_UPDATE(MMHUBBUB_MEM_PWR_CNTL, VGA_MEM_PWR_FORCE, 1);
166 - }
167 -
168 - #if defined(CONFIG_DRM_AMD_DC_DCN)
169 - if (dc->debug.enable_mem_low_power.bits.vpg && dc->res_pool->stream_enc[0]->vpg->funcs->vpg_powerdown) {
170 - // Power down VPGs
171 - for (i = 0; i < dc->res_pool->stream_enc_count; i++)
172 - dc->res_pool->stream_enc[i]->vpg->funcs->vpg_powerdown(dc->res_pool->stream_enc[i]->vpg);
173 - #if defined(CONFIG_DRM_AMD_DC_DP2_0)
174 - for (i = 0; i < dc->res_pool->hpo_dp_stream_enc_count; i++)
175 - dc->res_pool->hpo_dp_stream_enc[i]->vpg->funcs->vpg_powerdown(dc->res_pool->hpo_dp_stream_enc[i]->vpg);
176 - #endif
177 - }
178 - #endif
111 + enable_memory_low_power(dc);
179 112
180 113 if (dc->ctx->dc_bios->fw_info_valid) {
181 114 res_pool->ref_clocks.xtalin_clock_inKhz =
···
274 263 // Set i2c to light sleep until engine is setup
275 264 if (dc->debug.enable_mem_low_power.bits.i2c)
276 265 REG_UPDATE(DIO_MEM_PWR_CTRL, I2C_LIGHT_SLEEP_FORCE, 1);
266 +
267 + if (hws->funcs.setup_hpo_hw_control)
268 + hws->funcs.setup_hpo_hw_control(hws, false);
277 269
278 270 if (!dc->debug.disable_clock_gate) {
279 271 /* enable all DCN clock gating */
···
610 596
611 597 /* New dc_state in the process of being applied to hardware. */
612 598 dc->current_state->res_ctx.link_enc_cfg_ctx.mode = LINK_ENC_CFG_TRANSIENT;
599 + }
600 +
601 + void dcn31_setup_hpo_hw_control(const struct dce_hwseq *hws, bool enable)
602 + {
603 + if (hws->ctx->dc->debug.hpo_optimization)
604 + REG_UPDATE(HPO_TOP_HW_CONTROL, HPO_IO_EN, !!enable);
613 605 }
+1
drivers/gpu/drm/amd/display/dc/dcn31/dcn31_hwseq.h
··· 54 54 bool dcn31_is_abm_supported(struct dc *dc, 55 55 struct dc_state *context, struct dc_stream_state *stream); 56 56 void dcn31_init_pipes(struct dc *dc, struct dc_state *context); 57 + void dcn31_setup_hpo_hw_control(const struct dce_hwseq *hws, bool enable); 57 58 58 59 #endif /* __DC_HWSS_DCN31_H__ */
+1
drivers/gpu/drm/amd/display/dc/dcn31/dcn31_init.c
··· 137 137 .dccg_init = dcn20_dccg_init, 138 138 .set_blend_lut = dcn30_set_blend_lut, 139 139 .set_shaper_3dlut = dcn20_set_shaper_3dlut, 140 + .setup_hpo_hw_control = dcn31_setup_hpo_hw_control, 140 141 }; 141 142 142 143 void dcn31_hw_sequencer_construct(struct dc *dc)
+4 -2
drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c
··· 860 860 SR(D6VGA_CONTROL), \ 861 861 SR(DC_IP_REQUEST_CNTL), \ 862 862 SR(AZALIA_AUDIO_DTO), \ 863 - SR(AZALIA_CONTROLLER_CLOCK_GATING) 863 + SR(AZALIA_CONTROLLER_CLOCK_GATING), \ 864 + SR(HPO_TOP_HW_CONTROL) 864 865 865 866 static const struct dce_hwseq_registers hwseq_reg = { 866 867 HWSEQ_DCN31_REG_LIST() ··· 899 898 HWS_SF(, ODM_MEM_PWR_CTRL3, ODM_MEM_UNASSIGNED_PWR_MODE, mask_sh), \ 900 899 HWS_SF(, ODM_MEM_PWR_CTRL3, ODM_MEM_VBLANK_PWR_MODE, mask_sh), \ 901 900 HWS_SF(, MMHUBBUB_MEM_PWR_CNTL, VGA_MEM_PWR_FORCE, mask_sh), \ 902 - HWS_SF(, DIO_MEM_PWR_CTRL, I2C_LIGHT_SLEEP_FORCE, mask_sh) 901 + HWS_SF(, DIO_MEM_PWR_CTRL, I2C_LIGHT_SLEEP_FORCE, mask_sh), \ 902 + HWS_SF(, HPO_TOP_HW_CONTROL, HPO_IO_EN, mask_sh) 903 903 904 904 static const struct dce_hwseq_shift hwseq_shift = { 905 905 HWSEQ_DCN31_MASK_SH_LIST(__SHIFT)
+3 -10
drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c
··· 3576 3576 MinDSCBPP = 8; 3577 3577 MaxDSCBPP = 3 * DSCInputBitPerComponent - 1.0 / 16; 3578 3578 } else { 3579 - if (Output == dm_hdmi) { 3580 - NonDSCBPP0 = 24; 3581 - NonDSCBPP1 = 24; 3582 - NonDSCBPP2 = 24; 3583 - } 3584 - else { 3585 - NonDSCBPP0 = 16; 3586 - NonDSCBPP1 = 20; 3587 - NonDSCBPP2 = 24; 3588 - } 3579 + NonDSCBPP0 = 16; 3580 + NonDSCBPP1 = 20; 3581 + NonDSCBPP2 = 24; 3589 3582 3590 3583 if (Format == dm_n422) { 3591 3584 MinDSCBPP = 7;
+5 -9
drivers/gpu/drm/amd/display/dc/dml/dcn31/display_mode_vba_31.c
··· 3892 3892 MinDSCBPP = 8; 3893 3893 MaxDSCBPP = 3 * DSCInputBitPerComponent - 1.0 / 16; 3894 3894 } else { 3895 - if (Output == dm_hdmi) { 3896 - NonDSCBPP0 = 24; 3897 - NonDSCBPP1 = 24; 3898 - NonDSCBPP2 = 24; 3899 - } else { 3900 - NonDSCBPP0 = 16; 3901 - NonDSCBPP1 = 20; 3902 - NonDSCBPP2 = 24; 3903 - } 3895 + 3896 + NonDSCBPP0 = 16; 3897 + NonDSCBPP1 = 20; 3898 + NonDSCBPP2 = 24; 3899 + 3904 3900 if (Format == dm_n422) { 3905 3901 MinDSCBPP = 7; 3906 3902 MaxDSCBPP = 2 * DSCInputBitPerComponent - 1.0 / 16.0;
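Both VBA copies (dcn30 above and dcn31 here) drop the HDMI special case, leaving a single non-DSC bpp table (16/20/24) and the DSC bounds visible in the hunks. A standalone sketch of those bounds follows; the function name and the `is_n422` flag are invented for this example (the real code in display_mode_vba_3x.c switches on `dm_n422`/`dm_444`), while the formulas come straight from the diff:

```c
#include <assert.h>

/* Standalone sketch of the DSC bpp bounds in the hunks above. */
struct dsc_bpp_range {
	double min_bpp;
	double max_bpp;
};

/* 4:2:2 native: 7 .. 2*bpc - 1/16; otherwise (e.g. 4:4:4):
 * 8 .. 3*bpc - 1/16, matching the formulas in the diff. */
static struct dsc_bpp_range dsc_bpp_bounds(int is_n422, double input_bpc)
{
	struct dsc_bpp_range r;

	if (is_n422) {
		r.min_bpp = 7.0;
		r.max_bpp = 2.0 * input_bpc - 1.0 / 16.0;
	} else {
		r.min_bpp = 8.0;
		r.max_bpp = 3.0 * input_bpc - 1.0 / 16.0;
	}
	return r;
}
```

With 8-bit components the 4:4:4 ceiling works out to 23.9375 bpp; note the removed HDMI branch only differed in the non-DSC table (24/24/24 vs 16/20/24), not in these DSC bounds.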
+1
drivers/gpu/drm/amd/display/dc/inc/hw/mpc.h
··· 367 367 void (*set_bg_color)(struct mpc *mpc, 368 368 struct tg_color *bg_color, 369 369 int mpcc_id); 370 + void (*set_mpc_mem_lp_mode)(struct mpc *mpc); 370 371 }; 371 372 372 373 #endif
+1
drivers/gpu/drm/amd/display/dc/inc/hw_sequencer_private.h
··· 143 143 const struct dc_plane_state *plane_state); 144 144 void (*PLAT_58856_wa)(struct dc_state *context, 145 145 struct pipe_ctx *pipe_ctx); 146 + void (*setup_hpo_hw_control)(const struct dce_hwseq *hws, bool enable); 146 147 }; 147 148 148 149 struct dce_hwseq {
+1
drivers/gpu/drm/amd/display/dmub/dmub_srv.h
··· 238 238 bool load_inst_const; 239 239 bool skip_panel_power_sequence; 240 240 bool disable_z10; 241 + bool power_optimization; 241 242 bool dpia_supported; 242 243 bool disable_dpia; 243 244 };
+2 -2
drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h
··· 46 46 47 47 /* Firmware versioning. */ 48 48 #ifdef DMUB_EXPOSE_VERSION 49 - #define DMUB_FW_VERSION_GIT_HASH 0x9525efb5 49 + #define DMUB_FW_VERSION_GIT_HASH 0x1d82d23e 50 50 #define DMUB_FW_VERSION_MAJOR 0 51 51 #define DMUB_FW_VERSION_MINOR 0 52 - #define DMUB_FW_VERSION_REVISION 90 52 + #define DMUB_FW_VERSION_REVISION 91 53 53 #define DMUB_FW_VERSION_TEST 0 54 54 #define DMUB_FW_VERSION_VBIOS 0 55 55 #define DMUB_FW_VERSION_HOTFIX 0
+1
drivers/gpu/drm/amd/display/dmub/src/dmub_dcn31.c
··· 340 340 boot_options.bits.z10_disable = params->disable_z10; 341 341 boot_options.bits.dpia_supported = params->dpia_supported; 342 342 boot_options.bits.enable_dpia = params->disable_dpia ? 0 : 1; 343 + boot_options.bits.power_optimization = params->power_optimization; 343 344 344 345 boot_options.bits.sel_mux_phy_c_d_phy_f_g = (dmub->asic == DMUB_ASIC_DCN31B) ? 1 : 0; 345 346
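The hunk above forwards one more driver parameter into the firmware boot-options word. As an illustration of the bits-union pattern DMUB uses for this, here is a minimal sketch; the field layout below is invented and does not reproduce the real `dmub_fw_boot_options` bit positions from dmub_cmd.h:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative layout only; the authoritative bit positions live in
 * dmub_cmd.h and are not reproduced here. */
union boot_options {
	struct {
		uint32_t z10_disable        : 1;
		uint32_t dpia_supported     : 1;
		uint32_t enable_dpia        : 1;
		uint32_t power_optimization : 1;
		uint32_t reserved           : 28;
	} bits;
	uint32_t all;
};

struct fw_init_params {
	int disable_dpia;
	int power_optimization;
};

/* Mirrors the hunk: enable_dpia is the *inverse* of the driver's
 * disable_dpia parameter, while power_optimization passes straight through. */
static union boot_options pack_boot_options(const struct fw_init_params *p)
{
	union boot_options o = { .all = 0 };

	o.bits.enable_dpia = p->disable_dpia ? 0 : 1;
	o.bits.power_optimization = p->power_optimization ? 1 : 0;
	return o;
}

static int demo(void)
{
	struct fw_init_params p = { .disable_dpia = 1, .power_optimization = 1 };
	union boot_options o = pack_boot_options(&p);

	return o.bits.enable_dpia == 0 && o.bits.power_optimization == 1;
}
```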
+4
drivers/gpu/drm/amd/pm/amdgpu_pm.c
··· 2094 2094 } else if (DEVICE_ATTR_IS(pp_dpm_dclk)) { 2095 2095 if (!(asic_type == CHIP_VANGOGH || asic_type == CHIP_SIENNA_CICHLID)) 2096 2096 *states = ATTR_STATE_UNSUPPORTED; 2097 + } else if (DEVICE_ATTR_IS(pp_power_profile_mode)) { 2098 + if (!adev->powerplay.pp_funcs->get_power_profile_mode || 2099 + amdgpu_dpm_get_power_profile_mode(adev, NULL) == -EOPNOTSUPP) 2100 + *states = ATTR_STATE_UNSUPPORTED; 2097 2101 } 2098 2102 2099 2103 switch (asic_type) {
+2 -2
drivers/gpu/drm/amd/pm/inc/smu_v13_0_1_ppsmc.h
··· 51 51 #define PPSMC_MSG_PowerUpVcn 0x07 ///< Power up VCN; VCN is power gated by default 52 52 #define PPSMC_MSG_SetHardMinVcn 0x08 ///< For wireless display 53 53 #define PPSMC_MSG_SetSoftMinGfxclk 0x09 ///< Set SoftMin for GFXCLK, argument is frequency in MHz 54 - #define PPSMC_MSG_ActiveProcessNotify 0x0A ///< Set active work load type 54 + #define PPSMC_MSG_ActiveProcessNotify 0x0A ///< Deprecated (Not to be used) 55 55 #define PPSMC_MSG_ForcePowerDownGfx 0x0B ///< Force power down GFX, i.e. enter GFXOFF 56 56 #define PPSMC_MSG_PrepareMp1ForUnload 0x0C ///< Prepare PMFW for GFX driver unload 57 57 #define PPSMC_MSG_SetDriverDramAddrHigh 0x0D ///< Set high 32 bits of DRAM address for Driver table transfer ··· 63 63 #define PPSMC_MSG_SetHardMinSocclkByFreq 0x13 ///< Set hard min for SOC CLK 64 64 #define PPSMC_MSG_SetSoftMinFclk 0x14 ///< Set hard min for FCLK 65 65 #define PPSMC_MSG_SetSoftMinVcn 0x15 ///< Set soft min for VCN clocks (VCLK and DCLK) 66 - #define PPSMC_MSG_SPARE0 0x16 ///< Spared 66 + #define PPSMC_MSG_SPARE 0x16 ///< Spare 67 67 #define PPSMC_MSG_GetGfxclkFrequency 0x17 ///< Get GFX clock frequency 68 68 #define PPSMC_MSG_GetFclkFrequency 0x18 ///< Get FCLK frequency 69 69 #define PPSMC_MSG_AllowGfxOff 0x19 ///< Inform PMFW of allowing GFXOFF entry
+11 -15
drivers/gpu/drm/amd/pm/powerplay/amd_powerplay.c
··· 875 875 static int pp_get_power_profile_mode(void *handle, char *buf) 876 876 { 877 877 struct pp_hwmgr *hwmgr = handle; 878 + int ret; 878 879 879 - if (!hwmgr || !hwmgr->pm_en || !buf) 880 + if (!hwmgr || !hwmgr->pm_en || !hwmgr->hwmgr_func->get_power_profile_mode) 881 + return -EOPNOTSUPP; 882 + if (!buf) 880 883 return -EINVAL; 881 884 882 - if (hwmgr->hwmgr_func->get_power_profile_mode == NULL) { 883 - pr_info_ratelimited("%s was not implemented.\n", __func__); 884 - return snprintf(buf, PAGE_SIZE, "\n"); 885 - } 886 - 887 - return hwmgr->hwmgr_func->get_power_profile_mode(hwmgr, buf); 885 + mutex_lock(&hwmgr->smu_lock); 886 + ret = hwmgr->hwmgr_func->get_power_profile_mode(hwmgr, buf); 887 + mutex_unlock(&hwmgr->smu_lock); 888 + return ret; 888 889 } 889 890 890 891 static int pp_set_power_profile_mode(void *handle, long *input, uint32_t size) 891 892 { 892 893 struct pp_hwmgr *hwmgr = handle; 893 - int ret = -EINVAL; 894 + int ret = -EOPNOTSUPP; 894 895 895 - if (!hwmgr || !hwmgr->pm_en) 896 + if (!hwmgr || !hwmgr->pm_en || !hwmgr->hwmgr_func->set_power_profile_mode) 896 897 return ret; 897 - 898 - if (hwmgr->hwmgr_func->set_power_profile_mode == NULL) { 899 - pr_info_ratelimited("%s was not implemented.\n", __func__); 900 - return ret; 901 - } 902 898 903 899 if (hwmgr->dpm_level != AMD_DPM_FORCED_LEVEL_MANUAL) { 904 900 pr_debug("power profile setting is for manual dpm mode only.\n"); 905 - return ret; 901 + return -EINVAL; 906 902 } 907 903 908 904 mutex_lock(&hwmgr->smu_lock);
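The wrappers above now distinguish "not supported" (`-EOPNOTSUPP` when power management is off or the hwmgr callback is missing) from "bad argument" (`-EINVAL` for a NULL buffer), and serialize the backend call under `smu_lock` instead of printing a ratelimited message. A user-space sketch of that guard ordering follows; the structures are hypothetical stand-ins, and a plain flag stands in for the mutex:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Hypothetical stand-ins for the hwmgr structures; only the guard
 * ordering mirrors the pp_get_power_profile_mode() hunk above. */
struct hwmgr_funcs {
	int (*get_power_profile_mode)(char *buf);
};

struct hwmgr {
	int pm_en;
	int smu_lock; /* stands in for the real smu_lock mutex */
	const struct hwmgr_funcs *funcs;
};

static int get_power_profile_mode(struct hwmgr *hwmgr, char *buf)
{
	int ret;

	/* "Not supported" (pm disabled, missing callback) is reported
	 * before argument validation... */
	if (!hwmgr || !hwmgr->pm_en || !hwmgr->funcs->get_power_profile_mode)
		return -EOPNOTSUPP;
	if (!buf)
		return -EINVAL;

	/* ...and the backend call is serialized under smu_lock. */
	hwmgr->smu_lock = 1;
	ret = hwmgr->funcs->get_power_profile_mode(buf);
	hwmgr->smu_lock = 0;
	return ret;
}

static int demo_cb(char *buf) { (void)buf; return 42; }

/* Exercise the three guard outcomes without real hardware. */
static int demo_call(int pm_en, int with_cb, int with_buf)
{
	char buf[8];
	struct hwmgr_funcs funcs = { with_cb ? demo_cb : NULL };
	struct hwmgr mgr = { pm_en, 0, &funcs };

	return get_power_profile_mode(&mgr, with_buf ? buf : NULL);
}
```

Returning `-EOPNOTSUPP` (rather than emitting an empty line) is what lets the `amdgpu_pm.c` hunk above hide the sysfs attribute entirely on parts that lack the callback.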
+6 -2
drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu10_hwmgr.c
··· 1024 1024 uint32_t min_freq, max_freq = 0; 1025 1025 uint32_t ret = 0; 1026 1026 1027 + phm_get_sysfs_buf(&buf, &size); 1028 + 1027 1029 switch (type) { 1028 1030 case PP_SCLK: 1029 1031 smum_send_msg_to_smc(hwmgr, PPSMC_MSG_GetGfxclkFrequency, &now); ··· 1067 1065 if (ret) 1068 1066 return ret; 1069 1067 1070 - size = sysfs_emit(buf, "%s:\n", "OD_SCLK"); 1068 + size += sysfs_emit_at(buf, size, "%s:\n", "OD_SCLK"); 1071 1069 size += sysfs_emit_at(buf, size, "0: %10uMhz\n", 1072 1070 (data->gfx_actual_soft_min_freq > 0) ? data->gfx_actual_soft_min_freq : min_freq); 1073 1071 size += sysfs_emit_at(buf, size, "1: %10uMhz\n", ··· 1083 1081 if (ret) 1084 1082 return ret; 1085 1083 1086 - size = sysfs_emit(buf, "%s:\n", "OD_RANGE"); 1084 + size += sysfs_emit_at(buf, size, "%s:\n", "OD_RANGE"); 1087 1085 size += sysfs_emit_at(buf, size, "SCLK: %7uMHz %10uMHz\n", 1088 1086 min_freq, max_freq); 1089 1087 } ··· 1457 1455 1458 1456 if (!buf) 1459 1457 return -EINVAL; 1458 + 1459 + phm_get_sysfs_buf(&buf, &size); 1460 1460 1461 1461 size += sysfs_emit_at(buf, size, "%s %16s %s %s %s %s\n",title[0], 1462 1462 title[1], title[2], title[3], title[4], title[5]);
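This file and the other hwmgr files below all receive the same two-part fix: `phm_get_sysfs_buf()` is called once up front, and every `size = sysfs_emit(buf, ...)` becomes `size += sysfs_emit_at(buf, size, ...)`, so a header emitted midway no longer overwrites output already in the buffer. A user-space sketch of the append pattern, with a snprintf-based stand-in for `sysfs_emit_at()`:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define BUF_SZ 4096 /* stands in for PAGE_SIZE */

/* snprintf-based stand-in for sysfs_emit_at(): write at offset `at` and
 * return the number of characters added (the kernel helper additionally
 * WARNs on unaligned buffers and clamps at PAGE_SIZE). */
static int emit_at(char *buf, int at, const char *fmt, const char *arg)
{
	return snprintf(buf + at, BUF_SZ - at, fmt, arg);
}

/* The fixed pattern: every write appends at the running offset, so the
 * "OD_SCLK:" header no longer clobbers whatever was emitted before it. */
static int show_od_sclk(char *buf, int size)
{
	size += emit_at(buf, size, "%s:\n", "OD_SCLK");
	size += emit_at(buf, size, "0: %s\n", "200Mhz");
	size += emit_at(buf, size, "1: %s\n", "1100Mhz");
	return size;
}

static int demo(void)
{
	static char buf[BUF_SZ];
	int size = show_od_sclk(buf, 0);

	return size == (int)strlen(buf) &&
	       strcmp(buf, "OD_SCLK:\n0: 200Mhz\n1: 1100Mhz\n") == 0;
}
```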
+7 -3
drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_hwmgr.c
··· 4914 4914 int size = 0; 4915 4915 uint32_t i, now, clock, pcie_speed; 4916 4916 4917 + phm_get_sysfs_buf(&buf, &size); 4918 + 4917 4919 switch (type) { 4918 4920 case PP_SCLK: 4919 4921 smum_send_msg_to_smc(hwmgr, PPSMC_MSG_API_GetSclkFrequency, &clock); ··· 4965 4963 break; 4966 4964 case OD_SCLK: 4967 4965 if (hwmgr->od_enabled) { 4968 - size = sysfs_emit(buf, "%s:\n", "OD_SCLK"); 4966 + size += sysfs_emit_at(buf, size, "%s:\n", "OD_SCLK"); 4969 4967 for (i = 0; i < odn_sclk_table->num_of_pl; i++) 4970 4968 size += sysfs_emit_at(buf, size, "%d: %10uMHz %10umV\n", 4971 4969 i, odn_sclk_table->entries[i].clock/100, ··· 4974 4972 break; 4975 4973 case OD_MCLK: 4976 4974 if (hwmgr->od_enabled) { 4977 - size = sysfs_emit(buf, "%s:\n", "OD_MCLK"); 4975 + size += sysfs_emit_at(buf, size, "%s:\n", "OD_MCLK"); 4978 4976 for (i = 0; i < odn_mclk_table->num_of_pl; i++) 4979 4977 size += sysfs_emit_at(buf, size, "%d: %10uMHz %10umV\n", 4980 4978 i, odn_mclk_table->entries[i].clock/100, ··· 4983 4981 break; 4984 4982 case OD_RANGE: 4985 4983 if (hwmgr->od_enabled) { 4986 - size = sysfs_emit(buf, "%s:\n", "OD_RANGE"); 4984 + size += sysfs_emit_at(buf, size, "%s:\n", "OD_RANGE"); 4987 4985 size += sysfs_emit_at(buf, size, "SCLK: %7uMHz %10uMHz\n", 4988 4986 data->golden_dpm_table.sclk_table.dpm_levels[0].value/100, 4989 4987 hwmgr->platform_descriptor.overdriveLimit.engineClock/100); ··· 5519 5517 5520 5518 if (!buf) 5521 5519 return -EINVAL; 5520 + 5521 + phm_get_sysfs_buf(&buf, &size); 5522 5522 5523 5523 size += sysfs_emit_at(buf, size, "%s %16s %16s %16s %16s %16s %16s %16s\n", 5524 5524 title[0], title[1], title[2], title[3],
+2
drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu8_hwmgr.c
··· 1550 1550 uint32_t i, now; 1551 1551 int size = 0; 1552 1552 1553 + phm_get_sysfs_buf(&buf, &size); 1554 + 1553 1555 switch (type) { 1554 1556 case PP_SCLK: 1555 1557 now = PHM_GET_FIELD(cgs_read_ind_register(hwmgr->device,
+13
drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu_helper.h
··· 109 109 struct amdgpu_irq_src *source, 110 110 struct amdgpu_iv_entry *entry); 111 111 112 + /* 113 + * Helper function to make sysfs_emit_at() happy. Align buf to 114 + * the current page boundary and record the offset. 115 + */ 116 + static inline void phm_get_sysfs_buf(char **buf, int *offset) 117 + { 118 + if (!*buf || !offset) 119 + return; 120 + 121 + *offset = offset_in_page(*buf); 122 + *buf -= *offset; 123 + } 124 + 112 125 int smu9_register_irq_handlers(struct pp_hwmgr *hwmgr); 113 126 114 127 void *smu_atom_get_data_table(void *dev, uint32_t table, uint16_t *size,
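The new helper above exists because `sysfs_emit()`/`sysfs_emit_at()` insist on a page-aligned buffer, while some callers hand the hwmgr code a pointer that already sits partway into the page; rewinding to the page start and remembering the offset keeps both sides happy. A user-space sketch of the same arithmetic — `PAGE_SIZE` and `offset_in_page()` are re-implemented here purely for illustration:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define PAGE_SIZE 4096UL /* assumption for the sketch */

/* offset_in_page() equivalent: the low bits of the address. */
static unsigned long offset_in_page(const void *p)
{
	return (uintptr_t)p & (PAGE_SIZE - 1);
}

/* Mirror of phm_get_sysfs_buf(): rewind *buf to its page start and
 * remember how far into the page it pointed, so subsequent
 * sysfs_emit_at(buf, size, ...) calls resume at the original spot. */
static void get_sysfs_buf(char **buf, int *offset)
{
	if (!*buf || !offset)
		return;

	*offset = (int)offset_in_page(*buf);
	*buf -= *offset;
}

static int demo(void)
{
	/* A genuinely page-aligned allocation, so the pointer
	 * arithmetic below stays well-defined. */
	char *base = aligned_alloc(PAGE_SIZE, PAGE_SIZE);
	char *p = base + 123;
	int off = -1;
	int ok;

	get_sysfs_buf(&p, &off);
	ok = (off == 123) && (p == base);
	free(base);
	return ok;
}
```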
+9 -3
drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_hwmgr.c
··· 4548 4548 int ret = 0; 4549 4549 int size = 0; 4550 4550 4551 + phm_get_sysfs_buf(&buf, &size); 4552 + 4551 4553 ret = vega10_get_enabled_smc_features(hwmgr, &features_enabled); 4552 4554 PP_ASSERT_WITH_CODE(!ret, 4553 4555 "[EnableAllSmuFeatures] Failed to get enabled smc features!", ··· 4639 4637 4640 4638 int i, now, size = 0, count = 0; 4641 4639 4640 + phm_get_sysfs_buf(&buf, &size); 4641 + 4642 4642 switch (type) { 4643 4643 case PP_SCLK: 4644 4644 if (data->registry_data.sclk_dpm_key_disabled) ··· 4721 4717 4722 4718 case OD_SCLK: 4723 4719 if (hwmgr->od_enabled) { 4724 - size = sysfs_emit(buf, "%s:\n", "OD_SCLK"); 4720 + size += sysfs_emit_at(buf, size, "%s:\n", "OD_SCLK"); 4725 4721 podn_vdd_dep = &data->odn_dpm_table.vdd_dep_on_sclk; 4726 4722 for (i = 0; i < podn_vdd_dep->count; i++) 4727 4723 size += sysfs_emit_at(buf, size, "%d: %10uMhz %10umV\n", ··· 4731 4727 break; 4732 4728 case OD_MCLK: 4733 4729 if (hwmgr->od_enabled) { 4734 - size = sysfs_emit(buf, "%s:\n", "OD_MCLK"); 4730 + size += sysfs_emit_at(buf, size, "%s:\n", "OD_MCLK"); 4735 4731 podn_vdd_dep = &data->odn_dpm_table.vdd_dep_on_mclk; 4736 4732 for (i = 0; i < podn_vdd_dep->count; i++) 4737 4733 size += sysfs_emit_at(buf, size, "%d: %10uMhz %10umV\n", ··· 4741 4737 break; 4742 4738 case OD_RANGE: 4743 4739 if (hwmgr->od_enabled) { 4744 - size = sysfs_emit(buf, "%s:\n", "OD_RANGE"); 4740 + size += sysfs_emit_at(buf, size, "%s:\n", "OD_RANGE"); 4745 4741 size += sysfs_emit_at(buf, size, "SCLK: %7uMHz %10uMHz\n", 4746 4742 data->golden_dpm_table.gfx_table.dpm_levels[0].value/100, 4747 4743 hwmgr->platform_descriptor.overdriveLimit.engineClock/100); ··· 5115 5111 5116 5112 if (!buf) 5117 5113 return -EINVAL; 5114 + 5115 + phm_get_sysfs_buf(&buf, &size); 5118 5116 5119 5117 size += sysfs_emit_at(buf, size, "%s %16s %s %s %s %s\n",title[0], 5120 5118 title[1], title[2], title[3], title[4], title[5]);
+4
drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega12_hwmgr.c
··· 2141 2141 int ret = 0; 2142 2142 int size = 0; 2143 2143 2144 + phm_get_sysfs_buf(&buf, &size); 2145 + 2144 2146 ret = vega12_get_enabled_smc_features(hwmgr, &features_enabled); 2145 2147 PP_ASSERT_WITH_CODE(!ret, 2146 2148 "[EnableAllSmuFeatures] Failed to get enabled smc features!", ··· 2245 2243 { 2246 2244 int i, now, size = 0; 2247 2245 struct pp_clock_levels_with_latency clocks; 2246 + 2247 + phm_get_sysfs_buf(&buf, &size); 2248 2248 2249 2249 switch (type) { 2250 2250 case PP_SCLK:
+10 -4
drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega20_hwmgr.c
···
3238 3238 int ret = 0;
3239 3239 int size = 0;
3240 3240
3241 + phm_get_sysfs_buf(&buf, &size);
3242 +
3241 3243 ret = vega20_get_enabled_smc_features(hwmgr, &features_enabled);
3242 3244 PP_ASSERT_WITH_CODE(!ret,
3243 3245 "[EnableAllSmuFeatures] Failed to get enabled smc features!",
···
3366 3364 int ret = 0;
3367 3365 uint32_t gen_speed, lane_width, current_gen_speed, current_lane_width;
3368 3366
3367 + phm_get_sysfs_buf(&buf, &size);
3368 +
3369 3369 switch (type) {
3370 3370 case PP_SCLK:
3371 3371 ret = vega20_get_current_clk_freq(hwmgr, PPCLK_GFXCLK, &now);
···
3483 3479 case OD_SCLK:
3484 3480 if (od8_settings[OD8_SETTING_GFXCLK_FMIN].feature_id &&
3485 3481 od8_settings[OD8_SETTING_GFXCLK_FMAX].feature_id) {
3486 - size = sysfs_emit(buf, "%s:\n", "OD_SCLK");
3482 + size += sysfs_emit_at(buf, size, "%s:\n", "OD_SCLK");
3487 3483 size += sysfs_emit_at(buf, size, "0: %10uMhz\n",
3488 3484 od_table->GfxclkFmin);
3489 3485 size += sysfs_emit_at(buf, size, "1: %10uMhz\n",
···
3493 3489
3494 3490 case OD_MCLK:
3495 3491 if (od8_settings[OD8_SETTING_UCLK_FMAX].feature_id) {
3496 - size = sysfs_emit(buf, "%s:\n", "OD_MCLK");
3492 + size += sysfs_emit_at(buf, size, "%s:\n", "OD_MCLK");
3497 3493 size += sysfs_emit_at(buf, size, "1: %10uMhz\n",
3498 3494 od_table->UclkFmax);
3499 3495 }
···
3507 3503 od8_settings[OD8_SETTING_GFXCLK_VOLTAGE1].feature_id &&
3508 3504 od8_settings[OD8_SETTING_GFXCLK_VOLTAGE2].feature_id &&
3509 3505 od8_settings[OD8_SETTING_GFXCLK_VOLTAGE3].feature_id) {
3510 - size = sysfs_emit(buf, "%s:\n", "OD_VDDC_CURVE");
3506 + size += sysfs_emit_at(buf, size, "%s:\n", "OD_VDDC_CURVE");
3511 3507 size += sysfs_emit_at(buf, size, "0: %10uMhz %10dmV\n",
3512 3508 od_table->GfxclkFreq1,
3513 3509 od_table->GfxclkVolt1 / VOLTAGE_SCALE);
···
3522 3518 break;
3523 3519
3524 3520 case OD_RANGE:
3525 - size = sysfs_emit(buf, "%s:\n", "OD_RANGE");
3521 + size += sysfs_emit_at(buf, size, "%s:\n", "OD_RANGE");
3526 3522
3527 3523 if (od8_settings[OD8_SETTING_GFXCLK_FMIN].feature_id &&
3528 3524 od8_settings[OD8_SETTING_GFXCLK_FMAX].feature_id) {
···
4006 4002
4007 4003 if (!buf)
4008 4004 return -EINVAL;
4005 +
4006 + phm_get_sysfs_buf(&buf, &size);
4009 4007
4010 4008 size += sysfs_emit_at(buf, size, "%16s %s %s %s %s %s %s %s %s %s %s\n",
4011 4009 title[0], title[1], title[2], title[3], title[4], title[5],
+8 -5
drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
··· 1468 1468 dev_err(adev->dev, "Failed to disable smu features.\n"); 1469 1469 } 1470 1470 1471 - if (adev->ip_versions[MP1_HWIP][0] >= IP_VERSION(11, 0, 0) && 1471 + if (adev->ip_versions[GC_HWIP][0] >= IP_VERSION(10, 0, 0) && 1472 1472 adev->gfx.rlc.funcs->stop) 1473 1473 adev->gfx.rlc.funcs->stop(adev); 1474 1474 ··· 2534 2534 struct smu_context *smu = handle; 2535 2535 int ret = 0; 2536 2536 2537 - if (!smu->pm_enabled || !smu->adev->pm.dpm_enabled) 2537 + if (!smu->pm_enabled || !smu->adev->pm.dpm_enabled || 2538 + !smu->ppt_funcs->get_power_profile_mode) 2538 2539 return -EOPNOTSUPP; 2540 + if (!buf) 2541 + return -EINVAL; 2539 2542 2540 2543 mutex_lock(&smu->mutex); 2541 2544 2542 - if (smu->ppt_funcs->get_power_profile_mode) 2543 - ret = smu->ppt_funcs->get_power_profile_mode(smu, buf); 2545 + ret = smu->ppt_funcs->get_power_profile_mode(smu, buf); 2544 2546 2545 2547 mutex_unlock(&smu->mutex); 2546 2548 ··· 2556 2554 struct smu_context *smu = handle; 2557 2555 int ret = 0; 2558 2556 2559 - if (!smu->pm_enabled || !smu->adev->pm.dpm_enabled) 2557 + if (!smu->pm_enabled || !smu->adev->pm.dpm_enabled || 2558 + !smu->ppt_funcs->set_power_profile_mode) 2560 2559 return -EOPNOTSUPP; 2561 2560 2562 2561 mutex_lock(&smu->mutex);
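The first hunk above switches the RLC-stop check from the MP1 (SMU) IP version to the GC (graphics core) IP version. `IP_VERSION()` packs major/minor/revision into a single integer with the most significant field first, which is what makes a plain `>=` behave as a version comparison. The field widths below are an assumption for this sketch; the authoritative macro lives in the amdgpu headers:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative packing: major in the high bits, then minor, then
 * revision. Field widths are assumed here for demonstration only. */
#define IP_VERSION(maj, min, rev) \
	(((uint32_t)(maj) << 16) | ((uint32_t)(min) << 8) | (uint32_t)(rev))

/* Most-significant-field-first packing lets the hunk above use a plain
 * >= as "GFX v10 or newer". */
static int is_gfx10_or_newer(uint32_t gc_ver)
{
	return gc_ver >= IP_VERSION(10, 0, 0);
}
```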
-87
drivers/gpu/drm/amd/pm/swsmu/smu13/yellow_carp_ppt.c
···
64 64 MSG_MAP(PowerDownVcn, PPSMC_MSG_PowerDownVcn, 1),
65 65 MSG_MAP(PowerUpVcn, PPSMC_MSG_PowerUpVcn, 1),
66 66 MSG_MAP(SetHardMinVcn, PPSMC_MSG_SetHardMinVcn, 1),
67 - MSG_MAP(ActiveProcessNotify, PPSMC_MSG_ActiveProcessNotify, 1),
68 67 MSG_MAP(PrepareMp1ForUnload, PPSMC_MSG_PrepareMp1ForUnload, 1),
69 68 MSG_MAP(SetDriverDramAddrHigh, PPSMC_MSG_SetDriverDramAddrHigh, 1),
70 69 MSG_MAP(SetDriverDramAddrLow, PPSMC_MSG_SetDriverDramAddrLow, 1),
···
133 134 TAB_MAP_VALID(SMU_METRICS),
134 135 TAB_MAP_VALID(CUSTOM_DPM),
135 136 TAB_MAP_VALID(DPMCLOCKS),
136 - };
137 -
138 - static struct cmn2asic_mapping yellow_carp_workload_map[PP_SMC_POWER_PROFILE_COUNT] = {
139 - WORKLOAD_MAP(PP_SMC_POWER_PROFILE_FULLSCREEN3D, WORKLOAD_PPLIB_FULL_SCREEN_3D_BIT),
140 - WORKLOAD_MAP(PP_SMC_POWER_PROFILE_VIDEO, WORKLOAD_PPLIB_VIDEO_BIT),
141 - WORKLOAD_MAP(PP_SMC_POWER_PROFILE_VR, WORKLOAD_PPLIB_VR_BIT),
142 - WORKLOAD_MAP(PP_SMC_POWER_PROFILE_COMPUTE, WORKLOAD_PPLIB_COMPUTE_BIT),
143 - WORKLOAD_MAP(PP_SMC_POWER_PROFILE_CUSTOM, WORKLOAD_PPLIB_CUSTOM_BIT),
144 137 };
145 138
146 139 static int yellow_carp_init_smc_tables(struct smu_context *smu)
···
530 539 }
531 540 smu->watermarks_bitmap |= WATERMARKS_LOADED;
532 541 }
533 -
534 - return 0;
535 - }
536 -
537 - static int yellow_carp_get_power_profile_mode(struct smu_context *smu,
538 - char *buf)
539 - {
540 - static const char *profile_name[] = {
541 - "BOOTUP_DEFAULT",
542 - "3D_FULL_SCREEN",
543 - "POWER_SAVING",
544 - "VIDEO",
545 - "VR",
546 - "COMPUTE",
547 - "CUSTOM"};
548 - uint32_t i, size = 0;
549 - int16_t workload_type = 0;
550 -
551 - if (!buf)
552 - return -EINVAL;
553 -
554 - for (i = 0; i <= PP_SMC_POWER_PROFILE_CUSTOM; i++) {
555 - /*
556 - * Conv PP_SMC_POWER_PROFILE* to WORKLOAD_PPLIB_*_BIT.
557 - * Not all profile modes are supported on yellow carp.
558 - */
559 - workload_type = smu_cmn_to_asic_specific_index(smu,
560 - CMN2ASIC_MAPPING_WORKLOAD,
561 - i);
562 -
563 - if (workload_type < 0)
564 - continue;
565 -
566 - size += sysfs_emit_at(buf, size, "%2d %14s%s\n",
567 - i, profile_name[i], (i == smu->power_profile_mode) ? "*" : " ");
568 - }
569 -
570 - return size;
571 - }
572 -
573 - static int yellow_carp_set_power_profile_mode(struct smu_context *smu,
574 - long *input, uint32_t size)
575 - {
576 - int workload_type, ret;
577 - uint32_t profile_mode = input[size];
578 -
579 - if (profile_mode > PP_SMC_POWER_PROFILE_CUSTOM) {
580 - dev_err(smu->adev->dev, "Invalid power profile mode %d\n", profile_mode);
581 - return -EINVAL;
582 - }
583 -
584 - if (profile_mode == PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT ||
585 - profile_mode == PP_SMC_POWER_PROFILE_POWERSAVING)
586 - return 0;
587 -
588 - /* conv PP_SMC_POWER_PROFILE* to WORKLOAD_PPLIB_*_BIT */
589 - workload_type = smu_cmn_to_asic_specific_index(smu,
590 - CMN2ASIC_MAPPING_WORKLOAD,
591 - profile_mode);
592 - if (workload_type < 0) {
593 - dev_dbg(smu->adev->dev, "Unsupported power profile mode %d on YELLOWCARP\n",
594 - profile_mode);
595 - return -EINVAL;
596 - }
597 -
598 - ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_ActiveProcessNotify,
599 - 1 << workload_type,
600 - NULL);
601 - if (ret) {
602 - dev_err_once(smu->adev->dev, "Fail to set workload type %d\n",
603 - workload_type);
604 - return ret;
605 - }
606 -
607 - smu->power_profile_mode = profile_mode;
608 542
609 543 return 0;
610 544 }
···
1154 1238 .read_sensor = yellow_carp_read_sensor,
1155 1239 .is_dpm_running = yellow_carp_is_dpm_running,
1156 1240 .set_watermarks_table = yellow_carp_set_watermarks_table,
1157 - .get_power_profile_mode = yellow_carp_get_power_profile_mode,
1158 - .set_power_profile_mode = yellow_carp_set_power_profile_mode,
1159 1241 .get_gpu_metrics = yellow_carp_get_gpu_metrics,
1160 1242 .get_enabled_mask = smu_cmn_get_enabled_32_bits_mask,
1161 1243 .get_pp_feature_mask = smu_cmn_get_pp_feature_mask,
···
1175 1261 smu->message_map = yellow_carp_message_map;
1176 1262 smu->feature_map = yellow_carp_feature_mask_map;
1177 1263 smu->table_map = yellow_carp_table_map;
1178 - smu->workload_map = yellow_carp_workload_map;
1179 1264 smu->is_apu = true;
1180 1265 }
+6 -3
drivers/gpu/drm/bridge/lontium-lt9611uxc.c
··· 167 167 struct lt9611uxc *lt9611uxc = container_of(work, struct lt9611uxc, work); 168 168 bool connected; 169 169 170 - if (lt9611uxc->connector.dev) 171 - drm_kms_helper_hotplug_event(lt9611uxc->connector.dev); 172 - else { 170 + if (lt9611uxc->connector.dev) { 171 + if (lt9611uxc->connector.dev->mode_config.funcs) 172 + drm_kms_helper_hotplug_event(lt9611uxc->connector.dev); 173 + } else { 173 174 174 175 mutex_lock(&lt9611uxc->ocm_lock); 175 176 connected = lt9611uxc->hdmi_connected; ··· 339 338 DRM_ERROR("Parent encoder object not found"); 340 339 return -ENODEV; 341 340 } 341 + 342 + lt9611uxc->connector.polled = DRM_CONNECTOR_POLL_HPD; 342 343 343 344 drm_connector_helper_add(&lt9611uxc->connector, 344 345 &lt9611uxc_bridge_connector_helper_funcs);
+75 -1
drivers/gpu/drm/bridge/lvds-codec.c
··· 12 12 #include <linux/platform_device.h> 13 13 #include <linux/regulator/consumer.h> 14 14 15 + #include <drm/drm_atomic_helper.h> 15 16 #include <drm/drm_bridge.h> 16 17 #include <drm/drm_panel.h> 17 18 ··· 23 22 struct regulator *vcc; 24 23 struct gpio_desc *powerdown_gpio; 25 24 u32 connector_type; 25 + unsigned int bus_format; 26 26 }; 27 27 28 28 static inline struct lvds_codec *to_lvds_codec(struct drm_bridge *bridge) ··· 76 74 .disable = lvds_codec_disable, 77 75 }; 78 76 77 + #define MAX_INPUT_SEL_FORMATS 1 78 + static u32 * 79 + lvds_codec_atomic_get_input_bus_fmts(struct drm_bridge *bridge, 80 + struct drm_bridge_state *bridge_state, 81 + struct drm_crtc_state *crtc_state, 82 + struct drm_connector_state *conn_state, 83 + u32 output_fmt, 84 + unsigned int *num_input_fmts) 85 + { 86 + struct lvds_codec *lvds_codec = to_lvds_codec(bridge); 87 + u32 *input_fmts; 88 + 89 + *num_input_fmts = 0; 90 + 91 + input_fmts = kcalloc(MAX_INPUT_SEL_FORMATS, sizeof(*input_fmts), 92 + GFP_KERNEL); 93 + if (!input_fmts) 94 + return NULL; 95 + 96 + input_fmts[0] = lvds_codec->bus_format; 97 + *num_input_fmts = MAX_INPUT_SEL_FORMATS; 98 + 99 + return input_fmts; 100 + } 101 + 102 + static const struct drm_bridge_funcs funcs_decoder = { 103 + .attach = lvds_codec_attach, 104 + .enable = lvds_codec_enable, 105 + .disable = lvds_codec_disable, 106 + .atomic_duplicate_state = drm_atomic_helper_bridge_duplicate_state, 107 + .atomic_destroy_state = drm_atomic_helper_bridge_destroy_state, 108 + .atomic_reset = drm_atomic_helper_bridge_reset, 109 + .atomic_get_input_bus_fmts = lvds_codec_atomic_get_input_bus_fmts, 110 + }; 111 + 79 112 static int lvds_codec_probe(struct platform_device *pdev) 80 113 { 81 114 struct device *dev = &pdev->dev; 82 115 struct device_node *panel_node; 116 + struct device_node *bus_node; 83 117 struct drm_panel *panel; 84 118 struct lvds_codec *lvds_codec; 119 + const char *mapping; 120 + int ret; 85 121 86 122 lvds_codec = devm_kzalloc(dev, 
sizeof(*lvds_codec), GFP_KERNEL); 87 123 if (!lvds_codec) ··· 159 119 if (IS_ERR(lvds_codec->panel_bridge)) 160 120 return PTR_ERR(lvds_codec->panel_bridge); 161 121 122 + lvds_codec->bridge.funcs = &funcs; 123 + 124 + /* 125 + * Decoder input LVDS format is a property of the decoder chip or even 126 + * its strapping. Handle data-mapping the same way lvds-panel does. In 127 + * case data-mapping is not present, do nothing, since there are still 128 + * legacy bindings which do not specify this property. 129 + */ 130 + if (lvds_codec->connector_type != DRM_MODE_CONNECTOR_LVDS) { 131 + bus_node = of_graph_get_endpoint_by_regs(dev->of_node, 0, 0); 132 + if (!bus_node) { 133 + dev_dbg(dev, "bus DT node not found\n"); 134 + return -ENXIO; 135 + } 136 + 137 + ret = of_property_read_string(bus_node, "data-mapping", 138 + &mapping); 139 + of_node_put(bus_node); 140 + if (ret < 0) { 141 + dev_warn(dev, "missing 'data-mapping' DT property\n"); 142 + } else { 143 + if (!strcmp(mapping, "jeida-18")) { 144 + lvds_codec->bus_format = MEDIA_BUS_FMT_RGB666_1X7X3_SPWG; 145 + } else if (!strcmp(mapping, "jeida-24")) { 146 + lvds_codec->bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_JEIDA; 147 + } else if (!strcmp(mapping, "vesa-24")) { 148 + lvds_codec->bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG; 149 + } else { 150 + dev_err(dev, "invalid 'data-mapping' DT property\n"); 151 + return -EINVAL; 152 + } 153 + lvds_codec->bridge.funcs = &funcs_decoder; 154 + } 155 + } 156 + 162 157 /* 163 158 * The panel_bridge bridge is attached to the panel's of_node, 164 159 * but we need a bridge attached to our of_node for our user 165 160 * to look up. 166 161 */ 167 162 lvds_codec->bridge.of_node = dev->of_node; 168 - lvds_codec->bridge.funcs = &funcs; 169 163 drm_bridge_add(&lvds_codec->bridge); 170 164 171 165 platform_set_drvdata(pdev, lvds_codec);
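The new decoder path in lvds-codec boils down to a small string-to-constant lookup over the DT `data-mapping` property. A user-space sketch of that lookup (the enum values here are illustrative stand-ins for the kernel's `MEDIA_BUS_FMT_*` codes, not the real ones):

```c
#include <string.h>

/* Stand-ins for the kernel's MEDIA_BUS_FMT_* values (illustrative only). */
enum bus_format {
	BUS_FMT_INVALID = 0,
	BUS_FMT_RGB666_1X7X3_SPWG,	/* "jeida-18" */
	BUS_FMT_RGB888_1X7X4_JEIDA,	/* "jeida-24" */
	BUS_FMT_RGB888_1X7X4_SPWG,	/* "vesa-24"  */
};

/* Mirrors the strcmp chain in lvds_codec_probe(): translate the DT
 * "data-mapping" string into a bus format, or report failure (the
 * driver returns -EINVAL for an unknown mapping). */
enum bus_format parse_data_mapping(const char *mapping)
{
	if (!strcmp(mapping, "jeida-18"))
		return BUS_FMT_RGB666_1X7X3_SPWG;
	if (!strcmp(mapping, "jeida-24"))
		return BUS_FMT_RGB888_1X7X4_JEIDA;
	if (!strcmp(mapping, "vesa-24"))
		return BUS_FMT_RGB888_1X7X4_SPWG;
	return BUS_FMT_INVALID;
}
```

A missing property is deliberately only a warning in the driver, since older bindings never carried `data-mapping`; only a present-but-unrecognized value is an error.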
+35
drivers/gpu/drm/bridge/nwl-dsi.c
··· 939 939 drm_of_panel_bridge_remove(dsi->dev->of_node, 1, 0); 940 940 } 941 941 942 + static u32 *nwl_bridge_atomic_get_input_bus_fmts(struct drm_bridge *bridge, 943 + struct drm_bridge_state *bridge_state, 944 + struct drm_crtc_state *crtc_state, 945 + struct drm_connector_state *conn_state, 946 + u32 output_fmt, 947 + unsigned int *num_input_fmts) 948 + { 949 + u32 *input_fmts, input_fmt; 950 + 951 + *num_input_fmts = 0; 952 + 953 + switch (output_fmt) { 954 + /* If MEDIA_BUS_FMT_FIXED is tested, return default bus format */ 955 + case MEDIA_BUS_FMT_FIXED: 956 + input_fmt = MEDIA_BUS_FMT_RGB888_1X24; 957 + break; 958 + case MEDIA_BUS_FMT_RGB888_1X24: 959 + case MEDIA_BUS_FMT_RGB666_1X18: 960 + case MEDIA_BUS_FMT_RGB565_1X16: 961 + input_fmt = output_fmt; 962 + break; 963 + default: 964 + return NULL; 965 + } 966 + 967 + input_fmts = kcalloc(1, sizeof(*input_fmts), GFP_KERNEL); 968 + if (!input_fmts) 969 + return NULL; 970 + input_fmts[0] = input_fmt; 971 + *num_input_fmts = 1; 972 + 973 + return input_fmts; 974 + } 975 + 942 976 static const struct drm_bridge_funcs nwl_dsi_bridge_funcs = { 943 977 .atomic_duplicate_state = drm_atomic_helper_bridge_duplicate_state, 944 978 .atomic_destroy_state = drm_atomic_helper_bridge_destroy_state, ··· 980 946 .atomic_check = nwl_dsi_bridge_atomic_check, 981 947 .atomic_enable = nwl_dsi_bridge_atomic_enable, 982 948 .atomic_disable = nwl_dsi_bridge_atomic_disable, 949 + .atomic_get_input_bus_fmts = nwl_bridge_atomic_get_input_bus_fmts, 983 950 .mode_set = nwl_dsi_bridge_mode_set, 984 951 .mode_valid = nwl_dsi_bridge_mode_valid, 985 952 .attach = nwl_dsi_bridge_attach,
+14 -3
drivers/gpu/drm/bridge/ti-sn65dsi83.c
··· 288 288 return ret; 289 289 } 290 290 291 + static void sn65dsi83_detach(struct drm_bridge *bridge) 292 + { 293 + struct sn65dsi83 *ctx = bridge_to_sn65dsi83(bridge); 294 + 295 + if (!ctx->dsi) 296 + return; 297 + 298 + mipi_dsi_detach(ctx->dsi); 299 + mipi_dsi_device_unregister(ctx->dsi); 300 + drm_bridge_remove(&ctx->bridge); 301 + ctx->dsi = NULL; 302 + } 303 + 291 304 static void sn65dsi83_atomic_pre_enable(struct drm_bridge *bridge, 292 305 struct drm_bridge_state *old_bridge_state) 293 306 { ··· 596 583 597 584 static const struct drm_bridge_funcs sn65dsi83_funcs = { 598 585 .attach = sn65dsi83_attach, 586 + .detach = sn65dsi83_detach, 599 587 .atomic_pre_enable = sn65dsi83_atomic_pre_enable, 600 588 .atomic_enable = sn65dsi83_atomic_enable, 601 589 .atomic_disable = sn65dsi83_atomic_disable, ··· 711 697 { 712 698 struct sn65dsi83 *ctx = i2c_get_clientdata(client); 713 699 714 - mipi_dsi_detach(ctx->dsi); 715 - mipi_dsi_device_unregister(ctx->dsi); 716 - drm_bridge_remove(&ctx->bridge); 717 700 of_node_put(ctx->host_node); 718 701 719 702 return 0;
+28 -4
drivers/gpu/drm/drm_connector.c
··· 625 625 * 626 626 * In contrast to the other drm_get_*_name functions this one here returns a 627 627 * const pointer and hence is threadsafe. 628 + * 629 + * Returns: connector status string 628 630 */ 629 631 const char *drm_get_connector_status_name(enum drm_connector_status status) 630 632 { ··· 709 707 * drm_connector_list_iter_next - return next connector 710 708 * @iter: connector_list iterator 711 709 * 712 - * Returns the next connector for @iter, or NULL when the list walk has 710 + * Returns: the next connector for @iter, or NULL when the list walk has 713 711 * completed. 714 712 */ 715 713 struct drm_connector * ··· 782 780 * 783 781 * Note you could abuse this and return something out of bounds, but that 784 782 * would be a caller error. No unscrubbed user data should make it here. 783 + * 784 + * Returns: string describing an enumerated subpixel property 785 785 */ 786 786 const char *drm_get_subpixel_order_name(enum subpixel_order order) 787 787 { ··· 813 809 * Store the supported bus formats in display info structure. 814 810 * See MEDIA_BUS_FMT_* definitions in include/uapi/linux/media-bus-format.h for 815 811 * a full list of available formats. 812 + * 813 + * Returns: 814 + * 0 on success or a negative error code on failure. 816 815 */ 817 816 int drm_display_info_set_bus_formats(struct drm_display_info *info, 818 817 const u32 *formats, ··· 1333 1326 * @dev: DRM device 1334 1327 * 1335 1328 * Called by a driver the first time a DVI-I connector is made. 1329 + * 1330 + * Returns: %0 1336 1331 */ 1337 1332 int drm_mode_create_dvi_i_properties(struct drm_device *dev) 1338 1333 { ··· 1406 1397 * Game: 1407 1398 * Content type is game 1408 1399 * 1400 + * The meaning of each content type is defined in CTA-861-G table 15. 1401 + * 1409 1402 * Drivers can set up this property by calling 1410 1403 * drm_connector_attach_content_type_property(). Decoding to 1411 1404 * infoframe values is done through drm_hdmi_avi_infoframe_content_type(). 
··· 1418 1407 * @connector: connector to attach content type property on. 1419 1408 * 1420 1409 * Called by a driver the first time a HDMI connector is made. 1410 + * 1411 + * Returns: %0 1421 1412 */ 1422 1413 int drm_connector_attach_content_type_property(struct drm_connector *connector) 1423 1414 { ··· 1500 1487 * creates the TV margin properties for a given device. No need to call this 1501 1488 * function for an SDTV connector, it's already called from 1502 1489 * drm_mode_create_tv_properties(). 1490 + * 1491 + * Returns: 1492 + * 0 on success or a negative error code on failure. 1503 1493 */ 1504 1494 int drm_mode_create_tv_margin_properties(struct drm_device *dev) 1505 1495 { ··· 1543 1527 * the TV specific connector properties for a given device. Caller is 1544 1528 * responsible for allocating a list of format names and passing them to 1545 1529 * this routine. 1530 + * 1531 + * Returns: 1532 + * 0 on success or a negative error code on failure. 1546 1533 */ 1547 1534 int drm_mode_create_tv_properties(struct drm_device *dev, 1548 1535 unsigned int num_modes, ··· 1641 1622 * Atomic drivers should use drm_connector_attach_scaling_mode_property() 1642 1623 * instead to correctly assign &drm_connector_state.scaling_mode 1643 1624 * in the atomic state. 1625 + * 1626 + * Returns: %0 1644 1627 */ 1645 1628 int drm_mode_create_scaling_mode_property(struct drm_device *dev) 1646 1629 { ··· 1960 1939 * @dev: DRM device 1961 1940 * 1962 1941 * Create the suggested x/y offset property for connectors. 1942 + * 1943 + * Returns: 1944 + * 0 on success or a negative error code on failure. 
1963 1945 */ 1964 1946 int drm_mode_create_suggested_offset_properties(struct drm_device *dev) 1965 1947 { ··· 2336 2312 EXPORT_SYMBOL(drm_connector_set_panel_orientation); 2337 2313 2338 2314 /** 2339 - * drm_connector_set_panel_orientation_with_quirk - 2340 - * set the connector's panel_orientation after checking for quirks 2315 + * drm_connector_set_panel_orientation_with_quirk - set the 2316 + * connector's panel_orientation after checking for quirks 2341 2317 * @connector: connector for which to init the panel-orientation property. 2342 2318 * @panel_orientation: drm_panel_orientation value to set 2343 2319 * @width: width in pixels of the panel, used for panel quirk detection ··· 2621 2597 2622 2598 /** 2623 2599 * drm_connector_oob_hotplug_event - Report out-of-band hotplug event to connector 2624 - * @connector: connector to report the event on 2600 + * @connector_fwnode: fwnode_handle to report the event on 2625 2601 * 2626 2602 * On some hardware a hotplug event notification may come from outside the display 2627 2603 * driver / device. An example of this is some USB Type-C setups where the hardware
+5 -21
drivers/gpu/drm/drm_gem.c
··· 1340 1340 struct drm_gem_object *obj, 1341 1341 bool write) 1342 1342 { 1343 - int ret; 1344 - struct dma_fence **fences; 1345 - unsigned int i, fence_count; 1343 + struct dma_resv_iter cursor; 1344 + struct dma_fence *fence; 1345 + int ret = 0; 1346 1346 1347 - if (!write) { 1348 - struct dma_fence *fence = 1349 - dma_resv_get_excl_unlocked(obj->resv); 1350 - 1351 - return drm_gem_fence_array_add(fence_array, fence); 1352 - } 1353 - 1354 - ret = dma_resv_get_fences(obj->resv, NULL, 1355 - &fence_count, &fences); 1356 - if (ret || !fence_count) 1357 - return ret; 1358 - 1359 - for (i = 0; i < fence_count; i++) { 1360 - ret = drm_gem_fence_array_add(fence_array, fences[i]); 1347 + dma_resv_for_each_fence(&cursor, obj->resv, write, fence) { 1348 + ret = drm_gem_fence_array_add(fence_array, fence); 1361 1349 if (ret) 1362 1350 break; 1363 1351 } 1364 - 1365 - for (; i < fence_count; i++) 1366 - dma_fence_put(fences[i]); 1367 - kfree(fences); 1368 1352 return ret; 1369 1353 } 1370 1354 EXPORT_SYMBOL(drm_gem_fence_array_add_implicit);
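The drm_gem change above swaps a snapshot-an-array-then-free pattern for the new `dma_resv_for_each_fence` cursor, which keeps traversal state (and fence refcounting) inside the iterator so the loop body can bail out early with no manual cleanup. A minimal user-space sketch of that cursor shape, with hypothetical names and plain ints standing in for fences:

```c
#include <stddef.h>

/* Minimal cursor-style iterator, illustrating the shape of
 * dma_resv_for_each_fence(): the cursor owns traversal state, so the
 * loop body may break out early without any cleanup of its own. */
struct cursor {
	const int *items;
	size_t count;
	size_t pos;
};

static const int *cursor_next(struct cursor *c)
{
	return c->pos < c->count ? &c->items[c->pos++] : NULL;
}

#define for_each_item(c, arr, n, it)					\
	for ((c)->items = (arr), (c)->count = (n), (c)->pos = 0;	\
	     ((it) = cursor_next(c)) != NULL;)

/* Accumulate items until one "fails" (a negative value stands in for an
 * error), mirroring how drm_gem_fence_array_add_implicit() now stops on
 * the first drm_gem_fence_array_add() error. */
int sum_until_error(const int *arr, size_t n)
{
	struct cursor c;
	const int *it;
	int sum = 0;

	for_each_item(&c, arr, n, it) {
		if (*it < 0)
			return *it;	/* propagate the "error" */
		sum += *it;
	}
	return sum;
}
```

The real iterator additionally handles the shared/exclusive fence split that the deleted `write` branch used to open-code.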
+47 -2
drivers/gpu/drm/drm_modeset_lock.c
··· 25 25 #include <drm/drm_crtc.h> 26 26 #include <drm/drm_device.h> 27 27 #include <drm/drm_modeset_lock.h> 28 + #include <drm/drm_print.h> 28 29 29 30 /** 30 31 * DOC: kms locking ··· 77 76 */ 78 77 79 78 static DEFINE_WW_CLASS(crtc_ww_class); 79 + 80 + #if IS_ENABLED(CONFIG_DRM_DEBUG_MODESET_LOCK) 81 + static noinline depot_stack_handle_t __drm_stack_depot_save(void) 82 + { 83 + unsigned long entries[8]; 84 + unsigned int n; 85 + 86 + n = stack_trace_save(entries, ARRAY_SIZE(entries), 1); 87 + 88 + return stack_depot_save(entries, n, GFP_NOWAIT | __GFP_NOWARN); 89 + } 90 + 91 + static void __drm_stack_depot_print(depot_stack_handle_t stack_depot) 92 + { 93 + struct drm_printer p = drm_debug_printer("drm_modeset_lock"); 94 + unsigned long *entries; 95 + unsigned int nr_entries; 96 + char *buf; 97 + 98 + buf = kmalloc(PAGE_SIZE, GFP_NOWAIT | __GFP_NOWARN); 99 + if (!buf) 100 + return; 101 + 102 + nr_entries = stack_depot_fetch(stack_depot, &entries); 103 + stack_trace_snprint(buf, PAGE_SIZE, entries, nr_entries, 2); 104 + 105 + drm_printf(&p, "attempting to lock a contended lock without backoff:\n%s", buf); 106 + 107 + kfree(buf); 108 + } 109 + #else /* CONFIG_DRM_DEBUG_MODESET_LOCK */ 110 + static depot_stack_handle_t __drm_stack_depot_save(void) 111 + { 112 + return 0; 113 + } 114 + static void __drm_stack_depot_print(depot_stack_handle_t stack_depot) 115 + { 116 + } 117 + #endif /* CONFIG_DRM_DEBUG_MODESET_LOCK */ 80 118 81 119 /** 82 120 * drm_modeset_lock_all - take all modeset locks ··· 265 225 */ 266 226 void drm_modeset_drop_locks(struct drm_modeset_acquire_ctx *ctx) 267 227 { 268 - WARN_ON(ctx->contended); 228 + if (WARN_ON(ctx->contended)) 229 + __drm_stack_depot_print(ctx->stack_depot); 230 + 269 231 while (!list_empty(&ctx->locked)) { 270 232 struct drm_modeset_lock *lock; 271 233 ··· 285 243 { 286 244 int ret; 287 245 288 - WARN_ON(ctx->contended); 246 + if (WARN_ON(ctx->contended)) 247 + __drm_stack_depot_print(ctx->stack_depot); 289 248 290 249 if 
(ctx->trylock_only) { 291 250 lockdep_assert_held(&ctx->ww_ctx); ··· 317 274 ret = 0; 318 275 } else if (ret == -EDEADLK) { 319 276 ctx->contended = lock; 277 + ctx->stack_depot = __drm_stack_depot_save(); 320 278 } 321 279 322 280 return ret; ··· 340 296 struct drm_modeset_lock *contended = ctx->contended; 341 297 342 298 ctx->contended = NULL; 299 + ctx->stack_depot = 0; 343 300 344 301 if (WARN_ON(!contended)) 345 302 return 0;
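The contended-lock logging above relies on the stack depot's save-a-handle-now, fetch-it-later contract: `__drm_stack_depot_save()` returns a small handle at the moment of contention, and the handle (with `0` meaning "nothing saved") is resolved back to a record only when misuse is detected. A toy user-space stand-in for that contract, storing strings instead of stack traces:

```c
#include <string.h>

/* Tiny stand-in for the kernel stack depot: store a record once, get a
 * small handle back, and fetch the record later when diagnosing misuse.
 * Handle 0 means "nothing saved", matching depot_stack_handle_t usage. */
#define DEPOT_SLOTS 16

static char depot[DEPOT_SLOTS][64];
static int depot_used;

int depot_save(const char *record)
{
	if (depot_used >= DEPOT_SLOTS)
		return 0;	/* allocation failure: caller copes with 0 */
	strncpy(depot[depot_used], record, sizeof(depot[0]) - 1);
	return ++depot_used;	/* handle = slot index + 1 */
}

const char *depot_fetch(int handle)
{
	if (handle <= 0 || handle > depot_used)
		return NULL;
	return depot[handle - 1];
}
```

The kernel version stores deduplicated stack traces rather than strings, and uses `GFP_NOWAIT | __GFP_NOWARN` because the save happens in a context where failing quietly is preferable to blocking.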
-1
drivers/gpu/drm/drm_plane_helper.c
··· 123 123 .crtc_w = drm_rect_width(dst), 124 124 .crtc_h = drm_rect_height(dst), 125 125 .rotation = rotation, 126 - .visible = *visible, 127 126 }; 128 127 struct drm_crtc_state crtc_state = { 129 128 .crtc = crtc,
+6 -4
drivers/gpu/drm/drm_prime.c
··· 722 722 if (obj->funcs && obj->funcs->mmap) { 723 723 vma->vm_ops = obj->funcs->vm_ops; 724 724 725 - ret = obj->funcs->mmap(obj, vma); 726 - if (ret) 727 - return ret; 728 - vma->vm_private_data = obj; 729 725 drm_gem_object_get(obj); 726 + ret = obj->funcs->mmap(obj, vma); 727 + if (ret) { 728 + drm_gem_object_put(obj); 729 + return ret; 730 + } 731 + vma->vm_private_data = obj; 730 732 return 0; 731 733 } 732 734
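The drm_prime fix above is a reference-ordering pattern: take the object reference before the fallible `mmap` hook runs, and drop it on the error path, so the count is +1 on success and unchanged on failure (previously a successful hook left the vma holding an unreferenced pointer until the get ran). A user-space sketch with a hypothetical refcounted object:

```c
/* Toy refcounted object; 'refs' stands in for the GEM object's kref and
 * 'mmap' for the fallible obj->funcs->mmap hook. */
struct obj {
	int refs;
	int (*mmap)(struct obj *obj);
};

static void obj_get(struct obj *o) { o->refs++; }
static void obj_put(struct obj *o) { o->refs--; }

/* Sample hooks for demonstration. */
int mmap_ok(struct obj *o)   { (void)o; return 0; }
int mmap_fail(struct obj *o) { (void)o; return -1; }

/* Same ordering as the fixed drm_gem_prime_mmap(): reference first,
 * then the fallible call, dropping the reference on error so every
 * path leaves the count balanced. */
int obj_mmap(struct obj *o)
{
	int ret;

	obj_get(o);
	ret = o->mmap(o);
	if (ret) {
		obj_put(o);
		return ret;
	}
	return 0;
}
```

The general rule this illustrates: publish a pointer (here via `vm_private_data`) only after the reference backing it is already held.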
+1
drivers/gpu/drm/i915/display/g4x_hdmi.c
··· 584 584 else 585 585 intel_encoder->enable = g4x_enable_hdmi; 586 586 } 587 + intel_encoder->shutdown = intel_hdmi_encoder_shutdown; 587 588 588 589 intel_encoder->type = INTEL_OUTPUT_HDMI; 589 590 intel_encoder->power_domain = intel_port_to_power_domain(port);
+64 -70
drivers/gpu/drm/i915/display/intel_bios.c
··· 1707 1707 child->aux_channel = 0; 1708 1708 } 1709 1709 1710 + static u8 dvo_port_type(u8 dvo_port) 1711 + { 1712 + switch (dvo_port) { 1713 + case DVO_PORT_HDMIA: 1714 + case DVO_PORT_HDMIB: 1715 + case DVO_PORT_HDMIC: 1716 + case DVO_PORT_HDMID: 1717 + case DVO_PORT_HDMIE: 1718 + case DVO_PORT_HDMIF: 1719 + case DVO_PORT_HDMIG: 1720 + case DVO_PORT_HDMIH: 1721 + case DVO_PORT_HDMII: 1722 + return DVO_PORT_HDMIA; 1723 + case DVO_PORT_DPA: 1724 + case DVO_PORT_DPB: 1725 + case DVO_PORT_DPC: 1726 + case DVO_PORT_DPD: 1727 + case DVO_PORT_DPE: 1728 + case DVO_PORT_DPF: 1729 + case DVO_PORT_DPG: 1730 + case DVO_PORT_DPH: 1731 + case DVO_PORT_DPI: 1732 + return DVO_PORT_DPA; 1733 + case DVO_PORT_MIPIA: 1734 + case DVO_PORT_MIPIB: 1735 + case DVO_PORT_MIPIC: 1736 + case DVO_PORT_MIPID: 1737 + return DVO_PORT_MIPIA; 1738 + default: 1739 + return dvo_port; 1740 + } 1741 + } 1742 + 1710 1743 static enum port __dvo_port_to_port(int n_ports, int n_dvo, 1711 1744 const int port_mapping[][3], u8 dvo_port) 1712 1745 { ··· 1963 1930 } 1964 1931 } 1965 1932 1966 - static enum port get_edp_port(struct drm_i915_private *i915) 1967 - { 1968 - const struct intel_bios_encoder_data *devdata; 1969 - enum port port; 1970 - 1971 - for_each_port(port) { 1972 - devdata = i915->vbt.ports[port]; 1973 - 1974 - if (devdata && intel_bios_encoder_supports_edp(devdata)) 1975 - return port; 1976 - } 1977 - 1978 - return PORT_NONE; 1979 - } 1980 - 1981 - /* 1982 - * FIXME: The power sequencer and backlight code currently do not support more 1983 - * than one set registers, at least not on anything other than VLV/CHV. It will 1984 - * clobber the registers. As a temporary workaround, gracefully prevent more 1985 - * than one eDP from being registered. 
1986 - */ 1987 - static void sanitize_dual_edp(struct intel_bios_encoder_data *devdata, 1988 - enum port port) 1989 - { 1990 - struct drm_i915_private *i915 = devdata->i915; 1991 - struct child_device_config *child = &devdata->child; 1992 - enum port p; 1993 - 1994 - /* CHV might not clobber PPS registers. */ 1995 - if (IS_CHERRYVIEW(i915)) 1996 - return; 1997 - 1998 - p = get_edp_port(i915); 1999 - if (p == PORT_NONE) 2000 - return; 2001 - 2002 - drm_dbg_kms(&i915->drm, "both ports %c and %c configured as eDP, " 2003 - "disabling port %c eDP\n", port_name(p), port_name(port), 2004 - port_name(port)); 2005 - 2006 - child->device_type &= ~DEVICE_TYPE_DISPLAYPORT_OUTPUT; 2007 - child->device_type &= ~DEVICE_TYPE_INTERNAL_CONNECTOR; 2008 - } 2009 - 2010 1933 static bool is_port_valid(struct drm_i915_private *i915, enum port port) 2011 1934 { 2012 1935 /* ··· 2019 2030 HAS_LSPCON(i915) && child->lspcon, 2020 2031 supports_typec_usb, supports_tbt, 2021 2032 devdata->dsc != NULL); 2022 - 2023 - if (is_edp) 2024 - sanitize_dual_edp(devdata, port); 2025 2033 2026 2034 if (is_dvi) 2027 2035 sanitize_ddc_pin(devdata, port); ··· 2656 2670 return false; 2657 2671 } 2658 2672 2659 - static bool child_dev_is_dp_dual_mode(const struct child_device_config *child, 2660 - enum port port) 2673 + static bool child_dev_is_dp_dual_mode(const struct child_device_config *child) 2674 + { 2675 + if ((child->device_type & DEVICE_TYPE_DP_DUAL_MODE_BITS) != 2676 + (DEVICE_TYPE_DP_DUAL_MODE & DEVICE_TYPE_DP_DUAL_MODE_BITS)) 2677 + return false; 2678 + 2679 + if (dvo_port_type(child->dvo_port) == DVO_PORT_DPA) 2680 + return true; 2681 + 2682 + /* Only accept a HDMI dvo_port as DP++ if it has an AUX channel */ 2683 + if (dvo_port_type(child->dvo_port) == DVO_PORT_HDMIA && 2684 + child->aux_channel != 0) 2685 + return true; 2686 + 2687 + return false; 2688 + } 2689 + 2690 + bool intel_bios_is_port_dp_dual_mode(struct drm_i915_private *i915, 2691 + enum port port) 2661 2692 { 2662 2693 static const 
struct { 2663 2694 u16 dp, hdmi; ··· 2689 2686 [PORT_E] = { DVO_PORT_DPE, DVO_PORT_HDMIE, }, 2690 2687 [PORT_F] = { DVO_PORT_DPF, DVO_PORT_HDMIF, }, 2691 2688 }; 2689 + const struct intel_bios_encoder_data *devdata; 2690 + 2691 + if (HAS_DDI(i915)) { 2692 + const struct intel_bios_encoder_data *devdata; 2693 + 2694 + devdata = intel_bios_encoder_data_lookup(i915, port); 2695 + 2696 + return devdata && child_dev_is_dp_dual_mode(&devdata->child); 2697 + } 2692 2698 2693 2699 if (port == PORT_A || port >= ARRAY_SIZE(port_mapping)) 2694 2700 return false; 2695 2701 2696 - if ((child->device_type & DEVICE_TYPE_DP_DUAL_MODE_BITS) != 2697 - (DEVICE_TYPE_DP_DUAL_MODE & DEVICE_TYPE_DP_DUAL_MODE_BITS)) 2698 - return false; 2699 - 2700 - if (child->dvo_port == port_mapping[port].dp) 2701 - return true; 2702 - 2703 - /* Only accept a HDMI dvo_port as DP++ if it has an AUX channel */ 2704 - if (child->dvo_port == port_mapping[port].hdmi && 2705 - child->aux_channel != 0) 2706 - return true; 2707 - 2708 - return false; 2709 - } 2710 - 2711 - bool intel_bios_is_port_dp_dual_mode(struct drm_i915_private *i915, 2712 - enum port port) 2713 - { 2714 - const struct intel_bios_encoder_data *devdata; 2715 - 2716 2702 list_for_each_entry(devdata, &i915->vbt.display_devices, node) { 2717 - if (child_dev_is_dp_dual_mode(&devdata->child, port)) 2703 + if ((devdata->child.dvo_port == port_mapping[port].dp || 2704 + devdata->child.dvo_port == port_mapping[port].hdmi) && 2705 + child_dev_is_dp_dual_mode(&devdata->child)) 2718 2706 return true; 2719 2707 } 2720 2708
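The new `dvo_port_type()` helper above collapses every per-port VBT code onto the first member of its family, so `child_dev_is_dp_dual_mode()` can test "is this a DP port" or "is this an HDMI port" without a per-generation mapping table. A sketch of that normalization (the enum values here are made up for illustration; the real codes live in the i915 VBT definitions):

```c
/* Illustrative subset of the VBT DVO port codes (values are invented;
 * only the grouping-by-family idea matters here). */
enum dvo_port {
	DVO_PORT_HDMIA, DVO_PORT_HDMIB, DVO_PORT_HDMIC,
	DVO_PORT_DPA, DVO_PORT_DPB, DVO_PORT_DPC,
	DVO_PORT_MIPIA, DVO_PORT_MIPIB,
	DVO_PORT_CRT,
};

/* Collapse any port of a family onto that family's first member, as
 * dvo_port_type() does, so callers compare against one canonical value
 * (DVO_PORT_HDMIA, DVO_PORT_DPA, ...) instead of every port letter. */
enum dvo_port dvo_port_type(enum dvo_port port)
{
	switch (port) {
	case DVO_PORT_HDMIA: case DVO_PORT_HDMIB: case DVO_PORT_HDMIC:
		return DVO_PORT_HDMIA;
	case DVO_PORT_DPA: case DVO_PORT_DPB: case DVO_PORT_DPC:
		return DVO_PORT_DPA;
	case DVO_PORT_MIPIA: case DVO_PORT_MIPIB:
		return DVO_PORT_MIPIA;
	default:
		return port;	/* not a grouped family: pass through */
	}
}
```

This is why the DDI path in the diff no longer needs the `port_mapping[]` table: the normalized type alone answers the DP++ question.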
+22 -22
drivers/gpu/drm/i915/display/intel_cdclk.c
··· 2885 2885 return freq; 2886 2886 } 2887 2887 2888 - static struct intel_cdclk_funcs tgl_cdclk_funcs = { 2888 + static const struct intel_cdclk_funcs tgl_cdclk_funcs = { 2889 2889 .get_cdclk = bxt_get_cdclk, 2890 2890 .set_cdclk = bxt_set_cdclk, 2891 2891 .bw_calc_min_cdclk = skl_bw_calc_min_cdclk, ··· 2893 2893 .calc_voltage_level = tgl_calc_voltage_level, 2894 2894 }; 2895 2895 2896 - static struct intel_cdclk_funcs ehl_cdclk_funcs = { 2896 + static const struct intel_cdclk_funcs ehl_cdclk_funcs = { 2897 2897 .get_cdclk = bxt_get_cdclk, 2898 2898 .set_cdclk = bxt_set_cdclk, 2899 2899 .bw_calc_min_cdclk = skl_bw_calc_min_cdclk, ··· 2901 2901 .calc_voltage_level = ehl_calc_voltage_level, 2902 2902 }; 2903 2903 2904 - static struct intel_cdclk_funcs icl_cdclk_funcs = { 2904 + static const struct intel_cdclk_funcs icl_cdclk_funcs = { 2905 2905 .get_cdclk = bxt_get_cdclk, 2906 2906 .set_cdclk = bxt_set_cdclk, 2907 2907 .bw_calc_min_cdclk = skl_bw_calc_min_cdclk, ··· 2909 2909 .calc_voltage_level = icl_calc_voltage_level, 2910 2910 }; 2911 2911 2912 - static struct intel_cdclk_funcs bxt_cdclk_funcs = { 2912 + static const struct intel_cdclk_funcs bxt_cdclk_funcs = { 2913 2913 .get_cdclk = bxt_get_cdclk, 2914 2914 .set_cdclk = bxt_set_cdclk, 2915 2915 .bw_calc_min_cdclk = skl_bw_calc_min_cdclk, ··· 2917 2917 .calc_voltage_level = bxt_calc_voltage_level, 2918 2918 }; 2919 2919 2920 - static struct intel_cdclk_funcs skl_cdclk_funcs = { 2920 + static const struct intel_cdclk_funcs skl_cdclk_funcs = { 2921 2921 .get_cdclk = skl_get_cdclk, 2922 2922 .set_cdclk = skl_set_cdclk, 2923 2923 .bw_calc_min_cdclk = skl_bw_calc_min_cdclk, 2924 2924 .modeset_calc_cdclk = skl_modeset_calc_cdclk, 2925 2925 }; 2926 2926 2927 - static struct intel_cdclk_funcs bdw_cdclk_funcs = { 2927 + static const struct intel_cdclk_funcs bdw_cdclk_funcs = { 2928 2928 .get_cdclk = bdw_get_cdclk, 2929 2929 .set_cdclk = bdw_set_cdclk, 2930 2930 .bw_calc_min_cdclk = intel_bw_calc_min_cdclk, 2931 2931 
.modeset_calc_cdclk = bdw_modeset_calc_cdclk, 2932 2932 }; 2933 2933 2934 - static struct intel_cdclk_funcs chv_cdclk_funcs = { 2934 + static const struct intel_cdclk_funcs chv_cdclk_funcs = { 2935 2935 .get_cdclk = vlv_get_cdclk, 2936 2936 .set_cdclk = chv_set_cdclk, 2937 2937 .bw_calc_min_cdclk = intel_bw_calc_min_cdclk, 2938 2938 .modeset_calc_cdclk = vlv_modeset_calc_cdclk, 2939 2939 }; 2940 2940 2941 - static struct intel_cdclk_funcs vlv_cdclk_funcs = { 2941 + static const struct intel_cdclk_funcs vlv_cdclk_funcs = { 2942 2942 .get_cdclk = vlv_get_cdclk, 2943 2943 .set_cdclk = vlv_set_cdclk, 2944 2944 .bw_calc_min_cdclk = intel_bw_calc_min_cdclk, 2945 2945 .modeset_calc_cdclk = vlv_modeset_calc_cdclk, 2946 2946 }; 2947 2947 2948 - static struct intel_cdclk_funcs hsw_cdclk_funcs = { 2948 + static const struct intel_cdclk_funcs hsw_cdclk_funcs = { 2949 2949 .get_cdclk = hsw_get_cdclk, 2950 2950 .bw_calc_min_cdclk = intel_bw_calc_min_cdclk, 2951 2951 .modeset_calc_cdclk = fixed_modeset_calc_cdclk, 2952 2952 }; 2953 2953 2954 2954 /* SNB, IVB, 965G, 945G */ 2955 - static struct intel_cdclk_funcs fixed_400mhz_cdclk_funcs = { 2955 + static const struct intel_cdclk_funcs fixed_400mhz_cdclk_funcs = { 2956 2956 .get_cdclk = fixed_400mhz_get_cdclk, 2957 2957 .bw_calc_min_cdclk = intel_bw_calc_min_cdclk, 2958 2958 .modeset_calc_cdclk = fixed_modeset_calc_cdclk, 2959 2959 }; 2960 2960 2961 - static struct intel_cdclk_funcs ilk_cdclk_funcs = { 2961 + static const struct intel_cdclk_funcs ilk_cdclk_funcs = { 2962 2962 .get_cdclk = fixed_450mhz_get_cdclk, 2963 2963 .bw_calc_min_cdclk = intel_bw_calc_min_cdclk, 2964 2964 .modeset_calc_cdclk = fixed_modeset_calc_cdclk, 2965 2965 }; 2966 2966 2967 - static struct intel_cdclk_funcs gm45_cdclk_funcs = { 2967 + static const struct intel_cdclk_funcs gm45_cdclk_funcs = { 2968 2968 .get_cdclk = gm45_get_cdclk, 2969 2969 .bw_calc_min_cdclk = intel_bw_calc_min_cdclk, 2970 2970 .modeset_calc_cdclk = fixed_modeset_calc_cdclk, ··· 2972 
2972 2973 2973 /* G45 uses G33 */ 2974 2974 2975 - static struct intel_cdclk_funcs i965gm_cdclk_funcs = { 2975 + static const struct intel_cdclk_funcs i965gm_cdclk_funcs = { 2976 2976 .get_cdclk = i965gm_get_cdclk, 2977 2977 .bw_calc_min_cdclk = intel_bw_calc_min_cdclk, 2978 2978 .modeset_calc_cdclk = fixed_modeset_calc_cdclk, ··· 2980 2980 2981 2981 /* i965G uses fixed 400 */ 2982 2982 2983 - static struct intel_cdclk_funcs pnv_cdclk_funcs = { 2983 + static const struct intel_cdclk_funcs pnv_cdclk_funcs = { 2984 2984 .get_cdclk = pnv_get_cdclk, 2985 2985 .bw_calc_min_cdclk = intel_bw_calc_min_cdclk, 2986 2986 .modeset_calc_cdclk = fixed_modeset_calc_cdclk, 2987 2987 }; 2988 2988 2989 - static struct intel_cdclk_funcs g33_cdclk_funcs = { 2989 + static const struct intel_cdclk_funcs g33_cdclk_funcs = { 2990 2990 .get_cdclk = g33_get_cdclk, 2991 2991 .bw_calc_min_cdclk = intel_bw_calc_min_cdclk, 2992 2992 .modeset_calc_cdclk = fixed_modeset_calc_cdclk, 2993 2993 }; 2994 2994 2995 - static struct intel_cdclk_funcs i945gm_cdclk_funcs = { 2995 + static const struct intel_cdclk_funcs i945gm_cdclk_funcs = { 2996 2996 .get_cdclk = i945gm_get_cdclk, 2997 2997 .bw_calc_min_cdclk = intel_bw_calc_min_cdclk, 2998 2998 .modeset_calc_cdclk = fixed_modeset_calc_cdclk, ··· 3000 3000 3001 3001 /* i945G uses fixed 400 */ 3002 3002 3003 - static struct intel_cdclk_funcs i915gm_cdclk_funcs = { 3003 + static const struct intel_cdclk_funcs i915gm_cdclk_funcs = { 3004 3004 .get_cdclk = i915gm_get_cdclk, 3005 3005 .bw_calc_min_cdclk = intel_bw_calc_min_cdclk, 3006 3006 .modeset_calc_cdclk = fixed_modeset_calc_cdclk, 3007 3007 }; 3008 3008 3009 - static struct intel_cdclk_funcs i915g_cdclk_funcs = { 3009 + static const struct intel_cdclk_funcs i915g_cdclk_funcs = { 3010 3010 .get_cdclk = fixed_333mhz_get_cdclk, 3011 3011 .bw_calc_min_cdclk = intel_bw_calc_min_cdclk, 3012 3012 .modeset_calc_cdclk = fixed_modeset_calc_cdclk, 3013 3013 }; 3014 3014 3015 - static struct intel_cdclk_funcs 
i865g_cdclk_funcs = { 3015 + static const struct intel_cdclk_funcs i865g_cdclk_funcs = { 3016 3016 .get_cdclk = fixed_266mhz_get_cdclk, 3017 3017 .bw_calc_min_cdclk = intel_bw_calc_min_cdclk, 3018 3018 .modeset_calc_cdclk = fixed_modeset_calc_cdclk, 3019 3019 }; 3020 3020 3021 - static struct intel_cdclk_funcs i85x_cdclk_funcs = { 3021 + static const struct intel_cdclk_funcs i85x_cdclk_funcs = { 3022 3022 .get_cdclk = i85x_get_cdclk, 3023 3023 .bw_calc_min_cdclk = intel_bw_calc_min_cdclk, 3024 3024 .modeset_calc_cdclk = fixed_modeset_calc_cdclk, 3025 3025 }; 3026 3026 3027 - static struct intel_cdclk_funcs i845g_cdclk_funcs = { 3027 + static const struct intel_cdclk_funcs i845g_cdclk_funcs = { 3028 3028 .get_cdclk = fixed_200mhz_get_cdclk, 3029 3029 .bw_calc_min_cdclk = intel_bw_calc_min_cdclk, 3030 3030 .modeset_calc_cdclk = fixed_modeset_calc_cdclk, 3031 3031 }; 3032 3032 3033 - static struct intel_cdclk_funcs i830_cdclk_funcs = { 3033 + static const struct intel_cdclk_funcs i830_cdclk_funcs = { 3034 3034 .get_cdclk = fixed_133mhz_get_cdclk, 3035 3035 .bw_calc_min_cdclk = intel_bw_calc_min_cdclk, 3036 3036 .modeset_calc_cdclk = fixed_modeset_calc_cdclk,
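The cdclk change above is purely about placement: marking the function-pointer tables `const` lets the compiler emit them into read-only data, which both documents that they are never written and hardens them against stray writes. A minimal sketch of the pattern:

```c
/* A vtable of operations; declaring instances 'static const' lets the
 * compiler place them in read-only data, as the intel_cdclk_funcs
 * constification does. */
struct clk_funcs {
	int (*get_clk)(void);
};

static int fixed_400mhz_get_clk(void)
{
	return 400000;	/* kHz */
}

/* One immutable, read-only instance per platform family. */
static const struct clk_funcs fixed_400mhz_funcs = {
	.get_clk = fixed_400mhz_get_clk,
};

/* Consumers take a const pointer and only ever call through it. */
int read_clk(const struct clk_funcs *funcs)
{
	return funcs->get_clk();
}
```

The prerequisite, visible in the diff, is that nothing assigns into the tables after init; once that holds, `const` is a free correctness win.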
+1
drivers/gpu/drm/i915/display/intel_ddi.c
··· 4361 4361 enum phy phy = intel_port_to_phy(i915, encoder->port); 4362 4362 4363 4363 intel_dp_encoder_shutdown(encoder); 4364 + intel_hdmi_encoder_shutdown(encoder); 4364 4365 4365 4366 if (!intel_phy_is_tc(i915, phy)) 4366 4367 return;
+8 -1
drivers/gpu/drm/i915/display/intel_display.c
··· 848 848 int i; 849 849 850 850 for (i = 0 ; i < ARRAY_SIZE(rem_info->plane); i++) { 851 + unsigned int plane_size; 852 + 853 + plane_size = rem_info->plane[i].dst_stride * rem_info->plane[i].height; 854 + if (plane_size == 0) 855 + continue; 856 + 851 857 if (rem_info->plane_alignment) 852 858 size = ALIGN(size, rem_info->plane_alignment); 853 - size += rem_info->plane[i].dst_stride * rem_info->plane[i].height; 859 + 860 + size += plane_size; 854 861 } 855 862 856 863 return size;
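The intel_display fix above changes the size accumulation so that zero-sized remapped planes are skipped before the alignment step, rather than forcing alignment padding for a plane that contributes nothing. A standalone sketch of the corrected loop (helper names here are hypothetical; the real code uses the kernel's `ALIGN()`):

```c
#include <stddef.h>

/* Generic round-up-to-multiple; the kernel's ALIGN() assumes a
 * power-of-two alignment, this version just guards against zero. */
#define ALIGN_UP(x, a) ((a) ? (((x) + (a) - 1) / (a) * (a)) : (x))

struct plane {
	unsigned int dst_stride, height;
};

/* Sum plane sizes, aligning each plane's start offset, but skip planes
 * whose size is zero so they add no alignment padding -- the behavior
 * the diff introduces into intel_remapped_info_size(). */
unsigned int remapped_size(const struct plane *planes, size_t n,
			   unsigned int alignment)
{
	unsigned int size = 0;
	size_t i;

	for (i = 0; i < n; i++) {
		unsigned int plane_size =
			planes[i].dst_stride * planes[i].height;

		if (plane_size == 0)
			continue;
		size = ALIGN_UP(size, alignment);
		size += plane_size;
	}
	return size;
}
```

With the old order, a trailing zero-sized plane would still round `size` up to the next alignment boundary; with the skip, it leaves the total untouched.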
+22 -9
drivers/gpu/drm/i915/display/intel_dp.c
··· 120 120 return crtc_state->port_clock >= 1000000; 121 121 } 122 122 123 + static void intel_dp_set_default_sink_rates(struct intel_dp *intel_dp) 124 + { 125 + intel_dp->sink_rates[0] = 162000; 126 + intel_dp->num_sink_rates = 1; 127 + } 128 + 123 129 /* update sink rates from dpcd */ 124 130 static void intel_dp_set_sink_rates(struct intel_dp *intel_dp) 125 131 { ··· 287 281 */ 288 282 int max_link_rate_kbps = max_link_rate * 10; 289 283 290 - max_link_rate_kbps = DIV_ROUND_CLOSEST_ULL(max_link_rate_kbps * 9671, 10000); 284 + max_link_rate_kbps = DIV_ROUND_CLOSEST_ULL(mul_u32_u32(max_link_rate_kbps, 9671), 10000); 291 285 max_link_rate = max_link_rate_kbps / 8; 292 286 } 293 287 ··· 1864 1858 intel_dp->lane_count = lane_count; 1865 1859 } 1866 1860 1861 + static void intel_dp_reset_max_link_params(struct intel_dp *intel_dp) 1862 + { 1863 + intel_dp->max_link_lane_count = intel_dp_max_common_lane_count(intel_dp); 1864 + intel_dp->max_link_rate = intel_dp_max_common_rate(intel_dp); 1865 + } 1866 + 1867 1867 /* Enable backlight PWM and backlight PP control. */ 1868 1868 void intel_edp_backlight_on(const struct intel_crtc_state *crtc_state, 1869 1869 const struct drm_connector_state *conn_state) ··· 2029 2017 if (intel_dp->dpcd[DP_DPCD_REV] == 0) 2030 2018 intel_dp_get_dpcd(intel_dp); 2031 2019 2032 - intel_dp->max_link_lane_count = intel_dp_max_common_lane_count(intel_dp); 2033 - intel_dp->max_link_rate = intel_dp_max_common_rate(intel_dp); 2020 + intel_dp_reset_max_link_params(intel_dp); 2034 2021 } 2035 2022 2036 2023 bool intel_dp_initial_fastset_check(struct intel_encoder *encoder, ··· 2567 2556 */ 2568 2557 intel_psr_init_dpcd(intel_dp); 2569 2558 2559 + /* Clear the default sink rates */ 2560 + intel_dp->num_sink_rates = 0; 2561 + 2570 2562 /* Read the eDP 1.4+ supported link rates. */
2571 2563 if (intel_dp->edp_dpcd[0] >= DP_EDP_14) { 2572 2564 __le16 sink_rates[DP_MAX_SUPPORTED_RATES]; ··· 2605 2591 intel_dp_set_sink_rates(intel_dp); 2606 2592 2607 2593 intel_dp_set_common_rates(intel_dp); 2594 + intel_dp_reset_max_link_params(intel_dp); 2608 2595 2609 2596 /* Read the eDP DSC DPCD registers */ 2610 2597 if (DISPLAY_VER(dev_priv) >= 10) ··· 4347 4332 * supports link training fallback params. 4348 4333 */ 4349 4334 if (intel_dp->reset_link_params || intel_dp->is_mst) { 4350 - /* Initial max link lane count */ 4351 - intel_dp->max_link_lane_count = intel_dp_max_common_lane_count(intel_dp); 4352 - 4353 - /* Initial max link rate */ 4354 - intel_dp->max_link_rate = intel_dp_max_common_rate(intel_dp); 4355 - 4335 + intel_dp_reset_max_link_params(intel_dp); 4356 4336 intel_dp->reset_link_params = false; 4357 4337 } 4358 4338 ··· 5013 5003 } 5014 5004 5015 5005 intel_dp_set_source_rates(intel_dp); 5006 + intel_dp_set_default_sink_rates(intel_dp); 5007 + intel_dp_set_common_rates(intel_dp); 5008 + intel_dp_reset_max_link_params(intel_dp); 5016 5009 5017 5010 if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv)) 5018 5011 intel_dp->pps.active_pipe = vlv_active_pipe(intel_dp);
+2 -2
drivers/gpu/drm/i915/display/intel_fb.c
··· 378 378 intel_fb_plane_get_subsampling(&main_hsub, &main_vsub, &fb->base, main_plane); 379 379 intel_fb_plane_get_subsampling(&hsub, &vsub, &fb->base, color_plane); 380 380 381 - *w = main_width / main_hsub / hsub; 382 - *h = main_height / main_vsub / vsub; 381 + *w = DIV_ROUND_UP(main_width, main_hsub * hsub); 382 + *h = DIV_ROUND_UP(main_height, main_vsub * vsub); 383 383 } 384 384 385 385 static u32 intel_adjust_tile_offset(int *x, int *y,
+14 -2
drivers/gpu/drm/i915/display/intel_hdmi.c
··· 1246 1246 void intel_dp_dual_mode_set_tmds_output(struct intel_hdmi *hdmi, bool enable) 1247 1247 { 1248 1248 struct drm_i915_private *dev_priv = intel_hdmi_to_i915(hdmi); 1249 - struct i2c_adapter *adapter = 1250 - intel_gmbus_get_adapter(dev_priv, hdmi->ddc_bus); 1249 + struct i2c_adapter *adapter; 1251 1250 1252 1251 if (hdmi->dp_dual_mode.type < DRM_DP_DUAL_MODE_TYPE2_DVI) 1253 1252 return; 1253 + 1254 + adapter = intel_gmbus_get_adapter(dev_priv, hdmi->ddc_bus); 1254 1255 1255 1256 drm_dbg_kms(&dev_priv->drm, "%s DP dual mode adaptor TMDS output\n", 1256 1257 enable ? "Enabling" : "Disabling"); ··· 2257 2256 } 2258 2257 2259 2258 return 0; 2259 + } 2260 + 2261 + void intel_hdmi_encoder_shutdown(struct intel_encoder *encoder) 2262 + { 2263 + struct intel_hdmi *intel_hdmi = enc_to_intel_hdmi(encoder); 2264 + 2265 + /* 2266 + * Give a hand to buggy BIOSen which forget to turn 2267 + * the TMDS output buffers back on after a reboot. 2268 + */ 2269 + intel_dp_dual_mode_set_tmds_output(intel_hdmi, true); 2260 2270 } 2261 2271 2262 2272 static void
+1
drivers/gpu/drm/i915/display/intel_hdmi.h
··· 28 28 int intel_hdmi_compute_config(struct intel_encoder *encoder, 29 29 struct intel_crtc_state *pipe_config, 30 30 struct drm_connector_state *conn_state); 31 + void intel_hdmi_encoder_shutdown(struct intel_encoder *encoder); 31 32 bool intel_hdmi_handle_sink_scrambling(struct intel_encoder *encoder, 32 33 struct drm_connector *connector, 33 34 bool high_tmds_clock_ratio,
+2
drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
··· 9 9 #include <linux/dma-resv.h> 10 10 #include <linux/module.h> 11 11 12 + #include <asm/smp.h> 13 + 12 14 #include "i915_drv.h" 13 15 #include "i915_gem_object.h" 14 16 #include "i915_scatterlist.h"
+3
drivers/gpu/drm/i915/gt/intel_ggtt.c
··· 1396 1396 { 1397 1397 unsigned int row; 1398 1398 1399 + if (!width || !height) 1400 + return sg; 1401 + 1399 1402 if (alignment_pad) { 1400 1403 st->nents++; 1401 1404
+2 -1
drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
··· 2373 2373 unsigned long flags; 2374 2374 bool disabled; 2375 2375 2376 + lockdep_assert_held(&guc->submission_state.lock); 2376 2377 GEM_BUG_ON(!intel_gt_pm_is_awake(gt)); 2377 2378 GEM_BUG_ON(!lrc_desc_registered(guc, ce->guc_id.id)); 2378 2379 GEM_BUG_ON(ce != __get_context(guc, ce->guc_id.id)); ··· 2389 2388 } 2390 2389 spin_unlock_irqrestore(&ce->guc_state.lock, flags); 2391 2390 if (unlikely(disabled)) { 2392 - release_guc_id(guc, ce); 2391 + __release_guc_id(guc, ce); 2393 2392 __guc_context_destroy(ce); 2394 2393 return; 2395 2394 }
+5 -29
drivers/gpu/drm/i915/i915_request.c
··· 1537 1537 struct drm_i915_gem_object *obj, 1538 1538 bool write) 1539 1539 { 1540 - struct dma_fence *excl; 1540 + struct dma_resv_iter cursor; 1541 + struct dma_fence *fence; 1541 1542 int ret = 0; 1542 1543 1543 - if (write) { 1544 - struct dma_fence **shared; 1545 - unsigned int count, i; 1546 - 1547 - ret = dma_resv_get_fences(obj->base.resv, &excl, &count, 1548 - &shared); 1544 + dma_resv_for_each_fence(&cursor, obj->base.resv, write, fence) { 1545 + ret = i915_request_await_dma_fence(to, fence); 1549 1546 if (ret) 1550 - return ret; 1551 - 1552 - for (i = 0; i < count; i++) { 1553 - ret = i915_request_await_dma_fence(to, shared[i]); 1554 - if (ret) 1555 - break; 1556 - 1557 - dma_fence_put(shared[i]); 1558 - } 1559 - 1560 - for (; i < count; i++) 1561 - dma_fence_put(shared[i]); 1562 - kfree(shared); 1563 - } else { 1564 - excl = dma_resv_get_excl_unlocked(obj->base.resv); 1565 - } 1566 - 1567 - if (excl) { 1568 - if (ret == 0) 1569 - ret = i915_request_await_dma_fence(to, excl); 1570 - 1571 - dma_fence_put(excl); 1547 + break; 1572 1548 } 1573 1549 1574 1550 return ret;
-2
drivers/gpu/drm/imx/imx-drm-core.c
··· 81 81 struct drm_plane_state *old_plane_state, *new_plane_state; 82 82 bool plane_disabling = false; 83 83 int i; 84 - bool fence_cookie = dma_fence_begin_signalling(); 85 84 86 85 drm_atomic_helper_commit_modeset_disables(dev, state); 87 86 ··· 111 112 } 112 113 113 114 drm_atomic_helper_commit_hw_done(state); 114 - dma_fence_end_signalling(fence_cookie); 115 115 } 116 116 117 117 static const struct drm_mode_config_helper_funcs imx_drm_mode_config_helpers = {
+7 -1
drivers/gpu/drm/mxsfb/mxsfb_kms.c
··· 88 88 ctrl |= CTRL_BUS_WIDTH_24; 89 89 break; 90 90 default: 91 - dev_err(drm->dev, "Unknown media bus format %d\n", bus_format); 91 + dev_err(drm->dev, "Unknown media bus format 0x%x\n", bus_format); 92 92 break; 93 93 } 94 94 ··· 362 362 drm_atomic_get_new_bridge_state(state, 363 363 mxsfb->bridge); 364 364 bus_format = bridge_state->input_bus_cfg.format; 365 + if (bus_format == MEDIA_BUS_FMT_FIXED) { 366 + dev_warn_once(drm->dev, 367 + "Bridge does not provide bus format, assuming MEDIA_BUS_FMT_RGB888_1X24.\n" 368 + "Please fix bridge driver by handling atomic_get_input_bus_fmts.\n"); 369 + bus_format = MEDIA_BUS_FMT_RGB888_1X24; 370 + } 365 371 } 366 372 367 373 /* If there is no bridge, use bus format from connector */
-4
drivers/gpu/drm/nouveau/nouveau_bo.c
··· 1249 1249 { 1250 1250 struct ttm_tt *ttm_dma = (void *)ttm; 1251 1251 struct nouveau_drm *drm; 1252 - struct device *dev; 1253 1252 bool slave = !!(ttm->page_flags & TTM_TT_FLAG_EXTERNAL); 1254 1253 1255 1254 if (ttm_tt_is_populated(ttm)) ··· 1261 1262 } 1262 1263 1263 1264 drm = nouveau_bdev(bdev); 1264 - dev = drm->dev->dev; 1265 1265 1266 1266 return ttm_pool_alloc(&drm->ttm.bdev.pool, ttm, ctx); 1267 1267 } ··· 1270 1272 struct ttm_tt *ttm) 1271 1273 { 1272 1274 struct nouveau_drm *drm; 1273 - struct device *dev; 1274 1275 bool slave = !!(ttm->page_flags & TTM_TT_FLAG_EXTERNAL); 1275 1276 1276 1277 if (slave) ··· 1278 1281 nouveau_ttm_tt_unbind(bdev, ttm); 1279 1282 1280 1283 drm = nouveau_bdev(bdev); 1281 - dev = drm->dev->dev; 1282 1284 1283 1285 return ttm_pool_free(&drm->ttm.bdev.pool, ttm); 1284 1286 }
+37 -5
drivers/gpu/drm/nouveau/nouveau_drm.c
··· 562 562 nvkm_dbgopt(nouveau_debug, "DRM"); 563 563 564 564 INIT_LIST_HEAD(&drm->clients); 565 + mutex_init(&drm->clients_lock); 565 566 spin_lock_init(&drm->tile.lock); 566 567 567 568 /* workaround an odd issue on nvc1 by disabling the device's ··· 633 632 static void 634 633 nouveau_drm_device_fini(struct drm_device *dev) 635 634 { 635 + struct nouveau_cli *cli, *temp_cli; 636 636 struct nouveau_drm *drm = nouveau_drm(dev); 637 637 638 638 if (nouveau_pmops_runtime()) { ··· 658 656 nouveau_ttm_fini(drm); 659 657 nouveau_vga_fini(drm); 660 658 659 + /* 660 + * There may be existing clients from as-yet unclosed files. For now, 661 + * clean them up here rather than deferring until the file is closed, 662 + * but this likely not correct if we want to support hot-unplugging 663 + * properly. 664 + */ 665 + mutex_lock(&drm->clients_lock); 666 + list_for_each_entry_safe(cli, temp_cli, &drm->clients, head) { 667 + list_del(&cli->head); 668 + mutex_lock(&cli->mutex); 669 + if (cli->abi16) 670 + nouveau_abi16_fini(cli->abi16); 671 + mutex_unlock(&cli->mutex); 672 + nouveau_cli_fini(cli); 673 + kfree(cli); 674 + } 675 + mutex_unlock(&drm->clients_lock); 676 + 661 677 nouveau_cli_fini(&drm->client); 662 678 nouveau_cli_fini(&drm->master); 663 679 nvif_parent_dtor(&drm->parent); 680 + mutex_destroy(&drm->clients_lock); 664 681 kfree(drm); 665 682 } 666 683 ··· 817 796 struct nvkm_client *client; 818 797 struct nvkm_device *device; 819 798 820 - drm_dev_unregister(dev); 799 + drm_dev_unplug(dev); 821 800 822 801 client = nvxx_client(&drm->client.base); 823 802 device = nvkm_device_find(client->device); ··· 1111 1090 1112 1091 fpriv->driver_priv = cli; 1113 1092 1114 - mutex_lock(&drm->client.mutex); 1093 + mutex_lock(&drm->clients_lock); 1115 1094 list_add(&cli->head, &drm->clients); 1116 - mutex_unlock(&drm->client.mutex); 1095 + mutex_unlock(&drm->clients_lock); 1117 1096 1118 1097 done: 1119 1098 if (ret && cli) { ··· 1131 1110 { 1132 1111 struct nouveau_cli *cli = nouveau_cli(fpriv);
1133 1112 struct nouveau_drm *drm = nouveau_drm(dev); 1113 + int dev_index; 1114 + 1115 + /* 1116 + * The device is gone, and as it currently stands all clients are 1117 + * cleaned up in the removal codepath. In the future this may change 1118 + * so that we can support hot-unplugging, but for now we immediately 1119 + * return to avoid a double-free situation. 1120 + */ 1121 + if (!drm_dev_enter(dev, &dev_index)) 1122 + return; 1134 1123 1135 1124 pm_runtime_get_sync(dev->dev); 1136 1125 ··· 1149 1118 nouveau_abi16_fini(cli->abi16); 1150 1119 mutex_unlock(&cli->mutex); 1151 1120 1152 - mutex_lock(&drm->client.mutex); 1121 + mutex_lock(&drm->clients_lock); 1153 1122 list_del(&cli->head); 1154 - mutex_unlock(&drm->client.mutex); 1123 + mutex_unlock(&drm->clients_lock); 1155 1124 nouveau_cli_fini(cli); 1156 1125 kfree(cli); 1157 1126 pm_runtime_mark_last_busy(dev->dev); 1158 1127 pm_runtime_put_autosuspend(dev->dev); 1128 + drm_dev_exit(dev_index); 1159 1129 } 1160 1130 1161 1131 static const struct drm_ioctl_desc
+5
drivers/gpu/drm/nouveau/nouveau_drv.h
··· 139 139 140 140 struct list_head clients; 141 141 142 + /** 143 + * @clients_lock: Protects access to the @clients list of &struct nouveau_cli. 144 + */ 145 + struct mutex clients_lock; 146 + 142 147 u8 old_pm_cap; 143 148 144 149 struct {
+2 -2
drivers/gpu/drm/nouveau/nouveau_gem.c
··· 56 56 57 57 nouveau_bo_del_io_reserve_lru(bo); 58 58 prot = vm_get_page_prot(vma->vm_flags); 59 - ret = ttm_bo_vm_fault_reserved(vmf, prot, TTM_BO_VM_NUM_PREFAULT, 1); 59 + ret = ttm_bo_vm_fault_reserved(vmf, prot, TTM_BO_VM_NUM_PREFAULT); 60 60 nouveau_bo_add_io_reserve_lru(bo); 61 61 if (ret == VM_FAULT_RETRY && !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT)) 62 62 return ret; ··· 337 337 struct ttm_buffer_object *bo = &nvbo->bo; 338 338 uint32_t domains = valid_domains & nvbo->valid_domains & 339 339 (write_domains ? write_domains : read_domains); 340 - uint32_t pref_domains = 0;; 340 + uint32_t pref_domains = 0; 341 341 342 342 if (!domains) 343 343 return -EINVAL;
+4
drivers/gpu/drm/nouveau/nouveau_svm.c
··· 162 162 */ 163 163 164 164 mm = get_task_mm(current); 165 + if (!mm) { 166 + return -EINVAL; 167 + } 165 168 mmap_read_lock(mm); 166 169 167 170 if (!cli->svm.svmm) { 168 171 mmap_read_unlock(mm); 172 + mmput(mm); 169 173 return -EINVAL; 170 174 } 171 175
+1 -1
drivers/gpu/drm/nouveau/nvkm/engine/ce/gt215.c
··· 78 78 gt215_ce_new(struct nvkm_device *device, enum nvkm_subdev_type type, int inst, 79 79 struct nvkm_engine **pengine) 80 80 { 81 - return nvkm_falcon_new_(&gt215_ce, device, type, inst, 81 + return nvkm_falcon_new_(&gt215_ce, device, type, -1, 82 82 (device->chipset != 0xaf), 0x104000, pengine); 83 83 }
+1 -2
drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
··· 3147 3147 WARN_ON(device->chip->ptr.inst & ~((1 << ARRAY_SIZE(device->ptr)) - 1)); \ 3148 3148 for (j = 0; device->chip->ptr.inst && j < ARRAY_SIZE(device->ptr); j++) { \ 3149 3149 if ((device->chip->ptr.inst & BIT(j)) && (subdev_mask & BIT_ULL(type))) { \ 3150 - int inst = (device->chip->ptr.inst == 1) ? -1 : (j); \ 3151 - ret = device->chip->ptr.ctor(device, (type), inst, &device->ptr[j]); \ 3150 + ret = device->chip->ptr.ctor(device, (type), (j), &device->ptr[j]); \ 3152 3151 subdev = nvkm_device_subdev(device, (type), (j)); \ 3153 3152 if (ret) { \ 3154 3153 nvkm_subdev_del(&subdev); \
-1
drivers/gpu/drm/nouveau/nvkm/engine/nvenc/base.c
··· 21 21 */ 22 22 #include "priv.h" 23 23 24 - #include "priv.h" 25 24 #include <core/firmware.h> 26 25 27 26 static void *
+1 -1
drivers/gpu/drm/nouveau/nvkm/subdev/mmu/uvmm.c
··· 299 299 page = uvmm->vmm->func->page; 300 300 for (nr = 0; page[nr].shift; nr++); 301 301 302 - if (!(ret = nvif_unpack(ret, &argv, &argc, args->v0, 0, 0, false))) { 302 + if (!(nvif_unpack(ret, &argv, &argc, args->v0, 0, 0, false))) { 303 303 if ((index = args->v0.index) >= nr) 304 304 return -EINVAL; 305 305 type = page[index].type;
+2 -2
drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c
··· 488 488 struct gp100_vmm_fault_cancel_v0 v0; 489 489 } *args = argv; 490 490 int ret = -ENOSYS; 491 - u32 inst, aper; 491 + u32 aper; 492 492 493 493 if ((ret = nvif_unpack(ret, &argv, &argc, args->v0, 0, 0, false))) 494 494 return ret; ··· 502 502 args->v0.inst |= 0x80000000; 503 503 504 504 if (!WARN_ON(nvkm_gr_ctxsw_pause(device))) { 505 - if ((inst = nvkm_gr_ctxsw_inst(device)) == args->v0.inst) { 505 + if (nvkm_gr_ctxsw_inst(device) == args->v0.inst) { 506 506 gf100_vmm_invalidate(vmm, 0x0000001b 507 507 /* CANCEL_TARGETED. */ | 508 508 (args->v0.hub << 20) |
+10
drivers/gpu/drm/panel/Kconfig
··· 520 520 Say Y here if you want to enable support for Sharp LS043T1LE01 qHD 521 521 (540x960) DSI panel as found on the Qualcomm APQ8074 Dragonboard 522 522 523 + config DRM_PANEL_SHARP_LS060T1SX01 524 + tristate "Sharp LS060T1SX01 FullHD video mode panel" 525 + depends on OF 526 + depends on DRM_MIPI_DSI 527 + depends on BACKLIGHT_CLASS_DEVICE 528 + help 529 + Say Y here if you want to enable support for Sharp LS060T1SX01 6.0" 530 + FullHD (1080x1920) DSI panel as found in Dragonboard Display Adapter 531 + Bundle. 532 + 523 533 config DRM_PANEL_SITRONIX_ST7701 524 534 tristate "Sitronix ST7701 panel driver" 525 535 depends on OF
+1
drivers/gpu/drm/panel/Makefile
··· 53 53 obj-$(CONFIG_DRM_PANEL_SHARP_LQ101R1SX01) += panel-sharp-lq101r1sx01.o 54 54 obj-$(CONFIG_DRM_PANEL_SHARP_LS037V7DW01) += panel-sharp-ls037v7dw01.o 55 55 obj-$(CONFIG_DRM_PANEL_SHARP_LS043T1LE01) += panel-sharp-ls043t1le01.o 56 + obj-$(CONFIG_DRM_PANEL_SHARP_LS060T1SX01) += panel-sharp-ls060t1sx01.o 56 57 obj-$(CONFIG_DRM_PANEL_SITRONIX_ST7701) += panel-sitronix-st7701.o 57 58 obj-$(CONFIG_DRM_PANEL_SITRONIX_ST7703) += panel-sitronix-st7703.o 58 59 obj-$(CONFIG_DRM_PANEL_SITRONIX_ST7789V) += panel-sitronix-st7789v.o
+9
drivers/gpu/drm/panel/panel-mantix-mlaf057we51.c
··· 8 8 #include <linux/backlight.h> 9 9 #include <linux/delay.h> 10 10 #include <linux/gpio/consumer.h> 11 + #include <linux/media-bus-format.h> 11 12 #include <linux/module.h> 12 13 #include <linux/of_device.h> 13 14 #include <linux/regulator/consumer.h> ··· 221 220 .height_mm = 130, 222 221 }; 223 222 223 + static const u32 mantix_bus_formats[] = { 224 + MEDIA_BUS_FMT_RGB888_1X24, 225 + }; 226 + 224 227 static int mantix_get_modes(struct drm_panel *panel, 225 228 struct drm_connector *connector) 226 229 { ··· 245 240 connector->display_info.width_mm = mode->width_mm; 246 241 connector->display_info.height_mm = mode->height_mm; 247 242 drm_mode_probed_add(connector, mode); 243 + 244 + drm_display_info_set_bus_formats(&connector->display_info, 245 + mantix_bus_formats, 246 + ARRAY_SIZE(mantix_bus_formats)); 248 247 249 248 return 1; 250 249 }
+2 -1
drivers/gpu/drm/panel/panel-samsung-s6e63m0-dsi.c
··· 116 116 static int s6e63m0_dsi_remove(struct mipi_dsi_device *dsi) 117 117 { 118 118 mipi_dsi_detach(dsi); 119 - return s6e63m0_remove(&dsi->dev); 119 + s6e63m0_remove(&dsi->dev); 120 + return 0; 120 121 } 121 122 122 123 static const struct of_device_id s6e63m0_dsi_of_match[] = {
+2 -1
drivers/gpu/drm/panel/panel-samsung-s6e63m0-spi.c
··· 64 64 65 65 static int s6e63m0_spi_remove(struct spi_device *spi) 66 66 { 67 - return s6e63m0_remove(&spi->dev); 67 + s6e63m0_remove(&spi->dev); 68 + return 0; 68 69 } 69 70 70 71 static const struct of_device_id s6e63m0_spi_of_match[] = {
+1 -3
drivers/gpu/drm/panel/panel-samsung-s6e63m0.c
··· 749 749 } 750 750 EXPORT_SYMBOL_GPL(s6e63m0_probe); 751 751 752 - int s6e63m0_remove(struct device *dev) 752 + void s6e63m0_remove(struct device *dev) 753 753 { 754 754 struct s6e63m0 *ctx = dev_get_drvdata(dev); 755 755 756 756 drm_panel_remove(&ctx->panel); 757 - 758 - return 0; 759 757 } 760 758 EXPORT_SYMBOL_GPL(s6e63m0_remove); 761 759
+1 -1
drivers/gpu/drm/panel/panel-samsung-s6e63m0.h
··· 35 35 const u8 *data, 36 36 size_t len), 37 37 bool dsi_mode); 38 - int s6e63m0_remove(struct device *dev); 38 + void s6e63m0_remove(struct device *dev); 39 39 40 40 #endif /* _PANEL_SAMSUNG_S6E63M0_H */
+333
drivers/gpu/drm/panel/panel-sharp-ls060t1sx01.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* Copyright (c) 2021 Linaro Ltd. 3 + * Generated with linux-mdss-dsi-panel-driver-generator from vendor device tree: 4 + * Copyright (c) 2013-2014, The Linux Foundation. All rights reserved. 5 + */ 6 + 7 + #include <linux/delay.h> 8 + #include <linux/gpio/consumer.h> 9 + #include <linux/module.h> 10 + #include <linux/of.h> 11 + #include <linux/regulator/consumer.h> 12 + 13 + #include <video/mipi_display.h> 14 + 15 + #include <drm/drm_mipi_dsi.h> 16 + #include <drm/drm_modes.h> 17 + #include <drm/drm_panel.h> 18 + 19 + struct sharp_ls060 { 20 + struct drm_panel panel; 21 + struct mipi_dsi_device *dsi; 22 + struct regulator *vddi_supply; 23 + struct regulator *vddh_supply; 24 + struct regulator *avdd_supply; 25 + struct regulator *avee_supply; 26 + struct gpio_desc *reset_gpio; 27 + bool prepared; 28 + }; 29 + 30 + static inline struct sharp_ls060 *to_sharp_ls060(struct drm_panel *panel) 31 + { 32 + return container_of(panel, struct sharp_ls060, panel); 33 + } 34 +
35 + #define dsi_dcs_write_seq(dsi, seq...) ({ \ 36 + static const u8 d[] = { seq }; \ 37 + \ 38 + mipi_dsi_dcs_write_buffer(dsi, d, ARRAY_SIZE(d)); \ 39 + }) 40 + 41 + static void sharp_ls060_reset(struct sharp_ls060 *ctx) 42 + { 43 + gpiod_set_value_cansleep(ctx->reset_gpio, 0); 44 + usleep_range(10000, 11000); 45 + gpiod_set_value_cansleep(ctx->reset_gpio, 1); 46 + usleep_range(10000, 11000); 47 + gpiod_set_value_cansleep(ctx->reset_gpio, 0); 48 + usleep_range(10000, 11000); 49 + } 50 + 51 + static int sharp_ls060_on(struct sharp_ls060 *ctx) 52 + { 53 + struct mipi_dsi_device *dsi = ctx->dsi; 54 + struct device *dev = &dsi->dev; 55 + int ret; 56 + 57 + dsi->mode_flags |= MIPI_DSI_MODE_LPM; 58 + 59 + ret = dsi_dcs_write_seq(dsi, 0xbb, 0x13); 60 + if (ret < 0) { 61 + dev_err(dev, "Failed to send command: %d\n", ret); 62 + return ret; 63 + } 64 + 65 + ret = dsi_dcs_write_seq(dsi, MIPI_DCS_WRITE_MEMORY_START); 66 + if (ret < 0) { 67 + dev_err(dev, "Failed to send command: %d\n", ret); 68 + return ret; 69 + } 70 + 71 + ret = mipi_dsi_dcs_exit_sleep_mode(dsi); 72 + if (ret < 0) { 73 + dev_err(dev, "Failed to exit sleep mode: %d\n", ret); 74 + return ret; 75 + } 76 + msleep(120); 77 + 78 + ret = mipi_dsi_dcs_set_display_on(dsi); 79 + if (ret < 0) { 80 + dev_err(dev, "Failed to set display on: %d\n", ret); 81 + return ret; 82 + } 83 + msleep(50); 84 + 85 + return 0; 86 + } 87 + 88 + static int sharp_ls060_off(struct sharp_ls060 *ctx) 89 + { 90 + struct mipi_dsi_device *dsi = ctx->dsi; 91 + struct device *dev = &dsi->dev; 92 + int ret; 93 + 94 + dsi->mode_flags &= ~MIPI_DSI_MODE_LPM; 95 + 96 + ret = mipi_dsi_dcs_set_display_off(dsi); 97 + if (ret < 0) { 98 + dev_err(dev, "Failed to set display off: %d\n", ret); 99 + return ret; 100 + } 101 + usleep_range(2000, 3000); 102 + 103 + ret = mipi_dsi_dcs_enter_sleep_mode(dsi); 104 + if (ret < 0) { 105 + dev_err(dev, "Failed to enter sleep mode: %d\n", ret); 106 + return ret; 107 + } 108 + msleep(121); 109 + 110 + return 0; 111 + } 112 +
113 + static int sharp_ls060_prepare(struct drm_panel *panel) 114 + { 115 + struct sharp_ls060 *ctx = to_sharp_ls060(panel); 116 + struct device *dev = &ctx->dsi->dev; 117 + int ret; 118 + 119 + if (ctx->prepared) 120 + return 0; 121 + 122 + ret = regulator_enable(ctx->vddi_supply); 123 + if (ret < 0) 124 + return ret; 125 + 126 + ret = regulator_enable(ctx->avdd_supply); 127 + if (ret < 0) 128 + goto err_avdd; 129 + 130 + usleep_range(1000, 2000); 131 + 132 + ret = regulator_enable(ctx->avee_supply); 133 + if (ret < 0) 134 + goto err_avee; 135 + 136 + usleep_range(10000, 11000); 137 + 138 + ret = regulator_enable(ctx->vddh_supply); 139 + if (ret < 0) 140 + goto err_vddh; 141 + 142 + usleep_range(10000, 11000); 143 + 144 + sharp_ls060_reset(ctx); 145 + 146 + ret = sharp_ls060_on(ctx); 147 + if (ret < 0) { 148 + dev_err(dev, "Failed to initialize panel: %d\n", ret); 149 + goto err_on; 150 + } 151 + 152 + ctx->prepared = true; 153 + 154 + return 0; 155 + 156 + err_on: 157 + regulator_disable(ctx->vddh_supply); 158 + 159 + usleep_range(10000, 11000); 160 + 161 + err_vddh: 162 + regulator_disable(ctx->avee_supply); 163 + 164 + err_avee: 165 + regulator_disable(ctx->avdd_supply); 166 + 167 + gpiod_set_value_cansleep(ctx->reset_gpio, 1); 168 + 169 + err_avdd: 170 + regulator_disable(ctx->vddi_supply); 171 + 172 + return ret; 173 + } 174 + 175 + static int sharp_ls060_unprepare(struct drm_panel *panel) 176 + { 177 + struct sharp_ls060 *ctx = to_sharp_ls060(panel); 178 + struct device *dev = &ctx->dsi->dev; 179 + int ret; 180 + 181 + if (!ctx->prepared) 182 + return 0; 183 + 184 + ret = sharp_ls060_off(ctx); 185 + if (ret < 0) 186 + dev_err(dev, "Failed to un-initialize panel: %d\n", ret); 187 + 188 + regulator_disable(ctx->vddh_supply); 189 + 190 + usleep_range(10000, 11000); 191 + 192 + regulator_disable(ctx->avee_supply); 193 + regulator_disable(ctx->avdd_supply); 194 + 195 + gpiod_set_value_cansleep(ctx->reset_gpio, 1); 196 + 197 + regulator_disable(ctx->vddi_supply); 198 +
199 + ctx->prepared = false; 200 + return 0; 201 + } 202 + 203 + static const struct drm_display_mode sharp_ls060_mode = { 204 + .clock = (1080 + 96 + 16 + 64) * (1920 + 4 + 1 + 16) * 60 / 1000, 205 + .hdisplay = 1080, 206 + .hsync_start = 1080 + 96, 207 + .hsync_end = 1080 + 96 + 16, 208 + .htotal = 1080 + 96 + 16 + 64, 209 + .vdisplay = 1920, 210 + .vsync_start = 1920 + 4, 211 + .vsync_end = 1920 + 4 + 1, 212 + .vtotal = 1920 + 4 + 1 + 16, 213 + .width_mm = 75, 214 + .height_mm = 132, 215 + }; 216 + 217 + static int sharp_ls060_get_modes(struct drm_panel *panel, 218 + struct drm_connector *connector) 219 + { 220 + struct drm_display_mode *mode; 221 + 222 + mode = drm_mode_duplicate(connector->dev, &sharp_ls060_mode); 223 + if (!mode) 224 + return -ENOMEM; 225 + 226 + drm_mode_set_name(mode); 227 + 228 + mode->type = DRM_MODE_TYPE_DRIVER | DRM_MODE_TYPE_PREFERRED; 229 + connector->display_info.width_mm = mode->width_mm; 230 + connector->display_info.height_mm = mode->height_mm; 231 + drm_mode_probed_add(connector, mode); 232 + 233 + return 1; 234 + } 235 + 236 + static const struct drm_panel_funcs sharp_ls060_panel_funcs = { 237 + .prepare = sharp_ls060_prepare, 238 + .unprepare = sharp_ls060_unprepare, 239 + .get_modes = sharp_ls060_get_modes, 240 + }; 241 + 242 + static int sharp_ls060_probe(struct mipi_dsi_device *dsi) 243 + { 244 + struct device *dev = &dsi->dev; 245 + struct sharp_ls060 *ctx; 246 + int ret; 247 + 248 + ctx = devm_kzalloc(dev, sizeof(*ctx), GFP_KERNEL); 249 + if (!ctx) 250 + return -ENOMEM; 251 + 252 + ctx->vddi_supply = devm_regulator_get(dev, "vddi"); 253 + if (IS_ERR(ctx->vddi_supply)) 254 + return PTR_ERR(ctx->vddi_supply); 255 + 256 + ctx->vddh_supply = devm_regulator_get(dev, "vddh"); 257 + if (IS_ERR(ctx->vddh_supply)) 258 + return PTR_ERR(ctx->vddh_supply); 259 + 260 + ctx->avdd_supply = devm_regulator_get(dev, "avdd"); 261 + if (IS_ERR(ctx->avdd_supply)) 262 + return PTR_ERR(ctx->avdd_supply); 263 +
264 + ctx->avee_supply = devm_regulator_get(dev, "avee"); 265 + if (IS_ERR(ctx->avee_supply)) 266 + return PTR_ERR(ctx->avee_supply); 267 + 268 + ctx->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_HIGH); 269 + if (IS_ERR(ctx->reset_gpio)) 270 + return dev_err_probe(dev, PTR_ERR(ctx->reset_gpio), 271 + "Failed to get reset-gpios\n"); 272 + 273 + ctx->dsi = dsi; 274 + mipi_dsi_set_drvdata(dsi, ctx); 275 + 276 + dsi->lanes = 4; 277 + dsi->format = MIPI_DSI_FMT_RGB888; 278 + dsi->mode_flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_VIDEO_BURST | 279 + MIPI_DSI_MODE_NO_EOT_PACKET | 280 + MIPI_DSI_CLOCK_NON_CONTINUOUS; 281 + 282 + drm_panel_init(&ctx->panel, dev, &sharp_ls060_panel_funcs, 283 + DRM_MODE_CONNECTOR_DSI); 284 + 285 + ret = drm_panel_of_backlight(&ctx->panel); 286 + if (ret) 287 + return dev_err_probe(dev, ret, "Failed to get backlight\n"); 288 + 289 + drm_panel_add(&ctx->panel); 290 + 291 + ret = mipi_dsi_attach(dsi); 292 + if (ret < 0) { 293 + dev_err(dev, "Failed to attach to DSI host: %d\n", ret); 294 + drm_panel_remove(&ctx->panel); 295 + return ret; 296 + } 297 + 298 + return 0; 299 + } 300 + 301 + static int sharp_ls060_remove(struct mipi_dsi_device *dsi) 302 + { 303 + struct sharp_ls060 *ctx = mipi_dsi_get_drvdata(dsi); 304 + int ret; 305 + 306 + ret = mipi_dsi_detach(dsi); 307 + if (ret < 0) 308 + dev_err(&dsi->dev, "Failed to detach from DSI host: %d\n", ret); 309 + 310 + drm_panel_remove(&ctx->panel); 311 + 312 + return 0; 313 + } 314 + 315 + static const struct of_device_id sharp_ls060t1sx01_of_match[] = { 316 + { .compatible = "sharp,ls060t1sx01" }, 317 + { /* sentinel */ } 318 + }; 319 + MODULE_DEVICE_TABLE(of, sharp_ls060t1sx01_of_match); 320 + 321 + static struct mipi_dsi_driver sharp_ls060_driver = { 322 + .probe = sharp_ls060_probe, 323 + .remove = sharp_ls060_remove, 324 + .driver = { 325 + .name = "panel-sharp-ls060t1sx01", 326 + .of_match_table = sharp_ls060t1sx01_of_match, 327 + }, 328 + }; 329 + module_mipi_dsi_driver(sharp_ls060_driver); 330 +
331 + MODULE_AUTHOR("Dmitry Baryshkov <dmitry.baryshkov@linaro.org>"); 332 + MODULE_DESCRIPTION("DRM driver for Sharp LS060T1SX01 1080p video mode dsi panel"); 333 + MODULE_LICENSE("GPL v2");
+35
drivers/gpu/drm/panel/panel-simple.c
··· 2370 2370 .connector_type = DRM_MODE_CONNECTOR_LVDS, 2371 2371 }; 2372 2372 2373 + static const struct drm_display_mode logictechno_lttd800480070_l2rt_mode = { 2374 + .clock = 33000, 2375 + .hdisplay = 800, 2376 + .hsync_start = 800 + 112, 2377 + .hsync_end = 800 + 112 + 3, 2378 + .htotal = 800 + 112 + 3 + 85, 2379 + .vdisplay = 480, 2380 + .vsync_start = 480 + 38, 2381 + .vsync_end = 480 + 38 + 3, 2382 + .vtotal = 480 + 38 + 3 + 29, 2383 + .flags = DRM_MODE_FLAG_NVSYNC | DRM_MODE_FLAG_NHSYNC, 2384 + }; 2385 + 2386 + static const struct panel_desc logictechno_lttd800480070_l2rt = { 2387 + .modes = &logictechno_lttd800480070_l2rt_mode, 2388 + .num_modes = 1, 2389 + .bpc = 8, 2390 + .size = { 2391 + .width = 154, 2392 + .height = 86, 2393 + }, 2394 + .delay = { 2395 + .prepare = 45, 2396 + .enable = 100, 2397 + .disable = 100, 2398 + .unprepare = 45 2399 + }, 2400 + .bus_format = MEDIA_BUS_FMT_RGB888_1X24, 2401 + .bus_flags = DRM_BUS_FLAG_PIXDATA_SAMPLE_NEGEDGE, 2402 + .connector_type = DRM_MODE_CONNECTOR_DPI, 2403 + }; 2404 + 2373 2405 static const struct drm_display_mode logictechno_lttd800480070_l6wh_rt_mode = { 2374 2406 .clock = 33000, 2375 2407 .hdisplay = 800, ··· 3782 3750 }, { 3783 3751 .compatible = "logictechno,lt170410-2whc", 3784 3752 .data = &logictechno_lt170410_2whc, 3753 + }, { 3754 + .compatible = "logictechno,lttd800480070-l2rt", 3755 + .data = &logictechno_lttd800480070_l2rt, 3785 3756 }, { 3786 3757 .compatible = "logictechno,lttd800480070-l6wh-rt", 3787 3758 .data = &logictechno_lttd800480070_l6wh_rt,
+8
drivers/gpu/drm/panel/panel-sitronix-st7703.c
··· 453 453 return ret; 454 454 } 455 455 456 + static const u32 mantix_bus_formats[] = { 457 + MEDIA_BUS_FMT_RGB888_1X24, 458 + }; 459 + 456 460 static int st7703_get_modes(struct drm_panel *panel, 457 461 struct drm_connector *connector) 458 462 { ··· 477 473 connector->display_info.width_mm = mode->width_mm; 478 474 connector->display_info.height_mm = mode->height_mm; 479 475 drm_mode_probed_add(connector, mode); 476 + 477 + drm_display_info_set_bus_formats(&connector->display_info, 478 + mantix_bus_formats, 479 + ARRAY_SIZE(mantix_bus_formats)); 480 480 481 481 return 1; 482 482 }
+1 -1
drivers/gpu/drm/radeon/radeon_gem.c
··· 61 61 goto unlock_resv; 62 62 63 63 ret = ttm_bo_vm_fault_reserved(vmf, vmf->vma->vm_page_prot, 64 - TTM_BO_VM_NUM_PREFAULT, 1); 64 + TTM_BO_VM_NUM_PREFAULT); 65 65 if (ret == VM_FAULT_RETRY && !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT)) 66 66 goto unlock_mclk; 67 67
+6 -20
drivers/gpu/drm/scheduler/sched_main.c
··· 699 699 struct drm_gem_object *obj, 700 700 bool write) 701 701 { 702 + struct dma_resv_iter cursor; 703 + struct dma_fence *fence; 702 704 int ret; 703 - struct dma_fence **fences; 704 - unsigned int i, fence_count; 705 705 706 - if (!write) { 707 - struct dma_fence *fence = dma_resv_get_excl_unlocked(obj->resv); 708 - 709 - return drm_sched_job_add_dependency(job, fence); 710 - } 711 - 712 - ret = dma_resv_get_fences(obj->resv, NULL, &fence_count, &fences); 713 - if (ret || !fence_count) 714 - return ret; 715 - 716 - for (i = 0; i < fence_count; i++) { 717 - ret = drm_sched_job_add_dependency(job, fences[i]); 706 + dma_resv_for_each_fence(&cursor, obj->resv, write, fence) { 707 + ret = drm_sched_job_add_dependency(job, fence); 718 708 if (ret) 719 - break; 709 + return ret; 720 710 } 721 - 722 - for (; i < fence_count; i++) 723 - dma_fence_put(fences[i]); 724 - kfree(fences); 725 - return ret; 711 + return 0; 726 712 } 727 713 EXPORT_SYMBOL(drm_sched_job_add_implicit_dependencies); 728 714
+6 -13
drivers/gpu/drm/ttm/ttm_bo.c
··· 269 269 static void ttm_bo_flush_all_fences(struct ttm_buffer_object *bo) 270 270 { 271 271 struct dma_resv *resv = &bo->base._resv; 272 - struct dma_resv_list *fobj; 272 + struct dma_resv_iter cursor; 273 273 struct dma_fence *fence; 274 - int i; 275 274 276 - rcu_read_lock(); 277 - fobj = dma_resv_shared_list(resv); 278 - fence = dma_resv_excl_fence(resv); 279 - if (fence && !fence->ops->signaled) 280 - dma_fence_enable_sw_signaling(fence); 281 - 282 - for (i = 0; fobj && i < fobj->shared_count; ++i) { 283 - fence = rcu_dereference(fobj->shared[i]); 284 - 275 + dma_resv_iter_begin(&cursor, resv, true); 276 + dma_resv_for_each_fence_unlocked(&cursor, fence) { 285 277 if (!fence->ops->signaled) 286 278 dma_fence_enable_sw_signaling(fence); 287 279 } 288 - rcu_read_unlock(); 280 + dma_resv_iter_end(&cursor); 289 281 } 290 282 291 283 /** ··· 619 627 *busy = !ret; 620 628 } 621 629 622 - if (ret && place && !bo->bdev->funcs->eviction_valuable(bo, place)) { 630 + if (ret && place && (bo->resource->mem_type != place->mem_type || 631 + !bo->bdev->funcs->eviction_valuable(bo, place))) { 623 632 ret = false; 624 633 if (*locked) { 625 634 dma_resv_unlock(bo->base.resv);
+2 -92
drivers/gpu/drm/ttm/ttm_bo_vm.c
··· 173 173 } 174 174 EXPORT_SYMBOL(ttm_bo_vm_reserve); 175 175 176 - #ifdef CONFIG_TRANSPARENT_HUGEPAGE 177 - /** 178 - * ttm_bo_vm_insert_huge - Insert a pfn for PUD or PMD faults 179 - * @vmf: Fault data 180 - * @bo: The buffer object 181 - * @page_offset: Page offset from bo start 182 - * @fault_page_size: The size of the fault in pages. 183 - * @pgprot: The page protections. 184 - * Does additional checking whether it's possible to insert a PUD or PMD 185 - * pfn and performs the insertion. 186 - * 187 - * Return: VM_FAULT_NOPAGE on successful insertion, VM_FAULT_FALLBACK if 188 - * a huge fault was not possible, or on insertion error. 189 - */ 190 - static vm_fault_t ttm_bo_vm_insert_huge(struct vm_fault *vmf, 191 - struct ttm_buffer_object *bo, 192 - pgoff_t page_offset, 193 - pgoff_t fault_page_size, 194 - pgprot_t pgprot) 195 - { 196 - pgoff_t i; 197 - vm_fault_t ret; 198 - unsigned long pfn; 199 - pfn_t pfnt; 200 - struct ttm_tt *ttm = bo->ttm; 201 - bool write = vmf->flags & FAULT_FLAG_WRITE; 202 - 203 - /* Fault should not cross bo boundary. */ 204 - page_offset &= ~(fault_page_size - 1); 205 - if (page_offset + fault_page_size > bo->resource->num_pages) 206 - goto out_fallback; 207 - 208 - if (bo->resource->bus.is_iomem) 209 - pfn = ttm_bo_io_mem_pfn(bo, page_offset); 210 - else 211 - pfn = page_to_pfn(ttm->pages[page_offset]); 212 - 213 - /* pfn must be fault_page_size aligned. */ 214 - if ((pfn & (fault_page_size - 1)) != 0) 215 - goto out_fallback; 216 - 217 - /* Check that memory is contiguous. */
218 - if (!bo->resource->bus.is_iomem) { 219 - for (i = 1; i < fault_page_size; ++i) { 220 - if (page_to_pfn(ttm->pages[page_offset + i]) != pfn + i) 221 - goto out_fallback; 222 - } 223 - } else if (bo->bdev->funcs->io_mem_pfn) { 224 - for (i = 1; i < fault_page_size; ++i) { 225 - if (ttm_bo_io_mem_pfn(bo, page_offset + i) != pfn + i) 226 - goto out_fallback; 227 - } 228 - } 229 - 230 - pfnt = __pfn_to_pfn_t(pfn, PFN_DEV); 231 - if (fault_page_size == (HPAGE_PMD_SIZE >> PAGE_SHIFT)) 232 - ret = vmf_insert_pfn_pmd_prot(vmf, pfnt, pgprot, write); 233 - #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD 234 - else if (fault_page_size == (HPAGE_PUD_SIZE >> PAGE_SHIFT)) 235 - ret = vmf_insert_pfn_pud_prot(vmf, pfnt, pgprot, write); 236 - #endif 237 - else 238 - WARN_ON_ONCE(ret = VM_FAULT_FALLBACK); 239 - 240 - if (ret != VM_FAULT_NOPAGE) 241 - goto out_fallback; 242 - 243 - return VM_FAULT_NOPAGE; 244 - out_fallback: 245 - count_vm_event(THP_FAULT_FALLBACK); 246 - return VM_FAULT_FALLBACK; 247 - } 248 - #else 249 - static vm_fault_t ttm_bo_vm_insert_huge(struct vm_fault *vmf, 250 - struct ttm_buffer_object *bo, 251 - pgoff_t page_offset, 252 - pgoff_t fault_page_size, 253 - pgprot_t pgprot) 254 - { 255 - return VM_FAULT_FALLBACK; 256 - } 257 - #endif 258 - 259 176 /** 260 177 * ttm_bo_vm_fault_reserved - TTM fault helper 261 178 * @vmf: The struct vm_fault given as argument to the fault callback ··· 180 263 * @num_prefault: Maximum number of prefault pages. The caller may want to 181 264 * specify this based on madvice settings and the size of the GPU object 182 265 * backed by the memory. 183 - * @fault_page_size: The size of the fault in pages.
184 266 * 185 267 * This function inserts one or more page table entries pointing to the 186 268 * memory backing the buffer object, and then returns a return code ··· 193 277 */ 194 278 vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf, 195 279 pgprot_t prot, 196 - pgoff_t num_prefault, 197 - pgoff_t fault_page_size) 280 + pgoff_t num_prefault) 198 281 { 199 282 struct vm_area_struct *vma = vmf->vma; 200 283 struct ttm_buffer_object *bo = vma->vm_private_data; ··· 243 328 /* Iomem should not be marked encrypted */ 244 329 prot = pgprot_decrypted(prot); 245 330 } 246 - 247 - /* We don't prefault on huge faults. Yet. */ 248 - if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && fault_page_size != 1) 249 - return ttm_bo_vm_insert_huge(vmf, bo, page_offset, 250 - fault_page_size, prot); 251 331 252 332 /* 253 333 * Speculatively prefault a number of pages. Only error on ··· 339 429 340 430 prot = vma->vm_page_prot; 341 431 if (drm_dev_enter(ddev, &idx)) { 342 - ret = ttm_bo_vm_fault_reserved(vmf, prot, TTM_BO_VM_NUM_PREFAULT, 1); 432 + ret = ttm_bo_vm_fault_reserved(vmf, prot, TTM_BO_VM_NUM_PREFAULT); 343 433 drm_dev_exit(idx); 344 434 } else { 345 435 ret = ttm_bo_vm_dummy_page(vmf, prot);
+1 -1
drivers/gpu/drm/udl/udl_connector.c
··· 30 30 int bval = (i + block * EDID_LENGTH) << 8; 31 31 ret = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0), 32 32 0x02, (0x80 | (0x02 << 5)), bval, 33 - 0xA1, read_buff, 2, HZ); 33 + 0xA1, read_buff, 2, 1000); 34 34 if (ret < 1) { 35 35 DRM_ERROR("Read EDID byte %d failed err %x\n", i, ret); 36 36 kfree(read_buff);
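The udl_connector.c change is the "timeout fix" from the tag message: `usb_control_msg()` takes its timeout in milliseconds, while `HZ` is the kernel's timer-tick rate. Passing `HZ` therefore requests "CONFIG_HZ milliseconds", which equals the intended one second only on a CONFIG_HZ=1000 build. A trivial sketch of the unit confusion (illustrative only):

```c
#include <assert.h>

/* The API interprets the raw argument as milliseconds, whatever the
 * caller thought it meant. */
static int requested_timeout_ms(int arg_passed)
{
    return arg_passed;
}

/* "One second", expressed in the unit usb_control_msg() actually wants. */
static int intended_timeout_ms(void)
{
    return 1000;
}
```

On a CONFIG_HZ=100 kernel the old code waited only 100 ms per EDID byte read; the fix hardcodes 1000 so the timeout no longer depends on the tick rate.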
+6 -7
drivers/gpu/drm/v3d/v3d_gem.c
··· 487 487 for (i = 0; i < se->in_sync_count; i++) { 488 488 struct drm_v3d_sem in; 489 489 490 - ret = copy_from_user(&in, handle++, sizeof(in)); 491 - if (ret) { 490 + if (copy_from_user(&in, handle++, sizeof(in))) { 491 + ret = -EFAULT; 492 492 DRM_DEBUG("Failed to copy wait dep handle.\n"); 493 493 goto fail_deps; 494 494 } ··· 609 609 for (i = 0; i < count; i++) { 610 610 struct drm_v3d_sem out; 611 611 612 - ret = copy_from_user(&out, post_deps++, sizeof(out)); 613 - if (ret) { 612 + if (copy_from_user(&out, post_deps++, sizeof(out))) { 613 + ret = -EFAULT; 614 614 DRM_DEBUG("Failed to copy post dep handles\n"); 615 615 goto fail; 616 616 } ··· 646 646 struct v3d_submit_ext *se = data; 647 647 int ret; 648 648 649 - ret = copy_from_user(&multisync, ext, sizeof(multisync)); 650 - if (ret) 651 - return ret; 649 + if (copy_from_user(&multisync, ext, sizeof(multisync))) 650 + return -EFAULT; 652 651 653 652 if (multisync.pad) 654 653 return -EINVAL;
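All three v3d_gem.c hunks fix the same bug class: `copy_from_user()` returns the number of bytes it could *not* copy, never an errno, so `ret = copy_from_user(...); return ret;` leaks a positive byte count to userspace where a negative error code is expected. The idiomatic pattern is `if (copy_from_user(...)) return -EFAULT;`. A userspace sketch with a mock that has the same return convention (the mock and its `faulting_after` knob are invented for illustration):

```c
#include <assert.h>
#include <string.h>

#define EFAULT 14

/* Mock with copy_from_user() semantics: returns bytes NOT copied, 0 on
 * success.  faulting_after simulates an unmapped page mid-buffer. */
static unsigned long mock_copy_from_user(void *to, const void *from,
                                         unsigned long n,
                                         unsigned long faulting_after)
{
    unsigned long ok = n < faulting_after ? n : faulting_after;
    memcpy(to, from, ok);
    return n - ok;                  /* uncopied remainder, never an errno */
}

/* Buggy pattern from the old v3d code: forwards the positive count. */
static int submit_buggy(void *dst, const void *src, unsigned long n,
                        unsigned long faulting_after)
{
    int ret = mock_copy_from_user(dst, src, n, faulting_after);
    if (ret)
        return ret;                 /* "bytes left" escapes as the result */
    return 0;
}

/* Fixed pattern: translate any shortfall into -EFAULT. */
static int submit_fixed(void *dst, const void *src, unsigned long n,
                        unsigned long faulting_after)
{
    if (mock_copy_from_user(dst, src, n, faulting_after))
        return -EFAULT;
    return 0;
}
```

A caller checking `ret < 0` would treat the buggy path's positive return as success, which is why the hunks also stop storing the result in `ret` at all.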
+3 -1
drivers/gpu/drm/virtio/virtgpu_display.c
··· 308 308 return ERR_PTR(-EINVAL); 309 309 310 310 virtio_gpu_fb = kzalloc(sizeof(*virtio_gpu_fb), GFP_KERNEL); 311 - if (virtio_gpu_fb == NULL) 311 + if (virtio_gpu_fb == NULL) { 312 + drm_gem_object_put(obj); 312 313 return ERR_PTR(-ENOMEM); 314 + } 313 315 314 316 ret = virtio_gpu_framebuffer_init(dev, virtio_gpu_fb, mode_cmd, obj); 315 317 if (ret) {
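The virtgpu_display.c hunk plugs a reference leak: the GEM lookup earlier in the function took a reference that the framebuffer would eventually own, but the `kzalloc()` failure path returned without dropping it. A toy-refcount sketch of the rule that every exit after the get must balance it until ownership is transferred (names invented for illustration):

```c
#include <assert.h>
#include <stdlib.h>

/* Toy refcounted object standing in for a GEM object. */
struct obj { int refs; };

static void obj_get(struct obj *o) { o->refs++; }
static void obj_put(struct obj *o) { o->refs--; }

/*
 * Mirrors the fixed flow in virtio_gpu_user_framebuffer_create(): on the
 * allocation-failure path nothing ever comes to own the reference, so the
 * function itself must drop it.
 */
static int create_fb(struct obj *o, int alloc_fails)
{
    void *fb;

    obj_get(o);                       /* reference taken by the lookup */

    fb = alloc_fails ? NULL : malloc(16);
    if (!fb) {
        obj_put(o);                   /* the fix: drop the ref on error */
        return -1;
    }
    /* Success: fb owns the reference; release both for this sketch. */
    free(fb);
    obj_put(o);
    return 0;
}
```

Without the `obj_put()` in the error branch, each failed framebuffer creation would pin the object forever, exactly the leak the `drm_gem_object_put(obj)` addition closes.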
+2 -1
drivers/gpu/drm/virtio/virtgpu_drv.c
··· 163 163 struct drm_file *drm_file = filp->private_data; 164 164 struct virtio_gpu_fpriv *vfpriv = drm_file->driver_priv; 165 165 struct drm_device *dev = drm_file->minor->dev; 166 + struct virtio_gpu_device *vgdev = dev->dev_private; 166 167 struct drm_pending_event *e = NULL; 167 168 __poll_t mask = 0; 168 169 169 - if (!vfpriv->ring_idx_mask) 170 + if (!vgdev->has_virgl_3d || !vfpriv || !vfpriv->ring_idx_mask) 170 171 return drm_poll(filp, wait); 171 172 172 173 poll_wait(filp, &drm_file->event_wait, wait);
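This is the virtio "NULL ptr deref fix" from the tag message: `vfpriv` is NULL for files opened without virgl 3D state, so the poll handler must fall back to the generic path before dereferencing it. The short-circuit ordering of `||` makes the guard safe. A compact sketch of the fixed check order (stand-in types, not the real structures):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct fpriv { unsigned ring_idx_mask; };
struct dev   { bool has_3d; };

/* Stand-ins for drm_poll() and the driver's event-ring poll path. */
static int generic_poll(void) { return 1; }
static int ring_poll(void)    { return 2; }

/*
 * Mirrors the fixed virtio_gpu poll: each test on the left guards the
 * dereference on the right, so a NULL vfpriv is never touched.
 */
static int vgpu_poll(const struct dev *d, const struct fpriv *v)
{
    if (!d->has_3d || !v || !v->ring_idx_mask)
        return generic_poll();
    return ring_poll();
}
```

The old code tested only `vfpriv->ring_idx_mask`, which crashed the moment a non-3D client polled the device node.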
-4
drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
··· 1550 1550 pgoff_t start, pgoff_t end); 1551 1551 vm_fault_t vmw_bo_vm_fault(struct vm_fault *vmf); 1552 1552 vm_fault_t vmw_bo_vm_mkwrite(struct vm_fault *vmf); 1553 - #ifdef CONFIG_TRANSPARENT_HUGEPAGE 1554 - vm_fault_t vmw_bo_vm_huge_fault(struct vm_fault *vmf, 1555 - enum page_entry_size pe_size); 1556 - #endif 1557 1553 1558 1554 /* Transparent hugepage support - vmwgfx_thp.c */ 1559 1555 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
+1 -71
drivers/gpu/drm/vmwgfx/vmwgfx_page_dirty.c
··· 477 477 else 478 478 prot = vm_get_page_prot(vma->vm_flags); 479 479 480 - ret = ttm_bo_vm_fault_reserved(vmf, prot, num_prefault, 1); 480 + ret = ttm_bo_vm_fault_reserved(vmf, prot, num_prefault); 481 481 if (ret == VM_FAULT_RETRY && !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT)) 482 482 return ret; 483 483 ··· 486 486 487 487 return ret; 488 488 } 489 - 490 - #ifdef CONFIG_TRANSPARENT_HUGEPAGE 491 - vm_fault_t vmw_bo_vm_huge_fault(struct vm_fault *vmf, 492 - enum page_entry_size pe_size) 493 - { 494 - struct vm_area_struct *vma = vmf->vma; 495 - struct ttm_buffer_object *bo = (struct ttm_buffer_object *) 496 - vma->vm_private_data; 497 - struct vmw_buffer_object *vbo = 498 - container_of(bo, struct vmw_buffer_object, base); 499 - pgprot_t prot; 500 - vm_fault_t ret; 501 - pgoff_t fault_page_size; 502 - bool write = vmf->flags & FAULT_FLAG_WRITE; 503 - 504 - switch (pe_size) { 505 - case PE_SIZE_PMD: 506 - fault_page_size = HPAGE_PMD_SIZE >> PAGE_SHIFT; 507 - break; 508 - #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD 509 - case PE_SIZE_PUD: 510 - fault_page_size = HPAGE_PUD_SIZE >> PAGE_SHIFT; 511 - break; 512 - #endif 513 - default: 514 - WARN_ON_ONCE(1); 515 - return VM_FAULT_FALLBACK; 516 - } 517 - 518 - /* Always do write dirty-tracking and COW on PTE level. */ 519 - if (write && (READ_ONCE(vbo->dirty) || is_cow_mapping(vma->vm_flags))) 520 - return VM_FAULT_FALLBACK; 521 - 522 - ret = ttm_bo_vm_reserve(bo, vmf); 523 - if (ret) 524 - return ret; 525 - 526 - if (vbo->dirty) { 527 - pgoff_t allowed_prefault; 528 - unsigned long page_offset; 529 - 530 - page_offset = vmf->pgoff - 531 - drm_vma_node_start(&bo->base.vma_node); 532 - if (page_offset >= bo->resource->num_pages || 533 - vmw_resources_clean(vbo, page_offset, 534 - page_offset + PAGE_SIZE, 535 - &allowed_prefault)) { 536 - ret = VM_FAULT_SIGBUS; 537 - goto out_unlock; 538 - } 539 - 540 - /* 541 - * Write protect, so we get a new fault on write, and can 542 - * split. 
543 - */ 544 - prot = vm_get_page_prot(vma->vm_flags & ~VM_SHARED); 545 - } else { 546 - prot = vm_get_page_prot(vma->vm_flags); 547 - } 548 - 549 - ret = ttm_bo_vm_fault_reserved(vmf, prot, 1, fault_page_size); 550 - if (ret == VM_FAULT_RETRY && !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT)) 551 - return ret; 552 - 553 - out_unlock: 554 - dma_resv_unlock(bo->base.resv); 555 - 556 - return ret; 557 - } 558 - #endif
-3
drivers/gpu/drm/vmwgfx/vmwgfx_ttm_glue.c
··· 61 61 .fault = vmw_bo_vm_fault, 62 62 .open = ttm_bo_vm_open, 63 63 .close = ttm_bo_vm_close, 64 - #ifdef CONFIG_TRANSPARENT_HUGEPAGE 65 - .huge_fault = vmw_bo_vm_huge_fault, 66 - #endif 67 64 }; 68 65 struct drm_file *file_priv = filp->private_data; 69 66 struct vmw_private *dev_priv = vmw_priv(file_priv->minor->dev);
-16
drivers/video/fbdev/core/bitblit.c
··· 43 43 } 44 44 } 45 45 46 - static void bit_bmove(struct vc_data *vc, struct fb_info *info, int sy, 47 - int sx, int dy, int dx, int height, int width) 48 - { 49 - struct fb_copyarea area; 50 - 51 - area.sx = sx * vc->vc_font.width; 52 - area.sy = sy * vc->vc_font.height; 53 - area.dx = dx * vc->vc_font.width; 54 - area.dy = dy * vc->vc_font.height; 55 - area.height = height * vc->vc_font.height; 56 - area.width = width * vc->vc_font.width; 57 - 58 - info->fbops->fb_copyarea(info, &area); 59 - } 60 - 61 46 static void bit_clear(struct vc_data *vc, struct fb_info *info, int sy, 62 47 int sx, int height, int width) 63 48 { ··· 378 393 379 394 void fbcon_set_bitops(struct fbcon_ops *ops) 380 395 { 381 - ops->bmove = bit_bmove; 382 396 ops->clear = bit_clear; 383 397 ops->putcs = bit_putcs; 384 398 ops->clear_margins = bit_clear_margins;
+20 -489
drivers/video/fbdev/core/fbcon.c
··· 173 173 int count, int ypos, int xpos); 174 174 static void fbcon_clear_margins(struct vc_data *vc, int bottom_only); 175 175 static void fbcon_cursor(struct vc_data *vc, int mode); 176 - static void fbcon_bmove(struct vc_data *vc, int sy, int sx, int dy, int dx, 177 - int height, int width); 178 176 static int fbcon_switch(struct vc_data *vc); 179 177 static int fbcon_blank(struct vc_data *vc, int blank, int mode_switch); 180 178 static void fbcon_set_palette(struct vc_data *vc, const unsigned char *table); ··· 180 182 /* 181 183 * Internal routines 182 184 */ 183 - static __inline__ void ywrap_up(struct vc_data *vc, int count); 184 - static __inline__ void ywrap_down(struct vc_data *vc, int count); 185 - static __inline__ void ypan_up(struct vc_data *vc, int count); 186 - static __inline__ void ypan_down(struct vc_data *vc, int count); 187 - static void fbcon_bmove_rec(struct vc_data *vc, struct fbcon_display *p, int sy, int sx, 188 - int dy, int dx, int height, int width, u_int y_break); 189 185 static void fbcon_set_disp(struct fb_info *info, struct fb_var_screeninfo *var, 190 186 int unit); 191 - static void fbcon_redraw_move(struct vc_data *vc, struct fbcon_display *p, 192 - int line, int count, int dy); 193 187 static void fbcon_modechanged(struct fb_info *info); 194 188 static void fbcon_set_all_vcs(struct fb_info *info); 195 189 static void fbcon_start(void); ··· 1126 1136 ops->graphics = 0; 1127 1137 1128 1138 /* 1129 - * No more hw acceleration for fbcon. 1130 - * 1131 - * FIXME: Garbage collect all the now dead code after sufficient time 1132 - * has passed. 1133 - */ 1134 - p->scrollmode = SCROLL_REDRAW; 1135 - 1136 - /* 1137 1139 * ++guenther: console.c:vc_allocate() relies on initializing 1138 1140 * vc_{cols,rows}, but we must not set those if we are only 1139 1141 * resizing the console. ··· 1211 1229 * This system is now divided into two levels because of complications 1212 1230 * caused by hardware scrolling. 
Top level functions: 1213 1231 * 1214 - * fbcon_bmove(), fbcon_clear(), fbcon_putc(), fbcon_clear_margins() 1232 + * fbcon_clear(), fbcon_putc(), fbcon_clear_margins() 1215 1233 * 1216 1234 * handles y values in range [0, scr_height-1] that correspond to real 1217 1235 * screen positions. y_wrap shift means that first line of bitmap may be 1218 1236 * anywhere on this display. These functions convert lineoffsets to 1219 1237 * bitmap offsets and deal with the wrap-around case by splitting blits. 1220 1238 * 1221 - * fbcon_bmove_physical_8() -- These functions fast implementations 1222 1239 * fbcon_clear_physical_8() -- of original fbcon_XXX fns. 1223 1240 * fbcon_putc_physical_8() -- (font width != 8) may be added later 1224 1241 * ··· 1390 1409 } 1391 1410 } 1392 1411 1393 - static __inline__ void ywrap_up(struct vc_data *vc, int count) 1394 - { 1395 - struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]]; 1396 - struct fbcon_ops *ops = info->fbcon_par; 1397 - struct fbcon_display *p = &fb_display[vc->vc_num]; 1398 - 1399 - p->yscroll += count; 1400 - if (p->yscroll >= p->vrows) /* Deal with wrap */ 1401 - p->yscroll -= p->vrows; 1402 - ops->var.xoffset = 0; 1403 - ops->var.yoffset = p->yscroll * vc->vc_font.height; 1404 - ops->var.vmode |= FB_VMODE_YWRAP; 1405 - ops->update_start(info); 1406 - scrollback_max += count; 1407 - if (scrollback_max > scrollback_phys_max) 1408 - scrollback_max = scrollback_phys_max; 1409 - scrollback_current = 0; 1410 - } 1411 - 1412 - static __inline__ void ywrap_down(struct vc_data *vc, int count) 1413 - { 1414 - struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]]; 1415 - struct fbcon_ops *ops = info->fbcon_par; 1416 - struct fbcon_display *p = &fb_display[vc->vc_num]; 1417 - 1418 - p->yscroll -= count; 1419 - if (p->yscroll < 0) /* Deal with wrap */ 1420 - p->yscroll += p->vrows; 1421 - ops->var.xoffset = 0; 1422 - ops->var.yoffset = p->yscroll * vc->vc_font.height; 1423 - ops->var.vmode |= FB_VMODE_YWRAP; 1424 - 
ops->update_start(info); 1425 - scrollback_max -= count; 1426 - if (scrollback_max < 0) 1427 - scrollback_max = 0; 1428 - scrollback_current = 0; 1429 - } 1430 - 1431 - static __inline__ void ypan_up(struct vc_data *vc, int count) 1432 - { 1433 - struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]]; 1434 - struct fbcon_display *p = &fb_display[vc->vc_num]; 1435 - struct fbcon_ops *ops = info->fbcon_par; 1436 - 1437 - p->yscroll += count; 1438 - if (p->yscroll > p->vrows - vc->vc_rows) { 1439 - ops->bmove(vc, info, p->vrows - vc->vc_rows, 1440 - 0, 0, 0, vc->vc_rows, vc->vc_cols); 1441 - p->yscroll -= p->vrows - vc->vc_rows; 1442 - } 1443 - 1444 - ops->var.xoffset = 0; 1445 - ops->var.yoffset = p->yscroll * vc->vc_font.height; 1446 - ops->var.vmode &= ~FB_VMODE_YWRAP; 1447 - ops->update_start(info); 1448 - fbcon_clear_margins(vc, 1); 1449 - scrollback_max += count; 1450 - if (scrollback_max > scrollback_phys_max) 1451 - scrollback_max = scrollback_phys_max; 1452 - scrollback_current = 0; 1453 - } 1454 - 1455 - static __inline__ void ypan_up_redraw(struct vc_data *vc, int t, int count) 1456 - { 1457 - struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]]; 1458 - struct fbcon_ops *ops = info->fbcon_par; 1459 - struct fbcon_display *p = &fb_display[vc->vc_num]; 1460 - 1461 - p->yscroll += count; 1462 - 1463 - if (p->yscroll > p->vrows - vc->vc_rows) { 1464 - p->yscroll -= p->vrows - vc->vc_rows; 1465 - fbcon_redraw_move(vc, p, t + count, vc->vc_rows - count, t); 1466 - } 1467 - 1468 - ops->var.xoffset = 0; 1469 - ops->var.yoffset = p->yscroll * vc->vc_font.height; 1470 - ops->var.vmode &= ~FB_VMODE_YWRAP; 1471 - ops->update_start(info); 1472 - fbcon_clear_margins(vc, 1); 1473 - scrollback_max += count; 1474 - if (scrollback_max > scrollback_phys_max) 1475 - scrollback_max = scrollback_phys_max; 1476 - scrollback_current = 0; 1477 - } 1478 - 1479 - static __inline__ void ypan_down(struct vc_data *vc, int count) 1480 - { 1481 - struct fb_info *info = 
registered_fb[con2fb_map[vc->vc_num]]; 1482 - struct fbcon_display *p = &fb_display[vc->vc_num]; 1483 - struct fbcon_ops *ops = info->fbcon_par; 1484 - 1485 - p->yscroll -= count; 1486 - if (p->yscroll < 0) { 1487 - ops->bmove(vc, info, 0, 0, p->vrows - vc->vc_rows, 1488 - 0, vc->vc_rows, vc->vc_cols); 1489 - p->yscroll += p->vrows - vc->vc_rows; 1490 - } 1491 - 1492 - ops->var.xoffset = 0; 1493 - ops->var.yoffset = p->yscroll * vc->vc_font.height; 1494 - ops->var.vmode &= ~FB_VMODE_YWRAP; 1495 - ops->update_start(info); 1496 - fbcon_clear_margins(vc, 1); 1497 - scrollback_max -= count; 1498 - if (scrollback_max < 0) 1499 - scrollback_max = 0; 1500 - scrollback_current = 0; 1501 - } 1502 - 1503 - static __inline__ void ypan_down_redraw(struct vc_data *vc, int t, int count) 1504 - { 1505 - struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]]; 1506 - struct fbcon_ops *ops = info->fbcon_par; 1507 - struct fbcon_display *p = &fb_display[vc->vc_num]; 1508 - 1509 - p->yscroll -= count; 1510 - 1511 - if (p->yscroll < 0) { 1512 - p->yscroll += p->vrows - vc->vc_rows; 1513 - fbcon_redraw_move(vc, p, t, vc->vc_rows - count, t + count); 1514 - } 1515 - 1516 - ops->var.xoffset = 0; 1517 - ops->var.yoffset = p->yscroll * vc->vc_font.height; 1518 - ops->var.vmode &= ~FB_VMODE_YWRAP; 1519 - ops->update_start(info); 1520 - fbcon_clear_margins(vc, 1); 1521 - scrollback_max -= count; 1522 - if (scrollback_max < 0) 1523 - scrollback_max = 0; 1524 - scrollback_current = 0; 1525 - } 1526 - 1527 - static void fbcon_redraw_move(struct vc_data *vc, struct fbcon_display *p, 1528 - int line, int count, int dy) 1529 - { 1530 - unsigned short *s = (unsigned short *) 1531 - (vc->vc_origin + vc->vc_size_row * line); 1532 - 1533 - while (count--) { 1534 - unsigned short *start = s; 1535 - unsigned short *le = advance_row(s, 1); 1536 - unsigned short c; 1537 - int x = 0; 1538 - unsigned short attr = 1; 1539 - 1540 - do { 1541 - c = scr_readw(s); 1542 - if (attr != (c & 0xff00)) { 1543 - 
attr = c & 0xff00; 1544 - if (s > start) { 1545 - fbcon_putcs(vc, start, s - start, 1546 - dy, x); 1547 - x += s - start; 1548 - start = s; 1549 - } 1550 - } 1551 - console_conditional_schedule(); 1552 - s++; 1553 - } while (s < le); 1554 - if (s > start) 1555 - fbcon_putcs(vc, start, s - start, dy, x); 1556 - console_conditional_schedule(); 1557 - dy++; 1558 - } 1559 - } 1560 - 1561 - static void fbcon_redraw_blit(struct vc_data *vc, struct fb_info *info, 1562 - struct fbcon_display *p, int line, int count, int ycount) 1563 - { 1564 - int offset = ycount * vc->vc_cols; 1565 - unsigned short *d = (unsigned short *) 1566 - (vc->vc_origin + vc->vc_size_row * line); 1567 - unsigned short *s = d + offset; 1568 - struct fbcon_ops *ops = info->fbcon_par; 1569 - 1570 - while (count--) { 1571 - unsigned short *start = s; 1572 - unsigned short *le = advance_row(s, 1); 1573 - unsigned short c; 1574 - int x = 0; 1575 - 1576 - do { 1577 - c = scr_readw(s); 1578 - 1579 - if (c == scr_readw(d)) { 1580 - if (s > start) { 1581 - ops->bmove(vc, info, line + ycount, x, 1582 - line, x, 1, s-start); 1583 - x += s - start + 1; 1584 - start = s + 1; 1585 - } else { 1586 - x++; 1587 - start++; 1588 - } 1589 - } 1590 - 1591 - scr_writew(c, d); 1592 - console_conditional_schedule(); 1593 - s++; 1594 - d++; 1595 - } while (s < le); 1596 - if (s > start) 1597 - ops->bmove(vc, info, line + ycount, x, line, x, 1, 1598 - s-start); 1599 - console_conditional_schedule(); 1600 - if (ycount > 0) 1601 - line++; 1602 - else { 1603 - line--; 1604 - /* NOTE: We subtract two lines from these pointers */ 1605 - s -= vc->vc_size_row; 1606 - d -= vc->vc_size_row; 1607 - } 1608 - } 1609 - } 1610 - 1611 1412 static void fbcon_redraw(struct vc_data *vc, struct fbcon_display *p, 1612 1413 int line, int count, int offset) 1613 1414 { ··· 1450 1687 { 1451 1688 struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]]; 1452 1689 struct fbcon_display *p = &fb_display[vc->vc_num]; 1453 - int scroll_partial = 
info->flags & FBINFO_PARTIAL_PAN_OK; 1454 1690 1455 1691 if (fbcon_is_inactive(vc, info)) 1456 1692 return true; ··· 1466 1704 case SM_UP: 1467 1705 if (count > vc->vc_rows) /* Maximum realistic size */ 1468 1706 count = vc->vc_rows; 1469 - if (logo_shown >= 0) 1470 - goto redraw_up; 1471 - switch (p->scrollmode) { 1472 - case SCROLL_MOVE: 1473 - fbcon_redraw_blit(vc, info, p, t, b - t - count, 1474 - count); 1475 - fbcon_clear(vc, b - count, 0, count, vc->vc_cols); 1476 - scr_memsetw((unsigned short *) (vc->vc_origin + 1477 - vc->vc_size_row * 1478 - (b - count)), 1479 - vc->vc_video_erase_char, 1480 - vc->vc_size_row * count); 1481 - return true; 1482 - 1483 - case SCROLL_WRAP_MOVE: 1484 - if (b - t - count > 3 * vc->vc_rows >> 2) { 1485 - if (t > 0) 1486 - fbcon_bmove(vc, 0, 0, count, 0, t, 1487 - vc->vc_cols); 1488 - ywrap_up(vc, count); 1489 - if (vc->vc_rows - b > 0) 1490 - fbcon_bmove(vc, b - count, 0, b, 0, 1491 - vc->vc_rows - b, 1492 - vc->vc_cols); 1493 - } else if (info->flags & FBINFO_READS_FAST) 1494 - fbcon_bmove(vc, t + count, 0, t, 0, 1495 - b - t - count, vc->vc_cols); 1496 - else 1497 - goto redraw_up; 1498 - fbcon_clear(vc, b - count, 0, count, vc->vc_cols); 1499 - break; 1500 - 1501 - case SCROLL_PAN_REDRAW: 1502 - if ((p->yscroll + count <= 1503 - 2 * (p->vrows - vc->vc_rows)) 1504 - && ((!scroll_partial && (b - t == vc->vc_rows)) 1505 - || (scroll_partial 1506 - && (b - t - count > 1507 - 3 * vc->vc_rows >> 2)))) { 1508 - if (t > 0) 1509 - fbcon_redraw_move(vc, p, 0, t, count); 1510 - ypan_up_redraw(vc, t, count); 1511 - if (vc->vc_rows - b > 0) 1512 - fbcon_redraw_move(vc, p, b, 1513 - vc->vc_rows - b, b); 1514 - } else 1515 - fbcon_redraw_move(vc, p, t + count, b - t - count, t); 1516 - fbcon_clear(vc, b - count, 0, count, vc->vc_cols); 1517 - break; 1518 - 1519 - case SCROLL_PAN_MOVE: 1520 - if ((p->yscroll + count <= 1521 - 2 * (p->vrows - vc->vc_rows)) 1522 - && ((!scroll_partial && (b - t == vc->vc_rows)) 1523 - || (scroll_partial 1524 
- && (b - t - count > 1525 - 3 * vc->vc_rows >> 2)))) { 1526 - if (t > 0) 1527 - fbcon_bmove(vc, 0, 0, count, 0, t, 1528 - vc->vc_cols); 1529 - ypan_up(vc, count); 1530 - if (vc->vc_rows - b > 0) 1531 - fbcon_bmove(vc, b - count, 0, b, 0, 1532 - vc->vc_rows - b, 1533 - vc->vc_cols); 1534 - } else if (info->flags & FBINFO_READS_FAST) 1535 - fbcon_bmove(vc, t + count, 0, t, 0, 1536 - b - t - count, vc->vc_cols); 1537 - else 1538 - goto redraw_up; 1539 - fbcon_clear(vc, b - count, 0, count, vc->vc_cols); 1540 - break; 1541 - 1542 - case SCROLL_REDRAW: 1543 - redraw_up: 1544 - fbcon_redraw(vc, p, t, b - t - count, 1545 - count * vc->vc_cols); 1546 - fbcon_clear(vc, b - count, 0, count, vc->vc_cols); 1547 - scr_memsetw((unsigned short *) (vc->vc_origin + 1548 - vc->vc_size_row * 1549 - (b - count)), 1550 - vc->vc_video_erase_char, 1551 - vc->vc_size_row * count); 1552 - return true; 1553 - } 1554 - break; 1707 + fbcon_redraw(vc, p, t, b - t - count, 1708 + count * vc->vc_cols); 1709 + fbcon_clear(vc, b - count, 0, count, vc->vc_cols); 1710 + scr_memsetw((unsigned short *) (vc->vc_origin + 1711 + vc->vc_size_row * 1712 + (b - count)), 1713 + vc->vc_video_erase_char, 1714 + vc->vc_size_row * count); 1715 + return true; 1555 1716 1556 1717 case SM_DOWN: 1557 1718 if (count > vc->vc_rows) /* Maximum realistic size */ 1558 1719 count = vc->vc_rows; 1559 - if (logo_shown >= 0) 1560 - goto redraw_down; 1561 - switch (p->scrollmode) { 1562 - case SCROLL_MOVE: 1563 - fbcon_redraw_blit(vc, info, p, b - 1, b - t - count, 1564 - -count); 1565 - fbcon_clear(vc, t, 0, count, vc->vc_cols); 1566 - scr_memsetw((unsigned short *) (vc->vc_origin + 1567 - vc->vc_size_row * 1568 - t), 1569 - vc->vc_video_erase_char, 1570 - vc->vc_size_row * count); 1571 - return true; 1572 - 1573 - case SCROLL_WRAP_MOVE: 1574 - if (b - t - count > 3 * vc->vc_rows >> 2) { 1575 - if (vc->vc_rows - b > 0) 1576 - fbcon_bmove(vc, b, 0, b - count, 0, 1577 - vc->vc_rows - b, 1578 - vc->vc_cols); 1579 - 
ywrap_down(vc, count); 1580 - if (t > 0) 1581 - fbcon_bmove(vc, count, 0, 0, 0, t, 1582 - vc->vc_cols); 1583 - } else if (info->flags & FBINFO_READS_FAST) 1584 - fbcon_bmove(vc, t, 0, t + count, 0, 1585 - b - t - count, vc->vc_cols); 1586 - else 1587 - goto redraw_down; 1588 - fbcon_clear(vc, t, 0, count, vc->vc_cols); 1589 - break; 1590 - 1591 - case SCROLL_PAN_MOVE: 1592 - if ((count - p->yscroll <= p->vrows - vc->vc_rows) 1593 - && ((!scroll_partial && (b - t == vc->vc_rows)) 1594 - || (scroll_partial 1595 - && (b - t - count > 1596 - 3 * vc->vc_rows >> 2)))) { 1597 - if (vc->vc_rows - b > 0) 1598 - fbcon_bmove(vc, b, 0, b - count, 0, 1599 - vc->vc_rows - b, 1600 - vc->vc_cols); 1601 - ypan_down(vc, count); 1602 - if (t > 0) 1603 - fbcon_bmove(vc, count, 0, 0, 0, t, 1604 - vc->vc_cols); 1605 - } else if (info->flags & FBINFO_READS_FAST) 1606 - fbcon_bmove(vc, t, 0, t + count, 0, 1607 - b - t - count, vc->vc_cols); 1608 - else 1609 - goto redraw_down; 1610 - fbcon_clear(vc, t, 0, count, vc->vc_cols); 1611 - break; 1612 - 1613 - case SCROLL_PAN_REDRAW: 1614 - if ((count - p->yscroll <= p->vrows - vc->vc_rows) 1615 - && ((!scroll_partial && (b - t == vc->vc_rows)) 1616 - || (scroll_partial 1617 - && (b - t - count > 1618 - 3 * vc->vc_rows >> 2)))) { 1619 - if (vc->vc_rows - b > 0) 1620 - fbcon_redraw_move(vc, p, b, vc->vc_rows - b, 1621 - b - count); 1622 - ypan_down_redraw(vc, t, count); 1623 - if (t > 0) 1624 - fbcon_redraw_move(vc, p, count, t, 0); 1625 - } else 1626 - fbcon_redraw_move(vc, p, t, b - t - count, t + count); 1627 - fbcon_clear(vc, t, 0, count, vc->vc_cols); 1628 - break; 1629 - 1630 - case SCROLL_REDRAW: 1631 - redraw_down: 1632 - fbcon_redraw(vc, p, b - 1, b - t - count, 1633 - -count * vc->vc_cols); 1634 - fbcon_clear(vc, t, 0, count, vc->vc_cols); 1635 - scr_memsetw((unsigned short *) (vc->vc_origin + 1636 - vc->vc_size_row * 1637 - t), 1638 - vc->vc_video_erase_char, 1639 - vc->vc_size_row * count); 1640 - return true; 1641 - } 1720 + 
fbcon_redraw(vc, p, b - 1, b - t - count, 1721 + -count * vc->vc_cols); 1722 + fbcon_clear(vc, t, 0, count, vc->vc_cols); 1723 + scr_memsetw((unsigned short *) (vc->vc_origin + 1724 + vc->vc_size_row * 1725 + t), 1726 + vc->vc_video_erase_char, 1727 + vc->vc_size_row * count); 1728 + return true; 1642 1729 } 1643 1730 return false; 1644 - } 1645 - 1646 - 1647 - static void fbcon_bmove(struct vc_data *vc, int sy, int sx, int dy, int dx, 1648 - int height, int width) 1649 - { 1650 - struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]]; 1651 - struct fbcon_display *p = &fb_display[vc->vc_num]; 1652 - 1653 - if (fbcon_is_inactive(vc, info)) 1654 - return; 1655 - 1656 - if (!width || !height) 1657 - return; 1658 - 1659 - /* Split blits that cross physical y_wrap case. 1660 - * Pathological case involves 4 blits, better to use recursive 1661 - * code rather than unrolled case 1662 - * 1663 - * Recursive invocations don't need to erase the cursor over and 1664 - * over again, so we use fbcon_bmove_rec() 1665 - */ 1666 - fbcon_bmove_rec(vc, p, sy, sx, dy, dx, height, width, 1667 - p->vrows - p->yscroll); 1668 - } 1669 - 1670 - static void fbcon_bmove_rec(struct vc_data *vc, struct fbcon_display *p, int sy, int sx, 1671 - int dy, int dx, int height, int width, u_int y_break) 1672 - { 1673 - struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]]; 1674 - struct fbcon_ops *ops = info->fbcon_par; 1675 - u_int b; 1676 - 1677 - if (sy < y_break && sy + height > y_break) { 1678 - b = y_break - sy; 1679 - if (dy < sy) { /* Avoid trashing self */ 1680 - fbcon_bmove_rec(vc, p, sy, sx, dy, dx, b, width, 1681 - y_break); 1682 - fbcon_bmove_rec(vc, p, sy + b, sx, dy + b, dx, 1683 - height - b, width, y_break); 1684 - } else { 1685 - fbcon_bmove_rec(vc, p, sy + b, sx, dy + b, dx, 1686 - height - b, width, y_break); 1687 - fbcon_bmove_rec(vc, p, sy, sx, dy, dx, b, width, 1688 - y_break); 1689 - } 1690 - return; 1691 - } 1692 - 1693 - if (dy < y_break && dy + height > 
y_break) { 1694 - b = y_break - dy; 1695 - if (dy < sy) { /* Avoid trashing self */ 1696 - fbcon_bmove_rec(vc, p, sy, sx, dy, dx, b, width, 1697 - y_break); 1698 - fbcon_bmove_rec(vc, p, sy + b, sx, dy + b, dx, 1699 - height - b, width, y_break); 1700 - } else { 1701 - fbcon_bmove_rec(vc, p, sy + b, sx, dy + b, dx, 1702 - height - b, width, y_break); 1703 - fbcon_bmove_rec(vc, p, sy, sx, dy, dx, b, width, 1704 - y_break); 1705 - } 1706 - return; 1707 - } 1708 - ops->bmove(vc, info, real_y(p, sy), sx, real_y(p, dy), dx, 1709 - height, width); 1710 1731 } 1711 1732 1712 1733 static void updatescrollmode(struct fbcon_display *p, ··· 1664 2119 1665 2120 updatescrollmode(p, info, vc); 1666 2121 1667 - switch (p->scrollmode) { 1668 - case SCROLL_WRAP_MOVE: 1669 - scrollback_phys_max = p->vrows - vc->vc_rows; 1670 - break; 1671 - case SCROLL_PAN_MOVE: 1672 - case SCROLL_PAN_REDRAW: 1673 - scrollback_phys_max = p->vrows - 2 * vc->vc_rows; 1674 - if (scrollback_phys_max < 0) 1675 - scrollback_phys_max = 0; 1676 - break; 1677 - default: 1678 - scrollback_phys_max = 0; 1679 - break; 1680 - } 1681 - 2122 + scrollback_phys_max = 0; 1682 2123 scrollback_max = 0; 1683 2124 scrollback_current = 0; 1684 2125
-59
drivers/video/fbdev/core/fbcon.h
··· 29 29 /* Filled in by the low-level console driver */ 30 30 const u_char *fontdata; 31 31 int userfont; /* != 0 if fontdata kmalloc()ed */ 32 - u_short scrollmode; /* Scroll Method */ 33 32 u_short inverse; /* != 0 text black on white as default */ 34 33 short yscroll; /* Hardware scrolling */ 35 34 int vrows; /* number of virtual rows */ ··· 51 52 }; 52 53 53 54 struct fbcon_ops { 54 - void (*bmove)(struct vc_data *vc, struct fb_info *info, int sy, 55 - int sx, int dy, int dx, int height, int width); 56 55 void (*clear)(struct vc_data *vc, struct fb_info *info, int sy, 57 56 int sx, int height, int width); 58 57 void (*putcs)(struct vc_data *vc, struct fb_info *info, ··· 148 151 149 152 #define attr_bgcol_ec(bgshift, vc, info) attr_col_ec(bgshift, vc, info, 0) 150 153 #define attr_fgcol_ec(fgshift, vc, info) attr_col_ec(fgshift, vc, info, 1) 151 - 152 - /* 153 - * Scroll Method 154 - */ 155 - 156 - /* There are several methods fbcon can use to move text around the screen: 157 - * 158 - * Operation Pan Wrap 159 - *--------------------------------------------- 160 - * SCROLL_MOVE copyarea No No 161 - * SCROLL_PAN_MOVE copyarea Yes No 162 - * SCROLL_WRAP_MOVE copyarea No Yes 163 - * SCROLL_REDRAW imageblit No No 164 - * SCROLL_PAN_REDRAW imageblit Yes No 165 - * SCROLL_WRAP_REDRAW imageblit No Yes 166 - * 167 - * (SCROLL_WRAP_REDRAW is not implemented yet) 168 - * 169 - * In general, fbcon will choose the best scrolling 170 - * method based on the rule below: 171 - * 172 - * Pan/Wrap > accel imageblit > accel copyarea > 173 - * soft imageblit > (soft copyarea) 174 - * 175 - * Exception to the rule: Pan + accel copyarea is 176 - * preferred over Pan + accel imageblit. 177 - * 178 - * The above is typical for PCI/AGP cards. Unless 179 - * overridden, fbcon will never use soft copyarea. 180 - * 181 - * If you need to override the above rule, set the 182 - * appropriate flags in fb_info->flags. 
For example, 183 - * to prefer copyarea over imageblit, set 184 - * FBINFO_READS_FAST. 185 - * 186 - * Other notes: 187 - * + use the hardware engine to move the text 188 - * (hw-accelerated copyarea() and fillrect()) 189 - * + use hardware-supported panning on a large virtual screen 190 - * + amifb can not only pan, but also wrap the display by N lines 191 - * (i.e. visible line i = physical line (i+N) % yres). 192 - * + read what's already rendered on the screen and 193 - * write it in a different place (this is cfb_copyarea()) 194 - * + re-render the text to the screen 195 - * 196 - * Whether to use wrapping or panning can only be figured out at 197 - * runtime (when we know whether our font height is a multiple 198 - * of the pan/wrap step) 199 - * 200 - */ 201 - 202 - #define SCROLL_MOVE 0x001 203 - #define SCROLL_PAN_MOVE 0x002 204 - #define SCROLL_WRAP_MOVE 0x003 205 - #define SCROLL_REDRAW 0x004 206 - #define SCROLL_PAN_REDRAW 0x005 207 154 208 155 #ifdef CONFIG_FB_TILEBLITTING 209 156 extern void fbcon_set_tileops(struct vc_data *vc, struct fb_info *info);
+4 -24
drivers/video/fbdev/core/fbcon_ccw.c
···
 	}
 }

-static void ccw_bmove(struct vc_data *vc, struct fb_info *info, int sy,
-		      int sx, int dy, int dx, int height, int width)
-{
-	struct fbcon_ops *ops = info->fbcon_par;
-	struct fb_copyarea area;
-	u32 vyres = GETVYRES(ops->p->scrollmode, info);
-
-	area.sx = sy * vc->vc_font.height;
-	area.sy = vyres - ((sx + width) * vc->vc_font.width);
-	area.dx = dy * vc->vc_font.height;
-	area.dy = vyres - ((dx + width) * vc->vc_font.width);
-	area.width = height * vc->vc_font.height;
-	area.height = width * vc->vc_font.width;
-
-	info->fbops->fb_copyarea(info, &area);
-}
-
 static void ccw_clear(struct vc_data *vc, struct fb_info *info, int sy,
 		      int sx, int height, int width)
 {
-	struct fbcon_ops *ops = info->fbcon_par;
 	struct fb_fillrect region;
 	int bgshift = (vc->vc_hi_font_mask) ? 13 : 12;
-	u32 vyres = GETVYRES(ops->p->scrollmode, info);
+	u32 vyres = info->var.yres;

 	region.color = attr_bgcol_ec(bgshift,vc,info);
 	region.dx = sy * vc->vc_font.height;
···
 	u32 cnt, pitch, size;
 	u32 attribute = get_attribute(info, scr_readw(s));
 	u8 *dst, *buf = NULL;
-	u32 vyres = GETVYRES(ops->p->scrollmode, info);
+	u32 vyres = info->var.yres;

 	if (!ops->fontbuffer)
 		return;
···
 	int attribute, use_sw = vc->vc_cursor_type & CUR_SW;
 	int err = 1, dx, dy;
 	char *src;
-	u32 vyres = GETVYRES(ops->p->scrollmode, info);
+	u32 vyres = info->var.yres;

 	if (!ops->fontbuffer)
 		return;
···
 {
 	struct fbcon_ops *ops = info->fbcon_par;
 	u32 yoffset;
-	u32 vyres = GETVYRES(ops->p->scrollmode, info);
+	u32 vyres = info->var.yres;
 	int err;

 	yoffset = (vyres - info->var.yres) - ops->var.xoffset;
···

 void fbcon_rotate_ccw(struct fbcon_ops *ops)
 {
-	ops->bmove = ccw_bmove;
 	ops->clear = ccw_clear;
 	ops->putcs = ccw_putcs;
 	ops->clear_margins = ccw_clear_margins;
+4 -24
drivers/video/fbdev/core/fbcon_cw.c
···
 	}
 }

-static void cw_bmove(struct vc_data *vc, struct fb_info *info, int sy,
-		     int sx, int dy, int dx, int height, int width)
-{
-	struct fbcon_ops *ops = info->fbcon_par;
-	struct fb_copyarea area;
-	u32 vxres = GETVXRES(ops->p->scrollmode, info);
-
-	area.sx = vxres - ((sy + height) * vc->vc_font.height);
-	area.sy = sx * vc->vc_font.width;
-	area.dx = vxres - ((dy + height) * vc->vc_font.height);
-	area.dy = dx * vc->vc_font.width;
-	area.width = height * vc->vc_font.height;
-	area.height = width * vc->vc_font.width;
-
-	info->fbops->fb_copyarea(info, &area);
-}
-
 static void cw_clear(struct vc_data *vc, struct fb_info *info, int sy,
 		     int sx, int height, int width)
 {
-	struct fbcon_ops *ops = info->fbcon_par;
 	struct fb_fillrect region;
 	int bgshift = (vc->vc_hi_font_mask) ? 13 : 12;
-	u32 vxres = GETVXRES(ops->p->scrollmode, info);
+	u32 vxres = info->var.xres;

 	region.color = attr_bgcol_ec(bgshift,vc,info);
 	region.dx = vxres - ((sy + height) * vc->vc_font.height);
···
 	u32 cnt, pitch, size;
 	u32 attribute = get_attribute(info, scr_readw(s));
 	u8 *dst, *buf = NULL;
-	u32 vxres = GETVXRES(ops->p->scrollmode, info);
+	u32 vxres = info->var.xres;

 	if (!ops->fontbuffer)
 		return;
···
 	int attribute, use_sw = vc->vc_cursor_type & CUR_SW;
 	int err = 1, dx, dy;
 	char *src;
-	u32 vxres = GETVXRES(ops->p->scrollmode, info);
+	u32 vxres = info->var.xres;

 	if (!ops->fontbuffer)
 		return;
···
 static int cw_update_start(struct fb_info *info)
 {
 	struct fbcon_ops *ops = info->fbcon_par;
-	u32 vxres = GETVXRES(ops->p->scrollmode, info);
+	u32 vxres = info->var.xres;
 	u32 xoffset;
 	int err;
···

 void fbcon_rotate_cw(struct fbcon_ops *ops)
 {
-	ops->bmove = cw_bmove;
 	ops->clear = cw_clear;
 	ops->putcs = cw_putcs;
 	ops->clear_margins = cw_clear_margins;
-9
drivers/video/fbdev/core/fbcon_rotate.h
···
 #ifndef _FBCON_ROTATE_H
 #define _FBCON_ROTATE_H

-#define GETVYRES(s,i) ({ \
-	(s == SCROLL_REDRAW || s == SCROLL_MOVE) ? \
-	(i)->var.yres : (i)->var.yres_virtual; })
-
-#define GETVXRES(s,i) ({ \
-	(s == SCROLL_REDRAW || s == SCROLL_MOVE || !(i)->fix.xpanstep) ? \
-	(i)->var.xres : (i)->var.xres_virtual; })
-
-
 static inline int pattern_test_bit(u32 x, u32 y, u32 pitch, const char *pat)
 {
 	u32 tmp = (y * pitch) + x, index = tmp / 8, bit = tmp % 8;
+8 -29
drivers/video/fbdev/core/fbcon_ud.c
···
 	}
 }

-static void ud_bmove(struct vc_data *vc, struct fb_info *info, int sy,
-		     int sx, int dy, int dx, int height, int width)
-{
-	struct fbcon_ops *ops = info->fbcon_par;
-	struct fb_copyarea area;
-	u32 vyres = GETVYRES(ops->p->scrollmode, info);
-	u32 vxres = GETVXRES(ops->p->scrollmode, info);
-
-	area.sy = vyres - ((sy + height) * vc->vc_font.height);
-	area.sx = vxres - ((sx + width) * vc->vc_font.width);
-	area.dy = vyres - ((dy + height) * vc->vc_font.height);
-	area.dx = vxres - ((dx + width) * vc->vc_font.width);
-	area.height = height * vc->vc_font.height;
-	area.width = width * vc->vc_font.width;
-
-	info->fbops->fb_copyarea(info, &area);
-}
-
 static void ud_clear(struct vc_data *vc, struct fb_info *info, int sy,
 		     int sx, int height, int width)
 {
-	struct fbcon_ops *ops = info->fbcon_par;
 	struct fb_fillrect region;
 	int bgshift = (vc->vc_hi_font_mask) ? 13 : 12;
-	u32 vyres = GETVYRES(ops->p->scrollmode, info);
-	u32 vxres = GETVXRES(ops->p->scrollmode, info);
+	u32 vyres = info->var.yres;
+	u32 vxres = info->var.xres;

 	region.color = attr_bgcol_ec(bgshift,vc,info);
 	region.dy = vyres - ((sy + height) * vc->vc_font.height);
···
 	u32 mod = vc->vc_font.width % 8, cnt, pitch, size;
 	u32 attribute = get_attribute(info, scr_readw(s));
 	u8 *dst, *buf = NULL;
-	u32 vyres = GETVYRES(ops->p->scrollmode, info);
-	u32 vxres = GETVXRES(ops->p->scrollmode, info);
+	u32 vyres = info->var.yres;
+	u32 vxres = info->var.xres;

 	if (!ops->fontbuffer)
 		return;
···
 	int attribute, use_sw = vc->vc_cursor_type & CUR_SW;
 	int err = 1, dx, dy;
 	char *src;
-	u32 vyres = GETVYRES(ops->p->scrollmode, info);
-	u32 vxres = GETVXRES(ops->p->scrollmode, info);
+	u32 vyres = info->var.yres;
+	u32 vxres = info->var.xres;

 	if (!ops->fontbuffer)
 		return;
···
 {
 	struct fbcon_ops *ops = info->fbcon_par;
 	int xoffset, yoffset;
-	u32 vyres = GETVYRES(ops->p->scrollmode, info);
-	u32 vxres = GETVXRES(ops->p->scrollmode, info);
+	u32 vyres = info->var.yres;
+	u32 vxres = info->var.xres;
 	int err;

 	xoffset = vxres - info->var.xres - ops->var.xoffset;
···

 void fbcon_rotate_ud(struct fbcon_ops *ops)
 {
-	ops->bmove = ud_bmove;
 	ops->clear = ud_clear;
 	ops->putcs = ud_putcs;
 	ops->clear_margins = ud_clear_margins;
+4 -1
drivers/video/fbdev/core/fbmem.c
···
 {
 	unlink_framebuffer(fb_info);
 	if (fb_info->pixmap.addr &&
-	    (fb_info->pixmap.flags & FB_PIXMAP_DEFAULT))
+	    (fb_info->pixmap.flags & FB_PIXMAP_DEFAULT)) {
 		kfree(fb_info->pixmap.addr);
+		fb_info->pixmap.addr = NULL;
+	}
+
 	fb_destroy_modelist(&fb_info->modelist);
 	registered_fb[fb_info->node] = NULL;
 	num_registered_fb--;
-16
drivers/video/fbdev/core/tileblit.c
···
 #include <asm/types.h>
 #include "fbcon.h"

-static void tile_bmove(struct vc_data *vc, struct fb_info *info, int sy,
-		       int sx, int dy, int dx, int height, int width)
-{
-	struct fb_tilearea area;
-
-	area.sx = sx;
-	area.sy = sy;
-	area.dx = dx;
-	area.dy = dy;
-	area.height = height;
-	area.width = width;
-
-	info->tileops->fb_tilecopy(info, &area);
-}
-
 static void tile_clear(struct vc_data *vc, struct fb_info *info, int sy,
 		       int sx, int height, int width)
 {
···
 	struct fb_tilemap map;
 	struct fbcon_ops *ops = info->fbcon_par;

-	ops->bmove = tile_bmove;
 	ops->clear = tile_clear;
 	ops->putcs = tile_putcs;
 	ops->clear_margins = tile_clear_margins;
+6 -6
drivers/video/fbdev/skeletonfb.c
···
 }

 /**
- *      xxxfb_copyarea - REQUIRED function. Can use generic routines if
- *                       non acclerated hardware and packed pixel based.
+ *      xxxfb_copyarea - OBSOLETE function.
  *                       Copies one area of the screen to another area.
+ *                       Will be deleted in a future version
  *
  *      @info: frame buffer structure that represents a single frame buffer
  *      @area: Structure providing the data to copy the framebuffer contents
  *	       from one region to another.
  *
- *      This drawing operation copies a rectangular area from one area of the
+ *      This drawing operation copied a rectangular area from one area of the
  *	screen to another area.
  */
 void xxxfb_copyarea(struct fb_info *p, const struct fb_copyarea *area)
···
 	.fb_setcolreg	= xxxfb_setcolreg,
 	.fb_blank	= xxxfb_blank,
 	.fb_pan_display	= xxxfb_pan_display,
-	.fb_fillrect	= xxxfb_fillrect,	/* Needed !!! */
-	.fb_copyarea	= xxxfb_copyarea,	/* Needed !!! */
-	.fb_imageblit	= xxxfb_imageblit,	/* Needed !!! */
+	.fb_fillrect	= xxxfb_fillrect,	/* Needed !!! */
+	.fb_copyarea	= xxxfb_copyarea,	/* Obsolete */
+	.fb_imageblit	= xxxfb_imageblit,	/* Needed !!! */
 	.fb_cursor	= xxxfb_cursor,		/* Optional !!! */
 	.fb_sync	= xxxfb_sync,
 	.fb_ioctl	= xxxfb_ioctl,
+8
include/drm/drm_modeset_lock.h
···
 #ifndef DRM_MODESET_LOCK_H_
 #define DRM_MODESET_LOCK_H_

+#include <linux/types.h> /* stackdepot.h is not self-contained */
+#include <linux/stackdepot.h>
 #include <linux/ww_mutex.h>

 struct drm_modeset_lock;
···
 	 * contended lock.
 	 */
 	struct drm_modeset_lock *contended;
+
+	/*
+	 * Stack depot for debugging when a contended lock was not backed off
+	 * from.
+	 */
+	depot_stack_handle_t stack_depot;

 	/*
 	 * list of held locks (drm_modeset_lock)
+5 -4
include/drm/ttm/ttm_bo_api.h
···
  * @bo: Pointer to a ttm_buffer_object to be initialized.
  * @size: Requested size of buffer object.
  * @type: Requested type of buffer object.
- * @flags: Initial placement flags.
+ * @placement: Initial placement for buffer object.
  * @page_alignment: Data alignment in pages.
  * @ctx: TTM operation context for memory allocation.
+ * @sg: Scatter-gather table.
  * @resv: Pointer to a dma_resv, or NULL to let ttm allocate one.
  * @destroy: Destroy function. Use NULL for kfree().
  *
···
  * @bo: Pointer to a ttm_buffer_object to be initialized.
  * @size: Requested size of buffer object.
  * @type: Requested type of buffer object.
- * @flags: Initial placement flags.
+ * @placement: Initial placement for buffer object.
  * @page_alignment: Data alignment in pages.
  * @interruptible: If needing to sleep to wait for GPU resources,
  * sleep interruptible.
···
  * holds a pointer to a persistent shmem object. Typically, this would
  * point to the shmem object backing a GEM object if TTM is used to back a
  * GEM user interface.
+ * @sg: Scatter-gather table.
  * @resv: Pointer to a dma_resv, or NULL to let ttm allocate one.
  * @destroy: Destroy function. Use NULL for kfree().
  *
···

 vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
 				    pgprot_t prot,
-				    pgoff_t num_prefault,
-				    pgoff_t fault_page_size);
+				    pgoff_t num_prefault);

 vm_fault_t ttm_bo_vm_fault(struct vm_fault *vmf);
+24 -1
include/linux/dma-resv.h
···
 	/** @index: index into the shared fences */
 	unsigned int index;

-	/** @fences: the shared fences */
+	/** @fences: the shared fences; private, *MUST* not dereference */
 	struct dma_resv_list *fences;
+
+	/** @shared_count: number of shared fences */
+	unsigned int shared_count;

 	/** @is_restarted: true if this is the first returned fence */
 	bool is_restarted;
···
 struct dma_fence *dma_resv_iter_first_unlocked(struct dma_resv_iter *cursor);
 struct dma_fence *dma_resv_iter_next_unlocked(struct dma_resv_iter *cursor);
+struct dma_fence *dma_resv_iter_first(struct dma_resv_iter *cursor);
+struct dma_fence *dma_resv_iter_next(struct dma_resv_iter *cursor);

 /**
  * dma_resv_iter_begin - initialize a dma_resv_iter object
···
 #define dma_resv_for_each_fence_unlocked(cursor, fence)	\
 	for (fence = dma_resv_iter_first_unlocked(cursor);	\
 	     fence; fence = dma_resv_iter_next_unlocked(cursor))
+
+/**
+ * dma_resv_for_each_fence - fence iterator
+ * @cursor: a struct dma_resv_iter pointer
+ * @obj: a dma_resv object pointer
+ * @all_fences: true if all fences should be returned
+ * @fence: the current fence
+ *
+ * Iterate over the fences in a struct dma_resv object while holding the
+ * &dma_resv.lock. @all_fences controls if the shared fences are returned as
+ * well. The cursor initialisation is part of the iterator and the fence stays
+ * valid as long as the lock is held and so no extra reference to the fence is
+ * taken.
+ */
+#define dma_resv_for_each_fence(cursor, obj, all_fences, fence)	\
+	for (dma_resv_iter_begin(cursor, obj, all_fences),	\
+	     fence = dma_resv_iter_first(cursor); fence;	\
+	     fence = dma_resv_iter_next(cursor))

 #define dma_resv_held(obj) lockdep_is_held(&(obj)->lock.base)
 #define dma_resv_assert_held(obj) lockdep_assert_held(&(obj)->lock.base)
+1 -1
include/linux/fb.h
···

 	/* Draws a rectangle */
 	void (*fb_fillrect) (struct fb_info *info, const struct fb_fillrect *rect);
-	/* Copy data from area to another */
+	/* Copy data from area to another. Obsolete. */
 	void (*fb_copyarea) (struct fb_info *info, const struct fb_copyarea *region);
 	/* Draws a image to the display */
 	void (*fb_imageblit) (struct fb_info *info, const struct fb_image *image);