Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

drm/ttm: cope with reserved buffers on lru list in ttm_mem_evict_first, v2

Replace the goto loop with a simple for each loop, and only run the
delayed destroy cleanup if we can reserve the buffer first.

No race occurs, since the lru lock is no longer dropped. An empty list
and a list full of unreservable buffers both cause -EBUSY to be returned,
which matches the previous behaviour, because previously buffers on the
lru list were always guaranteed to be reservable.

This should work since ttm currently guarantees that items on the lru
are always reservable, and reserving an item blockingly while holding
some bo is enough to run into a deadlock.

Currently this is not a concern, since removal from the lru list and
reservation are always done atomically, but when that guarantee no
longer holds, we have to handle this situation or end up with possible
deadlocks.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>

Authored by Maarten Lankhorst, committed by Dave Airlie
e7ab2019 2b7b3ad2

+12 -32
drivers/gpu/drm/ttm/ttm_bo.c
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -811,47 +811,27 @@
 	struct ttm_bo_global *glob = bdev->glob;
 	struct ttm_mem_type_manager *man = &bdev->man[mem_type];
 	struct ttm_buffer_object *bo;
-	int ret, put_count = 0;
+	int ret = -EBUSY, put_count;
 
-retry:
 	spin_lock(&glob->lru_lock);
-	if (list_empty(&man->lru)) {
-		spin_unlock(&glob->lru_lock);
-		return -EBUSY;
+	list_for_each_entry(bo, &man->lru, lru) {
+		ret = ttm_bo_reserve_locked(bo, false, true, false, 0);
+		if (!ret)
+			break;
 	}
 
-	bo = list_first_entry(&man->lru, struct ttm_buffer_object, lru);
-	kref_get(&bo->list_kref);
-
-	if (!list_empty(&bo->ddestroy)) {
-		ret = ttm_bo_reserve_locked(bo, interruptible, no_wait_reserve, false, 0);
-		if (!ret)
-			ret = ttm_bo_cleanup_refs_and_unlock(bo, interruptible,
-							     no_wait_gpu);
-		else
-			spin_unlock(&glob->lru_lock);
-
-		kref_put(&bo->list_kref, ttm_bo_release_list);
-
+	if (ret) {
+		spin_unlock(&glob->lru_lock);
 		return ret;
 	}
 
-	ret = ttm_bo_reserve_locked(bo, false, true, false, 0);
+	kref_get(&bo->list_kref);
 
-	if (unlikely(ret == -EBUSY)) {
-		spin_unlock(&glob->lru_lock);
-		if (likely(!no_wait_reserve))
-			ret = ttm_bo_wait_unreserved(bo, interruptible);
-
+	if (!list_empty(&bo->ddestroy)) {
+		ret = ttm_bo_cleanup_refs_and_unlock(bo, interruptible,
+						     no_wait_gpu);
 		kref_put(&bo->list_kref, ttm_bo_release_list);
-
-		/**
-		 * We *need* to retry after releasing the lru lock.
-		 */
-
-		if (unlikely(ret != 0))
-			return ret;
-		goto retry;
+		return ret;
 	}
 
 	put_count = ttm_bo_del_from_lru(bo);