Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

drm/ttm/pool, drm/ttm/tt: Provide a helper to shrink pages

Provide a helper to shrink ttm_tt page-vectors on a per-page
basis. A ttm_backup backend could then in theory get away with
allocating a single temporary page for each struct ttm_tt.

This is accomplished by splitting larger pages before trying to
back them up.

In the future we could allow ttm_backup to handle backing up
large pages as well, but currently there's no benefit in
doing that, since the shmem backup backend would have to
split those anyway to avoid allocating too much temporary
memory, and if the backend instead inserts pages into the
swap-cache, those are split on reclaim by the core.

Due to potential backup and recovery errors, allow partially swapped-out
struct ttm_tt's, but mark them as swapped out to stop them from being
swapped out a second time. More details are in the ttm_pool.c DOC
section.

v2:
- A couple of cleanups and error fixes in ttm_pool_back_up_tt.
- s/back_up/backup/
- Add a writeback parameter to the exported interface.
v8:
- Use a struct for flags for readability (Matt Brost)
- Address misc other review comments (Matt Brost)
v9:
- Update the kerneldoc for the ttm_tt::backup field.
v10:
- Rebase.
v13:
- Rebase on ttm_backup interface change. Update kerneldoc.
- Rebase and adjust ttm_tt_is_swapped().
v15:
- Rebase on ttm_backup return value change.
- Rebase on previous restructuring of ttm_pool_alloc()
- Rework the ttm_pool backup interface (Christian König)
- Remove cond_resched() (Christian König)
- Get rid of the need to allocate an intermediate page array
when restoring a multi-order page (Christian König)
- Update documentation.

Cc: Christian König <christian.koenig@amd.com>
Cc: Somalapuram Amaranath <Amaranath.Somalapuram@amd.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: <dri-devel@lists.freedesktop.org>
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Acked-by: Christian Koenig <christian.koenig@amd.com>
Link: https://lore.kernel.org/intel-xe/20250305092220.123405-3-thomas.hellstrom@linux.intel.com

+629 -54
+502 -52
drivers/gpu/drm/ttm/ttm_pool.c
··· 41 41 #include <asm/set_memory.h> 42 42 #endif 43 43 44 + #include <drm/ttm/ttm_backup.h> 44 45 #include <drm/ttm/ttm_pool.h> 45 46 #include <drm/ttm/ttm_tt.h> 46 47 #include <drm/ttm/ttm_bo.h> ··· 74 73 dma_addr_t *dma_addr; 75 74 pgoff_t remaining_pages; 76 75 enum ttm_caching tt_caching; 76 + }; 77 + 78 + /** 79 + * struct ttm_pool_tt_restore - State representing restore from backup 80 + * @pool: The pool used for page allocation while restoring. 81 + * @snapshot_alloc: A snapshot of the most recent struct ttm_pool_alloc_state. 82 + * @alloced_page: Pointer to the page most recently allocated from a pool or system. 83 + * @first_dma: The dma address corresponding to @alloced_page if dma_mapping 84 + * is requested. 85 + * @alloced_pages: The number of allocated pages present in the struct ttm_tt 86 + * page vector from this restore session. 87 + * @restored_pages: The number of 4K pages restored for @alloced_page (which 88 + * is typically a multi-order page). 89 + * @page_caching: The struct ttm_tt requested caching 90 + * @order: The order of @alloced_page. 91 + * 92 + * Recovery from backup might fail when we've recovered less than the 93 + * full ttm_tt. In order not to loose any data (yet), keep information 94 + * around that allows us to restart a failed ttm backup recovery. 95 + */ 96 + struct ttm_pool_tt_restore { 97 + struct ttm_pool *pool; 98 + struct ttm_pool_alloc_state snapshot_alloc; 99 + struct page *alloced_page; 100 + dma_addr_t first_dma; 101 + pgoff_t alloced_pages; 102 + pgoff_t restored_pages; 103 + enum ttm_caching page_caching; 104 + unsigned int order; 77 105 }; 78 106 79 107 static unsigned long page_pool_size; ··· 229 199 return 0; 230 200 } 231 201 232 - /* Map pages of 1 << order size and fill the DMA address array */ 202 + /* DMA Map pages of 1 << order size and return the resulting dma_address. 
*/ 233 203 static int ttm_pool_map(struct ttm_pool *pool, unsigned int order, 234 - struct page *p, dma_addr_t **dma_addr) 204 + struct page *p, dma_addr_t *dma_addr) 235 205 { 236 206 dma_addr_t addr; 237 - unsigned int i; 238 207 239 208 if (pool->use_dma_alloc) { 240 209 struct ttm_pool_dma *dma = (void *)p->private; ··· 247 218 return -EFAULT; 248 219 } 249 220 250 - for (i = 1 << order; i ; --i) { 251 - *(*dma_addr)++ = addr; 252 - addr += PAGE_SIZE; 253 - } 221 + *dma_addr = addr; 254 222 255 223 return 0; 256 224 } ··· 398 372 } 399 373 400 374 /* 375 + * Split larger pages so that we can free each PAGE_SIZE page as soon 376 + * as it has been backed up, in order to avoid memory pressure during 377 + * reclaim. 378 + */ 379 + static void ttm_pool_split_for_swap(struct ttm_pool *pool, struct page *p) 380 + { 381 + unsigned int order = ttm_pool_page_order(pool, p); 382 + pgoff_t nr; 383 + 384 + if (!order) 385 + return; 386 + 387 + split_page(p, order); 388 + nr = 1UL << order; 389 + while (nr--) 390 + (p++)->private = 0; 391 + } 392 + 393 + /** 394 + * DOC: Partial backup and restoration of a struct ttm_tt. 395 + * 396 + * Swapout using ttm_backup_backup_page() and swapin using 397 + * ttm_backup_copy_page() may fail. 398 + * The former most likely due to lack of swap-space or memory, the latter due 399 + * to lack of memory or because of signal interruption during waits. 400 + * 401 + * Backup failure is easily handled by using a ttm_tt pages vector that holds 402 + * both backup handles and page pointers. This has to be taken into account when 403 + * restoring such a ttm_tt from backup, and when freeing it while backed up. 404 + * When restoring, for simplicity, new pages are actually allocated from the 405 + * pool and the contents of any old pages are copied in and then the old pages 406 + * are released. 
407 + * 408 + * For restoration failures, the struct ttm_pool_tt_restore holds sufficient state 409 + * to be able to resume an interrupted restore, and that structure is freed once 410 + * the restoration is complete. If the struct ttm_tt is destroyed while there 411 + * is a valid struct ttm_pool_tt_restore attached, that is also properly taken 412 + * care of. 413 + */ 414 + 415 + /* Is restore ongoing for the currently allocated page? */ 416 + static bool ttm_pool_restore_valid(const struct ttm_pool_tt_restore *restore) 417 + { 418 + return restore && restore->restored_pages < (1 << restore->order); 419 + } 420 + 421 + /* DMA unmap and free a multi-order page, either to the relevant pool or to system. */ 422 + static pgoff_t ttm_pool_unmap_and_free(struct ttm_pool *pool, struct page *page, 423 + const dma_addr_t *dma_addr, enum ttm_caching caching) 424 + { 425 + struct ttm_pool_type *pt = NULL; 426 + unsigned int order; 427 + pgoff_t nr; 428 + 429 + if (pool) { 430 + order = ttm_pool_page_order(pool, page); 431 + nr = (1UL << order); 432 + if (dma_addr) 433 + ttm_pool_unmap(pool, *dma_addr, nr); 434 + 435 + pt = ttm_pool_select_type(pool, caching, order); 436 + } else { 437 + order = page->private; 438 + nr = (1UL << order); 439 + } 440 + 441 + if (pt) 442 + ttm_pool_type_give(pt, page); 443 + else 444 + ttm_pool_free_page(pool, caching, order, page); 445 + 446 + return nr; 447 + } 448 + 449 + /* Populate the page-array using the most recent allocated multi-order page. 
*/ 450 + static void ttm_pool_allocated_page_commit(struct page *allocated, 451 + dma_addr_t first_dma, 452 + struct ttm_pool_alloc_state *alloc, 453 + pgoff_t nr) 454 + { 455 + pgoff_t i; 456 + 457 + for (i = 0; i < nr; ++i) 458 + *alloc->pages++ = allocated++; 459 + 460 + alloc->remaining_pages -= nr; 461 + 462 + if (!alloc->dma_addr) 463 + return; 464 + 465 + for (i = 0; i < nr; ++i) { 466 + *alloc->dma_addr++ = first_dma; 467 + first_dma += PAGE_SIZE; 468 + } 469 + } 470 + 471 + /* 472 + * When restoring, restore backed-up content to the newly allocated page and 473 + * if successful, populate the page-table and dma-address arrays. 474 + */ 475 + static int ttm_pool_restore_commit(struct ttm_pool_tt_restore *restore, 476 + struct ttm_backup *backup, 477 + const struct ttm_operation_ctx *ctx, 478 + struct ttm_pool_alloc_state *alloc) 479 + 480 + { 481 + pgoff_t i, nr = 1UL << restore->order; 482 + struct page **first_page = alloc->pages; 483 + struct page *p; 484 + int ret = 0; 485 + 486 + for (i = restore->restored_pages; i < nr; ++i) { 487 + p = first_page[i]; 488 + if (ttm_backup_page_ptr_is_handle(p)) { 489 + unsigned long handle = ttm_backup_page_ptr_to_handle(p); 490 + 491 + if (handle == 0) { 492 + restore->restored_pages++; 493 + continue; 494 + } 495 + 496 + ret = ttm_backup_copy_page(backup, restore->alloced_page + i, 497 + handle, ctx->interruptible); 498 + if (ret) 499 + break; 500 + 501 + ttm_backup_drop(backup, handle); 502 + } else if (p) { 503 + /* 504 + * We could probably avoid splitting the old page 505 + * using clever logic, but ATM we don't care, as 506 + * we prioritize releasing memory ASAP. Note that 507 + * here, the old retained page is always write-back 508 + * cached. 
509 + */ 510 + ttm_pool_split_for_swap(restore->pool, p); 511 + copy_highpage(restore->alloced_page + i, p); 512 + __free_pages(p, 0); 513 + } 514 + 515 + restore->restored_pages++; 516 + first_page[i] = ttm_backup_handle_to_page_ptr(0); 517 + } 518 + 519 + if (ret) { 520 + if (!restore->restored_pages) { 521 + dma_addr_t *dma_addr = alloc->dma_addr ? &restore->first_dma : NULL; 522 + 523 + ttm_pool_unmap_and_free(restore->pool, restore->alloced_page, 524 + dma_addr, restore->page_caching); 525 + restore->restored_pages = nr; 526 + } 527 + return ret; 528 + } 529 + 530 + ttm_pool_allocated_page_commit(restore->alloced_page, restore->first_dma, 531 + alloc, nr); 532 + if (restore->page_caching == alloc->tt_caching || PageHighMem(restore->alloced_page)) 533 + alloc->caching_divide = alloc->pages; 534 + restore->snapshot_alloc = *alloc; 535 + restore->alloced_pages += nr; 536 + 537 + return 0; 538 + } 539 + 540 + /* If restoring, save information needed for ttm_pool_restore_commit(). */ 541 + static void 542 + ttm_pool_page_allocated_restore(struct ttm_pool *pool, unsigned int order, 543 + struct page *p, 544 + enum ttm_caching page_caching, 545 + dma_addr_t first_dma, 546 + struct ttm_pool_tt_restore *restore, 547 + const struct ttm_pool_alloc_state *alloc) 548 + { 549 + restore->pool = pool; 550 + restore->order = order; 551 + restore->restored_pages = 0; 552 + restore->page_caching = page_caching; 553 + restore->first_dma = first_dma; 554 + restore->alloced_page = p; 555 + restore->snapshot_alloc = *alloc; 556 + } 557 + 558 + /* 401 559 * Called when we got a page, either from a pool or newly allocated. 402 560 * if needed, dma map the page and populate the dma address array. 403 561 * Populate the page address array. 
··· 590 380 */ 591 381 static int ttm_pool_page_allocated(struct ttm_pool *pool, unsigned int order, 592 382 struct page *p, enum ttm_caching page_caching, 593 - struct ttm_pool_alloc_state *alloc) 383 + struct ttm_pool_alloc_state *alloc, 384 + struct ttm_pool_tt_restore *restore) 594 385 { 595 - pgoff_t i, nr = 1UL << order; 596 386 bool caching_consistent; 387 + dma_addr_t first_dma; 597 388 int r = 0; 598 389 599 390 caching_consistent = (page_caching == alloc->tt_caching) || PageHighMem(p); ··· 606 395 } 607 396 608 397 if (alloc->dma_addr) { 609 - r = ttm_pool_map(pool, order, p, &alloc->dma_addr); 398 + r = ttm_pool_map(pool, order, p, &first_dma); 610 399 if (r) 611 400 return r; 612 401 } 613 402 614 - alloc->remaining_pages -= nr; 615 - for (i = 0; i < nr; ++i) 616 - *alloc->pages++ = p++; 403 + if (restore) { 404 + ttm_pool_page_allocated_restore(pool, order, p, page_caching, 405 + first_dma, restore, alloc); 406 + } else { 407 + ttm_pool_allocated_page_commit(p, first_dma, alloc, 1UL << order); 617 408 618 - if (caching_consistent) 619 - alloc->caching_divide = alloc->pages; 409 + if (caching_consistent) 410 + alloc->caching_divide = alloc->pages; 411 + } 620 412 621 413 return 0; 622 414 } ··· 642 428 pgoff_t start_page, pgoff_t end_page) 643 429 { 644 430 struct page **pages = &tt->pages[start_page]; 645 - unsigned int order; 431 + struct ttm_backup *backup = tt->backup; 646 432 pgoff_t i, nr; 647 433 648 434 for (i = start_page; i < end_page; i += nr, pages += nr) { 649 - struct ttm_pool_type *pt = NULL; 435 + struct page *p = *pages; 650 436 651 - order = ttm_pool_page_order(pool, *pages); 652 - nr = (1UL << order); 653 - if (tt->dma_address) 654 - ttm_pool_unmap(pool, tt->dma_address[i], nr); 437 + nr = 1; 438 + if (ttm_backup_page_ptr_is_handle(p)) { 439 + unsigned long handle = ttm_backup_page_ptr_to_handle(p); 655 440 656 - pt = ttm_pool_select_type(pool, caching, order); 657 - if (pt) 658 - ttm_pool_type_give(pt, *pages); 659 - else 660 - 
ttm_pool_free_page(pool, caching, order, *pages); 441 + if (handle != 0) 442 + ttm_backup_drop(backup, handle); 443 + } else if (p) { 444 + dma_addr_t *dma_addr = tt->dma_address ? 445 + tt->dma_address + i : NULL; 446 + 447 + nr = ttm_pool_unmap_and_free(pool, p, dma_addr, caching); 448 + } 661 449 } 662 450 } 663 451 ··· 683 467 return min_t(unsigned int, highest, __fls(alloc->remaining_pages)); 684 468 } 685 469 686 - /** 687 - * ttm_pool_alloc - Fill a ttm_tt object 688 - * 689 - * @pool: ttm_pool to use 690 - * @tt: ttm_tt object to fill 691 - * @ctx: operation context 692 - * 693 - * Fill the ttm_tt object with pages and also make sure to DMA map them when 694 - * necessary. 695 - * 696 - * Returns: 0 on successe, negative error code otherwise. 697 - */ 698 - int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt, 699 - struct ttm_operation_ctx *ctx) 470 + static int __ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt, 471 + const struct ttm_operation_ctx *ctx, 472 + struct ttm_pool_alloc_state *alloc, 473 + struct ttm_pool_tt_restore *restore) 700 474 { 701 - struct ttm_pool_alloc_state alloc; 702 475 enum ttm_caching page_caching; 703 476 gfp_t gfp_flags = GFP_USER; 704 477 pgoff_t caching_divide; ··· 696 491 struct page *p; 697 492 int r; 698 493 699 - ttm_pool_alloc_state_init(tt, &alloc); 700 - 701 - WARN_ON(!alloc.remaining_pages || ttm_tt_is_populated(tt)); 702 - WARN_ON(alloc.dma_addr && !pool->dev); 494 + WARN_ON(!alloc->remaining_pages || ttm_tt_is_populated(tt)); 495 + WARN_ON(alloc->dma_addr && !pool->dev); 703 496 704 497 if (tt->page_flags & TTM_TT_FLAG_ZERO_ALLOC) 705 498 gfp_flags |= __GFP_ZERO; ··· 712 509 713 510 page_caching = tt->caching; 714 511 allow_pools = true; 715 - for (order = ttm_pool_alloc_find_order(MAX_PAGE_ORDER, &alloc); 716 - alloc.remaining_pages; 717 - order = ttm_pool_alloc_find_order(order, &alloc)) { 512 + for (order = ttm_pool_alloc_find_order(MAX_PAGE_ORDER, alloc); 513 + alloc->remaining_pages; 514 + order 
= ttm_pool_alloc_find_order(order, alloc)) { 718 515 struct ttm_pool_type *pt; 719 516 720 517 /* First, try to allocate a page from a pool if one exists. */ ··· 744 541 r = -ENOMEM; 745 542 goto error_free_all; 746 543 } 747 - r = ttm_pool_page_allocated(pool, order, p, page_caching, &alloc); 544 + r = ttm_pool_page_allocated(pool, order, p, page_caching, alloc, 545 + restore); 748 546 if (r) 749 547 goto error_free_page; 548 + 549 + if (ttm_pool_restore_valid(restore)) { 550 + r = ttm_pool_restore_commit(restore, tt->backup, ctx, alloc); 551 + if (r) 552 + goto error_free_all; 553 + } 750 554 } 751 555 752 - r = ttm_pool_apply_caching(&alloc); 556 + r = ttm_pool_apply_caching(alloc); 753 557 if (r) 754 558 goto error_free_all; 559 + 560 + kfree(tt->restore); 561 + tt->restore = NULL; 755 562 756 563 return 0; 757 564 ··· 769 556 ttm_pool_free_page(pool, page_caching, order, p); 770 557 771 558 error_free_all: 772 - caching_divide = alloc.caching_divide - tt->pages; 559 + if (tt->restore) 560 + return r; 561 + 562 + caching_divide = alloc->caching_divide - tt->pages; 773 563 ttm_pool_free_range(pool, tt, tt->caching, 0, caching_divide); 774 564 ttm_pool_free_range(pool, tt, ttm_cached, caching_divide, 775 - tt->num_pages - alloc.remaining_pages); 565 + tt->num_pages - alloc->remaining_pages); 776 566 777 567 return r; 778 568 } 569 + 570 + /** 571 + * ttm_pool_alloc - Fill a ttm_tt object 572 + * 573 + * @pool: ttm_pool to use 574 + * @tt: ttm_tt object to fill 575 + * @ctx: operation context 576 + * 577 + * Fill the ttm_tt object with pages and also make sure to DMA map them when 578 + * necessary. 579 + * 580 + * Returns: 0 on successe, negative error code otherwise. 
581 + */ 582 + int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt, 583 + struct ttm_operation_ctx *ctx) 584 + { 585 + struct ttm_pool_alloc_state alloc; 586 + 587 + if (WARN_ON(ttm_tt_is_backed_up(tt))) 588 + return -EINVAL; 589 + 590 + ttm_pool_alloc_state_init(tt, &alloc); 591 + 592 + return __ttm_pool_alloc(pool, tt, ctx, &alloc, NULL); 593 + } 779 594 EXPORT_SYMBOL(ttm_pool_alloc); 595 + 596 + /** 597 + * ttm_pool_restore_and_alloc - Fill a ttm_tt, restoring previously backed-up 598 + * content. 599 + * 600 + * @pool: ttm_pool to use 601 + * @tt: ttm_tt object to fill 602 + * @ctx: operation context 603 + * 604 + * Fill the ttm_tt object with pages and also make sure to DMA map them when 605 + * necessary. Read in backed-up content. 606 + * 607 + * Returns: 0 on successe, negative error code otherwise. 608 + */ 609 + int ttm_pool_restore_and_alloc(struct ttm_pool *pool, struct ttm_tt *tt, 610 + const struct ttm_operation_ctx *ctx) 611 + { 612 + struct ttm_pool_alloc_state alloc; 613 + 614 + if (WARN_ON(!ttm_tt_is_backed_up(tt))) 615 + return -EINVAL; 616 + 617 + if (!tt->restore) { 618 + gfp_t gfp = GFP_KERNEL | __GFP_NOWARN; 619 + 620 + ttm_pool_alloc_state_init(tt, &alloc); 621 + if (ctx->gfp_retry_mayfail) 622 + gfp |= __GFP_RETRY_MAYFAIL; 623 + 624 + tt->restore = kzalloc(sizeof(*tt->restore), gfp); 625 + if (!tt->restore) 626 + return -ENOMEM; 627 + 628 + tt->restore->snapshot_alloc = alloc; 629 + tt->restore->pool = pool; 630 + tt->restore->restored_pages = 1; 631 + } else { 632 + struct ttm_pool_tt_restore *restore = tt->restore; 633 + int ret; 634 + 635 + alloc = restore->snapshot_alloc; 636 + if (ttm_pool_restore_valid(tt->restore)) { 637 + ret = ttm_pool_restore_commit(restore, tt->backup, ctx, &alloc); 638 + if (ret) 639 + return ret; 640 + } 641 + if (!alloc.remaining_pages) 642 + return 0; 643 + } 644 + 645 + return __ttm_pool_alloc(pool, tt, ctx, &alloc, tt->restore); 646 + } 780 647 781 648 /** 782 649 * ttm_pool_free - Free the backing 
pages from a ttm_tt object ··· 874 581 ttm_pool_shrink(); 875 582 } 876 583 EXPORT_SYMBOL(ttm_pool_free); 584 + 585 + /** 586 + * ttm_pool_drop_backed_up() - Release content of a swapped-out struct ttm_tt 587 + * @tt: The struct ttm_tt. 588 + * 589 + * Release handles with associated content or any remaining pages of 590 + * a backed-up struct ttm_tt. 591 + */ 592 + void ttm_pool_drop_backed_up(struct ttm_tt *tt) 593 + { 594 + struct ttm_pool_tt_restore *restore; 595 + pgoff_t start_page = 0; 596 + 597 + WARN_ON(!ttm_tt_is_backed_up(tt)); 598 + 599 + restore = tt->restore; 600 + 601 + /* 602 + * Unmap and free any uncommitted restore page. 603 + * any tt page-array backup entries already read back has 604 + * been cleared already 605 + */ 606 + if (ttm_pool_restore_valid(restore)) { 607 + dma_addr_t *dma_addr = tt->dma_address ? &restore->first_dma : NULL; 608 + 609 + ttm_pool_unmap_and_free(restore->pool, restore->alloced_page, 610 + dma_addr, restore->page_caching); 611 + restore->restored_pages = 1UL << restore->order; 612 + } 613 + 614 + /* 615 + * If a restore is ongoing, part of the tt pages may have a 616 + * caching different than writeback. 617 + */ 618 + if (restore) { 619 + pgoff_t mid = restore->snapshot_alloc.caching_divide - tt->pages; 620 + 621 + start_page = restore->alloced_pages; 622 + WARN_ON(mid > start_page); 623 + /* Pages that might be dma-mapped and non-cached */ 624 + ttm_pool_free_range(restore->pool, tt, tt->caching, 625 + 0, mid); 626 + /* Pages that might be dma-mapped but cached */ 627 + ttm_pool_free_range(restore->pool, tt, ttm_cached, 628 + mid, restore->alloced_pages); 629 + kfree(restore); 630 + tt->restore = NULL; 631 + } 632 + 633 + ttm_pool_free_range(NULL, tt, ttm_cached, start_page, tt->num_pages); 634 + } 635 + 636 + /** 637 + * ttm_pool_backup() - Back up or purge a struct ttm_tt 638 + * @pool: The pool used when allocating the struct ttm_tt. 639 + * @tt: The struct ttm_tt. 
640 + * @flags: Flags to govern the backup behaviour. 641 + * 642 + * Back up or purge a struct ttm_tt. If @purge is true, then 643 + * all pages will be freed directly to the system rather than to the pool 644 + * they were allocated from, making the function behave similarly to 645 + * ttm_pool_free(). If @purge is false the pages will be backed up instead, 646 + * exchanged for handles. 647 + * A subsequent call to ttm_pool_restore_and_alloc() will then read back the content and 648 + * a subsequent call to ttm_pool_drop_backed_up() will drop it. 649 + * If backup of a page fails for whatever reason, @ttm will still be 650 + * partially backed up, retaining those pages for which backup fails. 651 + * In that case, this function can be retried, possibly after freeing up 652 + * memory resources. 653 + * 654 + * Return: Number of pages actually backed up or freed, or negative 655 + * error code on error. 656 + */ 657 + long ttm_pool_backup(struct ttm_pool *pool, struct ttm_tt *tt, 658 + const struct ttm_backup_flags *flags) 659 + { 660 + struct ttm_backup *backup = tt->backup; 661 + struct page *page; 662 + unsigned long handle; 663 + gfp_t alloc_gfp; 664 + gfp_t gfp; 665 + int ret = 0; 666 + pgoff_t shrunken = 0; 667 + pgoff_t i, num_pages; 668 + 669 + if (WARN_ON(ttm_tt_is_backed_up(tt))) 670 + return -EINVAL; 671 + 672 + if ((!ttm_backup_bytes_avail() && !flags->purge) || 673 + pool->use_dma_alloc || ttm_tt_is_backed_up(tt)) 674 + return -EBUSY; 675 + 676 + #ifdef CONFIG_X86 677 + /* Anything returned to the system needs to be cached. 
*/ 678 + if (tt->caching != ttm_cached) 679 + set_pages_array_wb(tt->pages, tt->num_pages); 680 + #endif 681 + 682 + if (tt->dma_address || flags->purge) { 683 + for (i = 0; i < tt->num_pages; i += num_pages) { 684 + unsigned int order; 685 + 686 + page = tt->pages[i]; 687 + if (unlikely(!page)) { 688 + num_pages = 1; 689 + continue; 690 + } 691 + 692 + order = ttm_pool_page_order(pool, page); 693 + num_pages = 1UL << order; 694 + if (tt->dma_address) 695 + ttm_pool_unmap(pool, tt->dma_address[i], 696 + num_pages); 697 + if (flags->purge) { 698 + shrunken += num_pages; 699 + page->private = 0; 700 + __free_pages(page, order); 701 + memset(tt->pages + i, 0, 702 + num_pages * sizeof(*tt->pages)); 703 + } 704 + } 705 + } 706 + 707 + if (flags->purge) 708 + return shrunken; 709 + 710 + if (pool->use_dma32) 711 + gfp = GFP_DMA32; 712 + else 713 + gfp = GFP_HIGHUSER; 714 + 715 + alloc_gfp = GFP_KERNEL | __GFP_HIGH | __GFP_NOWARN | __GFP_RETRY_MAYFAIL; 716 + 717 + for (i = 0; i < tt->num_pages; ++i) { 718 + s64 shandle; 719 + 720 + page = tt->pages[i]; 721 + if (unlikely(!page)) 722 + continue; 723 + 724 + ttm_pool_split_for_swap(pool, page); 725 + 726 + shandle = ttm_backup_backup_page(backup, page, flags->writeback, i, 727 + gfp, alloc_gfp); 728 + if (shandle < 0) { 729 + /* We allow partially shrunken tts */ 730 + ret = shandle; 731 + break; 732 + } 733 + handle = shandle; 734 + tt->pages[i] = ttm_backup_handle_to_page_ptr(handle); 735 + put_page(page); 736 + shrunken++; 737 + } 738 + 739 + return shrunken ? shrunken : ret; 740 + } 877 741 878 742 /** 879 743 * ttm_pool_init - Initialize a pool
+54
drivers/gpu/drm/ttm/ttm_tt.c
··· 40 40 #include <drm/drm_cache.h> 41 41 #include <drm/drm_device.h> 42 42 #include <drm/drm_util.h> 43 + #include <drm/ttm/ttm_backup.h> 43 44 #include <drm/ttm/ttm_bo.h> 44 45 #include <drm/ttm/ttm_tt.h> 45 46 ··· 159 158 ttm->swap_storage = NULL; 160 159 ttm->sg = bo->sg; 161 160 ttm->caching = caching; 161 + ttm->restore = NULL; 162 + ttm->backup = NULL; 162 163 } 163 164 164 165 int ttm_tt_init(struct ttm_tt *ttm, struct ttm_buffer_object *bo, ··· 184 181 if (ttm->swap_storage) 185 182 fput(ttm->swap_storage); 186 183 ttm->swap_storage = NULL; 184 + 185 + if (ttm_tt_is_backed_up(ttm)) 186 + ttm_pool_drop_backed_up(ttm); 187 + if (ttm->backup) { 188 + ttm_backup_fini(ttm->backup); 189 + ttm->backup = NULL; 190 + } 187 191 188 192 if (ttm->pages) 189 193 kvfree(ttm->pages); ··· 262 252 return ret; 263 253 } 264 254 EXPORT_SYMBOL_FOR_TESTS_ONLY(ttm_tt_swapin); 255 + 256 + /** 257 + * ttm_tt_backup() - Helper to back up a struct ttm_tt. 258 + * @bdev: The TTM device. 259 + * @tt: The struct ttm_tt. 260 + * @flags: Flags that govern the backup behaviour. 261 + * 262 + * Update the page accounting and call ttm_pool_shrink_tt to free pages 263 + * or back them up. 264 + * 265 + * Return: Number of pages freed or swapped out, or negative error code on 266 + * error. 
267 + */ 268 + long ttm_tt_backup(struct ttm_device *bdev, struct ttm_tt *tt, 269 + const struct ttm_backup_flags flags) 270 + { 271 + long ret; 272 + 273 + if (WARN_ON(IS_ERR_OR_NULL(tt->backup))) 274 + return 0; 275 + 276 + ret = ttm_pool_backup(&bdev->pool, tt, &flags); 277 + if (ret > 0) { 278 + tt->page_flags &= ~TTM_TT_FLAG_PRIV_POPULATED; 279 + tt->page_flags |= TTM_TT_FLAG_BACKED_UP; 280 + } 281 + 282 + return ret; 283 + } 284 + 285 + int ttm_tt_restore(struct ttm_device *bdev, struct ttm_tt *tt, 286 + const struct ttm_operation_ctx *ctx) 287 + { 288 + int ret = ttm_pool_restore_and_alloc(&bdev->pool, tt, ctx); 289 + 290 + if (ret) 291 + return ret; 292 + 293 + tt->page_flags &= ~TTM_TT_FLAG_BACKED_UP; 294 + 295 + return 0; 296 + } 297 + EXPORT_SYMBOL(ttm_tt_restore); 265 298 266 299 /** 267 300 * ttm_tt_swapout - swap out tt object ··· 401 348 goto error; 402 349 403 350 ttm->page_flags |= TTM_TT_FLAG_PRIV_POPULATED; 351 + ttm->page_flags &= ~TTM_TT_FLAG_BACKED_UP; 404 352 if (unlikely(ttm->page_flags & TTM_TT_FLAG_SWAPPED)) { 405 353 ret = ttm_tt_swapin(ttm); 406 354 if (unlikely(ret != 0)) {
+8
include/drm/ttm/ttm_pool.h
··· 33 33 34 34 struct device; 35 35 struct seq_file; 36 + struct ttm_backup_flags; 36 37 struct ttm_operation_ctx; 37 38 struct ttm_pool; 38 39 struct ttm_tt; ··· 89 88 void ttm_pool_fini(struct ttm_pool *pool); 90 89 91 90 int ttm_pool_debugfs(struct ttm_pool *pool, struct seq_file *m); 91 + 92 + void ttm_pool_drop_backed_up(struct ttm_tt *tt); 93 + 94 + long ttm_pool_backup(struct ttm_pool *pool, struct ttm_tt *ttm, 95 + const struct ttm_backup_flags *flags); 96 + int ttm_pool_restore_and_alloc(struct ttm_pool *pool, struct ttm_tt *tt, 97 + const struct ttm_operation_ctx *ctx); 92 98 93 99 int ttm_pool_mgr_init(unsigned long num_pages); 94 100 void ttm_pool_mgr_fini(void);
+65 -2
include/drm/ttm/ttm_tt.h
··· 32 32 #include <drm/ttm/ttm_caching.h> 33 33 #include <drm/ttm/ttm_kmap_iter.h> 34 34 35 + struct ttm_backup; 35 36 struct ttm_device; 36 37 struct ttm_tt; 37 38 struct ttm_resource; 38 39 struct ttm_buffer_object; 39 40 struct ttm_operation_ctx; 41 + struct ttm_pool_tt_restore; 40 42 41 43 /** 42 44 * struct ttm_tt - This is a structure holding the pages, caching- and aperture ··· 87 85 * fault handling abuses the DMA api a bit and dma_map_attrs can't be 88 86 * used to assure pgprot always matches. 89 87 * 88 + * TTM_TT_FLAG_BACKED_UP: TTM internal only. This is set if the 89 + * struct ttm_tt has been (possibly partially) backed up. 90 + * 90 91 * TTM_TT_FLAG_PRIV_POPULATED: TTM internal only. DO NOT USE. This is 91 92 * set by TTM after ttm_tt_populate() has successfully returned, and is 92 93 * then unset when TTM calls ttm_tt_unpopulate(). 94 + * 93 95 */ 94 96 #define TTM_TT_FLAG_SWAPPED BIT(0) 95 97 #define TTM_TT_FLAG_ZERO_ALLOC BIT(1) 96 98 #define TTM_TT_FLAG_EXTERNAL BIT(2) 97 99 #define TTM_TT_FLAG_EXTERNAL_MAPPABLE BIT(3) 98 100 #define TTM_TT_FLAG_DECRYPTED BIT(4) 101 + #define TTM_TT_FLAG_BACKED_UP BIT(5) 99 102 100 - #define TTM_TT_FLAG_PRIV_POPULATED BIT(5) 103 + #define TTM_TT_FLAG_PRIV_POPULATED BIT(6) 101 104 uint32_t page_flags; 102 105 /** @num_pages: Number of pages in the page array. */ 103 106 uint32_t num_pages; ··· 113 106 /** @swap_storage: Pointer to shmem struct file for swap storage. */ 114 107 struct file *swap_storage; 115 108 /** 109 + * @backup: Pointer to backup struct for backed up tts. 110 + * Could be unified with @swap_storage. Meanwhile, the driver's 111 + * ttm_tt_create() callback is responsible for assigning 112 + * this field. 113 + */ 114 + struct ttm_backup *backup; 115 + /** 116 116 * @caching: The current caching state of the pages, see enum 117 117 * ttm_caching. 118 118 */ 119 119 enum ttm_caching caching; 120 + /** @restore: Partial restoration from backup state. 
TTM private */ 121 + struct ttm_pool_tt_restore *restore; 120 122 }; 121 123 122 124 /** ··· 145 129 return tt->page_flags & TTM_TT_FLAG_PRIV_POPULATED; 146 130 } 147 131 132 + /** 133 + * ttm_tt_is_swapped() - Whether the ttm_tt is swapped out or backed up 134 + * @tt: The struct ttm_tt. 135 + * 136 + * Return: true if swapped or backed up, false otherwise. 137 + */ 148 138 static inline bool ttm_tt_is_swapped(const struct ttm_tt *tt) 149 139 { 150 - return tt->page_flags & TTM_TT_FLAG_SWAPPED; 140 + return tt->page_flags & (TTM_TT_FLAG_SWAPPED | TTM_TT_FLAG_BACKED_UP); 141 + } 142 + 143 + /** 144 + * ttm_tt_is_backed_up() - Whether the ttm_tt backed up 145 + * @tt: The struct ttm_tt. 146 + * 147 + * Return: true if swapped or backed up, false otherwise. 148 + */ 149 + static inline bool ttm_tt_is_backed_up(const struct ttm_tt *tt) 150 + { 151 + return tt->page_flags & TTM_TT_FLAG_BACKED_UP; 152 + } 153 + 154 + /** 155 + * ttm_tt_clear_backed_up() - Clear the ttm_tt backed-up status 156 + * @tt: The struct ttm_tt. 157 + * 158 + * Drivers can use this functionto clear the backed-up status, 159 + * for example before destroying or re-validating a purged tt. 160 + */ 161 + static inline void ttm_tt_clear_backed_up(struct ttm_tt *tt) 162 + { 163 + tt->page_flags &= ~TTM_TT_FLAG_BACKED_UP; 151 164 } 152 165 153 166 /** ··· 280 235 struct ttm_kmap_iter *ttm_kmap_iter_tt_init(struct ttm_kmap_iter_tt *iter_tt, 281 236 struct ttm_tt *tt); 282 237 unsigned long ttm_tt_pages_limit(void); 238 + 239 + /** 240 + * struct ttm_backup_flags - Flags to govern backup behaviour. 241 + * @purge: Free pages without backing up. Bypass pools. 242 + * @writeback: Attempt to copy contents directly to swap space, even 243 + * if that means blocking on writes to external memory. 
244 + */ 245 + struct ttm_backup_flags { 246 + u32 purge : 1; 247 + u32 writeback : 1; 248 + }; 249 + 250 + long ttm_tt_backup(struct ttm_device *bdev, struct ttm_tt *tt, 251 + const struct ttm_backup_flags flags); 252 + 253 + int ttm_tt_restore(struct ttm_device *bdev, struct ttm_tt *tt, 254 + const struct ttm_operation_ctx *ctx); 255 + 283 256 #if IS_ENABLED(CONFIG_AGP) 284 257 #include <linux/agp_backend.h> 285 258