Linux kernel mirror (for testing): git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

mm/kmemleak: prevent soft lockup in kmemleak_scan()'s object iteration loops

Commit 6edda04ccc7c ("mm/kmemleak: prevent soft lockup in first object
iteration loop of kmemleak_scan()") adds cond_resched() in the first
object iteration loop of kmemleak_scan(). However, it turns out that the
2nd object iteration loop can still cause a soft lockup in some cases.
So add a cond_resched() call in the 2nd and 3rd loops as well to prevent
that and for completeness.

Link: https://lkml.kernel.org/r/20221020175619.366317-1-longman@redhat.com
Fixes: 6edda04ccc7c ("mm/kmemleak: prevent soft lockup in first object iteration loop of kmemleak_scan()")
Signed-off-by: Waiman Long <longman@redhat.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Authored by Waiman Long, committed by Andrew Morton
984a6083 e11c4e08

+42 -19
mm/kmemleak.c
···
 }
 
 /*
+ * Conditionally call resched() in an object iteration loop while making sure
+ * that the given object won't go away without RCU read lock by performing a
+ * get_object() if !pinned.
+ *
+ * Return: false if can't do a cond_resched() due to get_object() failure
+ *	   true otherwise
+ */
+static bool kmemleak_cond_resched(struct kmemleak_object *object, bool pinned)
+{
+	if (!pinned && !get_object(object))
+		return false;
+
+	rcu_read_unlock();
+	cond_resched();
+	rcu_read_lock();
+	if (!pinned)
+		put_object(object);
+	return true;
+}
+
+/*
  * Scan data sections and all the referenced memory blocks allocated via the
  * kernel's standard allocators. This function must be called with the
  * scan_mutex held.
···
 	struct zone *zone;
 	int __maybe_unused i;
 	int new_leaks = 0;
-	int loop1_cnt = 0;
+	int loop_cnt = 0;
 
 	jiffies_last_scan = jiffies;
 
···
 	list_for_each_entry_rcu(object, &object_list, object_list) {
 		bool obj_pinned = false;
 
-		loop1_cnt++;
 		raw_spin_lock_irq(&object->lock);
 #ifdef DEBUG
 		/*
···
 		raw_spin_unlock_irq(&object->lock);
 
 		/*
-		 * Do a cond_resched() to avoid soft lockup every 64k objects.
-		 * Make sure a reference has been taken so that the object
-		 * won't go away without RCU read lock.
+		 * Do a cond_resched() every 64k objects to avoid soft lockup.
 		 */
-		if (!(loop1_cnt & 0xffff)) {
-			if (!obj_pinned && !get_object(object)) {
-				/* Try the next object instead */
-				loop1_cnt--;
-				continue;
-			}
-
-			rcu_read_unlock();
-			cond_resched();
-			rcu_read_lock();
-
-			if (!obj_pinned)
-				put_object(object);
-		}
+		if (!(++loop_cnt & 0xffff) &&
+		    !kmemleak_cond_resched(object, obj_pinned))
+			loop_cnt--;	/* Try again on next object */
 	}
 	rcu_read_unlock();
 
···
 	 * scan and color them gray until the next scan.
 	 */
 	rcu_read_lock();
+	loop_cnt = 0;
 	list_for_each_entry_rcu(object, &object_list, object_list) {
+		/*
+		 * Do a cond_resched() every 64k objects to avoid soft lockup.
+		 */
+		if (!(++loop_cnt & 0xffff) &&
+		    !kmemleak_cond_resched(object, false))
+			loop_cnt--;	/* Try again on next object */
+
 		/*
 		 * This is racy but we can save the overhead of lock/unlock
 		 * calls. The missed objects, if any, should be caught in
···
 	 * Scanning result reporting.
 	 */
 	rcu_read_lock();
+	loop_cnt = 0;
 	list_for_each_entry_rcu(object, &object_list, object_list) {
+		/*
+		 * Do a cond_resched() every 64k objects to avoid soft lockup.
+		 */
+		if (!(++loop_cnt & 0xffff) &&
+		    !kmemleak_cond_resched(object, false))
+			loop_cnt--;	/* Try again on next object */
+
 		/*
 		 * This is racy but we can save the overhead of lock/unlock
 		 * calls. The missed objects, if any, should be caught in