
Btrfs: fix tree mod logging

While running the test btrfs/004 from xfstests in a loop, it failed
about 1 time out of 20 runs on my desktop. The failure happened in
the backref walking part of the test, and the test's error message
looked like this:

btrfs/004 93s ... [failed, exit status 1] - output mismatch (see /home/fdmanana/git/hub/xfstests_2/results//btrfs/004.out.bad)
--- tests/btrfs/004.out 2013-11-26 18:25:29.263333714 +0000
+++ /home/fdmanana/git/hub/xfstests_2/results//btrfs/004.out.bad 2013-12-10 15:25:10.327518516 +0000
@@ -1,3 +1,8 @@
QA output created by 004
*** test backref walking
-*** done
+unexpected output from
+ /home/fdmanana/git/hub/btrfs-progs/btrfs inspect-internal logical-resolve -P 141512704 /home/fdmanana/btrfs-tests/scratch_1
+expected inum: 405, expected address: 454656, file: /home/fdmanana/btrfs-tests/scratch_1/snap1/p0/d6/d3d/d156/fce, got:
+
...
(Run 'diff -u tests/btrfs/004.out /home/fdmanana/git/hub/xfstests_2/results//btrfs/004.out.bad' to see the entire diff)
Ran: btrfs/004
Failures: btrfs/004
Failed 1 of 1 tests

But immediately after the test finished, the btrfs inspect-internal command
returned the expected output:

$ btrfs inspect-internal logical-resolve -P 141512704 /home/fdmanana/btrfs-tests/scratch_1
inode 405 offset 454656 root 258
inode 405 offset 454656 root 5

It turned out this was because the btrfs_search_old_slot() calls performed
during backref walking (backref.c:__resolve_indirect_ref) were not finding
anything. The reason was that the tree mod logging code was not logging
some multi-step node operations atomically, so btrfs_search_old_slot()
callers often iterated over an incomplete tree that wasn't fully consistent
with any tree state from the past. Besides missing items, this often (but
not always) resulted in -EIO errors during old slot searches, reported in
dmesg like this:

[ 4299.933936] ------------[ cut here ]------------
[ 4299.933949] WARNING: CPU: 0 PID: 23190 at fs/btrfs/ctree.c:1343 btrfs_search_old_slot+0x57b/0xab0 [btrfs]()
[ 4299.933950] Modules linked in: btrfs raid6_pq xor pci_stub vboxpci(O) vboxnetadp(O) vboxnetflt(O) vboxdrv(O) bnep rfcomm bluetooth parport_pc ppdev binfmt_misc joydev snd_hda_codec_h
[ 4299.933977] CPU: 0 PID: 23190 Comm: btrfs Tainted: G W O 3.12.0-fdm-btrfs-next-16+ #70
[ 4299.933978] Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./Z77 Pro4, BIOS P1.50 09/04/2012
[ 4299.933979] 000000000000053f ffff8806f3fd98f8 ffffffff8176d284 0000000000000007
[ 4299.933982] 0000000000000000 ffff8806f3fd9938 ffffffff8104a81c ffff880659c64b70
[ 4299.933984] ffff880659c643d0 ffff8806599233d8 ffff880701e2e938 0000160000000000
[ 4299.933987] Call Trace:
[ 4299.933991] [<ffffffff8176d284>] dump_stack+0x55/0x76
[ 4299.933994] [<ffffffff8104a81c>] warn_slowpath_common+0x8c/0xc0
[ 4299.933997] [<ffffffff8104a86a>] warn_slowpath_null+0x1a/0x20
[ 4299.934003] [<ffffffffa065d3bb>] btrfs_search_old_slot+0x57b/0xab0 [btrfs]
[ 4299.934005] [<ffffffff81775f3b>] ? _raw_read_unlock+0x2b/0x50
[ 4299.934010] [<ffffffffa0655001>] ? __tree_mod_log_search+0x81/0xc0 [btrfs]
[ 4299.934019] [<ffffffffa06dd9b0>] __resolve_indirect_refs+0x130/0x5f0 [btrfs]
[ 4299.934027] [<ffffffffa06a21f1>] ? free_extent_buffer+0x61/0xc0 [btrfs]
[ 4299.934034] [<ffffffffa06de39c>] find_parent_nodes+0x1fc/0xe40 [btrfs]
[ 4299.934042] [<ffffffffa06b13e0>] ? defrag_lookup_extent+0xe0/0xe0 [btrfs]
[ 4299.934048] [<ffffffffa06b13e0>] ? defrag_lookup_extent+0xe0/0xe0 [btrfs]
[ 4299.934056] [<ffffffffa06df980>] iterate_extent_inodes+0xe0/0x250 [btrfs]
[ 4299.934058] [<ffffffff817762db>] ? _raw_spin_unlock+0x2b/0x50
[ 4299.934065] [<ffffffffa06dfb82>] iterate_inodes_from_logical+0x92/0xb0 [btrfs]
[ 4299.934071] [<ffffffffa06b13e0>] ? defrag_lookup_extent+0xe0/0xe0 [btrfs]
[ 4299.934078] [<ffffffffa06b7015>] btrfs_ioctl+0xf65/0x1f60 [btrfs]
[ 4299.934080] [<ffffffff811658b8>] ? handle_mm_fault+0x278/0xb00
[ 4299.934083] [<ffffffff81075563>] ? up_read+0x23/0x40
[ 4299.934085] [<ffffffff8177a41c>] ? __do_page_fault+0x20c/0x5a0
[ 4299.934088] [<ffffffff811b2946>] do_vfs_ioctl+0x96/0x570
[ 4299.934090] [<ffffffff81776e23>] ? error_sti+0x5/0x6
[ 4299.934093] [<ffffffff810b71e8>] ? trace_hardirqs_off_caller+0x28/0xd0
[ 4299.934096] [<ffffffff81776a09>] ? retint_swapgs+0xe/0x13
[ 4299.934098] [<ffffffff811b2eb1>] SyS_ioctl+0x91/0xb0
[ 4299.934100] [<ffffffff813eecde>] ? trace_hardirqs_on_thunk+0x3a/0x3f
[ 4299.934102] [<ffffffff8177ef12>] system_call_fastpath+0x16/0x1b
[ 4299.934104] ---[ end trace 48f0cfc902491414 ]---
[ 4299.934378] btrfs bad fsid on block 0

The tree mod log operations that must be atomic, namely tree_mod_log_free_eb,
tree_mod_log_eb_copy, tree_mod_log_insert_root and tree_mod_log_insert_move,
used to be performed atomically before the following commit:

c8cc6341653721b54760480b0d0d9b5f09b46741
(Btrfs: stop using GFP_ATOMIC for the tree mod log allocations)

That change removed the atomicity of such operations. This patch restores the
atomicity while still not doing the GFP_ATOMIC allocations of tree_mod_elem
structures, so it has to do the allocations using GFP_NOFS before acquiring
the mod log lock.

This issue has been experienced by several users recently, for example:

http://www.spinics.net/lists/linux-btrfs/msg28574.html

After running the btrfs/004 test for 679 consecutive iterations with this
patch applied, I didn't run into the issue anymore.

Cc: stable@vger.kernel.org
Signed-off-by: Filipe David Borba Manana <fdmanana@gmail.com>
Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Chris Mason <clm@fb.com>

Authored by Filipe David Borba Manana, committed by Chris Mason
5de865ee 66ef7d65

+297 -90
fs/btrfs/ctree.c
···
 	struct extent_buffer *src_buf);
 static void del_ptr(struct btrfs_root *root, struct btrfs_path *path,
 		    int level, int slot);
-static void tree_mod_log_free_eb(struct btrfs_fs_info *fs_info,
+static int tree_mod_log_free_eb(struct btrfs_fs_info *fs_info,
 				struct extent_buffer *eb);
 
 struct btrfs_path *btrfs_alloc_path(void)
···
  * the index is the shifted logical of the *new* root node for root replace
  * operations, or the shifted logical of the affected block for all other
  * operations.
+ *
+ * Note: must be called with write lock (tree_mod_log_write_lock).
  */
 static noinline int
 __tree_mod_log_insert(struct btrfs_fs_info *fs_info, struct tree_mod_elem *tm)
···
 	struct rb_node **new;
 	struct rb_node *parent = NULL;
 	struct tree_mod_elem *cur;
-	int ret = 0;
 
 	BUG_ON(!tm);
-
-	tree_mod_log_write_lock(fs_info);
-	if (list_empty(&fs_info->tree_mod_seq_list)) {
-		tree_mod_log_write_unlock(fs_info);
-		/*
-		 * Ok we no longer care about logging modifications, free up tm
-		 * and return 0. Any callers shouldn't be using tm after
-		 * calling tree_mod_log_insert, but if they do we can just
-		 * change this to return a special error code to let the callers
-		 * do their own thing.
-		 */
-		kfree(tm);
-		return 0;
-	}
 
 	spin_lock(&fs_info->tree_mod_seq_lock);
 	tm->seq = btrfs_inc_tree_mod_seq_minor(fs_info);
···
 			new = &((*new)->rb_left);
 		else if (cur->seq > tm->seq)
 			new = &((*new)->rb_right);
-		else {
-			ret = -EEXIST;
-			kfree(tm);
-			goto out;
-		}
+		else
+			return -EEXIST;
 	}
 
 	rb_link_node(&tm->node, parent, new);
 	rb_insert_color(&tm->node, tm_root);
-out:
-	tree_mod_log_write_unlock(fs_info);
-	return ret;
+	return 0;
 }
 
 /*
···
 		return 1;
 	if (eb && btrfs_header_level(eb) == 0)
 		return 1;
+
+	tree_mod_log_write_lock(fs_info);
+	if (list_empty(&(fs_info)->tree_mod_seq_list)) {
+		tree_mod_log_write_unlock(fs_info);
+		return 1;
+	}
+
 	return 0;
 }
 
-static inline int
-__tree_mod_log_insert_key(struct btrfs_fs_info *fs_info,
-			  struct extent_buffer *eb, int slot,
-			  enum mod_log_op op, gfp_t flags)
+/* Similar to tree_mod_dont_log, but doesn't acquire any locks. */
+static inline int tree_mod_need_log(const struct btrfs_fs_info *fs_info,
+				    struct extent_buffer *eb)
+{
+	smp_mb();
+	if (list_empty(&(fs_info)->tree_mod_seq_list))
+		return 0;
+	if (eb && btrfs_header_level(eb) == 0)
+		return 0;
+
+	return 1;
+}
+
+static struct tree_mod_elem *
+alloc_tree_mod_elem(struct extent_buffer *eb, int slot,
+		    enum mod_log_op op, gfp_t flags)
 {
 	struct tree_mod_elem *tm;
 
 	tm = kzalloc(sizeof(*tm), flags);
 	if (!tm)
-		return -ENOMEM;
+		return NULL;
 
 	tm->index = eb->start >> PAGE_CACHE_SHIFT;
 	if (op != MOD_LOG_KEY_ADD) {
···
 	tm->op = op;
 	tm->slot = slot;
 	tm->generation = btrfs_node_ptr_generation(eb, slot);
+	RB_CLEAR_NODE(&tm->node);
 
-	return __tree_mod_log_insert(fs_info, tm);
+	return tm;
 }
 
 static noinline int
···
 		  struct extent_buffer *eb, int slot,
 		  enum mod_log_op op, gfp_t flags)
 {
-	if (tree_mod_dont_log(fs_info, eb))
+	struct tree_mod_elem *tm;
+	int ret;
+
+	if (!tree_mod_need_log(fs_info, eb))
 		return 0;
 
-	return __tree_mod_log_insert_key(fs_info, eb, slot, op, flags);
+	tm = alloc_tree_mod_elem(eb, slot, op, flags);
+	if (!tm)
+		return -ENOMEM;
+
+	if (tree_mod_dont_log(fs_info, eb)) {
+		kfree(tm);
+		return 0;
+	}
+
+	ret = __tree_mod_log_insert(fs_info, tm);
+	tree_mod_log_write_unlock(fs_info);
+	if (ret)
+		kfree(tm);
+
+	return ret;
 }
 
 static noinline int
···
 		 struct extent_buffer *eb, int dst_slot, int src_slot,
 		 int nr_items, gfp_t flags)
 {
-	struct tree_mod_elem *tm;
-	int ret;
+	struct tree_mod_elem *tm = NULL;
+	struct tree_mod_elem **tm_list = NULL;
+	int ret = 0;
 	int i;
+	int locked = 0;
 
-	if (tree_mod_dont_log(fs_info, eb))
+	if (!tree_mod_need_log(fs_info, eb))
 		return 0;
 
-	/*
-	 * When we override something during the move, we log these removals.
-	 * This can only happen when we move towards the beginning of the
-	 * buffer, i.e. dst_slot < src_slot.
-	 */
-	for (i = 0; i + dst_slot < src_slot && i < nr_items; i++) {
-		ret = __tree_mod_log_insert_key(fs_info, eb, i + dst_slot,
-				MOD_LOG_KEY_REMOVE_WHILE_MOVING, GFP_NOFS);
-		BUG_ON(ret < 0);
-	}
+	tm_list = kzalloc(nr_items * sizeof(struct tree_mod_elem *), flags);
+	if (!tm_list)
+		return -ENOMEM;
 
 	tm = kzalloc(sizeof(*tm), flags);
-	if (!tm)
-		return -ENOMEM;
+	if (!tm) {
+		ret = -ENOMEM;
+		goto free_tms;
+	}
 
 	tm->index = eb->start >> PAGE_CACHE_SHIFT;
 	tm->slot = src_slot;
···
 	tm->move.nr_items = nr_items;
 	tm->op = MOD_LOG_MOVE_KEYS;
 
-	return __tree_mod_log_insert(fs_info, tm);
+	for (i = 0; i + dst_slot < src_slot && i < nr_items; i++) {
+		tm_list[i] = alloc_tree_mod_elem(eb, i + dst_slot,
+				MOD_LOG_KEY_REMOVE_WHILE_MOVING, flags);
+		if (!tm_list[i]) {
+			ret = -ENOMEM;
+			goto free_tms;
+		}
+	}
+
+	if (tree_mod_dont_log(fs_info, eb))
+		goto free_tms;
+	locked = 1;
+
+	/*
+	 * When we override something during the move, we log these removals.
+	 * This can only happen when we move towards the beginning of the
+	 * buffer, i.e. dst_slot < src_slot.
+	 */
+	for (i = 0; i + dst_slot < src_slot && i < nr_items; i++) {
+		ret = __tree_mod_log_insert(fs_info, tm_list[i]);
+		if (ret)
+			goto free_tms;
+	}
+
+	ret = __tree_mod_log_insert(fs_info, tm);
+	if (ret)
+		goto free_tms;
+	tree_mod_log_write_unlock(fs_info);
+	kfree(tm_list);
+
+	return 0;
+free_tms:
+	for (i = 0; i < nr_items; i++) {
+		if (tm_list[i] && !RB_EMPTY_NODE(&tm_list[i]->node))
+			rb_erase(&tm_list[i]->node, &fs_info->tree_mod_log);
+		kfree(tm_list[i]);
+	}
+	if (locked)
+		tree_mod_log_write_unlock(fs_info);
+	kfree(tm_list);
+	kfree(tm);
+
+	return ret;
 }
 
-static inline void
-__tree_mod_log_free_eb(struct btrfs_fs_info *fs_info, struct extent_buffer *eb)
+static inline int
+__tree_mod_log_free_eb(struct btrfs_fs_info *fs_info,
+		       struct tree_mod_elem **tm_list,
+		       int nritems)
 {
-	int i;
-	u32 nritems;
+	int i, j;
 	int ret;
 
-	if (btrfs_header_level(eb) == 0)
-		return;
-
-	nritems = btrfs_header_nritems(eb);
 	for (i = nritems - 1; i >= 0; i--) {
-		ret = __tree_mod_log_insert_key(fs_info, eb, i,
-				MOD_LOG_KEY_REMOVE_WHILE_FREEING, GFP_NOFS);
-		BUG_ON(ret < 0);
+		ret = __tree_mod_log_insert(fs_info, tm_list[i]);
+		if (ret) {
+			for (j = nritems - 1; j > i; j--)
+				rb_erase(&tm_list[j]->node,
+					 &fs_info->tree_mod_log);
+			return ret;
+		}
 	}
+
+	return 0;
 }
 
 static noinline int
···
 			 struct extent_buffer *new_root, gfp_t flags,
 			 int log_removal)
 {
-	struct tree_mod_elem *tm;
+	struct tree_mod_elem *tm = NULL;
+	struct tree_mod_elem **tm_list = NULL;
+	int nritems = 0;
+	int ret = 0;
+	int i;
 
-	if (tree_mod_dont_log(fs_info, NULL))
+	if (!tree_mod_need_log(fs_info, NULL))
 		return 0;
 
-	if (log_removal)
-		__tree_mod_log_free_eb(fs_info, old_root);
+	if (log_removal && btrfs_header_level(old_root) > 0) {
+		nritems = btrfs_header_nritems(old_root);
+		tm_list = kzalloc(nritems * sizeof(struct tree_mod_elem *),
+				  flags);
+		if (!tm_list) {
+			ret = -ENOMEM;
+			goto free_tms;
+		}
+		for (i = 0; i < nritems; i++) {
+			tm_list[i] = alloc_tree_mod_elem(old_root, i,
+			    MOD_LOG_KEY_REMOVE_WHILE_FREEING, flags);
+			if (!tm_list[i]) {
+				ret = -ENOMEM;
+				goto free_tms;
+			}
+		}
+	}
 
 	tm = kzalloc(sizeof(*tm), flags);
-	if (!tm)
-		return -ENOMEM;
+	if (!tm) {
+		ret = -ENOMEM;
+		goto free_tms;
+	}
 
 	tm->index = new_root->start >> PAGE_CACHE_SHIFT;
 	tm->old_root.logical = old_root->start;
···
 	tm->generation = btrfs_header_generation(old_root);
 	tm->op = MOD_LOG_ROOT_REPLACE;
 
-	return __tree_mod_log_insert(fs_info, tm);
+	if (tree_mod_dont_log(fs_info, NULL))
+		goto free_tms;
+
+	if (tm_list)
+		ret = __tree_mod_log_free_eb(fs_info, tm_list, nritems);
+	if (!ret)
+		ret = __tree_mod_log_insert(fs_info, tm);
+
+	tree_mod_log_write_unlock(fs_info);
+	if (ret)
+		goto free_tms;
+	kfree(tm_list);
+
+	return ret;
+
+free_tms:
+	if (tm_list) {
+		for (i = 0; i < nritems; i++)
+			kfree(tm_list[i]);
+		kfree(tm_list);
+	}
+	kfree(tm);
+
+	return ret;
 }
 
 static struct tree_mod_elem *
···
 	return __tree_mod_log_search(fs_info, start, min_seq, 0);
 }
 
-static noinline void
+static noinline int
 tree_mod_log_eb_copy(struct btrfs_fs_info *fs_info, struct extent_buffer *dst,
 		     struct extent_buffer *src, unsigned long dst_offset,
 		     unsigned long src_offset, int nr_items)
 {
-	int ret;
+	int ret = 0;
+	struct tree_mod_elem **tm_list = NULL;
+	struct tree_mod_elem **tm_list_add, **tm_list_rem;
 	int i;
+	int locked = 0;
 
-	if (tree_mod_dont_log(fs_info, NULL))
-		return;
+	if (!tree_mod_need_log(fs_info, NULL))
+		return 0;
 
 	if (btrfs_header_level(dst) == 0 && btrfs_header_level(src) == 0)
-		return;
+		return 0;
+
+	tm_list = kzalloc(nr_items * 2 * sizeof(struct tree_mod_elem *),
+			  GFP_NOFS);
+	if (!tm_list)
+		return -ENOMEM;
+
+	tm_list_add = tm_list;
+	tm_list_rem = tm_list + nr_items;
+	for (i = 0; i < nr_items; i++) {
+		tm_list_rem[i] = alloc_tree_mod_elem(src, i + src_offset,
+		    MOD_LOG_KEY_REMOVE, GFP_NOFS);
+		if (!tm_list_rem[i]) {
+			ret = -ENOMEM;
+			goto free_tms;
+		}
+
+		tm_list_add[i] = alloc_tree_mod_elem(dst, i + dst_offset,
+		    MOD_LOG_KEY_ADD, GFP_NOFS);
+		if (!tm_list_add[i]) {
+			ret = -ENOMEM;
+			goto free_tms;
+		}
+	}
+
+	if (tree_mod_dont_log(fs_info, NULL))
+		goto free_tms;
+	locked = 1;
 
 	for (i = 0; i < nr_items; i++) {
-		ret = __tree_mod_log_insert_key(fs_info, src,
-						i + src_offset,
-						MOD_LOG_KEY_REMOVE, GFP_NOFS);
-		BUG_ON(ret < 0);
-		ret = __tree_mod_log_insert_key(fs_info, dst,
-						     i + dst_offset,
-						     MOD_LOG_KEY_ADD,
-						     GFP_NOFS);
-		BUG_ON(ret < 0);
+		ret = __tree_mod_log_insert(fs_info, tm_list_rem[i]);
+		if (ret)
+			goto free_tms;
+		ret = __tree_mod_log_insert(fs_info, tm_list_add[i]);
+		if (ret)
+			goto free_tms;
 	}
+
+	tree_mod_log_write_unlock(fs_info);
+	kfree(tm_list);
+
+	return 0;
+
+free_tms:
+	for (i = 0; i < nr_items * 2; i++) {
+		if (tm_list[i] && !RB_EMPTY_NODE(&tm_list[i]->node))
+			rb_erase(&tm_list[i]->node, &fs_info->tree_mod_log);
+		kfree(tm_list[i]);
+	}
+	if (locked)
+		tree_mod_log_write_unlock(fs_info);
+	kfree(tm_list);
+
+	return ret;
 }
 
 static inline void
···
 	BUG_ON(ret < 0);
 }
 
-static noinline void
+static noinline int
 tree_mod_log_free_eb(struct btrfs_fs_info *fs_info, struct extent_buffer *eb)
 {
+	struct tree_mod_elem **tm_list = NULL;
+	int nritems = 0;
+	int i;
+	int ret = 0;
+
+	if (btrfs_header_level(eb) == 0)
+		return 0;
+
+	if (!tree_mod_need_log(fs_info, NULL))
+		return 0;
+
+	nritems = btrfs_header_nritems(eb);
+	tm_list = kzalloc(nritems * sizeof(struct tree_mod_elem *),
+			  GFP_NOFS);
+	if (!tm_list)
+		return -ENOMEM;
+
+	for (i = 0; i < nritems; i++) {
+		tm_list[i] = alloc_tree_mod_elem(eb, i,
+		    MOD_LOG_KEY_REMOVE_WHILE_FREEING, GFP_NOFS);
+		if (!tm_list[i]) {
+			ret = -ENOMEM;
+			goto free_tms;
+		}
+	}
+
 	if (tree_mod_dont_log(fs_info, eb))
-		return;
-	__tree_mod_log_free_eb(fs_info, eb);
+		goto free_tms;
+
+	ret = __tree_mod_log_free_eb(fs_info, tm_list, nritems);
+	tree_mod_log_write_unlock(fs_info);
+	if (ret)
+		goto free_tms;
+	kfree(tm_list);
+
+	return 0;
+
+free_tms:
+	for (i = 0; i < nritems; i++)
+		kfree(tm_list[i]);
+	kfree(tm_list);
+
+	return ret;
 }
 
 static noinline void
···
 		btrfs_set_node_ptr_generation(parent, parent_slot,
 					      trans->transid);
 		btrfs_mark_buffer_dirty(parent);
-		if (last_ref)
-			tree_mod_log_free_eb(root->fs_info, buf);
+		if (last_ref) {
+			ret = tree_mod_log_free_eb(root->fs_info, buf);
+			if (ret) {
+				btrfs_abort_transaction(trans, root, ret);
+				return ret;
+			}
+		}
 		btrfs_free_tree_block(trans, root, buf, parent_start,
 				      last_ref);
 	}
···
 	} else
 		push_items = min(src_nritems - 8, push_items);
 
-	tree_mod_log_eb_copy(root->fs_info, dst, src, dst_nritems, 0,
-			     push_items);
+	ret = tree_mod_log_eb_copy(root->fs_info, dst, src, dst_nritems, 0,
+				   push_items);
+	if (ret) {
+		btrfs_abort_transaction(trans, root, ret);
+		return ret;
+	}
 	copy_extent_buffer(dst, src,
 			   btrfs_node_key_ptr_offset(dst_nritems),
 			   btrfs_node_key_ptr_offset(0),
···
 			      (dst_nritems) *
 			      sizeof(struct btrfs_key_ptr));
 
-	tree_mod_log_eb_copy(root->fs_info, dst, src, 0,
-			     src_nritems - push_items, push_items);
+	ret = tree_mod_log_eb_copy(root->fs_info, dst, src, 0,
+				   src_nritems - push_items, push_items);
+	if (ret) {
+		btrfs_abort_transaction(trans, root, ret);
+		return ret;
+	}
 	copy_extent_buffer(dst, src,
 			   btrfs_node_key_ptr_offset(0),
 			   btrfs_node_key_ptr_offset(src_nritems - push_items),
···
 			    btrfs_header_chunk_tree_uuid(split),
 			    BTRFS_UUID_SIZE);
 
-	tree_mod_log_eb_copy(root->fs_info, split, c, 0, mid, c_nritems - mid);
+	ret = tree_mod_log_eb_copy(root->fs_info, split, c, 0,
+				   mid, c_nritems - mid);
+	if (ret) {
+		btrfs_abort_transaction(trans, root, ret);
+		return ret;
+	}
 	copy_extent_buffer(split, c,
 			   btrfs_node_key_ptr_offset(0),
 			   btrfs_node_key_ptr_offset(mid),