Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'vfs-6.11-rc1.fixes.2' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs

Pull vfs fixes from Christian Brauner:
"VFS:

- The new 64bit mount ids start after the old mount id, i.e., at the
first non-32 bit value. However, we started counting one id too
late and thus lost 4294967296 as the first valid id. Fix that.

- Update a few comments on some vfs_*() creation helpers.

- Move copying of the xattr name out from the locks required to start
a filesystem write.

- Extend the filelock lock UAF fix to the compat code as well.

- Now that we added the ability to look up an inode under RCU it's
possible that lockless hash lookup can find and lock an inode after
it gets I_FREEING set. It then waits until inode teardown in
evict() is finished.

The flag however is still set after evict() has woken up all
waiters. If the waiting side takes the inode lock late enough, after
hash removal and the wakeup have already happened, the waiting
thread will never be woken.

Before RCU based lookup this was synchronized via the
inode_hash_lock. But since unhashing requires the inode lock as
well we can check whether the inode is unhashed while holding inode
lock even without holding inode_hash_lock.

pidfd:

- The nsproxy structure contains nearly all of the namespaces
associated with a task. When a namespace type isn't supported
nsproxy might contain a NULL pointer or always point to the initial
namespace type. The logic isn't consistent. So when deriving
namespace fds we need to ensure that the namespace type is
supported.

First, so that we don't risk dereferencing NULL pointers. The
correct bigger fix would be to change all namespaces to always set
a valid namespace pointer in struct nsproxy independent of whether
or not it is compiled in. But that requires quite a few changes.

Second, so that we don't allow deriving namespace fds when the
namespace type doesn't exist and thus couldn't be derived via
/proc/self/ns/ either.

- Add missing selftests for the new pidfd ioctls to derive namespace
fds. This simply extends the already existing testsuite.

netfs:

- Fix debug logging and fix kconfig variable name so it actually
works.

- Fix writeback that goes both to the server and cache. The streams
are only activated once a subreq is added. When a server write
happens the subreq doesn't need to have finished by the time the
cache write is started. If the server write has already finished by
the time the cache write is about to start the cache write will
operate on a folio that might already have been reused. Fix this by
preactivating the cache write.

- Limit cachefiles subreq size for cache writes to MAX_RW_COUNT"

* tag 'vfs-6.11-rc1.fixes.2' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs:
inode: clarify what's locked
vfs: Fix potential circular locking through setxattr() and removexattr()
filelock: Fix fcntl/close race recovery compat path
fs: use all available ids
cachefiles: Set the max subreq size for cache writes to MAX_RW_COUNT
netfs: Fix writeback that needs to go to both server and cache
pidfs: add selftests for new namespace ioctls
pidfs: handle kernels without namespaces cleanly
pidfs: when time ns disabled add check for ioctl
vfs: correct the comments of vfs_*() helpers
vfs: handle __wait_on_freeing_inode() and evict() race
netfs: Rename CONFIG_FSCACHE_DEBUG to CONFIG_NETFS_DEBUG
netfs: Revert "netfs: Switch debug logging to pr_debug()"

+488 -213
+1 -1
fs/cachefiles/io.c
···
 
 	_enter("W=%x[%x] %llx", wreq->debug_id, subreq->debug_index, subreq->start);
 
-	subreq->max_len = ULONG_MAX;
+	subreq->max_len = MAX_RW_COUNT;
 	subreq->max_nr_segs = BIO_MAX_VECS;
 
 	if (!cachefiles_cres_file(cres)) {
+30 -10
fs/inode.c
···
 
 	remove_inode_hash(inode);
 
+	/*
+	 * Wake up waiters in __wait_on_freeing_inode().
+	 *
+	 * Lockless hash lookup may end up finding the inode before we removed
+	 * it above, but only lock it *after* we are done with the wakeup below.
+	 * In this case the potential waiter cannot safely block.
+	 *
+	 * The inode being unhashed after the call to remove_inode_hash() is
+	 * used as an indicator whether blocking on it is safe.
+	 */
 	spin_lock(&inode->i_lock);
 	wake_up_bit(&inode->i_state, __I_NEW);
 	BUG_ON(inode->i_state != (I_FREEING | I_CLEAR));
···
 	return freed;
 }
 
-static void __wait_on_freeing_inode(struct inode *inode, bool locked);
+static void __wait_on_freeing_inode(struct inode *inode, bool is_inode_hash_locked);
 /*
  * Called with the inode lock held.
  */
 static struct inode *find_inode(struct super_block *sb,
 				struct hlist_head *head,
 				int (*test)(struct inode *, void *),
-				void *data, bool locked)
+				void *data, bool is_inode_hash_locked)
 {
 	struct inode *inode = NULL;
 
-	if (locked)
+	if (is_inode_hash_locked)
 		lockdep_assert_held(&inode_hash_lock);
 	else
 		lockdep_assert_not_held(&inode_hash_lock);
···
 			continue;
 		spin_lock(&inode->i_lock);
 		if (inode->i_state & (I_FREEING|I_WILL_FREE)) {
-			__wait_on_freeing_inode(inode, locked);
+			__wait_on_freeing_inode(inode, is_inode_hash_locked);
 			goto repeat;
 		}
 		if (unlikely(inode->i_state & I_CREATING)) {
···
  */
 static struct inode *find_inode_fast(struct super_block *sb,
 				struct hlist_head *head, unsigned long ino,
-				bool locked)
+				bool is_inode_hash_locked)
 {
 	struct inode *inode = NULL;
 
-	if (locked)
+	if (is_inode_hash_locked)
 		lockdep_assert_held(&inode_hash_lock);
 	else
 		lockdep_assert_not_held(&inode_hash_lock);
···
 			continue;
 		spin_lock(&inode->i_lock);
 		if (inode->i_state & (I_FREEING|I_WILL_FREE)) {
-			__wait_on_freeing_inode(inode, locked);
+			__wait_on_freeing_inode(inode, is_inode_hash_locked);
 			goto repeat;
 		}
 		if (unlikely(inode->i_state & I_CREATING)) {
···
  * wake_up_bit(&inode->i_state, __I_NEW) after removing from the hash list
  * will DTRT.
  */
-static void __wait_on_freeing_inode(struct inode *inode, bool locked)
+static void __wait_on_freeing_inode(struct inode *inode, bool is_inode_hash_locked)
 {
 	wait_queue_head_t *wq;
 	DEFINE_WAIT_BIT(wait, &inode->i_state, __I_NEW);
+
+	/*
+	 * Handle racing against evict(), see that routine for more details.
+	 */
+	if (unlikely(inode_unhashed(inode))) {
+		WARN_ON(is_inode_hash_locked);
+		spin_unlock(&inode->i_lock);
+		return;
+	}
+
 	wq = bit_waitqueue(&inode->i_state, __I_NEW);
 	prepare_to_wait(wq, &wait.wq_entry, TASK_UNINTERRUPTIBLE);
 	spin_unlock(&inode->i_lock);
 	rcu_read_unlock();
-	if (locked)
+	if (is_inode_hash_locked)
 		spin_unlock(&inode_hash_lock);
 	schedule();
 	finish_wait(wq, &wait.wq_entry);
-	if (locked)
+	if (is_inode_hash_locked)
 		spin_lock(&inode_hash_lock);
 	rcu_read_lock();
 }
+4 -5
fs/locks.c
···
 	error = do_lock_file_wait(filp, cmd, file_lock);
 
 	/*
-	 * Attempt to detect a close/fcntl race and recover by releasing the
-	 * lock that was just acquired. There is no need to do that when we're
+	 * Detect close/fcntl races and recover by zapping all POSIX locks
+	 * associated with this file and our files_struct, just like on
+	 * filp_flush(). There is no need to do that when we're
 	 * unlocking though, or for OFD locks.
 	 */
 	if (!error && file_lock->c.flc_type != F_UNLCK &&
···
 		f = files_lookup_fd_locked(files, fd);
 		spin_unlock(&files->file_lock);
 		if (f != filp) {
-			file_lock->c.flc_type = F_UNLCK;
-			error = do_lock_file_wait(filp, cmd, file_lock);
-			WARN_ON_ONCE(error);
+			locks_remove_posix(filp, files);
 			error = -EBADF;
 		}
 	}
+13 -13
fs/namei.c
···
 /**
  * vfs_create - create new file
  * @idmap:	idmap of the mount the inode was found from
- * @dir:	inode of @dentry
- * @dentry:	pointer to dentry of the base directory
- * @mode:	mode of the new file
+ * @dir:	inode of the parent directory
+ * @dentry:	dentry of the child file
+ * @mode:	mode of the child file
  * @want_excl:	whether the file must not yet exist
  *
  * Create a new file.
···
 /**
  * vfs_mknod - create device node or file
  * @idmap:	idmap of the mount the inode was found from
- * @dir:	inode of @dentry
- * @dentry:	pointer to dentry of the base directory
- * @mode:	mode of the new device node or file
+ * @dir:	inode of the parent directory
+ * @dentry:	dentry of the child device node
+ * @mode:	mode of the child device node
  * @dev:	device number of device to create
  *
  * Create a device node or file.
···
 /**
  * vfs_mkdir - create directory
  * @idmap:	idmap of the mount the inode was found from
- * @dir:	inode of @dentry
- * @dentry:	pointer to dentry of the base directory
- * @mode:	mode of the new directory
+ * @dir:	inode of the parent directory
+ * @dentry:	dentry of the child directory
+ * @mode:	mode of the child directory
  *
  * Create a directory.
  *
···
 /**
  * vfs_rmdir - remove directory
  * @idmap:	idmap of the mount the inode was found from
- * @dir:	inode of @dentry
- * @dentry:	pointer to dentry of the base directory
+ * @dir:	inode of the parent directory
+ * @dentry:	dentry of the child directory
  *
  * Remove a directory.
  *
···
 /**
  * vfs_symlink - create symlink
  * @idmap:	idmap of the mount the inode was found from
- * @dir:	inode of @dentry
- * @dentry:	pointer to dentry of the base directory
+ * @dir:	inode of the parent directory
+ * @dentry:	dentry of the child symlink file
  * @oldname:	name of the file to link to
  *
  * Create a symlink.
+1 -1
fs/namespace.c
···
 static DEFINE_IDA(mnt_group_ida);
 
 /* Don't allow confusion with old 32bit mount ID */
-#define MNT_UNIQUE_ID_OFFSET (1ULL << 32)
+#define MNT_UNIQUE_ID_OFFSET ((1ULL << 32) - 1)
 static atomic64_t mnt_id_ctr = ATOMIC64_INIT(MNT_UNIQUE_ID_OFFSET);
 
 static struct hlist_head *mount_hashtable __ro_after_init;
+8 -10
fs/netfs/Kconfig
···
 	  between CPUs. On the other hand, the stats are very useful for
 	  debugging purposes. Saying 'Y' here is recommended.
 
+config NETFS_DEBUG
+	bool "Enable dynamic debugging netfslib and FS-Cache"
+	depends on NETFS
+	help
+	  This permits debugging to be dynamically enabled in the local caching
+	  management module.  If this is set, the debugging output may be
+	  enabled by setting bits in /sys/module/netfs/parameters/debug.
+
 config FSCACHE
 	bool "General filesystem local caching manager"
 	depends on NETFS_SUPPORT
···
 	  multi-CPU system these may be on cachelines that keep bouncing
 	  between CPUs.  On the other hand, the stats are very useful for
 	  debugging purposes.  Saying 'Y' here is recommended.
-
-	  See Documentation/filesystems/caching/fscache.rst for more information.
-
-config FSCACHE_DEBUG
-	bool "Debug FS-Cache"
-	depends on FSCACHE
-	help
-	  This permits debugging to be dynamically enabled in the local caching
-	  management module.  If this is set, the debugging output may be
-	  enabled by setting bits in /sys/modules/fscache/parameter/debug.
 
 	  See Documentation/filesystems/caching/fscache.rst for more information.
+7 -7
fs/netfs/buffered_read.c
···
 	if (!test_bit(NETFS_RREQ_DONT_UNLOCK_FOLIOS, &rreq->flags)) {
 		if (folio->index == rreq->no_unlock_folio &&
 		    test_bit(NETFS_RREQ_NO_UNLOCK_FOLIO, &rreq->flags))
-			kdebug("no unlock");
+			_debug("no unlock");
 		else
 			folio_unlock(folio);
 	}
···
 	struct netfs_inode *ctx = netfs_inode(ractl->mapping->host);
 	int ret;
 
-	kenter("%lx,%x", readahead_index(ractl), readahead_count(ractl));
+	_enter("%lx,%x", readahead_index(ractl), readahead_count(ractl));
 
 	if (readahead_count(ractl) == 0)
 		return;
···
 	struct folio *sink = NULL;
 	int ret;
 
-	kenter("%lx", folio->index);
+	_enter("%lx", folio->index);
 
 	rreq = netfs_alloc_request(mapping, file,
 				   folio_pos(folio), folio_size(folio),
···
 
 have_folio:
 	*_folio = folio;
-	kleave(" = 0");
+	_leave(" = 0");
 	return 0;
 
 error_put:
···
 		folio_unlock(folio);
 		folio_put(folio);
 	}
-	kleave(" = %d", ret);
+	_leave(" = %d", ret);
 	return ret;
 }
 EXPORT_SYMBOL(netfs_write_begin);
···
 	size_t flen = folio_size(folio);
 	int ret;
 
-	kenter("%zx @%llx", flen, start);
+	_enter("%zx @%llx", flen, start);
 
 	ret = -ENOMEM;
 
···
 error_put:
 	netfs_put_request(rreq, false, netfs_rreq_trace_put_discard);
 error:
-	kleave(" = %d", ret);
+	_leave(" = %d", ret);
 	return ret;
 }
+6 -6
fs/netfs/buffered_write.c
···
 	struct netfs_group *group = netfs_folio_group(folio);
 	loff_t pos = folio_pos(folio);
 
-	kenter("");
+	_enter("");
 
 	if (group != netfs_group && group != NETFS_FOLIO_COPY_TO_CACHE)
 		return NETFS_FLUSH_CONTENT;
···
 		 */
 		howto = netfs_how_to_modify(ctx, file, folio, netfs_group,
 					    flen, offset, part, maybe_trouble);
-		kdebug("howto %u", howto);
+		_debug("howto %u", howto);
 		switch (howto) {
 		case NETFS_JUST_PREFETCH:
 			ret = netfs_prefetch_for_write(file, folio, offset, part);
 			if (ret < 0) {
-				kdebug("prefetch = %zd", ret);
+				_debug("prefetch = %zd", ret);
 				goto error_folio_unlock;
 			}
 			break;
···
 	}
 
 	iocb->ki_pos += written;
-	kleave(" = %zd [%zd]", written, ret);
+	_leave(" = %zd [%zd]", written, ret);
 	return written ? written : ret;
 
 error_folio_unlock:
···
 	struct netfs_inode *ictx = netfs_inode(inode);
 	ssize_t ret;
 
-	kenter("%llx,%zx,%llx", iocb->ki_pos, iov_iter_count(from), i_size_read(inode));
+	_enter("%llx,%zx,%llx", iocb->ki_pos, iov_iter_count(from), i_size_read(inode));
 
 	if (!iov_iter_count(from))
 		return 0;
···
 	vm_fault_t ret = VM_FAULT_RETRY;
 	int err;
 
-	kenter("%lx", folio->index);
+	_enter("%lx", folio->index);
 
 	sb_start_pagefault(inode->i_sb);
 
+1 -1
fs/netfs/direct_read.c
···
 	size_t orig_count = iov_iter_count(iter);
 	bool async = !is_sync_kiocb(iocb);
 
-	kenter("");
+	_enter("");
 
 	if (!orig_count)
 		return 0; /* Don't update atime */
+4 -4
fs/netfs/direct_write.c
···
 	size_t len = iov_iter_count(iter);
 	bool async = !is_sync_kiocb(iocb);
 
-	kenter("");
+	_enter("");
 
 	/* We're going to need a bounce buffer if what we transmit is going to
 	 * be different in some way to the source buffer, e.g. because it gets
···
 	 */
 	// TODO
 
-	kdebug("uw %llx-%llx", start, end);
+	_debug("uw %llx-%llx", start, end);
 
 	wreq = netfs_create_write_req(iocb->ki_filp->f_mapping, iocb->ki_filp, start,
 				      iocb->ki_flags & IOCB_DIRECT ?
···
 	wreq->cleanup = netfs_cleanup_dio_write;
 	ret = netfs_unbuffered_write(wreq, is_sync_kiocb(iocb), wreq->len);
 	if (ret < 0) {
-		kdebug("begin = %zd", ret);
+		_debug("begin = %zd", ret);
 		goto out;
 	}
 
···
 	loff_t pos = iocb->ki_pos;
 	unsigned long long end = pos + iov_iter_count(from) - 1;
 
-	kenter("%llx,%zx,%llx", pos, iov_iter_count(from), i_size_read(inode));
+	_enter("%llx,%zx,%llx", pos, iov_iter_count(from), i_size_read(inode));
 
 	if (!iov_iter_count(from))
 		return 0;
+2 -2
fs/netfs/fscache_cache.c
···
 {
 	int n_accesses;
 
-	kenter("{%s,%s}", ops->name, cache->name);
+	_enter("{%s,%s}", ops->name, cache->name);
 
 	BUG_ON(fscache_cache_state(cache) != FSCACHE_CACHE_IS_PREPARING);
 
···
 
 	up_write(&fscache_addremove_sem);
 	pr_notice("Cache \"%s\" added (type %s)\n", cache->name, ops->name);
-	kleave(" = 0 [%s]", cache->name);
+	_leave(" = 0 [%s]", cache->name);
 	return 0;
 }
 EXPORT_SYMBOL(fscache_add_cache);
+14 -14
fs/netfs/fscache_cookie.c
···
 {
 	struct fscache_cookie *cookie;
 
-	kenter("V=%x", volume->debug_id);
+	_enter("V=%x", volume->debug_id);
 
 	if (!index_key || !index_key_len || index_key_len > 255 || aux_data_len > 255)
 		return NULL;
···
 
 	trace_fscache_acquire(cookie);
 	fscache_stat(&fscache_n_acquires_ok);
-	kleave(" = c=%08x", cookie->debug_id);
+	_leave(" = c=%08x", cookie->debug_id);
 	return cookie;
 }
 EXPORT_SYMBOL(__fscache_acquire_cookie);
···
 	enum fscache_access_trace trace = fscache_access_lookup_cookie_end_failed;
 	bool need_withdraw = false;
 
-	kenter("");
+	_enter("");
 
 	if (!cookie->volume->cache_priv) {
 		fscache_create_volume(cookie->volume, true);
···
 		if (cookie->state != FSCACHE_COOKIE_STATE_FAILED)
 			fscache_set_cookie_state(cookie, FSCACHE_COOKIE_STATE_QUIESCENT);
 		need_withdraw = true;
-		kleave(" [fail]");
+		_leave(" [fail]");
 		goto out;
 	}
 
···
 	bool queue = false;
 	int n_active;
 
-	kenter("c=%08x", cookie->debug_id);
+	_enter("c=%08x", cookie->debug_id);
 
 	if (WARN(test_bit(FSCACHE_COOKIE_RELINQUISHED, &cookie->flags),
 		 "Trying to use relinquished cookie\n"))
···
 	spin_unlock(&cookie->lock);
 	if (queue)
 		fscache_queue_cookie(cookie, fscache_cookie_get_use_work);
-	kleave("");
+	_leave("");
 }
 EXPORT_SYMBOL(__fscache_use_cookie);
 
···
 	enum fscache_cookie_state state;
 	bool wake = false;
 
-	kenter("c=%x", cookie->debug_id);
+	_enter("c=%x", cookie->debug_id);
 
 again:
 	spin_lock(&cookie->lock);
···
 	spin_unlock(&cookie->lock);
 	if (wake)
 		wake_up_cookie_state(cookie);
-	kleave("");
+	_leave("");
 }
 
 static void fscache_cookie_worker(struct work_struct *work)
···
 	set_bit(FSCACHE_COOKIE_DO_LRU_DISCARD, &cookie->flags);
 	spin_unlock(&cookie->lock);
 	fscache_stat(&fscache_n_cookies_lru_expired);
-	kdebug("lru c=%x", cookie->debug_id);
+	_debug("lru c=%x", cookie->debug_id);
 	__fscache_withdraw_cookie(cookie);
 }
 
···
 	if (retire)
 		fscache_stat(&fscache_n_relinquishes_retire);
 
-	kenter("c=%08x{%d},%d",
+	_enter("c=%08x{%d},%d",
 	       cookie->debug_id, atomic_read(&cookie->n_active), retire);
 
 	if (WARN(test_and_set_bit(FSCACHE_COOKIE_RELINQUISHED, &cookie->flags),
···
 {
 	bool is_caching;
 
-	kenter("c=%x", cookie->debug_id);
+	_enter("c=%x", cookie->debug_id);
 
 	fscache_stat(&fscache_n_invalidates);
 
···
 	case FSCACHE_COOKIE_STATE_INVALIDATING: /* is_still_valid will catch it */
 	default:
 		spin_unlock(&cookie->lock);
-		kleave(" [no %u]", cookie->state);
+		_leave(" [no %u]", cookie->state);
 		return;
 
 	case FSCACHE_COOKIE_STATE_LOOKING_UP:
···
 		fallthrough;
 	case FSCACHE_COOKIE_STATE_CREATING:
 		spin_unlock(&cookie->lock);
-		kleave(" [look %x]", cookie->inval_counter);
+		_leave(" [look %x]", cookie->inval_counter);
 		return;
 
 	case FSCACHE_COOKIE_STATE_ACTIVE:
···
 
 	if (is_caching)
 		fscache_queue_cookie(cookie, fscache_cookie_get_inval_work);
-	kleave(" [inv]");
+	_leave(" [inv]");
 	return;
 	}
 }
+6 -6
fs/netfs/fscache_io.c
···
 
 again:
 	if (!fscache_cache_is_live(cookie->volume->cache)) {
-		kleave(" [broken]");
+		_leave(" [broken]");
 		return false;
 	}
 
 	state = fscache_cookie_state(cookie);
-	kenter("c=%08x{%u},%x", cookie->debug_id, state, want_state);
+	_enter("c=%08x{%u},%x", cookie->debug_id, state, want_state);
 
 	switch (state) {
 	case FSCACHE_COOKIE_STATE_CREATING:
···
 	case FSCACHE_COOKIE_STATE_DROPPED:
 	case FSCACHE_COOKIE_STATE_RELINQUISHING:
 	default:
-		kleave(" [not live]");
+		_leave(" [not live]");
 		return false;
 	}
 
···
 	spin_lock(&cookie->lock);
 
 	state = fscache_cookie_state(cookie);
-	kenter("c=%08x{%u},%x", cookie->debug_id, state, want_state);
+	_enter("c=%08x{%u},%x", cookie->debug_id, state, want_state);
 
 	switch (state) {
 	case FSCACHE_COOKIE_STATE_LOOKING_UP:
···
 		cres->cache_priv = NULL;
 		cres->ops = NULL;
 		fscache_end_cookie_access(cookie, fscache_access_io_not_live);
-		kleave(" = -ENOBUFS");
+		_leave(" = -ENOBUFS");
 		return -ENOBUFS;
 	}
 
···
 	if (len == 0)
 		goto abandon;
 
-	kenter("%llx,%zx", start, len);
+	_enter("%llx,%zx", start, len);
 
 	wreq = kzalloc(sizeof(struct fscache_write_request), GFP_NOFS);
 	if (!wreq)
+1 -1
fs/netfs/fscache_main.c
···
  */
 void __exit fscache_exit(void)
 {
-	kenter("");
+	_enter("");
 
 	kmem_cache_destroy(fscache_cookie_jar);
 	fscache_proc_cleanup();
+2 -2
fs/netfs/fscache_volume.c
···
 	fscache_see_volume(volume, fscache_volume_new_acquire);
 	fscache_stat(&fscache_n_volumes);
 	up_write(&fscache_addremove_sem);
-	kleave(" = v=%x", volume->debug_id);
+	_leave(" = v=%x", volume->debug_id);
 	return volume;
 
 err_vol:
···
 {
 	int n_accesses;
 
-	kdebug("withdraw V=%x", volume->debug_id);
+	_debug("withdraw V=%x", volume->debug_id);
 
 	/* Allow wakeups on dec-to-0 */
 	n_accesses = atomic_dec_return(&volume->n_accesses);
+32 -1
fs/netfs/internal.h
···
 /*
  * main.c
  */
+extern unsigned int netfs_debug;
 extern struct list_head netfs_io_requests;
 extern spinlock_t netfs_proc_lock;
 extern mempool_t netfs_request_pool;
···
  * debug tracing
  */
 #define dbgprintk(FMT, ...) \
-	pr_debug("[%-6.6s] "FMT"\n", current->comm, ##__VA_ARGS__)
+	printk("[%-6.6s] "FMT"\n", current->comm, ##__VA_ARGS__)
 
 #define kenter(FMT, ...) dbgprintk("==> %s("FMT")", __func__, ##__VA_ARGS__)
 #define kleave(FMT, ...) dbgprintk("<== %s()"FMT"", __func__, ##__VA_ARGS__)
 #define kdebug(FMT, ...) dbgprintk(FMT, ##__VA_ARGS__)
+
+#ifdef __KDEBUG
+#define _enter(FMT, ...) kenter(FMT, ##__VA_ARGS__)
+#define _leave(FMT, ...) kleave(FMT, ##__VA_ARGS__)
+#define _debug(FMT, ...) kdebug(FMT, ##__VA_ARGS__)
+
+#elif defined(CONFIG_NETFS_DEBUG)
+#define _enter(FMT, ...)			\
+do {						\
+	if (netfs_debug)			\
+		kenter(FMT, ##__VA_ARGS__);	\
+} while (0)
+
+#define _leave(FMT, ...)			\
+do {						\
+	if (netfs_debug)			\
+		kleave(FMT, ##__VA_ARGS__);	\
+} while (0)
+
+#define _debug(FMT, ...)			\
+do {						\
+	if (netfs_debug)			\
+		kdebug(FMT, ##__VA_ARGS__);	\
+} while (0)
+
+#else
+#define _enter(FMT, ...) no_printk("==> %s("FMT")", __func__, ##__VA_ARGS__)
+#define _leave(FMT, ...) no_printk("<== %s()"FMT"", __func__, ##__VA_ARGS__)
+#define _debug(FMT, ...) no_printk(FMT, ##__VA_ARGS__)
+#endif
 
 /*
  * assertions
+6 -6
fs/netfs/io.c
···
 	if (count == remaining)
 		return;
 
-	kdebug("R=%08x[%u] ITER RESUB-MISMATCH %zx != %zx-%zx-%llx %x\n",
+	_debug("R=%08x[%u] ITER RESUB-MISMATCH %zx != %zx-%zx-%llx %x\n",
 	       rreq->debug_id, subreq->debug_index,
 	       iov_iter_count(&subreq->io_iter), subreq->transferred,
 	       subreq->len, rreq->i_size,
···
 	struct netfs_io_request *rreq = subreq->rreq;
 	int u;
 
-	kenter("R=%x[%x]{%llx,%lx},%zd",
+	_enter("R=%x[%x]{%llx,%lx},%zd",
 	       rreq->debug_id, subreq->debug_index,
 	       subreq->start, subreq->flags, transferred_or_error);
 
···
 	struct netfs_inode *ictx = netfs_inode(rreq->inode);
 	size_t lsize;
 
-	kenter("%llx-%llx,%llx", subreq->start, subreq->start + subreq->len, rreq->i_size);
+	_enter("%llx-%llx,%llx", subreq->start, subreq->start + subreq->len, rreq->i_size);
 
 	if (rreq->origin != NETFS_DIO_READ) {
 		source = netfs_cache_prepare_read(subreq, rreq->i_size);
···
 	subreq->start = rreq->start + rreq->submitted;
 	subreq->len = io_iter->count;
 
-	kdebug("slice %llx,%zx,%llx", subreq->start, subreq->len, rreq->submitted);
+	_debug("slice %llx,%zx,%llx", subreq->start, subreq->len, rreq->submitted);
 	list_add_tail(&subreq->rreq_link, &rreq->subrequests);
 
 	/* Call out to the cache to find out what it can do with the remaining
···
 	struct iov_iter io_iter;
 	int ret;
 
-	kenter("R=%x %llx-%llx",
+	_enter("R=%x %llx-%llx",
 	       rreq->debug_id, rreq->start, rreq->start + rreq->len - 1);
 
 	if (rreq->len == 0) {
···
 	atomic_set(&rreq->nr_outstanding, 1);
 	io_iter = rreq->io_iter;
 	do {
-		kdebug("submit %llx + %llx >= %llx",
+		_debug("submit %llx + %llx >= %llx",
 		       rreq->start, rreq->submitted, rreq->i_size);
 		if (rreq->origin == NETFS_DIO_READ &&
 		    rreq->start + rreq->submitted >= rreq->i_size)
+4
fs/netfs/main.c
···
 
 EXPORT_TRACEPOINT_SYMBOL(netfs_sreq);
 
+unsigned netfs_debug;
+module_param_named(debug, netfs_debug, uint, S_IWUSR | S_IRUGO);
+MODULE_PARM_DESC(netfs_debug, "Netfs support debugging mask");
+
 static struct kmem_cache *netfs_request_slab;
 static struct kmem_cache *netfs_subrequest_slab;
 mempool_t netfs_request_pool;
+2 -2
fs/netfs/misc.c
···
 	struct fscache_cookie *cookie = netfs_i_cookie(ictx);
 	bool need_use = false;
 
-	kenter("");
+	_enter("");
 
 	if (!filemap_dirty_folio(mapping, folio))
 		return false;
···
 	struct netfs_folio *finfo;
 	size_t flen = folio_size(folio);
 
-	kenter("{%lx},%zx,%zx", folio->index, offset, length);
+	_enter("{%lx},%zx,%zx", folio->index, offset, length);
 
 	if (!folio_test_private(folio))
 		return;
+8 -8
fs/netfs/write_collect.c
···
 {
 	struct list_head *next;
 
-	kenter("R=%x[%x:]", wreq->debug_id, stream->stream_nr);
+	_enter("R=%x[%x:]", wreq->debug_id, stream->stream_nr);
 
 	if (list_empty(&stream->subrequests))
 		return;
···
 	unsigned int notes;
 	int s;
 
-	kenter("%llx-%llx", wreq->start, wreq->start + wreq->len);
+	_enter("%llx-%llx", wreq->start, wreq->start + wreq->len);
 	trace_netfs_collect(wreq);
 	trace_netfs_rreq(wreq, netfs_rreq_trace_collect);
 
···
 		front = stream->front;
 		while (front) {
 			trace_netfs_collect_sreq(wreq, front);
-			//kdebug("sreq [%x] %llx %zx/%zx",
+			//_debug("sreq [%x] %llx %zx/%zx",
 			//       front->debug_index, front->start, front->transferred, front->len);
 
 			/* Stall if there may be a discontinuity. */
···
 out:
 	netfs_put_group_many(wreq->group, wreq->nr_group_rel);
 	wreq->nr_group_rel = 0;
-	kleave(" = %x", notes);
+	_leave(" = %x", notes);
 	return;
 
 need_retry:
···
 	 * that any partially completed op will have had any wholly transferred
 	 * folios removed from it.
 	 */
-	kdebug("retry");
+	_debug("retry");
 	netfs_retry_writes(wreq);
 	goto out;
 }
···
 	size_t transferred;
 	int s;
 
-	kenter("R=%x", wreq->debug_id);
+	_enter("R=%x", wreq->debug_id);
 
 	netfs_see_request(wreq, netfs_rreq_trace_see_work);
 	if (!test_bit(NETFS_RREQ_IN_PROGRESS, &wreq->flags)) {
···
 	if (wreq->origin == NETFS_DIO_WRITE)
 		inode_dio_end(wreq->inode);
 
-	kdebug("finished");
+	_debug("finished");
 	trace_netfs_rreq(wreq, netfs_rreq_trace_wake_ip);
 	clear_bit_unlock(NETFS_RREQ_IN_PROGRESS, &wreq->flags);
 	wake_up_bit(&wreq->flags, NETFS_RREQ_IN_PROGRESS);
···
 	struct netfs_io_request *wreq = subreq->rreq;
 	struct netfs_io_stream *stream = &wreq->io_streams[subreq->stream_nr];
 
-	kenter("%x[%x] %zd", wreq->debug_id, subreq->debug_index, transferred_or_error);
+	_enter("%x[%x] %zd", wreq->debug_id, subreq->debug_index, transferred_or_error);
 
 	switch (subreq->source) {
 	case NETFS_UPLOAD_TO_SERVER:
+19 -18
fs/netfs/write_issue.c
···
 	if (IS_ERR(wreq))
 		return wreq;
 
-	kenter("R=%x", wreq->debug_id);
+	_enter("R=%x", wreq->debug_id);
 
 	ictx = netfs_inode(wreq->inode);
 	if (test_bit(NETFS_RREQ_WRITE_TO_CACHE, &wreq->flags))
···
 	wreq->io_streams[1].transferred = LONG_MAX;
 	if (fscache_resources_valid(&wreq->cache_resources)) {
 		wreq->io_streams[1].avail = true;
+		wreq->io_streams[1].active = true;
 		wreq->io_streams[1].prepare_write = wreq->cache_resources.ops->prepare_write_subreq;
 		wreq->io_streams[1].issue_write = wreq->cache_resources.ops->issue_write;
 	}
···
 	subreq->max_nr_segs = INT_MAX;
 	subreq->stream_nr = stream->stream_nr;
 
-	kenter("R=%x[%x]", wreq->debug_id, subreq->debug_index);
+	_enter("R=%x[%x]", wreq->debug_id, subreq->debug_index);
 
 	trace_netfs_sreq_ref(wreq->debug_id, subreq->debug_index,
 			     refcount_read(&subreq->ref),
···
 {
 	struct netfs_io_request *wreq = subreq->rreq;
 
-	kenter("R=%x[%x],%zx", wreq->debug_id, subreq->debug_index, subreq->len);
+	_enter("R=%x[%x],%zx", wreq->debug_id, subreq->debug_index, subreq->len);
 
 	if (test_bit(NETFS_SREQ_FAILED, &subreq->flags))
 		return netfs_write_subrequest_terminated(subreq, subreq->error, false);
···
 	size_t part;
 
 	if (!stream->avail) {
-		kleave("no write");
+		_leave("no write");
 		return len;
 	}
 
-	kenter("R=%x[%x]", wreq->debug_id, subreq ? subreq->debug_index : 0);
+	_enter("R=%x[%x]", wreq->debug_id, subreq ? subreq->debug_index : 0);
 
 	if (subreq && start != subreq->start + subreq->len) {
 		netfs_issue_write(wreq, stream);
···
 		subreq = stream->construct;
 
 	part = min(subreq->max_len - subreq->len, len);
-	kdebug("part %zx/%zx %zx/%zx", subreq->len, subreq->max_len, part, len);
+	_debug("part %zx/%zx %zx/%zx", subreq->len, subreq->max_len, part, len);
 	subreq->len += part;
 	subreq->nr_segs++;
 
···
 	bool to_eof = false, streamw = false;
 	bool debug = false;
 
-	kenter("");
+	_enter("");
 
 	/* netfs_perform_write() may shift i_size around the page or from out
 	 * of the page to beyond it, but cannot move i_size into or through the
···
 
 	if (fpos >= i_size) {
 		/* mmap beyond eof. */
-		kdebug("beyond eof");
+		_debug("beyond eof");
 		folio_start_writeback(folio);
 		folio_unlock(folio);
 		wreq->nr_group_rel += netfs_folio_written_back(folio);
···
 	}
 	flen -= foff;
 
-	kdebug("folio %zx %zx %zx", foff, flen, fsize);
+	_debug("folio %zx %zx %zx", foff, flen, fsize);
 
 	/* Deal with discontinuities in the stream of dirty pages. These can
 	 * arise from a number of sources:
···
 	for (int s = 0; s < NR_IO_STREAMS; s++)
 		netfs_issue_write(wreq, &wreq->io_streams[s]);
 
-	kleave(" = 0");
+	_leave(" = 0");
 	return 0;
 }
 
···
 	netfs_stat(&netfs_n_wh_writepages);
 
 	do {
-		kdebug("wbiter %lx %llx", folio->index, wreq->start + wreq->submitted);
+		_debug("wbiter %lx %llx", folio->index, wreq->start + wreq->submitted);
 
 		/* It appears we don't have to handle cyclic writeback wrapping. */
 		WARN_ON_ONCE(wreq && folio_pos(folio) < wreq->start + wreq->submitted);
···
 	mutex_unlock(&ictx->wb_lock);
 
 	netfs_put_request(wreq, false, netfs_rreq_trace_put_return);
-	kleave(" = %d", error);
+	_leave(" = %d", error);
 	return error;
 
 couldnt_start:
 	netfs_kill_dirty_pages(mapping, wbc, folio);
 out:
 	mutex_unlock(&ictx->wb_lock);
-	kleave(" = %d", error);
+	_leave(" = %d", error);
 	return error;
 }
 EXPORT_SYMBOL(netfs_writepages);
···
 			       struct folio *folio, size_t copied, bool to_page_end,
 			       struct folio **writethrough_cache)
 {
-	kenter("R=%x ic=%zu ws=%u cp=%zu tp=%u",
+	_enter("R=%x ic=%zu ws=%u cp=%zu tp=%u",
 	       wreq->debug_id, wreq->iter.count, wreq->wsize, copied, to_page_end);
 
 	if (!*writethrough_cache) {
···
 	struct netfs_inode *ictx = netfs_inode(wreq->inode);
 	int ret;
 
-	kenter("R=%x", wreq->debug_id);
+	_enter("R=%x", wreq->debug_id);
 
 	if (writethrough_cache)
 		netfs_write_folio(wreq, wbc, writethrough_cache);
···
 	loff_t start = wreq->start;
 	int error = 0;
 
-	kenter("%zx", len);
+	_enter("%zx", len);
 
 	if (wreq->origin == NETFS_DIO_WRITE)
 		inode_dio_begin(wreq->inode);
···
 	while (len) {
 		// TODO: Prepare content encryption
 
-		kdebug("unbuffered %zx", len);
+		_debug("unbuffered %zx", len);
 		part = netfs_advance_write(wreq, upload, start, len, false);
 		start += part;
 		len -= part;
···
 	if (list_empty(&upload->subrequests))
 		netfs_wake_write_collector(wreq, false);
 
-	kleave(" = %d", error);
+	_leave(" = %d", error);
 	return error;
 }
+42 -21
fs/pidfs.c
···
 	struct task_struct *task __free(put_task) = NULL;
 	struct nsproxy *nsp __free(put_nsproxy) = NULL;
 	struct pid *pid = pidfd_pid(file);
-	struct ns_common *ns_common;
+	struct ns_common *ns_common = NULL;
 
 	if (arg)
 		return -EINVAL;
···
 	switch (cmd) {
 	/* Namespaces that hang of nsproxy. */
 	case PIDFD_GET_CGROUP_NAMESPACE:
-		get_cgroup_ns(nsp->cgroup_ns);
-		ns_common = to_ns_common(nsp->cgroup_ns);
+		if (IS_ENABLED(CONFIG_CGROUPS)) {
+			get_cgroup_ns(nsp->cgroup_ns);
+			ns_common = to_ns_common(nsp->cgroup_ns);
+		}
 		break;
 	case PIDFD_GET_IPC_NAMESPACE:
-		get_ipc_ns(nsp->ipc_ns);
-		ns_common = to_ns_common(nsp->ipc_ns);
+		if (IS_ENABLED(CONFIG_IPC_NS)) {
+			get_ipc_ns(nsp->ipc_ns);
+			ns_common = to_ns_common(nsp->ipc_ns);
+		}
 		break;
 	case PIDFD_GET_MNT_NAMESPACE:
 		get_mnt_ns(nsp->mnt_ns);
 		ns_common = to_ns_common(nsp->mnt_ns);
 		break;
 	case PIDFD_GET_NET_NAMESPACE:
-		ns_common = to_ns_common(nsp->net_ns);
-		get_net_ns(ns_common);
+		if (IS_ENABLED(CONFIG_NET_NS)) {
+			ns_common = to_ns_common(nsp->net_ns);
+			get_net_ns(ns_common);
+		}
 		break;
 	case PIDFD_GET_PID_FOR_CHILDREN_NAMESPACE:
-		get_pid_ns(nsp->pid_ns_for_children);
-		ns_common = to_ns_common(nsp->pid_ns_for_children);
+		if (IS_ENABLED(CONFIG_PID_NS)) {
+			get_pid_ns(nsp->pid_ns_for_children);
+			ns_common = to_ns_common(nsp->pid_ns_for_children);
+		}
 		break;
 	case PIDFD_GET_TIME_NAMESPACE:
-		get_time_ns(nsp->time_ns);
-		ns_common = to_ns_common(nsp->time_ns);
+		if (IS_ENABLED(CONFIG_TIME_NS)) {
+			get_time_ns(nsp->time_ns);
+			ns_common = to_ns_common(nsp->time_ns);
+		}
 		break;
 	case PIDFD_GET_TIME_FOR_CHILDREN_NAMESPACE:
-		get_time_ns(nsp->time_ns_for_children);
-		ns_common = to_ns_common(nsp->time_ns_for_children);
+		if (IS_ENABLED(CONFIG_TIME_NS)) {
+			get_time_ns(nsp->time_ns_for_children);
+			ns_common = to_ns_common(nsp->time_ns_for_children);
+		}
 		break;
 	case PIDFD_GET_UTS_NAMESPACE:
-		get_uts_ns(nsp->uts_ns);
-		ns_common = to_ns_common(nsp->uts_ns);
+		if (IS_ENABLED(CONFIG_UTS_NS)) {
+			get_uts_ns(nsp->uts_ns);
+			ns_common = to_ns_common(nsp->uts_ns);
+		}
 		break;
 	/* Namespaces that don't hang of nsproxy. */
 	case PIDFD_GET_USER_NAMESPACE:
-		rcu_read_lock();
-		ns_common = to_ns_common(get_user_ns(task_cred_xxx(task, user_ns)));
-		rcu_read_unlock();
+		if (IS_ENABLED(CONFIG_USER_NS)) {
+			rcu_read_lock();
+			ns_common = to_ns_common(get_user_ns(task_cred_xxx(task, user_ns)));
+			rcu_read_unlock();
+		}
 		break;
 	case PIDFD_GET_PID_NAMESPACE:
-		rcu_read_lock();
-		ns_common = to_ns_common(get_pid_ns(task_active_pid_ns(task)));
-		rcu_read_unlock();
+		if (IS_ENABLED(CONFIG_PID_NS)) {
+			rcu_read_lock();
+			ns_common = to_ns_common(get_pid_ns(task_active_pid_ns(task)));
+			rcu_read_unlock();
+		}
 		break;
 	default:
 		return -ENOIOCTLCMD;
 	}
+
+	if (!ns_common)
+		return -EOPNOTSUPP;
 
 	/* open_namespace() unconditionally consumes the reference */
 	return open_namespace(ns_common);
+48 -43
fs/xattr.c
···
 				ctx->kvalue, ctx->size, ctx->flags);
 }
 
-static long
-setxattr(struct mnt_idmap *idmap, struct dentry *d,
-	 const char __user *name, const void __user *value, size_t size,
-	 int flags)
+static int path_setxattr(const char __user *pathname,
+			 const char __user *name, const void __user *value,
+			 size_t size, int flags, unsigned int lookup_flags)
 {
 	struct xattr_name kname;
 	struct xattr_ctx ctx = {
···
 		.kname = &kname,
 		.flags = flags,
 	};
+	struct path path;
 	int error;
 
 	error = setxattr_copy(name, &ctx);
 	if (error)
 		return error;
 
-	error = do_setxattr(idmap, d, &ctx);
-
-	kvfree(ctx.kvalue);
-	return error;
-}
-
-static int path_setxattr(const char __user *pathname,
-			 const char __user *name, const void __user *value,
-			 size_t size, int flags, unsigned int lookup_flags)
-{
-	struct path path;
-	int error;
-
 retry:
 	error = user_path_at(AT_FDCWD, pathname, lookup_flags, &path);
 	if (error)
-		return error;
+		goto out;
 	error = mnt_want_write(path.mnt);
 	if (!error) {
-		error = setxattr(mnt_idmap(path.mnt), path.dentry, name,
-				 value, size, flags);
+		error = do_setxattr(mnt_idmap(path.mnt), path.dentry, &ctx);
 		mnt_drop_write(path.mnt);
 	}
 	path_put(&path);
···
 		lookup_flags |= LOOKUP_REVAL;
 		goto retry;
 	}
+
+out:
+	kvfree(ctx.kvalue);
 	return error;
 }
···
 SYSCALL_DEFINE5(fsetxattr, int, fd, const char __user *, name,
 		const void __user *,value, size_t, size, int, flags)
 {
-	struct fd f = fdget(fd);
-	int error = -EBADF;
+	struct xattr_name kname;
+	struct xattr_ctx ctx = {
+		.cvalue = value,
+		.kvalue = NULL,
+		.size = size,
+		.kname = &kname,
+		.flags = flags,
+	};
+	int error;
 
+	CLASS(fd, f)(fd);
 	if (!f.file)
-		return error;
+		return -EBADF;
+
 	audit_file(f.file);
+	error = setxattr_copy(name, &ctx);
+	if (error)
+		return error;
+
 	error = mnt_want_write_file(f.file);
 	if (!error) {
-		error = setxattr(file_mnt_idmap(f.file),
-				 f.file->f_path.dentry, name,
-				 value, size, flags);
+		error = do_setxattr(file_mnt_idmap(f.file),
+				    f.file->f_path.dentry, &ctx);
 		mnt_drop_write_file(f.file);
 	}
-	fdput(f);
+	kvfree(ctx.kvalue);
 	return error;
 }
···
  * Extended attribute REMOVE operations
  */
 static long
-removexattr(struct mnt_idmap *idmap, struct dentry *d,
-	    const char __user *name)
+removexattr(struct mnt_idmap *idmap, struct dentry *d, const char *name)
 {
+	if (is_posix_acl_xattr(name))
+		return vfs_remove_acl(idmap, d, name);
+	return vfs_removexattr(idmap, d, name);
+}
+
+static int path_removexattr(const char __user *pathname,
+			    const char __user *name, unsigned int lookup_flags)
+{
+	struct path path;
 	int error;
 	char kname[XATTR_NAME_MAX + 1];
 
···
 		error = -ERANGE;
 	if (error < 0)
 		return error;
-
-	if (is_posix_acl_xattr(kname))
-		return vfs_remove_acl(idmap, d, kname);
-
-	return vfs_removexattr(idmap, d, kname);
-}
-
-static int path_removexattr(const char __user *pathname,
-			    const char __user *name, unsigned int lookup_flags)
-{
-	struct path path;
-	int error;
 retry:
 	error = user_path_at(AT_FDCWD, pathname, lookup_flags, &path);
 	if (error)
 		return error;
 	error = mnt_want_write(path.mnt);
 	if (!error) {
-		error = removexattr(mnt_idmap(path.mnt), path.dentry, name);
+		error = removexattr(mnt_idmap(path.mnt), path.dentry, kname);
 		mnt_drop_write(path.mnt);
 	}
 	path_put(&path);
···
 SYSCALL_DEFINE2(fremovexattr, int, fd, const char __user *, name)
 {
 	struct fd f = fdget(fd);
+	char kname[XATTR_NAME_MAX + 1];
 	int error = -EBADF;
 
 	if (!f.file)
 		return error;
 	audit_file(f.file);
+
+	error = strncpy_from_user(kname, name, sizeof(kname));
+	if (error == 0 || error == sizeof(kname))
+		error = -ERANGE;
+	if (error < 0)
+		return error;
+
 	error = mnt_want_write_file(f.file);
 	if (!error) {
 		error = removexattr(file_mnt_idmap(f.file),
-				    f.file->f_path.dentry, name);
+				    f.file->f_path.dentry, kname);
 		mnt_drop_write_file(f.file);
 	}
 	fdput(f);
+227 -31
tools/testing/selftests/pidfd/pidfd_setns_test.c
···
 #include <unistd.h>
 #include <sys/socket.h>
 #include <sys/stat.h>
+#include <linux/ioctl.h>
 
 #include "pidfd.h"
 #include "../clone3/clone3_selftests.h"
 #include "../kselftest_harness.h"
+
+#ifndef PIDFS_IOCTL_MAGIC
+#define PIDFS_IOCTL_MAGIC 0xFF
+#endif
+
+#ifndef PIDFD_GET_CGROUP_NAMESPACE
+#define PIDFD_GET_CGROUP_NAMESPACE _IO(PIDFS_IOCTL_MAGIC, 1)
+#endif
+
+#ifndef PIDFD_GET_IPC_NAMESPACE
+#define PIDFD_GET_IPC_NAMESPACE _IO(PIDFS_IOCTL_MAGIC, 2)
+#endif
+
+#ifndef PIDFD_GET_MNT_NAMESPACE
+#define PIDFD_GET_MNT_NAMESPACE _IO(PIDFS_IOCTL_MAGIC, 3)
+#endif
+
+#ifndef PIDFD_GET_NET_NAMESPACE
+#define PIDFD_GET_NET_NAMESPACE _IO(PIDFS_IOCTL_MAGIC, 4)
+#endif
+
+#ifndef PIDFD_GET_PID_NAMESPACE
+#define PIDFD_GET_PID_NAMESPACE _IO(PIDFS_IOCTL_MAGIC, 5)
+#endif
+
+#ifndef PIDFD_GET_PID_FOR_CHILDREN_NAMESPACE
+#define PIDFD_GET_PID_FOR_CHILDREN_NAMESPACE _IO(PIDFS_IOCTL_MAGIC, 6)
+#endif
+
+#ifndef PIDFD_GET_TIME_NAMESPACE
+#define PIDFD_GET_TIME_NAMESPACE _IO(PIDFS_IOCTL_MAGIC, 7)
+#endif
+
+#ifndef PIDFD_GET_TIME_FOR_CHILDREN_NAMESPACE
+#define PIDFD_GET_TIME_FOR_CHILDREN_NAMESPACE _IO(PIDFS_IOCTL_MAGIC, 8)
+#endif
+
+#ifndef PIDFD_GET_USER_NAMESPACE
+#define PIDFD_GET_USER_NAMESPACE _IO(PIDFS_IOCTL_MAGIC, 9)
+#endif
+
+#ifndef PIDFD_GET_UTS_NAMESPACE
+#define PIDFD_GET_UTS_NAMESPACE _IO(PIDFS_IOCTL_MAGIC, 10)
+#endif
 
 enum {
 	PIDFD_NS_USER,
···
 	PIDFD_NS_CGROUP,
 	PIDFD_NS_PIDCLD,
 	PIDFD_NS_TIME,
+	PIDFD_NS_TIMECLD,
 	PIDFD_NS_MAX
 };
 
 const struct ns_info {
 	const char *name;
 	int flag;
+	unsigned int pidfd_ioctl;
 } ns_info[] = {
-	[PIDFD_NS_USER]   = { "user",             CLONE_NEWUSER, },
-	[PIDFD_NS_MNT]    = { "mnt",              CLONE_NEWNS, },
-	[PIDFD_NS_PID]    = { "pid",              CLONE_NEWPID, },
-	[PIDFD_NS_UTS]    = { "uts",              CLONE_NEWUTS, },
-	[PIDFD_NS_IPC]    = { "ipc",              CLONE_NEWIPC, },
-	[PIDFD_NS_NET]    = { "net",              CLONE_NEWNET, },
-	[PIDFD_NS_CGROUP] = { "cgroup",           CLONE_NEWCGROUP, },
-	[PIDFD_NS_PIDCLD] = { "pid_for_children", 0, },
-	[PIDFD_NS_TIME]   = { "time",             CLONE_NEWTIME, },
+	[PIDFD_NS_USER]    = { "user",              CLONE_NEWUSER,   PIDFD_GET_USER_NAMESPACE, },
+	[PIDFD_NS_MNT]     = { "mnt",               CLONE_NEWNS,     PIDFD_GET_MNT_NAMESPACE, },
+	[PIDFD_NS_PID]     = { "pid",               CLONE_NEWPID,    PIDFD_GET_PID_NAMESPACE, },
+	[PIDFD_NS_UTS]     = { "uts",               CLONE_NEWUTS,    PIDFD_GET_UTS_NAMESPACE, },
+	[PIDFD_NS_IPC]     = { "ipc",               CLONE_NEWIPC,    PIDFD_GET_IPC_NAMESPACE, },
+	[PIDFD_NS_NET]     = { "net",               CLONE_NEWNET,    PIDFD_GET_NET_NAMESPACE, },
+	[PIDFD_NS_CGROUP]  = { "cgroup",            CLONE_NEWCGROUP, PIDFD_GET_CGROUP_NAMESPACE, },
+	[PIDFD_NS_TIME]    = { "time",              CLONE_NEWTIME,   PIDFD_GET_TIME_NAMESPACE, },
+	[PIDFD_NS_PIDCLD]  = { "pid_for_children",  0,               PIDFD_GET_PID_FOR_CHILDREN_NAMESPACE, },
+	[PIDFD_NS_TIMECLD] = { "time_for_children", 0,               PIDFD_GET_TIME_FOR_CHILDREN_NAMESPACE, },
 };
 
 FIXTURE(current_nsset)
···
 	pid_t pid;
 	int pidfd;
 	int nsfds[PIDFD_NS_MAX];
+	int child_pidfd_derived_nsfds[PIDFD_NS_MAX];
 
 	pid_t child_pid_exited;
 	int child_pidfd_exited;
···
 	pid_t child_pid1;
 	int child_pidfd1;
 	int child_nsfds1[PIDFD_NS_MAX];
+	int child_pidfd_derived_nsfds1[PIDFD_NS_MAX];
 
 	pid_t child_pid2;
 	int child_pidfd2;
 	int child_nsfds2[PIDFD_NS_MAX];
+	int child_pidfd_derived_nsfds2[PIDFD_NS_MAX];
 };
 
 static int sys_waitid(int which, pid_t pid, int options)
···
 	char c;
 
 	for (i = 0; i < PIDFD_NS_MAX; i++) {
-		self->nsfds[i]        = -EBADF;
-		self->child_nsfds1[i] = -EBADF;
-		self->child_nsfds2[i] = -EBADF;
+		self->nsfds[i]                      = -EBADF;
+		self->child_nsfds1[i]               = -EBADF;
+		self->child_nsfds2[i]               = -EBADF;
+		self->child_pidfd_derived_nsfds[i]  = -EBADF;
+		self->child_pidfd_derived_nsfds1[i] = -EBADF;
+		self->child_pidfd_derived_nsfds2[i] = -EBADF;
 	}
 
 	proc_fd = open("/proc/self/ns", O_DIRECTORY | O_CLOEXEC);
···
 	}
 
 	self->pid = getpid();
+	self->pidfd = sys_pidfd_open(self->pid, 0);
+	EXPECT_GT(self->pidfd, 0) {
+		TH_LOG("%m - Failed to open pidfd for process %d", self->pid);
+	}
+
 	for (i = 0; i < PIDFD_NS_MAX; i++) {
 		const struct ns_info *info = &ns_info[i];
 		self->nsfds[i] = openat(proc_fd, info->name, O_RDONLY | O_CLOEXEC);
···
 				info->name, self->pid);
 			}
 		}
-	}
 
-	self->pidfd = sys_pidfd_open(self->pid, 0);
-	EXPECT_GT(self->pidfd, 0) {
-		TH_LOG("%m - Failed to open pidfd for process %d", self->pid);
+		self->child_pidfd_derived_nsfds[i] = ioctl(self->pidfd, info->pidfd_ioctl, 0);
+		if (self->child_pidfd_derived_nsfds[i] < 0) {
+			EXPECT_EQ(errno, EOPNOTSUPP) {
+				TH_LOG("%m - Failed to derive %s namespace from pidfd of process %d",
+				       info->name, self->pid);
+			}
+		}
 	}
 
 	/* Create task that exits right away. */
-	self->child_pid_exited = create_child(&self->child_pidfd_exited,
-					      CLONE_NEWUSER | CLONE_NEWNET);
+	self->child_pid_exited = create_child(&self->child_pidfd_exited, 0);
 	EXPECT_GE(self->child_pid_exited, 0);
 
-	if (self->child_pid_exited == 0)
+	if (self->child_pid_exited == 0) {
+		if (self->nsfds[PIDFD_NS_USER] >= 0 && unshare(CLONE_NEWUSER) < 0)
+			_exit(EXIT_FAILURE);
+		if (self->nsfds[PIDFD_NS_NET] >= 0 && unshare(CLONE_NEWNET) < 0)
+			_exit(EXIT_FAILURE);
 		_exit(EXIT_SUCCESS);
+	}
 
 	ASSERT_EQ(sys_waitid(P_PID, self->child_pid_exited, WEXITED | WNOWAIT), 0);
···
 	EXPECT_EQ(ret, 0);
 
 	/* Create tasks that will be stopped. */
-	self->child_pid1 = create_child(&self->child_pidfd1,
-					CLONE_NEWUSER | CLONE_NEWNS |
-					CLONE_NEWCGROUP | CLONE_NEWIPC |
-					CLONE_NEWUTS | CLONE_NEWPID |
-					CLONE_NEWNET);
+	if (self->nsfds[PIDFD_NS_USER] >= 0 && self->nsfds[PIDFD_NS_PID] >= 0)
+		self->child_pid1 = create_child(&self->child_pidfd1, CLONE_NEWUSER | CLONE_NEWPID);
+	else if (self->nsfds[PIDFD_NS_PID] >= 0)
+		self->child_pid1 = create_child(&self->child_pidfd1, CLONE_NEWPID);
+	else if (self->nsfds[PIDFD_NS_USER] >= 0)
+		self->child_pid1 = create_child(&self->child_pidfd1, CLONE_NEWUSER);
+	else
+		self->child_pid1 = create_child(&self->child_pidfd1, 0);
 	EXPECT_GE(self->child_pid1, 0);
 
 	if (self->child_pid1 == 0) {
 		close(ipc_sockets[0]);
 
-		if (!switch_timens())
+		if (self->nsfds[PIDFD_NS_MNT] >= 0 && unshare(CLONE_NEWNS) < 0) {
+			TH_LOG("%m - Failed to unshare mount namespace for process %d", self->pid);
 			_exit(EXIT_FAILURE);
+		}
+		if (self->nsfds[PIDFD_NS_CGROUP] >= 0 && unshare(CLONE_NEWCGROUP) < 0) {
+			TH_LOG("%m - Failed to unshare cgroup namespace for process %d", self->pid);
+			_exit(EXIT_FAILURE);
+		}
+		if (self->nsfds[PIDFD_NS_IPC] >= 0 && unshare(CLONE_NEWIPC) < 0) {
+			TH_LOG("%m - Failed to unshare ipc namespace for process %d", self->pid);
+			_exit(EXIT_FAILURE);
+		}
+		if (self->nsfds[PIDFD_NS_UTS] >= 0 && unshare(CLONE_NEWUTS) < 0) {
+			TH_LOG("%m - Failed to unshare uts namespace for process %d", self->pid);
+			_exit(EXIT_FAILURE);
+		}
+		if (self->nsfds[PIDFD_NS_NET] >= 0 && unshare(CLONE_NEWNET) < 0) {
+			TH_LOG("%m - Failed to unshare net namespace for process %d", self->pid);
+			_exit(EXIT_FAILURE);
+		}
+		if (self->nsfds[PIDFD_NS_TIME] >= 0 && !switch_timens()) {
+			TH_LOG("%m - Failed to unshare time namespace for process %d", self->pid);
+			_exit(EXIT_FAILURE);
+		}
 
 		if (write_nointr(ipc_sockets[1], "1", 1) < 0)
 			_exit(EXIT_FAILURE);
···
 	ret = socketpair(AF_LOCAL, SOCK_STREAM | SOCK_CLOEXEC, 0, ipc_sockets);
 	EXPECT_EQ(ret, 0);
 
-	self->child_pid2 = create_child(&self->child_pidfd2,
-					CLONE_NEWUSER | CLONE_NEWNS |
-					CLONE_NEWCGROUP | CLONE_NEWIPC |
-					CLONE_NEWUTS | CLONE_NEWPID |
-					CLONE_NEWNET);
+	if (self->nsfds[PIDFD_NS_USER] >= 0 && self->nsfds[PIDFD_NS_PID] >= 0)
+		self->child_pid2 = create_child(&self->child_pidfd2, CLONE_NEWUSER | CLONE_NEWPID);
+	else if (self->nsfds[PIDFD_NS_PID] >= 0)
+		self->child_pid2 = create_child(&self->child_pidfd2, CLONE_NEWPID);
+	else if (self->nsfds[PIDFD_NS_USER] >= 0)
+		self->child_pid2 = create_child(&self->child_pidfd2, CLONE_NEWUSER);
+	else
+		self->child_pid2 = create_child(&self->child_pidfd2, 0);
 	EXPECT_GE(self->child_pid2, 0);
 
 	if (self->child_pid2 == 0) {
 		close(ipc_sockets[0]);
 
-		if (!switch_timens())
+		if (self->nsfds[PIDFD_NS_MNT] >= 0 && unshare(CLONE_NEWNS) < 0) {
+			TH_LOG("%m - Failed to unshare mount namespace for process %d", self->pid);
 			_exit(EXIT_FAILURE);
+		}
+		if (self->nsfds[PIDFD_NS_CGROUP] >= 0 && unshare(CLONE_NEWCGROUP) < 0) {
+			TH_LOG("%m - Failed to unshare cgroup namespace for process %d", self->pid);
+			_exit(EXIT_FAILURE);
+		}
+		if (self->nsfds[PIDFD_NS_IPC] >= 0 && unshare(CLONE_NEWIPC) < 0) {
+			TH_LOG("%m - Failed to unshare ipc namespace for process %d", self->pid);
+			_exit(EXIT_FAILURE);
+		}
+		if (self->nsfds[PIDFD_NS_UTS] >= 0 && unshare(CLONE_NEWUTS) < 0) {
+			TH_LOG("%m - Failed to unshare uts namespace for process %d", self->pid);
+			_exit(EXIT_FAILURE);
+		}
+		if (self->nsfds[PIDFD_NS_NET] >= 0 && unshare(CLONE_NEWNET) < 0) {
+			TH_LOG("%m - Failed to unshare net namespace for process %d", self->pid);
+			_exit(EXIT_FAILURE);
+		}
+		if (self->nsfds[PIDFD_NS_TIME] >= 0 && !switch_timens()) {
+			TH_LOG("%m - Failed to unshare time namespace for process %d", self->pid);
+			_exit(EXIT_FAILURE);
+		}
 
 		if (write_nointr(ipc_sockets[1], "1", 1) < 0)
 			_exit(EXIT_FAILURE);
···
 				info->name, self->child_pid1);
 			}
 		}
+
+		self->child_pidfd_derived_nsfds1[i] = ioctl(self->child_pidfd1, info->pidfd_ioctl, 0);
+		if (self->child_pidfd_derived_nsfds1[i] < 0) {
+			EXPECT_EQ(errno, EOPNOTSUPP) {
+				TH_LOG("%m - Failed to derive %s namespace from pidfd of process %d",
+				       info->name, self->child_pid1);
+			}
+		}
+
+		self->child_pidfd_derived_nsfds2[i] = ioctl(self->child_pidfd2, info->pidfd_ioctl, 0);
+		if (self->child_pidfd_derived_nsfds2[i] < 0) {
+			EXPECT_EQ(errno, EOPNOTSUPP) {
+				TH_LOG("%m - Failed to derive %s namespace from pidfd of process %d",
+				       info->name, self->child_pid2);
+			}
+		}
 	}
 
 	close(proc_fd);
···
 			close(self->child_nsfds1[i]);
 		if (self->child_nsfds2[i] >= 0)
 			close(self->child_nsfds2[i]);
+		if (self->child_pidfd_derived_nsfds[i] >= 0)
+			close(self->child_pidfd_derived_nsfds[i]);
+		if (self->child_pidfd_derived_nsfds1[i] >= 0)
+			close(self->child_pidfd_derived_nsfds1[i]);
+		if (self->child_pidfd_derived_nsfds2[i] >= 0)
+			close(self->child_pidfd_derived_nsfds2[i]);
 	}
 
 	if (self->child_pidfd1 >= 0)
···
 		}
 	}
 
+TEST_F(current_nsset, pidfd_derived_nsfd_incremental_setns)
+{
+	int i;
+	pid_t pid;
+
+	pid = getpid();
+	for (i = 0; i < PIDFD_NS_MAX; i++) {
+		const struct ns_info *info = &ns_info[i];
+		int nsfd;
+
+		if (self->child_pidfd_derived_nsfds1[i] < 0)
+			continue;
+
+		if (info->flag) {
+			ASSERT_EQ(setns(self->child_pidfd_derived_nsfds1[i], info->flag), 0) {
+				TH_LOG("%m - Failed to setns to %s namespace of %d via nsfd %d",
+				       info->name, self->child_pid1,
+				       self->child_pidfd_derived_nsfds1[i]);
+			}
+		}
+
+		/* Verify that we have changed to the correct namespaces. */
+		if (info->flag == CLONE_NEWPID)
+			nsfd = self->child_pidfd_derived_nsfds[i];
+		else
+			nsfd = self->child_pidfd_derived_nsfds1[i];
+		ASSERT_EQ(in_same_namespace(nsfd, pid, info->name), 1) {
+			TH_LOG("setns failed to place us correctly into %s namespace of %d via nsfd %d",
+			       info->name, self->child_pid1,
+			       self->child_pidfd_derived_nsfds1[i]);
+		}
+		TH_LOG("Managed to correctly setns to %s namespace of %d via nsfd %d",
+		       info->name, self->child_pid1, self->child_pidfd_derived_nsfds1[i]);
+	}
+}
+
 TEST_F(current_nsset, pidfd_one_shot_setns)
 {
 	unsigned flags = 0;
···
 		TH_LOG("%m - Correctly failed to setns to %s namespace of %d via nsfd %d",
 		       info->name, self->child_pid2,
 		       self->child_nsfds2[i]);
+	}
+
+	/*
+	 * Can't setns to a user namespace outside of our hierarchy since we
+	 * don't have caps in there and didn't create it. That means that under
+	 * no circumstances should we be able to setns to any of the other
+	 * ones since they aren't owned by our user namespace.
+	 */
+	for (i = 0; i < PIDFD_NS_MAX; i++) {
+		const struct ns_info *info = &ns_info[i];
+
+		if (self->child_pidfd_derived_nsfds2[i] < 0 || !info->flag)
+			continue;
+
+		ASSERT_NE(setns(self->child_pidfd_derived_nsfds2[i], info->flag), 0) {
+			TH_LOG("Managed to setns to %s namespace of %d via nsfd %d",
+			       info->name, self->child_pid2,
+			       self->child_pidfd_derived_nsfds2[i]);
+		}
+		TH_LOG("%m - Correctly failed to setns to %s namespace of %d via nsfd %d",
+		       info->name, self->child_pid2,
+		       self->child_pidfd_derived_nsfds2[i]);
 	}
 }