.. SPDX-License-Identifier: GPL-2.0

======================
The SGI XFS Filesystem
======================

XFS is a high performance journaling filesystem which originated
on the SGI IRIX platform.  It is completely multi-threaded; it
supports large files and large filesystems, extended attributes,
and variable block sizes; it is extent based and makes extensive
use of B-trees (directories, extents, free space) to aid both
performance and scalability.

Refer to the documentation at https://xfs.wiki.kernel.org/
for further details.  This implementation is on-disk compatible
with the IRIX version of XFS.


Mount Options
=============

When mounting an XFS filesystem, the following options are accepted.

  allocsize=size
    Sets the buffered I/O end-of-file preallocation size when
    doing delayed allocation writeout (default size is 64KiB).
    Valid values for this option are page size (typically 4KiB)
    through to 1GiB, inclusive, in power-of-2 increments.

    The default behaviour is for dynamic end-of-file
    preallocation size, which uses a set of heuristics to
    optimise the preallocation size based on the current
    allocation patterns within the file and the access patterns
    to the file.  Specifying a fixed ``allocsize`` value turns off
    the dynamic behaviour.

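The value constraints above can be checked before mounting.  A minimal
sketch, assuming a 4KiB page size (query the real value with
``getconf PAGESIZE``):

```shell
# Check that a proposed allocsize satisfies the documented constraints:
# a power of two, from page size through 1GiB inclusive.
valid_allocsize() {
    page=4096                  # assumed page size for illustration
    max=$((1 << 30))           # 1GiB
    [ "$1" -ge "$page" ] && [ "$1" -le "$max" ] &&
        [ $(($1 & ($1 - 1))) -eq 0 ]
}

valid_allocsize $((64 * 1024)) && echo "64KiB is a valid allocsize"
```

A valid value is then passed at mount time, e.g.
``mount -o allocsize=1m /dev/sdb1 /mnt`` (device and mount point are
placeholders).
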
  discard or nodiscard (default)
    Enable/disable the issuing of commands to let the block
    device reclaim space freed by the filesystem.  This is
    useful for SSD devices, thinly provisioned LUNs and virtual
    machine images, but may have a performance impact.

    Note: It is currently recommended that you use the ``fstrim``
    application to ``discard`` unused blocks rather than the
    ``discard`` mount option because the performance impact of this
    option is quite severe.

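In line with the note above, periodic trims are usually run from a
timer rather than at mount time.  A sketch of both approaches (requires
root; the mount point is a placeholder):

```shell
# One-off trim of a mounted XFS filesystem:
fstrim -v /mnt/xfs

# Or enable the weekly timer shipped with util-linux on systemd systems:
systemctl enable --now fstrim.timer
```
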
  grpid/bsdgroups or nogrpid/sysvgroups (default)
    These options define what group ID a newly created file
    gets.  When ``grpid`` is set, it takes the group ID of the
    directory in which it is created; otherwise it takes the
    ``fsgid`` of the current process, unless the directory has the
    ``setgid`` bit set, in which case it takes the ``gid`` from the
    parent directory, and also gets the ``setgid`` bit set if it is
    a directory itself.

  filestreams
    Make the data allocator use the filestreams allocation mode
    across the entire filesystem rather than just on directories
    configured to use it.

  inode32 or inode64 (default)
    When ``inode32`` is specified, it indicates that XFS limits
    inode creation to locations which will not result in inode
    numbers with more than 32 bits of significance.

    When ``inode64`` is specified, it indicates that XFS is allowed
    to create inodes at any location in the filesystem,
    including those which will result in inode numbers occupying
    more than 32 bits of significance.

    ``inode32`` is provided for backwards compatibility with older
    systems and applications, since 64-bit inode numbers might
    cause problems for some applications that cannot handle
    large inode numbers.  If applications are in use which do
    not handle inode numbers bigger than 32 bits, the ``inode32``
    option should be specified.

  largeio or nolargeio (default)
    If ``nolargeio`` is specified, the optimal I/O reported in
    ``st_blksize`` by **stat(2)** will be as small as possible to allow
    user applications to avoid inefficient read/modify/write
    I/O.  This is typically the page size of the machine, as
    this is the granularity of the page cache.

    If ``largeio`` is specified, a filesystem that was created with a
    ``swidth`` specified will return the ``swidth`` value (in bytes)
    in ``st_blksize``.  If the filesystem does not have a ``swidth``
    specified but does specify an ``allocsize`` then ``allocsize``
    (in bytes) will be returned instead.  Otherwise the behaviour
    is the same as if ``nolargeio`` was specified.

  logbufs=value
    Set the number of in-memory log buffers.  Valid numbers
    range from 2-8 inclusive.

    The default value is 8 buffers.

    If the memory cost of 8 log buffers is too high on small
    systems, then it may be reduced at some cost to performance
    on metadata intensive workloads.  The ``logbsize`` option below
    controls the size of each buffer and so is also relevant to
    this case.

  lifetime (default) or nolifetime
    Enable data placement based on write life time hints provided
    by the user.  This turns on co-allocation of data of similar
    life times when statistically favorable to reduce garbage
    collection cost.

    These options are only available for zoned rt file systems.

  logbsize=value
    Set the size of each in-memory log buffer.  The size may be
    specified in bytes, or in kilobytes with a "k" suffix.
    Valid sizes for version 1 and version 2 logs are 16384 (16k)
    and 32768 (32k).  Valid sizes for version 2 logs also
    include 65536 (64k), 131072 (128k) and 262144 (256k).  The
    logbsize must be an integer multiple of the log
    stripe unit configured at **mkfs(8)** time.

    The default value for version 1 logs is 32768, while the
    default value for version 2 logs is MAX(32768, log_sunit).

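The size constraints above can be sanity-checked before mounting.  A
minimal sketch for a version 2 log; the stripe unit value below is an
assumption (read the real one from ``xfs_info`` output):

```shell
# Validate a candidate v2 logbsize: it must be one of the listed sizes
# and an integer multiple of the log stripe unit.
log_sunit=$((32 * 1024))      # assumed log stripe unit for illustration
logbsize=$((256 * 1024))

case "$logbsize" in
    16384|32768|65536|131072|262144) ;;
    *) echo "logbsize=$logbsize: not a valid v2 log buffer size"; exit 1 ;;
esac

if [ $((logbsize % log_sunit)) -eq 0 ]; then
    echo "logbsize=$logbsize is acceptable"
fi
```

The option is then applied at mount time, e.g.
``mount -o logbufs=8,logbsize=256k /dev/sdb1 /mnt``.
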
  logdev=device and rtdev=device
    Use an external log (metadata journal) and/or real-time device.
    An XFS filesystem has up to three parts: a data section, a log
    section, and a real-time section.  The real-time section is
    optional, and the log section can be separate from the data
    section or contained within it.

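As a sketch, an external log is specified both when making and when
mounting the filesystem; the device names below are placeholders:

```shell
# Create the filesystem with its journal on a separate (ideally faster)
# device, then mount it; the same logdev must be given at mount time.
mkfs.xfs -l logdev=/dev/sdc1,size=512m /dev/sdb1
mount -o logdev=/dev/sdc1 /dev/sdb1 /mnt
```
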
  max_atomic_write=value
    Set the maximum size of an atomic write.  The size may be
    specified in bytes, in kilobytes with a "k" suffix, in megabytes
    with a "m" suffix, or in gigabytes with a "g" suffix.  The size
    cannot be larger than the maximum write size, larger than the
    size of any allocation group, or larger than the size of a
    remapping operation that the log can complete atomically.

    The default value is to set the maximum I/O completion size
    to allow each CPU to handle one at a time.

  max_open_zones=value
    Specify the maximum number of zones to keep open for writing
    on a zoned rt device.  Many open zones aid file data
    separation but may impact performance on HDDs.

    If ``max_open_zones`` is not specified, the value is determined
    by the capabilities and the size of the zoned rt device.

  noalign
    Data allocations will not be aligned at stripe unit
    boundaries.  This is only relevant to filesystems created
    with non-zero data alignment parameters (``sunit``, ``swidth``) by
    **mkfs(8)**.

  norecovery
    The filesystem will be mounted without running log recovery.
    If the filesystem was not cleanly unmounted, it is likely to
    be inconsistent when mounted in ``norecovery`` mode.
    Some files or directories may not be accessible because of this.
    Filesystems mounted ``norecovery`` must be mounted read-only or
    the mount will fail.

  nouuid
    Don't check for double mounted file systems using the file
    system ``uuid``.  This is useful to mount LVM snapshot volumes,
    and often used in combination with ``norecovery`` for mounting
    read-only snapshots.

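The snapshot use case mentioned above typically combines the three
options.  A hypothetical LVM session (volume group and volume names are
placeholders):

```shell
# Snapshot an origin LV that is still mounted, then mount the snapshot.
# The snapshot shares the origin's UUID and has an unrecovered log,
# hence ro,nouuid,norecovery.
lvcreate -s -n data-snap -L 10G vg0/data
mount -o ro,nouuid,norecovery /dev/vg0/data-snap /mnt/snap
```
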
  noquota
    Forcibly turns off all quota accounting and enforcement
    within the filesystem.

  uquota/usrquota/uqnoenforce/quota
    User disk quota accounting enabled, and limits (optionally)
    enforced.  Refer to **xfs_quota(8)** for further details.

  gquota/grpquota/gqnoenforce
    Group disk quota accounting enabled and limits (optionally)
    enforced.  Refer to **xfs_quota(8)** for further details.

  pquota/prjquota/pqnoenforce
    Project disk quota accounting enabled and limits (optionally)
    enforced.  Refer to **xfs_quota(8)** for further details.

  sunit=value and swidth=value
    Used to specify the stripe unit and width for a RAID device
    or a stripe volume.  "value" must be specified in 512-byte
    block units.  These options are only relevant to filesystems
    that were created with non-zero data alignment parameters.

    The ``sunit`` and ``swidth`` parameters specified must be compatible
    with the existing filesystem alignment characteristics.  In
    general, that means the only valid changes to ``sunit`` are
    increasing it by a power-of-2 multiple.  Valid ``swidth`` values
    are any integer multiple of a valid ``sunit`` value.

    Typically the only time these mount options are necessary is
    after an underlying RAID device has had its geometry
    modified, such as adding a new disk to a RAID5 lun and
    reshaping it.

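As a worked example (all numbers hypothetical): a RAID5 array with a
64KiB chunk size has sunit = 65536/512 = 128 in 512-byte units.  Growing
the array from 4 to 5 data disks changes swidth from 4*128 = 512 to
5*128 = 640:

```shell
# After reshaping the array, tell XFS about the new geometry once:
mount -o sunit=128,swidth=640 /dev/md0 /mnt
```
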
  swalloc
    Data allocations will be rounded up to stripe width boundaries
    when the current end of file is being extended and the file
    size is larger than the stripe width size.

  wsync
    When specified, all filesystem namespace operations are
    executed synchronously.  This ensures that when the namespace
    operation (create, unlink, etc) completes, the change to the
    namespace is on stable storage.  This is useful in HA setups
    where failover must not result in clients seeing
    inconsistent namespace presentation during or after a
    failover event.

Deprecation of V4 Format
========================

The V4 filesystem format lacks certain features that are supported by
the V5 format, such as metadata checksumming, strengthened metadata
verification, and the ability to store timestamps past the year 2038.
Because of this, the V4 format is deprecated.  All users should upgrade
by backing up their files, reformatting, and restoring from the backup.

Administrators and users can detect a V4 filesystem by running xfs_info
against a filesystem mountpoint and checking for a string containing
"crc=".  If no such string is found, please upgrade xfsprogs to the
latest version and try again.

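The check described above can be scripted.  The sample output below is
illustrative; on a real system examine ``xfs_info <mountpoint>`` output
instead:

```shell
# Classify a filesystem from the crc= field of xfs_info output.
sample='meta-data=/dev/sda1   isize=512   agcount=4, agsize=655360 blks
         =                    sectsz=512  attr=2, projid32bit=1
         =                    crc=1       finobt=1, sparse=1'

case "$sample" in
    *crc=1*) echo "V5 filesystem" ;;
    *crc=0*) echo "V4 filesystem: plan a backup/reformat/restore upgrade" ;;
    *)       echo "no crc= string: upgrade xfsprogs and re-run xfs_info" ;;
esac
```
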
The deprecation will take place in two parts.  Support for mounting V4
filesystems can now be disabled at kernel build time via a Kconfig
option.  These options were changed to default to no in September 2025.
In September 2030, support will be removed from the codebase entirely.

Note: Distributors may choose to withdraw V4 format support earlier than
the dates listed above.

Deprecated Mount Options
========================

============================  ================
 Name                         Removal Schedule
============================  ================
Mounting with V4 filesystem   September 2030
Mounting ascii-ci filesystem  September 2030
============================  ================


Removed Mount Options
=====================

===========================  =======
 Name                        Removed
===========================  =======
 delaylog/nodelaylog         v4.0
 ihashsize                   v4.0
 irixsgid                    v4.0
 osyncisdsync/osyncisosync   v4.0
 barrier                     v4.19
 nobarrier                   v4.19
 ikeep/noikeep               v6.18
 attr2/noattr2               v6.18
===========================  =======

sysctls
=======

The following sysctls are available for the XFS filesystem:

  fs.xfs.stats_clear (Min: 0  Default: 0  Max: 1)
    Setting this to "1" clears accumulated XFS statistics
    in /proc/fs/xfs/stat.  It then immediately resets to "0".

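Sysctls are read and written with **sysctl(8)**.  For example, to
snapshot and then clear the statistics (requires root):

```shell
cat /proc/fs/xfs/stat          # record the current counters
sysctl fs.xfs.stats_clear=1    # reset them; the knob snaps back to 0
```
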
  fs.xfs.xfssyncd_centisecs (Min: 100  Default: 3000  Max: 720000)
    The interval at which the filesystem flushes metadata
    out to disk and runs internal cache cleanup routines.

  fs.xfs.filestream_centisecs (Min: 1  Default: 3000  Max: 360000)
    The interval at which the filesystem ages filestreams cache
    references and returns timed-out AGs back to the free stream
    pool.

  fs.xfs.speculative_prealloc_lifetime
    (Units: seconds   Min: 1  Default: 300  Max: 86400)
    The interval at which the background scanning for inodes
    with unused speculative preallocation runs.  The scan
    removes unused preallocation from clean inodes and releases
    the unused space back to the free pool.

  fs.xfs.error_level (Min: 0  Default: 3  Max: 11)
    A volume knob for error reporting when internal errors occur.
    This will generate detailed messages & backtraces for filesystem
    shutdowns, for example.  Current threshold values are:

        XFS_ERRLEVEL_OFF:       0
        XFS_ERRLEVEL_LOW:       1
        XFS_ERRLEVEL_HIGH:      5

  fs.xfs.panic_mask (Min: 0  Default: 0  Max: 511)
    Causes certain error conditions to call BUG().  Value is a bitmask;
    OR together the tags which represent errors which should cause panics:

        XFS_NO_PTAG                     0
        XFS_PTAG_IFLUSH                 0x00000001
        XFS_PTAG_LOGRES                 0x00000002
        XFS_PTAG_AILDELETE              0x00000004
        XFS_PTAG_ERROR_REPORT           0x00000008
        XFS_PTAG_SHUTDOWN_CORRUPT       0x00000010
        XFS_PTAG_SHUTDOWN_IOERROR       0x00000020
        XFS_PTAG_SHUTDOWN_LOGERROR      0x00000040
        XFS_PTAG_FSBLOCK_ZERO           0x00000080
        XFS_PTAG_VERIFIER_ERROR         0x00000100

    This option is intended for debugging only.

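A sketch of composing a mask from the table above, here panicking on all
three shutdown classes (tag values copied from the table):

```shell
# OR the chosen tags together and print the resulting sysctl setting.
XFS_PTAG_SHUTDOWN_CORRUPT=$((0x00000010))
XFS_PTAG_SHUTDOWN_IOERROR=$((0x00000020))
XFS_PTAG_SHUTDOWN_LOGERROR=$((0x00000040))

mask=$((XFS_PTAG_SHUTDOWN_CORRUPT | XFS_PTAG_SHUTDOWN_IOERROR | XFS_PTAG_SHUTDOWN_LOGERROR))
printf 'fs.xfs.panic_mask=%d (0x%02x)\n' "$mask" "$mask"
```

The value is then applied (debugging only) with
``sysctl fs.xfs.panic_mask=112``.
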
  fs.xfs.inherit_sync (Min: 0  Default: 1  Max: 1)
    Setting this to "1" will cause the "sync" flag set
    by the **xfs_io(8)** chattr command on a directory to be
    inherited by files in that directory.

  fs.xfs.inherit_nodump (Min: 0  Default: 1  Max: 1)
    Setting this to "1" will cause the "nodump" flag set
    by the **xfs_io(8)** chattr command on a directory to be
    inherited by files in that directory.

  fs.xfs.inherit_noatime (Min: 0  Default: 1  Max: 1)
    Setting this to "1" will cause the "noatime" flag set
    by the **xfs_io(8)** chattr command on a directory to be
    inherited by files in that directory.

  fs.xfs.inherit_nosymlinks (Min: 0  Default: 1  Max: 1)
    Setting this to "1" will cause the "nosymlinks" flag set
    by the **xfs_io(8)** chattr command on a directory to be
    inherited by files in that directory.

  fs.xfs.inherit_nodefrag (Min: 0  Default: 1  Max: 1)
    Setting this to "1" will cause the "nodefrag" flag set
    by the **xfs_io(8)** chattr command on a directory to be
    inherited by files in that directory.

  fs.xfs.rotorstep (Min: 1  Default: 1  Max: 256)
    In "inode32" allocation mode, this option determines how many
    files the allocator attempts to allocate in the same allocation
    group before moving to the next allocation group.  The intent
    is to control the rate at which the allocator moves between
    allocation groups when allocating extents for new files.

Deprecated Sysctls
==================

None currently.

Removed Sysctls
===============

==========================================  =======
 Name                                       Removed
==========================================  =======
 fs.xfs.xfsbufd_centisec                    v4.0
 fs.xfs.age_buffer_centisecs                v4.0
 fs.xfs.irix_symlink_mode                   v6.18
 fs.xfs.irix_sgid_inherit                   v6.18
 fs.xfs.speculative_cow_prealloc_lifetime   v6.18
==========================================  =======

Error handling
==============

XFS can act differently according to the type of error found during its
operation.  The implementation introduces the following concepts to the
error handler:

  - failure speed:
      Defines how fast XFS should propagate an error upwards when a
      specific error is found during the filesystem operation.  It can
      propagate immediately, after a defined number of retries, after a
      set time period, or simply retry forever.

  - error classes:
      Specifies the subsystem the error configuration will apply to, such
      as metadata IO or memory allocation.  Different subsystems will have
      different error handlers for which behaviour can be configured.

  - error handlers:
      Defines the behavior for a specific error.

The filesystem behavior during an error can be set via ``sysfs`` files.
Each error handler works independently - the first condition met by an
error handler for a specific class will cause the error to be propagated
rather than reset and retried.

The action taken by the filesystem when the error is propagated is
context dependent - it may cause a shut down in the case of an
unrecoverable error, it may be reported back to userspace, or it may even
be ignored because there's nothing useful we can do with the error or
anyone we can report it to (e.g. during unmount).

The configuration files are organized into the following hierarchy for
each mounted filesystem:

  /sys/fs/xfs/<dev>/error/<class>/<error>/

Where:

  <dev>
    The short device name of the mounted filesystem.  This is the same
    device name that shows up in XFS kernel error messages as
    "XFS(<dev>): ..."

  <class>
    The subsystem the error configuration belongs to.  As of 4.9, the
    defined classes are:

    - "metadata": applies to metadata buffer write IO

  <error>
    The individual error handler configurations.


Each filesystem has "global" error configuration options defined in
their top level directory:

  /sys/fs/xfs/<dev>/error/

  fail_at_unmount (Min: 0  Default: 1  Max: 1)
    Defines the filesystem error behavior at unmount time.

    If set to a value of 1, XFS will override all other error
    configurations during unmount and replace them with "immediate
    fail" characteristics: i.e. no retries, no retry timeout.  This
    will always allow unmount to succeed when there are persistent
    errors present.

    If set to 0, the configured retry behaviour will continue until
    all retries and/or timeouts have been exhausted.  This will delay
    unmount completion when there are persistent errors, and it may
    prevent the filesystem from ever unmounting fully in the case of
    "retry forever" handler configurations.

    Note: there is no guarantee that fail_at_unmount can be set while
    an unmount is in progress.  It is possible that the ``sysfs``
    entries are removed by the unmounting filesystem before a "retry
    forever" error handler configuration causes unmount to hang, and
    hence the filesystem must be configured appropriately before
    unmount begins to prevent unmount hangs.

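For example, to guarantee unmount completes on a filesystem with
persistent errors (the device name is a placeholder, and this must
happen before the unmount starts):

```shell
echo 1 > /sys/fs/xfs/sda1/error/fail_at_unmount
umount /mnt
```
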
Each filesystem has specific error class handlers that define the error
propagation behaviour for specific errors.  There is also a "default"
error handler defined, which defines the behaviour for all errors that
don't have specific handlers defined.  Where multiple retry constraints
are configured for a single error, the first retry configuration that
expires will cause the error to be propagated.  The handler
configurations are found in the directory:

  /sys/fs/xfs/<dev>/error/<class>/<error>/

  max_retries (Min: -1  Default: Varies  Max: INTMAX)
    Defines the allowed number of retries of a specific error before
    the filesystem will propagate the error.  The retry count for a
    given error context (e.g. a specific metadata buffer) is reset
    every time there is a successful completion of the operation.

    Setting the value to "-1" will cause XFS to retry forever for
    this specific error.

    Setting the value to "0" will cause XFS to fail immediately when
    the specific error is reported.

    Setting the value to "N" (where 0 < N < Max) will make XFS retry
    the operation "N" times before propagating the error.

  retry_timeout_seconds (Min: -1  Default: Varies  Max: 1 day)
    Define the amount of time (in seconds) that the filesystem is
    allowed to retry its operations when the specific error is
    found.

    Setting the value to "-1" will allow XFS to retry forever for
    this specific error.

    Setting the value to "0" will cause XFS to fail immediately when
    the specific error is reported.

    Setting the value to "N" (where 0 < N < Max) will allow XFS to
    retry the operation for up to "N" seconds before propagating the
    error.

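A hypothetical configuration session; the device name and the ``EIO``
handler directory are assumptions - list
``/sys/fs/xfs/<dev>/error/metadata/`` to see the handlers your kernel
provides:

```shell
# Retry metadata write EIO errors up to 5 times or for 300 seconds,
# whichever constraint expires first.
cd /sys/fs/xfs/sda1/error/metadata/EIO
echo 5   > max_retries
echo 300 > retry_timeout_seconds
```
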
**Note:** The default behaviour for a specific error handler is dependent
on both the class and error context.  For example, the default values for
"metadata/ENODEV" are "0" rather than "-1" so that this error handler
defaults to "fail immediately" behaviour.  This is done because ENODEV
is a fatal, unrecoverable error no matter how many times the metadata
IO is retried.

Workqueue Concurrency
=====================

XFS uses kernel workqueues to parallelize metadata update processes.
This enables it to take advantage of storage hardware that can service
many IO operations simultaneously.  This interface exposes internal
implementation details of XFS, and as such is explicitly not part of any
userspace API/ABI guarantee the kernel may give userspace.  These are
undocumented features of the generic workqueue implementation XFS uses
for concurrency, and they are provided here purely for diagnostic and
tuning purposes and may change at any time in the future.

The control knobs for a filesystem's workqueues are organized by task
at hand and the short name of the data device.  They all can be found
in:

  /sys/bus/workqueue/devices/${task}!${device}

================  ===========
  Task            Description
================  ===========
  xfs_iwalk-$pid  Inode scans of the entire filesystem.  Currently
                  limited to mount time quotacheck.
  xfs-gc          Background garbage collection of disk space that has
                  been speculatively allocated beyond EOF or for
                  staging copy on write operations.
================  ===========

For example, the knobs for the quotacheck workqueue for /dev/nvme0n1
would be found in /sys/bus/workqueue/devices/xfs_iwalk-1111!nvme0n1/.

The interesting knobs for XFS workqueues are as follows:

============  ===========
  Knob        Description
============  ===========
  max_active  Maximum number of background threads that can be
              started to run the work.
  cpumask     CPUs upon which the threads are allowed to run.
  nice        Relative priority of scheduling the threads.  These
              are the same nice levels that can be applied to
              userspace processes.
============  ===========

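For example, to confine the background gc workqueue for a hypothetical
nvme0n1 data device to four CPUs and deprioritise it (paths follow the
pattern above; the exact knob formats are properties of the generic
workqueue sysfs interface, not of XFS):

```shell
cd /sys/bus/workqueue/devices/xfs-gc!nvme0n1
echo f  > cpumask     # hex mask: CPUs 0-3
echo 19 > nice        # weakest scheduling priority
cat max_active        # inspect the current thread limit
```
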
Zoned Filesystems
=================

For zoned file systems, the following attributes are exposed in:

  /sys/fs/xfs/<dev>/zoned/

  max_open_zones (Min: 1  Default: Varies  Max: UINTMAX)
    This read-only attribute exposes the maximum number of open zones
    available for data placement.  The value is determined at mount
    time and is limited by the capabilities of the backing zoned
    device, file system size and the max_open_zones mount option.

  zonegc_low_space (Min: 0  Default: 0  Max: 100)
    Define a percentage for how much of the unused space that GC
    should keep available for writing.  A high value will reclaim more
    of the space occupied by unused blocks, creating a larger buffer
    against write bursts at the cost of increased write amplification.
    Regardless of this value, garbage collection will always aim to
    free a minimum amount of blocks to keep max_open_zones open for
    data placement purposes.
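
A short inspection and tuning session for a hypothetical zoned rt
device; that ``zonegc_low_space`` accepts writes at runtime is an
assumption worth verifying on your kernel:

```shell
cat /sys/fs/xfs/nvme0n1/zoned/max_open_zones          # read-only limit
echo 25 > /sys/fs/xfs/nvme0n1/zoned/zonegc_low_space  # keep 25% of unused
                                                      # space free for bursts
```
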