Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

xfs: expanding delayed logging design with background material

I wrote up a description of how transactions, space reservations and
relogging work together in response to a question for background
material on the delayed logging design. Add this to the existing
document for ease of future reference.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>

Authored by Dave Chinner; committed by Dave Chinner
51a117ed d9f68777

+319 -36
Documentation/filesystems/xfs-delayed-logging-design.rst
···
  .. SPDX-License-Identifier: GPL-2.0
  
- ==========================
- XFS Delayed Logging Design
- ==========================
+ ==================
+ XFS Logging Design
+ ==================
  
- Introduction to Re-logging in XFS
- =================================
+ Preamble
+ ========
  
- XFS logging is a combination of logical and physical logging. Some objects,
- such as inodes and dquots, are logged in logical format where the details
- logged are made up of the changes to in-core structures rather than on-disk
- structures. Other objects - typically buffers - have their physical changes
- logged. The reason for these differences is to reduce the amount of log space
- required for objects that are frequently logged. Some parts of inodes are more
- frequently logged than others, and inodes are typically more frequently logged
- than any other object (except maybe the superblock buffer) so keeping the
- amount of metadata logged low is of prime importance.
+ This document describes the design and algorithms that the XFS journalling
+ subsystem is based on, so that readers may familiarize themselves with the
+ general concepts of how transaction processing in XFS works.
  
- The reason that this is such a concern is that XFS allows multiple separate
- modifications to a single object to be carried in the log at any given time.
- This allows the log to avoid needing to flush each change to disk before
- recording a new change to the object. XFS does this via a method called
- "re-logging". Conceptually, this is quite simple - all it requires is that any
- new change to the object is recorded with a *new copy* of all the existing
- changes in the new transaction that is written to the log.
+ We begin with an overview of transactions in XFS, followed by describing how
+ transaction reservations are structured and accounted, and then move into how we
+ guarantee forwards progress for long running transactions with finite initial
+ reservation bounds. At this point we need to explain how relogging works. With
+ the basic concepts covered, the design of the delayed logging mechanism is
+ documented.
+ 
+ 
+ Introduction
+ ============
+ 
+ XFS uses Write Ahead Logging to ensure changes to the filesystem metadata
+ are atomic and recoverable. For reasons of space and time efficiency, the
+ logging mechanisms are varied and complex, combining intents, logical and
+ physical logging mechanisms to provide the necessary recovery guarantees the
+ filesystem requires.
+ 
+ Some objects, such as inodes and dquots, are logged in logical format where the
+ details logged are made up of the changes to in-core structures rather than
+ on-disk structures. Other objects - typically buffers - have their physical
+ changes logged. Long running atomic modifications have individual changes
+ chained together by intents, ensuring that journal recovery can restart and
+ finish an operation that was only partially done when the system stopped
+ functioning.
+ 
+ The reason for these differences is to keep the amount of log space and CPU time
+ required to process objects being modified as small as possible and hence the
+ logging overhead as low as possible. Some items are very frequently modified,
+ and some parts of objects are more frequently modified than others, so keeping
+ the overhead of metadata logging low is of prime importance.
+ 
+ The method used to log an item or chain modifications together isn't
+ particularly important in the scope of this document. It suffices to know that
+ the methods used for logging a particular object or chaining modifications
+ together are different and depend on the object and/or modification being
+ performed. The logging subsystem only cares that certain specific rules are
+ followed to guarantee forwards progress and prevent deadlocks.
+ 
+ 
+ Transactions in XFS
+ ===================
+ 
+ XFS has two types of high level transactions, defined by the type of log space
+ reservation they take. These are known as "one shot" and "permanent"
+ transactions. Permanent transactions can take reservations that span
+ commit boundaries, whilst "one shot" transactions are for a single atomic
+ modification.
+ 
+ The type and size of reservation must be matched to the modification taking
+ place. This means that permanent transactions can be used for one-shot
+ modifications, but one-shot reservations cannot be used for permanent
+ transactions.
+ 
+ In the code, a one-shot transaction pattern looks somewhat like this::
+ 
+ 	tp = xfs_trans_alloc(<reservation>)
+ 	<lock items>
+ 	<join item to transaction>
+ 	<do modification>
+ 	xfs_trans_commit(tp);
+ 
+ As items are modified in the transaction, the dirty regions in those items are
+ tracked via the transaction handle. Once the transaction is committed, all
+ resources joined to it are released, along with the remaining unused reservation
+ space that was taken at the transaction allocation time.
+ 
+ In contrast, a permanent transaction is made up of multiple linked individual
+ transactions, and the pattern looks like this::
+ 
+ 	tp = xfs_trans_alloc(<reservation>)
+ 	xfs_ilock(ip, XFS_ILOCK_EXCL)
+ 
+ 	loop {
+ 		xfs_trans_ijoin(tp, 0);
+ 		<do modification>
+ 		xfs_trans_log_inode(tp, ip);
+ 		xfs_trans_roll(&tp);
+ 	}
+ 
+ 	xfs_trans_commit(tp);
+ 	xfs_iunlock(ip, XFS_ILOCK_EXCL);
+ 
+ While this might look similar to a one-shot transaction, there is an important
+ difference: xfs_trans_roll() performs a specific operation that links two
+ transactions together::
+ 
+ 	ntp = xfs_trans_dup(tp);
+ 	xfs_trans_commit(tp);
+ 	xfs_log_reserve(ntp);
+ 
+ This results in a series of "rolling transactions" where the inode is locked
+ across the entire chain of transactions. Hence while this series of rolling
+ transactions is running, nothing else can read from or write to the inode and
+ this provides a mechanism for complex changes to appear atomic from an external
+ observer's point of view.
+ 
+ It is important to note that a series of rolling transactions in a permanent
+ transaction does not form an atomic change in the journal. While each
+ individual modification is atomic, the chain is *not atomic*. If we crash half
+ way through, then recovery will only replay up to the last transactional
+ modification the loop made that was committed to the journal.
+ 
+ This affects long running permanent transactions in that it is not possible to
+ predict how much of a long running operation will actually be recovered because
+ there is no guarantee of how much of the operation reached stable storage. Hence
+ if a long running operation requires multiple transactions to fully complete,
+ the high level operation must use intents and deferred operations to guarantee
+ recovery can complete the operation once the first transaction is persisted in
+ the on-disk journal.
+ 
+ 
+ Transactions are Asynchronous
+ =============================
+ 
+ In XFS, all high level transactions are asynchronous by default. This means that
+ xfs_trans_commit() does not guarantee that the modification has been committed
+ to stable storage when it returns. Hence when a system crashes, not all the
+ completed transactions will be replayed during recovery.
+ 
+ However, the logging subsystem does provide global ordering guarantees, such
+ that if a specific change is seen after recovery, all metadata modifications
+ that were committed prior to that change will also be seen.
+ 
+ For single shot operations that need to reach stable storage immediately, or to
+ ensure that a long running permanent transaction is fully committed once it is
+ complete, we can explicitly tag a transaction as synchronous. This will trigger
+ a "log force" to flush the outstanding committed transactions to stable storage
+ in the journal and wait for that to complete.
+ 
+ Synchronous transactions are rarely used, however, because they limit logging
+ throughput to the IO latency limitations of the underlying storage. Instead, we
+ tend to use log forces to ensure modifications are on stable storage only when
+ a user operation requires a synchronisation point to occur (e.g. fsync).
+ 
+ 
+ Transaction Reservations
+ ========================
+ 
+ It has been mentioned a number of times now that the logging subsystem needs to
+ provide a forwards progress guarantee so that no modification ever stalls
+ because it can't be written to the journal due to a lack of space in the
+ journal. This is achieved by the transaction reservations that are made when
+ a transaction is first allocated. For permanent transactions, these reservations
+ are maintained as part of the transaction rolling mechanism.
+ 
+ A transaction reservation provides a guarantee that there is physical log space
+ available to write the modification into the journal before we start making
+ modifications to objects and items. As such, the reservation needs to be large
+ enough to take into account the amount of metadata that the change might need to
+ log in the worst case. This means that if we are modifying a btree in the
+ transaction, we have to reserve enough space to record a full leaf-to-root split
+ of the btree. As such, the reservations are quite complex because we have to
+ take into account all the hidden changes that might occur.
+ 
+ For example, a user data extent allocation involves allocating an extent from
+ free space, which modifies the free space trees. That's two btrees. Inserting
+ the extent into the inode's extent map might require a split of the extent map
+ btree, which requires another allocation that can modify the free space trees
+ again. Then we might have to update reverse mappings, which modifies yet
+ another btree which might require more space. And so on. Hence the amount of
+ metadata that a "simple" operation can modify can be quite large.
+ 
+ This "worst case" calculation provides us with the static "unit reservation"
+ for the transaction that is calculated at mount time. We must guarantee that the
+ log has this much space available before the transaction is allowed to proceed
+ so that when we come to write the dirty metadata into the log we don't run out
+ of log space half way through the write.
+ 
+ For one-shot transactions, a single unit space reservation is all that is
+ required for the transaction to proceed. For permanent transactions, however, we
+ also have a "log count" that affects the size of the reservation that is to be
+ made.
+ 
+ While a permanent transaction can get by with a single unit of space
+ reservation, it is somewhat inefficient to do this as it requires the
+ transaction rolling mechanism to re-reserve space on every transaction roll. We
+ know from the implementation of the permanent transactions how many transaction
+ rolls are likely for the common modifications that need to be made.
+ 
+ For example, an inode allocation is typically two transactions - one to
+ physically allocate a free inode chunk on disk, and another to allocate an inode
+ from an inode chunk that has free inodes in it. Hence for an inode allocation
+ transaction, we might set the reservation log count to a value of 2 to indicate
+ that the common/fast path transaction will commit two linked transactions in a
+ chain. Each time a permanent transaction rolls, it consumes an entire unit
+ reservation.
+ 
+ Hence when the permanent transaction is first allocated, the log space
+ reservation is increased from a single unit reservation to multiple unit
+ reservations. That multiple is defined by the reservation log count, and this
+ means we can roll the transaction multiple times before we have to re-reserve
+ log space when we roll the transaction. This ensures that the common
+ modifications we make only need to reserve log space once.
+ 
+ If the log count for a permanent transaction reaches zero, then it needs to
+ re-reserve physical space in the log. This is somewhat complex, and requires
+ an understanding of how the log accounts for space that has been reserved.
+ 
+ 
+ Log Space Accounting
+ ====================
+ 
+ The position in the log is typically referred to as a Log Sequence Number (LSN).
+ The log is circular, so the positions in the log are defined by the combination
+ of a cycle number - the number of times the log has been overwritten - and the
+ offset into the log. A LSN carries the cycle in the upper 32 bits and the
+ offset in the lower 32 bits. The offset is in units of "basic blocks" (512
+ bytes). Hence we can do relatively simple LSN based math to keep track of
+ available space in the log.
+ 
+ Log space accounting is done via a pair of constructs called "grant heads". The
+ position of the grant heads is an absolute value, so the amount of space
+ available in the log is defined by the distance between the position of the
+ grant head and the current log tail. That is, how much space can be
+ reserved/consumed before the grant heads would fully wrap the log and overtake
+ the tail position.
+ 
+ The first grant head is the "reserve" head. This tracks the byte count of the
+ reservations currently held by active transactions. It is a purely in-memory
+ accounting of the space reservation and, as such, actually tracks byte offsets
+ into the log rather than basic blocks. Hence it technically isn't using LSNs to
+ represent the log position, but it is still treated like a split {cycle,offset}
+ tuple for the purposes of tracking reservation space.
+ 
+ The reserve grant head is used to accurately account for exact transaction
+ reservation amounts and the exact byte count that modifications actually make
+ and need to write into the log. The reserve head is used to prevent new
+ transactions from taking new reservations when the head reaches the current
+ tail. It will block new reservations in a FIFO queue and as the log tail moves
+ forward it will wake them in order once sufficient space is available. This FIFO
+ mechanism ensures no transaction is starved of resources when log space
+ shortages occur.
+ 
+ The other grant head is the "write" head. Unlike the reserve head, this grant
+ head contains an LSN and it tracks the physical space usage in the log. While
+ this might sound like it is accounting the same state as the reserve grant head
+ - and it mostly does track exactly the same location as the reserve grant head -
+ there are critical differences in behaviour between them that provide the
+ forwards progress guarantees that rolling permanent transactions require.
+ 
+ These differences come into play when a permanent transaction is rolled and the
+ internal "log count" reaches zero and the initial set of unit reservations have
+ been exhausted. At this point, we still require a log space reservation to
+ continue the next transaction in the sequence, but we have none remaining. We
+ cannot sleep during the transaction commit process waiting for new log space to
+ become available, as we may end up on the end of the FIFO queue and the items we
+ have locked while we sleep could end up pinning the tail of the log before there
+ is enough free space in the log to fulfil all of the pending reservations and
+ then wake up the transaction commit in progress.
+ 
+ To take a new reservation without sleeping requires us to be able to take a
+ reservation even if there is no reservation space currently available. That is,
+ we need to be able to *overcommit* the log reservation space. As has already
+ been detailed, we cannot overcommit physical log space. However, the reserve
+ grant head does not track physical space - it only accounts for the amount of
+ reservations we currently have outstanding. Hence if the reserve head passes
+ over the tail of the log all it means is that new reservations will be throttled
+ immediately and remain throttled until the log tail is moved forward far enough
+ to remove the overcommit and start taking new reservations. In other words, we
+ can overcommit the reserve head without violating the physical log head and tail
+ rules.
+ 
+ As a result, permanent transactions only "regrant" reservation space during
+ xfs_trans_commit() calls, while the physical log space reservation - tracked by
+ the write head - is then reserved separately by a call to xfs_log_reserve()
+ after the commit completes. Once the commit completes, we can sleep waiting for
+ physical log space to be reserved from the write grant head, but only if one
+ critical rule has been observed::
+ 
+ 	Code using permanent reservations must always log the items they hold
+ 	locked across each transaction they roll in the chain.
+ 
+ "Re-logging" the locked items on every transaction roll ensures that the items
+ attached to the transaction chain being rolled are always relocated to the
+ physical head of the log and so do not pin the tail of the log. If a locked item
+ pins the tail of the log when we sleep on the write reservation, then we will
+ deadlock the log as we cannot take the locks needed to write back that item and
+ move the tail of the log forwards to free up write grant space. Re-logging the
+ locked items avoids this deadlock and guarantees that the log reservation we are
+ making cannot self-deadlock.
+ 
+ If all rolling transactions obey this rule, then they can all make forwards
+ progress independently because nothing will block the progress of the log
+ tail moving forwards and hence ensuring that write grant space is always
+ (eventually) made available to permanent transactions no matter how many times
+ they roll.
+ 
+ 
+ Re-logging Explained
+ ====================
+ 
+ XFS allows multiple separate modifications to a single object to be carried in
+ the log at any given time. This allows the log to avoid needing to flush each
+ change to disk before recording a new change to the object. XFS does this via a
+ method called "re-logging". Conceptually, this is quite simple - all it requires
+ is that any new change to the object is recorded with a *new copy* of all the
+ existing changes in the new transaction that is written to the log.
  
  That is, if we have a sequence of changes A through to F, and the object was
  written to disk after change D, we would see in the log the following series
···
  In other words, each time an object is relogged, the new transaction contains
  the aggregation of all the previous changes currently held only in the log.
  
- This relogging technique also allows objects to be moved forward in the log so
- that an object being relogged does not prevent the tail of the log from ever
- moving forward. This can be seen in the table above by the changing
- (increasing) LSN of each subsequent transaction - the LSN is effectively a
- direct encoding of the location in the log of the transaction.
+ This relogging technique allows objects to be moved forward in the log so that
+ an object being relogged does not prevent the tail of the log from ever moving
+ forward. This can be seen in the table above by the changing (increasing) LSN
+ of each subsequent transaction, and it's the technique that allows us to
+ implement long-running, multiple-commit permanent transactions.
  
- This relogging is also used to implement long-running, multiple-commit
- transactions. These transaction are known as rolling transactions, and require
- a special log reservation known as a permanent transaction reservation. A
- typical example of a rolling transaction is the removal of extents from an
+ A typical example of a rolling transaction is the removal of extents from an
  inode which can only be done at a rate of two extents per transaction because
  of reservation size limitations. Hence a rolling extent removal transaction
  keeps relogging the inode and btree buffers as they get modified in each
···
  dirtier as they get relogged, so each subsequent transaction is writing more
  metadata into the log.
  
- Another feature of the XFS transaction subsystem is that most transactions are
- asynchronous. That is, they don't commit to disk until either a log buffer is
- filled (a log buffer can hold multiple transactions) or a synchronous operation
- forces the log buffers holding the transactions to disk. This means that XFS is
- doing aggregation of transactions in memory - batching them, if you like - to
- minimise the impact of the log IO on transaction throughput.
+ It should now also be obvious how relogging and asynchronous transactions go
+ hand in hand. That is, transactions don't get written to the physical journal
+ until either a log buffer is filled (a log buffer can hold multiple
+ transactions) or a synchronous operation forces the log buffers holding the
+ transactions to disk. This means that XFS is doing aggregation of transactions
+ in memory - batching them, if you like - to minimise the impact of the log IO on
+ transaction throughput.
  
  The limitation on asynchronous transaction throughput is the number and size of
  log buffers made available by the log manager. By default there are 8 log