Linux kernel mirror (for testing): git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

docs: convert docs to ReST and rename to *.rst

The conversion is actually:
- add blank lines and indentation in order to identify paragraphs;
- fix table markups;
- add some list markups;
- mark literal blocks;
- adjust title markups.
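The "mark literal blocks" step above can be sketched in a few lines: ReST introduces a literal block with a line ending in `::` followed by a blank line and an indented block. The helper below is hypothetical (`mark_literal_blocks` is not part of the actual conversion tooling, which is not shown in this commit); it only illustrates the transformation applied throughout these files.

```python
def mark_literal_blocks(text):
    """Turn an 'intro:' line that is directly followed by an indented
    block into the ReST form 'intro::' plus the blank line ReST
    requires before a literal block.  (Illustrative sketch only.)"""
    lines = text.split("\n")
    out = []
    for i, line in enumerate(lines):
        nxt = lines[i + 1] if i + 1 < len(lines) else ""
        stripped = line.rstrip()
        if stripped.endswith(":") and not stripped.endswith("::") \
                and nxt[:1] in (" ", "\t"):
            out.append(stripped + ":")  # 'intro:' -> 'intro::'
            out.append("")              # blank line before the literal block
        else:
            out.append(line)
    return "\n".join(out)


print(mark_literal_blocks("Parameters:\n    <device> <offset> <delay>"))
# -> Parameters::
#
#        <device> <offset> <delay>
```

Lines already ending in `::`, or not followed by indented text, pass through unchanged.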

In the new index.rst, add an :orphan: marker while the file is not yet linked
from the main index.rst, in order to avoid build warnings.
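For reference, `:orphan:` is a Sphinx file-wide metadata field: placed at the very top of a document, it suppresses the "document isn't included in any toctree" build warning. A minimal sketch of such an index.rst follows (the title and toctree entries here are illustrative, not the actual contents added by this commit):

```rst
:orphan:

=============
Device Mapper
=============

.. toctree::
   :maxdepth: 1

   cache
   dm-crypt
```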

Signed-off-by: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Jonathan Corbet <corbet@lwn.net>

Authored by Mauro Carvalho Chehab, committed by Jonathan Corbet
f0ba4377 8ea61889

+1124 -796
+17 -7
Documentation/device-mapper/cache-policies.txt → Documentation/device-mapper/cache-policies.rst

···
+=============================
 Guidance for writing policies
 =============================
···
 
 This policy is now an alias for smq (see below).
 
-The following tunables are accepted, but have no effect:
+The following tunables are accepted, but have no effect::
 
 	'sequential_threshold <#nr_sequential_ios>'
 	'random_threshold <#nr_random_ios>'
···
 degrade slightly until smq recalculates the origin device's hotspots
 that should be cached.
 
-Memory usage:
+Memory usage
+^^^^^^^^^^^^
+
 The mq policy used a lot of memory; 88 bytes per cache block on a 64
 bit machine.
···
 All this means smq uses ~25bytes per cache block.  Still a lot of
 memory, but a substantial improvement nontheless.
 
-Level balancing:
+Level balancing
+^^^^^^^^^^^^^^^
+
 mq placed entries in different levels of the multiqueue structures
 based on their hit count (~ln(hit count)).  This meant the bottom
 levels generally had the most entries, and the top ones had very
···
 performing badly then it starts moving entries more quickly between
 levels.  This lets it adapt to new IO patterns very quickly.
 
-Performance:
+Performance
+^^^^^^^^^^^
+
 Testing smq shows substantially better performance than mq.
 
 cleaner
···
 Examples
 ========
 
-The syntax for a table is:
+The syntax for a table is::
+
 	cache <metadata dev> <cache dev> <origin dev> <block size>
 	<#feature_args> [<feature arg>]*
 	<policy> <#policy_args> [<policy arg>]*
 
-The syntax to send a message using the dmsetup command is:
+The syntax to send a message using the dmsetup command is::
+
 	dmsetup message <mapped device> 0 sequential_threshold 1024
 	dmsetup message <mapped device> 0 random_threshold 8
 
-Using dmsetup:
+Using dmsetup::
+
 	dmsetup create blah --table "0 268435456 cache /dev/sdb /dev/sdc \
 	    /dev/sdd 512 0 mq 4 sequential_threshold 1024 random_threshold 8"
 	creates a 128GB large mapped device named 'blah' with the
+116 -90
Documentation/device-mapper/cache.txt → Documentation/device-mapper/cache.rst

···
+=====
+Cache
+=====
+
 Introduction
 ============
···
 Glossary
 ========
 
-Migration -  Movement of the primary copy of a logical block from one
+Migration
+    Movement of the primary copy of a logical block from one
     device to the other.
-Promotion -  Migration from slow device to fast device.
-Demotion  -  Migration from fast device to slow device.
+Promotion
+    Migration from slow device to fast device.
+Demotion
+    Migration from fast device to slow device.
 
 The origin device always contains a copy of the logical block, which
 may be out of date or kept in sync with the copy on the cache device
···
 Constructor
 -----------
 
-cache <metadata dev> <cache dev> <origin dev> <block size>
-      <#feature args> [<feature arg>]*
-      <policy> <#policy args> [policy args]*
+::
 
-metadata dev    : fast device holding the persistent metadata
-cache dev       : fast device holding cached data blocks
-origin dev      : slow device holding original data blocks
-block size      : cache unit size in sectors
+ cache <metadata dev> <cache dev> <origin dev> <block size>
+       <#feature args> [<feature arg>]*
+       <policy> <#policy args> [policy args]*
 
-#feature args   : number of feature arguments passed
-feature args    : writethrough or passthrough (The default is writeback.)
+================ =======================================================
+metadata dev     fast device holding the persistent metadata
+cache dev        fast device holding cached data blocks
+origin dev       slow device holding original data blocks
+block size       cache unit size in sectors
 
-policy          : the replacement policy to use
-#policy args    : an even number of arguments corresponding to
-                  key/value pairs passed to the policy
-policy args     : key/value pairs passed to the policy
-                  E.g. 'sequential_threshold 1024'
-                  See cache-policies.txt for details.
+#feature args    number of feature arguments passed
+feature args     writethrough or passthrough (The default is writeback.)
+
+policy           the replacement policy to use
+#policy args     an even number of arguments corresponding to
+                 key/value pairs passed to the policy
+policy args      key/value pairs passed to the policy
+                 E.g. 'sequential_threshold 1024'
+                 See cache-policies.txt for details.
+================ =======================================================
 
 Optional feature arguments are:
-   writethrough : write through caching that prohibits cache block
-                  content from being different from origin block content.
-                  Without this argument, the default behaviour is to write
-                  back cache block contents later for performance reasons,
-                  so they may differ from the corresponding origin blocks.
 
-   passthrough  : a degraded mode useful for various cache coherency
-                  situations (e.g., rolling back snapshots of
-                  underlying storage).  Reads and writes always go to
-                  the origin.  If a write goes to a cached origin
-                  block, then the cache block is invalidated.
-                  To enable passthrough mode the cache must be clean.
 
-   metadata2    : use version 2 of the metadata.  This stores the dirty bits
-                  in a separate btree, which improves speed of shutting
-                  down the cache.
+==================== ========================================================
+writethrough         write through caching that prohibits cache block
+                     content from being different from origin block content.
+                     Without this argument, the default behaviour is to write
+                     back cache block contents later for performance reasons,
+                     so they may differ from the corresponding origin blocks.
 
-   no_discard_passdown : disable passing down discards from the cache
-                  to the origin's data device.
+passthrough          a degraded mode useful for various cache coherency
+                     situations (e.g., rolling back snapshots of
+                     underlying storage).  Reads and writes always go to
+                     the origin.  If a write goes to a cached origin
+                     block, then the cache block is invalidated.
+                     To enable passthrough mode the cache must be clean.
+
+metadata2            use version 2 of the metadata.  This stores the dirty
+                     bits in a separate btree, which improves speed of
+                     shutting down the cache.
+
+no_discard_passdown  disable passing down discards from the cache
+                     to the origin's data device.
+==================== ========================================================
 
 A policy called 'default' is always registered.  This is an alias for
 the policy we currently think is giving best all round performance.
···
 Status
 ------
 
-<metadata block size> <#used metadata blocks>/<#total metadata blocks>
-<cache block size> <#used cache blocks>/<#total cache blocks>
-<#read hits> <#read misses> <#write hits> <#write misses>
-<#demotions> <#promotions> <#dirty> <#features> <features>*
-<#core args> <core args>* <policy name> <#policy args> <policy args>*
-<cache metadata mode>
+::
 
-metadata block size    : Fixed block size for each metadata block in
-                         sectors
-#used metadata blocks  : Number of metadata blocks used
-#total metadata blocks : Total number of metadata blocks
-cache block size       : Configurable block size for the cache device
-                         in sectors
-#used cache blocks     : Number of blocks resident in the cache
-#total cache blocks    : Total number of cache blocks
-#read hits             : Number of times a READ bio has been mapped
-                         to the cache
-#read misses           : Number of times a READ bio has been mapped
-                         to the origin
-#write hits            : Number of times a WRITE bio has been mapped
-                         to the cache
-#write misses          : Number of times a WRITE bio has been
-                         mapped to the origin
-#demotions             : Number of times a block has been removed
-                         from the cache
-#promotions            : Number of times a block has been moved to
-                         the cache
-#dirty                 : Number of blocks in the cache that differ
-                         from the origin
-#feature args          : Number of feature args to follow
-feature args           : 'writethrough' (optional)
-#core args             : Number of core arguments (must be even)
-core args              : Key/value pairs for tuning the core
-                         e.g. migration_threshold
-policy name            : Name of the policy
-#policy args           : Number of policy arguments to follow (must be even)
-policy args            : Key/value pairs e.g. sequential_threshold
-cache metadata mode    : ro if read-only, rw if read-write
-                         In serious cases where even a read-only mode is deemed unsafe
-                         no further I/O will be permitted and the status will just
-                         contain the string 'Fail'.  The userspace recovery tools
-                         should then be used.
-needs_check            : 'needs_check' if set, '-' if not set
-                         A metadata operation has failed, resulting in the needs_check
-                         flag being set in the metadata's superblock.  The metadata
-                         device must be deactivated and checked/repaired before the
-                         cache can be made fully operational again.  '-' indicates
-                         needs_check is not set.
+ <metadata block size> <#used metadata blocks>/<#total metadata blocks>
+ <cache block size> <#used cache blocks>/<#total cache blocks>
+ <#read hits> <#read misses> <#write hits> <#write misses>
+ <#demotions> <#promotions> <#dirty> <#features> <features>*
+ <#core args> <core args>* <policy name> <#policy args> <policy args>*
+ <cache metadata mode>
+
+
+========================= =====================================================
+metadata block size       Fixed block size for each metadata block in
+                          sectors
+#used metadata blocks     Number of metadata blocks used
+#total metadata blocks    Total number of metadata blocks
+cache block size          Configurable block size for the cache device
+                          in sectors
+#used cache blocks        Number of blocks resident in the cache
+#total cache blocks       Total number of cache blocks
+#read hits                Number of times a READ bio has been mapped
+                          to the cache
+#read misses              Number of times a READ bio has been mapped
+                          to the origin
+#write hits               Number of times a WRITE bio has been mapped
+                          to the cache
+#write misses             Number of times a WRITE bio has been
+                          mapped to the origin
+#demotions                Number of times a block has been removed
+                          from the cache
+#promotions               Number of times a block has been moved to
+                          the cache
+#dirty                    Number of blocks in the cache that differ
+                          from the origin
+#feature args             Number of feature args to follow
+feature args              'writethrough' (optional)
+#core args                Number of core arguments (must be even)
+core args                 Key/value pairs for tuning the core
+                          e.g. migration_threshold
+policy name               Name of the policy
+#policy args              Number of policy arguments to follow (must be even)
+policy args               Key/value pairs e.g. sequential_threshold
+cache metadata mode       ro if read-only, rw if read-write
+
+                          In serious cases where even a read-only mode is
+                          deemed unsafe no further I/O will be permitted and
+                          the status will just contain the string 'Fail'.
+                          The userspace recovery tools should then be used.
+needs_check               'needs_check' if set, '-' if not set
+
+                          A metadata operation has failed, resulting in the
+                          needs_check flag being set in the metadata's
+                          superblock.  The metadata device must be
+                          deactivated and checked/repaired before the
+                          cache can be made fully operational again.
+                          '-' indicates needs_check is not set.
+========================= =====================================================
 
 Messages
 --------
···
 need a generic way of getting and setting these.  Device-mapper
 messages are used.  (A sysfs interface would also be possible.)
 
-The message format is:
+The message format is::
 
 	<key> <value>
 
-E.g.
+E.g.::
+
 	dmsetup message my_cache 0 sequential_threshold 1024
···
 value, in the future a variant message that takes cblock ranges
 expressed in hexadecimal may be needed to better support efficient
 invalidation of larger caches.  The cache must be in passthrough mode
-when invalidate_cblocks is used.
+when invalidate_cblocks is used::
 
 	invalidate_cblocks [<cblock>|<cblock begin>-<cblock end>]*
 
-E.g.
+E.g.::
+
 	dmsetup message my_cache 0 invalidate_cblocks 2345 3456-4567 5678-6789
 
 Examples
···
 https://github.com/jthornber/device-mapper-test-suite
 
-dmsetup create my_cache --table '0 41943040 cache /dev/mapper/metadata \
-	/dev/mapper/ssd /dev/mapper/origin 512 1 writeback default 0'
-dmsetup create my_cache --table '0 41943040 cache /dev/mapper/metadata \
-	/dev/mapper/ssd /dev/mapper/origin 1024 1 writeback \
-	mq 4 sequential_threshold 1024 random_threshold 8'
+::
+
+ dmsetup create my_cache --table '0 41943040 cache /dev/mapper/metadata \
+	/dev/mapper/ssd /dev/mapper/origin 512 1 writeback default 0'
+ dmsetup create my_cache --table '0 41943040 cache /dev/mapper/metadata \
+	/dev/mapper/ssd /dev/mapper/origin 1024 1 writeback \
+	mq 4 sequential_threshold 1024 random_threshold 8'
+15 -12
Documentation/device-mapper/delay.txt → Documentation/device-mapper/delay.rst

···
+========
 dm-delay
 ========
 
 Device-Mapper's "delay" target delays reads and/or writes
 and maps them to different devices.
 
-Parameters:
+Parameters::
+
     <device> <offset> <delay> [<write_device> <write_offset> <write_delay>
 	       [<flush_device> <flush_offset> <flush_delay>]]
···
 Example scripts
 ===============
-[[
-#!/bin/sh
-# Create device delaying rw operation for 500ms
-echo "0 `blockdev --getsz $1` delay $1 0 500" | dmsetup create delayed
-]]
 
-[[
-#!/bin/sh
-# Create device delaying only write operation for 500ms and
-# splitting reads and writes to different devices $1 $2
-echo "0 `blockdev --getsz $1` delay $1 0 0 $2 0 500" | dmsetup create delayed
-]]
+::
+
+	#!/bin/sh
+	# Create device delaying rw operation for 500ms
+	echo "0 `blockdev --getsz $1` delay $1 0 500" | dmsetup create delayed
+
+::
+
+	#!/bin/sh
+	# Create device delaying only write operation for 500ms and
+	# splitting reads and writes to different devices $1 $2
+	echo "0 `blockdev --getsz $1` delay $1 0 0 $2 0 500" | dmsetup create delayed
+34 -23
Documentation/device-mapper/dm-crypt.txt → Documentation/device-mapper/dm-crypt.rst

···
+========
 dm-crypt
-=========
+========
 
 Device-Mapper's "crypt" target provides transparent encryption of block devices
 using the kernel crypto API.
···
 For a more detailed description of supported parameters see:
 https://gitlab.com/cryptsetup/cryptsetup/wikis/DMCrypt
 
-Parameters: <cipher> <key> <iv_offset> <device path> \
+Parameters::
+
+	      <cipher> <key> <iv_offset> <device path> \
 	      <offset> [<#opt_params> <opt_params>]
 
 <cipher>
     Encryption cipher, encryption mode and Initial Vector (IV) generator.
 
-    The cipher specifications format is:
+    The cipher specifications format is::
+
        cipher[:keycount]-chainmode-ivmode[:ivopts]
-    Examples:
+
+    Examples::
+
        aes-cbc-essiv:sha256
        aes-xts-plain64
        serpent-xts-plain64
···
    as for the first format type.
    This format is mainly used for specification of authenticated modes.
 
-   The crypto API cipher specifications format is:
+   The crypto API cipher specifications format is::
+
        capi:cipher_api_spec-ivmode[:ivopts]
-   Examples:
+
+   Examples::
+
        capi:cbc(aes)-essiv:sha256
        capi:xts(aes)-plain64
-   Examples of authenticated modes:
+
+   Examples of authenticated modes::
+
        capi:gcm(aes)-random
        capi:authenc(hmac(sha256),xts(aes))-random
        capi:rfc7539(chacha20,poly1305)-random
···
 encryption with dm-crypt using the 'cryptsetup' utility, see
 https://gitlab.com/cryptsetup/cryptsetup
 
-[[
-#!/bin/sh
-# Create a crypt device using dmsetup
-dmsetup create crypt1 --table "0 `blockdev --getsz $1` crypt aes-cbc-essiv:sha256 babebabebabebabebabebabebabebabe 0 $1 0"
-]]
+::
 
-[[
-#!/bin/sh
-# Create a crypt device using dmsetup when encryption key is stored in keyring service
-dmsetup create crypt2 --table "0 `blockdev --getsize $1` crypt aes-cbc-essiv:sha256 :32:logon:my_prefix:my_key 0 $1 0"
-]]
+ #!/bin/sh
+ # Create a crypt device using dmsetup
+ dmsetup create crypt1 --table "0 `blockdev --getsz $1` crypt aes-cbc-essiv:sha256 babebabebabebabebabebabebabebabe 0 $1 0"
 
-[[
-#!/bin/sh
-# Create a crypt device using cryptsetup and LUKS header with default cipher
-cryptsetup luksFormat $1
-cryptsetup luksOpen $1 crypt1
-]]
+::
+
+ #!/bin/sh
+ # Create a crypt device using dmsetup when encryption key is stored in keyring service
+ dmsetup create crypt2 --table "0 `blockdev --getsize $1` crypt aes-cbc-essiv:sha256 :32:logon:my_prefix:my_key 0 $1 0"
+
+::
+
+ #!/bin/sh
+ # Create a crypt device using cryptsetup and LUKS header with default cipher
+ cryptsetup luksFormat $1
+ cryptsetup luksOpen $1 crypt1
+31 -14
Documentation/device-mapper/dm-flakey.txt → Documentation/device-mapper/dm-flakey.rst

···
+=========
 dm-flakey
 =========
···
 Table parameters
 ----------------
+
+::
+
   <dev path> <offset> <up interval> <down interval> \
     [<num_features> [<feature arguments>]]
 
 Mandatory parameters:
-    <dev path>: Full pathname to the underlying block-device, or a
-                "major:minor" device-number.
-    <offset>: Starting sector within the device.
-    <up interval>: Number of seconds device is available.
-    <down interval>: Number of seconds device returns errors.
+
+    <dev path>:
+	Full pathname to the underlying block-device, or a
+	"major:minor" device-number.
+    <offset>:
+	Starting sector within the device.
+    <up interval>:
+	Number of seconds device is available.
+    <down interval>:
+	Number of seconds device returns errors.
 
 Optional feature parameters:
+
   If no feature parameters are present, during the periods of
   unreliability, all I/O returns errors.
···
 	During <down interval>, replace <Nth_byte> of the data of
 	each matching bio with <value>.
 
-	    <Nth_byte>: The offset of the byte to replace.
-			Counting starts at 1, to replace the first byte.
-	    <direction>: Either 'r' to corrupt reads or 'w' to corrupt writes.
-			 'w' is incompatible with drop_writes.
-	    <value>: The value (from 0-255) to write.
-	    <flags>: Perform the replacement only if bio->bi_opf has all the
-		     selected flags set.
+	    <Nth_byte>:
+		The offset of the byte to replace.
+		Counting starts at 1, to replace the first byte.
+	    <direction>:
+		Either 'r' to corrupt reads or 'w' to corrupt writes.
+		'w' is incompatible with drop_writes.
+	    <value>:
+		The value (from 0-255) to write.
+	    <flags>:
+		Perform the replacement only if bio->bi_opf has all the
+		selected flags set.
 
 Examples:
+
+Replaces the 32nd byte of READ bios with the value 1::
+
   corrupt_bio_byte 32 r 1 0
-	- replaces the 32nd byte of READ bios with the value 1
+
+Replaces the 224th byte of REQ_META (=32) bios with the value 0::
 
   corrupt_bio_byte 224 w 0 32
-	- replaces the 224th byte of REQ_META (=32) bios with the value 0
+43 -32
Documentation/device-mapper/dm-init.txt → Documentation/device-mapper/dm-init.rst

···
+================================
 Early creation of mapped devices
-====================================
+================================
 
 It is possible to configure a device-mapper device to act as the root device for
 your system in two ways.
···
 The format is specified as a string of data separated by commas and optionally
 semi-colons, where:
+
 - a comma is used to separate fields like name, uuid, flags and table
   (specifies one device)
 - a semi-colon is used to separate devices.
 
-So the format will look like this:
+So the format will look like this::
 
  dm-mod.create=<name>,<uuid>,<minor>,<flags>,<table>[,<table>+][;<name>,<uuid>,<minor>,<flags>,<table>[,<table>+]+]
 
-Where,
+Where::
+
 	<name>		::= The device name.
 	<uuid>		::= xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx | ""
 	<minor>		::= The device minor number | ""
···
 	<target_type>	::= "verity" | "linear" | ... (see list below)
 
 The dm line should be equivalent to the one used by the dmsetup tool with the
---concise argument.
+`--concise` argument.
 
 Target types
 ============
···
 activation of certain DM targets without first using userspace tools to check
 the validity of associated metadata.
 
-"cache": constrained, userspace should verify cache device
-"crypt": allowed
-"delay": allowed
-"era": constrained, userspace should verify metadata device
-"flakey": constrained, meant for test
-"linear": allowed
-"log-writes": constrained, userspace should verify metadata device
-"mirror": constrained, userspace should verify main/mirror device
-"raid": constrained, userspace should verify metadata device
-"snapshot": constrained, userspace should verify src/dst device
-"snapshot-origin": allowed
-"snapshot-merge": constrained, userspace should verify src/dst device
-"striped": allowed
-"switch": constrained, userspace should verify dev path
-"thin": constrained, requires dm target message from userspace
-"thin-pool": constrained, requires dm target message from userspace
-"verity": allowed
-"writecache": constrained, userspace should verify cache device
-"zero": constrained, not meant for rootfs
+======================= =======================================================
+`cache`                 constrained, userspace should verify cache device
+`crypt`                 allowed
+`delay`                 allowed
+`era`                   constrained, userspace should verify metadata device
+`flakey`                constrained, meant for test
+`linear`                allowed
+`log-writes`            constrained, userspace should verify metadata device
+`mirror`                constrained, userspace should verify main/mirror device
+`raid`                  constrained, userspace should verify metadata device
+`snapshot`              constrained, userspace should verify src/dst device
+`snapshot-origin`       allowed
+`snapshot-merge`        constrained, userspace should verify src/dst device
+`striped`               allowed
+`switch`                constrained, userspace should verify dev path
+`thin`                  constrained, requires dm target message from userspace
+`thin-pool`             constrained, requires dm target message from userspace
+`verity`                allowed
+`writecache`            constrained, userspace should verify cache device
+`zero`                  constrained, not meant for rootfs
+======================= =======================================================
 
 If the target is not listed above, it is constrained by default (not tested).
 
 Examples
 ========
 An example of booting to a linear array made up of user-mode linux block
-devices:
+devices::
 
   dm-mod.create="lroot,,,rw, 0 4096 linear 98:16 0, 4096 4096 linear 98:32 0" root=/dev/dm-0
···
 devices identified by their major:minor numbers.  After boot, udev will rename
 this target to /dev/mapper/lroot (depending on the rules). No uuid was assigned.
 
-An example of multiple device-mappers, with the dm-mod.create="..." contents is shown here
-split on multiple lines for readability:
+An example of multiple device-mappers, with the dm-mod.create="..." contents
+is shown here split on multiple lines for readability::
 
   dm-linear,,1,rw,
     0 32768 linear 8:1 0,
···
 Other examples (per target):
 
-"crypt":
+"crypt"::
+
   dm-crypt,,8,ro,
   0 1048576 crypt aes-xts-plain64
   babebabebabebabebabebabebabebabebabebabebabebabebabebabebabebabe 0
   /dev/sda 0 1 allow_discards
 
-"delay":
+"delay"::
+
   dm-delay,,4,ro,0 409600 delay /dev/sda1 0 500
 
-"linear":
+"linear"::
+
   dm-linear,,,rw,
   0 32768 linear /dev/sda1 0,
   32768 1024000 linear /dev/sda2 0,
   1056768 204800 linear /dev/sda3 0,
   1261568 512000 linear /dev/sda4 0
 
-"snapshot-origin":
+"snapshot-origin"::
+
   dm-snap-orig,,4,ro,0 409600 snapshot-origin 8:2
 
-"striped":
+"striped"::
+
   dm-striped,,4,ro,0 1638400 striped 4 4096
   /dev/sda1 0 /dev/sda2 0 /dev/sda3 0 /dev/sda4 0
 
-"verity":
+"verity"::
+
   dm-verity,,4,ro,
   0 1638400 verity 1 8:1 8:2 4096 4096 204800 1 sha256
   fb1a5a0f00deb908d8b53cb270858975e76cf64105d412ce764225d53b8f3cfd
+44 -18
Documentation/device-mapper/dm-integrity.txt → Documentation/device-mapper/dm-integrity.rst

···
+============
+dm-integrity
+============
+
 The dm-integrity target emulates a block device that has additional
 per-sector tags that can be used for storing integrity information.
···
 target can't be loaded.
 
 To use the target for the first time:
+
 1. overwrite the superblock with zeroes
 2. load the dm-integrity target with one-sector size, the kernel driver
-	will format the device
+   will format the device
 3. unload the dm-integrity target
 4. read the "provided_data_sectors" value from the superblock
 5. load the dm-integrity target with the the target size
-	"provided_data_sectors"
+   "provided_data_sectors"
 6. if you want to use dm-integrity with dm-crypt, load the dm-crypt target
-	with the size "provided_data_sectors"
+   with the size "provided_data_sectors"
 
 
 Target arguments:
···
 1. the underlying block device
 
 2. the number of reserved sector at the beginning of the device - the
-	dm-integrity won't read of write these sectors
+   dm-integrity won't read of write these sectors
 
 3. the size of the integrity tag (if "-" is used, the size is taken from
-	the internal-hash algorithm)
+   the internal-hash algorithm)
 
 4. mode:
-	D - direct writes (without journal) - in this mode, journaling is
+
+	D - direct writes (without journal)
+		in this mode, journaling is
 		not used and data sectors and integrity tags are written
 		separately. In case of crash, it is possible that the data
 		and integrity tag doesn't match.
-	J - journaled writes - data and integrity tags are written to the
+	J - journaled writes
+		data and integrity tags are written to the
 		journal and atomicity is guaranteed. In case of crash,
 		either both data and tag or none of them are written. The
 		journaled mode degrades write throughput twice because the
···
 The layout of the formatted block device:
-* reserved sectors (they are not used by this target, they can be used for
-  storing LUKS metadata or for other purpose), the size of the reserved
-  area is specified in the target arguments
+
+* reserved sectors
+    (they are not used by this target, they can be used for
+    storing LUKS metadata or for other purpose), the size of the reserved
+    area is specified in the target arguments
+
 * superblock (4kiB)
 	* magic string - identifies that the device was formatted
 	* version
···
 	  metadata and padding). The user of this target should not send
 	  bios that access data beyond the "provided data sectors" limit.
 	* flags
-	  SB_FLAG_HAVE_JOURNAL_MAC - a flag is set if journal_mac is used
-	  SB_FLAG_RECALCULATING - recalculating is in progress
-	  SB_FLAG_DIRTY_BITMAP - journal area contains the bitmap of dirty
-	  blocks
+	  SB_FLAG_HAVE_JOURNAL_MAC
+		- a flag is set if journal_mac is used
+	  SB_FLAG_RECALCULATING
+		- recalculating is in progress
+	  SB_FLAG_DIRTY_BITMAP
+		- journal area contains the bitmap of dirty blocks
 	* log2(sectors per block)
 	* a position where recalculating finished
 * journal
 	The journal is divided into sections, each section contains:
+
 	* metadata area (4kiB), it contains journal entries
-	  every journal entry contains:
+
+	  - every journal entry contains:
+
 		* logical sector (specifies where the data and tag should
 		  be written)
 		* last 8 bytes of data
 		* integrity tag (the size is specified in the superblock)
-	  every metadata sector ends with
+
+	  - every metadata sector ends with
+
 		* mac (8-bytes), all the macs in 8 metadata sectors form a
 		  64-byte value. It is used to store hmac of sector
 		  numbers in the journal section, to protect against a
 		  possibility that the attacker tampers with sector
 		  numbers in the journal.
 		* commit id
+
 	* data area (the size is variable; it depends on how many journal
 	  entries fit into the metadata area)
-	  every sector in the data area contains:
+
+	  - every sector in the data area contains:
+
 		* data (504 bytes of data, the last 8 bytes are stored in
 		  the journal entry)
 		* commit id
+
 	  To test if the whole journal section was written correctly, every
 	  512-byte sector of the journal ends with 8-byte commit id. If the
 	  commit id matches on all sectors in a journal section, then it is
 	  assumed that the section was written correctly. If the commit id
 	  doesn't match, the section was written partially and it should not
 	  be replayed.
-* one or more runs of interleaved tags and data. Each run contains:
+
+* one or more runs of interleaved tags and data.
+    Each run contains:
+
 	* tag area - it contains integrity tags. There is one tag for each
 	  sector in the data area
 	* data area - it contains data sectors. The number of data sectors
+7 -7
Documentation/device-mapper/dm-io.txt Documentation/device-mapper/dm-io.rst
··· 1 + ===== 1 2 dm-io 2 3 ===== 3 4 ··· 8 7 9 8 The user must set up an io_region structure to describe the desired location 10 9 of the I/O. Each io_region indicates a block-device along with the starting 11 - sector and size of the region. 10 + sector and size of the region:: 12 11 13 12 struct io_region { 14 13 struct block_device *bdev; ··· 20 19 to multiple regions are specified by an array of io_region structures. 21 20 22 21 The first I/O service type takes a list of memory pages as the data buffer for 23 - the I/O, along with an offset into the first page. 22 + the I/O, along with an offset into the first page:: 24 23 25 24 struct page_list { 26 25 struct page_list *next; ··· 36 35 37 36 The second I/O service type takes an array of bio vectors as the data buffer 38 37 for the I/O. This service can be handy if the caller has a pre-assembled bio, 39 - but wants to direct different portions of the bio to different devices. 38 + but wants to direct different portions of the bio to different devices:: 40 39 41 40 int dm_io_sync_bvec(unsigned int num_regions, struct io_region *where, 42 41 int rw, struct bio_vec *bvec, ··· 48 47 The third I/O service type takes a pointer to a vmalloc'd memory buffer as the 49 48 data buffer for the I/O. This service can be handy if the caller needs to do 50 49 I/O to a large region but doesn't want to allocate a large number of individual 51 - memory pages. 50 + memory pages:: 52 51 53 52 int dm_io_sync_vm(unsigned int num_regions, struct io_region *where, int rw, 54 53 void *data, unsigned long *error_bits); ··· 56 55 void *data, io_notify_fn fn, void *context); 57 56 58 57 Callers of the asynchronous I/O services must include the name of a completion 59 - callback routine and a pointer to some context data for the I/O. 
58 + callback routine and a pointer to some context data for the I/O::
 60 59
 61 60 typedef void (*io_notify_fn)(unsigned long error, void *context);
 62 61
 63 - The "error" parameter in this callback, as well as the "*error" parameter in
 62 + The "error" parameter in this callback, as well as the `*error` parameter in
 64 63 all of the synchronous versions, is a bitset (instead of a simple error value).
 65 64 In the case of a write-I/O to multiple regions, this bitset allows dm-io to
 66 65 indicate success or failure on each individual region. ··· 73 72 When the user is finished using the dm-io services, they should call
 74 73 dm_io_put() and specify the same number of pages that were given on the
 75 74 dm_io_get() call.
 76 -
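Since the error value described above is a per-region bitset rather than a scalar code, callers inspect individual bits to see which regions failed. A small illustrative decoder of that convention (not kernel code):

```python
def failed_regions(error_bits: int, num_regions: int) -> list:
    """Bit i of a dm-io style error bitset set means the I/O to
    region i failed; return the indices of all failed regions."""
    return [i for i in range(num_regions) if error_bits & (1 << i)]
```

For example, a 3-region write that returns the bitset 0b101 succeeded only on region 1.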
+4 -1
Documentation/device-mapper/dm-log.txt Documentation/device-mapper/dm-log.rst
··· 1 + ===================== 1 2 Device-Mapper Logging 2 3 ===================== 3 4 The device-mapper logging code is used by some of the device-mapper ··· 17 16 logging implementations are available and provide different 18 17 capabilities. The list includes: 19 18 19 + ============== ============================================================== 20 20 Type Files 21 - ==== ===== 21 + ============== ============================================================== 22 22 disk drivers/md/dm-log.c 23 23 core drivers/md/dm-log.c 24 24 userspace drivers/md/dm-log-userspace* include/linux/dm-log-userspace.h 25 + ============== ============================================================== 25 26 26 27 The "disk" log type 27 28 -------------------
+17 -8
Documentation/device-mapper/dm-queue-length.txt Documentation/device-mapper/dm-queue-length.rst
··· 1 + ===============
 1 2 dm-queue-length
 2 3 ===============
 3 4 ··· 7 6 The path selector name is 'queue-length'.
 8 7
 9 8 Table parameters for each path: [<repeat_count>]
 9 +
 10 + ::
 11 +
 10 12 <repeat_count>: The number of I/Os to dispatch using the selected
 11 13 path before switching to the next path.
 12 14 If not given, internal default is used. To check
 13 15 the default value, see the activated table.
 14 16
 15 17 Status for each path: <status> <fail-count> <in-flight>
 18 +
 19 + ::
 20 +
 16 21 <status>: 'A' if the path is active, 'F' if the path is failed.
 17 22 <fail-count>: The number of path failures.
 18 23 <in-flight>: The number of in-flight I/Os on the path. ··· 36 29 ========
 37 30 In case that 2 paths (sda and sdb) are used with repeat_count == 128.
 38 31
 39 - # echo "0 10 multipath 0 0 1 1 queue-length 0 2 1 8:0 128 8:16 128" \
 40 - dmsetup create test
 41 - #
 42 - # dmsetup table
 43 - test: 0 10 multipath 0 0 1 1 queue-length 0 2 1 8:0 128 8:16 128
 44 - #
 45 - # dmsetup status
 46 - test: 0 10 multipath 2 0 0 0 1 1 E 0 2 1 8:0 A 0 0 8:16 A 0 0 32 + ::
 33 +
 34 + # echo "0 10 multipath 0 0 1 1 queue-length 0 2 1 8:0 128 8:16 128" | \
 35 + dmsetup create test
 36 + #
 37 + # dmsetup table
 38 + test: 0 10 multipath 0 0 1 1 queue-length 0 2 1 8:0 128 8:16 128
 39 + #
 40 + # dmsetup status
 41 + test: 0 10 multipath 2 0 0 0 1 1 E 0 2 1 8:0 A 0 0 8:16 A 0 0
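The selector's balancing rule, reflected in the <in-flight> status field above, is to prefer the active path with the fewest in-flight I/Os (and to keep dispatching to it for repeat_count I/Os before re-selecting). A sketch of that selection rule with illustrative field names, not the kernel data structures:

```python
def choose_path(paths):
    """Pick the active ('A') path with the fewest in-flight I/Os,
    which is the queue-length selector's balancing rule; return
    None when every path has failed ('F')."""
    active = [p for p in paths if p["status"] == "A"]
    if not active:
        return None
    return min(active, key=lambda p: p["in_flight"])
```

Note that a failed path is skipped even if it reports zero in-flight I/Os.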
+145 -80
Documentation/device-mapper/dm-raid.txt Documentation/device-mapper/dm-raid.rst
··· 1 + =======
 1 2 dm-raid
 2 3 =======
 3 4 ··· 9 8
 10 9 Mapping Table Interface
 11 10 -----------------------
 12 - The target is named "raid" and it accepts the following parameters:
 11 + The target is named "raid" and it accepts the following parameters::
 13 12
 14 13 <raid_type> <#raid_params> <raid_params> \
 15 14 <#raid_devs> <metadata_dev0> <dev0> [.. <metadata_devN> <devN>]
 16 15
 17 16 <raid_type>:
 17 +
 18 + ============= ===============================================================
 18 19 raid0 RAID0 striping (no resilience)
 19 20 raid1 RAID1 mirroring
 20 21 raid4 RAID4 with dedicated last parity disk
 21 22 raid5_n RAID5 with dedicated last parity disk supporting takeover
 22 23 Same as raid4
 23 - -Transitory layout 24 +
 25 + - Transitory layout
 24 26 raid5_la RAID5 left asymmetric
 27 +
 25 28 - rotating parity 0 with data continuation
 26 29 raid5_ra RAID5 right asymmetric
 30 +
 27 31 - rotating parity N with data continuation
 28 32 raid5_ls RAID5 left symmetric
 33 +
 29 34 - rotating parity 0 with data restart
 30 35 raid5_rs RAID5 right symmetric
 36 +
 31 37 - rotating parity N with data restart
 32 38 raid6_zr RAID6 zero restart
 39 +
 33 40 - rotating parity zero (left-to-right) with data restart
 34 41 raid6_nr RAID6 N restart
 42 +
 35 43 - rotating parity N (right-to-left) with data restart
 36 44 raid6_nc RAID6 N continue
 45 +
 37 46 - rotating parity N (right-to-left) with data continuation
 38 47 raid6_n_6 RAID6 with dedicated parity disks
 48 +
 39 49 - parity and Q-syndrome on the last 2 disks;
 40 50 layout for takeover from/to raid4/raid5_n
 41 51 raid6_la_6 Same as "raid5_la" plus dedicated last Q-syndrome disk
 52 +
 42 53 - layout for takeover from raid5_la from/to raid6
 43 54 raid6_ra_6 Same as "raid5_ra" dedicated last Q-syndrome disk
 55 +
 44 56 - layout for takeover from raid5_ra from/to raid6
 45 57 raid6_ls_6 Same as "raid5_ls" dedicated last Q-syndrome disk
 58 +
 46 59 - layout for takeover from raid5_ls from/to raid6
 47 60 raid6_rs_6 Same as "raid5_rs" 
dedicated last Q-syndrome disk 61 + 48 62 - layout for takeover from raid5_rs from/to raid6 49 63 raid10 Various RAID10 inspired algorithms chosen by additional params 50 64 (see raid10_format and raid10_copies below) 65 + 51 66 - RAID10: Striped Mirrors (aka 'Striping on top of mirrors') 52 67 - RAID1E: Integrated Adjacent Stripe Mirroring 53 68 - RAID1E: Integrated Offset Stripe Mirroring 54 - - and other similar RAID10 variants 69 + - and other similar RAID10 variants 70 + ============= =============================================================== 55 71 56 72 Reference: Chapter 4 of 57 73 http://www.snia.org/sites/default/files/SNIA_DDF_Technical_Position_v2.0.pdf ··· 76 58 <#raid_params>: The number of parameters that follow. 77 59 78 60 <raid_params> consists of 61 + 79 62 Mandatory parameters: 80 - <chunk_size>: Chunk size in sectors. This parameter is often known as 63 + <chunk_size>: 64 + Chunk size in sectors. This parameter is often known as 81 65 "stripe size". It is the only mandatory parameter and 82 66 is placed first. 83 67 84 68 followed by optional parameters (in any order): 85 - [sync|nosync] Force or prevent RAID initialization. 69 + [sync|nosync] 70 + Force or prevent RAID initialization. 86 71 87 - [rebuild <idx>] Rebuild drive number 'idx' (first drive is 0). 72 + [rebuild <idx>] 73 + Rebuild drive number 'idx' (first drive is 0). 88 74 89 75 [daemon_sleep <ms>] 90 76 Interval between runs of the bitmap daemon that 91 77 clear bits. A longer interval means less bitmap I/O but 92 78 resyncing after a failure is likely to take longer. 93 79 94 - [min_recovery_rate <kB/sec/disk>] Throttle RAID initialization 95 - [max_recovery_rate <kB/sec/disk>] Throttle RAID initialization 96 - [write_mostly <idx>] Mark drive index 'idx' write-mostly. 
97 - [max_write_behind <sectors>] See '--write-behind=' (man mdadm)
 98 - [stripe_cache <sectors>] Stripe cache size (RAID 4/5/6 only)
 80 + [min_recovery_rate <kB/sec/disk>]
 81 + Throttle RAID initialization
 82 + [max_recovery_rate <kB/sec/disk>]
 83 + Throttle RAID initialization
 84 + [write_mostly <idx>]
 85 + Mark drive index 'idx' write-mostly.
 86 + [max_write_behind <sectors>]
 87 + See '--write-behind=' (man mdadm)
 88 + [stripe_cache <sectors>]
 89 + Stripe cache size (RAID 4/5/6 only)
 99 90 [region_size <sectors>]
 100 91 The region_size multiplied by the number of regions is the
 101 92 logical size of the array. The bitmap records the device
 102 93 synchronisation state for each region.
 103 94
 104 - [raid10_copies <# copies>]
 105 - [raid10_format <near|far|offset>]
 95 + [raid10_copies <# copies>], [raid10_format <near|far|offset>]
 106 96 These two options are used to alter the default layout of
 107 97 a RAID10 configuration. The number of copies can be
 108 98 specified, but the default is 2. There are also three ··· 119 93 respect to mirroring. If these options are left unspecified,
 120 94 or 'raid10_copies 2' and/or 'raid10_format near' are given,
 121 95 then the layouts for 2, 3 and 4 devices are:
 96 +
 97 + ======== ========== ==============
 122 98 2 drives 3 drives 4 drives
 123 - -------- ---------- --------------
 99 + ======== ========== ==============
 124 100 A1 A1 A1 A1 A2 A1 A1 A2 A2
 125 101 A2 A2 A2 A3 A3 A3 A3 A4 A4
 126 102 A3 A3 A4 A4 A5 A5 A5 A6 A6
 127 103 A4 A4 A5 A6 A6 A7 A7 A8 A8
 128 104 .. .. .. .. .. .. .. .. ..
 105 + ======== ========== ==============
 106 +
 129 107 The 2-device layout is equivalent to 2-way RAID1. The 4-device
 130 108 layout is what a traditional RAID10 would look like. 
The 131 109 3-device layout is what might be called a 'RAID1E - Integrated ··· 137 107 138 108 If 'raid10_copies 2' and 'raid10_format far', then the layouts 139 109 for 2, 3 and 4 devices are: 110 + 111 + ======== ============ =================== 140 112 2 drives 3 drives 4 drives 141 - -------- -------------- -------------------- 113 + ======== ============ =================== 142 114 A1 A2 A1 A2 A3 A1 A2 A3 A4 143 115 A3 A4 A4 A5 A6 A5 A6 A7 A8 144 116 A5 A6 A7 A8 A9 A9 A10 A11 A12 ··· 149 117 A4 A3 A6 A4 A5 A6 A5 A8 A7 150 118 A6 A5 A9 A7 A8 A10 A9 A12 A11 151 119 .. .. .. .. .. .. .. .. .. 120 + ======== ============ =================== 152 121 153 122 If 'raid10_copies 2' and 'raid10_format offset', then the 154 123 layouts for 2, 3 and 4 devices are: 124 + 125 + ======== ========== ================ 155 126 2 drives 3 drives 4 drives 156 - -------- ------------ ----------------- 127 + ======== ========== ================ 157 128 A1 A2 A1 A2 A3 A1 A2 A3 A4 158 129 A2 A1 A3 A1 A2 A2 A1 A4 A3 159 130 A3 A4 A4 A5 A6 A5 A6 A7 A8 ··· 164 129 A5 A6 A7 A8 A9 A9 A10 A11 A12 165 130 A6 A5 A9 A7 A8 A10 A9 A12 A11 166 131 .. .. .. .. .. .. .. .. .. 132 + ======== ========== ================ 133 + 167 134 Here we see layouts closely akin to 'RAID1E - Integrated 168 135 Offset Stripe Mirroring'. 
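The 'near' layout tables above follow a simple generative rule: chunks are laid out row-major across the drives, and each chunk is repeated once per copy before moving on, so the copies of a chunk sit on adjacent drives. A sketch that regenerates those tables (illustrative only, not the MD implementation):

```python
def raid10_near(drives, copies=2, rows=4):
    """Reproduce the 'near' layout tables: cell (row, drive) holds
    chunk number (row * drives + drive) // copies, numbered from 1,
    so the 'copies' duplicates of a chunk land on adjacent drives."""
    return [[f"A{(r * drives + d) // copies + 1}" for d in range(drives)]
            for r in range(rows)]
```

Running it for 2, 3 and 4 drives reproduces the 'near' columns shown in the table above, including the staggered 3-drive (RAID1E-style) pattern.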
169 136 ··· 227 190 228 191 Example Tables 229 192 -------------- 230 - # RAID4 - 4 data drives, 1 parity (no metadata devices) 231 - # No metadata devices specified to hold superblock/bitmap info 232 - # Chunk size of 1MiB 233 - # (Lines separated for easy reading) 234 193 235 - 0 1960893648 raid \ 236 - raid4 1 2048 \ 237 - 5 - 8:17 - 8:33 - 8:49 - 8:65 - 8:81 194 + :: 238 195 239 - # RAID4 - 4 data drives, 1 parity (with metadata devices) 240 - # Chunk size of 1MiB, force RAID initialization, 241 - # min recovery rate at 20 kiB/sec/disk 196 + # RAID4 - 4 data drives, 1 parity (no metadata devices) 197 + # No metadata devices specified to hold superblock/bitmap info 198 + # Chunk size of 1MiB 199 + # (Lines separated for easy reading) 242 200 243 - 0 1960893648 raid \ 244 - raid4 4 2048 sync min_recovery_rate 20 \ 245 - 5 8:17 8:18 8:33 8:34 8:49 8:50 8:65 8:66 8:81 8:82 201 + 0 1960893648 raid \ 202 + raid4 1 2048 \ 203 + 5 - 8:17 - 8:33 - 8:49 - 8:65 - 8:81 204 + 205 + # RAID4 - 4 data drives, 1 parity (with metadata devices) 206 + # Chunk size of 1MiB, force RAID initialization, 207 + # min recovery rate at 20 kiB/sec/disk 208 + 209 + 0 1960893648 raid \ 210 + raid4 4 2048 sync min_recovery_rate 20 \ 211 + 5 8:17 8:18 8:33 8:34 8:49 8:50 8:65 8:66 8:81 8:82 246 212 247 213 248 214 Status Output ··· 259 219 260 220 'dmsetup status' yields information on the state and health of the array. 261 221 The output is as follows (normally a single line, but expanded here for 262 - clarity): 263 - 1: <s> <l> raid \ 264 - 2: <raid_type> <#devices> <health_chars> \ 265 - 3: <sync_ratio> <sync_action> <mismatch_cnt> 222 + clarity):: 223 + 224 + 1: <s> <l> raid \ 225 + 2: <raid_type> <#devices> <health_chars> \ 226 + 3: <sync_ratio> <sync_action> <mismatch_cnt> 266 227 267 228 Line 1 is the standard output produced by device-mapper. 
268 - Line 2 & 3 are produced by the raid target and are best explained by example: 229 + 230 + Line 2 & 3 are produced by the raid target and are best explained by example:: 231 + 269 232 0 1960893648 raid raid4 5 AAAAA 2/490221568 init 0 233 + 270 234 Here we can see the RAID type is raid4, there are 5 devices - all of 271 235 which are 'A'live, and the array is 2/490221568 complete with its initial 272 236 recovery. Here is a fuller description of the individual fields: 237 + 238 + =============== ========================================================= 273 239 <raid_type> Same as the <raid_type> used to create the array. 274 - <health_chars> One char for each device, indicating: 'A' = alive and 275 - in-sync, 'a' = alive but not in-sync, 'D' = dead/failed. 240 + <health_chars> One char for each device, indicating: 241 + 242 + - 'A' = alive and in-sync 243 + - 'a' = alive but not in-sync 244 + - 'D' = dead/failed. 276 245 <sync_ratio> The ratio indicating how much of the array has undergone 277 246 the process described by 'sync_action'. If the 278 247 'sync_action' is "check" or "repair", then the process 279 248 of "resync" or "recover" can be considered complete. 280 249 <sync_action> One of the following possible states: 281 - idle - No synchronization action is being performed. 282 - frozen - The current action has been halted. 283 - resync - Array is undergoing its initial synchronization 250 + 251 + idle 252 + - No synchronization action is being performed. 253 + frozen 254 + - The current action has been halted. 255 + resync 256 + - Array is undergoing its initial synchronization 284 257 or is resynchronizing after an unclean shutdown 285 258 (possibly aided by a bitmap). 286 - recover - A device in the array is being rebuilt or 259 + recover 260 + - A device in the array is being rebuilt or 287 261 replaced. 288 - check - A user-initiated full check of the array is 262 + check 263 + - A user-initiated full check of the array is 289 264 being performed. 
All blocks are read and
 290 265 checked for consistency. The number of
 291 266 discrepancies found is recorded in
 292 267 <mismatch_cnt>. No changes are made to the
 293 268 array by this action.
 294 - repair - The same as "check", but discrepancies are
 269 + repair
 270 + - The same as "check", but discrepancies are
 295 271 corrected.
 296 - reshape - The array is undergoing a reshape.
 272 + reshape
 273 + - The array is undergoing a reshape.
 297 274 <mismatch_cnt> The number of discrepancies found between mirror copies
 298 275 in RAID1/10 or wrong parity values found in RAID4/5/6.
 299 276 This value is valid only after a "check" of the array ··· 318 261 <data_offset> The current data offset to the start of the user data on
 319 262 each component device of a raid set (see the respective
 320 263 raid parameter to support out-of-place reshaping).
 321 - <journal_char> 'A' - active write-through journal device.
 322 - 'a' - active write-back journal device.
 323 - 'D' - dead journal device.
 324 - '-' - no journal device.
 264 + <journal_char> - 'A' - active write-through journal device.
 265 + - 'a' - active write-back journal device.
 266 + - 'D' - dead journal device.
 267 + - '-' - no journal device.
 268 + =============== =========================================================
 325 269
 326 270
 327 271 Message Interface ··· 330 272 The dm-raid target will accept certain actions through the 'message' interface.
 331 273 ('man dmsetup' for more information on the message interface.) These actions
 332 274 include:
 333 - "idle" - Halt the current sync action.
 334 - "frozen" - Freeze the current sync action.
 335 - "resync" - Initiate/continue a resync.
 336 - "recover"- Initiate/continue a recover process.
 337 - "check" - Initiate a check (i.e. a "scrub") of the array.
 338 - "repair" - Initiate a repair of the array.
 275 +
 276 + ========= ================================================
 277 + "idle" Halt the current sync action.
 278 + "frozen" Freeze the current sync action. 
279 + "resync" Initiate/continue a resync. 280 + "recover" Initiate/continue a recover process. 281 + "check" Initiate a check (i.e. a "scrub") of the array. 282 + "repair" Initiate a repair of the array. 283 + ========= ================================================ 339 284 340 285 341 286 Discard Support ··· 368 307 369 308 For trusted devices, the following dm-raid module parameter can be set 370 309 to safely enable discard support for RAID 4/5/6: 310 + 371 311 'devices_handle_discards_safely' 372 312 373 313 374 314 Version History 375 315 --------------- 376 - 1.0.0 Initial version. Support for RAID 4/5/6 377 - 1.1.0 Added support for RAID 1 378 - 1.2.0 Handle creation of arrays that contain failed devices. 379 - 1.3.0 Added support for RAID 10 380 - 1.3.1 Allow device replacement/rebuild for RAID 10 381 - 1.3.2 Fix/improve redundancy checking for RAID10 382 - 1.4.0 Non-functional change. Removes arg from mapping function. 383 - 1.4.1 RAID10 fix redundancy validation checks (commit 55ebbb5). 384 - 1.4.2 Add RAID10 "far" and "offset" algorithm support. 385 - 1.5.0 Add message interface to allow manipulation of the sync_action. 316 + 317 + :: 318 + 319 + 1.0.0 Initial version. Support for RAID 4/5/6 320 + 1.1.0 Added support for RAID 1 321 + 1.2.0 Handle creation of arrays that contain failed devices. 322 + 1.3.0 Added support for RAID 10 323 + 1.3.1 Allow device replacement/rebuild for RAID 10 324 + 1.3.2 Fix/improve redundancy checking for RAID10 325 + 1.4.0 Non-functional change. Removes arg from mapping function. 326 + 1.4.1 RAID10 fix redundancy validation checks (commit 55ebbb5). 327 + 1.4.2 Add RAID10 "far" and "offset" algorithm support. 328 + 1.5.0 Add message interface to allow manipulation of the sync_action. 386 329 New status (STATUSTYPE_INFO) fields: sync_action and mismatch_cnt. 387 - 1.5.1 Add ability to restore transiently failed devices on resume. 388 - 1.5.2 'mismatch_cnt' is zero unless [last_]sync_action is "check". 
389 - 1.6.0 Add discard support (and devices_handle_discard_safely module param). 390 - 1.7.0 Add support for MD RAID0 mappings. 391 - 1.8.0 Explicitly check for compatible flags in the superblock metadata 330 + 1.5.1 Add ability to restore transiently failed devices on resume. 331 + 1.5.2 'mismatch_cnt' is zero unless [last_]sync_action is "check". 332 + 1.6.0 Add discard support (and devices_handle_discard_safely module param). 333 + 1.7.0 Add support for MD RAID0 mappings. 334 + 1.8.0 Explicitly check for compatible flags in the superblock metadata 392 335 and reject to start the raid set if any are set by a newer 393 336 target version, thus avoiding data corruption on a raid set 394 337 with a reshape in progress. 395 - 1.9.0 Add support for RAID level takeover/reshape/region size 338 + 1.9.0 Add support for RAID level takeover/reshape/region size 396 339 and set size reduction. 397 - 1.9.1 Fix activation of existing RAID 4/10 mapped devices 398 - 1.9.2 Don't emit '- -' on the status table line in case the constructor 340 + 1.9.1 Fix activation of existing RAID 4/10 mapped devices 341 + 1.9.2 Don't emit '- -' on the status table line in case the constructor 399 342 fails reading a superblock. Correctly emit 'maj:min1 maj:min2' and 400 343 'D' on the status line. If '- -' is passed into the constructor, emit 401 344 '- -' on the table line and '-' as the status line health character. 
402 - 1.10.0 Add support for raid4/5/6 journal device
 403 - 1.10.1 Fix data corruption on reshape request
 404 - 1.11.0 Fix table line argument order
 345 + 1.10.0 Add support for raid4/5/6 journal device
 346 + 1.10.1 Fix data corruption on reshape request
 347 + 1.11.0 Fix table line argument order
 405 348 (wrong raid10_copies/raid10_format sequence)
 406 - 1.11.1 Add raid4/5/6 journal write-back support via journal_mode option
 407 - 1.12.1 Fix for MD deadlock between mddev_suspend() and md_write_start() available
 408 - 1.13.0 Fix dev_health status at end of "recover" (was 'a', now 'A')
 409 - 1.13.1 Fix deadlock caused by early md_stop_writes(). Also fix size an
 349 + 1.11.1 Add raid4/5/6 journal write-back support via journal_mode option
 350 + 1.12.1 Fix for MD deadlock between mddev_suspend() and md_write_start() available
 351 + 1.13.0 Fix dev_health status at end of "recover" (was 'a', now 'A')
 352 + 1.13.1 Fix deadlock caused by early md_stop_writes(). Also fix size and
 410 353 state races.
 411 - 1.13.2 Fix raid redundancy validation and avoid keeping raid set frozen
 412 - 1.14.0 Fix reshape race on small devices. Fix stripe adding reshape
 354 + 1.13.2 Fix raid redundancy validation and avoid keeping raid set frozen
 355 + 1.14.0 Fix reshape race on small devices. Fix stripe adding reshape
 413 356 deadlock/potential data corruption. Update superblock when
 414 357 specific devices are requested via rebuild. Fix RAID leg
 415 358 rebuild errors.
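The raid status line documented in the Status Output section earlier in this file ('<s> <l> raid <raid_type> <#devices> <health_chars> <sync_ratio> <sync_action> <mismatch_cnt>') is whitespace-delimited and easy to split mechanically. A minimal illustrative parser for the leading fields shown in the example line (not a complete or official parser):

```python
def parse_raid_status(line):
    """Split a 'dmsetup status' line for the raid target into the
    documented fields: device-mapper's start/length/target triple,
    then the raid-specific fields from lines 2 and 3."""
    f = line.split()
    return {
        "start": int(f[0]), "length": int(f[1]), "target": f[2],
        "raid_type": f[3], "devices": int(f[4]), "health_chars": f[5],
        "sync_ratio": f[6], "sync_action": f[7], "mismatch_cnt": int(f[8]),
    }
```

Applied to the documented example line, it recovers raid4, 5 devices all 'A'live, and the "init" sync action.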
+39 -29
Documentation/device-mapper/dm-service-time.txt Documentation/device-mapper/dm-service-time.rst
··· 1 + =============== 1 2 dm-service-time 2 3 =============== 3 4 ··· 13 12 14 13 The path selector name is 'service-time'. 15 14 16 - Table parameters for each path: [<repeat_count> [<relative_throughput>]] 17 - <repeat_count>: The number of I/Os to dispatch using the selected 15 + Table parameters for each path: 16 + 17 + [<repeat_count> [<relative_throughput>]] 18 + <repeat_count>: 19 + The number of I/Os to dispatch using the selected 18 20 path before switching to the next path. 19 21 If not given, internal default is used. To check 20 22 the default value, see the activated table. 21 - <relative_throughput>: The relative throughput value of the path 23 + <relative_throughput>: 24 + The relative throughput value of the path 22 25 among all paths in the path-group. 23 26 The valid range is 0-100. 24 27 If not given, minimum value '1' is used. 25 28 If '0' is given, the path isn't selected while 26 29 other paths having a positive value are available. 27 30 28 - Status for each path: <status> <fail-count> <in-flight-size> \ 29 - <relative_throughput> 30 - <status>: 'A' if the path is active, 'F' if the path is failed. 31 - <fail-count>: The number of path failures. 32 - <in-flight-size>: The size of in-flight I/Os on the path. 33 - <relative_throughput>: The relative throughput value of the path 34 - among all paths in the path-group. 31 + Status for each path: 32 + 33 + <status> <fail-count> <in-flight-size> <relative_throughput> 34 + <status>: 35 + 'A' if the path is active, 'F' if the path is failed. 36 + <fail-count>: 37 + The number of path failures. 38 + <in-flight-size>: 39 + The size of in-flight I/Os on the path. 40 + <relative_throughput>: 41 + The relative throughput value of the path 42 + among all paths in the path-group. 35 43 36 44 37 45 Algorithm ··· 49 39 dm-service-time adds the I/O size to 'in-flight-size' when the I/O is 50 40 dispatched and subtracts when completed. 
51 41 Basically, dm-service-time selects a path having minimum service time
 52 - which is calculated by:
 42 + which is calculated by::
 53 43
 54 44 ('in-flight-size' + 'size-of-incoming-io') / 'relative_throughput'
 55 45 ··· 77 67 ========
 78 68 In case that 2 paths (sda and sdb) are used with repeat_count == 128
 79 69 and sda has an average throughput 1GB/s and sdb has 4GB/s,
 80 - 'relative_throughput' value may be '1' for sda and '4' for sdb.
 70 + 'relative_throughput' value may be '1' for sda and '4' for sdb::
 81 71
 82 - # echo "0 10 multipath 0 0 1 1 service-time 0 2 2 8:0 128 1 8:16 128 4" \
 83 - dmsetup create test
 84 - #
 85 - # dmsetup table
 86 - test: 0 10 multipath 0 0 1 1 service-time 0 2 2 8:0 128 1 8:16 128 4
 87 - #
 88 - # dmsetup status
 89 - test: 0 10 multipath 2 0 0 0 1 1 E 0 2 2 8:0 A 0 0 1 8:16 A 0 0 4
 72 + # echo "0 10 multipath 0 0 1 1 service-time 0 2 2 8:0 128 1 8:16 128 4" | \
 73 + dmsetup create test
 74 + #
 75 + # dmsetup table
 76 + test: 0 10 multipath 0 0 1 1 service-time 0 2 2 8:0 128 1 8:16 128 4
 77 + #
 78 + # dmsetup status
 79 + test: 0 10 multipath 2 0 0 0 1 1 E 0 2 2 8:0 A 0 0 1 8:16 A 0 0 4
 90 80
 91 81
 92 - Or '2' for sda and '8' for sdb would be also true.
 82 + Or '2' for sda and '8' for sdb would also be valid::
 93 83
 94 - # echo "0 10 multipath 0 0 1 1 service-time 0 2 2 8:0 128 2 8:16 128 8" \
 95 - dmsetup create test
 96 - #
 97 - # dmsetup table
 98 - test: 0 10 multipath 0 0 1 1 service-time 0 2 2 8:0 128 2 8:16 128 8
 99 - #
 100 - # dmsetup status
 101 - test: 0 10 multipath 2 0 0 0 1 1 E 0 2 2 8:0 A 0 0 2 8:16 A 0 0 8
 84 + # echo "0 10 multipath 0 0 1 1 service-time 0 2 2 8:0 128 2 8:16 128 8" | \
 85 + dmsetup create test
 86 + #
 87 + # dmsetup table
 88 + test: 0 10 multipath 0 0 1 1 service-time 0 2 2 8:0 128 2 8:16 128 8
 89 + #
 90 + # dmsetup status
 91 + test: 0 10 multipath 2 0 0 0 1 1 E 0 2 2 8:0 A 0 0 2 8:16 A 0 0 8
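The service-time formula above is easy to check numerically. This sketch (illustrative field names, not kernel code) computes the metric and picks the path that minimizes it, treating relative_throughput 0 as "never select while other paths are usable", as described in the table-parameter section:

```python
def service_time(in_flight_size, incoming_size, relative_throughput):
    """('in-flight-size' + 'size-of-incoming-io') / 'relative_throughput';
    a throughput of 0 makes the path infinitely slow, so it loses to
    any path with a positive relative_throughput."""
    if relative_throughput == 0:
        return float("inf")
    return (in_flight_size + incoming_size) / relative_throughput

def choose_path(paths, incoming_size):
    """Return the path with the minimum estimated service time."""
    return min(paths, key=lambda p: service_time(
        p["in_flight_size"], incoming_size, p["relative_throughput"]))
```

With equal in-flight sizes, the faster path (relative_throughput 4 vs 1) wins, matching the intent of the sda/sdb example above.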
+110
Documentation/device-mapper/dm-uevent.rst
··· 1 + ==================== 2 + device-mapper uevent 3 + ==================== 4 + 5 + The device-mapper uevent code adds the capability to device-mapper to create 6 + and send kobject uevents (uevents). Previously device-mapper events were only 7 + available through the ioctl interface. The advantage of the uevents interface 8 + is the event contains environment attributes providing increased context for 9 + the event avoiding the need to query the state of the device-mapper device after 10 + the event is received. 11 + 12 + There are two functions currently for device-mapper events. The first function 13 + listed creates the event and the second function sends the event(s):: 14 + 15 + void dm_path_uevent(enum dm_uevent_type event_type, struct dm_target *ti, 16 + const char *path, unsigned nr_valid_paths) 17 + 18 + void dm_send_uevents(struct list_head *events, struct kobject *kobj) 19 + 20 + 21 + The variables added to the uevent environment are: 22 + 23 + Variable Name: DM_TARGET 24 + ------------------------ 25 + :Uevent Action(s): KOBJ_CHANGE 26 + :Type: string 27 + :Description: 28 + :Value: Name of device-mapper target that generated the event. 29 + 30 + Variable Name: DM_ACTION 31 + ------------------------ 32 + :Uevent Action(s): KOBJ_CHANGE 33 + :Type: string 34 + :Description: 35 + :Value: Device-mapper specific action that caused the uevent action. 36 + PATH_FAILED - A path has failed; 37 + PATH_REINSTATED - A path has been reinstated. 38 + 39 + Variable Name: DM_SEQNUM 40 + ------------------------ 41 + :Uevent Action(s): KOBJ_CHANGE 42 + :Type: unsigned integer 43 + :Description: A sequence number for this specific device-mapper device. 44 + :Value: Valid unsigned integer range. 45 + 46 + Variable Name: DM_PATH 47 + ---------------------- 48 + :Uevent Action(s): KOBJ_CHANGE 49 + :Type: string 50 + :Description: Major and minor number of the path device pertaining to this 51 + event. 
52 + :Value: Path name in the form of "Major:Minor" 53 + 54 + Variable Name: DM_NR_VALID_PATHS 55 + -------------------------------- 56 + :Uevent Action(s): KOBJ_CHANGE 57 + :Type: unsigned integer 58 + :Description: 59 + :Value: Valid unsigned integer range. 60 + 61 + Variable Name: DM_NAME 62 + ---------------------- 63 + :Uevent Action(s): KOBJ_CHANGE 64 + :Type: string 65 + :Description: Name of the device-mapper device. 66 + :Value: Name 67 + 68 + Variable Name: DM_UUID 69 + ---------------------- 70 + :Uevent Action(s): KOBJ_CHANGE 71 + :Type: string 72 + :Description: UUID of the device-mapper device. 73 + :Value: UUID. (Empty string if there isn't one.) 74 + 75 + An example of the uevents generated as captured by udevmonitor is shown 76 + below 77 + 78 + 1.) Path failure:: 79 + 80 + UEVENT[1192521009.711215] change@/block/dm-3 81 + ACTION=change 82 + DEVPATH=/block/dm-3 83 + SUBSYSTEM=block 84 + DM_TARGET=multipath 85 + DM_ACTION=PATH_FAILED 86 + DM_SEQNUM=1 87 + DM_PATH=8:32 88 + DM_NR_VALID_PATHS=0 89 + DM_NAME=mpath2 90 + DM_UUID=mpath-35333333000002328 91 + MINOR=3 92 + MAJOR=253 93 + SEQNUM=1130 94 + 95 + 2.) Path reinstate:: 96 + 97 + UEVENT[1192521132.989927] change@/block/dm-3 98 + ACTION=change 99 + DEVPATH=/block/dm-3 100 + SUBSYSTEM=block 101 + DM_TARGET=multipath 102 + DM_ACTION=PATH_REINSTATED 103 + DM_SEQNUM=2 104 + DM_PATH=8:32 105 + DM_NR_VALID_PATHS=1 106 + DM_NAME=mpath2 107 + DM_UUID=mpath-35333333000002328 108 + MINOR=3 109 + MAJOR=253 110 + SEQNUM=1131
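Consumers of these events typically turn the KEY=value environment lines into a mapping before acting on DM_ACTION and friends. A minimal illustrative parser for the udevmonitor-style dumps shown above (not part of device-mapper or udev):

```python
def parse_uevent(text):
    """Collect the KEY=value pairs of a uevent dump into a dict,
    skipping lines without '=' such as the 'UEVENT[...] change@...'
    header."""
    env = {}
    for line in text.strip().splitlines():
        key, sep, value = line.strip().partition("=")
        if sep:
            env[key] = value
    return env
```

Applied to the path-failure example above, the dict would carry DM_ACTION=PATH_FAILED and DM_NR_VALID_PATHS=0 (all values are strings, as in the uevent environment itself).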
-97
Documentation/device-mapper/dm-uevent.txt
··· 1 - The device-mapper uevent code adds the capability to device-mapper to create 2 - and send kobject uevents (uevents). Previously device-mapper events were only 3 - available through the ioctl interface. The advantage of the uevents interface 4 - is the event contains environment attributes providing increased context for 5 - the event avoiding the need to query the state of the device-mapper device after 6 - the event is received. 7 - 8 - There are two functions currently for device-mapper events. The first function 9 - listed creates the event and the second function sends the event(s). 10 - 11 - void dm_path_uevent(enum dm_uevent_type event_type, struct dm_target *ti, 12 - const char *path, unsigned nr_valid_paths) 13 - 14 - void dm_send_uevents(struct list_head *events, struct kobject *kobj) 15 - 16 - 17 - The variables added to the uevent environment are: 18 - 19 - Variable Name: DM_TARGET 20 - Uevent Action(s): KOBJ_CHANGE 21 - Type: string 22 - Description: 23 - Value: Name of device-mapper target that generated the event. 24 - 25 - Variable Name: DM_ACTION 26 - Uevent Action(s): KOBJ_CHANGE 27 - Type: string 28 - Description: 29 - Value: Device-mapper specific action that caused the uevent action. 30 - PATH_FAILED - A path has failed. 31 - PATH_REINSTATED - A path has been reinstated. 32 - 33 - Variable Name: DM_SEQNUM 34 - Uevent Action(s): KOBJ_CHANGE 35 - Type: unsigned integer 36 - Description: A sequence number for this specific device-mapper device. 37 - Value: Valid unsigned integer range. 38 - 39 - Variable Name: DM_PATH 40 - Uevent Action(s): KOBJ_CHANGE 41 - Type: string 42 - Description: Major and minor number of the path device pertaining to this 43 - event. 44 - Value: Path name in the form of "Major:Minor" 45 - 46 - Variable Name: DM_NR_VALID_PATHS 47 - Uevent Action(s): KOBJ_CHANGE 48 - Type: unsigned integer 49 - Description: 50 - Value: Valid unsigned integer range. 
51 - 52 - Variable Name: DM_NAME 53 - Uevent Action(s): KOBJ_CHANGE 54 - Type: string 55 - Description: Name of the device-mapper device. 56 - Value: Name 57 - 58 - Variable Name: DM_UUID 59 - Uevent Action(s): KOBJ_CHANGE 60 - Type: string 61 - Description: UUID of the device-mapper device. 62 - Value: UUID. (Empty string if there isn't one.) 63 - 64 - An example of the uevents generated as captured by udevmonitor is shown 65 - below. 66 - 67 - 1.) Path failure. 68 - UEVENT[1192521009.711215] change@/block/dm-3 69 - ACTION=change 70 - DEVPATH=/block/dm-3 71 - SUBSYSTEM=block 72 - DM_TARGET=multipath 73 - DM_ACTION=PATH_FAILED 74 - DM_SEQNUM=1 75 - DM_PATH=8:32 76 - DM_NR_VALID_PATHS=0 77 - DM_NAME=mpath2 78 - DM_UUID=mpath-35333333000002328 79 - MINOR=3 80 - MAJOR=253 81 - SEQNUM=1130 82 - 83 - 2.) Path reinstate. 84 - UEVENT[1192521132.989927] change@/block/dm-3 85 - ACTION=change 86 - DEVPATH=/block/dm-3 87 - SUBSYSTEM=block 88 - DM_TARGET=multipath 89 - DM_ACTION=PATH_REINSTATED 90 - DM_SEQNUM=2 91 - DM_PATH=8:32 92 - DM_NR_VALID_PATHS=1 93 - DM_NAME=mpath2 94 - DM_UUID=mpath-35333333000002328 95 - MINOR=3 96 - MAJOR=253 97 - SEQNUM=1131
+6 -4
Documentation/device-mapper/dm-zoned.txt Documentation/device-mapper/dm-zoned.rst
··· 1 + ======== 1 2 dm-zoned 2 3 ======== 3 4 ··· 134 133 will analyze the device zone configuration, determine where to place the 135 134 metadata sets on the device and initialize the metadata sets. 136 135 137 - Ex: 136 + Ex:: 138 137 139 - dmzadm --format /dev/sdxx 138 + dmzadm --format /dev/sdxx 140 139 141 140 For a formatted device, the target can be created normally with the 142 141 dmsetup utility. The only parameter that dm-zoned requires is the 143 - underlying zoned block device name. Ex: 142 + underlying zoned block device name. Ex:: 144 143 145 - echo "0 `blockdev --getsize ${dev}` zoned ${dev}" | dmsetup create dmz-`basename ${dev}` 144 + echo "0 `blockdev --getsize ${dev}` zoned ${dev}" | \ 145 + dmsetup create dmz-`basename ${dev}`
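The dmsetup invocation above just pipes a one-line table to the tool. A sketch of how that table line is assembled, with /dev/sdf as a hypothetical zoned device and a hardcoded sector count standing in for `blockdev --getsize`:

```shell
# Assemble the dm-zoned table line from the doc's recipe:
#   "0 <device size in sectors> zoned <device>"
dev=/dev/sdf            # hypothetical zoned block device
size=97677846           # stand-in for: blockdev --getsize ${dev}
table="0 ${size} zoned ${dev}"
name="dmz-$(basename ${dev})"
echo "$table"
# Live use would be: echo "$table" | dmsetup create "$name"
```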
+22 -14
Documentation/device-mapper/era.txt Documentation/device-mapper/era.rst
··· 1 + ====== 2 + dm-era 3 + ====== 4 + 1 5 Introduction 2 6 ============ 3 7 ··· 18 14 Constructor 19 15 =========== 20 16 21 - era <metadata dev> <origin dev> <block size> 17 + era <metadata dev> <origin dev> <block size> 22 18 23 - metadata dev : fast device holding the persistent metadata 24 - origin dev : device holding data blocks that may change 25 - block size : block size of origin data device, granularity that is 26 - tracked by the target 19 + ================ ====================================================== 20 + metadata dev fast device holding the persistent metadata 21 + origin dev device holding data blocks that may change 22 + block size block size of origin data device, granularity that is 23 + tracked by the target 24 + ================ ====================================================== 27 25 28 26 Messages 29 27 ======== ··· 55 49 <metadata block size> <#used metadata blocks>/<#total metadata blocks> 56 50 <current era> <held metadata root | '-'> 57 51 58 - metadata block size : Fixed block size for each metadata block in 59 - sectors 60 - #used metadata blocks : Number of metadata blocks used 61 - #total metadata blocks : Total number of metadata blocks 62 - current era : The current era 63 - held metadata root : The location, in blocks, of the metadata root 64 - that has been 'held' for userspace read 65 - access. '-' indicates there is no held root 52 + ========================= ============================================== 53 + metadata block size Fixed block size for each metadata block in 54 + sectors 55 + #used metadata blocks Number of metadata blocks used 56 + #total metadata blocks Total number of metadata blocks 57 + current era The current era 58 + held metadata root The location, in blocks, of the metadata root 59 + that has been 'held' for userspace read 60 + access. 
'-' indicates there is no held root 61 + ========================= ============================================== 66 62 67 63 Detailed use case 68 64 ================= ··· 96 88 97 89 The target uses a bitset to record writes in the current era. It also 98 90 has a spare bitset ready for switching over to a new era. Other than 99 - that it uses a few 4k blocks for updating metadata. 91 + that it uses a few 4k blocks for updating metadata:: 100 92 101 93 (4 * nr_blocks) bytes + buffers 102 94
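The "(4 * nr_blocks) bytes + buffers" figure above is easy to put into numbers. A sketch for a hypothetical 1 TiB origin device tracked at a 4 MiB era block size (both sizes are illustrative, not from the document):

```shell
# dm-era core memory estimate per the doc: (4 * nr_blocks) bytes + buffers
origin_bytes=$((1024 * 1024 * 1024 * 1024))  # 1 TiB origin (hypothetical)
block_bytes=$((4 * 1024 * 1024))             # 4 MiB era block size (hypothetical)
nr_blocks=$((origin_bytes / block_bytes))
mem_bytes=$((4 * nr_blocks))
echo "$nr_blocks blocks, ~$((mem_bytes / 1024)) KiB + buffers"
```

So even a large origin costs on the order of a megabyte of bitset memory, which is why the target can afford a spare bitset for era switch-over.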
+44
Documentation/device-mapper/index.rst
··· 1 + :orphan: 2 + 3 + ============= 4 + Device Mapper 5 + ============= 6 + 7 + .. toctree:: 8 + :maxdepth: 1 9 + 10 + cache-policies 11 + cache 12 + delay 13 + dm-crypt 14 + dm-flakey 15 + dm-init 16 + dm-integrity 17 + dm-io 18 + dm-log 19 + dm-queue-length 20 + dm-raid 21 + dm-service-time 22 + dm-uevent 23 + dm-zoned 24 + era 25 + kcopyd 26 + linear 27 + log-writes 28 + persistent-data 29 + snapshot 30 + statistics 31 + striped 32 + switch 33 + thin-provisioning 34 + unstriped 35 + verity 36 + writecache 37 + zero 38 + 39 + .. only:: subproject and html 40 + 41 + Indices 42 + ======= 43 + 44 + * :ref:`genindex`
+5 -5
Documentation/device-mapper/kcopyd.txt Documentation/device-mapper/kcopyd.rst
··· 1 + ====== 1 2 kcopyd 2 3 ====== 3 4 ··· 8 7 9 8 Users of kcopyd must first create a client and indicate how many memory pages 10 9 to set aside for their copy jobs. This is done with a call to 11 - kcopyd_client_create(). 10 + kcopyd_client_create():: 12 11 13 12 int kcopyd_client_create(unsigned int num_pages, 14 13 struct kcopyd_client **result); ··· 17 16 the source and destinations of the copy. Each io_region indicates a 18 17 block-device along with the starting sector and size of the region. The source 19 18 of the copy is given as one io_region structure, and the destinations of the 20 - copy are given as an array of io_region structures. 19 + copy are given as an array of io_region structures:: 21 20 22 21 struct io_region { 23 22 struct block_device *bdev; ··· 27 26 28 27 To start the copy, the user calls kcopyd_copy(), passing in the client 29 28 pointer, pointers to the source and destination io_regions, the name of a 30 - completion callback routine, and a pointer to some context data for the copy. 29 + completion callback routine, and a pointer to some context data for the copy:: 31 30 32 31 int kcopyd_copy(struct kcopyd_client *kc, struct io_region *from, 33 32 unsigned int num_dests, struct io_region *dests, ··· 42 41 43 42 When a user is done with all their copy jobs, they should call 44 43 kcopyd_client_destroy() to delete the kcopyd client, which will release the 45 - associated memory pages. 44 + associated memory pages:: 46 45 47 46 void kcopyd_client_destroy(struct kcopyd_client *kc); 48 -
+63
Documentation/device-mapper/linear.rst
··· 1 + ========= 2 + dm-linear 3 + ========= 4 + 5 + Device-Mapper's "linear" target maps a linear range of the Device-Mapper 6 + device onto a linear range of another device. This is the basic building 7 + block of logical volume managers. 8 + 9 + Parameters: <dev path> <offset> 10 + <dev path>: 11 + Full pathname to the underlying block-device, or a 12 + "major:minor" device-number. 13 + <offset>: 14 + Starting sector within the device. 15 + 16 + 17 + Example scripts 18 + =============== 19 + 20 + :: 21 + 22 + #!/bin/sh 23 + # Create an identity mapping for a device 24 + echo "0 `blockdev --getsz $1` linear $1 0" | dmsetup create identity 25 + 26 + :: 27 + 28 + #!/bin/sh 29 + # Join 2 devices together 30 + size1=`blockdev --getsz $1` 31 + size2=`blockdev --getsz $2` 32 + echo "0 $size1 linear $1 0 33 + $size1 $size2 linear $2 0" | dmsetup create joined 34 + 35 + :: 36 + 37 + #!/usr/bin/perl -w 38 + # Split a device into 4M chunks and then join them together in reverse order. 39 + 40 + my $name = "reverse"; 41 + my $extent_size = 4 * 1024 * 2; 42 + my $dev = $ARGV[0]; 43 + my $table = ""; 44 + my $count = 0; 45 + 46 + if (!defined($dev)) { 47 + die("Please specify a device.\n"); 48 + } 49 + 50 + my $dev_size = `blockdev --getsz $dev`; 51 + my $extents = int($dev_size / $extent_size) - 52 + (($dev_size % $extent_size) ? 1 : 0); 53 + 54 + while ($extents > 0) { 55 + my $this_start = $count * $extent_size; 56 + $extents--; 57 + $count++; 58 + my $this_offset = $extents * $extent_size; 59 + 60 + $table .= "$this_start $extent_size linear $dev $this_offset\n"; 61 + } 62 + 63 + `echo \"$table\" | dmsetup create $name`;
-61
Documentation/device-mapper/linear.txt
··· 1 - dm-linear 2 - ========= 3 - 4 - Device-Mapper's "linear" target maps a linear range of the Device-Mapper 5 - device onto a linear range of another device. This is the basic building 6 - block of logical volume managers. 7 - 8 - Parameters: <dev path> <offset> 9 - <dev path>: Full pathname to the underlying block-device, or a 10 - "major:minor" device-number. 11 - <offset>: Starting sector within the device. 12 - 13 - 14 - Example scripts 15 - =============== 16 - [[ 17 - #!/bin/sh 18 - # Create an identity mapping for a device 19 - echo "0 `blockdev --getsz $1` linear $1 0" | dmsetup create identity 20 - ]] 21 - 22 - 23 - [[ 24 - #!/bin/sh 25 - # Join 2 devices together 26 - size1=`blockdev --getsz $1` 27 - size2=`blockdev --getsz $2` 28 - echo "0 $size1 linear $1 0 29 - $size1 $size2 linear $2 0" | dmsetup create joined 30 - ]] 31 - 32 - 33 - [[ 34 - #!/usr/bin/perl -w 35 - # Split a device into 4M chunks and then join them together in reverse order. 36 - 37 - my $name = "reverse"; 38 - my $extent_size = 4 * 1024 * 2; 39 - my $dev = $ARGV[0]; 40 - my $table = ""; 41 - my $count = 0; 42 - 43 - if (!defined($dev)) { 44 - die("Please specify a device.\n"); 45 - } 46 - 47 - my $dev_size = `blockdev --getsz $dev`; 48 - my $extents = int($dev_size / $extent_size) - 49 - (($dev_size % $extent_size) ? 1 : 0); 50 - 51 - while ($extents > 0) { 52 - my $this_start = $count * $extent_size; 53 - $extents--; 54 - $count++; 55 - my $this_offset = $extents * $extent_size; 56 - 57 - $table .= "$this_start $extent_size linear $dev $this_offset\n"; 58 - } 59 - 60 - `echo \"$table\" | dmsetup create $name`; 61 - ]]
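All sizes in the linear example scripts are in 512-byte sectors (the unit `blockdev --getsz` reports), which is why the reverse-split script writes a 4M extent as `4 * 1024 * 2`. A standalone sketch of that extent arithmetic, with a hardcoded 10 MiB device size standing in for `blockdev --getsz`:

```shell
# Extent math from the reverse-split example, in 512-byte sectors.
extent_size=$((4 * 1024 * 2))   # 4 MiB expressed in sectors
dev_size=20480                  # stand-in for: blockdev --getsz $dev (10 MiB)
extents=$((dev_size / extent_size))
if [ $((dev_size % extent_size)) -ne 0 ]; then
    extents=$((extents - 1))    # same trailing-extent adjustment the script makes
fi
echo "$extent_size $extents"
```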
+48 -43
Documentation/device-mapper/log-writes.txt Documentation/device-mapper/log-writes.rst
··· 1 + ============= 1 2 dm-log-writes 2 3 ============= 3 4 ··· 26 25 simulate the worst case scenario with regard to power failures. Consider the 27 26 following example (W means write, C means complete): 28 27 29 - W1,W2,W3,C3,C2,Wflush,C1,Cflush 28 + W1,W2,W3,C3,C2,Wflush,C1,Cflush 30 29 31 - The log would show the following 30 + The log would show the following: 32 31 33 - W3,W2,flush,W1.... 32 + W3,W2,flush,W1.... 34 33 35 34 Again this is to simulate what is actually on disk, this allows us to detect 36 35 cases where a power failure at a particular point in time would create an ··· 43 42 have all the DISCARD requests, and then the WRITE requests and then the FLUSH 44 43 request. Consider the following example: 45 44 46 - WRITE block 1, DISCARD block 1, FLUSH 45 + WRITE block 1, DISCARD block 1, FLUSH 47 46 48 - If we logged DISCARD when it completed, the replay would look like this 47 + If we logged DISCARD when it completed, the replay would look like this: 49 48 50 - DISCARD 1, WRITE 1, FLUSH 49 + DISCARD 1, WRITE 1, FLUSH 51 50 52 51 which isn't quite what happened and wouldn't be caught during the log replay. 53 52 ··· 58 57 59 58 log-writes <dev_path> <log_dev_path> 60 59 61 - dev_path : Device that all of the IO will go to normally. 62 - log_dev_path : Device where the log entries are written to. 60 + ============= ============================================== 61 + dev_path Device that all of the IO will go to normally. 62 + log_dev_path Device where the log entries are written to. 
63 + ============= ============================================== 63 64 64 65 ii) Status 65 66 66 67 <#logged entries> <highest allocated sector> 67 68 68 - #logged entries : Number of logged entries 69 - highest allocated sector : Highest allocated sector 69 + =========================== ======================== 70 + #logged entries Number of logged entries 71 + highest allocated sector Highest allocated sector 72 + =========================== ======================== 70 73 71 74 iii) Messages 72 75 ··· 80 75 For example say you want to fsck a file system after every 81 76 write, but first you need to replay up to the mkfs to make sure 82 77 we're fsck'ing something reasonable, you would do something like 83 - this: 78 + this:: 84 79 85 80 mkfs.btrfs -f /dev/mapper/log 86 81 dmsetup message log 0 mark mkfs 87 82 <run test> 88 83 89 - This would allow you to replay the log up to the mkfs mark and 90 - then replay from that point on doing the fsck check in the 91 - interval that you want. 84 + This would allow you to replay the log up to the mkfs mark and 85 + then replay from that point on doing the fsck check in the 86 + interval that you want. 92 87 93 88 Every log has a mark at the end labeled "dm-log-writes-end". 94 89 ··· 102 97 ============= 103 98 104 99 Say you want to test fsync on your file system. 
You would do something like 105 - this: 100 + this:: 106 101 107 - TABLE="0 $(blockdev --getsz /dev/sdb) log-writes /dev/sdb /dev/sdc" 108 - dmsetup create log --table "$TABLE" 109 - mkfs.btrfs -f /dev/mapper/log 110 - dmsetup message log 0 mark mkfs 102 + TABLE="0 $(blockdev --getsz /dev/sdb) log-writes /dev/sdb /dev/sdc" 103 + dmsetup create log --table "$TABLE" 104 + mkfs.btrfs -f /dev/mapper/log 105 + dmsetup message log 0 mark mkfs 111 106 112 - mount /dev/mapper/log /mnt/btrfs-test 113 - <some test that does fsync at the end> 114 - dmsetup message log 0 mark fsync 115 - md5sum /mnt/btrfs-test/foo 116 - umount /mnt/btrfs-test 107 + mount /dev/mapper/log /mnt/btrfs-test 108 + <some test that does fsync at the end> 109 + dmsetup message log 0 mark fsync 110 + md5sum /mnt/btrfs-test/foo 111 + umount /mnt/btrfs-test 117 112 118 - dmsetup remove log 119 - replay-log --log /dev/sdc --replay /dev/sdb --end-mark fsync 120 - mount /dev/sdb /mnt/btrfs-test 121 - md5sum /mnt/btrfs-test/foo 122 - <verify md5sum's are correct> 113 + dmsetup remove log 114 + replay-log --log /dev/sdc --replay /dev/sdb --end-mark fsync 115 + mount /dev/sdb /mnt/btrfs-test 116 + md5sum /mnt/btrfs-test/foo 117 + <verify md5sum's are correct> 123 118 124 - Another option is to do a complicated file system operation and verify the file 125 - system is consistent during the entire operation. You could do this with: 119 + Another option is to do a complicated file system operation and verify the file 120 + system is consistent during the entire operation. 
You could do this with: 126 121 127 - TABLE="0 $(blockdev --getsz /dev/sdb) log-writes /dev/sdb /dev/sdc" 128 - dmsetup create log --table "$TABLE" 129 - mkfs.btrfs -f /dev/mapper/log 130 - dmsetup message log 0 mark mkfs 122 + TABLE="0 $(blockdev --getsz /dev/sdb) log-writes /dev/sdb /dev/sdc" 123 + dmsetup create log --table "$TABLE" 124 + mkfs.btrfs -f /dev/mapper/log 125 + dmsetup message log 0 mark mkfs 131 126 132 - mount /dev/mapper/log /mnt/btrfs-test 133 - <fsstress to dirty the fs> 134 - btrfs filesystem balance /mnt/btrfs-test 135 - umount /mnt/btrfs-test 136 - dmsetup remove log 127 + mount /dev/mapper/log /mnt/btrfs-test 128 + <fsstress to dirty the fs> 129 + btrfs filesystem balance /mnt/btrfs-test 130 + umount /mnt/btrfs-test 131 + dmsetup remove log 137 132 138 - replay-log --log /dev/sdc --replay /dev/sdb --end-mark mkfs 139 - btrfsck /dev/sdb 140 - replay-log --log /dev/sdc --replay /dev/sdb --start-mark mkfs \ 133 + replay-log --log /dev/sdc --replay /dev/sdb --end-mark mkfs 134 + btrfsck /dev/sdb 135 + replay-log --log /dev/sdc --replay /dev/sdb --start-mark mkfs \ 141 136 --fsck "btrfsck /dev/sdb" --check fua 142 137 143 138 And that will replay the log until it sees a FUA request, run the fsck command
+4
Documentation/device-mapper/persistent-data.txt Documentation/device-mapper/persistent-data.rst
··· 1 + =============== 2 + Persistent data 3 + =============== 4 + 1 5 Introduction 2 6 ============ 3 7
+60 -56
Documentation/device-mapper/snapshot.txt Documentation/device-mapper/snapshot.rst
··· 1 + ============================== 1 2 Device-mapper snapshot support 2 3 ============================== 3 4 4 5 Device-mapper allows you, without massive data copying: 5 6 6 - *) To create snapshots of any block device i.e. mountable, saved states of 7 - the block device which are also writable without interfering with the 8 - original content; 9 - *) To create device "forks", i.e. multiple different versions of the 10 - same data stream. 11 - *) To merge a snapshot of a block device back into the snapshot's origin 12 - device. 7 + - To create snapshots of any block device i.e. mountable, saved states of 8 + the block device which are also writable without interfering with the 9 + original content; 10 + - To create device "forks", i.e. multiple different versions of the 11 + same data stream. 12 + - To merge a snapshot of a block device back into the snapshot's origin 13 + device. 13 14 14 15 In the first two cases, dm copies only the chunks of data that get 15 16 changed and uses a separate copy-on-write (COW) block device for ··· 23 22 There are three dm targets available: 24 23 snapshot, snapshot-origin, and snapshot-merge. 25 24 26 - *) snapshot-origin <origin> 25 + - snapshot-origin <origin> 27 26 28 27 which will normally have one or more snapshots based on it. 29 28 Reads will be mapped directly to the backing device. For each write, the ··· 31 30 its visible content unchanged, at least until the <COW device> fills up. 32 31 33 32 34 - *) snapshot <origin> <COW device> <persistent?> <chunksize> 33 + - snapshot <origin> <COW device> <persistent?> <chunksize> 35 34 36 35 A snapshot of the <origin> block device is created. Changed chunks of 37 36 <chunksize> sectors will be stored on the <COW device>. Writes will ··· 84 83 source volume), whose table is replaced by a "snapshot-origin" mapping 85 84 from device #1. 
86 85 87 - A fixed naming scheme is used, so with the following commands: 86 + A fixed naming scheme is used, so with the following commands:: 88 87 89 - lvcreate -L 1G -n base volumeGroup 90 - lvcreate -L 100M --snapshot -n snap volumeGroup/base 88 + lvcreate -L 1G -n base volumeGroup 89 + lvcreate -L 100M --snapshot -n snap volumeGroup/base 91 90 92 - we'll have this situation (with volumes in above order): 91 + we'll have this situation (with volumes in above order):: 93 92 94 - # dmsetup table|grep volumeGroup 93 + # dmsetup table|grep volumeGroup 95 94 96 - volumeGroup-base-real: 0 2097152 linear 8:19 384 97 - volumeGroup-snap-cow: 0 204800 linear 8:19 2097536 98 - volumeGroup-snap: 0 2097152 snapshot 254:11 254:12 P 16 99 - volumeGroup-base: 0 2097152 snapshot-origin 254:11 95 + volumeGroup-base-real: 0 2097152 linear 8:19 384 96 + volumeGroup-snap-cow: 0 204800 linear 8:19 2097536 97 + volumeGroup-snap: 0 2097152 snapshot 254:11 254:12 P 16 98 + volumeGroup-base: 0 2097152 snapshot-origin 254:11 100 99 101 - # ls -lL /dev/mapper/volumeGroup-* 102 - brw------- 1 root root 254, 11 29 ago 18:15 /dev/mapper/volumeGroup-base-real 103 - brw------- 1 root root 254, 12 29 ago 18:15 /dev/mapper/volumeGroup-snap-cow 104 - brw------- 1 root root 254, 13 29 ago 18:15 /dev/mapper/volumeGroup-snap 105 - brw------- 1 root root 254, 10 29 ago 18:14 /dev/mapper/volumeGroup-base 100 + # ls -lL /dev/mapper/volumeGroup-* 101 + brw------- 1 root root 254, 11 29 ago 18:15 /dev/mapper/volumeGroup-base-real 102 + brw------- 1 root root 254, 12 29 ago 18:15 /dev/mapper/volumeGroup-snap-cow 103 + brw------- 1 root root 254, 13 29 ago 18:15 /dev/mapper/volumeGroup-snap 104 + brw------- 1 root root 254, 10 29 ago 18:14 /dev/mapper/volumeGroup-base 106 105 107 106 108 107 How snapshot-merge is used by LVM2 ··· 115 114 COW device to the "snapshot-merge" is deactivated (unless using lvchange 116 115 --refresh); but if it is left active it will simply return I/O errors. 
117 116 118 - A snapshot will merge into its origin with the following command: 117 + A snapshot will merge into its origin with the following command:: 119 118 120 - lvconvert --merge volumeGroup/snap 119 + lvconvert --merge volumeGroup/snap 121 120 122 - we'll now have this situation: 121 + we'll now have this situation:: 123 122 124 - # dmsetup table|grep volumeGroup 123 + # dmsetup table|grep volumeGroup 125 124 126 - volumeGroup-base-real: 0 2097152 linear 8:19 384 127 - volumeGroup-base-cow: 0 204800 linear 8:19 2097536 128 - volumeGroup-base: 0 2097152 snapshot-merge 254:11 254:12 P 16 125 + volumeGroup-base-real: 0 2097152 linear 8:19 384 126 + volumeGroup-base-cow: 0 204800 linear 8:19 2097536 127 + volumeGroup-base: 0 2097152 snapshot-merge 254:11 254:12 P 16 129 128 130 - # ls -lL /dev/mapper/volumeGroup-* 131 - brw------- 1 root root 254, 11 29 ago 18:15 /dev/mapper/volumeGroup-base-real 132 - brw------- 1 root root 254, 12 29 ago 18:16 /dev/mapper/volumeGroup-base-cow 133 - brw------- 1 root root 254, 10 29 ago 18:16 /dev/mapper/volumeGroup-base 129 + # ls -lL /dev/mapper/volumeGroup-* 130 + brw------- 1 root root 254, 11 29 ago 18:15 /dev/mapper/volumeGroup-base-real 131 + brw------- 1 root root 254, 12 29 ago 18:16 /dev/mapper/volumeGroup-base-cow 132 + brw------- 1 root root 254, 10 29 ago 18:16 /dev/mapper/volumeGroup-base 134 133 135 134 136 135 How to determine when a merging is complete 137 136 =========================================== 138 137 The snapshot-merge and snapshot status lines end with: 138 + 139 139 <sectors_allocated>/<total_sectors> <metadata_sectors> 140 140 141 141 Both <sectors_allocated> and <total_sectors> include both data and metadata. ··· 144 142 smaller. Merging has finished when the number of sectors holding data 145 143 is zero, in other words <sectors_allocated> == <metadata_sectors>. 
146 144 147 - Here is a practical example (using a hybrid of lvm and dmsetup commands): 145 + Here is a practical example (using a hybrid of lvm and dmsetup commands):: 148 146 149 - # lvs 150 - LV VG Attr LSize Origin Snap% Move Log Copy% Convert 151 - base volumeGroup owi-a- 4.00g 152 - snap volumeGroup swi-a- 1.00g base 18.97 147 + # lvs 148 + LV VG Attr LSize Origin Snap% Move Log Copy% Convert 149 + base volumeGroup owi-a- 4.00g 150 + snap volumeGroup swi-a- 1.00g base 18.97 153 151 154 - # dmsetup status volumeGroup-snap 155 - 0 8388608 snapshot 397896/2097152 1560 156 - ^^^^ metadata sectors 152 + # dmsetup status volumeGroup-snap 153 + 0 8388608 snapshot 397896/2097152 1560 154 + ^^^^ metadata sectors 157 155 158 - # lvconvert --merge -b volumeGroup/snap 159 - Merging of volume snap started. 156 + # lvconvert --merge -b volumeGroup/snap 157 + Merging of volume snap started. 160 158 161 - # lvs volumeGroup/snap 162 - LV VG Attr LSize Origin Snap% Move Log Copy% Convert 163 - base volumeGroup Owi-a- 4.00g 17.23 159 + # lvs volumeGroup/snap 160 + LV VG Attr LSize Origin Snap% Move Log Copy% Convert 161 + base volumeGroup Owi-a- 4.00g 17.23 164 162 165 - # dmsetup status volumeGroup-base 166 - 0 8388608 snapshot-merge 281688/2097152 1104 163 + # dmsetup status volumeGroup-base 164 + 0 8388608 snapshot-merge 281688/2097152 1104 167 165 168 - # dmsetup status volumeGroup-base 169 - 0 8388608 snapshot-merge 180480/2097152 712 166 + # dmsetup status volumeGroup-base 167 + 0 8388608 snapshot-merge 180480/2097152 712 170 168 171 - # dmsetup status volumeGroup-base 172 - 0 8388608 snapshot-merge 16/2097152 16 169 + # dmsetup status volumeGroup-base 170 + 0 8388608 snapshot-merge 16/2097152 16 173 171 174 172 Merging has finished. 175 173 176 - # lvs 177 - LV VG Attr LSize Origin Snap% Move Log Copy% Convert 178 - base volumeGroup owi-a- 4.00g 174 + :: 175 + 176 + # lvs 177 + LV VG Attr LSize Origin Snap% Move Log Copy% Convert 178 + base volumeGroup owi-a- 4.00g
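The completion rule above (merging is finished when <sectors_allocated> equals <metadata_sectors>) can be applied mechanically to a status line. A sketch using the final `dmsetup status` line from the example:

```shell
# Decide whether a snapshot-merge has finished from a dmsetup status line.
# Field layout (from the doc): ... <sectors_allocated>/<total_sectors> <metadata_sectors>
status='0 8388608 snapshot-merge 16/2097152 16'
set -- $status
allocated=${4%%/*}       # strip the "/<total_sectors>" part
metadata=$5
if [ "$allocated" -eq "$metadata" ]; then
    state=finished
else
    state=merging
fi
echo "$state"
```

In live use `status` would come from `dmsetup status volumeGroup-base`, polled as in the example.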
+32 -30
Documentation/device-mapper/statistics.txt Documentation/device-mapper/statistics.rst
··· 1 + ============= 1 2 DM statistics 2 3 ============= 3 4 ··· 12 11 the range specified. 13 12 14 13 The I/O statistics counters for each step-sized area of a region are 15 - in the same format as /sys/block/*/stat or /proc/diskstats (see: 14 + in the same format as `/sys/block/*/stat` or `/proc/diskstats` (see: 16 15 Documentation/iostats.txt). But two extra counters (12 and 13) are 17 16 provided: total time spent reading and writing. When the histogram 18 17 argument is used, the 14th parameter is reported that represents the ··· 33 32 The creation of DM statistics will allocate memory via kmalloc or 34 33 fallback to using vmalloc space. At most, 1/4 of the overall system 35 34 memory may be allocated by DM statistics. The admin can see how much 36 - memory is used by reading 37 - /sys/module/dm_mod/parameters/stats_current_allocated_bytes 35 + memory is used by reading: 36 + 37 + /sys/module/dm_mod/parameters/stats_current_allocated_bytes 38 38 39 39 Messages 40 40 ======== 41 41 42 - @stats_create <range> <step> 43 - [<number_of_optional_arguments> <optional_arguments>...] 44 - [<program_id> [<aux_data>]] 45 - 42 + @stats_create <range> <step> [<number_of_optional_arguments> <optional_arguments>...] [<program_id> [<aux_data>]] 46 43 Create a new region and return the region_id. 47 44 48 45 <range> 49 - "-" - whole device 50 - "<start_sector>+<length>" - a range of <length> 512-byte sectors 51 - starting with <start_sector>. 46 + "-" 47 + whole device 48 + "<start_sector>+<length>" 49 + a range of <length> 512-byte sectors 50 + starting with <start_sector>. 52 51 53 52 <step> 54 - "<area_size>" - the range is subdivided into areas each containing 55 - <area_size> sectors. 56 - "/<number_of_areas>" - the range is subdivided into the specified 57 - number of areas. 53 + "<area_size>" 54 + the range is subdivided into areas each containing 55 + <area_size> sectors. 56 + "/<number_of_areas>" 57 + the range is subdivided into the specified 58 + number of areas. 
58 59 59 60 <number_of_optional_arguments> 60 61 The number of optional arguments 61 62 62 63 <optional_arguments> 63 - The following optional arguments are supported 64 - precise_timestamps - use precise timer with nanosecond resolution 64 + The following optional arguments are supported: 65 + 66 + precise_timestamps 67 + use precise timer with nanosecond resolution 65 68 instead of the "jiffies" variable. When this argument is 66 69 used, the resulting times are in nanoseconds instead of 67 70 milliseconds. Precise timestamps are a little bit slower 68 71 to obtain than jiffies-based timestamps. 69 - histogram:n1,n2,n3,n4,... - collect histogram of latencies. The 72 + histogram:n1,n2,n3,n4,... 73 + collect histogram of latencies. The 70 74 numbers n1, n2, etc are times that represent the boundaries 71 75 of the histogram. If precise_timestamps is not used, the 72 76 times are in milliseconds, otherwise they are in ··· 102 96 @stats_list message, but it doesn't use this value for anything. 103 97 104 98 @stats_delete <region_id> 105 - 106 99 Delete the region with the specified id. 107 100 108 101 <region_id> 109 102 region_id returned from @stats_create 110 103 111 104 @stats_clear <region_id> 112 - 113 105 Clear all the counters except the in-flight i/o counters. 114 106 115 107 <region_id> 116 108 region_id returned from @stats_create 117 109 118 110 @stats_list [<program_id>] 119 - 120 111 List all regions registered with @stats_create. 121 112 122 113 <program_id> ··· 130 127 if they were specified when creating the region. 131 128 132 129 @stats_print <region_id> [<starting_line> <number_of_lines>] 133 - 134 130 Print counters for each step-sized area of a region. 135 131 136 132 <region_id> ··· 145 143 146 144 Output format for each step-sized area of a region: 147 145 148 - <start_sector>+<length> counters 146 + <start_sector>+<length> 147 + counters 149 148 150 149 The first 11 counters have the same meaning as 151 - /sys/block/*/stat or /proc/diskstats. 
150 + `/sys/block/*/stat or /proc/diskstats`. 152 151 153 152 Please refer to Documentation/iostats.txt for details. 154 153 ··· 166 163 11. the weighted number of milliseconds spent doing I/Os 167 164 168 165 Additional counters: 166 + 169 167 12. the total time spent reading in milliseconds 170 168 13. the total time spent writing in milliseconds 171 169 172 170 @stats_print_clear <region_id> [<starting_line> <number_of_lines>] 173 - 174 171 Atomically print and then clear all the counters except the 175 172 in-flight i/o counters. Useful when the client consuming the 176 173 statistics does not want to lose any statistics (those updated ··· 188 185 If omitted, all lines are printed and then cleared. 189 186 190 187 @stats_set_aux <region_id> <aux_data> 191 - 192 188 Store auxiliary data aux_data for the specified region. 193 189 194 190 <region_id> ··· 203 201 ======== 204 202 205 203 Subdivide the DM device 'vol' into 100 pieces and start collecting 206 - statistics on them: 204 + statistics on them:: 207 205 208 206 dmsetup message vol 0 @stats_create - /100 209 207 210 208 Set the auxiliary data string to "foo bar baz" (the escape for each 211 - space must also be escaped, otherwise the shell will consume them): 209 + space must also be escaped, otherwise the shell will consume them):: 212 210 213 211 dmsetup message vol 0 @stats_set_aux 0 foo\\ bar\\ baz 214 212 215 - List the statistics: 213 + List the statistics:: 216 214 217 215 dmsetup message vol 0 @stats_list 218 216 219 - Print the statistics: 217 + Print the statistics:: 220 218 221 219 dmsetup message vol 0 @stats_print 0 222 220 223 - Delete the statistics: 221 + Delete the statistics:: 224 222 225 223 dmsetup message vol 0 @stats_delete 0
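The two `<step>` spellings above differ only in which quantity is derived: an explicit area size in sectors, or "/N" for a number of areas. A sketch assembling both message forms (the region and sizes are illustrative; live use pipes the string through `dmsetup message`):

```shell
# Two equivalent ways to spell <step> in a @stats_create message.
range='-'             # whole device
step_by_size='65536'  # areas of 65536 sectors each
step_by_count='/100'  # exactly 100 areas, size derived by the kernel
msg1="@stats_create $range $step_by_size"
msg2="@stats_create $range $step_by_count"
echo "$msg1"
echo "$msg2"
# Live use: dmsetup message vol 0 $msg2
```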
+61
Documentation/device-mapper/striped.rst
··· 1 + ========= 2 + dm-stripe 3 + ========= 4 + 5 + Device-Mapper's "striped" target is used to create a striped (i.e. RAID-0) 6 + device across one or more underlying devices. Data is written in "chunks", 7 + with consecutive chunks rotating among the underlying devices. This can 8 + potentially provide improved I/O throughput by utilizing several physical 9 + devices in parallel. 10 + 11 + Parameters: <num devs> <chunk size> [<dev path> <offset>]+ 12 + <num devs>: 13 + Number of underlying devices. 14 + <chunk size>: 15 + Size of each chunk of data. Must be at least as 16 + large as the system's PAGE_SIZE. 17 + <dev path>: 18 + Full pathname to the underlying block-device, or a 19 + "major:minor" device-number. 20 + <offset>: 21 + Starting sector within the device. 22 + 23 + One or more underlying devices can be specified. The striped device size must 24 + be a multiple of the chunk size multiplied by the number of underlying devices. 25 + 26 + 27 + Example scripts 28 + =============== 29 + 30 + :: 31 + 32 + #!/usr/bin/perl -w 33 + # Create a striped device across any number of underlying devices. The device 34 + # will be called "stripe_dev" and have a chunk-size of 128k. 35 + 36 + my $chunk_size = 128 * 2; 37 + my $dev_name = "stripe_dev"; 38 + my $num_devs = @ARGV; 39 + my @devs = @ARGV; 40 + my ($min_dev_size, $stripe_dev_size, $i); 41 + 42 + if (!$num_devs) { 43 + die("Specify at least one device\n"); 44 + } 45 + 46 + $min_dev_size = `blockdev --getsz $devs[0]`; 47 + for ($i = 1; $i < $num_devs; $i++) { 48 + my $this_size = `blockdev --getsz $devs[$i]`; 49 + $min_dev_size = ($min_dev_size < $this_size) ? 
50 + $min_dev_size : $this_size; 51 + } 52 + 53 + $stripe_dev_size = $min_dev_size * $num_devs; 54 + $stripe_dev_size -= $stripe_dev_size % ($chunk_size * $num_devs); 55 + 56 + $table = "0 $stripe_dev_size striped $num_devs $chunk_size"; 57 + for ($i = 0; $i < $num_devs; $i++) { 58 + $table .= " $devs[$i] 0"; 59 + } 60 + 61 + `echo $table | dmsetup create $dev_name`;
-57
Documentation/device-mapper/striped.txt
··· 1 - dm-stripe 2 - ========= 3 - 4 - Device-Mapper's "striped" target is used to create a striped (i.e. RAID-0) 5 - device across one or more underlying devices. Data is written in "chunks", 6 - with consecutive chunks rotating among the underlying devices. This can 7 - potentially provide improved I/O throughput by utilizing several physical 8 - devices in parallel. 9 - 10 - Parameters: <num devs> <chunk size> [<dev path> <offset>]+ 11 - <num devs>: Number of underlying devices. 12 - <chunk size>: Size of each chunk of data. Must be at least as 13 - large as the system's PAGE_SIZE. 14 - <dev path>: Full pathname to the underlying block-device, or a 15 - "major:minor" device-number. 16 - <offset>: Starting sector within the device. 17 - 18 - One or more underlying devices can be specified. The striped device size must 19 - be a multiple of the chunk size multiplied by the number of underlying devices. 20 - 21 - 22 - Example scripts 23 - =============== 24 - 25 - [[ 26 - #!/usr/bin/perl -w 27 - # Create a striped device across any number of underlying devices. The device 28 - # will be called "stripe_dev" and have a chunk-size of 128k. 29 - 30 - my $chunk_size = 128 * 2; 31 - my $dev_name = "stripe_dev"; 32 - my $num_devs = @ARGV; 33 - my @devs = @ARGV; 34 - my ($min_dev_size, $stripe_dev_size, $i); 35 - 36 - if (!$num_devs) { 37 - die("Specify at least one device\n"); 38 - } 39 - 40 - $min_dev_size = `blockdev --getsz $devs[0]`; 41 - for ($i = 1; $i < $num_devs; $i++) { 42 - my $this_size = `blockdev --getsz $devs[$i]`; 43 - $min_dev_size = ($min_dev_size < $this_size) ? 44 - $min_dev_size : $this_size; 45 - } 46 - 47 - $stripe_dev_size = $min_dev_size * $num_devs; 48 - $stripe_dev_size -= $stripe_dev_size % ($chunk_size * $num_devs); 49 - 50 - $table = "0 $stripe_dev_size striped $num_devs $chunk_size"; 51 - for ($i = 0; $i < $num_devs; $i++) { 52 - $table .= " $devs[$i] 0"; 53 - } 54 - 55 - `echo $table | dmsetup create $dev_name`; 56 - ]] 57 -
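The size constraint stated above (the striped device size must be a multiple of chunk size times the number of devices) is exactly what the example script's modulo line enforces. A standalone sketch of that rounding with hypothetical sizes:

```shell
# Round the striped device size down to a multiple of (chunk_size * num_devs),
# as the example script does. All sizes in 512-byte sectors.
chunk_size=$((128 * 2))   # 128 KiB chunk expressed in sectors
num_devs=3
min_dev_size=1000000      # stand-in for the smallest `blockdev --getsz` result
stripe_dev_size=$((min_dev_size * num_devs))
stripe_dev_size=$((stripe_dev_size - stripe_dev_size % (chunk_size * num_devs)))
echo "$stripe_dev_size"
```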
+25 -22
Documentation/device-mapper/switch.txt Documentation/device-mapper/switch.rst
··· 1 + ========= 1 2 dm-switch 2 3 ========= 3 4 ··· 68 67 Construction Parameters 69 68 ======================= 70 69 71 - <num_paths> <region_size> <num_optional_args> [<optional_args>...] 72 - [<dev_path> <offset>]+ 70 + <num_paths> <region_size> <num_optional_args> [<optional_args>...] [<dev_path> <offset>]+ 71 + <num_paths> 72 + The number of paths across which to distribute the I/O. 73 73 74 - <num_paths> 75 - The number of paths across which to distribute the I/O. 74 + <region_size> 75 + The number of 512-byte sectors in a region. Each region can be redirected 76 + to any of the available paths. 76 77 77 - <region_size> 78 - The number of 512-byte sectors in a region. Each region can be redirected 79 - to any of the available paths. 78 + <num_optional_args> 79 + The number of optional arguments. Currently, no optional arguments 80 + are supported and so this must be zero. 80 81 81 - <num_optional_args> 82 - The number of optional arguments. Currently, no optional arguments 83 - are supported and so this must be zero. 82 + <dev_path> 83 + The block device that represents a specific path to the device. 84 84 85 - <dev_path> 86 - The block device that represents a specific path to the device. 87 - 88 - <offset> 89 - The offset of the start of data on the specific <dev_path> (in units 90 - of 512-byte sectors). This number is added to the sector number when 91 - forwarding the request to the specific path. Typically it is zero. 85 + <offset> 86 + The offset of the start of data on the specific <dev_path> (in units 87 + of 512-byte sectors). This number is added to the sector number when 88 + forwarding the request to the specific path. Typically it is zero. 92 89 93 90 Messages 94 91 ======== ··· 121 122 Assume that you have volumes vg1/switch0 vg1/switch1 vg1/switch2 with 122 123 the same size. 
123 124 124 - Create a switch device with 64kB region size: 125 + Create a switch device with 64kB region size:: 126 + 125 127 dmsetup create switch --table "0 `blockdev --getsz /dev/vg1/switch0` 126 128 switch 3 128 0 /dev/vg1/switch0 0 /dev/vg1/switch1 0 /dev/vg1/switch2 0" 127 129 128 130 Set mappings for the first 7 entries to point to devices switch0, switch1, 129 - switch2, switch0, switch1, switch2, switch1: 131 + switch2, switch0, switch1, switch2, switch1:: 132 + 130 133 dmsetup message switch 0 set_region_mappings 0:0 :1 :2 :0 :1 :2 :1 131 134 132 - Set repetitive mapping. This command: 135 + Set repetitive mapping. This command:: 136 + 133 137 dmsetup message switch 0 set_region_mappings 1000:1 :2 R2,10 134 - is equivalent to: 138 + 139 + is equivalent to:: 140 + 135 141 dmsetup message switch 0 set_region_mappings 1000:1 :2 :1 :2 :1 :2 :1 :2 \ 136 142 :1 :2 :1 :2 :1 :2 :1 :2 :1 :2 137 -
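As a quick sanity check on the `<region_size>` parameter described above: the number of regions in a switch device is its size divided by the region size, rounded up for a final partial region. A sketch with example values (the sizes are illustrative, not from the document):

```shell
# How many regions a switch device of a given size has.
dev_size=250000          # device size in 512-byte sectors (example)
region_size=128          # 64KiB regions, in 512-byte sectors

# Round up so a trailing partial region still gets a table entry.
num_regions=$(( (dev_size + region_size - 1) / region_size ))
echo "$num_regions"
```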
+42 -26
Documentation/device-mapper/thin-provisioning.txt Documentation/device-mapper/thin-provisioning.rst
··· 1 + ================= 2 + Thin provisioning 3 + ================= 4 + 1 5 Introduction 2 6 ============ 3 7 ··· 99 95 Using an existing pool device 100 96 ----------------------------- 101 97 98 + :: 99 + 102 100 dmsetup create pool \ 103 101 --table "0 20971520 thin-pool $metadata_dev $data_dev \ 104 102 $data_block_size $low_water_mark" ··· 160 154 i) Creating a new thinly-provisioned volume. 161 155 162 156 To create a new thinly- provisioned volume you must send a message to an 163 - active pool device, /dev/mapper/pool in this example. 157 + active pool device, /dev/mapper/pool in this example:: 164 158 165 159 dmsetup message /dev/mapper/pool 0 "create_thin 0" 166 160 ··· 170 164 171 165 ii) Using a thinly-provisioned volume. 172 166 173 - Thinly-provisioned volumes are activated using the 'thin' target: 167 + Thinly-provisioned volumes are activated using the 'thin' target:: 174 168 175 169 dmsetup create thin --table "0 2097152 thin /dev/mapper/pool 0" 176 170 ··· 186 180 N.B. If the origin device that you wish to snapshot is active, you 187 181 must suspend it before creating the snapshot to avoid corruption. 188 182 This is NOT enforced at the moment, so please be careful! 183 + 184 + :: 189 185 190 186 dmsetup suspend /dev/mapper/thin 191 187 dmsetup message /dev/mapper/pool 0 "create_snap 1 0" ··· 206 198 activating or removing them both. (This differs from conventional 207 199 device-mapper snapshots.) 208 200 209 - Activate it exactly the same way as any other thinly-provisioned volume: 201 + Activate it exactly the same way as any other thinly-provisioned volume:: 210 202 211 203 dmsetup create snap --table "0 2097152 thin /dev/mapper/pool 1" 212 204 213 205 External snapshots 214 206 ------------------ 215 207 216 - You can use an external _read only_ device as an origin for a 208 + You can use an external **read only** device as an origin for a 217 209 thinly-provisioned volume. 
Any read to an unprovisioned area of the 218 210 thin device will be passed through to the origin. Writes trigger 219 211 the allocation of new blocks as usual. ··· 231 223 This is the same as creating a thin device. 232 224 You don't mention the origin at this stage. 233 225 226 + :: 227 + 234 228 dmsetup message /dev/mapper/pool 0 "create_thin 0" 235 229 236 230 ii) Using a snapshot of an external device. 237 231 238 - Append an extra parameter to the thin target specifying the origin: 232 + Append an extra parameter to the thin target specifying the origin:: 239 233 240 234 dmsetup create snap --table "0 2097152 thin /dev/mapper/pool 0 /dev/image" 241 235 ··· 249 239 250 240 All devices using a pool must be deactivated before the pool itself 251 241 can be. 242 + 243 + :: 252 244 253 245 dmsetup remove thin 254 246 dmsetup remove snap ··· 264 252 265 253 i) Constructor 266 254 267 - thin-pool <metadata dev> <data dev> <data block size (sectors)> \ 268 - <low water mark (blocks)> [<number of feature args> [<arg>]*] 255 + :: 256 + 257 + thin-pool <metadata dev> <data dev> <data block size (sectors)> \ 258 + <low water mark (blocks)> [<number of feature args> [<arg>]*] 269 259 270 260 Optional feature arguments: 271 261 272 - skip_block_zeroing: Skip the zeroing of newly-provisioned blocks. 262 + skip_block_zeroing: 263 + Skip the zeroing of newly-provisioned blocks. 273 264 274 - ignore_discard: Disable discard support. 265 + ignore_discard: 266 + Disable discard support. 275 267 276 - no_discard_passdown: Don't pass discards down to the underlying 277 - data device, but just remove the mapping. 268 + no_discard_passdown: 269 + Don't pass discards down to the underlying 270 + data device, but just remove the mapping. 278 271 279 - read_only: Don't allow any changes to be made to the pool 272 + read_only: 273 + Don't allow any changes to be made to the pool 280 274 metadata. 
This mode is only available after the 281 275 thin-pool has been created and first used in full 282 276 read/write mode. It cannot be specified on initial 283 277 thin-pool creation. 284 278 285 - error_if_no_space: Error IOs, instead of queueing, if no space. 279 + error_if_no_space: 280 + Error IOs, instead of queueing, if no space. 286 281 287 282 Data block size must be between 64KB (128 sectors) and 1GB 288 283 (2097152 sectors) inclusive. ··· 297 278 298 279 ii) Status 299 280 300 - <transaction id> <used metadata blocks>/<total metadata blocks> 301 - <used data blocks>/<total data blocks> <held metadata root> 302 - ro|rw|out_of_data_space [no_]discard_passdown [error|queue]_if_no_space 303 - needs_check|- metadata_low_watermark 281 + :: 282 + 283 + <transaction id> <used metadata blocks>/<total metadata blocks> 284 + <used data blocks>/<total data blocks> <held metadata root> 285 + ro|rw|out_of_data_space [no_]discard_passdown [error|queue]_if_no_space 286 + needs_check|- metadata_low_watermark 304 287 305 288 transaction id: 306 289 A 64-bit number used by userspace to help synchronise with metadata ··· 357 336 iii) Messages 358 337 359 338 create_thin <dev id> 360 - 361 339 Create a new thinly-provisioned device. 362 340 <dev id> is an arbitrary unique 24-bit identifier chosen by 363 341 the caller. 364 342 365 343 create_snap <dev id> <origin id> 366 - 367 344 Create a new snapshot of another thinly-provisioned device. 368 345 <dev id> is an arbitrary unique 24-bit identifier chosen by 369 346 the caller. ··· 369 350 of which the new device will be a snapshot. 370 351 371 352 delete <dev id> 372 - 373 353 Deletes a thin device. Irreversible. 374 354 375 355 set_transaction_id <current id> <new id> 376 - 377 356 Userland volume managers, such as LVM, need a way to 378 357 synchronise their external metadata with the internal metadata of the 379 358 pool target. The thin-pool target offers to store an ··· 381 364 compare-and-swap message. 
382 365 383 366 reserve_metadata_snap 384 - 385 367 Reserve a copy of the data mapping btree for use by userland. 386 368 This allows userland to inspect the mappings as they were when 387 369 this message was executed. Use the pool's status command to 388 370 get the root block associated with the metadata snapshot. 389 371 390 372 release_metadata_snap 391 - 392 373 Release a previously reserved copy of the data mapping btree. 393 374 394 375 'thin' target ··· 394 379 395 380 i) Constructor 396 381 397 - thin <pool dev> <dev id> [<external origin dev>] 382 + :: 383 + 384 + thin <pool dev> <dev id> [<external origin dev>] 398 385 399 386 pool dev: 400 387 the thin-pool device, e.g. /dev/mapper/my_pool or 253:0 ··· 418 401 419 402 ii) Status 420 403 421 - <nr mapped sectors> <highest mapped sector> 422 - 404 + <nr mapped sectors> <highest mapped sector> 423 405 If the pool has encountered device errors and failed, the status 424 406 will just contain the string 'Fail'. The userspace recovery 425 407 tools should then be used.
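A small sketch of the data-block-size constraint quoted above (between 64KB / 128 sectors and 1GB / 2097152 sectors); the additional multiple-of-64KB requirement is an assumption based on dm-thin's documented behaviour rather than this excerpt:

```shell
# Validate a candidate thin-pool data block size, in 512-byte sectors.
data_block_size=1024     # example: 512KiB

valid=0
if [ "$data_block_size" -ge 128 ] && [ "$data_block_size" -le 2097152 ] \
   && [ $(( data_block_size % 128 )) -eq 0 ]; then
        valid=1
fi
echo "$valid"
```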
+52 -41
Documentation/device-mapper/unstriped.txt Documentation/device-mapper/unstriped.rst
··· 1 + ================================ 2 + Device-mapper "unstriped" target 3 + ================================ 4 + 1 5 Introduction 2 6 ============ 3 7 ··· 38 34 the unstriped target ontop of the striped device to access the 39 35 individual backing loop devices. We write data to the newly exposed 40 36 unstriped devices and verify the data written matches the correct 41 - underlying device on the striped array. 37 + underlying device on the striped array:: 42 38 43 - #!/bin/bash 39 + #!/bin/bash 44 40 45 - MEMBER_SIZE=$((128 * 1024 * 1024)) 46 - NUM=4 47 - SEQ_END=$((${NUM}-1)) 48 - CHUNK=256 49 - BS=4096 41 + MEMBER_SIZE=$((128 * 1024 * 1024)) 42 + NUM=4 43 + SEQ_END=$((${NUM}-1)) 44 + CHUNK=256 45 + BS=4096 50 46 51 - RAID_SIZE=$((${MEMBER_SIZE}*${NUM}/512)) 52 - DM_PARMS="0 ${RAID_SIZE} striped ${NUM} ${CHUNK}" 53 - COUNT=$((${MEMBER_SIZE} / ${BS})) 47 + RAID_SIZE=$((${MEMBER_SIZE}*${NUM}/512)) 48 + DM_PARMS="0 ${RAID_SIZE} striped ${NUM} ${CHUNK}" 49 + COUNT=$((${MEMBER_SIZE} / ${BS})) 54 50 55 - for i in $(seq 0 ${SEQ_END}); do 56 - dd if=/dev/zero of=member-${i} bs=${MEMBER_SIZE} count=1 oflag=direct 57 - losetup /dev/loop${i} member-${i} 58 - DM_PARMS+=" /dev/loop${i} 0" 59 - done 51 + for i in $(seq 0 ${SEQ_END}); do 52 + dd if=/dev/zero of=member-${i} bs=${MEMBER_SIZE} count=1 oflag=direct 53 + losetup /dev/loop${i} member-${i} 54 + DM_PARMS+=" /dev/loop${i} 0" 55 + done 60 56 61 - echo $DM_PARMS | dmsetup create raid0 62 - for i in $(seq 0 ${SEQ_END}); do 63 - echo "0 1 unstriped ${NUM} ${CHUNK} ${i} /dev/mapper/raid0 0" | dmsetup create set-${i} 64 - done; 57 + echo $DM_PARMS | dmsetup create raid0 58 + for i in $(seq 0 ${SEQ_END}); do 59 + echo "0 1 unstriped ${NUM} ${CHUNK} ${i} /dev/mapper/raid0 0" | dmsetup create set-${i} 60 + done; 65 61 66 - for i in $(seq 0 ${SEQ_END}); do 67 - dd if=/dev/urandom of=/dev/mapper/set-${i} bs=${BS} count=${COUNT} oflag=direct 68 - diff /dev/mapper/set-${i} member-${i} 69 - done; 62 + for i in $(seq 0 
${SEQ_END}); do 63 + dd if=/dev/urandom of=/dev/mapper/set-${i} bs=${BS} count=${COUNT} oflag=direct 64 + diff /dev/mapper/set-${i} member-${i} 65 + done; 70 66 71 - for i in $(seq 0 ${SEQ_END}); do 72 - dmsetup remove set-${i} 73 - done 67 + for i in $(seq 0 ${SEQ_END}); do 68 + dmsetup remove set-${i} 69 + done 74 70 75 - dmsetup remove raid0 71 + dmsetup remove raid0 76 72 77 - for i in $(seq 0 ${SEQ_END}); do 78 - losetup -d /dev/loop${i} 79 - rm -f member-${i} 80 - done 73 + for i in $(seq 0 ${SEQ_END}); do 74 + losetup -d /dev/loop${i} 75 + rm -f member-${i} 76 + done 81 77 82 78 Another example 83 79 --------------- ··· 85 81 Intel NVMe drives contain two cores on the physical device. 86 82 Each core of the drive has segregated access to its LBA range. 87 83 The current LBA model has a RAID 0 128k chunk on each core, resulting 88 - in a 256k stripe across the two cores: 84 + in a 256k stripe across the two cores:: 89 85 90 86 Core 0: Core 1: 91 87 __________ __________ ··· 112 108 113 109 unstriped ontop of Intel NVMe device that has 2 cores 114 110 ----------------------------------------------------- 115 - dmsetup create nvmset0 --table '0 512 unstriped 2 256 0 /dev/nvme0n1 0' 116 - dmsetup create nvmset1 --table '0 512 unstriped 2 256 1 /dev/nvme0n1 0' 111 + 112 + :: 113 + 114 + dmsetup create nvmset0 --table '0 512 unstriped 2 256 0 /dev/nvme0n1 0' 115 + dmsetup create nvmset1 --table '0 512 unstriped 2 256 1 /dev/nvme0n1 0' 117 116 118 117 There will now be two devices that expose Intel NVMe core 0 and 1 119 - respectively: 120 - /dev/mapper/nvmset0 121 - /dev/mapper/nvmset1 118 + respectively:: 119 + 120 + /dev/mapper/nvmset0 121 + /dev/mapper/nvmset1 122 122 123 123 unstriped ontop of striped with 4 drives using 128K chunk size 124 124 -------------------------------------------------------------- 125 - dmsetup create raid_disk0 --table '0 512 unstriped 4 256 0 /dev/mapper/striped 0' 126 - dmsetup create raid_disk1 --table '0 512 unstriped 4 256 1 
/dev/mapper/striped 0' 127 - dmsetup create raid_disk2 --table '0 512 unstriped 4 256 2 /dev/mapper/striped 0' 128 - dmsetup create raid_disk3 --table '0 512 unstriped 4 256 3 /dev/mapper/striped 0' 125 + 126 + :: 127 + 128 + dmsetup create raid_disk0 --table '0 512 unstriped 4 256 0 /dev/mapper/striped 0' 129 + dmsetup create raid_disk1 --table '0 512 unstriped 4 256 1 /dev/mapper/striped 0' 130 + dmsetup create raid_disk2 --table '0 512 unstriped 4 256 2 /dev/mapper/striped 0' 131 + dmsetup create raid_disk3 --table '0 512 unstriped 4 256 3 /dev/mapper/striped 0'
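The layout described above implies a simple mapping from a sector of one unstriped member back to the parent striped device. This is an illustrative reconstruction of that arithmetic, not code from the target:

```shell
# Where a sector on unstriped member <i> lands on the striped parent.
chunk=256        # chunk size in 512-byte sectors
num=4            # number of stripe members
i=1              # which member this unstriped device exposes
s=300            # sector offset within the unstriped member

# Full stripes below us, plus our chunk's offset in the stripe, plus the
# offset within the chunk.
parent=$(( (s / chunk) * num * chunk + i * chunk + (s % chunk) ))
echo "$parent"
```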
+15 -5
Documentation/device-mapper/verity.txt Documentation/device-mapper/verity.rst
··· 1 + ========= 1 2 dm-verity 2 - ========== 3 + ========= 3 4 4 5 Device-Mapper's "verity" target provides transparent integrity checking of 5 6 block devices using a cryptographic digest provided by the kernel crypto API. ··· 8 7 9 8 Construction Parameters 10 9 ======================= 10 + 11 + :: 12 + 11 13 <version> <dev> <hash_dev> 12 14 <data_block_size> <hash_block_size> 13 15 <num_data_blocks> <hash_start_block> ··· 164 160 165 161 The tree looks something like: 166 162 167 - alg = sha256, num_blocks = 32768, block_size = 4096 163 + alg = sha256, num_blocks = 32768, block_size = 4096 164 + 165 + :: 168 166 169 167 [ root ] 170 168 / . . . \ ··· 195 189 196 190 The full specification of kernel parameters and on-disk metadata format 197 191 is available at the cryptsetup project's wiki page 192 + 198 193 https://gitlab.com/cryptsetup/cryptsetup/wikis/DMVerity 199 194 200 195 Status ··· 205 198 206 199 Example 207 200 ======= 208 - Set up a device: 201 + Set up a device:: 202 + 209 203 # dmsetup create vroot --readonly --table \ 210 204 "0 2097152 verity 1 /dev/sda1 /dev/sda2 4096 4096 262144 1 sha256 "\ 211 205 "4392712ba01368efdf14b05c76f9e4df0d53664630b5d48632ed17a137f39076 "\ ··· 217 209 the cryptsetup upstream repository https://gitlab.com/cryptsetup/cryptsetup/ 218 210 (as a libcryptsetup extension). 219 211 220 - Create hash on the device: 212 + Create hash on the device:: 213 + 221 214 # veritysetup format /dev/sda1 /dev/sda2 222 215 ... 223 216 Root hash: 4392712ba01368efdf14b05c76f9e4df0d53664630b5d48632ed17a137f39076 224 217 225 - Activate the device: 218 + Activate the device:: 219 + 226 220 # veritysetup create vroot /dev/sda1 /dev/sda2 \ 227 221 4392712ba01368efdf14b05c76f9e4df0d53664630b5d48632ed17a137f39076
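For the example tree above (sha256, 32768 blocks of 4096 bytes), the hash-block count can be estimated by hand. This sketch assumes 32-byte digests packed into full 4096-byte hash blocks (a fan-out of 128) and no per-block salt overhead; veritysetup remains the authoritative tool:

```shell
# Rough hash-tree size for the dm-verity example parameters.
num_blocks=32768
fanout=$(( 4096 / 32 ))        # digests per hash block

total=0
level=$num_blocks
while [ "$level" -gt 1 ]; do
        # Each level needs ceil(level / fanout) hash blocks.
        level=$(( (level + fanout - 1) / fanout ))
        total=$(( total + level ))
done
echo "$total"
```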
+11 -2
Documentation/device-mapper/writecache.txt Documentation/device-mapper/writecache.rst
··· 1 + ================= 2 + Writecache target 3 + ================= 4 + 1 5 The writecache target caches writes on persistent memory or on SSD. It 2 6 doesn't cache reads because reads are supposed to be cached in page cache 3 7 in normal RAM. ··· 10 6 first sector should contain valid superblock from previous invocation. 11 7 12 8 Constructor parameters: 9 + 13 10 1. type of the cache device - "p" or "s" 14 - p - persistent memory 15 - s - SSD 11 + 12 + - p - persistent memory 13 + - s - SSD 16 14 2. the underlying device that will be cached 17 15 3. the cache device 18 16 4. block size (4096 is recommended; the maximum block size is the page 19 17 size) 20 18 5. the number of optional parameters (the parameters with an argument 21 19 count as two) 20 + 22 21 start_sector n (default: 0) 23 22 offset from the start of cache device in 512-byte sectors 24 23 high_watermark n (default: 50) ··· 50 43 applicable only to persistent memory - don't use the FUA 51 44 flag when writing back data and send the FLUSH request 52 45 afterwards 46 + 53 47 - some underlying devices perform better with fua, some 54 48 with nofua. The user should test it 55 49 ··· 68 60 flush the cache device on next suspend. Use this message 69 61 when you are going to remove the cache device. The proper 70 62 sequence for removing the cache device is: 63 + 71 64 1. send the "flush_on_suspend" message 72 65 2. load an inactive table with a linear target that maps 73 66 to the underlying device
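Putting the constructor parameters above together, a hypothetical writecache table for an SSD cache might look like the following; the device paths, origin size, and watermark value are invented for illustration. Note that `high_watermark` takes an argument, so per the rules above it counts as two optional parameters:

```shell
# Hypothetical writecache table: SSD mode ("s"), origin /dev/sdb cached
# on /dev/sdc, 4096-byte blocks, high_watermark raised to 60.
origin_size=2097152      # size of the cached (origin) device, in sectors
table="0 ${origin_size} writecache s /dev/sdb /dev/sdc 4096 2 high_watermark 60"
echo "$table"
```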
+7 -7
Documentation/device-mapper/zero.txt Documentation/device-mapper/zero.rst
··· 1 + ======= 1 2 dm-zero 2 3 ======= 3 4 ··· 19 18 20 19 To create a sparse device, start by creating a dm-zero device that's the 21 20 desired size of the sparse device. For this example, we'll assume a 10TB 22 - sparse device. 21 + sparse device:: 23 22 24 - TEN_TERABYTES=`expr 10 \* 1024 \* 1024 \* 1024 \* 2` # 10 TB in sectors 25 - echo "0 $TEN_TERABYTES zero" | dmsetup create zero1 23 + TEN_TERABYTES=`expr 10 \* 1024 \* 1024 \* 1024 \* 2` # 10 TB in sectors 24 + echo "0 $TEN_TERABYTES zero" | dmsetup create zero1 26 25 27 26 Then create a snapshot of the zero device, using any available block-device as 28 27 the COW device. The size of the COW device will determine the amount of real 29 28 space available to the sparse device. For this example, we'll assume /dev/sdb1 30 - is an available 10GB partition. 29 + is an available 10GB partition:: 31 30 32 - echo "0 $TEN_TERABYTES snapshot /dev/mapper/zero1 /dev/sdb1 p 128" | \ 33 - dmsetup create sparse1 31 + echo "0 $TEN_TERABYTES snapshot /dev/mapper/zero1 /dev/sdb1 p 128" | \ 32 + dmsetup create sparse1 34 33 35 34 This will create a 10TB sparse device called /dev/mapper/sparse1 that has 36 35 10GB of actual storage space available. If more than 10GB of data is written 37 36 to this device, it will start returning I/O errors. 38 -
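The `expr` pipeline above can also be written with shell arithmetic; this simply re-derives the sector count for a 10TB device at 512 bytes per sector:

```shell
# 10TB in 512-byte sectors: 10 * 1024^3 KiB, times 2 sectors per KiB.
TEN_TERABYTES=$(( 10 * 1024 * 1024 * 1024 * 2 ))
echo "$TEN_TERABYTES"
```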
+2 -2
Documentation/filesystems/ubifs-authentication.md
··· 417 417 418 418 [DMC-CBC-ATTACK] http://www.jakoblell.com/blog/2013/12/22/practical-malleability-attack-against-cbc-encrypted-luks-partitions/ 419 419 420 - [DM-INTEGRITY] https://www.kernel.org/doc/Documentation/device-mapper/dm-integrity.txt 420 + [DM-INTEGRITY] https://www.kernel.org/doc/Documentation/device-mapper/dm-integrity.rst 421 421 422 - [DM-VERITY] https://www.kernel.org/doc/Documentation/device-mapper/verity.txt 422 + [DM-VERITY] https://www.kernel.org/doc/Documentation/device-mapper/verity.rst 423 423 424 424 [FSCRYPT-POLICY2] https://www.spinics.net/lists/linux-ext4/msg58710.html 425 425
+1 -1
drivers/md/Kconfig
··· 453 453 Enable "dm-mod.create=" parameter to create mapped devices at init time. 454 454 This option is useful to allow mounting rootfs without requiring an 455 455 initramfs. 456 - See Documentation/device-mapper/dm-init.txt for dm-mod.create="..." 456 + See Documentation/device-mapper/dm-init.rst for dm-mod.create="..." 457 457 format. 458 458 459 459 If unsure, say N.
+1 -1
drivers/md/dm-init.c
··· 25 25 * Format: dm-mod.create=<name>,<uuid>,<minor>,<flags>,<table>[,<table>+][;<name>,<uuid>,<minor>,<flags>,<table>[,<table>+]+] 26 26 * Table format: <start_sector> <num_sectors> <target_type> <target_args> 27 27 * 28 - * See Documentation/device-mapper/dm-init.txt for dm-mod.create="..." format 28 + * See Documentation/device-mapper/dm-init.rst for dm-mod.create="..." format 29 29 * details. 30 30 */ 31 31
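Following the format in the comment above, a hypothetical `dm-mod.create=` value for a single read-write linear device might look like this; the name, major:minor pair, and length are placeholders for illustration:

```shell
# One device named "lroot": empty uuid and minor fields, rw flag, then a
# table line of the form <start_sector> <num_sectors> <target> <args>.
create="lroot,,,rw, 0 4096 linear 98:16 0"
echo "dm-mod.create=${create}"
```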
+1 -1
drivers/md/dm-raid.c
··· 3558 3558 * v1.5.0+: 3559 3559 * 3560 3560 * Sync action: 3561 - * See Documentation/device-mapper/dm-raid.txt for 3561 + * See Documentation/device-mapper/dm-raid.rst for 3562 3562 * information on each of these states. 3563 3563 */ 3564 3564 DMEMIT(" %s", sync_action);