Tools that manage md devices can be found at
   http://www.<country>.kernel.org/pub/linux/utils/raid/....


Boot time assembly of RAID arrays
---------------------------------

You can boot your md device with the following kernel command
lines:

for old raid arrays without persistent superblocks:
  md=<md device no.>,<raid level>,<chunk size factor>,<fault level>,dev0,dev1,...,devn

for raid arrays with persistent superblocks:
  md=<md device no.>,dev0,dev1,...,devn
or, to assemble a partitionable array:
  md=d<md device no.>,dev0,dev1,...,devn

md device no. = the number of the md device ...
              0 means md0,
              1 md1,
              2 md2,
              3 md3,
              4 md4

raid level = -1 linear mode
              0 striped mode
              other modes are only supported with persistent superblocks

chunk size factor = (raid-0 and raid-1 only)
              Set the chunk size as 4k << n.

fault level = totally ignored

dev0-devn: e.g. /dev/hda1,/dev/hdc1,/dev/sda1,/dev/sdb1

A possible loadlin line (Harald Hoyer <HarryH@Royal.Net>) looks like this:

e:\loadlin\loadlin e:\zimage root=/dev/md0 md=0,0,4,0,/dev/hdb2,/dev/hdc3 ro
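
For comparison, a purely illustrative command line for an array with
persistent superblocks, assembled as a partitionable device from two
hypothetical SCSI partitions and holding the root filesystem on its
first partition, might be:

root=/dev/md_d0p1 md=d0,/dev/sda1,/dev/sdb1 ro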

Boot time autodetection of RAID arrays
--------------------------------------

When md is compiled into the kernel (not as a module), partitions of
type 0xfd are scanned and automatically assembled into RAID arrays.
This autodetection may be suppressed with the kernel parameter
"raid=noautodetect".  As of kernel 2.6.9, only drives with a type 0
superblock can be autodetected and run at boot time.

The kernel parameter "raid=partitionable" (or "raid=part") means
that all auto-detected arrays are assembled as partitionable.

Boot time assembly of degraded/dirty arrays
-------------------------------------------

If a raid5 or raid6 array is both dirty and degraded, it could have
undetectable data corruption.  This is because the fact that it is
'dirty' means that the parity cannot be trusted, and the fact that it
is degraded means that some datablocks are missing and cannot reliably
be reconstructed (due to no parity).

For this reason, md will normally refuse to start such an array.  This
requires the sysadmin to take action to explicitly start the array
despite possible corruption.  This is normally done with
   mdadm --assemble --force ....

This option is not really available if the array has the root
filesystem on it.  In order to support booting from such an array,
md supports a module parameter "start_dirty_degraded" which, when set
to 1, bypasses the checks and allows dirty degraded arrays to be
started.

So, to boot with a root filesystem of a dirty degraded raid[56], use

   md-mod.start_dirty_degraded=1


Superblock formats
------------------

The md driver can support a variety of different superblock formats.
Currently, it supports superblock formats "0.90.0" and the "md-1" format
introduced in the 2.5 development series.

The kernel will autodetect which format superblock is being used.

Superblock format '0' is treated differently to others for legacy
reasons - it is the original superblock format.


General Rules - apply for all superblock formats
------------------------------------------------

An array is 'created' by writing appropriate superblocks to all
devices.

It is 'assembled' by associating each of these devices with a
particular md virtual device.  Once it is completely assembled, it can
be accessed.

An array should be created by a user-space tool.  This will write
superblocks to all devices.  It will usually mark the array as
'unclean', or with some devices missing, so that the kernel md driver
can create appropriate redundancy (copying in raid1, parity
calculation in raid4/5).

When an array is assembled, it is first initialized with the
SET_ARRAY_INFO ioctl.  This contains, in particular, a major and minor
version number.  The major version number selects which superblock
format is to be used.  The minor number might be used to tune handling
of the format, such as suggesting where on each device to look for the
superblock.

Then each device is added using the ADD_NEW_DISK ioctl.  This
provides, in particular, a major and minor number identifying the
device to add.

The array is started with the RUN_ARRAY ioctl.

Once started, new devices can be added.  They should have an
appropriate superblock written to them, and then be passed in with
ADD_NEW_DISK.

Devices that have failed or are not yet active can be detached from an
array using HOT_REMOVE_DISK.
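
As a minimal sketch of this sequence, assembling a 0.90-format array
from two hypothetical components might look like this in C (not taken
from any particular tool; all error handling is omitted):

   #include <fcntl.h>
   #include <string.h>
   #include <sys/ioctl.h>
   #include <linux/major.h>        /* MD_MAJOR, used by the md ioctls */
   #include <linux/raid/md_u.h>    /* mdu_*_info_t and ioctl numbers  */

   int assemble_md0(void)
   {
           mdu_array_info_t info;
           mdu_disk_info_t disk;
           int fd = open("/dev/md0", O_RDWR);

           /* Select the superblock format: major 0, minor 90. */
           memset(&info, 0, sizeof(info));
           info.major_version = 0;
           info.minor_version = 90;
           ioctl(fd, SET_ARRAY_INFO, &info);

           /* Add each component device by major:minor number. */
           memset(&disk, 0, sizeof(disk));
           disk.major = 8; disk.minor = 1;         /* e.g. /dev/sda1 */
           ioctl(fd, ADD_NEW_DISK, &disk);
           disk.major = 8; disk.minor = 17;        /* e.g. /dev/sdb1 */
           ioctl(fd, ADD_NEW_DISK, &disk);

           /* Start the assembled array. */
           return ioctl(fd, RUN_ARRAY, NULL);
   }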


Specific Rules that apply to format-0 superblock arrays, and
       arrays with no superblock (non-persistent).
-------------------------------------------------------------

An array can be 'created' by describing the array (level, chunksize
etc) in a SET_ARRAY_INFO ioctl.  This must have major_version==0 and
raid_disks != 0.

Then uninitialized devices can be added with ADD_NEW_DISK.  The
structure passed to ADD_NEW_DISK must specify the state of the device
and its role in the array.

Once started with RUN_ARRAY, uninitialized spares can be added with
HOT_ADD_DISK.



MD devices in sysfs
-------------------
md devices appear in sysfs (/sys) as regular block devices,
e.g.
   /sys/block/md0

Each 'md' device will contain a subdirectory called 'md' which
contains further md-specific information about the device.

All md devices contain:

  level
     a text file indicating the 'raid level'.  This may be a standard
     numerical level prefixed by "RAID-" - e.g. "RAID-5", or some
     other name such as "linear" or "multipath".
     The name will often (but not always) be the same as the name of
     the module that implements the level.  To be auto-loaded the
     module must have an alias
        md-$LEVEL   e.g. md-raid5
     If no raid level has been set yet (array is still being
     assembled), this file will be empty.
     This can be written only while the array is being assembled, not
     after it is started.

  raid_disks
     a text file with a simple number indicating the number of devices
     in a fully functional array.  If this is not yet known, the file
     will be empty.  If an array is being resized (not currently
     possible) this will contain the larger of the old and new sizes.
     Some raid levels (RAID1) allow this value to be set while the
     array is active.  This will reconfigure the array.  Otherwise
     it can only be set while assembling an array.

  chunk_size
     This is the size in bytes for 'chunks' and is only relevant to
     raid levels that involve striping (0,4,5,6,10).  The address space
     of the array is conceptually divided into chunks and consecutive
     chunks are striped onto neighbouring devices.
     The size should be at least PAGE_SIZE (4k) and should be a power
     of 2.  This can only be set while assembling an array.

  component_size
     For arrays with data redundancy (i.e. not raid0, linear, faulty,
     multipath), all components must be the same size - or at least
     there must be a size that they all provide space for.  This is a
     key part of the geometry of the array.  It is measured in sectors
     and can be read from here.  Writing to this value may resize
     the array if the personality supports it (raid1, raid5, raid6),
     and if the component drives are large enough.

  metadata_version
     This indicates the format that is being used to record metadata
     about the array.  It can be 0.90 (traditional format), 1.0, 1.1,
     1.2 (newer format in varying locations) or "none" indicating that
     the kernel isn't managing metadata at all.

  new_dev
     This file can be written but not read.  The value written should
     be a block device number as major:minor.  e.g. 8:0
     This will cause that device to be attached to the array, if it is
     available.  It will then appear at md/dev-XXX (depending on the
     name of the device) and further configuration is then possible.

  sync_speed_min
  sync_speed_max
     These are similar to /proc/sys/dev/raid/speed_limit_{min,max},
     except that they only apply to this particular array.
     If no value has been written to these, or if the word 'system'
     is written, then the system-wide value is used.  If a value,
     in kibibytes-per-second, is written, then it is used.
     When the files are read, they show the currently active value
     followed by "(local)" or "(system)" depending on whether it is
     a locally set or system-wide value.

  sync_completed
     This shows the number of sectors that have been completed of
     whatever the current sync_action is, followed by the number of
     sectors in total that could need to be processed.  The two
     numbers are separated by a '/', thus effectively showing one
     value: the fraction of the process that is complete.

  sync_speed
     This shows the current actual speed, in K/sec, of the current
     sync_action.  It is averaged over the last 30 seconds.
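
For example, the resync speed limits of one array could be inspected
and tuned like this (the device name and the value are illustrative):

   echo 1000 > /sys/block/md0/md/sync_speed_min
   cat /sys/block/md0/md/sync_speed_min        (shows "1000 (local)")
   echo system > /sys/block/md0/md/sync_speed_min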

As component devices are added to an md array, they appear in the 'md'
directory as new directories named
   dev-XXX
where XXX is a name that the kernel knows for the device, e.g. hdb1.
Each directory contains:

  block
     a symlink to the block device in /sys/block, e.g.
     /sys/block/md0/md/dev-hdb1/block -> ../../../../block/hdb/hdb1

  super
     A file containing an image of the superblock read from, or
     written to, that device.

  state
     A file recording the current state of the device in the array
     which can be a comma separated list of
        faulty  - device has been kicked from active use due to
                  a detected fault
        in_sync - device is a fully in-sync member of the array
        spare   - device is working, but not a full member.
                  This includes spares that are in the process
                  of being recovered onto
     This list may grow in future.

  errors
     An approximate count of read errors that have been detected on
     this device but have not caused the device to be evicted from
     the array (either because they were corrected or because they
     happened while the array was read-only).  When using version-1
     metadata, this value persists across restarts of the array.

     This value can be written while assembling an array, thus
     providing an ongoing count for arrays with metadata managed by
     userspace.

  slot
     This gives the role that the device has in the array.  It will
     either be 'none' if the device is not active in the array
     (i.e. is a spare or has failed) or an integer less than the
     'raid_disks' number for the array, indicating which position
     it currently fills.  This can only be set while assembling an
     array.  A device for which this is set is assumed to be working.

  offset
     This gives the location in the device (in sectors from the
     start) where data from the array will be stored.  Any part of
     the device before this offset is not touched, unless it is
     used for storing metadata (Formats 1.1 and 1.2).

  size
     The amount of the device, after the offset, that can be used
     for storage of data.  This will normally be the same as the
     component_size.  This can be written while assembling an
     array.  If a value less than the current component_size is
     written, component_size will be reduced to this value.


An active md device will also contain an entry for each active device
in the array.  These are named

   rdNN

where 'NN' is the position in the array, starting from 0.
So for a 3 drive array there will be rd0, rd1, rd2.
These are symbolic links to the appropriate 'dev-XXX' entry.
Thus, for example,
   cat /sys/block/md*/md/rd*/state
will show 'in_sync' on every line.



Active md devices for levels that support data redundancy (1,4,5,6)
also have

  sync_action
     a text file that can be used to monitor and control the rebuild
     process.  It contains one word which can be one of:
       resync  - redundancy is being recalculated after unclean
                 shutdown or creation
       recover - a hot spare is being built to replace a
                 failed/missing device
       idle    - nothing is happening
       check   - A full check of redundancy was requested and is
                 happening.  This reads all blocks and checks
                 them.  A repair may also happen for some raid
                 levels.
       repair  - A full check and repair is happening.  This is
                 similar to 'resync', but was requested by the
                 user, and the write-intent bitmap is NOT used to
                 optimise the process.

     This file is writable, and each of the strings that can be read
     is also meaningful when written (see the example below).

      'idle' will stop an active resync/recovery etc.  There is no
      guarantee that another resync/recovery will not be automatically
      started again, though some event will be needed to trigger
      this.
      'resync' or 'recover' can be used to restart the
      corresponding operation if it was stopped with 'idle'.
      'check' and 'repair' will start the appropriate process
      providing the current state is 'idle'.

  mismatch_cnt
     When performing 'check' and 'repair', and possibly when
     performing 'resync', md will count the number of errors that are
     found.  The count in 'mismatch_cnt' is the number of sectors
     that were re-written, or (for 'check') would have been
     re-written.  As most raid levels work in units of pages rather
     than sectors, this may be larger than the number of actual errors
     by a factor of the number of sectors in a page.
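
For example, a consistency check of a hypothetical array md0 could be
started and its progress and results monitored like this:

   echo check > /sys/block/md0/md/sync_action
   cat /sys/block/md0/md/sync_completed
   cat /sys/block/md0/md/mismatch_cnt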

Each active md device may also have attributes specific to the
personality module that manages it.
These are specific to the implementation of the module and could
change substantially if the implementation changes.

These currently include

  stripe_cache_size  (currently raid5 only)
     number of entries in the stripe cache.  This is writable, but
     there are upper and lower limits (32768, 16).  Default is 128.

  stripe_cache_active  (currently raid5 only)
     number of active entries in the stripe cache
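
For example, the stripe cache of a hypothetical raid5 array md0 could
be enlarged, within the limits above, with:

   echo 256 > /sys/block/md0/md/stripe_cache_size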