Documentation for /proc/sys/vm/*        kernel version 2.6.29
        (c) 1998, 1999,  Rik van Riel <riel@nl.linux.org>
        (c) 2008         Peter W. Morreale <pmorreale@novell.com>

For general info and legal blurb, please look in README.

==============================================================

This file contains the documentation for the sysctl files in
/proc/sys/vm and is valid for Linux kernel version 2.6.29.

The files in this directory can be used to tune the operation
of the virtual memory (VM) subsystem of the Linux kernel and
the writeout of dirty data to disk.

Default values and initialization routines for most of these
files can be found in mm/swap.c.

Currently, these files are in /proc/sys/vm:

- block_dump
- dirty_background_bytes
- dirty_background_ratio
- dirty_bytes
- dirty_expire_centisecs
- dirty_ratio
- dirty_writeback_centisecs
- drop_caches
- hugepages_treat_as_movable
- hugetlb_shm_group
- laptop_mode
- legacy_va_layout
- lowmem_reserve_ratio
- max_map_count
- min_free_kbytes
- min_slab_ratio
- min_unmapped_ratio
- mmap_min_addr
- nr_hugepages
- nr_overcommit_hugepages
- nr_pdflush_threads
- nr_trim_pages (only if CONFIG_MMU=n)
- numa_zonelist_order
- oom_dump_tasks
- oom_kill_allocating_task
- overcommit_memory
- overcommit_ratio
- page-cluster
- panic_on_oom
- percpu_pagelist_fraction
- stat_interval
- swappiness
- vfs_cache_pressure
- zone_reclaim_mode

==============================================================

block_dump

block_dump enables block I/O debugging when set to a nonzero value. More
information on block I/O debugging is in Documentation/laptops/laptop-mode.txt.

==============================================================

dirty_background_bytes

Contains the amount of dirty memory at which the pdflush background writeback
daemon will start writeback.

If dirty_background_bytes is written, dirty_background_ratio becomes a function
of its value (dirty_background_bytes / the amount of dirtyable system memory).

==============================================================

dirty_background_ratio

Contains, as a percentage of total system memory, the number of pages at which
the pdflush background writeback daemon will start writing out dirty data.

==============================================================

dirty_bytes

Contains the amount of dirty memory at which a process generating disk writes
will itself start writeback.

If dirty_bytes is written, dirty_ratio becomes a function of its value
(dirty_bytes / the amount of dirtyable system memory).

==============================================================

dirty_expire_centisecs

This tunable is used to define when dirty data is old enough to be eligible
for writeout by the pdflush daemons.  It is expressed in 100'ths of a second.
Data which has been dirty in-memory for longer than this interval will be
written out next time a pdflush daemon wakes up.

==============================================================

dirty_ratio

Contains, as a percentage of total system memory, the number of pages at which
a process which is generating disk writes will itself start writing out dirty
data.

==============================================================

dirty_writeback_centisecs

The pdflush writeback daemons will periodically wake up and write `old' data
out to disk.  This tunable expresses the interval between those wakeups, in
100'ths of a second.

Setting this to zero disables periodic writeback altogether.
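
As an illustrative sketch of how the dirty_* tunables fit together (the
values below are examples, not recommendations):

        # Start background writeback by pdflush once 64 MB is dirty;
        # dirty_background_ratio then becomes a value derived from this.
        echo $((64 * 1024 * 1024)) > /proc/sys/vm/dirty_background_bytes

        # Make a process generating disk writes do its own writeback
        # once 20% of total system memory is dirty.
        echo 20 > /proc/sys/vm/dirty_ratio

        # Treat data as "old" after 30 seconds (3000 centisecs) and wake
        # the pdflush daemons every 5 seconds (500 centisecs).
        echo 3000 > /proc/sys/vm/dirty_expire_centisecs
        echo 500 > /proc/sys/vm/dirty_writeback_centisecs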

==============================================================

drop_caches

Writing to this will cause the kernel to drop clean caches, dentries and
inodes from memory, causing that memory to become free.

To free pagecache:
        echo 1 > /proc/sys/vm/drop_caches
To free dentries and inodes:
        echo 2 > /proc/sys/vm/drop_caches
To free pagecache, dentries and inodes:
        echo 3 > /proc/sys/vm/drop_caches

As this is a non-destructive operation and dirty objects are not freeable, the
user should run `sync' first.
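
For example, the usual sequence for freeing as much as possible (this merely
combines the steps described above):

        # Write dirty data back first so that more objects are clean and
        # therefore freeable, then drop pagecache, dentries and inodes.
        sync
        echo 3 > /proc/sys/vm/drop_caches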

==============================================================

hugepages_treat_as_movable

This parameter is only useful when kernelcore= is specified at boot time to
create ZONE_MOVABLE for pages that may be reclaimed or migrated. Huge pages
are not movable so are not normally allocated from ZONE_MOVABLE. A non-zero
value written to hugepages_treat_as_movable allows huge pages to be allocated
from ZONE_MOVABLE.

Once enabled, ZONE_MOVABLE is treated as an area of memory the huge
pages pool can easily grow or shrink within. Assuming that applications are
not running that mlock() a lot of memory, it is likely the huge pages pool
can grow to the size of ZONE_MOVABLE by repeatedly entering the desired value
into nr_hugepages and triggering page reclaim.

==============================================================

hugetlb_shm_group

hugetlb_shm_group contains the group id that is allowed to create SysV
shared memory segments using hugetlb pages.

==============================================================

laptop_mode

laptop_mode is a knob that controls "laptop mode". All the things that are
controlled by this knob are discussed in Documentation/laptops/laptop-mode.txt.

==============================================================

legacy_va_layout

If non-zero, this sysctl disables the new 32-bit mmap layout - the kernel
will use the legacy (2.4) layout for all processes.

==============================================================

lowmem_reserve_ratio

For some specialised workloads on highmem machines it is dangerous for
the kernel to allow process memory to be allocated from the "lowmem"
zone. This is because that memory could then be pinned via the mlock()
system call, or by unavailability of swapspace.

And on large highmem machines this lack of reclaimable lowmem memory
can be fatal.

So the Linux page allocator has a mechanism which prevents allocations
which _could_ use highmem from using too much lowmem. This means that
a certain amount of lowmem is defended from the possibility of being
captured into pinned user memory.

(The same argument applies to the old 16 megabyte ISA DMA region. This
mechanism will also defend that region from allocations which could use
highmem or lowmem).

The `lowmem_reserve_ratio' tunable determines how aggressive the kernel is
in defending these lower zones.

If you have a machine which uses highmem or ISA DMA and your
applications are using mlock(), or if you are running with no swap, then
you probably should change the lowmem_reserve_ratio setting.

lowmem_reserve_ratio is an array of values. You can see them by reading this
file:
-
% cat /proc/sys/vm/lowmem_reserve_ratio
256     256     32
-
Note: the number of elements is one fewer than the number of zones, because
the highest zone's value is not needed for the calculation below.

These values are not used directly. The kernel calculates the number of
protection pages for each zone from them, and these are shown as an array
of protection pages in /proc/zoneinfo, as in the following example from an
x86-64 box. Each zone has an array of protection pages like this:

-
Node 0, zone      DMA
  pages free     1355
        min      3
        low      3
        high     4
        :
        :
    numa_other   0
        protection: (0, 2004, 2004, 2004)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  pagesets
    cpu: 0 pcp: 0
        :
-
These protection values are added to the watermark used to judge whether a
zone should be used for page allocation or should be reclaimed instead.

In this example, if normal pages (index=2) are requested from this DMA zone
and pages_high is used as the watermark, the kernel judges that this zone
should not be used because pages_free (1355) is smaller than
watermark + protection[2] (4 + 2004 = 2008). If this protection value were 0,
this zone would be used for a normal page request. If the request is for the
DMA zone itself (index=0), protection[0] (=0) is used.

zone[i]'s protection[j] is calculated by the following expression:

(i < j):
  zone[i]->protection[j]
    = (total sum of present_pages from zone[i+1] to zone[j] on the node)
      / lowmem_reserve_ratio[i];
(i = j):
  0 (a zone does not need to be protected from itself);
(i > j):
  not used (reported as 0).

The default values of lowmem_reserve_ratio[i] are:
    256 (if zone[i] means the DMA or DMA32 zone)
     32 (others)
As the expression shows, each value is the reciprocal of a ratio: 256 means
1/256, so the number of protection pages becomes about 0.39% of the total
present pages of the higher zones on the node.

If you would like to protect more pages, smaller values are effective.
The minimum value is 1 (1/1 -> 100%).
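
As a worked example of the expression above (the page counts are invented
for illustration): if the zones above zone[0] on a node hold 1,000,000
present pages in total and lowmem_reserve_ratio[0] is the default 256:

        # Number of protection pages zone[0] defends against allocations
        # that could have used the higher zones:
        echo $((1000000 / 256))         # prints 3906, about 0.39% of 1000000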

==============================================================

max_map_count:

This file contains the maximum number of memory map areas a process
may have. Memory map areas are used as a side-effect of calling
malloc, directly by mmap and mprotect, and also when loading shared
libraries.

While most applications need less than a thousand maps, certain
programs, particularly malloc debuggers, may consume lots of them,
e.g., up to one or two maps per allocation.

The default value is 65536.

==============================================================

min_free_kbytes:

This is used to force the Linux VM to keep a minimum number
of kilobytes free.  The VM uses this number to compute a pages_min
value for each lowmem zone in the system.  Each lowmem zone gets
a number of reserved free pages based proportionally on its size.

Some minimal amount of memory is needed to satisfy PF_MEMALLOC
allocations; if you set this to lower than 1024KB, your system will
become subtly broken, and prone to deadlock under high loads.

Setting this too high will OOM your machine instantly.

==============================================================

min_slab_ratio:

This is available only on NUMA kernels.

A percentage of the total pages in each zone.  On zone reclaim
(fallback from the local zone occurs) slabs will be reclaimed if more
than this percentage of pages in a zone are reclaimable slab pages.
This ensures that the slab growth stays under control even in NUMA
systems that rarely perform global reclaim.

The default is 5 percent.

Note that slab reclaim is triggered in a per zone / node fashion.
The process of reclaiming slab memory is currently not node specific
and may not be fast.

==============================================================

min_unmapped_ratio:

This is available only on NUMA kernels.

A percentage of the total pages in each zone.  Zone reclaim will only
occur if more than this percentage of pages are file backed and unmapped.
This is to ensure that a minimal amount of local pages is still available for
file I/O even if the node is overallocated.

The default is 1 percent.

==============================================================

mmap_min_addr

This file indicates the amount of address space which a user process will
be restricted from mmapping. Since kernel null dereference bugs could
accidentally operate based on the information in the first couple of pages
of memory, userspace processes should not be allowed to write to them. By
default this value is set to 0 and no protections will be enforced by the
security module. Setting this value to something like 64k will allow the
vast majority of applications to work correctly and provide defense in depth
against future potential kernel bugs.
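
For instance, to enforce the 64k floor mentioned above (64k = 65536 bytes;
an example, not a mandated value):

        # Disallow userspace from mapping the first 64k of address space,
        # blunting the impact of kernel NULL pointer dereference bugs.
        echo 65536 > /proc/sys/vm/mmap_min_addr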

==============================================================

nr_hugepages

Change the minimum size of the hugepage pool.

See Documentation/vm/hugetlbpage.txt

==============================================================

nr_overcommit_hugepages

Change the maximum size of the hugepage pool. The maximum is
nr_hugepages + nr_overcommit_hugepages.

See Documentation/vm/hugetlbpage.txt

==============================================================

nr_pdflush_threads

The current number of pdflush threads.  This value is read-only.
The value changes according to the number of dirty pages in the system.

When necessary, additional pdflush threads are created, one per second, up to
nr_pdflush_threads_max.

==============================================================

nr_trim_pages

This is available only on NOMMU kernels.

This value adjusts the excess page trimming behaviour of power-of-2 aligned
NOMMU mmap allocations.

A value of 0 disables trimming of allocations entirely, while a value of 1
trims excess pages aggressively. Any value >= 1 acts as the watermark where
trimming of allocations is initiated.

The default value is 1.

See Documentation/nommu-mmap.txt for more information.

==============================================================

numa_zonelist_order

This sysctl is only for NUMA.
'where the memory is allocated from' is controlled by zonelists.
(This documentation ignores ZONE_HIGHMEM/ZONE_DMA32 for a simple explanation;
 you may be able to read ZONE_DMA as ZONE_DMA32...)

In the non-NUMA case, a zonelist for GFP_KERNEL is ordered as follows:
ZONE_NORMAL -> ZONE_DMA
This means that a memory allocation request for GFP_KERNEL will
get memory from ZONE_DMA only when ZONE_NORMAL is not available.

In the NUMA case, you can think of the following 2 types of order.
Assume a 2-node NUMA system; below is the zonelist of Node(0)'s GFP_KERNEL:

(A) Node(0) ZONE_NORMAL -> Node(0) ZONE_DMA -> Node(1) ZONE_NORMAL
(B) Node(0) ZONE_NORMAL -> Node(1) ZONE_NORMAL -> Node(0) ZONE_DMA.

Type (A) offers the best locality for processes on Node(0), but ZONE_DMA
will be used before ZONE_NORMAL exhaustion. This increases the possibility
of out-of-memory (OOM) in ZONE_DMA, because ZONE_DMA tends to be small.

Type (B) cannot offer the best locality but is more robust against OOM of
the DMA zone.

Type (A) is called "Node" order. Type (B) is "Zone" order.

"Node" order orders the zonelists by node, then by zone within each node.
Specify "[Nn]ode" for node order.

"Zone" order orders the zonelists by zone type, then by node within each
zone. Specify "[Zz]one" for zone order.

Specify "[Dd]efault" to request automatic configuration. Autoconfiguration
will select "node" order in the following cases:
(1) if the DMA zone does not exist, or
(2) if the DMA zone comprises greater than 50% of the available memory, or
(3) if any node's DMA zone comprises greater than 60% of its local memory and
    the amount of local memory is big enough.

Otherwise, "zone" order will be selected. The default order is recommended
unless it is causing problems for your system/application.
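
For example, the order can be changed at runtime with the keywords described
above (illustrative commands):

        # Order zonelists by zone type to protect the small DMA zone.
        echo zone > /proc/sys/vm/numa_zonelist_order

        # Return to automatic selection between "node" and "zone" order.
        echo default > /proc/sys/vm/numa_zonelist_order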

==============================================================

oom_dump_tasks

Enables a system-wide task dump (excluding kernel threads) to be
produced when the kernel performs an OOM-killing and includes such
information as pid, uid, tgid, vm size, rss, cpu, oom_adj score, and
name. This is helpful to determine why the OOM killer was invoked
and to identify the rogue task that caused it.

If this is set to zero, this information is suppressed. On very
large systems with thousands of tasks it may not be feasible to dump
the memory state information for each one. Such systems should not
be forced to incur a performance penalty in OOM conditions when the
information may not be desired.

If this is set to non-zero, this information is shown whenever the
OOM killer actually kills a memory-hogging task.

The default value is 0.

==============================================================

oom_kill_allocating_task

This enables or disables killing the OOM-triggering task in
out-of-memory situations.

If this is set to zero, the OOM killer will scan through the entire
tasklist and select a task based on heuristics to kill. This normally
selects a rogue memory-hogging task that frees up a large amount of
memory when killed.

If this is set to non-zero, the OOM killer simply kills the task that
triggered the out-of-memory condition. This avoids the expensive
tasklist scan.

If panic_on_oom is selected, it takes precedence over whatever value
is used in oom_kill_allocating_task.

The default value is 0.

==============================================================

overcommit_memory:

This value contains a flag that enables memory overcommitment.

When this flag is 0, the kernel attempts to estimate the amount
of free memory left when userspace requests more memory.

When this flag is 1, the kernel pretends there is always enough
memory until it actually runs out.

When this flag is 2, the kernel uses a "never overcommit"
policy that attempts to prevent any overcommit of memory.

This feature can be very useful because there are a lot of
programs that malloc() huge amounts of memory "just-in-case"
and don't use much of it.

The default value is 0.

See Documentation/vm/overcommit-accounting and
security/commoncap.c::cap_vm_enough_memory() for more information.

==============================================================

overcommit_ratio:

When overcommit_memory is set to 2, the committed address
space is not permitted to exceed swap plus this percentage
of physical RAM. See above.
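
As a worked example (the figures are illustrative): on a machine with 4 GB
of RAM and 2 GB of swap, strict accounting with a 50% ratio caps the
committed address space at 2 GB + 50% of 4 GB = 4 GB:

        # Never overcommit; refuse commitments beyond swap + 50% of RAM.
        echo 2 > /proc/sys/vm/overcommit_memory
        echo 50 > /proc/sys/vm/overcommit_ratio

        # The resulting ceiling appears as CommitLimit in /proc/meminfo.
        grep Commit /proc/meminfo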

==============================================================

page-cluster

page-cluster controls the number of pages which are written to swap in
a single attempt (the swap I/O size).

It is a logarithmic value - setting it to zero means "1 page", setting
it to 1 means "2 pages", setting it to 2 means "4 pages", etc.

The default value is three (eight pages at a time).  There may be some
small benefits in tuning this to a different value if your workload is
swap-intensive.

==============================================================

panic_on_oom

This enables or disables the panic-on-out-of-memory feature.

If this is set to 0, the kernel will kill some rogue process via the
OOM killer. Usually the OOM killer can kill a rogue process and the
system will survive.

If this is set to 1, the kernel panics when out-of-memory happens.
However, if a process limits its allocations to certain nodes by using
mempolicy/cpusets, and those nodes reach memory exhaustion, one process
may be killed by the OOM killer instead. No panic occurs in this case,
because other nodes' memory may still be free and the system as a whole
may not yet be in a fatal state.

If this is set to 2, the kernel always panics when an out-of-memory
condition occurs, even in the situation described above.

The default value is 0.
Settings of 1 and 2 are intended for clustered failover; select whichever
matches your failover policy.

==============================================================

percpu_pagelist_fraction

This is the fraction of pages at most (high mark pcp->high) in each zone that
are allocated for each per cpu page list.  The min value for this is 8.  It
means that we don't allow more than 1/8th of pages in each zone to be
allocated in any single per_cpu_pagelist.  This entry only changes the value
of hot per cpu pagelists. The user can specify a number like 100 to allocate
1/100th of each zone to each per cpu page list.

The batch value of each per cpu pagelist is also updated as a result.  It is
set to pcp->high/4.  The upper limit of batch is (PAGE_SHIFT * 8).

The initial value is zero. The kernel does not use this value at boot time to
set the high water marks for each per cpu page list.

==============================================================

stat_interval

The time interval at which vm statistics are updated.  The default
is 1 second.

==============================================================

swappiness

This control is used to define how aggressively the kernel will swap
memory pages.  Higher values increase aggressiveness, lower values
decrease the amount of swap.

The default value is 60.

==============================================================

vfs_cache_pressure
------------------

Controls the tendency of the kernel to reclaim the memory which is used for
caching of directory and inode objects.

At the default value of vfs_cache_pressure=100 the kernel will attempt to
reclaim dentries and inodes at a "fair" rate with respect to pagecache and
swapcache reclaim.  Decreasing vfs_cache_pressure causes the kernel to prefer
to retain dentry and inode caches.  Increasing vfs_cache_pressure beyond 100
causes the kernel to prefer to reclaim dentries and inodes.

==============================================================

zone_reclaim_mode:

Zone_reclaim_mode allows someone to set more or less aggressive approaches to
reclaim memory when a zone runs out of memory. If it is set to zero then no
zone reclaim occurs. Allocations will be satisfied from other zones / nodes
in the system.

The value is a bit mask, formed by ORing together:

1       = Zone reclaim on
2       = Zone reclaim writes dirty pages out
4       = Zone reclaim swaps pages

zone_reclaim_mode is set during bootup to 1 if it is determined that pages
from remote zones will cause a measurable performance reduction. The
page allocator will then reclaim easily reusable pages (those page
cache pages that are currently not used) before allocating off node pages.

It may be beneficial to switch off zone reclaim if the system is
used for a file server and all of memory should be used for caching files
from disk. In that case the caching effect is more important than
data locality.

Allowing zone reclaim to write out pages stops processes that are
writing large amounts of data from dirtying pages on other nodes. Zone
reclaim will write out dirty pages if a zone fills up, and so effectively
throttles the process. This may decrease the performance of a single
process, since it cannot use all of system memory to buffer the outgoing
writes anymore, but it preserves the memory on other nodes so that the
performance of other processes running on other nodes will not be affected.

Allowing regular swap effectively restricts allocations to the local
node unless explicitly overridden by memory policies or cpuset
configurations.

============ End of Document =================================