Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'akpm' (patches from Andrew)

Merge yet more updates from Andrew Morton:
"The rest of MM and a kernel-wide procfs cleanup.

Summary of the more significant patches:

 - Patch series "mm/memory_hotplug: Factor out memory block
   device handling", v3. David Hildenbrand.

Some spring-cleaning of the memory hotplug code, notably in
drivers/base/memory.c

- "mm: thp: fix false negative of shmem vma's THP eligibility". Yang
Shi.

Fix /proc/pid/smaps output for THP pages used in shmem.

- "resource: fix locking in find_next_iomem_res()" + 1. Nadav Amit.

Bugfix and speedup for kernel/resource.c

- Patch series "mm: Further memory block device cleanups", David
Hildenbrand.

More spring-cleaning of the memory hotplug code.

- Patch series "mm: Sub-section memory hotplug support". Dan
Williams.

Generalise the memory hotplug code so that pmem can use it more
completely. Then remove the hacks from the libnvdimm code which
were there to work around the memory-hotplug code's constraints.

- "proc/sysctl: add shared variables for range check", Matteo Croce.

We have about 250 instances of

int zero;
...
.extra1 = &zero,

in the tree. This is a tree-wide sweep to make all those private
"zero"s and "one"s use global variables.

Alas, it isn't practical to make those two global integers const"
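
The pattern being swept can be modeled in a few lines of userspace C. This is a sketch, not the kernel code: `fake_ctl_table` and `fake_dointvec_minmax` are simplified stand-ins for `struct ctl_table` and `proc_dointvec_minmax()`, and the shared-array trick mirrors how the kernel's `SYSCTL_ZERO`/`SYSCTL_ONE` point into a common `sysctl_vals` array instead of per-file statics. As the message notes, the real fields are `void *`, which is why the shared integers cannot simply be made const.

```c
#include <assert.h>
#include <stddef.h>

/* Shared constants replacing per-file "int zero; int one = 1;" copies. */
static int sysctl_vals[] = { 0, 1 };
#define SYSCTL_ZERO ((void *)&sysctl_vals[0])
#define SYSCTL_ONE  ((void *)&sysctl_vals[1])

/* Hypothetical, simplified stand-in for struct ctl_table. */
struct fake_ctl_table {
        const char *procname;
        void *extra1;   /* lower bound */
        void *extra2;   /* upper bound */
};

/* Reject a written value the way proc_dointvec_minmax() would:
 * anything outside [*extra1, *extra2] returns an error. */
static int fake_dointvec_minmax(const struct fake_ctl_table *t, int val)
{
        if (t->extra1 && val < *(int *)t->extra1)
                return -1;
        if (t->extra2 && val > *(int *)t->extra2)
                return -1;
        return 0;
}

/* A boolean knob: many such entries can now share the same bounds. */
static const struct fake_ctl_table bool_entry = {
        .procname = "some_boolean_knob",
        .extra1 = SYSCTL_ZERO,
        .extra2 = SYSCTL_ONE,
};
```

The payoff is that every boolean-ish sysctl entry points at the same two integers rather than carrying its own.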

* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (38 commits)
proc/sysctl: add shared variables for range check
mm: migrate: remove unused mode argument
mm/sparsemem: cleanup 'section number' data types
libnvdimm/pfn: stop padding pmem namespaces to section alignment
libnvdimm/pfn: fix fsdax-mode namespace info-block zero-fields
mm/devm_memremap_pages: enable sub-section remap
mm: document ZONE_DEVICE memory-model implications
mm/sparsemem: support sub-section hotplug
mm/sparsemem: prepare for sub-section ranges
mm: kill is_dev_zone() helper
mm/hotplug: kill is_dev_zone() usage in __remove_pages()
mm/sparsemem: convert kmalloc_section_memmap() to populate_section_memmap()
mm/hotplug: prepare shrink_{zone, pgdat}_span for sub-section removal
mm/sparsemem: add helpers track active portions of a section at boot
mm/sparsemem: introduce a SECTION_IS_EARLY flag
mm/sparsemem: introduce struct mem_section_usage
drivers/base/memory.c: get rid of find_memory_block_hinted()
mm/memory_hotplug: move and simplify walk_memory_blocks()
mm/memory_hotplug: rename walk_memory_range() and pass start+size instead of pfns
mm: make register_mem_sect_under_node() static
...

+1091 -1024
+2 -2
Documentation/filesystems/proc.txt
···
 "SwapPss" shows proportional swap share of this mapping. Unlike "Swap", this
 does not take into account swapped out page of underlying shmem objects.
 "Locked" indicates whether the mapping is locked in memory or not.
-"THPeligible" indicates whether the mapping is eligible for THP pages - 1 if
-true, 0 otherwise.
+"THPeligible" indicates whether the mapping is eligible for allocating THP
+pages - 1 if true, 0 otherwise. It just shows the current status.
 
 "VmFlags" field deserves a separate description. This member represents the kernel
 flags associated with the particular virtual memory area in two letter encoded
+40
Documentation/vm/memory-model.rst
···
 of function calls. The vmemmap_populate() implementation may use the
 `vmem_altmap` along with :c:func:`altmap_alloc_block_buf` helper to
 allocate memory map on the persistent memory device.
+
+ZONE_DEVICE
+===========
+The `ZONE_DEVICE` facility builds upon `SPARSEMEM_VMEMMAP` to offer
+`struct page` `mem_map` services for device driver identified physical
+address ranges. The "device" aspect of `ZONE_DEVICE` relates to the fact
+that the page objects for these address ranges are never marked online,
+and that a reference must be taken against the device, not just the page
+to keep the memory pinned for active use. `ZONE_DEVICE`, via
+:c:func:`devm_memremap_pages`, performs just enough memory hotplug to
+turn on :c:func:`pfn_to_page`, :c:func:`page_to_pfn`, and
+:c:func:`get_user_pages` service for the given range of pfns. Since the
+page reference count never drops below 1 the page is never tracked as
+free memory and the page's `struct list_head lru` space is repurposed
+for back referencing to the host device / driver that mapped the memory.
+
+While `SPARSEMEM` presents memory as a collection of sections,
+optionally collected into memory blocks, `ZONE_DEVICE` users have a need
+for smaller granularity of populating the `mem_map`. Given that
+`ZONE_DEVICE` memory is never marked online it is subsequently never
+subject to its memory ranges being exposed through the sysfs memory
+hotplug api on memory block boundaries. The implementation relies on
+this lack of user-api constraint to allow sub-section sized memory
+ranges to be specified to :c:func:`arch_add_memory`, the top-half of
+memory hotplug. Sub-section support allows for 2MB as the cross-arch
+common alignment granularity for :c:func:`devm_memremap_pages`.
+
+The users of `ZONE_DEVICE` are:
+
+* pmem: Map platform persistent memory to be used as a direct-I/O target
+  via DAX mappings.
+
+* hmm: Extend `ZONE_DEVICE` with `->page_fault()` and `->page_free()`
+  event callbacks to allow a device-driver to coordinate memory management
+  events related to device-memory, typically GPU memory. See
+  Documentation/vm/hmm.rst.
+
+* p2pdma: Create `struct page` objects to allow peer devices in a
+  PCI/-E topology to coordinate direct-DMA operations between themselves,
+  i.e. bypass host memory.
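
The padding that sub-section hotplug eliminates is simple alignment arithmetic, sketched below in userspace C. The constants are illustrative stand-ins, not the kernel's macros: 128 MiB corresponds to x86_64's default `SECTION_SIZE_BITS = 27`, and 2 MiB is the sub-section granularity the documentation describes.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative granularities (not the kernel's real macros). */
#define SECTION_SIZE    (128ULL << 20)  /* 128 MiB memory section */
#define SUBSECTION_SIZE (2ULL << 20)    /*   2 MiB sub-section   */

/* Round x up to the next multiple of the power-of-two 'a'. */
static uint64_t align_up(uint64_t x, uint64_t a)
{
        return (x + a - 1) & ~(a - 1);
}

/* Bytes of padding needed to place a namespace starting at 'base'
 * on the next hotplug-able boundary of the given granularity. */
static uint64_t pad_to(uint64_t base, uint64_t granularity)
{
        return align_up(base, granularity) - base;
}
```

For a namespace at 1 GiB + 4 MiB, section-granularity hotplug would have to pad 124 MiB up to the next 128 MiB boundary; with 2 MiB sub-sections no padding is needed at all.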
+17
arch/arm64/mm/mmu.c
···
 	return __add_pages(nid, start >> PAGE_SHIFT, size >> PAGE_SHIFT,
 			   restrictions);
 }
+void arch_remove_memory(int nid, u64 start, u64 size,
+			struct vmem_altmap *altmap)
+{
+	unsigned long start_pfn = start >> PAGE_SHIFT;
+	unsigned long nr_pages = size >> PAGE_SHIFT;
+	struct zone *zone;
+
+	/*
+	 * FIXME: Cleanup page tables (also in arch_add_memory() in case
+	 * adding fails). Until then, this function should only be used
+	 * during memory hotplug (adding memory), not for memory
+	 * unplug. ARCH_ENABLE_MEMORY_HOTREMOVE must not be
+	 * unlocked yet.
+	 */
+	zone = page_zone(pfn_to_page(start_pfn));
+	__remove_pages(zone, start_pfn, nr_pages, altmap);
+}
 #endif
-2
arch/ia64/mm/init.c
···
 	return ret;
 }
 
-#ifdef CONFIG_MEMORY_HOTREMOVE
 void arch_remove_memory(int nid, u64 start, u64 size,
 			struct vmem_altmap *altmap)
 {
···
 	zone = page_zone(pfn_to_page(start_pfn));
 	__remove_pages(zone, start_pfn, nr_pages, altmap);
 }
-#endif
 #endif
-2
arch/powerpc/mm/mem.c
···
 	return __add_pages(nid, start_pfn, nr_pages, restrictions);
 }
 
-#ifdef CONFIG_MEMORY_HOTREMOVE
 void __ref arch_remove_memory(int nid, u64 start, u64 size,
 			      struct vmem_altmap *altmap)
 {
···
 		pr_warn("Hash collision while resizing HPT\n");
 }
 #endif
-#endif /* CONFIG_MEMORY_HOTPLUG */
 
 #ifndef CONFIG_NEED_MULTIPLE_NODES
 void __init mem_topology_setup(void)
+11 -12
arch/powerpc/platforms/powernv/memtrace.c
···
 /* called with device_hotplug_lock held */
 static bool memtrace_offline_pages(u32 nid, u64 start_pfn, u64 nr_pages)
 {
-	u64 end_pfn = start_pfn + nr_pages - 1;
+	const unsigned long start = PFN_PHYS(start_pfn);
+	const unsigned long size = PFN_PHYS(nr_pages);
 
-	if (walk_memory_range(start_pfn, end_pfn, NULL,
-			      check_memblock_online))
+	if (walk_memory_blocks(start, size, NULL, check_memblock_online))
 		return false;
 
-	walk_memory_range(start_pfn, end_pfn, (void *)MEM_GOING_OFFLINE,
-			  change_memblock_state);
+	walk_memory_blocks(start, size, (void *)MEM_GOING_OFFLINE,
+			   change_memblock_state);
 
 	if (offline_pages(start_pfn, nr_pages)) {
-		walk_memory_range(start_pfn, end_pfn, (void *)MEM_ONLINE,
-				  change_memblock_state);
+		walk_memory_blocks(start, size, (void *)MEM_ONLINE,
+				   change_memblock_state);
 		return false;
 	}
 
-	walk_memory_range(start_pfn, end_pfn, (void *)MEM_OFFLINE,
-			  change_memblock_state);
+	walk_memory_blocks(start, size, (void *)MEM_OFFLINE,
+			   change_memblock_state);
 
 
 	return true;
···
 	 */
 	if (!memhp_auto_online) {
 		lock_device_hotplug();
-		walk_memory_range(PFN_DOWN(ent->start),
-				  PFN_UP(ent->start + ent->size - 1),
-				  NULL, online_mem_block);
+		walk_memory_blocks(ent->start, ent->size, NULL,
+				   online_mem_block);
 		unlock_device_hotplug();
 	}
 
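
The conversion above switches the walker's interface from pfn ranges to a byte-addressed start + size, using the kernel's `PFN_PHYS`/`PFN_DOWN`/`PFN_UP` helpers. Their arithmetic can be modeled in userspace C (a sketch; `PAGE_SHIFT = 12`, i.e. 4 KiB pages, is assumed, and lowercase names stand in for the kernel macros):

```c
#include <assert.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/* pfn -> physical address of the frame's first byte */
static unsigned long pfn_phys(unsigned long pfn)
{
        return pfn << PAGE_SHIFT;
}

/* physical address -> pfn, rounding down */
static unsigned long pfn_down(unsigned long phys)
{
        return phys >> PAGE_SHIFT;
}

/* physical address -> pfn, rounding up to cover a partial page */
static unsigned long pfn_up(unsigned long phys)
{
        return (phys + PAGE_SIZE - 1) >> PAGE_SHIFT;
}
```

Passing `PFN_PHYS(start_pfn)` and `PFN_PHYS(nr_pages)` is exactly this round-trip: the old pfn-based interval becomes a byte interval of the same extent.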
+5 -10
arch/s390/appldata/appldata_base.c
···
 			       void __user *buffer, size_t *lenp, loff_t *ppos)
 {
 	int timer_active = appldata_timer_active;
-	int zero = 0;
-	int one = 1;
 	int rc;
 	struct ctl_table ctl_entry = {
 		.procname = ctl->procname,
 		.data = &timer_active,
 		.maxlen = sizeof(int),
-		.extra1 = &zero,
-		.extra2 = &one,
+		.extra1 = SYSCTL_ZERO,
+		.extra2 = SYSCTL_ONE,
 	};
 
 	rc = proc_douintvec_minmax(&ctl_entry, write, buffer, lenp, ppos);
···
 			      void __user *buffer, size_t *lenp, loff_t *ppos)
 {
 	int interval = appldata_interval;
-	int one = 1;
 	int rc;
 	struct ctl_table ctl_entry = {
 		.procname = ctl->procname,
 		.data = &interval,
 		.maxlen = sizeof(int),
-		.extra1 = &one,
+		.extra1 = SYSCTL_ONE,
 	};
 
 	rc = proc_dointvec_minmax(&ctl_entry, write, buffer, lenp, ppos);
···
 	struct list_head *lh;
 	int rc, found;
 	int active;
-	int zero = 0;
-	int one = 1;
 	struct ctl_table ctl_entry = {
 		.data = &active,
 		.maxlen = sizeof(int),
-		.extra1 = &zero,
-		.extra2 = &one,
+		.extra1 = SYSCTL_ZERO,
+		.extra2 = SYSCTL_ONE,
 	};
 
 	found = 0;
+2 -4
arch/s390/kernel/topology.c
···
 {
 	int enabled = topology_is_enabled();
 	int new_mode;
-	int zero = 0;
-	int one = 1;
 	int rc;
 	struct ctl_table ctl_entry = {
 		.procname = ctl->procname,
 		.data = &enabled,
 		.maxlen = sizeof(int),
-		.extra1 = &zero,
-		.extra2 = &one,
+		.extra1 = SYSCTL_ZERO,
+		.extra2 = SYSCTL_ONE,
 	};
 
 	rc = proc_douintvec_minmax(&ctl_entry, write, buffer, lenp, ppos);
+10 -8
arch/s390/mm/init.c
···
 	unsigned long size_pages = PFN_DOWN(size);
 	int rc;
 
+	if (WARN_ON_ONCE(restrictions->altmap))
+		return -EINVAL;
+
 	rc = vmem_add_mapping(start, size);
 	if (rc)
 		return rc;
···
 	return rc;
 }
 
-#ifdef CONFIG_MEMORY_HOTREMOVE
 void arch_remove_memory(int nid, u64 start, u64 size,
 			struct vmem_altmap *altmap)
 {
-	/*
-	 * There is no hardware or firmware interface which could trigger a
-	 * hot memory remove on s390. So there is nothing that needs to be
-	 * implemented.
-	 */
-	BUG();
+	unsigned long start_pfn = start >> PAGE_SHIFT;
+	unsigned long nr_pages = size >> PAGE_SHIFT;
+	struct zone *zone;
+
+	zone = page_zone(pfn_to_page(start_pfn));
+	__remove_pages(zone, start_pfn, nr_pages, altmap);
+	vmem_remove_mapping(start, size);
 }
-#endif
 #endif /* CONFIG_MEMORY_HOTPLUG */
-2
arch/sh/mm/init.c
···
 EXPORT_SYMBOL_GPL(memory_add_physaddr_to_nid);
 #endif
 
-#ifdef CONFIG_MEMORY_HOTREMOVE
 void arch_remove_memory(int nid, u64 start, u64 size,
 			struct vmem_altmap *altmap)
 {
···
 	zone = page_zone(pfn_to_page(start_pfn));
 	__remove_pages(zone, start_pfn, nr_pages, altmap);
 }
-#endif
 #endif /* CONFIG_MEMORY_HOTPLUG */
+2 -5
arch/x86/entry/vdso/vdso32-setup.c
···
 /* Register vsyscall32 into the ABI table */
 #include <linux/sysctl.h>
 
-static const int zero;
-static const int one = 1;
-
 static struct ctl_table abi_table2[] = {
 	{
 		.procname = "vsyscall32",
···
 		.maxlen = sizeof(int),
 		.mode = 0644,
 		.proc_handler = proc_dointvec_minmax,
-		.extra1 = (int *)&zero,
-		.extra2 = (int *)&one,
+		.extra1 = SYSCTL_ZERO,
+		.extra2 = SYSCTL_ONE,
 	},
 	{}
 };
+2 -4
arch/x86/kernel/itmt.c
···
 	return ret;
 }
 
-static unsigned int zero;
-static unsigned int one = 1;
 static struct ctl_table itmt_kern_table[] = {
 	{
 		.procname = "sched_itmt_enabled",
···
 		.maxlen = sizeof(unsigned int),
 		.mode = 0644,
 		.proc_handler = sched_itmt_update_handler,
-		.extra1 = &zero,
-		.extra2 = &one,
+		.extra1 = SYSCTL_ZERO,
+		.extra2 = SYSCTL_ONE,
 	},
 	{}
 };
-2
arch/x86/mm/init_32.c
···
 	return __add_pages(nid, start_pfn, nr_pages, restrictions);
 }
 
-#ifdef CONFIG_MEMORY_HOTREMOVE
 void arch_remove_memory(int nid, u64 start, u64 size,
 			struct vmem_altmap *altmap)
 {
···
 	zone = page_zone(pfn_to_page(start_pfn));
 	__remove_pages(zone, start_pfn, nr_pages, altmap);
 }
-#endif
 #endif
 
 int kernel_set_to_readonly __read_mostly;
+3 -3
arch/x86/mm/init_64.c
···
 	remove_pagetable(start, end, false, altmap);
 }
 
-#ifdef CONFIG_MEMORY_HOTREMOVE
 static void __meminit
 kernel_physical_mapping_remove(unsigned long start, unsigned long end)
 {
···
 	__remove_pages(zone, start_pfn, nr_pages, altmap);
 	kernel_physical_mapping_remove(start, start + size);
 }
-#endif
 #endif /* CONFIG_MEMORY_HOTPLUG */
 
 static struct kcore_list kcore_vsyscall;
···
 {
 	int err;
 
-	if (boot_cpu_has(X86_FEATURE_PSE))
+	if (end - start < PAGES_PER_SECTION * sizeof(struct page))
+		err = vmemmap_populate_basepages(start, end, node);
+	else if (boot_cpu_has(X86_FEATURE_PSE))
 		err = vmemmap_populate_hugepages(start, end, node, altmap);
 	else if (altmap) {
 		pr_err_once("%s: no cpu support for altmap allocations\n",
+4 -15
drivers/acpi/acpi_memhotplug.c
···
 	return 0;
 }
 
-static unsigned long acpi_meminfo_start_pfn(struct acpi_memory_info *info)
-{
-	return PFN_DOWN(info->start_addr);
-}
-
-static unsigned long acpi_meminfo_end_pfn(struct acpi_memory_info *info)
-{
-	return PFN_UP(info->start_addr + info->length-1);
-}
-
 static int acpi_bind_memblk(struct memory_block *mem, void *arg)
 {
 	return acpi_bind_one(&mem->dev, arg);
···
 static int acpi_bind_memory_blocks(struct acpi_memory_info *info,
 				   struct acpi_device *adev)
 {
-	return walk_memory_range(acpi_meminfo_start_pfn(info),
-				 acpi_meminfo_end_pfn(info), adev,
-				 acpi_bind_memblk);
+	return walk_memory_blocks(info->start_addr, info->length, adev,
+				  acpi_bind_memblk);
 }
 
 static int acpi_unbind_memblk(struct memory_block *mem, void *arg)
···
 
 static void acpi_unbind_memory_blocks(struct acpi_memory_info *info)
 {
-	walk_memory_range(acpi_meminfo_start_pfn(info),
-			  acpi_meminfo_end_pfn(info), NULL, acpi_unbind_memblk);
+	walk_memory_blocks(info->start_addr, info->length, NULL,
+			   acpi_unbind_memblk);
 }
 
 static int acpi_memory_enable_device(struct acpi_memory_device *mem_device)
+6 -7
drivers/base/firmware_loader/fallback_table.c
···
  * firmware fallback configuration table
  */
 
-static unsigned int zero;
-static unsigned int one = 1;
-
 struct firmware_fallback_config fw_fallback_config = {
 	.force_sysfs_fallback = IS_ENABLED(CONFIG_FW_LOADER_USER_HELPER_FALLBACK),
 	.loading_timeout = 60,
···
 };
 EXPORT_SYMBOL_GPL(fw_fallback_config);
 
+#ifdef CONFIG_SYSCTL
 struct ctl_table firmware_config_table[] = {
 	{
 		.procname = "force_sysfs_fallback",
···
 		.maxlen = sizeof(unsigned int),
 		.mode = 0644,
 		.proc_handler = proc_douintvec_minmax,
-		.extra1 = &zero,
-		.extra2 = &one,
+		.extra1 = SYSCTL_ZERO,
+		.extra2 = SYSCTL_ONE,
 	},
 	{
 		.procname = "ignore_sysfs_fallback",
···
 		.maxlen = sizeof(unsigned int),
 		.mode = 0644,
 		.proc_handler = proc_douintvec_minmax,
-		.extra1 = &zero,
-		.extra2 = &one,
+		.extra1 = SYSCTL_ZERO,
+		.extra2 = SYSCTL_ONE,
 	},
 	{ }
 };
 EXPORT_SYMBOL_GPL(firmware_config_table);
+#endif
+138 -91
drivers/base/memory.c
···
 
 static int sections_per_block;
 
-static inline int base_memory_block_id(int section_nr)
+static inline unsigned long base_memory_block_id(unsigned long section_nr)
 {
 	return section_nr / sections_per_block;
+}
+
+static inline unsigned long pfn_to_block_id(unsigned long pfn)
+{
+	return base_memory_block_id(pfn_to_section_nr(pfn));
+}
+
+static inline unsigned long phys_to_block_id(unsigned long phys)
+{
+	return pfn_to_block_id(PFN_DOWN(phys));
 }
 
 static int memory_subsys_online(struct device *dev);
···
 static ssize_t removable_show(struct device *dev, struct device_attribute *attr,
 			      char *buf)
 {
-	unsigned long i, pfn;
-	int ret = 1;
 	struct memory_block *mem = to_memory_block(dev);
+	unsigned long pfn;
+	int ret = 1, i;
 
 	if (mem->state != MEM_ONLINE)
 		goto out;
···
 	return 0;
 }
 
-/*
- * A reference for the returned object is held and the reference for the
- * hinted object is released.
- */
-struct memory_block *find_memory_block_hinted(struct mem_section *section,
-					      struct memory_block *hint)
+/* A reference for the returned memory block device is acquired. */
+static struct memory_block *find_memory_block_by_id(unsigned long block_id)
 {
-	int block_id = base_memory_block_id(__section_nr(section));
-	struct device *hintdev = hint ? &hint->dev : NULL;
 	struct device *dev;
 
-	dev = subsys_find_device_by_id(&memory_subsys, block_id, hintdev);
-	if (hint)
-		put_device(&hint->dev);
-	if (!dev)
-		return NULL;
-	return to_memory_block(dev);
+	dev = subsys_find_device_by_id(&memory_subsys, block_id, NULL);
+	return dev ? to_memory_block(dev) : NULL;
 }
 
 /*
···
  */
 struct memory_block *find_memory_block(struct mem_section *section)
 {
-	return find_memory_block_hinted(section, NULL);
+	unsigned long block_id = base_memory_block_id(__section_nr(section));
+
+	return find_memory_block_by_id(block_id);
 }
 
 static struct attribute *memory_memblk_attrs[] = {
···
 }
 
 static int init_memory_block(struct memory_block **memory,
-			     struct mem_section *section, unsigned long state)
+			     unsigned long block_id, unsigned long state)
 {
 	struct memory_block *mem;
 	unsigned long start_pfn;
-	int scn_nr;
 	int ret = 0;
 
+	mem = find_memory_block_by_id(block_id);
+	if (mem) {
+		put_device(&mem->dev);
+		return -EEXIST;
+	}
 	mem = kzalloc(sizeof(*mem), GFP_KERNEL);
 	if (!mem)
 		return -ENOMEM;
 
-	scn_nr = __section_nr(section);
-	mem->start_section_nr =
-			base_memory_block_id(scn_nr) * sections_per_block;
+	mem->start_section_nr = block_id * sections_per_block;
 	mem->end_section_nr = mem->start_section_nr + sections_per_block - 1;
 	mem->state = state;
 	start_pfn = section_nr_to_pfn(mem->start_section_nr);
···
 	return ret;
 }
 
-static int add_memory_block(int base_section_nr)
+static int add_memory_block(unsigned long base_section_nr)
 {
+	int ret, section_count = 0;
 	struct memory_block *mem;
-	int i, ret, section_count = 0, section_nr;
+	unsigned long nr;
 
-	for (i = base_section_nr;
-	     i < base_section_nr + sections_per_block;
-	     i++) {
-		if (!present_section_nr(i))
-			continue;
-		if (section_count == 0)
-			section_nr = i;
-		section_count++;
-	}
+	for (nr = base_section_nr; nr < base_section_nr + sections_per_block;
+	     nr++)
+		if (present_section_nr(nr))
+			section_count++;
 
 	if (section_count == 0)
 		return 0;
-	ret = init_memory_block(&mem, __nr_to_section(section_nr), MEM_ONLINE);
+	ret = init_memory_block(&mem, base_memory_block_id(base_section_nr),
+				MEM_ONLINE);
 	if (ret)
 		return ret;
 	mem->section_count = section_count;
 	return 0;
 }
 
-/*
- * need an interface for the VM to add new memory regions,
- * but without onlining it.
- */
-int hotplug_memory_register(int nid, struct mem_section *section)
+static void unregister_memory(struct memory_block *memory)
 {
-	int ret = 0;
-	struct memory_block *mem;
-
-	mutex_lock(&mem_sysfs_mutex);
-
-	mem = find_memory_block(section);
-	if (mem) {
-		mem->section_count++;
-		put_device(&mem->dev);
-	} else {
-		ret = init_memory_block(&mem, section, MEM_OFFLINE);
-		if (ret)
-			goto out;
-		mem->section_count++;
-	}
-
-out:
-	mutex_unlock(&mem_sysfs_mutex);
-	return ret;
-}
-
-#ifdef CONFIG_MEMORY_HOTREMOVE
-static void
-unregister_memory(struct memory_block *memory)
-{
-	BUG_ON(memory->dev.bus != &memory_subsys);
+	if (WARN_ON_ONCE(memory->dev.bus != &memory_subsys))
+		return;
 
 	/* drop the ref. we got via find_memory_block() */
 	put_device(&memory->dev);
 	device_unregister(&memory->dev);
 }
 
-void unregister_memory_section(struct mem_section *section)
+/*
+ * Create memory block devices for the given memory area. Start and size
+ * have to be aligned to memory block granularity. Memory block devices
+ * will be initialized as offline.
+ */
+int create_memory_block_devices(unsigned long start, unsigned long size)
 {
+	const unsigned long start_block_id = pfn_to_block_id(PFN_DOWN(start));
+	unsigned long end_block_id = pfn_to_block_id(PFN_DOWN(start + size));
 	struct memory_block *mem;
+	unsigned long block_id;
+	int ret = 0;
 
-	if (WARN_ON_ONCE(!present_section(section)))
+	if (WARN_ON_ONCE(!IS_ALIGNED(start, memory_block_size_bytes()) ||
+			 !IS_ALIGNED(size, memory_block_size_bytes())))
+		return -EINVAL;
+
+	mutex_lock(&mem_sysfs_mutex);
+	for (block_id = start_block_id; block_id != end_block_id; block_id++) {
+		ret = init_memory_block(&mem, block_id, MEM_OFFLINE);
+		if (ret)
+			break;
+		mem->section_count = sections_per_block;
+	}
+	if (ret) {
+		end_block_id = block_id;
+		for (block_id = start_block_id; block_id != end_block_id;
+		     block_id++) {
+			mem = find_memory_block_by_id(block_id);
+			mem->section_count = 0;
+			unregister_memory(mem);
+		}
+	}
+	mutex_unlock(&mem_sysfs_mutex);
+	return ret;
+}
+
+/*
+ * Remove memory block devices for the given memory area. Start and size
+ * have to be aligned to memory block granularity. Memory block devices
+ * have to be offline.
+ */
+void remove_memory_block_devices(unsigned long start, unsigned long size)
+{
+	const unsigned long start_block_id = pfn_to_block_id(PFN_DOWN(start));
+	const unsigned long end_block_id = pfn_to_block_id(PFN_DOWN(start + size));
+	struct memory_block *mem;
+	unsigned long block_id;
+
+	if (WARN_ON_ONCE(!IS_ALIGNED(start, memory_block_size_bytes()) ||
+			 !IS_ALIGNED(size, memory_block_size_bytes())))
 		return;
 
 	mutex_lock(&mem_sysfs_mutex);
-
-	/*
-	 * Some users of the memory hotplug do not want/need memblock to
-	 * track all sections. Skip over those.
-	 */
-	mem = find_memory_block(section);
-	if (!mem)
-		goto out_unlock;
-
-	unregister_mem_sect_under_nodes(mem, __section_nr(section));
-
-	mem->section_count--;
-	if (mem->section_count == 0)
+	for (block_id = start_block_id; block_id != end_block_id; block_id++) {
+		mem = find_memory_block_by_id(block_id);
+		if (WARN_ON_ONCE(!mem))
+			continue;
+		mem->section_count = 0;
+		unregister_memory_block_under_nodes(mem);
 		unregister_memory(mem);
-	else
-		put_device(&mem->dev);
-
-out_unlock:
+	}
 	mutex_unlock(&mem_sysfs_mutex);
 }
-#endif /* CONFIG_MEMORY_HOTREMOVE */
 
 /* return true if the memory block is offlined, otherwise, return false */
 bool is_memblock_offlined(struct memory_block *mem)
···
  */
 int __init memory_dev_init(void)
 {
-	unsigned int i;
 	int ret;
 	int err;
-	unsigned long block_sz;
+	unsigned long block_sz, nr;
 
 	ret = subsys_system_register(&memory_subsys, memory_root_attr_groups);
 	if (ret)
···
 	 * during boot and have been initialized
 	 */
 	mutex_lock(&mem_sysfs_mutex);
-	for (i = 0; i <= __highest_present_section_nr;
-	     i += sections_per_block) {
-		err = add_memory_block(i);
+	for (nr = 0; nr <= __highest_present_section_nr;
+	     nr += sections_per_block) {
+		err = add_memory_block(nr);
 		if (!ret)
 			ret = err;
 	}
···
 out:
 	if (ret)
 		printk(KERN_ERR "%s() failed: %d\n", __func__, ret);
+	return ret;
+}
+
+/**
+ * walk_memory_blocks - walk through all present memory blocks overlapped
+ *			by the range [start, start + size)
+ *
+ * @start: start address of the memory range
+ * @size: size of the memory range
+ * @arg: argument passed to func
+ * @func: callback for each memory section walked
+ *
+ * This function walks through all present memory blocks overlapped by the
+ * range [start, start + size), calling func on each memory block.
+ *
+ * In case func() returns an error, walking is aborted and the error is
+ * returned.
+ */
+int walk_memory_blocks(unsigned long start, unsigned long size,
+		       void *arg, walk_memory_blocks_func_t func)
+{
+	const unsigned long start_block_id = phys_to_block_id(start);
+	const unsigned long end_block_id = phys_to_block_id(start + size - 1);
+	struct memory_block *mem;
+	unsigned long block_id;
+	int ret = 0;
+
+	if (!size)
+		return 0;
+
+	for (block_id = start_block_id; block_id <= end_block_id; block_id++) {
+		mem = find_memory_block_by_id(block_id);
+		if (!mem)
+			continue;
+
+		ret = func(mem, arg);
+		put_device(&mem->dev);
+		if (ret)
+			break;
+	}
 	return ret;
 }
+15 -20
drivers/base/node.c
···
 }
 
 /* register memory section under specified node if it spans that node */
-int register_mem_sect_under_node(struct memory_block *mem_blk, void *arg)
+static int register_mem_sect_under_node(struct memory_block *mem_blk,
+					void *arg)
 {
 	int ret, nid = *(int *)arg;
 	unsigned long pfn, sect_start_pfn, sect_end_pfn;
···
 	return 0;
 }
 
-/* unregister memory section under all nodes that it spans */
-int unregister_mem_sect_under_nodes(struct memory_block *mem_blk,
-				    unsigned long phys_index)
+/*
+ * Unregister memory block device under all nodes that it spans.
+ * Has to be called with mem_sysfs_mutex held (due to unlinked_nodes).
+ */
+void unregister_memory_block_under_nodes(struct memory_block *mem_blk)
 {
-	NODEMASK_ALLOC(nodemask_t, unlinked_nodes, GFP_KERNEL);
 	unsigned long pfn, sect_start_pfn, sect_end_pfn;
+	static nodemask_t unlinked_nodes;
 
-	if (!mem_blk) {
-		NODEMASK_FREE(unlinked_nodes);
-		return -EFAULT;
-	}
-	if (!unlinked_nodes)
-		return -ENOMEM;
-	nodes_clear(*unlinked_nodes);
-
-	sect_start_pfn = section_nr_to_pfn(phys_index);
-	sect_end_pfn = sect_start_pfn + PAGES_PER_SECTION - 1;
+	nodes_clear(unlinked_nodes);
+	sect_start_pfn = section_nr_to_pfn(mem_blk->start_section_nr);
+	sect_end_pfn = section_nr_to_pfn(mem_blk->end_section_nr);
 	for (pfn = sect_start_pfn; pfn <= sect_end_pfn; pfn++) {
 		int nid;
···
 			continue;
 		if (!node_online(nid))
 			continue;
-		if (node_test_and_set(nid, *unlinked_nodes))
+		if (node_test_and_set(nid, unlinked_nodes))
 			continue;
 		sysfs_remove_link(&node_devices[nid]->dev.kobj,
 			 kobject_name(&mem_blk->dev.kobj));
 		sysfs_remove_link(&mem_blk->dev.kobj,
 			 kobject_name(&node_devices[nid]->dev.kobj));
 	}
-	NODEMASK_FREE(unlinked_nodes);
-	return 0;
 }
 
 int link_mem_sections(int nid, unsigned long start_pfn, unsigned long end_pfn)
 {
-	return walk_memory_range(start_pfn, end_pfn, (void *)&nid,
-				 register_mem_sect_under_node);
+	return walk_memory_blocks(PFN_PHYS(start_pfn),
+				  PFN_PHYS(end_pfn - start_pfn), (void *)&nid,
+				  register_mem_sect_under_node);
 }
 
 #ifdef CONFIG_HUGETLBFS
+3 -5
drivers/gpu/drm/i915/i915_perf.c
···
 #define POLL_PERIOD (NSEC_PER_SEC / POLL_FREQUENCY)
 
 /* for sysctl proc_dointvec_minmax of dev.i915.perf_stream_paranoid */
-static int zero;
-static int one = 1;
 static u32 i915_perf_stream_paranoid = true;
 
 /* The maximum exponent the hardware accepts is 63 (essentially it selects one
···
 		.maxlen = sizeof(i915_perf_stream_paranoid),
 		.mode = 0644,
 		.proc_handler = proc_dointvec_minmax,
-		.extra1 = &zero,
-		.extra2 = &one,
+		.extra1 = SYSCTL_ZERO,
+		.extra2 = SYSCTL_ONE,
 	},
 	{
 		.procname = "oa_max_sample_rate",
···
 		.maxlen = sizeof(i915_oa_max_sample_rate),
 		.mode = 0644,
 		.proc_handler = proc_dointvec_minmax,
-		.extra1 = &zero,
+		.extra1 = SYSCTL_ZERO,
 		.extra2 = &oa_sample_rate_hard_limit,
 	},
 	{}
+2 -4
drivers/hv/vmbus_drv.c
···
 };
 
 static struct ctl_table_header *hv_ctl_table_hdr;
-static int zero;
-static int one = 1;
 
 /*
  * sysctl option to allow the user to control whether kmsg data should be
···
 		.maxlen = sizeof(int),
 		.mode = 0644,
 		.proc_handler = proc_dointvec_minmax,
-		.extra1 = &zero,
-		.extra2 = &one
+		.extra1 = SYSCTL_ZERO,
+		.extra2 = SYSCTL_ONE
 	},
 	{}
 };
+1 -1
drivers/nvdimm/dax_devs.c
···
 	nvdimm_bus_unlock(&ndns->dev);
 	if (!dax_dev)
 		return -ENOMEM;
-	pfn_sb = devm_kzalloc(dev, sizeof(*pfn_sb), GFP_KERNEL);
+	pfn_sb = devm_kmalloc(dev, sizeof(*pfn_sb), GFP_KERNEL);
 	nd_pfn->pfn_sb = pfn_sb;
 	rc = nd_pfn_validate(nd_pfn, DAX_SIG);
 	dev_dbg(dev, "dax: %s\n", rc == 0 ? dev_name(dax_dev) : "<none>");
+1 -14
drivers/nvdimm/pfn.h
···
 	__le32 end_trunc;
 	/* minor-version-2 record the base alignment of the mapping */
 	__le32 align;
+	/* minor-version-3 guarantee the padding and flags are zero */
 	u8 padding[4000];
 	__le64 checksum;
 };
 
-#ifdef CONFIG_SPARSEMEM
-#define PFN_SECTION_ALIGN_DOWN(x) SECTION_ALIGN_DOWN(x)
-#define PFN_SECTION_ALIGN_UP(x) SECTION_ALIGN_UP(x)
-#else
-/*
- * In this case ZONE_DEVICE=n and we will disable 'pfn' device support,
- * but we still want pmem to compile.
- */
-#define PFN_SECTION_ALIGN_DOWN(x) (x)
-#define PFN_SECTION_ALIGN_UP(x) (x)
-#endif
-
-#define PHYS_SECTION_ALIGN_DOWN(x) PFN_PHYS(PFN_SECTION_ALIGN_DOWN(PHYS_PFN(x)))
-#define PHYS_SECTION_ALIGN_UP(x) PFN_PHYS(PFN_SECTION_ALIGN_UP(PHYS_PFN(x)))
 #endif /* __NVDIMM_PFN_H */
+28 -67
drivers/nvdimm/pfn_devs.c
···
 	return 0;
 }
 
+/**
+ * nd_pfn_validate - read and validate info-block
+ * @nd_pfn: fsdax namespace runtime state / properties
+ * @sig: 'devdax' or 'fsdax' signature
+ *
+ * Upon return the info-block buffer contents (->pfn_sb) are
+ * indeterminate when validation fails, and a coherent info-block
+ * otherwise.
+ */
 int nd_pfn_validate(struct nd_pfn *nd_pfn, const char *sig)
 {
 	u64 checksum, offset;
···
 	nvdimm_bus_unlock(&ndns->dev);
 	if (!pfn_dev)
 		return -ENOMEM;
-	pfn_sb = devm_kzalloc(dev, sizeof(*pfn_sb), GFP_KERNEL);
+	pfn_sb = devm_kmalloc(dev, sizeof(*pfn_sb), GFP_KERNEL);
 	nd_pfn = to_nd_pfn(pfn_dev);
 	nd_pfn->pfn_sb = pfn_sb;
 	rc = nd_pfn_validate(nd_pfn, PFN_SIG);
···
 }
 
 /*
- * We hotplug memory at section granularity, pad the reserved area from
- * the previous section base to the namespace base address.
+ * We hotplug memory at sub-section granularity, pad the reserved area
+ * from the previous section base to the namespace base address.
  */
 static unsigned long init_altmap_base(resource_size_t base)
 {
 	unsigned long base_pfn = PHYS_PFN(base);
 
-	return PFN_SECTION_ALIGN_DOWN(base_pfn);
+	return SUBSECTION_ALIGN_DOWN(base_pfn);
 }
 
 static unsigned long init_altmap_reserve(resource_size_t base)
···
 	unsigned long reserve = info_block_reserve() >> PAGE_SHIFT;
 	unsigned long base_pfn = PHYS_PFN(base);
 
-	reserve += base_pfn - PFN_SECTION_ALIGN_DOWN(base_pfn);
+	reserve += base_pfn - SUBSECTION_ALIGN_DOWN(base_pfn);
 	return reserve;
 }
···
 			return -EINVAL;
 		nd_pfn->npfns = le64_to_cpu(pfn_sb->npfns);
 	} else if (nd_pfn->mode == PFN_MODE_PMEM) {
-		nd_pfn->npfns = PFN_SECTION_ALIGN_UP((resource_size(res)
-				- offset) / PAGE_SIZE);
+		nd_pfn->npfns = PHYS_PFN((resource_size(res) - offset));
 		if (le64_to_cpu(nd_pfn->pfn_sb->npfns) > nd_pfn->npfns)
 			dev_info(&nd_pfn->dev,
 				"number of pfns truncated from %lld to %ld\n",
···
 	return 0;
 }
 
-static u64 phys_pmem_align_down(struct nd_pfn *nd_pfn, u64 phys)
-{
-	return min_t(u64, PHYS_SECTION_ALIGN_DOWN(phys),
-			ALIGN_DOWN(phys, nd_pfn->align));
-}
-
-/*
- * Check if pmem collides with 'System RAM', or other regions when
- * section aligned.  Trim it accordingly.
- */
-static void trim_pfn_device(struct nd_pfn *nd_pfn, u32 *start_pad, u32 *end_trunc)
-{
-	struct nd_namespace_common *ndns = nd_pfn->ndns;
-	struct nd_namespace_io *nsio = to_nd_namespace_io(&ndns->dev);
-	struct nd_region *nd_region = to_nd_region(nd_pfn->dev.parent);
-	const resource_size_t start = nsio->res.start;
-	const resource_size_t end = start + resource_size(&nsio->res);
-	resource_size_t adjust, size;
-
-	*start_pad = 0;
-	*end_trunc = 0;
-
-	adjust = start - PHYS_SECTION_ALIGN_DOWN(start);
-	size = resource_size(&nsio->res) + adjust;
-	if (region_intersects(start - adjust, size, IORESOURCE_SYSTEM_RAM,
-			IORES_DESC_NONE) == REGION_MIXED
-			|| nd_region_conflict(nd_region, start - adjust, size))
-		*start_pad = PHYS_SECTION_ALIGN_UP(start) - start;
-
-	/* Now check that end of the range does not collide. */
-	adjust = PHYS_SECTION_ALIGN_UP(end) - end;
-	size = resource_size(&nsio->res) + adjust;
-	if (region_intersects(start, size, IORESOURCE_SYSTEM_RAM,
-			IORES_DESC_NONE) == REGION_MIXED
-			|| !IS_ALIGNED(end, nd_pfn->align)
-			|| nd_region_conflict(nd_region, start, size))
-		*end_trunc = end - phys_pmem_align_down(nd_pfn, end);
-}
-
 static int nd_pfn_init(struct nd_pfn *nd_pfn)
 {
 	struct nd_namespace_common *ndns = nd_pfn->ndns;
 	struct nd_namespace_io *nsio = to_nd_namespace_io(&ndns->dev);
-	u32 start_pad, end_trunc, reserve = info_block_reserve();
 	resource_size_t start, size;
 	struct nd_region *nd_region;
+	unsigned long npfns, align;
 	struct nd_pfn_sb *pfn_sb;
-	unsigned long npfns;
 	phys_addr_t offset;
 	const char *sig;
 	u64 checksum;
 	int rc;
 
-	pfn_sb = devm_kzalloc(&nd_pfn->dev, sizeof(*pfn_sb), GFP_KERNEL);
+	pfn_sb = devm_kmalloc(&nd_pfn->dev, sizeof(*pfn_sb), GFP_KERNEL);
 	if (!pfn_sb)
 		return -ENOMEM;
···
 		sig = DAX_SIG;
 	else
 		sig = PFN_SIG;
+
 	rc = nd_pfn_validate(nd_pfn, sig);
 	if (rc != -ENODEV)
 		return rc;
 
 	/* no info block, do init */;
+	memset(pfn_sb, 0, sizeof(*pfn_sb));
+
 	nd_region = to_nd_region(nd_pfn->dev.parent);
 	if (nd_region->ro) {
 		dev_info(&nd_pfn->dev,
···
 		return -ENXIO;
 	}
 
-	memset(pfn_sb, 0, sizeof(*pfn_sb));
-
-	trim_pfn_device(nd_pfn, &start_pad, &end_trunc);
-	if (start_pad + end_trunc)
-		dev_info(&nd_pfn->dev, "%s alignment collision, truncate %d bytes\n",
-				dev_name(&ndns->dev), start_pad + end_trunc);
-
 	/*
 	 * Note, we use 64 here for the standard size of struct page,
 	 * debugging options may cause it to be larger in which case the
 	 * implementation will limit the pfns advertised through
 	 * ->direct_access() to those that are included in the memmap.
 	 */
-	start = nsio->res.start + start_pad;
+	start = nsio->res.start;
 	size = resource_size(&nsio->res);
-	npfns = PFN_SECTION_ALIGN_UP((size - start_pad - end_trunc - reserve)
-			/ PAGE_SIZE);
+	npfns = PHYS_PFN(size - SZ_8K);
+	align = max(nd_pfn->align, (1UL << SUBSECTION_SHIFT));
 	if (nd_pfn->mode == PFN_MODE_PMEM) {
 		/*
 		 * The altmap should be padded out to the block size used
 		 * when populating the vmemmap. This *should* be equal to
 		 * PMD_SIZE for most architectures.
 		 */
-		offset = ALIGN(start + reserve + 64 * npfns,
-				max(nd_pfn->align, PMD_SIZE)) - start;
+		offset = ALIGN(start + SZ_8K + 64 * npfns, align) - start;
 	} else if (nd_pfn->mode == PFN_MODE_RAM)
-		offset = ALIGN(start + reserve, nd_pfn->align) - start;
+		offset = ALIGN(start + SZ_8K, align) - start;
 	else
 		return -ENXIO;
 
-	if (offset + start_pad + end_trunc >= size) {
+	if (offset >= size) {
 		dev_err(&nd_pfn->dev, "%s unable to satisfy requested alignment\n",
 				dev_name(&ndns->dev));
 		return -ENXIO;
 	}
 
-	npfns = (size - offset - start_pad - end_trunc) / SZ_4K;
+	npfns = PHYS_PFN(size - offset);
 	pfn_sb->mode = cpu_to_le32(nd_pfn->mode);
 	pfn_sb->dataoff = cpu_to_le64(offset);
 	pfn_sb->npfns = cpu_to_le64(npfns);
···
 	memcpy(pfn_sb->uuid, nd_pfn->uuid, 16);
 	memcpy(pfn_sb->parent_uuid, nd_dev_to_uuid(&ndns->dev), 16);
 	pfn_sb->version_major = cpu_to_le16(1);
-	pfn_sb->version_minor = cpu_to_le16(2);
-	pfn_sb->start_pad = cpu_to_le32(start_pad);
-	pfn_sb->end_trunc = cpu_to_le32(end_trunc);
+	pfn_sb->version_minor = cpu_to_le16(3);
 	pfn_sb->align = cpu_to_le32(nd_pfn->align);
 	checksum = nd_sb_checksum((struct nd_gen_sb *) pfn_sb);
 	pfn_sb->checksum = cpu_to_le64(checksum);
+2 -4
drivers/tty/tty_ldisc.c
···
 	tty->ldisc = NULL;
 }
 
-static int zero;
-static int one = 1;
 static struct ctl_table tty_table[] = {
 	{
 		.procname	= "ldisc_autoload",
···
 		.maxlen		= sizeof(tty_ldisc_autoload),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec,
-		.extra1		= &zero,
-		.extra2		= &one,
+		.extra1		= SYSCTL_ZERO,
+		.extra2		= SYSCTL_ONE,
 	},
 	{ }
 };
+2 -5
drivers/xen/balloon.c
···
 
 #ifdef CONFIG_XEN_BALLOON_MEMORY_HOTPLUG
 
-static int zero;
-static int one = 1;
-
 static struct ctl_table balloon_table[] = {
 	{
 		.procname = "hotplug_unpopulated",
···
 		.maxlen = sizeof(int),
 		.mode = 0644,
 		.proc_handler = proc_dointvec_minmax,
-		.extra1 = &zero,
-		.extra2 = &one,
+		.extra1 = SYSCTL_ZERO,
+		.extra2 = SYSCTL_ONE,
 	},
 	{ }
 };
+1 -1
fs/aio.c
···
 	BUG_ON(PageWriteback(old));
 	get_page(new);
 
-	rc = migrate_page_move_mapping(mapping, new, old, mode, 1);
+	rc = migrate_page_move_mapping(mapping, new, old, 1);
 	if (rc != MIGRATEPAGE_SUCCESS) {
 		put_page(new);
 		goto out_unlock;
+2 -2
fs/eventpoll.c
···
 
 #include <linux/sysctl.h>
 
-static long zero;
+static long long_zero;
 static long long_max = LONG_MAX;
 
 struct ctl_table epoll_table[] = {
···
 		.maxlen = sizeof(max_user_watches),
 		.mode = 0644,
 		.proc_handler = proc_doulongvec_minmax,
-		.extra1 = &zero,
+		.extra1 = &long_zero,
 		.extra2 = &long_max,
 	},
 	{ }
+1 -1
fs/f2fs/data.c
···
 	/* one extra reference was held for atomic_write page */
 	extra_count = atomic_written ? 1 : 0;
 	rc = migrate_page_move_mapping(mapping, newpage,
-				page, mode, extra_count);
+				page, extra_count);
 	if (rc != MIGRATEPAGE_SUCCESS) {
 		if (atomic_written)
 			mutex_unlock(&fi->inmem_lock);
+1 -1
fs/iomap.c
···
 {
 	int ret;
 
-	ret = migrate_page_move_mapping(mapping, newpage, page, mode, 0);
+	ret = migrate_page_move_mapping(mapping, newpage, page, 0);
 	if (ret != MIGRATEPAGE_SUCCESS)
 		return ret;
 
+3 -5
fs/notify/inotify/inotify_user.c
···
 
 #include <linux/sysctl.h>
 
-static int zero;
-
 struct ctl_table inotify_table[] = {
 	{
 		.procname = "max_user_instances",
···
 		.maxlen = sizeof(int),
 		.mode = 0644,
 		.proc_handler = proc_dointvec_minmax,
-		.extra1 = &zero,
+		.extra1 = SYSCTL_ZERO,
 	},
 	{
 		.procname = "max_user_watches",
···
 		.maxlen = sizeof(int),
 		.mode = 0644,
 		.proc_handler = proc_dointvec_minmax,
-		.extra1 = &zero,
+		.extra1 = SYSCTL_ZERO,
 	},
 	{
 		.procname = "max_queued_events",
···
 		.maxlen = sizeof(int),
 		.mode = 0644,
 		.proc_handler = proc_dointvec_minmax,
-		.extra1 = &zero
+		.extra1 = SYSCTL_ZERO
 	},
 	{ }
 };
+4
fs/proc/proc_sysctl.c
···
 static const struct file_operations proc_sys_dir_file_operations;
 static const struct inode_operations proc_sys_dir_operations;
 
+/* shared constants to be used in various sysctls */
+const int sysctl_vals[] = { 0, 1, INT_MAX };
+EXPORT_SYMBOL(sysctl_vals);
+
 /* Support for permanently empty directories */
 
 struct ctl_table sysctl_mount_point[] = {
+2 -1
fs/proc/task_mmu.c
···
 
 	__show_smap(m, &mss, false);
 
-	seq_printf(m, "THPeligible: %d\n", transparent_hugepage_enabled(vma));
+	seq_printf(m, "THPeligible: %d\n",
+		   transparent_hugepage_enabled(vma));
 
 	if (arch_pkeys_enabled())
 		seq_printf(m, "ProtectionKey: %8u\n", vma_pkey(vma));
+1 -1
fs/ubifs/file.c
···
 {
 	int rc;
 
-	rc = migrate_page_move_mapping(mapping, newpage, page, mode, 0);
+	rc = migrate_page_move_mapping(mapping, newpage, page, 0);
 	if (rc != MIGRATEPAGE_SUCCESS)
 		return rc;
 
+23
include/linux/huge_mm.h
···
 
 bool transparent_hugepage_enabled(struct vm_area_struct *vma);
 
+#define HPAGE_CACHE_INDEX_MASK (HPAGE_PMD_NR - 1)
+
+static inline bool transhuge_vma_suitable(struct vm_area_struct *vma,
+		unsigned long haddr)
+{
+	/* Don't have to check pgoff for anonymous vma */
+	if (!vma_is_anonymous(vma)) {
+		if (((vma->vm_start >> PAGE_SHIFT) & HPAGE_CACHE_INDEX_MASK) !=
+			(vma->vm_pgoff & HPAGE_CACHE_INDEX_MASK))
+			return false;
+	}
+
+	if (haddr < vma->vm_start || haddr + HPAGE_PMD_SIZE > vma->vm_end)
+		return false;
+	return true;
+}
+
 #define transparent_hugepage_use_zero_page()				\
 	(transparent_hugepage_flags &					\
 	 (1<<TRANSPARENT_HUGEPAGE_USE_ZERO_PAGE_FLAG))
···
 }
 
 static inline bool transparent_hugepage_enabled(struct vm_area_struct *vma)
+{
+	return false;
+}
+
+static inline bool transhuge_vma_suitable(struct vm_area_struct *vma,
+		unsigned long haddr)
 {
 	return false;
 }
+5 -6
include/linux/memory.h
···
 extern void unregister_memory_notifier(struct notifier_block *nb);
 extern int register_memory_isolate_notifier(struct notifier_block *nb);
 extern void unregister_memory_isolate_notifier(struct notifier_block *nb);
-int hotplug_memory_register(int nid, struct mem_section *section);
-#ifdef CONFIG_MEMORY_HOTREMOVE
-extern void unregister_memory_section(struct mem_section *);
-#endif
+int create_memory_block_devices(unsigned long start, unsigned long size);
+void remove_memory_block_devices(unsigned long start, unsigned long size);
 extern int memory_dev_init(void);
 extern int memory_notify(unsigned long val, void *v);
 extern int memory_isolate_notify(unsigned long val, void *v);
-extern struct memory_block *find_memory_block_hinted(struct mem_section *,
-						     struct memory_block *);
 extern struct memory_block *find_memory_block(struct mem_section *);
+typedef int (*walk_memory_blocks_func_t)(struct memory_block *, void *);
+extern int walk_memory_blocks(unsigned long start, unsigned long size,
+			      void *arg, walk_memory_blocks_func_t func);
 #define CONFIG_MEM_BLOCK_SIZE	(PAGES_PER_SECTION<<PAGE_SHIFT)
 #endif /* CONFIG_MEMORY_HOTPLUG_SPARSE */
 
+4 -15
include/linux/memory_hotplug.h
···
 	return movable_node_enabled;
 }
 
-#ifdef CONFIG_MEMORY_HOTREMOVE
 extern void arch_remove_memory(int nid, u64 start, u64 size,
 			       struct vmem_altmap *altmap);
 extern void __remove_pages(struct zone *zone, unsigned long start_pfn,
 			   unsigned long nr_pages, struct vmem_altmap *altmap);
-#endif /* CONFIG_MEMORY_HOTREMOVE */
-
-/*
- * Do we want sysfs memblock files created. This will allow userspace to online
- * and offline memory explicitly. Lack of this bit means that the caller has to
- * call move_pfn_range_to_zone to finish the initialization.
- */
-
-#define MHP_MEMBLOCK_API	(1<<0)
 
 /* reasonably generic interface to expand the physical pages */
 extern int __add_pages(int nid, unsigned long start_pfn, unsigned long nr_pages,
···
 #endif /* CONFIG_MEMORY_HOTREMOVE */
 
 extern void __ref free_area_init_core_hotplug(int nid);
-extern int walk_memory_range(unsigned long start_pfn, unsigned long end_pfn,
-		void *arg, int (*func)(struct memory_block *, void *));
 extern int __add_memory(int nid, u64 start, u64 size);
 extern int add_memory(int nid, u64 start, u64 size);
 extern int add_memory_resource(int nid, struct resource *resource);
 extern void move_pfn_range_to_zone(struct zone *zone, unsigned long start_pfn,
 		unsigned long nr_pages, struct vmem_altmap *altmap);
 extern bool is_memblock_offlined(struct memory_block *mem);
-extern int sparse_add_one_section(int nid, unsigned long start_pfn,
-				  struct vmem_altmap *altmap);
-extern void sparse_remove_one_section(struct zone *zone, struct mem_section *ms,
+extern int sparse_add_section(int nid, unsigned long pfn,
+		unsigned long nr_pages, struct vmem_altmap *altmap);
+extern void sparse_remove_section(struct mem_section *ms,
+		unsigned long pfn, unsigned long nr_pages,
 		unsigned long map_offset, struct vmem_altmap *altmap);
 extern struct page *sparse_decode_mem_map(unsigned long coded_mem_map,
 					  unsigned long pnum);
+1 -2
include/linux/migrate.h
···
 extern int migrate_huge_page_move_mapping(struct address_space *mapping,
 				  struct page *newpage, struct page *page);
 extern int migrate_page_move_mapping(struct address_space *mapping,
-		struct page *newpage, struct page *page, enum migrate_mode mode,
-		int extra_count);
+		struct page *newpage, struct page *page, int extra_count);
 #else
 
 static inline void putback_movable_pages(struct list_head *l) {}
+19 -19
include/linux/mm.h
···
 	vma->vm_ops = NULL;
 }
 
+static inline bool vma_is_anonymous(struct vm_area_struct *vma)
+{
+	return !vma->vm_ops;
+}
+
+#ifdef CONFIG_SHMEM
+/*
+ * The vma_is_shmem is not inline because it is used only by slow
+ * paths in userfault.
+ */
+bool vma_is_shmem(struct vm_area_struct *vma);
+#else
+static inline bool vma_is_shmem(struct vm_area_struct *vma) { return false; }
+#endif
+
+int vma_is_stack_for_current(struct vm_area_struct *vma);
+
 /* flush_tlb_range() takes a vma, not a mm, and can care about flags */
 #define TLB_FLUSH_VMA(mm,flags) { .vm_mm = (mm), .vm_flags = (flags) }
···
 int clear_page_dirty_for_io(struct page *page);
 
 int get_cmdline(struct task_struct *task, char *buffer, int buflen);
-
-static inline bool vma_is_anonymous(struct vm_area_struct *vma)
-{
-	return !vma->vm_ops;
-}
-
-#ifdef CONFIG_SHMEM
-/*
- * The vma_is_shmem is not inline because it is used only by slow
- * paths in userfault.
- */
-bool vma_is_shmem(struct vm_area_struct *vma);
-#else
-static inline bool vma_is_shmem(struct vm_area_struct *vma) { return false; }
-#endif
-
-int vma_is_stack_for_current(struct vm_area_struct *vma);
 
 extern unsigned long move_page_tables(struct vm_area_struct *vma,
 		unsigned long old_addr, struct vm_area_struct *new_vma,
···
 #endif
 
 void *sparse_buffer_alloc(unsigned long size);
-struct page *sparse_mem_map_populate(unsigned long pnum, int nid,
-		struct vmem_altmap *altmap);
+struct page * __populate_section_memmap(unsigned long pfn,
+		unsigned long nr_pages, int nid, struct vmem_altmap *altmap);
pgd_t *vmemmap_pgd_populate(unsigned long addr, int node);
 p4d_t *vmemmap_p4d_populate(pgd_t *pgd, unsigned long addr, int node);
 pud_t *vmemmap_pud_populate(p4d_t *p4d, unsigned long addr, int node);
+69 -19
include/linux/mmzone.h
···
  */
 #define zone_idx(zone)		((zone) - (zone)->zone_pgdat->node_zones)
 
-#ifdef CONFIG_ZONE_DEVICE
-static inline bool is_dev_zone(const struct zone *zone)
-{
-	return zone_idx(zone) == ZONE_DEVICE;
-}
-#else
-static inline bool is_dev_zone(const struct zone *zone)
-{
-	return false;
-}
-#endif
-
 /*
  * Returns true if a zone has pages managed by the buddy allocator.
  * All the reclaim decisions have to use this function rather than
···
 #define SECTION_ALIGN_UP(pfn)	(((pfn) + PAGES_PER_SECTION - 1) & PAGE_SECTION_MASK)
 #define SECTION_ALIGN_DOWN(pfn)	((pfn) & PAGE_SECTION_MASK)
 
+#define SUBSECTION_SHIFT 21
+
+#define PFN_SUBSECTION_SHIFT (SUBSECTION_SHIFT - PAGE_SHIFT)
+#define PAGES_PER_SUBSECTION (1UL << PFN_SUBSECTION_SHIFT)
+#define PAGE_SUBSECTION_MASK (~(PAGES_PER_SUBSECTION-1))
+
+#if SUBSECTION_SHIFT > SECTION_SIZE_BITS
+#error Subsection size exceeds section size
+#else
+#define SUBSECTIONS_PER_SECTION (1UL << (SECTION_SIZE_BITS - SUBSECTION_SHIFT))
+#endif
+
+#define SUBSECTION_ALIGN_UP(pfn) ALIGN((pfn), PAGES_PER_SUBSECTION)
+#define SUBSECTION_ALIGN_DOWN(pfn) ((pfn) & PAGE_SUBSECTION_MASK)
+
+struct mem_section_usage {
+	DECLARE_BITMAP(subsection_map, SUBSECTIONS_PER_SECTION);
+	/* See declaration of similar field in struct zone */
+	unsigned long pageblock_flags[0];
+};
+
+void subsection_map_init(unsigned long pfn, unsigned long nr_pages);
+
 struct page;
 struct page_ext;
 struct mem_section {
···
 	 */
 	unsigned long section_mem_map;
 
-	/* See declaration of similar field in struct zone */
-	unsigned long *pageblock_flags;
+	struct mem_section_usage *usage;
 #ifdef CONFIG_PAGE_EXTENSION
 	/*
 	 * If SPARSEMEM, pgdat doesn't have page_ext pointer. We use
···
 extern struct mem_section mem_section[NR_SECTION_ROOTS][SECTIONS_PER_ROOT];
 #endif
 
+static inline unsigned long *section_to_usemap(struct mem_section *ms)
+{
+	return ms->usage->pageblock_flags;
+}
+
 static inline struct mem_section *__nr_to_section(unsigned long nr)
 {
 #ifdef CONFIG_SPARSEMEM_EXTREME
···
 		return NULL;
 	return &mem_section[SECTION_NR_TO_ROOT(nr)][nr & SECTION_ROOT_MASK];
 }
-extern int __section_nr(struct mem_section* ms);
-extern unsigned long usemap_size(void);
+extern unsigned long __section_nr(struct mem_section *ms);
+extern size_t mem_section_usage_size(void);
 
 /*
  * We use the lower bits of the mem_map pointer to store
···
 #define SECTION_MARKED_PRESENT	(1UL<<0)
 #define SECTION_HAS_MEM_MAP	(1UL<<1)
 #define SECTION_IS_ONLINE	(1UL<<2)
-#define SECTION_MAP_LAST_BIT	(1UL<<3)
+#define SECTION_IS_EARLY	(1UL<<3)
+#define SECTION_MAP_LAST_BIT	(1UL<<4)
 #define SECTION_MAP_MASK	(~(SECTION_MAP_LAST_BIT-1))
 #define SECTION_NID_SHIFT	3
···
 static inline int valid_section(struct mem_section *section)
 {
 	return (section && (section->section_mem_map & SECTION_HAS_MEM_MAP));
+}
+
+static inline int early_section(struct mem_section *section)
+{
+	return (section && (section->section_mem_map & SECTION_IS_EARLY));
 }
 
 static inline int valid_section_nr(unsigned long nr)
···
 	return __nr_to_section(pfn_to_section_nr(pfn));
 }
 
-extern int __highest_present_section_nr;
+extern unsigned long __highest_present_section_nr;
+
+static inline int subsection_map_index(unsigned long pfn)
+{
+	return (pfn & ~(PAGE_SECTION_MASK)) / PAGES_PER_SUBSECTION;
+}
+
+#ifdef CONFIG_SPARSEMEM_VMEMMAP
+static inline int pfn_section_valid(struct mem_section *ms, unsigned long pfn)
+{
+	int idx = subsection_map_index(pfn);
+
+	return test_bit(idx, ms->usage->subsection_map);
+}
+#else
+static inline int pfn_section_valid(struct mem_section *ms, unsigned long pfn)
+{
+	return 1;
+}
+#endif
 
 #ifndef CONFIG_HAVE_ARCH_PFN_VALID
 static inline int pfn_valid(unsigned long pfn)
 {
+	struct mem_section *ms;
+
 	if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
 		return 0;
-	return valid_section(__nr_to_section(pfn_to_section_nr(pfn)));
+	ms = __nr_to_section(pfn_to_section_nr(pfn));
+	if (!valid_section(ms))
+		return 0;
+	/*
+	 * Traditionally early sections always returned pfn_valid() for
+	 * the entire section-sized span.
+	 */
+	return early_section(ms) || pfn_section_valid(ms, pfn);
 }
 #endif
···
 #define sparse_init()	do {} while (0)
 #define sparse_index_init(_sec, _nid)  do {} while (0)
 #define pfn_present pfn_valid
+#define subsection_map_init(_pfn, _nr_pages) do {} while (0)
 #endif /* CONFIG_SPARSEMEM */
 
 /*
+2 -12
include/linux/node.h
···
 extern void unregister_one_node(int nid);
 extern int register_cpu_under_node(unsigned int cpu, unsigned int nid);
 extern int unregister_cpu_under_node(unsigned int cpu, unsigned int nid);
-extern int register_mem_sect_under_node(struct memory_block *mem_blk,
-					void *arg);
-extern int unregister_mem_sect_under_nodes(struct memory_block *mem_blk,
-					   unsigned long phys_index);
+extern void unregister_memory_block_under_nodes(struct memory_block *mem_blk);
 
 extern int register_memory_node_under_compute_node(unsigned int mem_nid,
 						   unsigned int cpu_nid,
···
 {
 	return 0;
 }
-static inline int register_mem_sect_under_node(struct memory_block *mem_blk,
-					       void *arg)
+static inline void unregister_memory_block_under_nodes(struct memory_block *mem_blk)
 {
-	return 0;
-}
-static inline int unregister_mem_sect_under_nodes(struct memory_block *mem_blk,
-						  unsigned long phys_index)
-{
-	return 0;
 }
 
 static inline void register_hugetlbfs_with_node(node_registration_func_t reg,
+7
include/linux/sysctl.h
···
 struct ctl_table_header;
 struct ctl_dir;
 
+/* Keep the same order as in fs/proc/proc_sysctl.c */
+#define SYSCTL_ZERO	((void *)&sysctl_vals[0])
+#define SYSCTL_ONE	((void *)&sysctl_vals[1])
+#define SYSCTL_INT_MAX	((void *)&sysctl_vals[2])
+
+extern const int sysctl_vals[];
+
 typedef int proc_handler (struct ctl_table *ctl, int write,
 			  void __user *buffer, size_t *lenp, loff_t *ppos);
 
+16 -19
ipc/ipc_sysctl.c
···
 #define proc_ipc_sem_dointvec	NULL
 #endif
 
-static int zero;
-static int one = 1;
-static int int_max = INT_MAX;
 int ipc_mni = IPCMNI;
 int ipc_mni_shift = IPCMNI_SHIFT;
 int ipc_min_cycle = RADIX_TREE_MAP_SIZE;
···
 		.maxlen = sizeof(init_ipc_ns.shm_ctlmni),
 		.mode = 0644,
 		.proc_handler = proc_ipc_dointvec_minmax,
-		.extra1 = &zero,
+		.extra1 = SYSCTL_ZERO,
 		.extra2 = &ipc_mni,
 	},
 	{
···
 		.maxlen = sizeof(init_ipc_ns.shm_rmid_forced),
 		.mode = 0644,
 		.proc_handler = proc_ipc_dointvec_minmax_orphans,
-		.extra1 = &zero,
-		.extra2 = &one,
+		.extra1 = SYSCTL_ZERO,
+		.extra2 = SYSCTL_ONE,
 	},
 	{
 		.procname = "msgmax",
···
 		.maxlen = sizeof(init_ipc_ns.msg_ctlmax),
 		.mode = 0644,
 		.proc_handler = proc_ipc_dointvec_minmax,
-		.extra1 = &zero,
-		.extra2 = &int_max,
+		.extra1 = SYSCTL_ZERO,
+		.extra2 = SYSCTL_INT_MAX,
 	},
 	{
 		.procname = "msgmni",
···
 		.maxlen = sizeof(init_ipc_ns.msg_ctlmni),
 		.mode = 0644,
 		.proc_handler = proc_ipc_dointvec_minmax,
-		.extra1 = &zero,
+		.extra1 = SYSCTL_ZERO,
 		.extra2 = &ipc_mni,
 	},
 	{
···
 		.maxlen = sizeof(int),
 		.mode = 0644,
 		.proc_handler = proc_ipc_auto_msgmni,
-		.extra1 = &zero,
-		.extra2 = &one,
+		.extra1 = SYSCTL_ZERO,
+		.extra2 = SYSCTL_ONE,
 	},
 	{
 		.procname = "msgmnb",
···
 		.maxlen = sizeof(init_ipc_ns.msg_ctlmnb),
 		.mode = 0644,
 		.proc_handler = proc_ipc_dointvec_minmax,
-		.extra1 = &zero,
-		.extra2 = &int_max,
+		.extra1 = SYSCTL_ZERO,
+		.extra2 = SYSCTL_INT_MAX,
 	},
 	{
 		.procname = "sem",
···
 		.maxlen = sizeof(init_ipc_ns.ids[IPC_SEM_IDS].next_id),
 		.mode = 0644,
 		.proc_handler = proc_ipc_dointvec_minmax,
-		.extra1 = &zero,
-		.extra2 = &int_max,
+		.extra1 = SYSCTL_ZERO,
+		.extra2 = SYSCTL_INT_MAX,
 	},
 	{
 		.procname = "msg_next_id",
···
 		.maxlen = sizeof(init_ipc_ns.ids[IPC_MSG_IDS].next_id),
 		.mode = 0644,
 		.proc_handler = proc_ipc_dointvec_minmax,
-		.extra1 = &zero,
-		.extra2 = &int_max,
+		.extra1 = SYSCTL_ZERO,
+		.extra2 = SYSCTL_INT_MAX,
 	},
 	{
 		.procname = "shm_next_id",
···
 		.maxlen = sizeof(init_ipc_ns.ids[IPC_SHM_IDS].next_id),
 		.mode = 0644,
 		.proc_handler = proc_ipc_dointvec_minmax,
-		.extra1 = &zero,
-		.extra2 = &int_max,
+		.extra1 = SYSCTL_ZERO,
+		.extra2 = SYSCTL_INT_MAX,
 	},
 #endif
 	{}
+23 -34
kernel/memremap.c
···
 
 static unsigned long pfn_first(struct dev_pagemap *pgmap)
 {
-	return (pgmap->res.start >> PAGE_SHIFT) +
+	return PHYS_PFN(pgmap->res.start) +
 		vmem_altmap_offset(pgmap_altmap(pgmap));
 }
···
 	struct dev_pagemap *pgmap = data;
 	struct device *dev = pgmap->dev;
 	struct resource *res = &pgmap->res;
-	resource_size_t align_start, align_size;
 	unsigned long pfn;
 	int nid;
···
 	dev_pagemap_cleanup(pgmap);
 
 	/* pages are dead and unused, undo the arch mapping */
-	align_start = res->start & ~(SECTION_SIZE - 1);
-	align_size = ALIGN(res->start + resource_size(res), SECTION_SIZE)
-		- align_start;
-
-	nid = page_to_nid(pfn_to_page(align_start >> PAGE_SHIFT));
+	nid = page_to_nid(pfn_to_page(PHYS_PFN(res->start)));
 
 	mem_hotplug_begin();
 	if (pgmap->type == MEMORY_DEVICE_PRIVATE) {
-		pfn = align_start >> PAGE_SHIFT;
+		pfn = PHYS_PFN(res->start);
 		__remove_pages(page_zone(pfn_to_page(pfn)), pfn,
-				align_size >> PAGE_SHIFT, NULL);
+				PHYS_PFN(resource_size(res)), NULL);
 	} else {
-		arch_remove_memory(nid, align_start, align_size,
+		arch_remove_memory(nid, res->start, resource_size(res),
 				pgmap_altmap(pgmap));
-		kasan_remove_zero_shadow(__va(align_start), align_size);
+		kasan_remove_zero_shadow(__va(res->start), resource_size(res));
 	}
 	mem_hotplug_done();
 
-	untrack_pfn(NULL, PHYS_PFN(align_start), align_size);
+	untrack_pfn(NULL, PHYS_PFN(res->start), resource_size(res));
 	pgmap_array_delete(res);
 	dev_WARN_ONCE(dev, pgmap->altmap.alloc,
 		      "%s: failed to free all reserved pages\n", __func__);
···
  */
 void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
 {
-	resource_size_t align_start, align_size, align_end;
 	struct resource *res = &pgmap->res;
 	struct dev_pagemap *conflict_pgmap;
 	struct mhp_restrictions restrictions = {
 		/*
 		 * We do not want any optional features only our own memmap
-		 */
+		 */
 		.altmap = pgmap_altmap(pgmap),
 	};
 	pgprot_t pgprot = PAGE_KERNEL;
···
 		return ERR_PTR(error);
 	}
 
-	align_start = res->start & ~(SECTION_SIZE - 1);
-	align_size = ALIGN(res->start + resource_size(res), SECTION_SIZE)
-		- align_start;
-	align_end = align_start + align_size - 1;
-
-	conflict_pgmap = get_dev_pagemap(PHYS_PFN(align_start), NULL);
+	conflict_pgmap = get_dev_pagemap(PHYS_PFN(res->start), NULL);
 	if (conflict_pgmap) {
 		dev_WARN(dev, "Conflicting mapping in same section\n");
 		put_dev_pagemap(conflict_pgmap);
 		goto err_array;
 	}
 
-	conflict_pgmap = get_dev_pagemap(PHYS_PFN(align_end), NULL);
+	conflict_pgmap = get_dev_pagemap(PHYS_PFN(res->end), NULL);
 	if (conflict_pgmap) {
 		dev_WARN(dev, "Conflicting mapping in same section\n");
 		put_dev_pagemap(conflict_pgmap);
 		goto err_array;
 	}
 
-	is_ram = region_intersects(align_start, align_size,
+	is_ram = region_intersects(res->start, resource_size(res),
 		IORESOURCE_SYSTEM_RAM, IORES_DESC_NONE);
 
 	if (is_ram != REGION_DISJOINT) {
···
 	if (nid < 0)
 		nid = numa_mem_id();
 
-	error = track_pfn_remap(NULL, &pgprot, PHYS_PFN(align_start), 0,
-			align_size);
+	error = track_pfn_remap(NULL, &pgprot, PHYS_PFN(res->start), 0,
+			resource_size(res));
 	if (error)
 		goto err_pfn_remap;
···
 	 * arch_add_memory().
 	 */
 	if (pgmap->type == MEMORY_DEVICE_PRIVATE) {
-		error = add_pages(nid, align_start >> PAGE_SHIFT,
-				align_size >> PAGE_SHIFT, &restrictions);
+		error = add_pages(nid, PHYS_PFN(res->start),
+				PHYS_PFN(resource_size(res)), &restrictions);
 	} else {
-		error = kasan_add_zero_shadow(__va(align_start), align_size);
+		error = kasan_add_zero_shadow(__va(res->start), resource_size(res));
 		if (error) {
 			mem_hotplug_done();
 			goto err_kasan;
 		}
 
-		error = arch_add_memory(nid, align_start, align_size,
+		error = arch_add_memory(nid, res->start, resource_size(res),
 				&restrictions);
 	}
···
 		struct zone *zone;
 
 		zone = &NODE_DATA(nid)->node_zones[ZONE_DEVICE];
-		move_pfn_range_to_zone(zone, align_start >> PAGE_SHIFT,
-				align_size >> PAGE_SHIFT, pgmap_altmap(pgmap));
+		move_pfn_range_to_zone(zone, PHYS_PFN(res->start),
+				PHYS_PFN(resource_size(res)), restrictions.altmap);
 	}
 
 	mem_hotplug_done();
···
 	 * to allow us to do the work while not holding the hotplug lock.
 	 */
 	memmap_init_zone_device(&NODE_DATA(nid)->node_zones[ZONE_DEVICE],
-				align_start >> PAGE_SHIFT,
-				align_size >> PAGE_SHIFT, pgmap);
+				PHYS_PFN(res->start),
+				PHYS_PFN(resource_size(res)), pgmap);
 	percpu_ref_get_many(pgmap->ref, pfn_end(pgmap) - pfn_first(pgmap));
 
 	error = devm_add_action_or_reset(dev, devm_memremap_pages_release,
···
 	return __va(res->start);
 
  err_add_memory:
-	kasan_remove_zero_shadow(__va(align_start), align_size);
+	kasan_remove_zero_shadow(__va(res->start), resource_size(res));
  err_kasan:
-	untrack_pfn(NULL, PHYS_PFN(align_start), align_size);
+	untrack_pfn(NULL, PHYS_PFN(res->start), resource_size(res));
  err_pfn_remap:
 	pgmap_array_delete(res);
  err_array:
kernel/pid_namespace.c | +1 -2
···
 }

 extern int pid_max;
-static int zero = 0;
 static struct ctl_table pid_ns_ctl_table[] = {
 	{
 		.procname = "ns_last_pid",
 		.maxlen = sizeof(int),
 		.mode = 0666, /* permissions are checked in the handler */
 		.proc_handler = pid_ns_ctl_handler,
-		.extra1 = &zero,
+		.extra1 = SYSCTL_ZERO,
 		.extra2 = &pid_max,
 	},
 	{ }
kernel/resource.c | +32 -17
···
  *
  * If a resource is found, returns 0 and @*res is overwritten with the part
  * of the resource that's within [@start..@end]; if none is found, returns
- * -1 or -EINVAL for other invalid parameters.
+ * -ENODEV. Returns -EINVAL for invalid parameters.
  *
  * This function walks the whole tree and not just first level children
  * unless @first_lvl is true.
···
 		unsigned long flags, unsigned long desc,
 		bool first_lvl, struct resource *res)
 {
+	bool siblings_only = true;
 	struct resource *p;

 	if (!res)
···
 	read_lock(&resource_lock);

-	for (p = iomem_resource.child; p; p = next_resource(p, first_lvl)) {
-		if ((p->flags & flags) != flags)
-			continue;
-		if ((desc != IORES_DESC_NONE) && (desc != p->desc))
-			continue;
+	for (p = iomem_resource.child; p; p = next_resource(p, siblings_only)) {
+		/* If we passed the resource we are looking for, stop */
 		if (p->start > end) {
 			p = NULL;
 			break;
 		}
-		if ((p->end >= start) && (p->start <= end))
-			break;
+
+		/* Skip until we find a range that matches what we look for */
+		if (p->end < start)
+			continue;
+
+		/*
+		 * Now that we found a range that matches what we look for,
+		 * check the flags and the descriptor. If we were not asked to
+		 * use only the first level, start looking at children as well.
+		 */
+		siblings_only = first_lvl;
+
+		if ((p->flags & flags) != flags)
+			continue;
+		if ((desc != IORES_DESC_NONE) && (desc != p->desc))
+			continue;
+
+		/* Found a match, break */
+		break;
+	}
+
+	if (p) {
+		/* copy data */
+		res->start = max(start, p->start);
+		res->end = min(end, p->end);
+		res->flags = p->flags;
+		res->desc = p->desc;
 	}

 	read_unlock(&resource_lock);
-	if (!p)
-		return -1;
-
-	/* copy data */
-	res->start = max(start, p->start);
-	res->end = min(end, p->end);
-	res->flags = p->flags;
-	res->desc = p->desc;
-	return 0;
+	return p ? 0 : -ENODEV;
 }

 static int __walk_iomem_res_desc(resource_size_t start, resource_size_t end,
kernel/sysctl.c | +97 -100
···
 #endif

 static int __maybe_unused neg_one = -1;
-
-static int zero;
-static int __maybe_unused one = 1;
 static int __maybe_unused two = 2;
 static int __maybe_unused four = 4;
 static unsigned long zero_ul;
···
 		.maxlen = sizeof(unsigned int),
 		.mode = 0644,
 		.proc_handler = sysctl_schedstats,
-		.extra1 = &zero,
-		.extra2 = &one,
+		.extra1 = SYSCTL_ZERO,
+		.extra2 = SYSCTL_ONE,
 	},
 #endif /* CONFIG_SCHEDSTATS */
 #endif /* CONFIG_SMP */
···
 		.maxlen = sizeof(unsigned int),
 		.mode = 0644,
 		.proc_handler = proc_dointvec_minmax,
-		.extra1 = &one,
+		.extra1 = SYSCTL_ONE,
 	},
 	{
 		.procname = "numa_balancing",
···
 		.maxlen = sizeof(unsigned int),
 		.mode = 0644,
 		.proc_handler = sysctl_numa_balancing,
-		.extra1 = &zero,
-		.extra2 = &one,
+		.extra1 = SYSCTL_ZERO,
+		.extra2 = SYSCTL_ONE,
 	},
 #endif /* CONFIG_NUMA_BALANCING */
 #endif /* CONFIG_SCHED_DEBUG */
···
 		.maxlen = sizeof(unsigned int),
 		.mode = 0644,
 		.proc_handler = proc_dointvec_minmax,
-		.extra1 = &zero,
-		.extra2 = &one,
+		.extra1 = SYSCTL_ZERO,
+		.extra2 = SYSCTL_ONE,
 	},
 #endif
 #ifdef CONFIG_CFS_BANDWIDTH
···
 		.maxlen = sizeof(unsigned int),
 		.mode = 0644,
 		.proc_handler = proc_dointvec_minmax,
-		.extra1 = &one,
+		.extra1 = SYSCTL_ONE,
 	},
 #endif
 #if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL)
···
 		.maxlen = sizeof(unsigned int),
 		.mode = 0644,
 		.proc_handler = sched_energy_aware_handler,
-		.extra1 = &zero,
-		.extra2 = &one,
+		.extra1 = SYSCTL_ZERO,
+		.extra2 = SYSCTL_ONE,
 	},
 #endif
 #ifdef CONFIG_PROVE_LOCKING
···
 		.mode = 0644,
 		.proc_handler = proc_dointvec_minmax,
 		.extra1 = &neg_one,
-		.extra2 = &one,
+		.extra2 = SYSCTL_ONE,
 	},
 #endif
 #ifdef CONFIG_LATENCYTOP
···
 		.mode = 0644,
 		/* only handle a transition from default "0" to "1" */
 		.proc_handler = proc_dointvec_minmax,
-		.extra1 = &one,
-		.extra2 = &one,
+		.extra1 = SYSCTL_ONE,
+		.extra2 = SYSCTL_ONE,
 	},
 #endif
 #ifdef CONFIG_MODULES
···
 		.mode = 0644,
 		/* only handle a transition from default "0" to "1" */
 		.proc_handler = proc_dointvec_minmax,
-		.extra1 = &one,
-		.extra2 = &one,
+		.extra1 = SYSCTL_ONE,
+		.extra2 = SYSCTL_ONE,
 	},
 #endif
 #ifdef CONFIG_UEVENT_HELPER
···
 		.maxlen = sizeof(int),
 		.mode = 0644,
 		.proc_handler = proc_dointvec_minmax,
-		.extra1 = &zero,
+		.extra1 = SYSCTL_ZERO,
 		.extra2 = &ten_thousand,
 	},
 	{
···
 		.maxlen = sizeof(int),
 		.mode = 0644,
 		.proc_handler = proc_dointvec_minmax_sysadmin,
-		.extra1 = &zero,
-		.extra2 = &one,
+		.extra1 = SYSCTL_ZERO,
+		.extra2 = SYSCTL_ONE,
 	},
 	{
 		.procname = "kptr_restrict",
···
 		.maxlen = sizeof(int),
 		.mode = 0644,
 		.proc_handler = proc_dointvec_minmax_sysadmin,
-		.extra1 = &zero,
+		.extra1 = SYSCTL_ZERO,
 		.extra2 = &two,
 	},
 #endif
···
 		.maxlen = sizeof(int),
 		.mode = 0644,
 		.proc_handler = proc_watchdog,
-		.extra1 = &zero,
-		.extra2 = &one,
+		.extra1 = SYSCTL_ZERO,
+		.extra2 = SYSCTL_ONE,
 	},
 	{
 		.procname = "watchdog_thresh",
···
 		.maxlen = sizeof(int),
 		.mode = 0644,
 		.proc_handler = proc_watchdog_thresh,
-		.extra1 = &zero,
+		.extra1 = SYSCTL_ZERO,
 		.extra2 = &sixty,
 	},
 	{
···
 		.maxlen = sizeof(int),
 		.mode = NMI_WATCHDOG_SYSCTL_PERM,
 		.proc_handler = proc_nmi_watchdog,
-		.extra1 = &zero,
-		.extra2 = &one,
+		.extra1 = SYSCTL_ZERO,
+		.extra2 = SYSCTL_ONE,
 	},
 	{
 		.procname = "watchdog_cpumask",
···
 		.maxlen = sizeof(int),
 		.mode = 0644,
 		.proc_handler = proc_soft_watchdog,
-		.extra1 = &zero,
-		.extra2 = &one,
+		.extra1 = SYSCTL_ZERO,
+		.extra2 = SYSCTL_ONE,
 	},
 	{
 		.procname = "softlockup_panic",
···
 		.maxlen = sizeof(int),
 		.mode = 0644,
 		.proc_handler = proc_dointvec_minmax,
-		.extra1 = &zero,
-		.extra2 = &one,
+		.extra1 = SYSCTL_ZERO,
+		.extra2 = SYSCTL_ONE,
 	},
 #ifdef CONFIG_SMP
 	{
···
 		.maxlen = sizeof(int),
 		.mode = 0644,
 		.proc_handler = proc_dointvec_minmax,
-		.extra1 = &zero,
-		.extra2 = &one,
+		.extra1 = SYSCTL_ZERO,
+		.extra2 = SYSCTL_ONE,
 	},
 #endif /* CONFIG_SMP */
 #endif
···
 		.maxlen = sizeof(int),
 		.mode = 0644,
 		.proc_handler = proc_dointvec_minmax,
-		.extra1 = &zero,
-		.extra2 = &one,
+		.extra1 = SYSCTL_ZERO,
+		.extra2 = SYSCTL_ONE,
 	},
 #ifdef CONFIG_SMP
 	{
···
 		.maxlen = sizeof(int),
 		.mode = 0644,
 		.proc_handler = proc_dointvec_minmax,
-		.extra1 = &zero,
-		.extra2 = &one,
+		.extra1 = SYSCTL_ZERO,
+		.extra2 = SYSCTL_ONE,
 	},
 #endif /* CONFIG_SMP */
 #endif
···
 		.maxlen = sizeof(int),
 		.mode = 0644,
 		.proc_handler = proc_dointvec_minmax,
-		.extra1 = &zero,
-		.extra2 = &one,
+		.extra1 = SYSCTL_ZERO,
+		.extra2 = SYSCTL_ONE,
 	},
 	{
 		.procname = "hung_task_check_count",
···
 		.maxlen = sizeof(int),
 		.mode = 0644,
 		.proc_handler = proc_dointvec_minmax,
-		.extra1 = &zero,
+		.extra1 = SYSCTL_ZERO,
 	},
 	{
 		.procname = "hung_task_timeout_secs",
···
 		.maxlen = sizeof(sysctl_perf_event_sample_rate),
 		.mode = 0644,
 		.proc_handler = perf_proc_update_handler,
-		.extra1 = &one,
+		.extra1 = SYSCTL_ONE,
 	},
 	{
 		.procname = "perf_cpu_time_max_percent",
···
 		.maxlen = sizeof(sysctl_perf_cpu_time_max_percent),
 		.mode = 0644,
 		.proc_handler = perf_cpu_time_max_percent_handler,
-		.extra1 = &zero,
+		.extra1 = SYSCTL_ZERO,
 		.extra2 = &one_hundred,
 	},
 	{
···
 		.maxlen = sizeof(sysctl_perf_event_max_stack),
 		.mode = 0644,
 		.proc_handler = perf_event_max_stack_handler,
-		.extra1 = &zero,
+		.extra1 = SYSCTL_ZERO,
 		.extra2 = &six_hundred_forty_kb,
 	},
 	{
···
 		.maxlen = sizeof(sysctl_perf_event_max_contexts_per_stack),
 		.mode = 0644,
 		.proc_handler = perf_event_max_stack_handler,
-		.extra1 = &zero,
+		.extra1 = SYSCTL_ZERO,
 		.extra2 = &one_thousand,
 	},
 #endif
···
 		.maxlen = sizeof(int),
 		.mode = 0644,
 		.proc_handler = proc_dointvec_minmax,
-		.extra1 = &zero,
-		.extra2 = &one,
+		.extra1 = SYSCTL_ZERO,
+		.extra2 = SYSCTL_ONE,
 	},
 #if defined(CONFIG_SMP) && defined(CONFIG_NO_HZ_COMMON)
 	{
···
 		.maxlen = sizeof(unsigned int),
 		.mode = 0644,
 		.proc_handler = timer_migration_handler,
-		.extra1 = &zero,
-		.extra2 = &one,
+		.extra1 = SYSCTL_ZERO,
+		.extra2 = SYSCTL_ONE,
 	},
 #endif
 #ifdef CONFIG_BPF_SYSCALL
···
 		.mode = 0644,
 		/* only handle a transition from default "0" to "1" */
 		.proc_handler = proc_dointvec_minmax,
-		.extra1 = &one,
-		.extra2 = &one,
+		.extra1 = SYSCTL_ONE,
+		.extra2 = SYSCTL_ONE,
 	},
 	{
 		.procname = "bpf_stats_enabled",
···
 		.maxlen = sizeof(sysctl_panic_on_rcu_stall),
 		.mode = 0644,
 		.proc_handler = proc_dointvec_minmax,
-		.extra1 = &zero,
-		.extra2 = &one,
+		.extra1 = SYSCTL_ZERO,
+		.extra2 = SYSCTL_ONE,
 	},
 #endif
 #ifdef CONFIG_STACKLEAK_RUNTIME_DISABLE
···
 		.maxlen = sizeof(int),
 		.mode = 0600,
 		.proc_handler = stack_erasing_sysctl,
-		.extra1 = &zero,
-		.extra2 = &one,
+		.extra1 = SYSCTL_ZERO,
+		.extra2 = SYSCTL_ONE,
 	},
 #endif
 	{ }
···
 		.maxlen = sizeof(sysctl_overcommit_memory),
 		.mode = 0644,
 		.proc_handler = proc_dointvec_minmax,
-		.extra1 = &zero,
+		.extra1 = SYSCTL_ZERO,
 		.extra2 = &two,
 	},
 	{
···
 		.maxlen = sizeof(sysctl_panic_on_oom),
 		.mode = 0644,
 		.proc_handler = proc_dointvec_minmax,
-		.extra1 = &zero,
+		.extra1 = SYSCTL_ZERO,
 		.extra2 = &two,
 	},
 	{
···
 		.maxlen = sizeof(int),
 		.mode = 0644,
 		.proc_handler = proc_dointvec_minmax,
-		.extra1 = &zero,
+		.extra1 = SYSCTL_ZERO,
 	},
 	{
 		.procname = "dirty_background_ratio",
···
 		.maxlen = sizeof(dirty_background_ratio),
 		.mode = 0644,
 		.proc_handler = dirty_background_ratio_handler,
-		.extra1 = &zero,
+		.extra1 = SYSCTL_ZERO,
 		.extra2 = &one_hundred,
 	},
 	{
···
 		.maxlen = sizeof(vm_dirty_ratio),
 		.mode = 0644,
 		.proc_handler = dirty_ratio_handler,
-		.extra1 = &zero,
+		.extra1 = SYSCTL_ZERO,
 		.extra2 = &one_hundred,
 	},
 	{
···
 		.maxlen = sizeof(dirty_expire_interval),
 		.mode = 0644,
 		.proc_handler = proc_dointvec_minmax,
-		.extra1 = &zero,
+		.extra1 = SYSCTL_ZERO,
 	},
 	{
 		.procname = "dirtytime_expire_seconds",
···
 		.maxlen = sizeof(dirtytime_expire_interval),
 		.mode = 0644,
 		.proc_handler = dirtytime_interval_handler,
-		.extra1 = &zero,
+		.extra1 = SYSCTL_ZERO,
 	},
 	{
 		.procname = "swappiness",
···
 		.maxlen = sizeof(vm_swappiness),
 		.mode = 0644,
 		.proc_handler = proc_dointvec_minmax,
-		.extra1 = &zero,
+		.extra1 = SYSCTL_ZERO,
 		.extra2 = &one_hundred,
 	},
 #ifdef CONFIG_HUGETLB_PAGE
···
 		.maxlen = sizeof(int),
 		.mode = 0644,
 		.proc_handler = sysctl_vm_numa_stat_handler,
-		.extra1 = &zero,
-		.extra2 = &one,
+		.extra1 = SYSCTL_ZERO,
+		.extra2 = SYSCTL_ONE,
 	},
 #endif
 	{
···
 		.maxlen = sizeof(int),
 		.mode = 0644,
 		.proc_handler = drop_caches_sysctl_handler,
-		.extra1 = &one,
+		.extra1 = SYSCTL_ONE,
 		.extra2 = &four,
 	},
 #ifdef CONFIG_COMPACTION
···
 		.maxlen = sizeof(int),
 		.mode = 0644,
 		.proc_handler = proc_dointvec,
-		.extra1 = &zero,
-		.extra2 = &one,
+		.extra1 = SYSCTL_ZERO,
+		.extra2 = SYSCTL_ONE,
 	},

 #endif /* CONFIG_COMPACTION */
···
 		.maxlen = sizeof(min_free_kbytes),
 		.mode = 0644,
 		.proc_handler = min_free_kbytes_sysctl_handler,
-		.extra1 = &zero,
+		.extra1 = SYSCTL_ZERO,
 	},
 	{
 		.procname = "watermark_boost_factor",
···
 		.maxlen = sizeof(watermark_boost_factor),
 		.mode = 0644,
 		.proc_handler = watermark_boost_factor_sysctl_handler,
-		.extra1 = &zero,
+		.extra1 = SYSCTL_ZERO,
 	},
 	{
 		.procname = "watermark_scale_factor",
···
 		.maxlen = sizeof(watermark_scale_factor),
 		.mode = 0644,
 		.proc_handler = watermark_scale_factor_sysctl_handler,
-		.extra1 = &one,
+		.extra1 = SYSCTL_ONE,
 		.extra2 = &one_thousand,
 	},
 	{
···
 		.maxlen = sizeof(percpu_pagelist_fraction),
 		.mode = 0644,
 		.proc_handler = percpu_pagelist_fraction_sysctl_handler,
-		.extra1 = &zero,
+		.extra1 = SYSCTL_ZERO,
 	},
 #ifdef CONFIG_MMU
 	{
···
 		.maxlen = sizeof(sysctl_max_map_count),
 		.mode = 0644,
 		.proc_handler = proc_dointvec_minmax,
-		.extra1 = &zero,
+		.extra1 = SYSCTL_ZERO,
 	},
 #else
 	{
···
 		.maxlen = sizeof(sysctl_nr_trim_pages),
 		.mode = 0644,
 		.proc_handler = proc_dointvec_minmax,
-		.extra1 = &zero,
+		.extra1 = SYSCTL_ZERO,
 	},
 #endif
 	{
···
 		.maxlen = sizeof(block_dump),
 		.mode = 0644,
 		.proc_handler = proc_dointvec,
-		.extra1 = &zero,
+		.extra1 = SYSCTL_ZERO,
 	},
 	{
 		.procname = "vfs_cache_pressure",
···
 		.maxlen = sizeof(sysctl_vfs_cache_pressure),
 		.mode = 0644,
 		.proc_handler = proc_dointvec,
-		.extra1 = &zero,
+		.extra1 = SYSCTL_ZERO,
 	},
 #ifdef HAVE_ARCH_PICK_MMAP_LAYOUT
 	{
···
 		.maxlen = sizeof(sysctl_legacy_va_layout),
 		.mode = 0644,
 		.proc_handler = proc_dointvec,
-		.extra1 = &zero,
+		.extra1 = SYSCTL_ZERO,
 	},
 #endif
 #ifdef CONFIG_NUMA
···
 		.maxlen = sizeof(node_reclaim_mode),
 		.mode = 0644,
 		.proc_handler = proc_dointvec,
-		.extra1 = &zero,
+		.extra1 = SYSCTL_ZERO,
 	},
 	{
 		.procname = "min_unmapped_ratio",
···
 		.maxlen = sizeof(sysctl_min_unmapped_ratio),
 		.mode = 0644,
 		.proc_handler = sysctl_min_unmapped_ratio_sysctl_handler,
-		.extra1 = &zero,
+		.extra1 = SYSCTL_ZERO,
 		.extra2 = &one_hundred,
 	},
 	{
···
 		.maxlen = sizeof(sysctl_min_slab_ratio),
 		.mode = 0644,
 		.proc_handler = sysctl_min_slab_ratio_sysctl_handler,
-		.extra1 = &zero,
+		.extra1 = SYSCTL_ZERO,
 		.extra2 = &one_hundred,
 	},
 #endif
···
 #endif
 		.mode = 0644,
 		.proc_handler = proc_dointvec,
-		.extra1 = &zero,
+		.extra1 = SYSCTL_ZERO,
 	},
 #endif
 #ifdef CONFIG_HIGHMEM
···
 		.maxlen = sizeof(vm_highmem_is_dirtyable),
 		.mode = 0644,
 		.proc_handler = proc_dointvec_minmax,
-		.extra1 = &zero,
-		.extra2 = &one,
+		.extra1 = SYSCTL_ZERO,
+		.extra2 = SYSCTL_ONE,
 	},
 #endif
 #ifdef CONFIG_MEMORY_FAILURE
···
 		.maxlen = sizeof(sysctl_memory_failure_early_kill),
 		.mode = 0644,
 		.proc_handler = proc_dointvec_minmax,
-		.extra1 = &zero,
-		.extra2 = &one,
+		.extra1 = SYSCTL_ZERO,
+		.extra2 = SYSCTL_ONE,
 	},
 	{
 		.procname = "memory_failure_recovery",
···
 		.maxlen = sizeof(sysctl_memory_failure_recovery),
 		.mode = 0644,
 		.proc_handler = proc_dointvec_minmax,
-		.extra1 = &zero,
-		.extra2 = &one,
+		.extra1 = SYSCTL_ZERO,
+		.extra2 = SYSCTL_ONE,
 	},
 #endif
 	{
···
 		.maxlen = sizeof(sysctl_unprivileged_userfaultfd),
 		.mode = 0644,
 		.proc_handler = proc_dointvec_minmax,
-		.extra1 = &zero,
-		.extra2 = &one,
+		.extra1 = SYSCTL_ZERO,
+		.extra2 = SYSCTL_ONE,
 	},
 #endif
 	{ }
···
 		.maxlen = sizeof(int),
 		.mode = 0600,
 		.proc_handler = proc_dointvec_minmax,
-		.extra1 = &zero,
-		.extra2 = &one,
+		.extra1 = SYSCTL_ZERO,
+		.extra2 = SYSCTL_ONE,
 	},
 	{
 		.procname = "protected_hardlinks",
···
 		.maxlen = sizeof(int),
 		.mode = 0600,
 		.proc_handler = proc_dointvec_minmax,
-		.extra1 = &zero,
-		.extra2 = &one,
+		.extra1 = SYSCTL_ZERO,
+		.extra2 = SYSCTL_ONE,
 	},
 	{
 		.procname = "protected_fifos",
···
 		.maxlen = sizeof(int),
 		.mode = 0600,
 		.proc_handler = proc_dointvec_minmax,
-		.extra1 = &zero,
+		.extra1 = SYSCTL_ZERO,
 		.extra2 = &two,
 	},
 	{
···
 		.maxlen = sizeof(int),
 		.mode = 0600,
 		.proc_handler = proc_dointvec_minmax,
-		.extra1 = &zero,
+		.extra1 = SYSCTL_ZERO,
 		.extra2 = &two,
 	},
 	{
···
 		.maxlen = sizeof(int),
 		.mode = 0644,
 		.proc_handler = proc_dointvec_minmax_coredump,
-		.extra1 = &zero,
+		.extra1 = SYSCTL_ZERO,
 		.extra2 = &two,
 	},
 #if defined(CONFIG_BINFMT_MISC) || defined(CONFIG_BINFMT_MISC_MODULE)
···
 		.maxlen = sizeof(unsigned int),
 		.mode = 0644,
 		.proc_handler = proc_dointvec_minmax,
-		.extra1 = &one,
+		.extra1 = SYSCTL_ONE,
 	},
 	{ }
 };
···
 		.maxlen = sizeof(int),
 		.mode = 0644,
 		.proc_handler = proc_kprobes_optimization_handler,
-		.extra1 = &zero,
-		.extra2 = &one,
+		.extra1 = SYSCTL_ZERO,
+		.extra2 = SYSCTL_ONE,
 	},
 #endif
 	{ }
···
 		.data = &val,
 		.maxlen = sizeof(val),
 		.mode = table->mode,
-		.extra1 = &zero,
-		.extra2 = &one,
+		.extra1 = SYSCTL_ZERO,
+		.extra2 = SYSCTL_ONE,
 	};

 	if (write && !capable(CAP_SYS_ADMIN))
kernel/ucount.c | +2 -4
···
 	.permissions = set_permissions,
 };

-static int zero = 0;
-static int int_max = INT_MAX;
 #define UCOUNT_ENTRY(name)				\
 	{						\
 		.procname = name,			\
 		.maxlen = sizeof(int),			\
 		.mode = 0644,				\
 		.proc_handler = proc_dointvec_minmax,	\
-		.extra1 = &zero,			\
-		.extra2 = &int_max,			\
+		.extra1 = SYSCTL_ZERO,			\
+		.extra2 = SYSCTL_INT_MAX,		\
 	}
 static struct ctl_table user_table[] = {
 	UCOUNT_ENTRY("max_user_namespaces"),
mm/huge_memory.c | +8 -3
···
 bool transparent_hugepage_enabled(struct vm_area_struct *vma)
 {
+	/* The addr is used to check if the vma size fits */
+	unsigned long addr = (vma->vm_end & HPAGE_PMD_MASK) - HPAGE_PMD_SIZE;
+
+	if (!transhuge_vma_suitable(vma, addr))
+		return false;
 	if (vma_is_anonymous(vma))
 		return __transparent_hugepage_enabled(vma);
-	if (vma_is_shmem(vma) && shmem_huge_enabled(vma))
-		return __transparent_hugepage_enabled(vma);
+	if (vma_is_shmem(vma))
+		return shmem_huge_enabled(vma);

 	return false;
 }
···
 	struct page *page;
 	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;

-	if (haddr < vma->vm_start || haddr + HPAGE_PMD_SIZE > vma->vm_end)
+	if (!transhuge_vma_suitable(vma, haddr))
 		return VM_FAULT_FALLBACK;
 	if (unlikely(anon_vma_prepare(vma)))
 		return VM_FAULT_OOM;
mm/memory.c | -13
···
 }

 #ifdef CONFIG_TRANSPARENT_HUGE_PAGECACHE
-
-#define HPAGE_CACHE_INDEX_MASK (HPAGE_PMD_NR - 1)
-static inline bool transhuge_vma_suitable(struct vm_area_struct *vma,
-		unsigned long haddr)
-{
-	if (((vma->vm_start >> PAGE_SHIFT) & HPAGE_CACHE_INDEX_MASK) !=
-			(vma->vm_pgoff & HPAGE_CACHE_INDEX_MASK))
-		return false;
-	if (haddr < vma->vm_start || haddr + HPAGE_PMD_SIZE > vma->vm_end)
-		return false;
-	return true;
-}
-
 static void deposit_prealloc_pte(struct vm_fault *vmf)
 {
 	struct vm_area_struct *vma = vmf->vma;
mm/memory_hotplug.c | +104 -169
···
 #ifndef CONFIG_SPARSEMEM_VMEMMAP
 static void register_page_bootmem_info_section(unsigned long start_pfn)
 {
-	unsigned long *usemap, mapsize, section_nr, i;
+	unsigned long mapsize, section_nr, i;
 	struct mem_section *ms;
 	struct page *page, *memmap;
+	struct mem_section_usage *usage;

 	section_nr = pfn_to_section_nr(start_pfn);
 	ms = __nr_to_section(section_nr);
···
 	for (i = 0; i < mapsize; i++, page++)
 		get_page_bootmem(section_nr, page, SECTION_INFO);

-	usemap = ms->pageblock_flags;
-	page = virt_to_page(usemap);
+	usage = ms->usage;
+	page = virt_to_page(usage);

-	mapsize = PAGE_ALIGN(usemap_size()) >> PAGE_SHIFT;
+	mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;

 	for (i = 0; i < mapsize; i++, page++)
 		get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
···
 #else /* CONFIG_SPARSEMEM_VMEMMAP */
 static void register_page_bootmem_info_section(unsigned long start_pfn)
 {
-	unsigned long *usemap, mapsize, section_nr, i;
+	unsigned long mapsize, section_nr, i;
 	struct mem_section *ms;
 	struct page *page, *memmap;
+	struct mem_section_usage *usage;

 	section_nr = pfn_to_section_nr(start_pfn);
 	ms = __nr_to_section(section_nr);
···
 	register_page_bootmem_memmap(section_nr, memmap, PAGES_PER_SECTION);

-	usemap = ms->pageblock_flags;
-	page = virt_to_page(usemap);
+	usage = ms->usage;
+	page = virt_to_page(usage);

-	mapsize = PAGE_ALIGN(usemap_size()) >> PAGE_SHIFT;
+	mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;

 	for (i = 0; i < mapsize; i++, page++)
 		get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
···
 }
 #endif /* CONFIG_HAVE_BOOTMEM_INFO_NODE */

-static int __meminit __add_section(int nid, unsigned long phys_start_pfn,
-		struct vmem_altmap *altmap, bool want_memblock)
+static int check_pfn_span(unsigned long pfn, unsigned long nr_pages,
+		const char *reason)
 {
-	int ret;
+	/*
+	 * Disallow all operations smaller than a sub-section and only
+	 * allow operations smaller than a section for
+	 * SPARSEMEM_VMEMMAP. Note that check_hotplug_memory_range()
+	 * enforces a larger memory_block_size_bytes() granularity for
+	 * memory that will be marked online, so this check should only
+	 * fire for direct arch_{add,remove}_memory() users outside of
+	 * add_memory_resource().
+	 */
+	unsigned long min_align;

-	if (pfn_valid(phys_start_pfn))
-		return -EEXIST;
-
-	ret = sparse_add_one_section(nid, phys_start_pfn, altmap);
-	if (ret < 0)
-		return ret;
-
-	if (!want_memblock)
-		return 0;
-
-	return hotplug_memory_register(nid, __pfn_to_section(phys_start_pfn));
+	if (IS_ENABLED(CONFIG_SPARSEMEM_VMEMMAP))
+		min_align = PAGES_PER_SUBSECTION;
+	else
+		min_align = PAGES_PER_SECTION;
+	if (!IS_ALIGNED(pfn, min_align)
+			|| !IS_ALIGNED(nr_pages, min_align)) {
+		WARN(1, "Misaligned __%s_pages start: %#lx end: #%lx\n",
+				reason, pfn, pfn + nr_pages - 1);
+		return -EINVAL;
+	}
+	return 0;
 }

 /*
···
  * call this function after deciding the zone to which to
  * add the new pages.
  */
-int __ref __add_pages(int nid, unsigned long phys_start_pfn,
-		unsigned long nr_pages, struct mhp_restrictions *restrictions)
+int __ref __add_pages(int nid, unsigned long pfn, unsigned long nr_pages,
+		struct mhp_restrictions *restrictions)
 {
-	unsigned long i;
-	int err = 0;
-	int start_sec, end_sec;
+	int err;
+	unsigned long nr, start_sec, end_sec;
 	struct vmem_altmap *altmap = restrictions->altmap;
-
-	/* during initialize mem_map, align hot-added range to section */
-	start_sec = pfn_to_section_nr(phys_start_pfn);
-	end_sec = pfn_to_section_nr(phys_start_pfn + nr_pages - 1);

 	if (altmap) {
 		/*
 		 * Validate altmap is within bounds of the total request
 		 */
-		if (altmap->base_pfn != phys_start_pfn
+		if (altmap->base_pfn != pfn
 				|| vmem_altmap_offset(altmap) > nr_pages) {
 			pr_warn_once("memory add fail, invalid altmap\n");
-			err = -EINVAL;
-			goto out;
+			return -EINVAL;
 		}
 		altmap->alloc = 0;
 	}

-	for (i = start_sec; i <= end_sec; i++) {
-		err = __add_section(nid, section_nr_to_pfn(i), altmap,
-				restrictions->flags & MHP_MEMBLOCK_API);
+	err = check_pfn_span(pfn, nr_pages, "add");
+	if (err)
+		return err;

-		/*
-		 * EEXIST is finally dealt with by ioresource collision
-		 * check. see add_memory() => register_memory_resource()
-		 * Warning will be printed if there is collision.
-		 */
-		if (err && (err != -EEXIST))
+	start_sec = pfn_to_section_nr(pfn);
+	end_sec = pfn_to_section_nr(pfn + nr_pages - 1);
+	for (nr = start_sec; nr <= end_sec; nr++) {
+		unsigned long pfns;
+
+		pfns = min(nr_pages, PAGES_PER_SECTION
+				- (pfn & ~PAGE_SECTION_MASK));
+		err = sparse_add_section(nid, pfn, pfns, altmap);
+		if (err)
 			break;
-		err = 0;
+		pfn += pfns;
+		nr_pages -= pfns;
 		cond_resched();
 	}
 	vmemmap_populate_print_last();
-out:
 	return err;
 }

-#ifdef CONFIG_MEMORY_HOTREMOVE
 /* find the smallest valid pfn in the range [start_pfn, end_pfn) */
 static unsigned long find_smallest_section_pfn(int nid, struct zone *zone,
 		unsigned long start_pfn,
 		unsigned long end_pfn)
 {
-	struct mem_section *ms;
-
-	for (; start_pfn < end_pfn; start_pfn += PAGES_PER_SECTION) {
-		ms = __pfn_to_section(start_pfn);
-
-		if (unlikely(!valid_section(ms)))
+	for (; start_pfn < end_pfn; start_pfn += PAGES_PER_SUBSECTION) {
+		if (unlikely(!pfn_valid(start_pfn)))
 			continue;

 		if (unlikely(pfn_to_nid(start_pfn) != nid))
···
 		unsigned long start_pfn,
 		unsigned long end_pfn)
 {
-	struct mem_section *ms;
 	unsigned long pfn;

 	/* pfn is the end pfn of a memory section. */
 	pfn = end_pfn - 1;
-	for (; pfn >= start_pfn; pfn -= PAGES_PER_SECTION) {
-		ms = __pfn_to_section(pfn);
-
-		if (unlikely(!valid_section(ms)))
+	for (; pfn >= start_pfn; pfn -= PAGES_PER_SUBSECTION) {
+		if (unlikely(!pfn_valid(pfn)))
 			continue;

 		if (unlikely(pfn_to_nid(pfn) != nid))
···
 	unsigned long z = zone_end_pfn(zone); /* zone_end_pfn namespace clash */
 	unsigned long zone_end_pfn = z;
 	unsigned long pfn;
-	struct mem_section *ms;
 	int nid = zone_to_nid(zone);

 	zone_span_writelock(zone);
···
 	 * it check the zone has only hole or not.
 	 */
 	pfn = zone_start_pfn;
-	for (; pfn < zone_end_pfn; pfn += PAGES_PER_SECTION) {
-		ms = __pfn_to_section(pfn);
-
-		if (unlikely(!valid_section(ms)))
+	for (; pfn < zone_end_pfn; pfn += PAGES_PER_SUBSECTION) {
+		if (unlikely(!pfn_valid(pfn)))
 			continue;

 		if (page_zone(pfn_to_page(pfn)) != zone)
 			continue;

-		/* If the section is current section, it continues the loop */
-		if (start_pfn == pfn)
+		/* Skip range to be removed */
+		if (pfn >= start_pfn && pfn < end_pfn)
 			continue;

 		/* If we find valid section, we have nothing to do */
···
 	unsigned long p = pgdat_end_pfn(pgdat); /* pgdat_end_pfn namespace clash */
 	unsigned long pgdat_end_pfn = p;
 	unsigned long pfn;
-	struct mem_section *ms;
 	int nid = pgdat->node_id;

 	if (pgdat_start_pfn == start_pfn) {
···
 	 * has only hole or not.
 	 */
 	pfn = pgdat_start_pfn;
-	for (; pfn < pgdat_end_pfn; pfn += PAGES_PER_SECTION) {
-		ms = __pfn_to_section(pfn);
-
-		if (unlikely(!valid_section(ms)))
+	for (; pfn < pgdat_end_pfn; pfn += PAGES_PER_SUBSECTION) {
+		if (unlikely(!pfn_valid(pfn)))
 			continue;

 		if (pfn_to_nid(pfn) != nid)
 			continue;

-		/* If the section is current section, it continues the loop */
-		if (start_pfn == pfn)
+		/* Skip range to be removed */
+		if (pfn >= start_pfn && pfn < end_pfn)
 			continue;

 		/* If we find valid section, we have nothing to do */
···
 	pgdat->node_spanned_pages = 0;
 }

-static void __remove_zone(struct zone *zone, unsigned long start_pfn)
+static void __remove_zone(struct zone *zone, unsigned long start_pfn,
+		unsigned long nr_pages)
 {
 	struct pglist_data *pgdat = zone->zone_pgdat;
-	int nr_pages = PAGES_PER_SECTION;
 	unsigned long flags;

 	pgdat_resize_lock(zone->zone_pgdat, &flags);
···
 	pgdat_resize_unlock(zone->zone_pgdat, &flags);
 }

-static void __remove_section(struct zone *zone, struct mem_section *ms,
-		unsigned long map_offset,
-		struct vmem_altmap *altmap)
+static void __remove_section(struct zone *zone, unsigned long pfn,
+		unsigned long nr_pages, unsigned long map_offset,
+		struct vmem_altmap *altmap)
 {
-	unsigned long start_pfn;
-	int scn_nr;
+	struct mem_section *ms = __nr_to_section(pfn_to_section_nr(pfn));

 	if (WARN_ON_ONCE(!valid_section(ms)))
 		return;

-	unregister_memory_section(ms);
-
-	scn_nr = __section_nr(ms);
-	start_pfn = section_nr_to_pfn((unsigned long)scn_nr);
-	__remove_zone(zone, start_pfn);
-
-	sparse_remove_one_section(zone, ms, map_offset, altmap);
+	__remove_zone(zone, pfn, nr_pages);
+	sparse_remove_section(ms, pfn, nr_pages, map_offset, altmap);
 }

 /**
  * __remove_pages() - remove sections of pages from a zone
  * @zone: zone from which pages need to be removed
- * @phys_start_pfn: starting pageframe (must be aligned to start of a section)
+ * @pfn: starting pageframe (must be aligned to start of a section)
  * @nr_pages: number of pages to remove (must be multiple of section size)
  * @altmap: alternative device page map or %NULL if default memmap is used
  *
···
  * sure that pages are marked reserved and zones are adjust properly by
  * calling offline_pages().
  */
-void __remove_pages(struct zone *zone, unsigned long phys_start_pfn,
+void __remove_pages(struct zone *zone, unsigned long pfn,
 		unsigned long nr_pages, struct vmem_altmap *altmap)
 {
-	unsigned long i;
 	unsigned long map_offset = 0;
-	int sections_to_remove;
+	unsigned long nr, start_sec, end_sec;

-	/* In the ZONE_DEVICE case device driver owns the memory region */
-	if (is_dev_zone(zone))
-		map_offset = vmem_altmap_offset(altmap);
+	map_offset = vmem_altmap_offset(altmap);

 	clear_zone_contiguous(zone);

-	/*
-	 * We can only remove entire sections
-	 */
-	BUG_ON(phys_start_pfn & ~PAGE_SECTION_MASK);
-	BUG_ON(nr_pages % PAGES_PER_SECTION);
+	if (check_pfn_span(pfn, nr_pages, "remove"))
+		return;

-	sections_to_remove = nr_pages / PAGES_PER_SECTION;
-	for (i = 0; i < sections_to_remove; i++) {
-		unsigned long pfn = phys_start_pfn + i*PAGES_PER_SECTION;
+	start_sec = pfn_to_section_nr(pfn);
+	end_sec = pfn_to_section_nr(pfn + nr_pages - 1);
+	for (nr = start_sec; nr <= end_sec; nr++) {
+		unsigned long pfns;

 		cond_resched();
-		__remove_section(zone, __pfn_to_section(pfn), map_offset,
-				altmap);
+		pfns = min(nr_pages, PAGES_PER_SECTION
+				- (pfn &
~PAGE_SECTION_MASK)); 573 + __remove_section(zone, pfn, pfns, map_offset, altmap); 574 + pfn += pfns; 575 + nr_pages -= pfns; 566 576 map_offset = 0; 567 577 } 568 578 569 579 set_zone_contiguous(zone); 570 580 } 571 - #endif /* CONFIG_MEMORY_HOTREMOVE */ 572 581 573 582 int set_online_page_callback(online_page_callback_t callback) 574 583 { ··· 1034 1049 1035 1050 static int check_hotplug_memory_range(u64 start, u64 size) 1036 1051 { 1037 - unsigned long block_sz = memory_block_size_bytes(); 1038 - u64 block_nr_pages = block_sz >> PAGE_SHIFT; 1039 - u64 nr_pages = size >> PAGE_SHIFT; 1040 - u64 start_pfn = PFN_DOWN(start); 1041 - 1042 1052 /* memory range must be block size aligned */ 1043 - if (!nr_pages || !IS_ALIGNED(start_pfn, block_nr_pages) || 1044 - !IS_ALIGNED(nr_pages, block_nr_pages)) { 1053 + if (!size || !IS_ALIGNED(start, memory_block_size_bytes()) || 1054 + !IS_ALIGNED(size, memory_block_size_bytes())) { 1045 1055 pr_err("Block size [%#lx] unaligned hotplug range: start %#llx, size %#llx", 1046 - block_sz, start, size); 1056 + memory_block_size_bytes(), start, size); 1047 1057 return -EINVAL; 1048 1058 } 1049 1059 ··· 1058 1078 */ 1059 1079 int __ref add_memory_resource(int nid, struct resource *res) 1060 1080 { 1061 - struct mhp_restrictions restrictions = { 1062 - .flags = MHP_MEMBLOCK_API, 1063 - }; 1081 + struct mhp_restrictions restrictions = {}; 1064 1082 u64 start, size; 1065 1083 bool new_node = false; 1066 1084 int ret; ··· 1090 1112 if (ret < 0) 1091 1113 goto error; 1092 1114 1115 + /* create memory block devices after memory was added */ 1116 + ret = create_memory_block_devices(start, size); 1117 + if (ret) { 1118 + arch_remove_memory(nid, start, size, NULL); 1119 + goto error; 1120 + } 1121 + 1093 1122 if (new_node) { 1094 1123 /* If sysfs file of new node can't be created, cpu on the node 1095 1124 * can't be hot-added. There is no rollback way now. 
··· 1120 1135 1121 1136 /* online pages if requested */ 1122 1137 if (memhp_auto_online) 1123 - walk_memory_range(PFN_DOWN(start), PFN_UP(start + size - 1), 1124 - NULL, online_memory_block); 1138 + walk_memory_blocks(start, size, NULL, online_memory_block); 1125 1139 1126 1140 return ret; 1127 1141 error: ··· 1655 1671 { 1656 1672 return __offline_pages(start_pfn, start_pfn + nr_pages); 1657 1673 } 1658 - #endif /* CONFIG_MEMORY_HOTREMOVE */ 1659 1674 1660 - /** 1661 - * walk_memory_range - walks through all mem sections in [start_pfn, end_pfn) 1662 - * @start_pfn: start pfn of the memory range 1663 - * @end_pfn: end pfn of the memory range 1664 - * @arg: argument passed to func 1665 - * @func: callback for each memory section walked 1666 - * 1667 - * This function walks through all present mem sections in range 1668 - * [start_pfn, end_pfn) and call func on each mem section. 1669 - * 1670 - * Returns the return value of func. 1671 - */ 1672 - int walk_memory_range(unsigned long start_pfn, unsigned long end_pfn, 1673 - void *arg, int (*func)(struct memory_block *, void *)) 1674 - { 1675 - struct memory_block *mem = NULL; 1676 - struct mem_section *section; 1677 - unsigned long pfn, section_nr; 1678 - int ret; 1679 - 1680 - for (pfn = start_pfn; pfn < end_pfn; pfn += PAGES_PER_SECTION) { 1681 - section_nr = pfn_to_section_nr(pfn); 1682 - if (!present_section_nr(section_nr)) 1683 - continue; 1684 - 1685 - section = __nr_to_section(section_nr); 1686 - /* same memblock? 
*/ 1687 - if (mem) 1688 - if ((section_nr >= mem->start_section_nr) && 1689 - (section_nr <= mem->end_section_nr)) 1690 - continue; 1691 - 1692 - mem = find_memory_block_hinted(section, mem); 1693 - if (!mem) 1694 - continue; 1695 - 1696 - ret = func(mem, arg); 1697 - if (ret) { 1698 - kobject_put(&mem->dev.kobj); 1699 - return ret; 1700 - } 1701 - } 1702 - 1703 - if (mem) 1704 - kobject_put(&mem->dev.kobj); 1705 - 1706 - return 0; 1707 - } 1708 - 1709 - #ifdef CONFIG_MEMORY_HOTREMOVE 1710 1675 static int check_memblock_offlined_cb(struct memory_block *mem, void *arg) 1711 1676 { 1712 1677 int ret = !is_memblock_offlined(mem); ··· 1766 1833 * whether all memory blocks in question are offline and return error 1767 1834 * if this is not the case. 1768 1835 */ 1769 - rc = walk_memory_range(PFN_DOWN(start), PFN_UP(start + size - 1), NULL, 1770 - check_memblock_offlined_cb); 1836 + rc = walk_memory_blocks(start, size, NULL, check_memblock_offlined_cb); 1771 1837 if (rc) 1772 1838 goto done; 1773 1839 ··· 1774 1842 firmware_map_remove(start, start + size, "System RAM"); 1775 1843 memblock_free(start, size); 1776 1844 memblock_remove(start, size); 1845 + 1846 + /* remove memory block devices before removing memory */ 1847 + remove_memory_block_devices(start, size); 1777 1848 1778 1849 arch_remove_memory(nid, start, size, NULL); 1779 1850 __release_memory_resource(start, size);
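Both __add_pages() and __remove_pages() now walk the requested pfn range one section at a time, clamping each step to however many of the remaining pages fall inside the current section. The chunking arithmetic can be isolated in a standalone sketch (the constants below mirror x86_64's 128M sections of 4K pages and are assumptions of this example, not the kernel headers):

```c
#include <assert.h>

/* Assumed geometry for the sketch: 32768 4K pages per 128M section. */
#define PAGES_PER_SECTION 32768UL
#define PAGE_SECTION_MASK (~(PAGES_PER_SECTION - 1))

/*
 * One step of the walk: how many pages of [pfn, pfn + nr_pages)
 * fall into pfn's section? (pfn & ~PAGE_SECTION_MASK) is the offset
 * of pfn within its section, so the remainder of the section is
 * PAGES_PER_SECTION minus that offset.
 */
static unsigned long section_chunk(unsigned long pfn, unsigned long nr_pages)
{
    unsigned long room = PAGES_PER_SECTION - (pfn & ~PAGE_SECTION_MASK);

    return nr_pages < room ? nr_pages : room; /* min(nr_pages, room) */
}
```

The caller then advances `pfn += pfns; nr_pages -= pfns;` per iteration, which is why an unaligned sub-section range contributes a short first and last chunk with full sections in between.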
+3 -4
mm/migrate.c
···
  * 3 for pages with a mapping and PagePrivate/PagePrivate2 set.
  */
 int migrate_page_move_mapping(struct address_space *mapping,
-        struct page *newpage, struct page *page, enum migrate_mode mode,
-        int extra_count)
+        struct page *newpage, struct page *page, int extra_count)
 {
     XA_STATE(xas, &mapping->i_pages, page_index(page));
     struct zone *oldzone, *newzone;
···

     BUG_ON(PageWriteback(page));    /* Writeback must be complete */

-    rc = migrate_page_move_mapping(mapping, newpage, page, mode, 0);
+    rc = migrate_page_move_mapping(mapping, newpage, page, 0);

     if (rc != MIGRATEPAGE_SUCCESS)
         return rc;
···
         }
     }

-    rc = migrate_page_move_mapping(mapping, newpage, page, mode, 0);
+    rc = migrate_page_move_mapping(mapping, newpage, page, 0);
     if (rc != MIGRATEPAGE_SUCCESS)
         goto unlock_buffers;
+11 -5
mm/page_alloc.c
···
                     unsigned long pfn)
 {
 #ifdef CONFIG_SPARSEMEM
-    return __pfn_to_section(pfn)->pageblock_flags;
+    return section_to_usemap(__pfn_to_section(pfn));
 #else
     return page_zone(page)->pageblock_flags;
 #endif /* CONFIG_SPARSEMEM */
···
     unsigned long start = jiffies;
     int nid = pgdat->node_id;

-    if (WARN_ON_ONCE(!pgmap || !is_dev_zone(zone)))
+    if (WARN_ON_ONCE(!pgmap || zone_idx(zone) != ZONE_DEVICE))
         return;

     /*
···
      * pfn out of zone.
      *
      * Please note that MEMMAP_HOTPLUG path doesn't clear memmap
-     * because this is done early in sparse_add_one_section
+     * because this is done early in section_activate()
      */
     if (!(pfn & (pageblock_nr_pages - 1))) {
         set_pageblock_migratetype(page, MIGRATE_MOVABLE);
···
             (u64)zone_movable_pfn[i] << PAGE_SHIFT);
     }

-    /* Print out the early node map */
+    /*
+     * Print out the early node map, and initialize the
+     * subsection-map relative to active online memory ranges to
+     * enable future "sub-section" extensions of the memory map.
+     */
     pr_info("Early memory node ranges\n");
-    for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, &nid)
+    for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, &nid) {
         pr_info("  node %3d: [mem %#018Lx-%#018Lx]\n", nid,
             (u64)start_pfn << PAGE_SHIFT,
             ((u64)end_pfn << PAGE_SHIFT) - 1);
+        subsection_map_init(start_pfn, end_pfn - start_pfn);
+    }

     /* Initialise every node */
     mminit_verify_pageflags_layout();
+3
mm/shmem.c
···
     loff_t i_size;
     pgoff_t off;

+    if ((vma->vm_flags & VM_NOHUGEPAGE) ||
+        test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags))
+        return false;
     if (shmem_huge == SHMEM_HUGE_FORCE)
         return true;
     if (shmem_huge == SHMEM_HUGE_DENY)
+14 -7
mm/sparse-vmemmap.c
···
     return 0;
 }

-struct page * __meminit sparse_mem_map_populate(unsigned long pnum, int nid,
-        struct vmem_altmap *altmap)
+struct page * __meminit __populate_section_memmap(unsigned long pfn,
+        unsigned long nr_pages, int nid, struct vmem_altmap *altmap)
 {
     unsigned long start;
     unsigned long end;
-    struct page *map;

-    map = pfn_to_page(pnum * PAGES_PER_SECTION);
-    start = (unsigned long)map;
-    end = (unsigned long)(map + PAGES_PER_SECTION);
+    /*
+     * The minimum granularity of memmap extensions is
+     * PAGES_PER_SUBSECTION as allocations are tracked in the
+     * 'subsection_map' bitmap of the section.
+     */
+    end = ALIGN(pfn + nr_pages, PAGES_PER_SUBSECTION);
+    pfn &= PAGE_SUBSECTION_MASK;
+    nr_pages = end - pfn;
+
+    start = (unsigned long) pfn_to_page(pfn);
+    end = start + nr_pages * sizeof(struct page);

     if (vmemmap_populate(start, end, nid, altmap))
         return NULL;

-    return map;
+    return pfn_to_page(pfn);
 }
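The rounding at the top of the new __populate_section_memmap() expands any request to whole subsections before populating the vmemmap. A standalone sketch of that rounding (the 512-pages-per-2M-subsection constant matches x86_64 but is an assumption of this example, not the kernel headers):

```c
#include <assert.h>

/* Assumed geometry for the sketch: 512 4K pages per 2M subsection. */
#define PAGES_PER_SUBSECTION 512UL
#define PAGE_SUBSECTION_MASK (~(PAGES_PER_SUBSECTION - 1))
#define ALIGN_UP(x, a)       (((x) + (a) - 1) & ~((a) - 1))

/*
 * Mimics the pfn/nr_pages adjustment in __populate_section_memmap():
 * round the end up and the start down to subsection boundaries, so
 * the populated memmap range always covers whole subsections.
 */
static void subsection_span(unsigned long *pfn, unsigned long *nr_pages)
{
    unsigned long end = ALIGN_UP(*pfn + *nr_pages, PAGES_PER_SUBSECTION);

    *pfn &= PAGE_SUBSECTION_MASK;
    *nr_pages = end - *pfn;
}
```

This is why the subsection bitmap, not the populate call itself, is what tracks which 2M chunks of a section are actually in use.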
+224 -131
mm/sparse.c
···
     unsigned long root = SECTION_NR_TO_ROOT(section_nr);
     struct mem_section *section;

+    /*
+     * An existing section is possible in the sub-section hotplug
+     * case. First hot-add instantiates, follow-on hot-add reuses
+     * the existing section.
+     *
+     * The mem_hotplug_lock resolves the apparent race below.
+     */
     if (mem_section[root])
-        return -EEXIST;
+        return 0;

     section = sparse_index_alloc(nid);
     if (!section)
···
 #endif

 #ifdef CONFIG_SPARSEMEM_EXTREME
-int __section_nr(struct mem_section* ms)
+unsigned long __section_nr(struct mem_section *ms)
 {
     unsigned long root_nr;
     struct mem_section *root = NULL;
···
     return (root_nr * SECTIONS_PER_ROOT) + (ms - root);
 }
 #else
-int __section_nr(struct mem_section* ms)
+unsigned long __section_nr(struct mem_section *ms)
 {
-    return (int)(ms - mem_section[0]);
+    return (unsigned long)(ms - mem_section[0]);
 }
 #endif
···
  * Keeping track of this gives us an easy way to break out of
  * those loops early.
  */
-int __highest_present_section_nr;
+unsigned long __highest_present_section_nr;
 static void section_mark_present(struct mem_section *ms)
 {
-    int section_nr = __section_nr(ms);
+    unsigned long section_nr = __section_nr(ms);

     if (section_nr > __highest_present_section_nr)
         __highest_present_section_nr = section_nr;
···
     ms->section_mem_map |= SECTION_MARKED_PRESENT;
 }

-static inline int next_present_section_nr(int section_nr)
+static inline unsigned long next_present_section_nr(unsigned long section_nr)
 {
     do {
         section_nr++;
···
 static inline unsigned long first_present_section_nr(void)
 {
     return next_present_section_nr(-1);
+}
+
+void subsection_mask_set(unsigned long *map, unsigned long pfn,
+        unsigned long nr_pages)
+{
+    int idx = subsection_map_index(pfn);
+    int end = subsection_map_index(pfn + nr_pages - 1);
+
+    bitmap_set(map, idx, end - idx + 1);
+}
+
+void __init subsection_map_init(unsigned long pfn, unsigned long nr_pages)
+{
+    int end_sec = pfn_to_section_nr(pfn + nr_pages - 1);
+    unsigned long nr, start_sec = pfn_to_section_nr(pfn);
+
+    if (!nr_pages)
+        return;
+
+    for (nr = start_sec; nr <= end_sec; nr++) {
+        struct mem_section *ms;
+        unsigned long pfns;
+
+        pfns = min(nr_pages, PAGES_PER_SECTION
+                - (pfn & ~PAGE_SECTION_MASK));
+        ms = __nr_to_section(nr);
+        subsection_mask_set(ms->usage->subsection_map, pfn, pfns);
+
+        pr_debug("%s: sec: %lu pfns: %lu set(%d, %d)\n", __func__, nr,
+                pfns, subsection_map_index(pfn),
+                subsection_map_index(pfn + pfns - 1));
+
+        pfn += pfns;
+        nr_pages -= pfns;
+    }
 }

 /* Record a memory area against a node. */
···
 static void __meminit sparse_init_one_section(struct mem_section *ms,
         unsigned long pnum, struct page *mem_map,
-        unsigned long *pageblock_bitmap)
+        struct mem_section_usage *usage, unsigned long flags)
 {
     ms->section_mem_map &= ~SECTION_MAP_MASK;
-    ms->section_mem_map |= sparse_encode_mem_map(mem_map, pnum) |
-                            SECTION_HAS_MEM_MAP;
-    ms->pageblock_flags = pageblock_bitmap;
+    ms->section_mem_map |= sparse_encode_mem_map(mem_map, pnum)
+        | SECTION_HAS_MEM_MAP | flags;
+    ms->usage = usage;
 }

-unsigned long usemap_size(void)
+static unsigned long usemap_size(void)
 {
     return BITS_TO_LONGS(SECTION_BLOCKFLAGS_BITS) * sizeof(unsigned long);
 }

-#ifdef CONFIG_MEMORY_HOTPLUG
-static unsigned long *__kmalloc_section_usemap(void)
+size_t mem_section_usage_size(void)
 {
-    return kmalloc(usemap_size(), GFP_KERNEL);
+    return sizeof(struct mem_section_usage) + usemap_size();
 }
-#endif /* CONFIG_MEMORY_HOTPLUG */

 #ifdef CONFIG_MEMORY_HOTREMOVE
-static unsigned long * __init
+static struct mem_section_usage * __init
 sparse_early_usemaps_alloc_pgdat_section(struct pglist_data *pgdat,
                      unsigned long size)
 {
+    struct mem_section_usage *usage;
     unsigned long goal, limit;
-    unsigned long *p;
     int nid;
     /*
      * A page may contain usemaps for other sections preventing the
···
     limit = goal + (1UL << PA_SECTION_SHIFT);
     nid = early_pfn_to_nid(goal >> PAGE_SHIFT);
 again:
-    p = memblock_alloc_try_nid(size, SMP_CACHE_BYTES, goal, limit, nid);
-    if (!p && limit) {
+    usage = memblock_alloc_try_nid(size, SMP_CACHE_BYTES, goal, limit, nid);
+    if (!usage && limit) {
         limit = 0;
         goto again;
     }
-    return p;
+    return usage;
 }

-static void __init check_usemap_section_nr(int nid, unsigned long *usemap)
+static void __init check_usemap_section_nr(int nid,
+        struct mem_section_usage *usage)
 {
     unsigned long usemap_snr, pgdat_snr;
     static unsigned long old_usemap_snr;
···
         old_pgdat_snr = NR_MEM_SECTIONS;
     }

-    usemap_snr = pfn_to_section_nr(__pa(usemap) >> PAGE_SHIFT);
+    usemap_snr = pfn_to_section_nr(__pa(usage) >> PAGE_SHIFT);
     pgdat_snr = pfn_to_section_nr(__pa(pgdat) >> PAGE_SHIFT);
     if (usemap_snr == pgdat_snr)
         return;
···
     usemap_snr, pgdat_snr, nid);
 }
 #else
-static unsigned long * __init
+static struct mem_section_usage * __init
 sparse_early_usemaps_alloc_pgdat_section(struct pglist_data *pgdat,
                      unsigned long size)
 {
     return memblock_alloc_node(size, SMP_CACHE_BYTES, pgdat->node_id);
 }

-static void __init check_usemap_section_nr(int nid, unsigned long *usemap)
+static void __init check_usemap_section_nr(int nid,
+        struct mem_section_usage *usage)
 {
 }
 #endif /* CONFIG_MEMORY_HOTREMOVE */
···
     return PAGE_ALIGN(sizeof(struct page) * PAGES_PER_SECTION);
 }

-struct page __init *sparse_mem_map_populate(unsigned long pnum, int nid,
-        struct vmem_altmap *altmap)
+struct page __init *__populate_section_memmap(unsigned long pfn,
+        unsigned long nr_pages, int nid, struct vmem_altmap *altmap)
 {
     unsigned long size = section_map_size();
     struct page *map = sparse_buffer_alloc(size);
···
         unsigned long pnum_end,
         unsigned long map_count)
 {
-    unsigned long pnum, usemap_longs, *usemap;
+    struct mem_section_usage *usage;
+    unsigned long pnum;
     struct page *map;

-    usemap_longs = BITS_TO_LONGS(SECTION_BLOCKFLAGS_BITS);
-    usemap = sparse_early_usemaps_alloc_pgdat_section(NODE_DATA(nid),
-                              usemap_size() *
-                              map_count);
-    if (!usemap) {
+    usage = sparse_early_usemaps_alloc_pgdat_section(NODE_DATA(nid),
+            mem_section_usage_size() * map_count);
+    if (!usage) {
         pr_err("%s: node[%d] usemap allocation failed", __func__, nid);
         goto failed;
     }
     sparse_buffer_init(map_count * section_map_size(), nid);
     for_each_present_section_nr(pnum_begin, pnum) {
+        unsigned long pfn = section_nr_to_pfn(pnum);
+
         if (pnum >= pnum_end)
             break;

-        map = sparse_mem_map_populate(pnum, nid, NULL);
+        map = __populate_section_memmap(pfn, PAGES_PER_SECTION,
+                nid, NULL);
         if (!map) {
             pr_err("%s: node[%d] memory map backing failed. Some memory will not be available.",
                    __func__, nid);
             pnum_begin = pnum;
             goto failed;
         }
-        check_usemap_section_nr(nid, usemap);
-        sparse_init_one_section(__nr_to_section(pnum), pnum, map, usemap);
-        usemap += usemap_longs;
+        check_usemap_section_nr(nid, usage);
+        sparse_init_one_section(__nr_to_section(pnum), pnum, map, usage,
+                SECTION_IS_EARLY);
+        usage = (void *) usage + mem_section_usage_size();
     }
     sparse_buffer_fini();
     return;
···
 #endif

 #ifdef CONFIG_SPARSEMEM_VMEMMAP
-static inline struct page *kmalloc_section_memmap(unsigned long pnum, int nid,
-        struct vmem_altmap *altmap)
+static struct page *populate_section_memmap(unsigned long pfn,
+        unsigned long nr_pages, int nid, struct vmem_altmap *altmap)
 {
-    /* This will make the necessary allocations eventually. */
-    return sparse_mem_map_populate(pnum, nid, altmap);
+    return __populate_section_memmap(pfn, nr_pages, nid, altmap);
 }
-static void __kfree_section_memmap(struct page *memmap,
+
+static void depopulate_section_memmap(unsigned long pfn, unsigned long nr_pages,
         struct vmem_altmap *altmap)
 {
-    unsigned long start = (unsigned long)memmap;
-    unsigned long end = (unsigned long)(memmap + PAGES_PER_SECTION);
+    unsigned long start = (unsigned long) pfn_to_page(pfn);
+    unsigned long end = start + nr_pages * sizeof(struct page);

     vmemmap_free(start, end, altmap);
 }
-#ifdef CONFIG_MEMORY_HOTREMOVE
 static void free_map_bootmem(struct page *memmap)
 {
     unsigned long start = (unsigned long)memmap;
···

     vmemmap_free(start, end, NULL);
 }
-#endif /* CONFIG_MEMORY_HOTREMOVE */
 #else
-static struct page *__kmalloc_section_memmap(void)
+struct page *populate_section_memmap(unsigned long pfn,
+        unsigned long nr_pages, int nid, struct vmem_altmap *altmap)
 {
     struct page *page, *ret;
     unsigned long memmap_size = sizeof(struct page) * PAGES_PER_SECTION;
···
     return ret;
 }

-static inline struct page *kmalloc_section_memmap(unsigned long pnum, int nid,
+static void depopulate_section_memmap(unsigned long pfn, unsigned long nr_pages,
         struct vmem_altmap *altmap)
 {
-    return __kmalloc_section_memmap();
-}
+    struct page *memmap = pfn_to_page(pfn);

-static void __kfree_section_memmap(struct page *memmap,
-        struct vmem_altmap *altmap)
-{
     if (is_vmalloc_addr(memmap))
         vfree(memmap);
     else
···
                get_order(sizeof(struct page) * PAGES_PER_SECTION));
 }

-#ifdef CONFIG_MEMORY_HOTREMOVE
 static void free_map_bootmem(struct page *memmap)
 {
     unsigned long maps_section_nr, removing_section_nr, i;
···
         put_page_bootmem(page);
     }
 }
-#endif /* CONFIG_MEMORY_HOTREMOVE */
 #endif /* CONFIG_SPARSEMEM_VMEMMAP */

+static void section_deactivate(unsigned long pfn, unsigned long nr_pages,
+        struct vmem_altmap *altmap)
+{
+    DECLARE_BITMAP(map, SUBSECTIONS_PER_SECTION) = { 0 };
+    DECLARE_BITMAP(tmp, SUBSECTIONS_PER_SECTION) = { 0 };
+    struct mem_section *ms = __pfn_to_section(pfn);
+    bool section_is_early = early_section(ms);
+    struct page *memmap = NULL;
+    unsigned long *subsection_map = ms->usage
+        ? &ms->usage->subsection_map[0] : NULL;
+
+    subsection_mask_set(map, pfn, nr_pages);
+    if (subsection_map)
+        bitmap_and(tmp, map, subsection_map, SUBSECTIONS_PER_SECTION);
+
+    if (WARN(!subsection_map || !bitmap_equal(tmp, map, SUBSECTIONS_PER_SECTION),
+                "section already deactivated (%#lx + %ld)\n",
+                pfn, nr_pages))
+        return;
+
+    /*
+     * There are 3 cases to handle across two configurations
+     * (SPARSEMEM_VMEMMAP={y,n}):
+     *
+     * 1/ deactivation of a partial hot-added section (only possible
+     * in the SPARSEMEM_VMEMMAP=y case).
+     *    a/ section was present at memory init
+     *    b/ section was hot-added post memory init
+     * 2/ deactivation of a complete hot-added section
+     * 3/ deactivation of a complete section from memory init
+     *
+     * For 1/, when subsection_map does not empty we will not be
+     * freeing the usage map, but still need to free the vmemmap
+     * range.
+     *
+     * For 2/ and 3/ the SPARSEMEM_VMEMMAP={y,n} cases are unified
+     */
+    bitmap_xor(subsection_map, map, subsection_map, SUBSECTIONS_PER_SECTION);
+    if (bitmap_empty(subsection_map, SUBSECTIONS_PER_SECTION)) {
+        unsigned long section_nr = pfn_to_section_nr(pfn);
+
+        if (!section_is_early) {
+            kfree(ms->usage);
+            ms->usage = NULL;
+        }
+        memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
+        ms->section_mem_map = sparse_encode_mem_map(NULL, section_nr);
+    }
+
+    if (section_is_early && memmap)
+        free_map_bootmem(memmap);
+    else
+        depopulate_section_memmap(pfn, nr_pages, altmap);
+}
+
+static struct page * __meminit section_activate(int nid, unsigned long pfn,
+        unsigned long nr_pages, struct vmem_altmap *altmap)
+{
+    DECLARE_BITMAP(map, SUBSECTIONS_PER_SECTION) = { 0 };
+    struct mem_section *ms = __pfn_to_section(pfn);
+    struct mem_section_usage *usage = NULL;
+    unsigned long *subsection_map;
+    struct page *memmap;
+    int rc = 0;
+
+    subsection_mask_set(map, pfn, nr_pages);
+
+    if (!ms->usage) {
+        usage = kzalloc(mem_section_usage_size(), GFP_KERNEL);
+        if (!usage)
+            return ERR_PTR(-ENOMEM);
+        ms->usage = usage;
+    }
+    subsection_map = &ms->usage->subsection_map[0];
+
+    if (bitmap_empty(map, SUBSECTIONS_PER_SECTION))
+        rc = -EINVAL;
+    else if (bitmap_intersects(map, subsection_map, SUBSECTIONS_PER_SECTION))
+        rc = -EEXIST;
+    else
+        bitmap_or(subsection_map, map, subsection_map,
+                SUBSECTIONS_PER_SECTION);
+
+    if (rc) {
+        if (usage)
+            ms->usage = NULL;
+        kfree(usage);
+        return ERR_PTR(rc);
+    }
+
+    /*
+     * The early init code does not consider partially populated
+     * initial sections, it simply assumes that memory will never be
+     * referenced.  If we hot-add memory into such a section then we
+     * do not need to populate the memmap and can simply reuse what
+     * is already there.
+     */
+    if (nr_pages < PAGES_PER_SECTION && early_section(ms))
+        return pfn_to_page(pfn);
+
+    memmap = populate_section_memmap(pfn, nr_pages, nid, altmap);
+    if (!memmap) {
+        section_deactivate(pfn, nr_pages, altmap);
+        return ERR_PTR(-ENOMEM);
+    }
+
+    return memmap;
+}
+
 /**
- * sparse_add_one_section - add a memory section
+ * sparse_add_section - add a memory section, or populate an existing one
  * @nid: The node to add section on
  * @start_pfn: start pfn of the memory range
+ * @nr_pages: number of pfns to add in the section
  * @altmap: device page map
  *
  * This is only intended for hotplug.
···
  * * -EEXIST	- Section has been present.
  * * -ENOMEM	- Out of memory.
  */
-int __meminit sparse_add_one_section(int nid, unsigned long start_pfn,
-                     struct vmem_altmap *altmap)
+int __meminit sparse_add_section(int nid, unsigned long start_pfn,
+        unsigned long nr_pages, struct vmem_altmap *altmap)
 {
     unsigned long section_nr = pfn_to_section_nr(start_pfn);
     struct mem_section *ms;
     struct page *memmap;
-    unsigned long *usemap;
     int ret;

-    /*
-     * no locking for this, because it does its own
-     * plus, it does a kmalloc
-     */
     ret = sparse_index_init(section_nr, nid);
-    if (ret < 0 && ret != -EEXIST)
+    if (ret < 0)
         return ret;
-    ret = 0;
-    memmap = kmalloc_section_memmap(section_nr, nid, altmap);
-    if (!memmap)
-        return -ENOMEM;
-    usemap = __kmalloc_section_usemap();
-    if (!usemap) {
-        __kfree_section_memmap(memmap, altmap);
-        return -ENOMEM;
-    }

-    ms = __pfn_to_section(start_pfn);
-    if (ms->section_mem_map & SECTION_MARKED_PRESENT) {
-        ret = -EEXIST;
-        goto out;
-    }
+    memmap = section_activate(nid, start_pfn, nr_pages, altmap);
+    if (IS_ERR(memmap))
+        return PTR_ERR(memmap);

     /*
      * Poison uninitialized struct pages in order to catch invalid flags
      * combinations.
      */
-    page_init_poison(memmap, sizeof(struct page) * PAGES_PER_SECTION);
+    page_init_poison(pfn_to_page(start_pfn), sizeof(struct page) * nr_pages);

+    ms = __pfn_to_section(start_pfn);
+    set_section_nid(section_nr, nid);
     section_mark_present(ms);
-    sparse_init_one_section(ms, section_nr, memmap, usemap);

-out:
-    if (ret < 0) {
-        kfree(usemap);
-        __kfree_section_memmap(memmap, altmap);
-    }
-    return ret;
+    /* Align memmap to section boundary in the subsection case */
+    if (section_nr_to_pfn(section_nr) != start_pfn)
+        memmap = pfn_to_kaddr(section_nr_to_pfn(section_nr));
+    sparse_init_one_section(ms, section_nr, memmap, ms->usage, 0);
+
+    return 0;
 }

-#ifdef CONFIG_MEMORY_HOTREMOVE
 #ifdef CONFIG_MEMORY_FAILURE
 static void clear_hwpoisoned_pages(struct page *memmap, int nr_pages)
 {
···
 }
 #endif

-static void free_section_usemap(struct page *memmap, unsigned long *usemap,
+void sparse_remove_section(struct mem_section *ms, unsigned long pfn,
+        unsigned long nr_pages, unsigned long map_offset,
         struct vmem_altmap *altmap)
 {
-    struct page *usemap_page;
-
-    if (!usemap)
-        return;
-
-    usemap_page = virt_to_page(usemap);
-    /*
-     * Check to see if allocation came from hot-plug-add
-     */
-    if (PageSlab(usemap_page) || PageCompound(usemap_page)) {
-        kfree(usemap);
-        if (memmap)
-            __kfree_section_memmap(memmap, altmap);
-        return;
-    }
-
-    /*
-     * The usemap came from bootmem. This is packed with other usemaps
-     * on the section which has pgdat at boot time. Just keep it as is now.
-     */
-
-    if (memmap)
-        free_map_bootmem(memmap);
+    clear_hwpoisoned_pages(pfn_to_page(pfn) + map_offset,
+            nr_pages - map_offset);
+    section_deactivate(pfn, nr_pages, altmap);
 }
-
-void sparse_remove_one_section(struct zone *zone, struct mem_section *ms,
-        unsigned long map_offset, struct vmem_altmap *altmap)
-{
-    struct page *memmap = NULL;
-    unsigned long *usemap = NULL;
-
-    if (ms->section_mem_map) {
-        usemap = ms->pageblock_flags;
-        memmap = sparse_decode_mem_map(ms->section_mem_map,
-                           __section_nr(ms));
-        ms->section_mem_map = 0;
-        ms->pageblock_flags = NULL;
-    }
-
-    clear_hwpoisoned_pages(memmap + map_offset,
-                   PAGES_PER_SECTION - map_offset);
-    free_section_usemap(memmap, usemap, altmap);
-}
-#endif /* CONFIG_MEMORY_HOTREMOVE */
 #endif /* CONFIG_MEMORY_HOTPLUG */
+9 -11
net/core/neighbour.c
···
 EXPORT_SYMBOL(neigh_app_ns);

 #ifdef CONFIG_SYSCTL
-static int zero;
-static int int_max = INT_MAX;
 static int unres_qlen_max = INT_MAX / SKB_TRUESIZE(ETH_FRAME_LEN);

 static int proc_unres_qlen(struct ctl_table *ctl, int write,
···
 	int size, ret;
 	struct ctl_table tmp = *ctl;

-	tmp.extra1 = &zero;
+	tmp.extra1 = SYSCTL_ZERO;
 	tmp.extra2 = &unres_qlen_max;
 	tmp.data = &size;
···
 	struct ctl_table tmp = *ctl;
 	int ret;

-	tmp.extra1 = &zero;
-	tmp.extra2 = &int_max;
+	tmp.extra1 = SYSCTL_ZERO;
+	tmp.extra2 = SYSCTL_INT_MAX;

 	ret = proc_dointvec_minmax(&tmp, write, buffer, lenp, ppos);
 	neigh_proc_update(ctl, write);
···
 		.procname	= "gc_thresh1",
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
-		.extra1		= &zero,
-		.extra2		= &int_max,
+		.extra1		= SYSCTL_ZERO,
+		.extra2		= SYSCTL_INT_MAX,
 		.proc_handler	= proc_dointvec_minmax,
 	},
 	[NEIGH_VAR_GC_THRESH2] = {
 		.procname	= "gc_thresh2",
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
-		.extra1		= &zero,
-		.extra2		= &int_max,
+		.extra1		= SYSCTL_ZERO,
+		.extra2		= SYSCTL_INT_MAX,
 		.proc_handler	= proc_dointvec_minmax,
 	},
 	[NEIGH_VAR_GC_THRESH3] = {
 		.procname	= "gc_thresh3",
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
-		.extra1		= &zero,
-		.extra2		= &int_max,
+		.extra1		= SYSCTL_ZERO,
+		.extra2		= SYSCTL_INT_MAX,
 		.proc_handler	= proc_dointvec_minmax,
 	},
 	{},
+16 -18
net/core/sysctl_net_core.c
···
 #include <net/busy_poll.h>
 #include <net/pkt_sched.h>

-static int zero = 0;
-static int one = 1;
 static int two __maybe_unused = 2;
 static int min_sndbuf = SOCK_MIN_SNDBUF;
 static int min_rcvbuf = SOCK_MIN_RCVBUF;
···
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax_bpf_enable,
 # ifdef CONFIG_BPF_JIT_ALWAYS_ON
-		.extra1		= &one,
-		.extra2		= &one,
+		.extra1		= SYSCTL_ONE,
+		.extra2		= SYSCTL_ONE,
 # else
-		.extra1		= &zero,
+		.extra1		= SYSCTL_ZERO,
 		.extra2		= &two,
 # endif
 	},
···
 		.maxlen		= sizeof(int),
 		.mode		= 0600,
 		.proc_handler	= proc_dointvec_minmax_bpf_restricted,
-		.extra1		= &zero,
+		.extra1		= SYSCTL_ZERO,
 		.extra2		= &two,
 	},
 	{
···
 		.maxlen		= sizeof(int),
 		.mode		= 0600,
 		.proc_handler	= proc_dointvec_minmax_bpf_restricted,
-		.extra1		= &zero,
-		.extra2		= &one,
+		.extra1		= SYSCTL_ZERO,
+		.extra2		= SYSCTL_ONE,
 	},
 # endif
 	{
···
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &zero,
-		.extra2		= &one
+		.extra1		= SYSCTL_ZERO,
+		.extra2		= SYSCTL_ONE
 	},
 #ifdef CONFIG_RPS
 	{
···
 		.maxlen		= sizeof(unsigned int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &zero,
+		.extra1		= SYSCTL_ZERO,
 	},
 	{
 		.procname	= "busy_read",
···
 		.maxlen		= sizeof(unsigned int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &zero,
+		.extra1		= SYSCTL_ZERO,
 	},
 #endif
 #ifdef CONFIG_NET_SCHED
···
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &one,
+		.extra1		= SYSCTL_ONE,
 		.extra2		= &max_skb_frags,
 	},
 	{
···
 		.maxlen		= sizeof(unsigned int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &zero,
+		.extra1		= SYSCTL_ZERO,
 	},
 	{
 		.procname	= "fb_tunnels_only_for_init_net",
···
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &zero,
-		.extra2		= &one,
+		.extra1		= SYSCTL_ZERO,
+		.extra2		= SYSCTL_ONE,
 	},
 	{
 		.procname	= "devconf_inherit_init_net",
···
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &zero,
+		.extra1		= SYSCTL_ZERO,
 		.extra2		= &two,
 	},
 	{
···
 		.data		= &init_net.core.sysctl_somaxconn,
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
-		.extra1		= &zero,
+		.extra1		= SYSCTL_ZERO,
 		.proc_handler	= proc_dointvec_minmax
 	},
 	{ }
+7 -9
net/dccp/sysctl.c
···
 #endif

 /* Boundary values */
-static int		zero     = 0,
-			one      = 1,
-			u8_max   = 0xFF;
+static int		u8_max   = 0xFF;
 static unsigned long	seqw_min = DCCPF_SEQ_WMIN,
 			seqw_max = 0xFFFFFFFF;	/* maximum on 32 bit */
···
 		.maxlen		= sizeof(sysctl_dccp_rx_ccid),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &zero,
+		.extra1		= SYSCTL_ZERO,
 		.extra2		= &u8_max,	/* RFC 4340, 10. */
 	},
 	{
···
 		.maxlen		= sizeof(sysctl_dccp_tx_ccid),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &zero,
+		.extra1		= SYSCTL_ZERO,
 		.extra2		= &u8_max,	/* RFC 4340, 10. */
 	},
 	{
···
 		.maxlen		= sizeof(sysctl_dccp_request_retries),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &one,
+		.extra1		= SYSCTL_ONE,
 		.extra2		= &u8_max,
 	},
 	{
···
 		.maxlen		= sizeof(sysctl_dccp_retries1),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &zero,
+		.extra1		= SYSCTL_ZERO,
 		.extra2		= &u8_max,
 	},
 	{
···
 		.maxlen		= sizeof(sysctl_dccp_retries2),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &zero,
+		.extra1		= SYSCTL_ZERO,
 		.extra2		= &u8_max,
 	},
 	{
···
 		.maxlen		= sizeof(sysctl_dccp_tx_qlen),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &zero,
+		.extra1		= SYSCTL_ZERO,
 	},
 	{
 		.procname	= "sync_ratelimit",
+29 -31
net/ipv4/sysctl_net_ipv4.c
···
 #include <net/protocol.h>
 #include <net/netevent.h>

-static int zero;
-static int one = 1;
 static int two = 2;
 static int four = 4;
 static int thousand = 1000;
···
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &zero,
+		.extra1		= SYSCTL_ZERO,
 	},
 	{
 		.procname	= "icmp_msgs_burst",
···
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &zero,
+		.extra1		= SYSCTL_ZERO,
 	},
 	{
 		.procname	= "udp_mem",
···
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &zero,
-		.extra2		= &one,
+		.extra1		= SYSCTL_ZERO,
+		.extra2		= SYSCTL_ONE,
 	},
 #endif
 	{
···
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
 		.proc_handler	= ipv4_fwd_update_priority,
-		.extra1		= &zero,
-		.extra2		= &one,
+		.extra1		= SYSCTL_ZERO,
+		.extra2		= SYSCTL_ONE,
 	},
 	{
 		.procname	= "ip_nonlocal_bind",
···
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &zero,
-		.extra2		= &one,
+		.extra1		= SYSCTL_ZERO,
+		.extra2		= SYSCTL_ONE,
 	},
 #endif
 	{
···
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &one
+		.extra1		= SYSCTL_ONE
 	},
 #endif
 	{
···
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &zero,
+		.extra1		= SYSCTL_ZERO,
 		.extra2		= &two,
 	},
 	{
···
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
 		.proc_handler	= proc_tfo_blackhole_detect_timeout,
-		.extra1		= &zero,
+		.extra1		= SYSCTL_ZERO,
 	},
 #ifdef CONFIG_IP_ROUTE_MULTIPATH
 	{
···
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &zero,
-		.extra2		= &one,
+		.extra1		= SYSCTL_ZERO,
+		.extra2		= SYSCTL_ONE,
 	},
 	{
 		.procname	= "fib_multipath_hash_policy",
···
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
 		.proc_handler	= proc_fib_multipath_hash_policy,
-		.extra1		= &zero,
-		.extra2		= &two,
+		.extra1		= SYSCTL_ZERO,
+		.extra2		= SYSCTL_ONE,
 	},
 #endif
 	{
···
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &zero,
-		.extra2		= &one,
+		.extra1		= SYSCTL_ZERO,
+		.extra2		= SYSCTL_ONE,
 	},
 #endif
 	{
···
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &zero,
+		.extra1		= SYSCTL_ZERO,
 		.extra2		= &four,
 	},
 	{
···
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &one,
+		.extra1		= SYSCTL_ONE,
 		.extra2		= &gso_max_segs,
 	},
 	{
···
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &zero,
+		.extra1		= SYSCTL_ZERO,
 		.extra2		= &one_day_secs
 	},
 	{
···
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &zero,
-		.extra2		= &one,
+		.extra1		= SYSCTL_ZERO,
+		.extra2		= SYSCTL_ONE,
 	},
 	{
 		.procname	= "tcp_invalid_ratelimit",
···
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &zero,
+		.extra1		= SYSCTL_ZERO,
 		.extra2		= &thousand,
 	},
 	{
···
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &zero,
+		.extra1		= SYSCTL_ZERO,
 		.extra2		= &thousand,
 	},
 	{
···
 		.maxlen		= sizeof(init_net.ipv4.sysctl_tcp_wmem),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &one,
+		.extra1		= SYSCTL_ONE,
 	},
 	{
 		.procname	= "tcp_rmem",
···
 		.maxlen		= sizeof(init_net.ipv4.sysctl_tcp_rmem),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &one,
+		.extra1		= SYSCTL_ONE,
 	},
 	{
 		.procname	= "tcp_comp_sack_delay_ns",
···
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &zero,
+		.extra1		= SYSCTL_ZERO,
 		.extra2		= &comp_sack_nr_max,
 	},
 	{
···
 		.maxlen		= sizeof(init_net.ipv4.sysctl_udp_rmem_min),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &one
+		.extra1		= SYSCTL_ONE
 	},
 	{
 		.procname	= "udp_wmem_min",
···
 		.maxlen		= sizeof(init_net.ipv4.sysctl_udp_wmem_min),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &one
+		.extra1		= SYSCTL_ONE
 	},
 	{ }
 };
+2 -4
net/ipv6/addrconf.c
···
 }

 static int minus_one = -1;
-static const int zero = 0;
-static const int one = 1;
 static const int two_five_five = 255;

 static const struct ctl_table addrconf_sysctl[] = {
···
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= (void *)&one,
+		.extra1		= (void *)SYSCTL_ONE,
 		.extra2		= (void *)&two_five_five,
 	},
 	{
···
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= (void *)&zero,
+		.extra1		= (void *)SYSCTL_ZERO,
 		.extra2		= (void *)&two_five_five,
 	},
 	{
+2 -5
net/ipv6/route.c
···
 	return 0;
 }

-static int zero;
-static int one = 1;
-
 static struct ctl_table ipv6_route_table_template[] = {
 	{
 		.procname	= "flush",
···
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &zero,
-		.extra2		= &one,
+		.extra1		= SYSCTL_ZERO,
+		.extra2		= SYSCTL_ONE,
 	},
 	{ }
 };
+4 -6
net/ipv6/sysctl_net_ipv6.c
···
 #include <net/calipso.h>
 #endif

-static int zero;
-static int one = 1;
 static int flowlabel_reflect_max = 0x7;
 static int auto_flowlabels_min;
 static int auto_flowlabels_max = IP6_AUTO_FLOW_LABEL_MAX;
···
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &zero,
+		.extra1		= SYSCTL_ZERO,
 		.extra2		= &flowlabel_reflect_max,
 	},
 	{
···
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
 		.proc_handler	= proc_rt6_multipath_hash_policy,
-		.extra1		= &zero,
-		.extra2		= &one,
+		.extra1		= SYSCTL_ZERO,
+		.extra2		= SYSCTL_ONE,
 	},
 	{
 		.procname	= "seg6_flowlabel",
···
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &one
+		.extra1		= SYSCTL_ONE
 	},
 #ifdef CONFIG_NETLABEL
 	{
+4 -6
net/mpls/af_mpls.c
···

 #define MPLS_NEIGH_TABLE_UNSPEC (NEIGH_LINK_TABLE + 1)

-static int zero = 0;
-static int one = 1;
 static int label_limit = (1 << 20) - 1;
 static int ttl_max = 255;
···
 		.data		= &platform_labels,
 		.maxlen		= sizeof(int),
 		.mode		= table->mode,
-		.extra1		= &zero,
+		.extra1		= SYSCTL_ZERO,
 		.extra2		= &label_limit,
 	};
···
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &zero,
-		.extra2		= &one,
+		.extra1		= SYSCTL_ZERO,
+		.extra2		= SYSCTL_ONE,
 	},
 	{
 		.procname	= "default_ttl",
···
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &one,
+		.extra1		= SYSCTL_ONE,
 		.extra2		= &ttl_max,
 	},
 	{ }
+1 -2
net/netfilter/ipvs/ip_vs_ctl.c
···

 #ifdef CONFIG_SYSCTL

-static int zero;
 static int three = 3;

 static int
···
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &zero,
+		.extra1		= SYSCTL_ZERO,
 		.extra2		= &three,
 	},
 	{
+4 -5
net/rxrpc/sysctl.c
···
 #include "ar-internal.h"

 static struct ctl_table_header *rxrpc_sysctl_reg_table;
-static const unsigned int one = 1;
 static const unsigned int four = 4;
 static const unsigned int thirtytwo = 32;
 static const unsigned int n_65535 = 65535;
···
 		.maxlen		= sizeof(unsigned int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= (void *)&one,
+		.extra1		= (void *)SYSCTL_ONE,
 		.extra2		= (void *)&rxrpc_max_client_connections,
 	},
 	{
···
 		.maxlen		= sizeof(unsigned int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= (void *)&one,
+		.extra1		= (void *)SYSCTL_ONE,
 		.extra2		= (void *)&n_max_acks,
 	},
 	{
···
 		.maxlen		= sizeof(unsigned int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= (void *)&one,
+		.extra1		= (void *)SYSCTL_ONE,
 		.extra2		= (void *)&n_65535,
 	},
 	{
···
 		.maxlen		= sizeof(unsigned int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= (void *)&one,
+		.extra1		= (void *)SYSCTL_ONE,
 		.extra2		= (void *)&four,
 	},
+16 -19
net/sctp/sysctl.c
···
 #include <net/sctp/sctp.h>
 #include <linux/sysctl.h>

-static int zero = 0;
-static int one = 1;
 static int timer_max = 86400000; /* ms in one day */
-static int int_max = INT_MAX;
 static int sack_timer_min = 1;
 static int sack_timer_max = 500;
 static int addr_scope_max = SCTP_SCOPE_POLICY_MAX;
···
 		.maxlen		= sizeof(unsigned int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &one,
+		.extra1		= SYSCTL_ONE,
 		.extra2		= &timer_max
 	},
 	{
···
 		.maxlen		= sizeof(unsigned int),
 		.mode		= 0644,
 		.proc_handler	= proc_sctp_do_rto_min,
-		.extra1		= &one,
+		.extra1		= SYSCTL_ONE,
 		.extra2		= &init_net.sctp.rto_max
 	},
 	{
···
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &zero,
-		.extra2		= &int_max
+		.extra1		= SYSCTL_ZERO,
+		.extra2		= SYSCTL_INT_MAX,
 	},
 	{
 		.procname	= "cookie_preserve_enable",
···
 		.maxlen		= sizeof(unsigned int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &one,
+		.extra1		= SYSCTL_ONE,
 		.extra2		= &timer_max
 	},
 	{
···
 		.maxlen		= sizeof(unsigned int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &one,
+		.extra1		= SYSCTL_ONE,
 		.extra2		= &timer_max
 	},
 	{
···
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &one,
-		.extra2		= &int_max
+		.extra1		= SYSCTL_ONE,
+		.extra2		= SYSCTL_INT_MAX,
 	},
 	{
 		.procname	= "path_max_retrans",
···
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &one,
-		.extra2		= &int_max
+		.extra1		= SYSCTL_ONE,
+		.extra2		= SYSCTL_INT_MAX,
 	},
 	{
 		.procname	= "max_init_retransmits",
···
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &one,
-		.extra2		= &int_max
+		.extra1		= SYSCTL_ONE,
+		.extra2		= SYSCTL_INT_MAX,
 	},
 	{
 		.procname	= "pf_retrans",
···
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &zero,
-		.extra2		= &int_max
+		.extra1		= SYSCTL_ZERO,
+		.extra2		= SYSCTL_INT_MAX,
 	},
 	{
 		.procname	= "sndbuf_policy",
···
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &zero,
+		.extra1		= SYSCTL_ZERO,
 		.extra2		= &addr_scope_max,
 	},
 	{
···
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
 		.proc_handler	= &proc_dointvec_minmax,
-		.extra1		= &one,
+		.extra1		= SYSCTL_ONE,
 		.extra2		= &rwnd_scale_max,
 	},
 	{
+1 -2
net/sunrpc/xprtrdma/transport.c
···
 static unsigned int max_slot_table_size = RPCRDMA_MAX_SLOT_TABLE;
 static unsigned int min_inline_size = RPCRDMA_MIN_INLINE;
 static unsigned int max_inline_size = RPCRDMA_MAX_INLINE;
-static unsigned int zero;
 static unsigned int max_padding = PAGE_SIZE;
 static unsigned int min_memreg = RPCRDMA_BOUNCEBUFFERS;
 static unsigned int max_memreg = RPCRDMA_LAST - 1;
···
 		.maxlen		= sizeof(unsigned int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &zero,
+		.extra1		= SYSCTL_ZERO,
 		.extra2		= &max_padding,
 	},
 	{
+2 -4
net/tipc/sysctl.c
···

 #include <linux/sysctl.h>

-static int zero;
-static int one = 1;
 static struct ctl_table_header *tipc_ctl_hdr;

 static struct ctl_table tipc_table[] = {
···
 		.maxlen		= sizeof(sysctl_tipc_rmem),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &one,
+		.extra1		= SYSCTL_ONE,
 	},
 	{
 		.procname	= "named_timeout",
···
 		.maxlen		= sizeof(sysctl_tipc_named_timeout),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &zero,
+		.extra1		= SYSCTL_ZERO,
 	},
 	{
 		.procname	= "sk_filter",
+12 -14
security/keys/sysctl.c
···
 #include <linux/sysctl.h>
 #include "internal.h"

-static const int zero, one = 1, max = INT_MAX;
-
 struct ctl_table key_sysctls[] = {
 	{
 		.procname = "maxkeys",
···
 		.maxlen = sizeof(unsigned),
 		.mode = 0644,
 		.proc_handler = proc_dointvec_minmax,
-		.extra1 = (void *) &one,
-		.extra2 = (void *) &max,
+		.extra1 = (void *) SYSCTL_ONE,
+		.extra2 = (void *) SYSCTL_INT_MAX,
 	},
 	{
 		.procname = "maxbytes",
···
 		.maxlen = sizeof(unsigned),
 		.mode = 0644,
 		.proc_handler = proc_dointvec_minmax,
-		.extra1 = (void *) &one,
-		.extra2 = (void *) &max,
+		.extra1 = (void *) SYSCTL_ONE,
+		.extra2 = (void *) SYSCTL_INT_MAX,
 	},
 	{
 		.procname = "root_maxkeys",
···
 		.maxlen = sizeof(unsigned),
 		.mode = 0644,
 		.proc_handler = proc_dointvec_minmax,
-		.extra1 = (void *) &one,
-		.extra2 = (void *) &max,
+		.extra1 = (void *) SYSCTL_ONE,
+		.extra2 = (void *) SYSCTL_INT_MAX,
 	},
 	{
 		.procname = "root_maxbytes",
···
 		.maxlen = sizeof(unsigned),
 		.mode = 0644,
 		.proc_handler = proc_dointvec_minmax,
-		.extra1 = (void *) &one,
-		.extra2 = (void *) &max,
+		.extra1 = (void *) SYSCTL_ONE,
+		.extra2 = (void *) SYSCTL_INT_MAX,
 	},
 	{
 		.procname = "gc_delay",
···
 		.maxlen = sizeof(unsigned),
 		.mode = 0644,
 		.proc_handler = proc_dointvec_minmax,
-		.extra1 = (void *) &zero,
-		.extra2 = (void *) &max,
+		.extra1 = (void *) SYSCTL_ZERO,
+		.extra2 = (void *) SYSCTL_INT_MAX,
 	},
 #ifdef CONFIG_PERSISTENT_KEYRINGS
 	{
···
 		.maxlen = sizeof(unsigned),
 		.mode = 0644,
 		.proc_handler = proc_dointvec_minmax,
-		.extra1 = (void *) &zero,
-		.extra2 = (void *) &max,
+		.extra1 = (void *) SYSCTL_ZERO,
+		.extra2 = (void *) SYSCTL_INT_MAX,
 	},
 #endif
 	{ }
+2 -4
security/loadpin/loadpin.c
···
 static DEFINE_SPINLOCK(pinned_root_spinlock);

 #ifdef CONFIG_SYSCTL
-static int zero;
-static int one = 1;

 static struct ctl_path loadpin_sysctl_path[] = {
 	{ .procname = "kernel", },
···
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec_minmax,
-		.extra1		= &zero,
-		.extra2		= &one,
+		.extra1		= SYSCTL_ZERO,
+		.extra2		= SYSCTL_ONE,
 	},
 	{ }
 };
+1 -2
security/yama/yama_lsm.c
···
 	return proc_dointvec_minmax(&table_copy, write, buffer, lenp, ppos);
 }

-static int zero;
 static int max_scope = YAMA_SCOPE_NO_ATTACH;

 static struct ctl_path yama_sysctl_path[] = {
···
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
 		.proc_handler	= yama_dointvec_minmax,
-		.extra1		= &zero,
+		.extra1		= SYSCTL_ZERO,
 		.extra2		= &max_scope,
 	},
 	{ }