
mm: add new api to enable ksm per process

Patch series "mm: process/cgroup ksm support", v9.

So far KSM can only be enabled by calling madvise for memory regions. To
be able to use KSM for more workloads, KSM needs to have the ability to be
enabled / disabled at the process / cgroup level.

Use case 1:
The madvise call is not available in the programming language. An
example is a program with forked workloads using a garbage-collected
language without pointers. In such a language madvise cannot be made
available.

In addition, the addresses of objects get moved around as they are
garbage collected. KSM sharing needs to be enabled "from the outside"
for these types of workloads.

Use case 2:
The same interpreter can also be used for workloads where KSM brings
no benefit or even has overhead. We'd like to be able to enable KSM on
a workload by workload basis.

Use case 3:
With the madvise call, sharing opportunities are only enabled for the
current process: it is a workload-local decision. A considerable number
of sharing opportunities may exist across multiple workloads or jobs (if
they are part of the same security domain). Only a higher-level entity
like a job scheduler or container can know for certain whether it is
running one or more instances of a job. That job scheduler, however,
doesn't have the necessary internal workload knowledge to make targeted
madvise calls.

Security concerns:

In previous discussions security concerns have been brought up. The
problem is that an individual workload does not know what else is
running on a machine. Therefore it has to be very conservative about
which memory areas can be shared. However, if the system is dedicated to
running multiple jobs within the same security domain, it's the job
scheduler that has the knowledge that sharing can be safely enabled and
is even desirable.

Performance:

Experiments with using UKSM have shown a capacity increase of around 20%.

Here are the metrics from an Instagram workload (taken from a machine
with 64 GB of main memory):

full_scans: 445
general_profit: 20158298048
max_page_sharing: 256
merge_across_nodes: 1
pages_shared: 129547
pages_sharing: 5119146
pages_to_scan: 4000
pages_unshared: 1760924
pages_volatile: 10761341
run: 1
sleep_millisecs: 20
stable_node_chains: 167
stable_node_chains_prune_millisecs: 2000
stable_node_dups: 2751
use_zero_pages: 0
zero_pages_sharing: 0

After the service is running for 30 minutes to an hour, 4 to 5 million
shared pages are common for this workload when using KSM.


Detailed changes:

1. New options for prctl system command
This patch series adds two new options to the prctl system call.
The first allows enabling KSM at the process level and the second
queries the setting.

The setting will be inherited by child processes.

With the above setting, KSM can be enabled for the seed process of a cgroup
and all processes in the cgroup will inherit the setting.

2. Changes to KSM processing
When KSM is enabled at the process level, the KSM code will iterate
over all the VMAs and enable KSM for the eligible VMAs.

When forking a process that has KSM enabled, the setting will be
inherited by the new child process.

3. Add general_profit metric
The general_profit metric of KSM is specified in the documentation,
but not calculated. This adds the general profit metric to
/sys/kernel/debug/mm/ksm.

4. Add more metrics to ksm_stat
This adds the process profit metric to /proc/<pid>/ksm_stat.

5. Add more tests to ksm_tests and ksm_functional_tests
This adds an option to specify the merge type to the ksm_tests.
This allows testing both madvise and prctl KSM.

It also adds two new tests to ksm_functional_tests: one to test
the new prctl options, and the other a fork test to verify that
the KSM process setting is inherited by child processes.


This patch (of 3):

So far KSM can only be enabled by calling madvise for memory regions. To
be able to use KSM for more workloads, KSM needs to have the ability to be
enabled / disabled at the process / cgroup level.

1. New options for prctl system command

This patch series adds two new options to the prctl system call.
The first allows enabling KSM at the process level and the second
queries the setting.

The setting will be inherited by child processes.

With the above setting, KSM can be enabled for the seed process of a
cgroup and all processes in the cgroup will inherit the setting.

2. Changes to KSM processing

When KSM is enabled at the process level, the KSM code will iterate
over all the VMAs and enable KSM for the eligible VMAs.

When forking a process that has KSM enabled, the setting will be
inherited by the new child process.

1) Introduce new MMF_VM_MERGE_ANY flag

This introduces the new MMF_VM_MERGE_ANY flag. When this flag
is set, kernel samepage merging (KSM) gets enabled for all VMAs of a
process.

2) Setting VM_MERGEABLE on VMA creation

When a VMA is created, if the MMF_VM_MERGE_ANY flag is set, the
VM_MERGEABLE flag will be set for this VMA.

3) Support disabling of KSM for a process

This adds the ability to disable KSM for a process if KSM has been
enabled for the process with prctl.

4) Add new prctl option to get and set KSM for a process

This adds two new options to the prctl system call:
- enable KSM for all VMAs of a process (if the VMAs support it).
- query whether KSM has been enabled for a process.

3. Disabling MMF_VM_MERGE_ANY for storage keys in s390

In the s390 architecture, when storage keys are used, the
MMF_VM_MERGE_ANY flag will be disabled.

Link: https://lkml.kernel.org/r/20230418051342.1919757-1-shr@devkernel.io
Link: https://lkml.kernel.org/r/20230418051342.1919757-2-shr@devkernel.io
Signed-off-by: Stefan Roesch <shr@devkernel.io>
Acked-by: David Hildenbrand <david@redhat.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Bagas Sanjaya <bagasdotme@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

authored by Stefan Roesch and committed by Andrew Morton
d7597f59 2124f79d

7 files changed, 146 insertions(+), 19 deletions(-)
arch/s390/mm/gmap.c (+7)

 	int ret;
 	VMA_ITERATOR(vmi, mm, 0);

+	/*
+	 * Make sure to disable KSM (if enabled for the whole process or
+	 * individual VMAs). Note that nothing currently hinders user space
+	 * from re-enabling it.
+	 */
+	clear_bit(MMF_VM_MERGE_ANY, &mm->flags);
+
 	for_each_vma(vmi, vma) {
 		/* Copy vm_flags to avoid partial modifications in ksm_madvise */
 		vm_flags = vma->vm_flags;
include/linux/ksm.h (+19 -2)

 #ifdef CONFIG_KSM
 int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
 		unsigned long end, int advice, unsigned long *vm_flags);
+
+void ksm_add_vma(struct vm_area_struct *vma);
+int ksm_enable_merge_any(struct mm_struct *mm);
+
 int __ksm_enter(struct mm_struct *mm);
 void __ksm_exit(struct mm_struct *mm);

 static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
 {
-	if (test_bit(MMF_VM_MERGEABLE, &oldmm->flags))
-		return __ksm_enter(mm);
+	int ret;
+
+	if (test_bit(MMF_VM_MERGEABLE, &oldmm->flags)) {
+		ret = __ksm_enter(mm);
+		if (ret)
+			return ret;
+	}
+
+	if (test_bit(MMF_VM_MERGE_ANY, &oldmm->flags))
+		set_bit(MMF_VM_MERGE_ANY, &mm->flags);
+
 	return 0;
 }
···
 		int force_early);
 #endif
 #else /* !CONFIG_KSM */
+
+static inline void ksm_add_vma(struct vm_area_struct *vma)
+{
+}

 static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
 {
include/linux/sched/coredump.h (+1)

 #define MMF_INIT_MASK		(MMF_DUMPABLE_MASK | MMF_DUMP_FILTER_MASK |\
 				 MMF_DISABLE_THP_MASK | MMF_HAS_MDWE_MASK)

+#define MMF_VM_MERGE_ANY	29
 #endif /* _LINUX_SCHED_COREDUMP_H */
include/uapi/linux/prctl.h (+2)

 #define PR_GET_AUXV			0x41555856

+#define PR_SET_MEMORY_MERGE		67
+#define PR_GET_MEMORY_MERGE		68
 #endif /* _LINUX_PRCTL_H */
kernel/sys.c (+27)

 #include <linux/highuid.h>
 #include <linux/fs.h>
 #include <linux/kmod.h>
+#include <linux/ksm.h>
 #include <linux/perf_event.h>
 #include <linux/resource.h>
 #include <linux/kernel.h>
···
 	case PR_SET_VMA:
 		error = prctl_set_vma(arg2, arg3, arg4, arg5);
 		break;
+#ifdef CONFIG_KSM
+	case PR_SET_MEMORY_MERGE:
+		if (arg3 || arg4 || arg5)
+			return -EINVAL;
+		if (mmap_write_lock_killable(me->mm))
+			return -EINTR;
+
+		if (arg2) {
+			error = ksm_enable_merge_any(me->mm);
+		} else {
+			/*
+			 * TODO: we might want disable KSM on all VMAs and
+			 * trigger unsharing to completely disable KSM.
+			 */
+			clear_bit(MMF_VM_MERGE_ANY, &me->mm->flags);
+			error = 0;
+		}
+		mmap_write_unlock(me->mm);
+		break;
+	case PR_GET_MEMORY_MERGE:
+		if (arg2 || arg3 || arg4 || arg5)
+			return -EINVAL;
+
+		error = !!test_bit(MMF_VM_MERGE_ANY, &me->mm->flags);
+		break;
+#endif
 	default:
 		error = -EINVAL;
 		break;
mm/ksm.c (+87 -17)

 	return (ret & VM_FAULT_OOM) ? -ENOMEM : 0;
 }

+static bool vma_ksm_compatible(struct vm_area_struct *vma)
+{
+	if (vma->vm_flags & (VM_SHARED | VM_MAYSHARE | VM_PFNMAP |
+			     VM_IO | VM_DONTEXPAND | VM_HUGETLB |
+			     VM_MIXEDMAP))
+		return false;		/* just ignore the advice */
+
+	if (vma_is_dax(vma))
+		return false;
+
+#ifdef VM_SAO
+	if (vma->vm_flags & VM_SAO)
+		return false;
+#endif
+#ifdef VM_SPARC_ADI
+	if (vma->vm_flags & VM_SPARC_ADI)
+		return false;
+#endif
+
+	return true;
+}
+
 static struct vm_area_struct *find_mergeable_vma(struct mm_struct *mm,
 		unsigned long addr)
 {
···
 		mm_slot_free(mm_slot_cache, mm_slot);
 		clear_bit(MMF_VM_MERGEABLE, &mm->flags);
+		clear_bit(MMF_VM_MERGE_ANY, &mm->flags);
 		mmdrop(mm);
 	} else
 		spin_unlock(&ksm_mmlist_lock);
···
 		mm_slot_free(mm_slot_cache, mm_slot);
 		clear_bit(MMF_VM_MERGEABLE, &mm->flags);
+		clear_bit(MMF_VM_MERGE_ANY, &mm->flags);
 		mmap_read_unlock(mm);
 		mmdrop(mm);
 	} else {
···
 	return 0;
 }

+static void __ksm_add_vma(struct vm_area_struct *vma)
+{
+	unsigned long vm_flags = vma->vm_flags;
+
+	if (vm_flags & VM_MERGEABLE)
+		return;
+
+	if (vma_ksm_compatible(vma))
+		vm_flags_set(vma, VM_MERGEABLE);
+}
+
+/**
+ * ksm_add_vma - Mark vma as mergeable if compatible
+ *
+ * @vma:  Pointer to vma
+ */
+void ksm_add_vma(struct vm_area_struct *vma)
+{
+	struct mm_struct *mm = vma->vm_mm;
+
+	if (test_bit(MMF_VM_MERGE_ANY, &mm->flags))
+		__ksm_add_vma(vma);
+}
+
+static void ksm_add_vmas(struct mm_struct *mm)
+{
+	struct vm_area_struct *vma;
+
+	VMA_ITERATOR(vmi, mm, 0);
+	for_each_vma(vmi, vma)
+		__ksm_add_vma(vma);
+}
+
+/**
+ * ksm_enable_merge_any - Add mm to mm ksm list and enable merging on all
+ *                        compatible VMA's
+ *
+ * @mm:  Pointer to mm
+ *
+ * Returns 0 on success, otherwise error code
+ */
+int ksm_enable_merge_any(struct mm_struct *mm)
+{
+	int err;
+
+	if (test_bit(MMF_VM_MERGE_ANY, &mm->flags))
+		return 0;
+
+	if (!test_bit(MMF_VM_MERGEABLE, &mm->flags)) {
+		err = __ksm_enter(mm);
+		if (err)
+			return err;
+	}
+
+	set_bit(MMF_VM_MERGE_ANY, &mm->flags);
+	ksm_add_vmas(mm);
+
+	return 0;
+}
+
 int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
 		unsigned long end, int advice, unsigned long *vm_flags)
 {
···
 	switch (advice) {
 	case MADV_MERGEABLE:
-		/*
-		 * Be somewhat over-protective for now!
-		 */
-		if (*vm_flags & (VM_MERGEABLE | VM_SHARED | VM_MAYSHARE |
-				 VM_PFNMAP | VM_IO | VM_DONTEXPAND |
-				 VM_HUGETLB | VM_MIXEDMAP))
-			return 0;		/* just ignore the advice */
-
-		if (vma_is_dax(vma))
+		if (vma->vm_flags & VM_MERGEABLE)
 			return 0;
-
-#ifdef VM_SAO
-		if (*vm_flags & VM_SAO)
+		if (!vma_ksm_compatible(vma))
 			return 0;
-#endif
-#ifdef VM_SPARC_ADI
-		if (*vm_flags & VM_SPARC_ADI)
-			return 0;
-#endif

 		if (!test_bit(MMF_VM_MERGEABLE, &mm->flags)) {
 			err = __ksm_enter(mm);
···
 	if (easy_to_free) {
 		mm_slot_free(mm_slot_cache, mm_slot);
+		clear_bit(MMF_VM_MERGE_ANY, &mm->flags);
 		clear_bit(MMF_VM_MERGEABLE, &mm->flags);
 		mmdrop(mm);
 	} else if (mm_slot) {
mm/mmap.c (+3)

 #include <linux/pkeys.h>
 #include <linux/oom.h>
 #include <linux/sched/mm.h>
+#include <linux/ksm.h>

 #include <linux/uaccess.h>
 #include <asm/cacheflush.h>
···
 	if (file && vm_flags & VM_SHARED)
 		mapping_unmap_writable(file->f_mapping);
 	file = vma->vm_file;
+	ksm_add_vma(vma);
 expanded:
 	perf_event_mmap(vma);
···
 		goto mas_store_fail;

 	mm->map_count++;
+	ksm_add_vma(vma);
 out:
 	perf_event_mmap(vma);
 	mm->total_vm += len >> PAGE_SHIFT;