
mm: numa: return the number of base pages altered by protection changes

Commit 0255d4918480 ("mm: Account for a THP NUMA hinting update as one
PTE update") was added to account for the number of PTE updates when
marking pages prot_numa. task_numa_work was using the old return value
to track how much address space had been updated. Altering the return
value causes the scanner to do more work than it is configured or
documented to do in a single unit of work.

This patch reverts that commit and accounts for the number of THP
updates separately in vmstat. It is up to the administrator to
interpret the pair of values correctly. This is a straightforward
operation and likely to be of interest only when actively debugging NUMA
balancing problems.

The impact of this patch is that the NUMA PTE scanner will scan slower
when THP is enabled and workloads may converge slower as a result. On
the flip side, system CPU usage should be lower than recent tests
reported. This is an illustrative example of a short single JVM specjbb
test:

specjbb
                    3.12.0                 3.12.0
                   vanilla            acctupdates
TPut 1      26143.00 (  0.00%)    25747.00 ( -1.51%)
TPut 7     185257.00 (  0.00%)   183202.00 ( -1.11%)
TPut 13    329760.00 (  0.00%)   346577.00 (  5.10%)
TPut 19    442502.00 (  0.00%)   460146.00 (  3.99%)
TPut 25    540634.00 (  0.00%)   549053.00 (  1.56%)
TPut 31    512098.00 (  0.00%)   519611.00 (  1.47%)
TPut 37    461276.00 (  0.00%)   474973.00 (  2.97%)
TPut 43    403089.00 (  0.00%)   414172.00 (  2.75%)

            3.12.0       3.12.0
           vanilla  acctupdates
User       5169.64      5184.14
System      100.45        80.02
Elapsed     252.75       251.85

Performance is similar, but note the reduction in system CPU time. While
this test showed a performance gain, it will not be universal; at least,
though, the scanner will behave as documented. The vmstats are obviously
different, but here is an interpretation of them from mmtests:

                           3.12.0       3.12.0
                          vanilla  acctupdates
NUMA page range updates   1408326     11043064
NUMA huge PMD updates           0        21040
NUMA PTE updates          1408326       291624

"NUMA page range updates" == nr_pte_updates and is the value returned to
the NUMA pte scanner. NUMA huge PMD updates were the number of THP
updates which in combination can be used to calculate how many ptes were
updated from userspace.

Signed-off-by: Mel Gorman <mgorman@suse.de>
Reported-by: Alex Thorlton <athorlton@sgi.com>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Authored by Mel Gorman, committed by Linus Torvalds
72403b4a 00619bcc

+9 -3

include/linux/vm_event_item.h (+1)
--- a/include/linux/vm_event_item.h
+++ b/include/linux/vm_event_item.h
@@ -39,6 +39,7 @@
 	PAGEOUTRUN, ALLOCSTALL, PGROTATED,
 #ifdef CONFIG_NUMA_BALANCING
 	NUMA_PTE_UPDATES,
+	NUMA_HUGE_PTE_UPDATES,
 	NUMA_HINT_FAULTS,
 	NUMA_HINT_FAULTS_LOCAL,
 	NUMA_PAGE_MIGRATE,
mm/mprotect.c (+7 -3)
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -112,6 +112,7 @@
 	pmd_t *pmd;
 	unsigned long next;
 	unsigned long pages = 0;
+	unsigned long nr_huge_updates = 0;
 
 	pmd = pmd_offset(pud, addr);
 	do {
@@ -126,9 +127,10 @@
 						newprot, prot_numa);
 
 				if (nr_ptes) {
-					if (nr_ptes == HPAGE_PMD_NR)
-						pages++;
-
+					if (nr_ptes == HPAGE_PMD_NR) {
+						pages += HPAGE_PMD_NR;
+						nr_huge_updates++;
+					}
 					continue;
 				}
 			}
@@ -141,6 +143,8 @@
 		pages += this_pages;
 	} while (pmd++, addr = next, addr != end);
 
+	if (nr_huge_updates)
+		count_vm_numa_events(NUMA_HUGE_PTE_UPDATES, nr_huge_updates);
 	return pages;
 }
mm/vmstat.c (+1)
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -812,6 +812,7 @@
 
 #ifdef CONFIG_NUMA_BALANCING
 	"numa_pte_updates",
+	"numa_huge_pte_updates",
 	"numa_hint_faults",
 	"numa_hint_faults_local",
 	"numa_pages_migrated",