x86/mm: Use PAGE_ALIGNED(x) instead of IS_ALIGNED(x, PAGE_SIZE)

<linux/mm.h> already provides the PAGE_ALIGNED() macro. Use this
macro instead of open-coding IS_ALIGNED() with PAGE_SIZE.

No change in functionality.

[ mingo: Tweak changelog. ]

Signed-off-by: Fanjun Kong <bh1scw@gmail.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20220526142038.1582839-1-bh1scw@gmail.com

Authored by Fanjun Kong and committed by Ingo Molnar (e19d1126, 6f3f04c1)

+4 -4
arch/x86/mm/init_64.c
@@ -1240,8 +1240,8 @@
 void __ref vmemmap_free(unsigned long start, unsigned long end,
 		struct vmem_altmap *altmap)
 {
-	VM_BUG_ON(!IS_ALIGNED(start, PAGE_SIZE));
-	VM_BUG_ON(!IS_ALIGNED(end, PAGE_SIZE));
+	VM_BUG_ON(!PAGE_ALIGNED(start));
+	VM_BUG_ON(!PAGE_ALIGNED(end));
 
 	remove_pagetable(start, end, false, altmap);
 }
@@ -1605,8 +1605,8 @@
 {
 	int err;
 
-	VM_BUG_ON(!IS_ALIGNED(start, PAGE_SIZE));
-	VM_BUG_ON(!IS_ALIGNED(end, PAGE_SIZE));
+	VM_BUG_ON(!PAGE_ALIGNED(start));
+	VM_BUG_ON(!PAGE_ALIGNED(end));
 
 	if (end - start < PAGES_PER_SECTION * sizeof(struct page))
 		err = vmemmap_populate_basepages(start, end, node, NULL);