changing include/asm-generic/pgtable.h for non-mmu

Some parts of include/asm-generic/pgtable.h are relevant to the non-MMU
architectures. To make it easier for those architectures to include this
header, I would like to ifdef the MMU-only parts behind CONFIG_MMU.

Without this, the header references a handful of functions that are not
defined on many non-MMU architectures. As an alternative approach, those
functions could of course be defined out on each such architecture.

Cc: David Howells <dhowells@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
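
For illustration only (not part of this patch): once the MMU-only parts are
guarded, a non-MMU architecture's <asm/pgtable.h> can simply pull in the
generic header to pick up the lazy-mode stubs without tripping over pte/pmd
helpers it never defines. The header guard and file below are hypothetical.

/* Hypothetical non-MMU <asm/pgtable.h> -- sketch, not from this patch. */
#ifndef _ASM_EXAMPLE_NOMMU_PGTABLE_H
#define _ASM_EXAMPLE_NOMMU_PGTABLE_H

/* No page tables to set up on a non-MMU target. */
#define pgtable_cache_init()	do { } while (0)

/*
 * Safe to include now: everything that needs MMU page table helpers
 * sits behind CONFIG_MMU, so only the generic stubs are pulled in.
 */
#include <asm-generic/pgtable.h>

#endif /* _ASM_EXAMPLE_NOMMU_PGTABLE_H */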

Authored by Greg Ungerer and committed by Linus Torvalds · 9535239f 73c59afc

+38 -35
include/asm-generic/pgtable.h
@@ -2,6 +2,7 @@
 #define _ASM_GENERIC_PGTABLE_H
 
 #ifndef __ASSEMBLY__
+#ifdef CONFIG_MMU
 
 #ifndef __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
 /*
@@ -134,41 +133,6 @@
 #endif
 
 /*
- * A facility to provide lazy MMU batching. This allows PTE updates and
- * page invalidations to be delayed until a call to leave lazy MMU mode
- * is issued. Some architectures may benefit from doing this, and it is
- * beneficial for both shadow and direct mode hypervisors, which may batch
- * the PTE updates which happen during this window. Note that using this
- * interface requires that read hazards be removed from the code. A read
- * hazard could result in the direct mode hypervisor case, since the actual
- * write to the page tables may not yet have taken place, so reads though
- * a raw PTE pointer after it has been modified are not guaranteed to be
- * up to date. This mode can only be entered and left under the protection of
- * the page table locks for all page tables which may be modified. In the UP
- * case, this is required so that preemption is disabled, and in the SMP case,
- * it must synchronize the delayed page table writes properly on other CPUs.
- */
-#ifndef __HAVE_ARCH_ENTER_LAZY_MMU_MODE
-#define arch_enter_lazy_mmu_mode()	do {} while (0)
-#define arch_leave_lazy_mmu_mode()	do {} while (0)
-#define arch_flush_lazy_mmu_mode()	do {} while (0)
-#endif
-
-/*
- * A facility to provide batching of the reload of page tables with the
- * actual context switch code for paravirtualized guests. By convention,
- * only one of the lazy modes (CPU, MMU) should be active at any given
- * time, entry should never be nested, and entry and exits should always
- * be paired. This is for sanity of maintaining and reasoning about the
- * kernel code.
- */
-#ifndef __HAVE_ARCH_ENTER_LAZY_CPU_MODE
-#define arch_enter_lazy_cpu_mode()	do {} while (0)
-#define arch_leave_lazy_cpu_mode()	do {} while (0)
-#define arch_flush_lazy_cpu_mode()	do {} while (0)
-#endif
-
-/*
  * When walking page tables, get the address of the next boundary,
  * or the end address of the range if that comes earlier. Although no
  * vma end wraps to 0, rounded up __boundary may wrap to 0 throughout.
@@ -199,6 +233,43 @@
 	}
 	return 0;
 }
+#endif /* CONFIG_MMU */
+
+/*
+ * A facility to provide lazy MMU batching. This allows PTE updates and
+ * page invalidations to be delayed until a call to leave lazy MMU mode
+ * is issued. Some architectures may benefit from doing this, and it is
+ * beneficial for both shadow and direct mode hypervisors, which may batch
+ * the PTE updates which happen during this window. Note that using this
+ * interface requires that read hazards be removed from the code. A read
+ * hazard could result in the direct mode hypervisor case, since the actual
+ * write to the page tables may not yet have taken place, so reads though
+ * a raw PTE pointer after it has been modified are not guaranteed to be
+ * up to date. This mode can only be entered and left under the protection of
+ * the page table locks for all page tables which may be modified. In the UP
+ * case, this is required so that preemption is disabled, and in the SMP case,
+ * it must synchronize the delayed page table writes properly on other CPUs.
+ */
+#ifndef __HAVE_ARCH_ENTER_LAZY_MMU_MODE
+#define arch_enter_lazy_mmu_mode()	do {} while (0)
+#define arch_leave_lazy_mmu_mode()	do {} while (0)
+#define arch_flush_lazy_mmu_mode()	do {} while (0)
+#endif
+
+/*
+ * A facility to provide batching of the reload of page tables with the
+ * actual context switch code for paravirtualized guests. By convention,
+ * only one of the lazy modes (CPU, MMU) should be active at any given
+ * time, entry should never be nested, and entry and exits should always
+ * be paired. This is for sanity of maintaining and reasoning about the
+ * kernel code.
+ */
+#ifndef __HAVE_ARCH_ENTER_LAZY_CPU_MODE
+#define arch_enter_lazy_cpu_mode()	do {} while (0)
+#define arch_leave_lazy_cpu_mode()	do {} while (0)
+#define arch_flush_lazy_cpu_mode()	do {} while (0)
+#endif
+
 #endif /* !__ASSEMBLY__ */
 
 #endif /* _ASM_GENERIC_PGTABLE_H */
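
For context, the lazy MMU hooks being moved outside the CONFIG_MMU guard are
meant to bracket batched PTE updates under the page table lock, roughly as in
the sketch below. The function and its arguments are hypothetical; only
arch_enter/leave_lazy_mmu_mode() and set_pte_at() are the real interfaces.

/* Hypothetical caller -- shows the intended bracketing, per the comment above. */
static void example_set_ptes(struct mm_struct *mm, unsigned long addr,
			     pte_t *ptep, pte_t *src, int nr)
{
	int i;

	spin_lock(&mm->page_table_lock);	/* lazy mode requires the PT lock */
	arch_enter_lazy_mmu_mode();
	for (i = 0; i < nr; i++, addr += PAGE_SIZE)
		set_pte_at(mm, addr, ptep + i, src[i]);
	arch_leave_lazy_mmu_mode();		/* batched updates flushed here */
	spin_unlock(&mm->page_table_lock);
}

On architectures without __HAVE_ARCH_ENTER_LAZY_MMU_MODE the bracketing calls
compile away to nothing, which is why plain empty stubs are a safe default for
non-MMU builds as well.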