Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'akpm' (patches from Andrew)

Merge misc updates from Andrew Morton:

- large KASAN update to use arm's "software tag-based mode"

- a few misc things

- sh updates

- ocfs2 updates

- just about all of MM

* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (167 commits)
kernel/fork.c: mark 'stack_vm_area' with __maybe_unused
memcg, oom: notify on oom killer invocation from the charge path
mm, swap: fix swapoff with KSM pages
include/linux/gfp.h: fix typo
mm/hmm: fix memremap.h, move dev_page_fault_t callback to hmm
hugetlbfs: Use i_mmap_rwsem to fix page fault/truncate race
hugetlbfs: use i_mmap_rwsem for more pmd sharing synchronization
memory_hotplug: add missing newlines to debugging output
mm: remove __hugepage_set_anon_rmap()
include/linux/vmstat.h: remove unused page state adjustment macro
mm/page_alloc.c: allow error injection
mm: migrate: drop unused argument of migrate_page_move_mapping()
blkdev: avoid migration stalls for blkdev pages
mm: migrate: provide buffer_migrate_page_norefs()
mm: migrate: move migrate_page_lock_buffers()
mm: migrate: lock buffers before migrate_page_move_mapping()
mm: migration: factor out code to compute expected number of page references
mm, page_alloc: enable pcpu_drain with zone capability
kmemleak: add config to select auto scan
mm/page_alloc.c: don't call kasan_free_pages() at deferred mem init
...

+5402 -4906
+32
Documentation/ABI/testing/sysfs-block-zram
···
  		The backing_dev file is read-write and set up backing
  		device for zram to write incompressible pages.
  		For using, user should enable CONFIG_ZRAM_WRITEBACK.
+
+ What:		/sys/block/zram<id>/idle
+ Date:		November 2018
+ Contact:	Minchan Kim <minchan@kernel.org>
+ Description:
+ 		The idle file is write-only and marks zram slots as idle.
+ 		If the system has debugfs mounted, the user can see which
+ 		slots are idle via /sys/kernel/debug/zram/zram<id>/block_state.
+
+ What:		/sys/block/zram<id>/writeback
+ Date:		November 2018
+ Contact:	Minchan Kim <minchan@kernel.org>
+ Description:
+ 		The writeback file is write-only and triggers idle and/or
+ 		huge page writeback to the backing device.
+
+ What:		/sys/block/zram<id>/bd_stat
+ Date:		November 2018
+ Contact:	Minchan Kim <minchan@kernel.org>
+ Description:
+ 		The bd_stat file is read-only and represents the backing
+ 		device's statistics (bd_count, bd_reads, bd_writes) in a
+ 		format similar to the block layer statistics file format.
+
+ What:		/sys/block/zram<id>/writeback_limit
+ Date:		November 2018
+ Contact:	Minchan Kim <minchan@kernel.org>
+ Description:
+ 		The writeback_limit file is read-write and specifies the
+ 		maximum amount of writeback zram can do. The limit can be
+ 		changed at run time, and "0" means the limit is disabled.
+ 		No limit is the initial state.
+72 -8
Documentation/blockdev/zram.txt
···
  mem_used_max      WO    reset the `mem_used_max' counter (see later)
  mem_limit         WO    specifies the maximum amount of memory ZRAM can use
                          to store the compressed data
+ writeback_limit   WO    specifies the maximum amount of write IO zram can
+                         write out to the backing device in 4KB units
  max_comp_streams  RW    the number of possible concurrent compress operations
  comp_algorithm    RW    show and change the compression algorithm
  compact           WO    trigger memory compaction
  debug_stat        RO    this file is used for zram debugging purposes
  backing_dev       RW    set up backend storage for zram to write out
+ idle              WO    mark allocated slot as idle

  User space is advised to use the following files to read the device statistics.
···
  pages_compacted  the number of pages freed during compaction
  huge_pages       the number of incompressible pages

+ File /sys/block/zram<id>/bd_stat
+
+ The stat file represents the device's backing device statistics. It consists
+ of a single line of text with the following stats separated by whitespace:
+  bd_count   size of data written to the backing device.
+             Unit: 4K bytes
+  bd_reads   the number of reads from the backing device
+             Unit: 4K bytes
+  bd_writes  the number of writes to the backing device
+             Unit: 4K bytes
+
  9) Deactivate:
  	swapoff /dev/zram0
  	umount /dev/zram1
···
  = writeback

- With incompressible pages, there is no memory saving with zram.
- Instead, with CONFIG_ZRAM_WRITEBACK, zram can write incompressible page
+ With CONFIG_ZRAM_WRITEBACK, zram can write idle/incompressible pages
  to backing storage rather than keeping it in memory.
- User should set up backing device via /sys/block/zramX/backing_dev
- before disksize setting.
+ To use the feature, the admin should set up a backing device via
+
+	"echo /dev/sda5 > /sys/block/zramX/backing_dev"
+
+ before setting the disksize. Only partitions are supported at the moment.
+ To enable incompressible page writeback, the admin can do
+
+	"echo huge > /sys/block/zramX/writeback"
+
+ To use idle page writeback, the user first needs to declare zram pages
+ as idle:
+
+	"echo all > /sys/block/zramX/idle"
+
+ From now on, all pages on zram are marked idle. The idle mark is removed
+ once someone requests access to the block.
+ IOW, unless there is an access request, those pages remain idle.
+
+ The admin can then request writeback of those idle pages at the right time via
+
+	"echo idle > /sys/block/zramX/writeback"
+
+ With this command, zram writes back idle pages from memory to the storage.
+
+ If there is a lot of write IO to a flash device, flash wearout can become a
+ problem, so the admin may need to limit writes to guarantee storage health
+ for the entire product life.
+ To address this concern, zram supports "writeback_limit".
+ The default value of "writeback_limit" is 0, meaning writeback is not
+ limited. The writeback count over a certain period can be read from the
+ 3rd column of /sys/block/zram0/bd_stat.
+
+ If the admin wants to limit writeback to, e.g., 400MB per day, it can be
+ done like below:
+
+	MB_SHIFT=20
+	FOURK_SHIFT=12
+	echo $((400<<MB_SHIFT>>FOURK_SHIFT)) > \
+		/sys/block/zram0/writeback_limit
+
+ To allow further writes again:
+
+	echo 0 > /sys/block/zram0/writeback_limit
+
+ To see the remaining writeback budget since it was set:
+
+	cat /sys/block/zram0/writeback_limit
+
+ The writeback_limit count resets whenever you reset zram (e.g., system
+ reboot, echo 1 > /sys/block/zramX/reset), so keeping track of how much
+ writeback happened before the reset, in order to allocate extra writeback
+ budget in the next setting, is the user's job.

  = memory tracking
···
  If you enable the feature, you could see block state via
  /sys/kernel/debug/zram/zram0/block_state". The output is as follows,

- 	  300    75.033841 .wh
- 	  301    63.806904 s..
- 	  302    63.806919 ..h
+ 	  300    75.033841 .wh.
+ 	  301    63.806904 s...
+ 	  302    63.806919 ..hi

  First column is zram's block index.
  Second column is access time since the system was booted
  Third column is state of the block.
  (s: same page
  w: written page to backing store
- h: huge page)
+ h: huge page
+ i: idle page)

  First line of above example says 300th block is accessed at 75.033841sec
  and the block's state is huge so it is written back to the backing
+137 -93
Documentation/dev-tools/kasan.rst
···
  Overview
  --------

- KernelAddressSANitizer (KASAN) is a dynamic memory error detector. It provides
- a fast and comprehensive solution for finding use-after-free and out-of-bounds
- bugs.
+ KernelAddressSANitizer (KASAN) is a dynamic memory error detector designed to
+ find out-of-bounds and use-after-free bugs. KASAN has two modes: generic KASAN
+ (similar to userspace ASan) and software tag-based KASAN (similar to userspace
+ HWASan).

- KASAN uses compile-time instrumentation for checking every memory access,
- therefore you will need a GCC version 4.9.2 or later. GCC 5.0 or later is
- required for detection of out-of-bounds accesses to stack or global variables.
+ KASAN uses compile-time instrumentation to insert validity checks before every
+ memory access, and therefore requires a compiler version that supports that.

- Currently KASAN is supported only for the x86_64 and arm64 architectures.
+ Generic KASAN is supported in both GCC and Clang. With GCC it requires version
+ 4.9.2 or later for basic support, and version 5.0 or later for detection of
+ out-of-bounds accesses for stack and global variables and for inline
+ instrumentation mode (see the Usage section). With Clang it requires version
+ 7.0.0 or later, and it doesn't support detection of out-of-bounds accesses for
+ global variables yet.
+
+ Tag-based KASAN is only supported in Clang and requires version 7.0.0 or later.
+
+ Currently generic KASAN is supported for the x86_64, arm64, xtensa and s390
+ architectures, and tag-based KASAN is supported only for arm64.

  Usage
  -----
···
  	CONFIG_KASAN = y

- and choose between CONFIG_KASAN_OUTLINE and CONFIG_KASAN_INLINE. Outline and
- inline are compiler instrumentation types. The former produces smaller binary
- the latter is 1.1 - 2 times faster. Inline instrumentation requires a GCC
- version 5.0 or later.
+ and choose between CONFIG_KASAN_GENERIC (to enable generic KASAN) and
+ CONFIG_KASAN_SW_TAGS (to enable software tag-based KASAN).

- KASAN works with both SLUB and SLAB memory allocators.
+ You also need to choose between CONFIG_KASAN_OUTLINE and CONFIG_KASAN_INLINE.
+ Outline and inline are compiler instrumentation types. The former produces a
+ smaller binary while the latter is 1.1 - 2 times faster.
+
+ Both KASAN modes work with both SLUB and SLAB memory allocators.
  For better bug detection and nicer reporting, enable CONFIG_STACKTRACE.

  To disable instrumentation for specific files or directories, add a line
···
  Error reports
  ~~~~~~~~~~~~~

- A typical out of bounds access report looks like this::
+ A typical out-of-bounds access generic KASAN report looks like this::

  	==================================================================
- 	BUG: AddressSanitizer: out of bounds access in kmalloc_oob_right+0x65/0x75 [test_kasan] at addr ffff8800693bc5d3
- 	Write of size 1 by task modprobe/1689
- 	=============================================================================
- 	BUG kmalloc-128 (Not tainted): kasan error
- 	-----------------------------------------------------------------------------
+ 	BUG: KASAN: slab-out-of-bounds in kmalloc_oob_right+0xa8/0xbc [test_kasan]
+ 	Write of size 1 at addr ffff8801f44ec37b by task insmod/2760

- 	Disabling lock debugging due to kernel taint
- 	INFO: Allocated in kmalloc_oob_right+0x3d/0x75 [test_kasan] age=0 cpu=0 pid=1689
- 	 __slab_alloc+0x4b4/0x4f0
- 	 kmem_cache_alloc_trace+0x10b/0x190
- 	 kmalloc_oob_right+0x3d/0x75 [test_kasan]
- 	 init_module+0x9/0x47 [test_kasan]
- 	 do_one_initcall+0x99/0x200
- 	 load_module+0x2cb3/0x3b20
- 	 SyS_finit_module+0x76/0x80
- 	 system_call_fastpath+0x12/0x17
- 	INFO: Slab 0xffffea0001a4ef00 objects=17 used=7 fp=0xffff8800693bd728 flags=0x100000000004080
- 	INFO: Object 0xffff8800693bc558 @offset=1368 fp=0xffff8800693bc720
-
- 	Bytes b4 ffff8800693bc548: 00 00 00 00 00 00 00 00 5a 5a 5a 5a 5a 5a 5a 5a  ........ZZZZZZZZ
- 	Object ffff8800693bc558: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
- 	Object ffff8800693bc568: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
- 	Object ffff8800693bc578: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
- 	Object ffff8800693bc588: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
- 	Object ffff8800693bc598: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
- 	Object ffff8800693bc5a8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
- 	Object ffff8800693bc5b8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
- 	Object ffff8800693bc5c8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b a5        kkkkkkkkkkkkkkk.
- 	Redzone ffff8800693bc5d8: cc cc cc cc cc cc cc cc                          ........
- 	Padding ffff8800693bc718: 5a 5a 5a 5a 5a 5a 5a 5a                          ZZZZZZZZ
- 	CPU: 0 PID: 1689 Comm: modprobe Tainted: G    B          3.18.0-rc1-mm1+ #98
- 	Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.7.5-0-ge51488c-20140602_164612-nilsson.home.kraxel.org 04/01/2014
- 	 ffff8800693bc000 0000000000000000 ffff8800693bc558 ffff88006923bb78
- 	 ffffffff81cc68ae 00000000000000f3 ffff88006d407600 ffff88006923bba8
- 	 ffffffff811fd848 ffff88006d407600 ffffea0001a4ef00 ffff8800693bc558
+ 	CPU: 1 PID: 2760 Comm: insmod Not tainted 4.19.0-rc3+ #698
+ 	Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
  	Call Trace:
- 	 [<ffffffff81cc68ae>] dump_stack+0x46/0x58
- 	 [<ffffffff811fd848>] print_trailer+0xf8/0x160
- 	 [<ffffffffa00026a7>] ? kmem_cache_oob+0xc3/0xc3 [test_kasan]
- 	 [<ffffffff811ff0f5>] object_err+0x35/0x40
- 	 [<ffffffffa0002065>] ? kmalloc_oob_right+0x65/0x75 [test_kasan]
- 	 [<ffffffff8120b9fa>] kasan_report_error+0x38a/0x3f0
- 	 [<ffffffff8120a79f>] ? kasan_poison_shadow+0x2f/0x40
- 	 [<ffffffff8120b344>] ? kasan_unpoison_shadow+0x14/0x40
- 	 [<ffffffff8120a79f>] ? kasan_poison_shadow+0x2f/0x40
- 	 [<ffffffffa00026a7>] ? kmem_cache_oob+0xc3/0xc3 [test_kasan]
- 	 [<ffffffff8120a995>] __asan_store1+0x75/0xb0
- 	 [<ffffffffa0002601>] ? kmem_cache_oob+0x1d/0xc3 [test_kasan]
- 	 [<ffffffffa0002065>] ? kmalloc_oob_right+0x65/0x75 [test_kasan]
- 	 [<ffffffffa0002065>] kmalloc_oob_right+0x65/0x75 [test_kasan]
- 	 [<ffffffffa00026b0>] init_module+0x9/0x47 [test_kasan]
- 	 [<ffffffff810002d9>] do_one_initcall+0x99/0x200
- 	 [<ffffffff811e4e5c>] ? __vunmap+0xec/0x160
- 	 [<ffffffff81114f63>] load_module+0x2cb3/0x3b20
- 	 [<ffffffff8110fd70>] ? m_show+0x240/0x240
- 	 [<ffffffff81115f06>] SyS_finit_module+0x76/0x80
- 	 [<ffffffff81cd3129>] system_call_fastpath+0x12/0x17
+ 	 dump_stack+0x94/0xd8
+ 	 print_address_description+0x73/0x280
+ 	 kasan_report+0x144/0x187
+ 	 __asan_report_store1_noabort+0x17/0x20
+ 	 kmalloc_oob_right+0xa8/0xbc [test_kasan]
+ 	 kmalloc_tests_init+0x16/0x700 [test_kasan]
+ 	 do_one_initcall+0xa5/0x3ae
+ 	 do_init_module+0x1b6/0x547
+ 	 load_module+0x75df/0x8070
+ 	 __do_sys_init_module+0x1c6/0x200
+ 	 __x64_sys_init_module+0x6e/0xb0
+ 	 do_syscall_64+0x9f/0x2c0
+ 	 entry_SYSCALL_64_after_hwframe+0x44/0xa9
+ 	RIP: 0033:0x7f96443109da
+ 	RSP: 002b:00007ffcf0b51b08 EFLAGS: 00000202 ORIG_RAX: 00000000000000af
+ 	RAX: ffffffffffffffda RBX: 000055dc3ee521a0 RCX: 00007f96443109da
+ 	RDX: 00007f96445cff88 RSI: 0000000000057a50 RDI: 00007f9644992000
+ 	RBP: 000055dc3ee510b0 R08: 0000000000000003 R09: 0000000000000000
+ 	R10: 00007f964430cd0a R11: 0000000000000202 R12: 00007f96445cff88
+ 	R13: 000055dc3ee51090 R14: 0000000000000000 R15: 0000000000000000
+
+ 	Allocated by task 2760:
+ 	 save_stack+0x43/0xd0
+ 	 kasan_kmalloc+0xa7/0xd0
+ 	 kmem_cache_alloc_trace+0xe1/0x1b0
+ 	 kmalloc_oob_right+0x56/0xbc [test_kasan]
+ 	 kmalloc_tests_init+0x16/0x700 [test_kasan]
+ 	 do_one_initcall+0xa5/0x3ae
+ 	 do_init_module+0x1b6/0x547
+ 	 load_module+0x75df/0x8070
+ 	 __do_sys_init_module+0x1c6/0x200
+ 	 __x64_sys_init_module+0x6e/0xb0
+ 	 do_syscall_64+0x9f/0x2c0
+ 	 entry_SYSCALL_64_after_hwframe+0x44/0xa9
+
+ 	Freed by task 815:
+ 	 save_stack+0x43/0xd0
+ 	 __kasan_slab_free+0x135/0x190
+ 	 kasan_slab_free+0xe/0x10
+ 	 kfree+0x93/0x1a0
+ 	 umh_complete+0x6a/0xa0
+ 	 call_usermodehelper_exec_async+0x4c3/0x640
+ 	 ret_from_fork+0x35/0x40
+
+ 	The buggy address belongs to the object at ffff8801f44ec300
+ 	 which belongs to the cache kmalloc-128 of size 128
+ 	The buggy address is located 123 bytes inside of
+ 	 128-byte region [ffff8801f44ec300, ffff8801f44ec380)
+ 	The buggy address belongs to the page:
+ 	page:ffffea0007d13b00 count:1 mapcount:0 mapping:ffff8801f7001640 index:0x0
+ 	flags: 0x200000000000100(slab)
+ 	raw: 0200000000000100 ffffea0007d11dc0 0000001a0000001a ffff8801f7001640
+ 	raw: 0000000000000000 0000000080150015 00000001ffffffff 0000000000000000
+ 	page dumped because: kasan: bad access detected
+
  	Memory state around the buggy address:
- 	 ffff8800693bc300: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
- 	 ffff8800693bc380: fc fc 00 00 00 00 00 00 00 00 00 00 00 00 00 fc
- 	 ffff8800693bc400: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
- 	 ffff8800693bc480: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
- 	 ffff8800693bc500: fc fc fc fc fc fc fc fc fc fc fc 00 00 00 00 00
- 	>ffff8800693bc580: 00 00 00 00 00 00 00 00 00 00 03 fc fc fc fc fc
- 	                                 ^
- 	 ffff8800693bc600: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
- 	 ffff8800693bc680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
- 	 ffff8800693bc700: fc fc fc fc fb fb fb fb fb fb fb fb fb fb fb fb
- 	 ffff8800693bc780: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
- 	 ffff8800693bc800: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
+ 	 ffff8801f44ec200: fc fc fc fc fc fc fc fc fb fb fb fb fb fb fb fb
+ 	 ffff8801f44ec280: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
+ 	>ffff8801f44ec300: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 03
+ 	                                                                ^
+ 	 ffff8801f44ec380: fc fc fc fc fc fc fc fc fb fb fb fb fb fb fb fb
+ 	 ffff8801f44ec400: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
  	==================================================================

- The header of the report discribe what kind of bug happened and what kind of
- access caused it. It's followed by the description of the accessed slub object
- (see 'SLUB Debug output' section in Documentation/vm/slub.rst for details) and
- the description of the accessed memory page.
+ The header of the report provides a short summary of what kind of bug happened
+ and what kind of access caused it. It's followed by a stack trace of the bad
+ access, a stack trace of where the accessed memory was allocated (in case the
+ bad access happens on a slab object), and a stack trace of where the object
+ was freed (in case of a use-after-free bug report). Next comes a description
+ of the accessed slab object and information about the accessed memory page.

  In the last section the report shows memory state around the accessed address.
  Reading this part requires some understanding of how KASAN works.
···
  In the report above the arrows point to the shadow byte 03, which means that
  the accessed address is partially accessible.

+ For tag-based KASAN this last report section shows the memory tags around the
+ accessed address (see the Implementation details section).
+

  Implementation details
  ----------------------

+ Generic KASAN
+ ~~~~~~~~~~~~~
+
  From a high level, our approach to memory error detection is similar to that
  of kmemcheck: use shadow memory to record whether each byte of memory is safe
- to access, and use compile-time instrumentation to check shadow memory on each
- memory access.
+ to access, and use compile-time instrumentation to insert checks of shadow
+ memory on each memory access.

- AddressSanitizer dedicates 1/8 of kernel memory to its shadow memory
- (e.g. 16TB to cover 128TB on x86_64) and uses direct mapping with a scale and
- offset to translate a memory address to its corresponding shadow address.
+ Generic KASAN dedicates 1/8th of kernel memory to its shadow memory (e.g. 16TB
+ to cover 128TB on x86_64) and uses direct mapping with a scale and offset to
+ translate a memory address to its corresponding shadow address.

  Here is the function which translates an address to its corresponding shadow
  address::
···
  where ``KASAN_SHADOW_SCALE_SHIFT = 3``.

- Compile-time instrumentation used for checking memory accesses. Compiler inserts
- function calls (__asan_load*(addr), __asan_store*(addr)) before each memory
- access of size 1, 2, 4, 8 or 16. These functions check whether memory access is
- valid or not by checking corresponding shadow memory.
+ Compile-time instrumentation is used to insert memory access checks. The
+ compiler inserts function calls (__asan_load*(addr), __asan_store*(addr))
+ before each memory access of size 1, 2, 4, 8 or 16. These functions check
+ whether the memory access is valid or not by checking the corresponding
+ shadow memory.

  GCC 5.0 has possibility to perform inline instrumentation. Instead of making
  function calls GCC directly inserts the code to check the shadow memory.
  This option significantly enlarges kernel but it gives x1.1-x2 performance
  boost over outline instrumented kernel.
+
+ Software tag-based KASAN
+ ~~~~~~~~~~~~~~~~~~~~~~~~
+
+ Tag-based KASAN uses the Top Byte Ignore (TBI) feature of modern arm64 CPUs to
+ store a pointer tag in the top byte of kernel pointers. Like generic KASAN it
+ uses shadow memory to store memory tags associated with each 16-byte memory
+ cell (therefore it dedicates 1/16th of the kernel memory for shadow memory).
+
+ On each memory allocation tag-based KASAN generates a random tag, tags the
+ allocated memory with this tag, and embeds this tag into the returned pointer.
+ Software tag-based KASAN uses compile-time instrumentation to insert checks
+ before each memory access. These checks make sure that the tag of the memory
+ that is being accessed is equal to the tag of the pointer that is used to
+ access this memory. In case of a tag mismatch tag-based KASAN prints a bug
+ report.
+
+ Software tag-based KASAN also has two instrumentation modes (outline, which
+ emits callbacks to check memory accesses; and inline, which performs the
+ shadow memory checks inline). With the outline instrumentation mode, a bug
+ report is simply printed from the function that performs the access check.
+ With inline instrumentation a brk instruction is emitted by the compiler, and
+ a dedicated brk handler is used to print bug reports.
+
+ A potential expansion of this mode is a hardware tag-based mode, which would
+ use hardware memory tagging support instead of compiler instrumentation and
+ manual shadow memory manipulation.
+9 -1
Documentation/filesystems/proc.txt
···
  VmSwap:        0 kB
  HugetlbPages:  0 kB
  CoreDumping:   0
+ THP_enabled:   1
  Threads:       1
  SigQ:          0/28578
  SigPnd:        0000000000000000
···
  HugetlbPages                size of hugetlb memory portions
  CoreDumping                 process's memory is currently being dumped
                              (killing the process may lead to a corrupted core)
+ THP_enabled                 process is allowed to use THP (returns 0 when
+                             PR_SET_THP_DISABLE is set on the process)
  Threads                     number of threads
  SigQ                        number of signals queued/max. number for queue
  SigPnd                      bitmap of pending signals for the thread
···
  KernelPageSize:        4 kB
  MMUPageSize:           4 kB
  Locked:                0 kB
+ THPeligible:           0
  VmFlags: rd ex mr mw me dw

  the first of these lines shows the same information as is displayed for the
···
  "SwapPss" shows proportional swap share of this mapping. Unlike "Swap", this
  does not take into account swapped out page of underlying shmem objects.
  "Locked" indicates whether the mapping is locked in memory or not.
+ "THPeligible" indicates whether the mapping is eligible for THP pages - 1 if
+ true, 0 otherwise.

  "VmFlags" field deserves a separate description. This member represents the kernel
  flags associated with the particular virtual memory area in two letter encoded
···
  Note that there is no guarantee that every flag and associated mnemonic will
  be present in all further kernel releases. Things get changed, the flags may
- be vanished or the reverse -- new added.
+ be vanished or the reverse -- new added. Interpretation of their meaning
+ might change in future as well. So each consumer of these flags has to
+ follow each specific kernel version for the exact semantic.

  This file is only present if the CONFIG_MMU kernel configuration option is
  enabled.
+21
Documentation/sysctl/vm.txt
···
  - swappiness
  - user_reserve_kbytes
  - vfs_cache_pressure
+ - watermark_boost_factor
  - watermark_scale_factor
  - zone_reclaim_mode
···
  performance impact. Reclaim code needs to take various locks to find freeable
  directory and inode objects. With vfs_cache_pressure=1000, it will look for
  ten times more freeable objects than there are.
+
+ =============================================================
+
+ watermark_boost_factor:
+
+ This factor controls the level of reclaim when memory is being fragmented.
+ It defines the percentage of the high watermark of a zone that will be
+ reclaimed if pages of different mobility are being mixed within pageblocks.
+ The intent is that compaction has less work to do in the future and to
+ increase the success rate of future high-order allocations such as SLUB
+ allocations, THP and hugetlbfs pages.
+
+ To make it sensible with respect to the watermark_scale_factor parameter,
+ the unit is in fractions of 10,000. The default value of 15,000 means
+ that up to 150% of the high watermark will be reclaimed in the event of
+ a pageblock being mixed due to fragmentation. The level of reclaim is
+ determined by the number of fragmentation events that occurred in the
+ recent past. If this value is smaller than a pageblock then a pageblock's
+ worth of pages will be reclaimed (e.g. 2MB on 64-bit x86). A boost factor
+ of 0 will disable the feature.

  =============================================================
+1
arch/arm64/Kconfig
···
  	select HAVE_ARCH_JUMP_LABEL
  	select HAVE_ARCH_JUMP_LABEL_RELATIVE
  	select HAVE_ARCH_KASAN if !(ARM64_16K_PAGES && ARM64_VA_BITS_48)
+ 	select HAVE_ARCH_KASAN_SW_TAGS if HAVE_ARCH_KASAN
  	select HAVE_ARCH_KGDB
  	select HAVE_ARCH_MMAP_RND_BITS
  	select HAVE_ARCH_MMAP_RND_COMPAT_BITS if COMPAT
+10 -1
arch/arm64/Makefile
···
  TEXT_OFFSET := 0x00080000
  endif

+ ifeq ($(CONFIG_KASAN_SW_TAGS), y)
+ KASAN_SHADOW_SCALE_SHIFT := 4
+ else
+ KASAN_SHADOW_SCALE_SHIFT := 3
+ endif
+
+ KBUILD_CFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
+ KBUILD_CPPFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
+ KBUILD_AFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
+
  # KASAN_SHADOW_OFFSET = VA_START + (1 << (VA_BITS - KASAN_SHADOW_SCALE_SHIFT))
  #				 - (1 << (64 - KASAN_SHADOW_SCALE_SHIFT))
  # in 32-bit arithmetic
- KASAN_SHADOW_SCALE_SHIFT := 3
  KASAN_SHADOW_OFFSET := $(shell printf "0x%08x00000000\n" $$(( \
  	(0xffffffff & (-1 << ($(CONFIG_ARM64_VA_BITS) - 32))) \
  	+ (1 << ($(CONFIG_ARM64_VA_BITS) - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) \
+2
arch/arm64/include/asm/brk-imm.h
···
   * 0x400: for dynamic BRK instruction
   * 0x401: for compile time BRK instruction
   * 0x800: kernel-mode BUG() and WARN() traps
+  * 0x9xx: tag-based KASAN trap (allowed values 0x900 - 0x9ff)
   */
  #define FAULT_BRK_IMM			0x100
  #define KGDB_DYN_DBG_BRK_IMM		0x400
  #define KGDB_COMPILED_DBG_BRK_IMM	0x401
  #define BUG_BRK_IMM			0x800
+ #define KASAN_BRK_IMM			0x900

  #endif
+6 -2
arch/arm64/include/asm/kasan.h
···
  #ifndef __ASSEMBLY__

- #ifdef CONFIG_KASAN
-
  #include <linux/linkage.h>
  #include <asm/memory.h>
  #include <asm/pgtable-types.h>
+
+ #define arch_kasan_set_tag(addr, tag)	__tag_set(addr, tag)
+ #define arch_kasan_reset_tag(addr)	__tag_reset(addr)
+ #define arch_kasan_get_tag(addr)	__tag_get(addr)
+
+ #ifdef CONFIG_KASAN

  /*
   * KASAN_SHADOW_START: beginning of the kernel virtual addresses.
+34 -9
arch/arm64/include/asm/memory.h
···
  #endif

  /*
-  * KASAN requires 1/8th of the kernel virtual address space for the shadow
-  * region. KASAN can bloat the stack significantly, so double the (minimum)
-  * stack size when KASAN is in use, and then double it again if KASAN_EXTRA is
-  * on.
+  * Generic and tag-based KASAN require 1/8th and 1/16th of the kernel virtual
+  * address space for the shadow region respectively. They can bloat the stack
+  * significantly, so double the (minimum) stack size when they are in use.
   */
  #ifdef CONFIG_KASAN
- #define KASAN_SHADOW_SCALE_SHIFT 3
  #define KASAN_SHADOW_SIZE	(UL(1) << (VA_BITS - KASAN_SHADOW_SCALE_SHIFT))
  #ifdef CONFIG_KASAN_EXTRA
  #define KASAN_THREAD_SHIFT	2
···
  #define PHYS_PFN_OFFSET	(PHYS_OFFSET >> PAGE_SHIFT)

  /*
+  * When dealing with data aborts, watchpoints, or instruction traps we may end
+  * up with a tagged userland pointer. Clear the tag to get a sane pointer to
+  * pass on to access_ok(), for instance.
+  */
+ #define untagged_addr(addr)	\
+ 	((__typeof__(addr))sign_extend64((u64)(addr), 55))
+
+ #ifdef CONFIG_KASAN_SW_TAGS
+ #define __tag_shifted(tag)	((u64)(tag) << 56)
+ #define __tag_set(addr, tag)	(__typeof__(addr))( \
+ 		((u64)(addr) & ~__tag_shifted(0xff)) | __tag_shifted(tag))
+ #define __tag_reset(addr)	untagged_addr(addr)
+ #define __tag_get(addr)	(__u8)((u64)(addr) >> 56)
+ #else
+ #define __tag_set(addr, tag)	(addr)
+ #define __tag_reset(addr)	(addr)
+ #define __tag_get(addr)	0
+ #endif
+
+ /*
   * Physical vs virtual RAM address space conversion. These are
   * private definitions which should NOT be used outside memory.h
   * files. Use virt_to_phys/phys_to_virt/__pa/__va instead.
···
  #define __virt_to_pgoff(kaddr)	(((u64)(kaddr) & ~PAGE_OFFSET) / PAGE_SIZE * sizeof(struct page))
  #define __page_to_voff(kaddr)	(((u64)(kaddr) & ~VMEMMAP_START) * PAGE_SIZE / sizeof(struct page))

- #define page_to_virt(page)	((void *)((__page_to_voff(page)) | PAGE_OFFSET))
+ #define page_to_virt(page)	({					\
+ 	unsigned long __addr =						\
+ 		((__page_to_voff(page)) | PAGE_OFFSET);			\
+ 	__addr = __tag_set(__addr, page_kasan_tag(page));		\
+ 	((void *)__addr);						\
+ })
+
  #define virt_to_page(vaddr)	((struct page *)((__virt_to_pgoff(vaddr)) | VMEMMAP_START))

  #define _virt_addr_valid(kaddr)	pfn_valid((((u64)(kaddr) & ~PAGE_OFFSET) \
···
  #endif
  #endif

- #define _virt_addr_is_linear(kaddr)	(((u64)(kaddr)) >= PAGE_OFFSET)
- #define virt_addr_valid(kaddr)		(_virt_addr_is_linear(kaddr) && \
- 					 _virt_addr_valid(kaddr))
+ #define _virt_addr_is_linear(kaddr)	\
+ 	(__tag_reset((u64)(kaddr)) >= PAGE_OFFSET)
+ #define virt_addr_valid(kaddr)	\
+ 	(_virt_addr_is_linear(kaddr) && _virt_addr_valid(kaddr))

  #include <asm-generic/memory_model.h>
+1
arch/arm64/include/asm/pgtable-hwdef.h
··· 299 299 #define TCR_A1 (UL(1) << 22) 300 300 #define TCR_ASID16 (UL(1) << 36) 301 301 #define TCR_TBI0 (UL(1) << 37) 302 + #define TCR_TBI1 (UL(1) << 38) 302 303 #define TCR_HA (UL(1) << 39) 303 304 #define TCR_HD (UL(1) << 40) 304 305 #define TCR_NFD1 (UL(1) << 54)
-7
arch/arm64/include/asm/uaccess.h
··· 95 95 return ret; 96 96 } 97 97 98 - /* 99 - * When dealing with data aborts, watchpoints, or instruction traps we may end 100 - * up with a tagged userland pointer. Clear the tag to get a sane pointer to 101 - * pass on to access_ok(), for instance. 102 - */ 103 - #define untagged_addr(addr) sign_extend64(addr, 55) 104 - 105 98 #define access_ok(type, addr, size) __range_ok(addr, size) 106 99 #define user_addr_max get_fs 107 100
+60
arch/arm64/kernel/traps.c
··· 35 35 #include <linux/sizes.h> 36 36 #include <linux/syscalls.h> 37 37 #include <linux/mm_types.h> 38 + #include <linux/kasan.h> 38 39 39 40 #include <asm/atomic.h> 40 41 #include <asm/bug.h> ··· 970 969 .fn = bug_handler, 971 970 }; 972 971 972 + #ifdef CONFIG_KASAN_SW_TAGS 973 + 974 + #define KASAN_ESR_RECOVER 0x20 975 + #define KASAN_ESR_WRITE 0x10 976 + #define KASAN_ESR_SIZE_MASK 0x0f 977 + #define KASAN_ESR_SIZE(esr) (1 << ((esr) & KASAN_ESR_SIZE_MASK)) 978 + 979 + static int kasan_handler(struct pt_regs *regs, unsigned int esr) 980 + { 981 + bool recover = esr & KASAN_ESR_RECOVER; 982 + bool write = esr & KASAN_ESR_WRITE; 983 + size_t size = KASAN_ESR_SIZE(esr); 984 + u64 addr = regs->regs[0]; 985 + u64 pc = regs->pc; 986 + 987 + if (user_mode(regs)) 988 + return DBG_HOOK_ERROR; 989 + 990 + kasan_report(addr, size, write, pc); 991 + 992 + /* 993 + * The instrumentation allows to control whether we can proceed after 994 + * a crash was detected. This is done by passing the -recover flag to 995 + * the compiler. Disabling recovery allows to generate more compact 996 + * code. 997 + * 998 + * Unfortunately disabling recovery doesn't work for the kernel right 999 + * now. KASAN reporting is disabled in some contexts (for example when 1000 + * the allocator accesses slab object metadata; this is controlled by 1001 + * current->kasan_depth). All these accesses are detected by the tool, 1002 + * even though the reports for them are not printed. 1003 + * 1004 + * This is something that might be fixed at some point in the future. 
1005 + */ 1006 + if (!recover) 1007 + die("Oops - KASAN", regs, 0); 1008 + 1009 + /* If thread survives, skip over the brk instruction and continue: */ 1010 + arm64_skip_faulting_instruction(regs, AARCH64_INSN_SIZE); 1011 + return DBG_HOOK_HANDLED; 1012 + } 1013 + 1014 + #define KASAN_ESR_VAL (0xf2000000 | KASAN_BRK_IMM) 1015 + #define KASAN_ESR_MASK 0xffffff00 1016 + 1017 + static struct break_hook kasan_break_hook = { 1018 + .esr_val = KASAN_ESR_VAL, 1019 + .esr_mask = KASAN_ESR_MASK, 1020 + .fn = kasan_handler, 1021 + }; 1022 + #endif 1023 + 973 1024 /* 974 1025 * Initial handler for AArch64 BRK exceptions 975 1026 * This handler only used until debug_traps_init(). ··· 1029 976 int __init early_brk64(unsigned long addr, unsigned int esr, 1030 977 struct pt_regs *regs) 1031 978 { 979 + #ifdef CONFIG_KASAN_SW_TAGS 980 + if ((esr & KASAN_ESR_MASK) == KASAN_ESR_VAL) 981 + return kasan_handler(regs, esr) != DBG_HOOK_HANDLED; 982 + #endif 1032 983 return bug_handler(regs, esr) != DBG_HOOK_HANDLED; 1033 984 } 1034 985 ··· 1040 983 void __init trap_init(void) 1041 984 { 1042 985 register_break_hook(&bug_break_hook); 986 + #ifdef CONFIG_KASAN_SW_TAGS 987 + register_break_hook(&kasan_break_hook); 988 + #endif 1043 989 }
+22 -9
arch/arm64/mm/fault.c
··· 40 40 #include <asm/daifflags.h> 41 41 #include <asm/debug-monitors.h> 42 42 #include <asm/esr.h> 43 + #include <asm/kasan.h> 43 44 #include <asm/sysreg.h> 44 45 #include <asm/system_misc.h> 45 46 #include <asm/pgtable.h> ··· 133 132 data_abort_decode(esr); 134 133 } 135 134 135 + static inline bool is_ttbr0_addr(unsigned long addr) 136 + { 137 + /* entry assembly clears tags for TTBR0 addrs */ 138 + return addr < TASK_SIZE; 139 + } 140 + 141 + static inline bool is_ttbr1_addr(unsigned long addr) 142 + { 143 + /* TTBR1 addresses may have a tag if KASAN_SW_TAGS is in use */ 144 + return arch_kasan_reset_tag(addr) >= VA_START; 145 + } 146 + 136 147 /* 137 148 * Dump out the page tables associated with 'addr' in the currently active mm. 138 149 */ ··· 154 141 pgd_t *pgdp; 155 142 pgd_t pgd; 156 143 157 - if (addr < TASK_SIZE) { 144 + if (is_ttbr0_addr(addr)) { 158 145 /* TTBR0 */ 159 146 mm = current->active_mm; 160 147 if (mm == &init_mm) { ··· 162 149 addr); 163 150 return; 164 151 } 165 - } else if (addr >= VA_START) { 152 + } else if (is_ttbr1_addr(addr)) { 166 153 /* TTBR1 */ 167 154 mm = &init_mm; 168 155 } else { ··· 267 254 if (fsc_type == ESR_ELx_FSC_PERM) 268 255 return true; 269 256 270 - if (addr < TASK_SIZE && system_uses_ttbr0_pan()) 257 + if (is_ttbr0_addr(addr) && system_uses_ttbr0_pan()) 271 258 return fsc_type == ESR_ELx_FSC_FAULT && 272 259 (regs->pstate & PSR_PAN_BIT); 273 260 ··· 332 319 * type", so we ignore this wrinkle and just return the translation 333 320 * fault.) 
334 321 */ 335 - if (current->thread.fault_address >= TASK_SIZE) { 322 + if (!is_ttbr0_addr(current->thread.fault_address)) { 336 323 switch (ESR_ELx_EC(esr)) { 337 324 case ESR_ELx_EC_DABT_LOW: 338 325 /* ··· 468 455 mm_flags |= FAULT_FLAG_WRITE; 469 456 } 470 457 471 - if (addr < TASK_SIZE && is_el1_permission_fault(addr, esr, regs)) { 458 + if (is_ttbr0_addr(addr) && is_el1_permission_fault(addr, esr, regs)) { 472 459 /* regs->orig_addr_limit may be 0 if we entered from EL0 */ 473 460 if (regs->orig_addr_limit == KERNEL_DS) 474 461 die_kernel_fault("access to user memory with fs=KERNEL_DS", ··· 616 603 unsigned int esr, 617 604 struct pt_regs *regs) 618 605 { 619 - if (addr < TASK_SIZE) 606 + if (is_ttbr0_addr(addr)) 620 607 return do_page_fault(addr, esr, regs); 621 608 622 609 do_bad_area(addr, esr, regs); ··· 771 758 * re-enabled IRQs. If the address is a kernel address, apply 772 759 * BP hardening prior to enabling IRQs and pre-emption. 773 760 */ 774 - if (addr > TASK_SIZE) 761 + if (!is_ttbr0_addr(addr)) 775 762 arm64_apply_bp_hardening(); 776 763 777 764 local_daif_restore(DAIF_PROCCTX); ··· 784 771 struct pt_regs *regs) 785 772 { 786 773 if (user_mode(regs)) { 787 - if (instruction_pointer(regs) > TASK_SIZE) 774 + if (!is_ttbr0_addr(instruction_pointer(regs))) 788 775 arm64_apply_bp_hardening(); 789 776 local_daif_restore(DAIF_PROCCTX); 790 777 } ··· 838 825 if (interrupts_enabled(regs)) 839 826 trace_hardirqs_off(); 840 827 841 - if (user_mode(regs) && instruction_pointer(regs) > TASK_SIZE) 828 + if (user_mode(regs) && !is_ttbr0_addr(instruction_pointer(regs))) 842 829 arm64_apply_bp_hardening(); 843 830 844 831 if (!inf->fn(addr, esr, regs)) {
+37 -20
arch/arm64/mm/kasan_init.c
··· 39 39 { 40 40 void *p = memblock_alloc_try_nid(PAGE_SIZE, PAGE_SIZE, 41 41 __pa(MAX_DMA_ADDRESS), 42 - MEMBLOCK_ALLOC_ACCESSIBLE, node); 42 + MEMBLOCK_ALLOC_KASAN, node); 43 + return __pa(p); 44 + } 45 + 46 + static phys_addr_t __init kasan_alloc_raw_page(int node) 47 + { 48 + void *p = memblock_alloc_try_nid_raw(PAGE_SIZE, PAGE_SIZE, 49 + __pa(MAX_DMA_ADDRESS), 50 + MEMBLOCK_ALLOC_KASAN, node); 43 51 return __pa(p); 44 52 } 45 53 ··· 55 47 bool early) 56 48 { 57 49 if (pmd_none(READ_ONCE(*pmdp))) { 58 - phys_addr_t pte_phys = early ? __pa_symbol(kasan_zero_pte) 59 - : kasan_alloc_zeroed_page(node); 50 + phys_addr_t pte_phys = early ? 51 + __pa_symbol(kasan_early_shadow_pte) 52 + : kasan_alloc_zeroed_page(node); 60 53 __pmd_populate(pmdp, pte_phys, PMD_TYPE_TABLE); 61 54 } 62 55 ··· 69 60 bool early) 70 61 { 71 62 if (pud_none(READ_ONCE(*pudp))) { 72 - phys_addr_t pmd_phys = early ? __pa_symbol(kasan_zero_pmd) 73 - : kasan_alloc_zeroed_page(node); 63 + phys_addr_t pmd_phys = early ? 64 + __pa_symbol(kasan_early_shadow_pmd) 65 + : kasan_alloc_zeroed_page(node); 74 66 __pud_populate(pudp, pmd_phys, PMD_TYPE_TABLE); 75 67 } 76 68 ··· 82 72 bool early) 83 73 { 84 74 if (pgd_none(READ_ONCE(*pgdp))) { 85 - phys_addr_t pud_phys = early ? __pa_symbol(kasan_zero_pud) 86 - : kasan_alloc_zeroed_page(node); 75 + phys_addr_t pud_phys = early ? 76 + __pa_symbol(kasan_early_shadow_pud) 77 + : kasan_alloc_zeroed_page(node); 87 78 __pgd_populate(pgdp, pud_phys, PMD_TYPE_TABLE); 88 79 } 89 80 ··· 98 87 pte_t *ptep = kasan_pte_offset(pmdp, addr, node, early); 99 88 100 89 do { 101 - phys_addr_t page_phys = early ? __pa_symbol(kasan_zero_page) 102 - : kasan_alloc_zeroed_page(node); 90 + phys_addr_t page_phys = early ? 
91 + __pa_symbol(kasan_early_shadow_page) 92 + : kasan_alloc_raw_page(node); 93 + if (!early) 94 + memset(__va(page_phys), KASAN_SHADOW_INIT, PAGE_SIZE); 103 95 next = addr + PAGE_SIZE; 104 96 set_pte(ptep, pfn_pte(__phys_to_pfn(page_phys), PAGE_KERNEL)); 105 97 } while (ptep++, addr = next, addr != end && pte_none(READ_ONCE(*ptep))); ··· 219 205 kasan_map_populate(kimg_shadow_start, kimg_shadow_end, 220 206 early_pfn_to_nid(virt_to_pfn(lm_alias(_text)))); 221 207 222 - kasan_populate_zero_shadow((void *)KASAN_SHADOW_START, 223 - (void *)mod_shadow_start); 224 - kasan_populate_zero_shadow((void *)kimg_shadow_end, 225 - kasan_mem_to_shadow((void *)PAGE_OFFSET)); 208 + kasan_populate_early_shadow((void *)KASAN_SHADOW_START, 209 + (void *)mod_shadow_start); 210 + kasan_populate_early_shadow((void *)kimg_shadow_end, 211 + kasan_mem_to_shadow((void *)PAGE_OFFSET)); 226 212 227 213 if (kimg_shadow_start > mod_shadow_end) 228 - kasan_populate_zero_shadow((void *)mod_shadow_end, 229 - (void *)kimg_shadow_start); 214 + kasan_populate_early_shadow((void *)mod_shadow_end, 215 + (void *)kimg_shadow_start); 230 216 231 217 for_each_memblock(memory, reg) { 232 218 void *start = (void *)__phys_to_virt(reg->base); ··· 241 227 } 242 228 243 229 /* 244 - * KAsan may reuse the contents of kasan_zero_pte directly, so we 245 - * should make sure that it maps the zero page read-only. 230 + * KAsan may reuse the contents of kasan_early_shadow_pte directly, 231 + * so we should make sure that it maps the zero page read-only. 
246 232 */ 247 233 for (i = 0; i < PTRS_PER_PTE; i++) 248 - set_pte(&kasan_zero_pte[i], 249 - pfn_pte(sym_to_pfn(kasan_zero_page), PAGE_KERNEL_RO)); 234 + set_pte(&kasan_early_shadow_pte[i], 235 + pfn_pte(sym_to_pfn(kasan_early_shadow_page), 236 + PAGE_KERNEL_RO)); 250 237 251 - memset(kasan_zero_page, 0, PAGE_SIZE); 238 + memset(kasan_early_shadow_page, KASAN_SHADOW_INIT, PAGE_SIZE); 252 239 cpu_replace_ttbr1(lm_alias(swapper_pg_dir)); 240 + 241 + kasan_init_tags(); 253 242 254 243 /* At this point kasan is fully initialized. Enable error messages */ 255 244 init_task.kasan_depth = 0;
+7 -6
arch/arm64/mm/mmu.c
··· 1003 1003 1004 1004 pmd = READ_ONCE(*pmdp); 1005 1005 1006 - if (!pmd_present(pmd)) 1007 - return 1; 1008 1006 if (!pmd_table(pmd)) { 1009 - VM_WARN_ON(!pmd_table(pmd)); 1007 + VM_WARN_ON(1); 1010 1008 return 1; 1011 1009 } 1012 1010 ··· 1024 1026 1025 1027 pud = READ_ONCE(*pudp); 1026 1028 1027 - if (!pud_present(pud)) 1028 - return 1; 1029 1029 if (!pud_table(pud)) { 1030 - VM_WARN_ON(!pud_table(pud)); 1030 + VM_WARN_ON(1); 1031 1031 return 1; 1032 1032 } 1033 1033 ··· 1041 1045 __flush_tlb_kernel_pgtable(addr); 1042 1046 pmd_free(NULL, table); 1043 1047 return 1; 1048 + } 1049 + 1050 + int p4d_free_pud_page(p4d_t *p4d, unsigned long addr) 1051 + { 1052 + return 0; /* Don't attempt a block mapping */ 1044 1053 } 1045 1054 1046 1055 #ifdef CONFIG_MEMORY_HOTPLUG
+7 -1
arch/arm64/mm/proc.S
··· 47 47 /* PTWs cacheable, inner/outer WBWA */ 48 48 #define TCR_CACHE_FLAGS TCR_IRGN_WBWA | TCR_ORGN_WBWA 49 49 50 + #ifdef CONFIG_KASAN_SW_TAGS 51 + #define TCR_KASAN_FLAGS TCR_TBI1 52 + #else 53 + #define TCR_KASAN_FLAGS 0 54 + #endif 55 + 50 56 #define MAIR(attr, mt) ((attr) << ((mt) * 8)) 51 57 52 58 /* ··· 455 449 */ 456 450 ldr x10, =TCR_TxSZ(VA_BITS) | TCR_CACHE_FLAGS | TCR_SMP_FLAGS | \ 457 451 TCR_TG_FLAGS | TCR_KASLR_FLAGS | TCR_ASID16 | \ 458 - TCR_TBI0 | TCR_A1 452 + TCR_TBI0 | TCR_A1 | TCR_KASAN_FLAGS 459 453 460 454 #ifdef CONFIG_ARM64_USER_VA_BITS_52 461 455 ldr_l x9, vabits_user
+2 -2
arch/csky/mm/init.c
··· 71 71 ClearPageReserved(virt_to_page(start)); 72 72 init_page_count(virt_to_page(start)); 73 73 free_page(start); 74 - totalram_pages++; 74 + totalram_pages_inc(); 75 75 } 76 76 } 77 77 #endif ··· 88 88 ClearPageReserved(virt_to_page(addr)); 89 89 init_page_count(virt_to_page(addr)); 90 90 free_page(addr); 91 - totalram_pages++; 91 + totalram_pages_inc(); 92 92 addr += PAGE_SIZE; 93 93 } 94 94
+1 -1
arch/ia64/mm/init.c
··· 658 658 } 659 659 660 660 #ifdef CONFIG_MEMORY_HOTREMOVE 661 - int arch_remove_memory(u64 start, u64 size, struct vmem_altmap *altmap) 661 + int arch_remove_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap) 662 662 { 663 663 unsigned long start_pfn = start >> PAGE_SHIFT; 664 664 unsigned long nr_pages = size >> PAGE_SHIFT;
+2 -1
arch/powerpc/mm/mem.c
··· 139 139 } 140 140 141 141 #ifdef CONFIG_MEMORY_HOTREMOVE 142 - int __meminit arch_remove_memory(u64 start, u64 size, struct vmem_altmap *altmap) 142 + int __meminit arch_remove_memory(int nid, u64 start, u64 size, 143 + struct vmem_altmap *altmap) 143 144 { 144 145 unsigned long start_pfn = start >> PAGE_SHIFT; 145 146 unsigned long nr_pages = size >> PAGE_SHIFT;
+5 -5
arch/powerpc/platforms/pseries/cmm.c
··· 208 208 209 209 pa->page[pa->index++] = addr; 210 210 loaned_pages++; 211 - totalram_pages--; 211 + totalram_pages_dec(); 212 212 spin_unlock(&cmm_lock); 213 213 nr--; 214 214 } ··· 247 247 free_page(addr); 248 248 loaned_pages--; 249 249 nr--; 250 - totalram_pages++; 250 + totalram_pages_inc(); 251 251 } 252 252 spin_unlock(&cmm_lock); 253 253 cmm_dbg("End request with %ld pages unfulfilled\n", nr); ··· 291 291 int rc; 292 292 struct hvcall_mpp_data mpp_data; 293 293 signed long active_pages_target, page_loan_request, target; 294 - signed long total_pages = totalram_pages + loaned_pages; 294 + signed long total_pages = totalram_pages() + loaned_pages; 295 295 signed long min_mem_pages = (min_mem_mb * 1024 * 1024) / PAGE_SIZE; 296 296 297 297 rc = h_get_mpp(&mpp_data); ··· 322 322 323 323 cmm_dbg("delta = %ld, loaned = %lu, target = %lu, oom = %lu, totalram = %lu\n", 324 324 page_loan_request, loaned_pages, loaned_pages_target, 325 - oom_freed_pages, totalram_pages); 325 + oom_freed_pages, totalram_pages()); 326 326 } 327 327 328 328 static struct notifier_block cmm_oom_nb = { ··· 581 581 free_page(pa_curr->page[idx]); 582 582 freed++; 583 583 loaned_pages--; 584 - totalram_pages++; 584 + totalram_pages_inc(); 585 585 pa_curr->page[idx] = pa_last->page[--pa_last->index]; 586 586 if (pa_last->index == 0) { 587 587 if (pa_curr == pa_last)
+9 -8
arch/s390/mm/dump_pagetables.c
··· 111 111 } 112 112 113 113 #ifdef CONFIG_KASAN 114 - static void note_kasan_zero_page(struct seq_file *m, struct pg_state *st) 114 + static void note_kasan_early_shadow_page(struct seq_file *m, 115 + struct pg_state *st) 115 116 { 116 117 unsigned int prot; 117 118 118 - prot = pte_val(*kasan_zero_pte) & 119 + prot = pte_val(*kasan_early_shadow_pte) & 119 120 (_PAGE_PROTECT | _PAGE_INVALID | _PAGE_NOEXEC); 120 121 note_page(m, st, prot, 4); 121 122 } ··· 155 154 int i; 156 155 157 156 #ifdef CONFIG_KASAN 158 - if ((pud_val(*pud) & PAGE_MASK) == __pa(kasan_zero_pmd)) { 159 - note_kasan_zero_page(m, st); 157 + if ((pud_val(*pud) & PAGE_MASK) == __pa(kasan_early_shadow_pmd)) { 158 + note_kasan_early_shadow_page(m, st); 160 159 return; 161 160 } 162 161 #endif ··· 186 185 int i; 187 186 188 187 #ifdef CONFIG_KASAN 189 - if ((p4d_val(*p4d) & PAGE_MASK) == __pa(kasan_zero_pud)) { 190 - note_kasan_zero_page(m, st); 188 + if ((p4d_val(*p4d) & PAGE_MASK) == __pa(kasan_early_shadow_pud)) { 189 + note_kasan_early_shadow_page(m, st); 191 190 return; 192 191 } 193 192 #endif ··· 216 215 int i; 217 216 218 217 #ifdef CONFIG_KASAN 219 - if ((pgd_val(*pgd) & PAGE_MASK) == __pa(kasan_zero_p4d)) { 220 - note_kasan_zero_page(m, st); 218 + if ((pgd_val(*pgd) & PAGE_MASK) == __pa(kasan_early_shadow_p4d)) { 219 + note_kasan_early_shadow_page(m, st); 221 220 return; 222 221 } 223 222 #endif
+2 -2
arch/s390/mm/init.c
··· 59 59 order = 7; 60 60 61 61 /* Limit number of empty zero pages for small memory sizes */ 62 - while (order > 2 && (totalram_pages >> 10) < (1UL << order)) 62 + while (order > 2 && (totalram_pages() >> 10) < (1UL << order)) 63 63 order--; 64 64 65 65 empty_zero_page = __get_free_pages(GFP_KERNEL | __GFP_ZERO, order); ··· 242 242 } 243 243 244 244 #ifdef CONFIG_MEMORY_HOTREMOVE 245 - int arch_remove_memory(u64 start, u64 size, struct vmem_altmap *altmap) 245 + int arch_remove_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap) 246 246 { 247 247 /* 248 248 * There is no hardware or firmware interface which could trigger a
+20 -13
arch/s390/mm/kasan_init.c
··· 107 107 if (mode == POPULATE_ZERO_SHADOW && 108 108 IS_ALIGNED(address, PGDIR_SIZE) && 109 109 end - address >= PGDIR_SIZE) { 110 - pgd_populate(&init_mm, pg_dir, kasan_zero_p4d); 110 + pgd_populate(&init_mm, pg_dir, 111 + kasan_early_shadow_p4d); 111 112 address = (address + PGDIR_SIZE) & PGDIR_MASK; 112 113 continue; 113 114 } ··· 121 120 if (mode == POPULATE_ZERO_SHADOW && 122 121 IS_ALIGNED(address, P4D_SIZE) && 123 122 end - address >= P4D_SIZE) { 124 - p4d_populate(&init_mm, p4_dir, kasan_zero_pud); 123 + p4d_populate(&init_mm, p4_dir, 124 + kasan_early_shadow_pud); 125 125 address = (address + P4D_SIZE) & P4D_MASK; 126 126 continue; 127 127 } ··· 135 133 if (mode == POPULATE_ZERO_SHADOW && 136 134 IS_ALIGNED(address, PUD_SIZE) && 137 135 end - address >= PUD_SIZE) { 138 - pud_populate(&init_mm, pu_dir, kasan_zero_pmd); 136 + pud_populate(&init_mm, pu_dir, 137 + kasan_early_shadow_pmd); 139 138 address = (address + PUD_SIZE) & PUD_MASK; 140 139 continue; 141 140 } ··· 149 146 if (mode == POPULATE_ZERO_SHADOW && 150 147 IS_ALIGNED(address, PMD_SIZE) && 151 148 end - address >= PMD_SIZE) { 152 - pmd_populate(&init_mm, pm_dir, kasan_zero_pte); 149 + pmd_populate(&init_mm, pm_dir, 150 + kasan_early_shadow_pte); 153 151 address = (address + PMD_SIZE) & PMD_MASK; 154 152 continue; 155 153 } ··· 192 188 pte_val(*pt_dir) = __pa(page) | pgt_prot; 193 189 break; 194 190 case POPULATE_ZERO_SHADOW: 195 - page = kasan_zero_page; 191 + page = kasan_early_shadow_page; 196 192 pte_val(*pt_dir) = __pa(page) | pgt_prot_zero; 197 193 break; 198 194 } ··· 260 256 unsigned long vmax; 261 257 unsigned long pgt_prot = pgprot_val(PAGE_KERNEL_RO); 262 258 pte_t pte_z; 263 - pmd_t pmd_z = __pmd(__pa(kasan_zero_pte) | _SEGMENT_ENTRY); 264 - pud_t pud_z = __pud(__pa(kasan_zero_pmd) | _REGION3_ENTRY); 265 - p4d_t p4d_z = __p4d(__pa(kasan_zero_pud) | _REGION2_ENTRY); 259 + pmd_t pmd_z = __pmd(__pa(kasan_early_shadow_pte) | _SEGMENT_ENTRY); 260 + pud_t pud_z = 
__pud(__pa(kasan_early_shadow_pmd) | _REGION3_ENTRY); 261 + p4d_t p4d_z = __p4d(__pa(kasan_early_shadow_pud) | _REGION2_ENTRY); 266 262 267 263 kasan_early_detect_facilities(); 268 264 if (!has_nx) 269 265 pgt_prot &= ~_PAGE_NOEXEC; 270 - pte_z = __pte(__pa(kasan_zero_page) | pgt_prot); 266 + pte_z = __pte(__pa(kasan_early_shadow_page) | pgt_prot); 271 267 272 268 memsize = get_mem_detect_end(); 273 269 if (!memsize) ··· 296 292 } 297 293 298 294 /* init kasan zero shadow */ 299 - crst_table_init((unsigned long *)kasan_zero_p4d, p4d_val(p4d_z)); 300 - crst_table_init((unsigned long *)kasan_zero_pud, pud_val(pud_z)); 301 - crst_table_init((unsigned long *)kasan_zero_pmd, pmd_val(pmd_z)); 302 - memset64((u64 *)kasan_zero_pte, pte_val(pte_z), PTRS_PER_PTE); 295 + crst_table_init((unsigned long *)kasan_early_shadow_p4d, 296 + p4d_val(p4d_z)); 297 + crst_table_init((unsigned long *)kasan_early_shadow_pud, 298 + pud_val(pud_z)); 299 + crst_table_init((unsigned long *)kasan_early_shadow_pmd, 300 + pmd_val(pmd_z)); 301 + memset64((u64 *)kasan_early_shadow_pte, pte_val(pte_z), PTRS_PER_PTE); 303 302 304 303 shadow_alloc_size = memsize >> KASAN_SHADOW_SCALE_SHIFT; 305 304 pgalloc_low = round_up((unsigned long)_end, _SEGMENT_SIZE);
+1 -4
arch/sh/boards/board-apsh4a3a.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * ALPHAPROJECT AP-SH4A-3A Support. 3 4 * 4 5 * Copyright (C) 2010 ALPHAPROJECT Co.,Ltd. 5 6 * Copyright (C) 2008 Yoshihiro Shimoda 6 7 * Copyright (C) 2009 Paul Mundt 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 #include <linux/init.h> 13 10 #include <linux/platform_device.h>
+1 -4
arch/sh/boards/board-apsh4ad0a.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * ALPHAPROJECT AP-SH4AD-0A Support. 3 4 * 4 5 * Copyright (C) 2010 ALPHAPROJECT Co.,Ltd. 5 6 * Copyright (C) 2010 Matt Fleming 6 7 * Copyright (C) 2010 Paul Mundt 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 #include <linux/init.h> 13 10 #include <linux/platform_device.h>
+1 -14
arch/sh/boards/board-edosk7760.c
··· 1 + // SPDX-License-Identifier: GPL-2.0+ 1 2 /* 2 3 * Renesas Europe EDOSK7760 Board Support 3 4 * 4 5 * Copyright (C) 2008 SPES Societa' Progettazione Elettronica e Software Ltd. 5 6 * Author: Luca Santini <luca.santini@spesonline.com> 6 - * 7 - * This program is free software; you can redistribute it and/or modify 8 - * it under the terms of the GNU General Public License as published by 9 - * the Free Software Foundation; either version 2 of the License, or 10 - * (at your option) any later version. 11 - * 12 - * This program is distributed in the hope that it will be useful, 13 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 14 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 15 - * GNU General Public License for more details. 16 - * 17 - * You should have received a copy of the GNU General Public License 18 - * along with this program; if not, write to the Free Software 19 - * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 20 7 */ 21 8 #include <linux/init.h> 22 9 #include <linux/types.h>
+1 -4
arch/sh/boards/board-espt.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Data Technology Inc. ESPT-GIGA board support 3 4 * 4 5 * Copyright (C) 2008, 2009 Renesas Solutions Corp. 5 6 * Copyright (C) 2008, 2009 Nobuhiro Iwamatsu <iwamatsu.nobuhiro@renesas.com> 6 - * 7 - * This file is subject to the terms and conditions of the GNU General Public 8 - * License. See the file "COPYING" in the main directory of this archive 9 - * for more details. 10 7 */ 11 8 #include <linux/init.h> 12 9 #include <linux/platform_device.h>
+1 -4
arch/sh/boards/board-magicpanelr2.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * linux/arch/sh/boards/magicpanel/setup.c 3 4 * 4 5 * Copyright (C) 2007 Markus Brunner, Mark Jonas 5 6 * 6 7 * Magic Panel Release 2 board setup 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 #include <linux/init.h> 13 10 #include <linux/irq.h>
+1 -4
arch/sh/boards/board-sh7757lcr.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Renesas R0P7757LC0012RL Support. 3 4 * 4 5 * Copyright (C) 2009 - 2010 Renesas Solutions Corp. 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 11 8 #include <linux/init.h>
+1 -4
arch/sh/boards/board-sh7785lcr.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Renesas Technology Corp. R0P7785LC0011RL Support. 3 4 * 4 5 * Copyright (C) 2008 Yoshihiro Shimoda 5 6 * Copyright (C) 2009 Paul Mundt 6 - * 7 - * This file is subject to the terms and conditions of the GNU General Public 8 - * License. See the file "COPYING" in the main directory of this archive 9 - * for more details. 10 7 */ 11 8 #include <linux/init.h> 12 9 #include <linux/platform_device.h>
+1 -4
arch/sh/boards/board-titan.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/boards/titan/setup.c - Setup for Titan 3 4 * 4 5 * Copyright (C) 2006 Jamie Lenehan 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #include <linux/init.h> 11 8 #include <linux/irq.h>
+1 -4
arch/sh/boards/board-urquell.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Renesas Technology Corp. SH7786 Urquell Support. 3 4 * ··· 7 6 * 8 7 * Based on board-sh7785lcr.c 9 8 * Copyright (C) 2008 Yoshihiro Shimoda 10 - * 11 - * This file is subject to the terms and conditions of the GNU General Public 12 - * License. See the file "COPYING" in the main directory of this archive 13 - * for more details. 14 9 */ 15 10 #include <linux/init.h> 16 11 #include <linux/platform_device.h>
+1
arch/sh/boards/mach-ap325rxa/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 1 2 obj-y := setup.o sdram.o 2 3
+2 -5
arch/sh/boards/mach-ap325rxa/sdram.S
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * AP325RXA sdram self/auto-refresh setup code 3 4 * 4 5 * Copyright (C) 2009 Magnus Damm 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 11 8 #include <linux/sys.h>
+1
arch/sh/boards/mach-cayman/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 1 2 # 2 3 # Makefile for the Hitachi Cayman specific parts of the kernel 3 4 #
+1 -4
arch/sh/boards/mach-cayman/irq.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/mach-cayman/irq.c - SH-5 Cayman Interrupt Support 3 4 * 4 5 * This file handles the board specific parts of the Cayman interrupt system 5 6 * 6 7 * Copyright (C) 2002 Stuart Menefy 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 #include <linux/io.h> 13 10 #include <linux/irq.h>
+1 -4
arch/sh/boards/mach-cayman/panic.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Copyright (C) 2003 Richard Curnow, SuperH UK Limited 3 - * 4 - * This file is subject to the terms and conditions of the GNU General Public 5 - * License. See the file "COPYING" in the main directory of this archive 6 - * for more details. 7 4 */ 8 5 9 6 #include <linux/kernel.h>
+1 -4
arch/sh/boards/mach-cayman/setup.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/mach-cayman/setup.c 3 4 * ··· 6 5 * 7 6 * Copyright (C) 2002 David J. Mckay & Benedict Gaster 8 7 * Copyright (C) 2003 - 2007 Paul Mundt 9 - * 10 - * This file is subject to the terms and conditions of the GNU General Public 11 - * License. See the file "COPYING" in the main directory of this archive 12 - * for more details. 13 8 */ 14 9 #include <linux/init.h> 15 10 #include <linux/io.h>
+1
arch/sh/boards/mach-dreamcast/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 1 2 # 2 3 # Makefile for the Sega Dreamcast specific parts of the kernel 3 4 #
+1 -1
arch/sh/boards/mach-dreamcast/irq.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/boards/dreamcast/irq.c 3 4 * ··· 7 6 * Copyright (c) 2001, 2002 M. R. Brown <mrbrown@0xd6.org> 8 7 * 9 8 * This file is part of the LinuxDC project (www.linuxdc.org) 10 - * Released under the terms of the GNU GPL v2.0 11 9 */ 12 10 #include <linux/irq.h> 13 11 #include <linux/io.h>
+1 -3
arch/sh/boards/mach-dreamcast/rtc.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/boards/dreamcast/rtc.c 3 4 * ··· 6 5 * 7 6 * Copyright (c) 2001, 2002 M. R. Brown <mrbrown@0xd6.org> 8 7 * Copyright (c) 2002 Paul Mundt <lethal@chaoticdreams.org> 9 - * 10 - * Released under the terms of the GNU GPL v2.0. 11 - * 12 8 */ 13 9 14 10 #include <linux/time.h>
+1 -2
arch/sh/boards/mach-dreamcast/setup.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/boards/dreamcast/setup.c 3 4 * ··· 8 7 * Copyright (c) 2002, 2003, 2004 Paul Mundt <lethal@linux-sh.org> 9 8 * 10 9 * This file is part of the LinuxDC project (www.linuxdc.org) 11 - * 12 - * Released under the terms of the GNU GPL v2.0. 13 10 * 14 11 * This file originally bore the message (with enclosed-$): 15 12 * Id: setup_dc.c,v 1.5 2001/05/24 05:09:16 mrbrown Exp
+1
arch/sh/boards/mach-ecovec24/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 1 2 # 2 3 # Makefile for the R0P7724LC0011/21RL (EcoVec) 3 4 #
+2 -5
arch/sh/boards/mach-ecovec24/sdram.S
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * Ecovec24 sdram self/auto-refresh setup code 3 4 * 4 5 * Copyright (C) 2009 Magnus Damm 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 11 8 #include <linux/sys.h>
+1 -4
arch/sh/boards/mach-ecovec24/setup.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Copyright (C) 2009 Renesas Solutions Corp. 3 4 * 4 5 * Kuninori Morimoto <morimoto.kuninori@renesas.com> 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #include <asm/clock.h> 11 8 #include <asm/heartbeat.h>
+1 -4
arch/sh/boards/mach-highlander/irq-r7780mp.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Renesas Solutions Highlander R7780MP Support. 3 4 * 4 5 * Copyright (C) 2002 Atom Create Engineering Co., Ltd. 5 6 * Copyright (C) 2006 Paul Mundt 6 7 * Copyright (C) 2007 Magnus Damm 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 #include <linux/init.h> 13 10 #include <linux/irq.h>
+1 -4
arch/sh/boards/mach-highlander/irq-r7780rp.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Renesas Solutions Highlander R7780RP-1 Support. 3 4 * 4 5 * Copyright (C) 2002 Atom Create Engineering Co., Ltd. 5 6 * Copyright (C) 2006 Paul Mundt 6 7 * Copyright (C) 2008 Magnus Damm 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 #include <linux/init.h> 13 10 #include <linux/irq.h>
+1 -4
arch/sh/boards/mach-highlander/irq-r7785rp.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Renesas Solutions Highlander R7785RP Support. 3 4 * 4 5 * Copyright (C) 2002 Atom Create Engineering Co., Ltd. 5 6 * Copyright (C) 2006 - 2008 Paul Mundt 6 7 * Copyright (C) 2007 Magnus Damm 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 #include <linux/init.h> 13 10 #include <linux/irq.h>
+1 -4
arch/sh/boards/mach-highlander/pinmux-r7785rp.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Copyright (C) 2008 Paul Mundt 3 - * 4 - * This file is subject to the terms and conditions of the GNU General Public 5 - * License. See the file "COPYING" in the main directory of this archive 6 - * for more details. 7 4 */ 8 5 #include <linux/init.h> 9 6 #include <linux/gpio.h>
+1 -4
arch/sh/boards/mach-highlander/psw.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/boards/renesas/r7780rp/psw.c 3 4 * 4 5 * push switch support for RDBRP-1/RDBREVRP-1 debug boards. 5 6 * 6 7 * Copyright (C) 2006 Paul Mundt 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 #include <linux/io.h> 13 10 #include <linux/module.h>
+1 -4
arch/sh/boards/mach-highlander/setup.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/boards/renesas/r7780rp/setup.c 3 4 * ··· 9 8 * 10 9 * This contains support for the R7780RP-1, R7780MP, and R7785RP 11 10 * Highlander modules. 12 - * 13 - * This file is subject to the terms and conditions of the GNU General Public 14 - * License. See the file "COPYING" in the main directory of this archive 15 - * for more details. 16 11 */ 17 12 #include <linux/init.h> 18 13 #include <linux/io.h>
+1
arch/sh/boards/mach-hp6xx/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 1 2 # 2 3 # Makefile for the HP6xx specific parts of the kernel 3 4 #
+1 -3
arch/sh/boards/mach-hp6xx/hp6xx_apm.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * bios-less APM driver for hp680 3 4 * 4 5 * Copyright 2005 (c) Andriy Skulysh <askulysh@gmail.com> 5 6 * Copyright 2008 (c) Kristoffer Ericson <kristoffer.ericson@gmail.com> 6 - * 7 - * This program is free software; you can redistribute it and/or 8 - * modify it under the terms of the GNU General Public License. 9 7 */ 10 8 #include <linux/module.h> 11 9 #include <linux/kernel.h>
+1 -3
arch/sh/boards/mach-hp6xx/pm.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * hp6x0 Power Management Routines 3 4 * 4 5 * Copyright (c) 2006 Andriy Skulysh <askulsyh@gmail.com> 5 - * 6 - * This program is free software; you can redistribute it and/or 7 - * modify it under the terms of the GNU General Public License. 8 6 */ 9 7 #include <linux/init.h> 10 8 #include <linux/suspend.h>
+2 -6
arch/sh/boards/mach-hp6xx/pm_wakeup.S
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * Copyright (c) 2006 Andriy Skulysh <askulsyh@gmail.com> 3 - * 4 - * This file is subject to the terms and conditions of the GNU General Public 5 - * License. See the file "COPYING" in the main directory of this archive 6 - * for more details. 7 - * 8 4 */ 9 5 10 6 #include <linux/linkage.h>
+1 -3
arch/sh/boards/mach-hp6xx/setup.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * linux/arch/sh/boards/hp6xx/setup.c 3 4 * 4 5 * Copyright (C) 2002 Andriy Skulysh 5 6 * Copyright (C) 2007 Kristoffer Ericson <Kristoffer_e1@hotmail.com> 6 - * 7 - * May be copied or modified under the terms of the GNU General Public 8 - * License. See linux/COPYING for more information. 9 7 * 10 8 * Setup code for HP620/HP660/HP680/HP690 (internal peripherials only) 11 9 */
+1
arch/sh/boards/mach-kfr2r09/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 1 2 obj-y := setup.o sdram.o 2 3 ifneq ($(CONFIG_FB_SH_MOBILE_LCDC),) 3 4 obj-y += lcd_wqvga.o
+1 -4
arch/sh/boards/mach-kfr2r09/lcd_wqvga.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * KFR2R09 LCD panel support 3 4 * ··· 6 5 * 7 6 * Register settings based on the out-of-tree t33fb.c driver 8 7 * Copyright (C) 2008 Lineo Solutions, Inc. 9 - * 10 - * This file is subject to the terms and conditions of the GNU General Public 11 - * License. See the file COPYING in the main directory of this archive for 12 - * more details. 13 8 */ 14 9 15 10 #include <linux/delay.h>
+2 -5
arch/sh/boards/mach-kfr2r09/sdram.S
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * KFR2R09 sdram self/auto-refresh setup code 3 4 * 4 5 * Copyright (C) 2009 Magnus Damm 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 11 8 #include <linux/sys.h>
+1 -2
arch/sh/boards/mach-kfr2r09/setup.c
··· 25 25 #include <linux/memblock.h> 26 26 #include <linux/mfd/tmio.h> 27 27 #include <linux/mmc/host.h> 28 - #include <linux/mtd/onenand.h> 29 28 #include <linux/mtd/physmap.h> 30 29 #include <linux/platform_data/lv5207lp.h> 31 30 #include <linux/platform_device.h> ··· 477 478 478 479 static int __init kfr2r09_devices_setup(void) 479 480 { 480 - static struct clk *camera_clk; 481 + struct clk *camera_clk; 481 482 482 483 /* register board specific self-refresh code */ 483 484 sh_mobile_register_self_refresh(SUSP_SH_STANDBY | SUSP_SH_SF |
+1
arch/sh/boards/mach-landisk/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 1 2 # 2 3 # Makefile for I-O DATA DEVICE, INC. "LANDISK Series" 3 4 #
+1 -5
arch/sh/boards/mach-landisk/gio.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/boards/landisk/gio.c - driver for landisk 3 4 * ··· 7 6 * 8 7 * Copylight (C) 2006 kogiidena 9 8 * Copylight (C) 2002 Atom Create Engineering Co., Ltd. * 10 - * 11 - * This file is subject to the terms and conditions of the GNU General Public 12 - * License. See the file "COPYING" in the main directory of this archive 13 - * for more details. 14 - * 15 9 */ 16 10 #include <linux/module.h> 17 11 #include <linux/init.h>
+1 -4
arch/sh/boards/mach-landisk/irq.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/boards/mach-landisk/irq.c 3 4 * ··· 9 8 * 10 9 * Copyright (C) 2001 Ian da Silva, Jeremy Siegel 11 10 * Based largely on io_se.c. 12 - * 13 - * This file is subject to the terms and conditions of the GNU General Public 14 - * License. See the file "COPYING" in the main directory of this archive 15 - * for more details. 16 11 */ 17 12 18 13 #include <linux/init.h>
+1 -4
arch/sh/boards/mach-landisk/psw.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/boards/landisk/psw.c 3 4 * ··· 6 5 * 7 6 * Copyright (C) 2006-2007 Paul Mundt 8 7 * Copyright (C) 2007 kogiidena 9 - * 10 - * This file is subject to the terms and conditions of the GNU General Public 11 - * License. See the file "COPYING" in the main directory of this archive 12 - * for more details. 13 8 */ 14 9 #include <linux/io.h> 15 10 #include <linux/init.h>
+1 -4
arch/sh/boards/mach-landisk/setup.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/boards/landisk/setup.c 3 4 * ··· 8 7 * Copyright (C) 2002 Paul Mundt 9 8 * Copylight (C) 2002 Atom Create Engineering Co., Ltd. 10 9 * Copyright (C) 2005-2007 kogiidena 11 - * 12 - * This file is subject to the terms and conditions of the GNU General Public 13 - * License. See the file "COPYING" in the main directory of this archive 14 - * for more details. 15 10 */ 16 11 #include <linux/init.h> 17 12 #include <linux/platform_device.h>
+1
arch/sh/boards/mach-lboxre2/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 1 2 # 2 3 # Makefile for the L-BOX RE2 specific parts of the kernel 3 4 # Copyright (c) 2007 Nobuhiro Iwamatsu
+1 -5
arch/sh/boards/mach-lboxre2/irq.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * linux/arch/sh/boards/lboxre2/irq.c 3 4 * 4 5 * Copyright (C) 2007 Nobuhiro Iwamatsu 5 6 * 6 7 * NTT COMWARE L-BOX RE2 Support. 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 - * 12 8 */ 13 9 #include <linux/init.h> 14 10 #include <linux/interrupt.h>
+1 -5
arch/sh/boards/mach-lboxre2/setup.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * linux/arch/sh/boards/lbox/setup.c 3 4 * 4 5 * Copyright (C) 2007 Nobuhiro Iwamatsu 5 6 * 6 7 * NTT COMWARE L-BOX RE2 Support 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 - * 12 8 */ 13 9 14 10 #include <linux/init.h>
+1
arch/sh/boards/mach-microdev/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 1 2 # 2 3 # Makefile for the SuperH MicroDev specific parts of the kernel 3 4 #
+1 -4
arch/sh/boards/mach-microdev/fdc37c93xapm.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 - * 3 3 * Setup for the SMSC FDC37C93xAPM 4 4 * 5 5 * Copyright (C) 2003 Sean McGoogan (Sean.McGoogan@superh.com) ··· 7 7 * Copyright (C) 2004, 2005 Paul Mundt 8 8 * 9 9 * SuperH SH4-202 MicroDev board support. 10 - * 11 - * May be copied or modified under the terms of the GNU General Public 12 - * License. See linux/COPYING for more information. 13 10 */ 14 11 #include <linux/init.h> 15 12 #include <linux/ioport.h>
+1 -3
arch/sh/boards/mach-microdev/io.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * linux/arch/sh/boards/superh/microdev/io.c 3 4 * ··· 7 6 * Copyright (C) 2004 Paul Mundt 8 7 * 9 8 * SuperH SH4-202 MicroDev board support. 10 - * 11 - * May be copied or modified under the terms of the GNU General Public 12 - * License. See linux/COPYING for more information. 13 9 */ 14 10 15 11 #include <linux/init.h>
+1 -3
arch/sh/boards/mach-microdev/irq.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/boards/superh/microdev/irq.c 3 4 * 4 5 * Copyright (C) 2003 Sean McGoogan (Sean.McGoogan@superh.com) 5 6 * 6 7 * SuperH SH4-202 MicroDev board support. 7 - * 8 - * May be copied or modified under the terms of the GNU General Public 9 - * License. See linux/COPYING for more information. 10 8 */ 11 9 12 10 #include <linux/init.h>
+1 -3
arch/sh/boards/mach-microdev/setup.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/boards/superh/microdev/setup.c 3 4 * ··· 7 6 * Copyright (C) 2004, 2005 Paul Mundt 8 7 * 9 8 * SuperH SH4-202 MicroDev board support. 10 - * 11 - * May be copied or modified under the terms of the GNU General Public 12 - * License. See linux/COPYING for more information. 13 9 */ 14 10 #include <linux/init.h> 15 11 #include <linux/platform_device.h>
+1
arch/sh/boards/mach-migor/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 1 2 obj-y := setup.o sdram.o 2 3 obj-$(CONFIG_SH_MIGOR_QVGA) += lcd_qvga.o
+1 -4
arch/sh/boards/mach-migor/lcd_qvga.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Support for SuperH MigoR Quarter VGA LCD Panel 3 4 * ··· 6 5 * 7 6 * Based on lcd_powertip.c from Kenati Technologies Pvt Ltd. 8 7 * Copyright (c) 2007 Ujjwal Pande <ujjwal@kenati.com>, 9 - * 10 - * This program is free software; you can redistribute it and/or modify 11 - * it under the terms of the GNU General Public License version 2 as 12 - * published by the Free Software Foundation. 13 8 */ 14 9 15 10 #include <linux/delay.h>
+2 -5
arch/sh/boards/mach-migor/sdram.S
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * Migo-R sdram self/auto-refresh setup code 3 4 * 4 5 * Copyright (C) 2009 Magnus Damm 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 11 8 #include <linux/sys.h>
+1
arch/sh/boards/mach-r2d/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 1 2 # 2 3 # Makefile for the RTS7751R2D specific parts of the kernel 3 4 #
+1 -4
arch/sh/boards/mach-r2d/setup.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Renesas Technology Sales RTS7751R2D Support. 3 4 * 4 5 * Copyright (C) 2002 - 2006 Atom Create Engineering Co., Ltd. 5 6 * Copyright (C) 2004 - 2007 Paul Mundt 6 - * 7 - * This file is subject to the terms and conditions of the GNU General Public 8 - * License. See the file "COPYING" in the main directory of this archive 9 - * for more details. 10 7 */ 11 8 #include <linux/init.h> 12 9 #include <linux/platform_device.h>
+1
arch/sh/boards/mach-rsk/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 1 2 obj-y := setup.o 2 3 obj-$(CONFIG_SH_RSK7203) += devices-rsk7203.o 3 4 obj-$(CONFIG_SH_RSK7264) += devices-rsk7264.o
+1 -4
arch/sh/boards/mach-rsk/devices-rsk7203.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Renesas Technology Europe RSK+ 7203 Support. 3 4 * 4 5 * Copyright (C) 2008 - 2010 Paul Mundt 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #include <linux/init.h> 11 8 #include <linux/types.h>
+1 -4
arch/sh/boards/mach-rsk/devices-rsk7264.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * RSK+SH7264 Support. 3 4 * 4 5 * Copyright (C) 2012 Renesas Electronics Europe 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #include <linux/init.h> 11 8 #include <linux/types.h>
+1 -4
arch/sh/boards/mach-rsk/devices-rsk7269.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * RSK+SH7269 Support 3 4 * 4 5 * Copyright (C) 2012 Renesas Electronics Europe Ltd 5 6 * Copyright (C) 2012 Phil Edworthy 6 - * 7 - * This file is subject to the terms and conditions of the GNU General Public 8 - * License. See the file "COPYING" in the main directory of this archive 9 - * for more details. 10 7 */ 11 8 #include <linux/init.h> 12 9 #include <linux/types.h>
+1 -4
arch/sh/boards/mach-rsk/setup.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Renesas Technology Europe RSK+ Support. 3 4 * 4 5 * Copyright (C) 2008 Paul Mundt 5 6 * Copyright (C) 2008 Peter Griffin <pgriffin@mpc-data.co.uk> 6 - * 7 - * This file is subject to the terms and conditions of the GNU General Public 8 - * License. See the file "COPYING" in the main directory of this archive 9 - * for more details. 10 7 */ 11 8 #include <linux/init.h> 12 9 #include <linux/types.h>
+1
arch/sh/boards/mach-sdk7780/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 1 2 # 2 3 # Makefile for the SDK7780 specific parts of the kernel 3 4 #
+1 -4
arch/sh/boards/mach-sdk7780/irq.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * linux/arch/sh/boards/renesas/sdk7780/irq.c 3 4 * 4 5 * Renesas Technology Europe SDK7780 Support. 5 6 * 6 7 * Copyright (C) 2008 Nicholas Beck <nbeck@mpc-data.co.uk> 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 #include <linux/init.h> 13 10 #include <linux/irq.h>
+1 -4
arch/sh/boards/mach-sdk7780/setup.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/boards/renesas/sdk7780/setup.c 3 4 * 4 5 * Renesas Solutions SH7780 SDK Support 5 6 * Copyright (C) 2008 Nicholas Beck <nbeck@mpc-data.co.uk> 6 - * 7 - * This file is subject to the terms and conditions of the GNU General Public 8 - * License. See the file "COPYING" in the main directory of this archive 9 - * for more details. 10 7 */ 11 8 #include <linux/init.h> 12 9 #include <linux/types.h>
+1
arch/sh/boards/mach-sdk7786/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 1 2 obj-y := fpga.o irq.o nmi.o setup.o 2 3 3 4 obj-$(CONFIG_GPIOLIB) += gpio.o
+1 -4
arch/sh/boards/mach-sdk7786/fpga.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * SDK7786 FPGA Support. 3 4 * 4 5 * Copyright (C) 2010 Paul Mundt 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #include <linux/init.h> 11 8 #include <linux/io.h>
+1 -4
arch/sh/boards/mach-sdk7786/gpio.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * SDK7786 FPGA USRGPIR Support. 3 4 * 4 5 * Copyright (C) 2010 Paul Mundt 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #include <linux/init.h> 11 8 #include <linux/interrupt.h>
+1 -4
arch/sh/boards/mach-sdk7786/irq.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * SDK7786 FPGA IRQ Controller Support. 3 4 * 4 5 * Copyright (C) 2010 Matt Fleming 5 6 * Copyright (C) 2010 Paul Mundt 6 - * 7 - * This file is subject to the terms and conditions of the GNU General Public 8 - * License. See the file "COPYING" in the main directory of this archive 9 - * for more details. 10 7 */ 11 8 #include <linux/irq.h> 12 9 #include <mach/fpga.h>
+1 -4
arch/sh/boards/mach-sdk7786/nmi.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * SDK7786 FPGA NMI Support. 3 4 * 4 5 * Copyright (C) 2010 Paul Mundt 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #include <linux/init.h> 11 8 #include <linux/kernel.h>
+1 -4
arch/sh/boards/mach-sdk7786/setup.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Renesas Technology Europe SDK7786 Support. 3 4 * 4 5 * Copyright (C) 2010 Matt Fleming 5 6 * Copyright (C) 2010 Paul Mundt 6 - * 7 - * This file is subject to the terms and conditions of the GNU General Public 8 - * License. See the file "COPYING" in the main directory of this archive 9 - * for more details. 10 7 */ 11 8 #include <linux/init.h> 12 9 #include <linux/platform_device.h>
+1 -4
arch/sh/boards/mach-sdk7786/sram.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * SDK7786 FPGA SRAM Support. 3 4 * 4 5 * Copyright (C) 2010 Paul Mundt 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 11 8
+1
arch/sh/boards/mach-se/7206/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 1 2 # 2 3 # Makefile for the 7206 SolutionEngine specific parts of the kernel 3 4 #
+1
arch/sh/boards/mach-se/7343/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 1 2 # 2 3 # Makefile for the 7343 SolutionEngine specific parts of the kernel 3 4 #
+1 -4
arch/sh/boards/mach-se/7343/irq.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Hitachi UL SolutionEngine 7343 FPGA IRQ Support. 3 4 * ··· 7 6 * 8 7 * Based on linux/arch/sh/boards/se/7343/irq.c 9 8 * Copyright (C) 2007 Nobuhiro Iwamatsu 10 - * 11 - * This file is subject to the terms and conditions of the GNU General Public 12 - * License. See the file "COPYING" in the main directory of this archive 13 - * for more details. 14 9 */ 15 10 #define DRV_NAME "SE7343-FPGA" 16 11 #define pr_fmt(fmt) DRV_NAME ": " fmt
+1
arch/sh/boards/mach-se/770x/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 1 2 # 2 3 # Makefile for the 770x SolutionEngine specific parts of the kernel 3 4 #
+1
arch/sh/boards/mach-se/7721/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 1 2 obj-y := setup.o irq.o
+1 -4
arch/sh/boards/mach-se/7721/irq.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * linux/arch/sh/boards/se/7721/irq.c 3 4 * 4 5 * Copyright (C) 2008 Renesas Solutions Corp. 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #include <linux/init.h> 11 8 #include <linux/irq.h>
+1 -5
arch/sh/boards/mach-se/7721/setup.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * linux/arch/sh/boards/se/7721/setup.c 3 4 * 4 5 * Copyright (C) 2008 Renesas Solutions Corp. 5 6 * 6 7 * Hitachi UL SolutionEngine 7721 Support. 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 - * 12 8 */ 13 9 #include <linux/init.h> 14 10 #include <linux/platform_device.h>
+1
arch/sh/boards/mach-se/7722/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 1 2 # 2 3 # Makefile for the HITACHI UL SolutionEngine 7722 specific parts of the kernel 3 4 #
+1 -4
arch/sh/boards/mach-se/7722/irq.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Hitachi UL SolutionEngine 7722 FPGA IRQ Support. 3 4 * 4 5 * Copyright (C) 2007 Nobuhiro Iwamatsu 5 6 * Copyright (C) 2012 Paul Mundt 6 - * 7 - * This file is subject to the terms and conditions of the GNU General Public 8 - * License. See the file "COPYING" in the main directory of this archive 9 - * for more details. 10 7 */ 11 8 #define DRV_NAME "SE7722-FPGA" 12 9 #define pr_fmt(fmt) DRV_NAME ": " fmt
+1 -5
arch/sh/boards/mach-se/7722/setup.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * linux/arch/sh/boards/se/7722/setup.c 3 4 * ··· 6 5 * Copyright (C) 2012 Paul Mundt 7 6 * 8 7 * Hitachi UL SolutionEngine 7722 Support. 9 - * 10 - * This file is subject to the terms and conditions of the GNU General Public 11 - * License. See the file "COPYING" in the main directory of this archive 12 - * for more details. 13 - * 14 8 */ 15 9 #include <linux/init.h> 16 10 #include <linux/platform_device.h>
+1
arch/sh/boards/mach-se/7724/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 1 2 # 2 3 # Makefile for the HITACHI UL SolutionEngine 7724 specific parts of the kernel 3 4 #
+1 -4
arch/sh/boards/mach-se/7724/irq.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * linux/arch/sh/boards/se/7724/irq.c 3 4 * ··· 10 9 * Copyright (C) 2007 Nobuhiro Iwamatsu 11 10 * 12 11 * Hitachi UL SolutionEngine 7724 Support. 13 - * 14 - * This file is subject to the terms and conditions of the GNU General Public 15 - * License. See the file "COPYING" in the main directory of this archive 16 - * for more details. 17 12 */ 18 13 #include <linux/init.h> 19 14 #include <linux/irq.h>
+2 -5
arch/sh/boards/mach-se/7724/sdram.S
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * MS7724SE sdram self/auto-refresh setup code 3 4 * 4 5 * Copyright (C) 2009 Magnus Damm 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 11 8 #include <linux/sys.h>
+1
arch/sh/boards/mach-se/7751/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 1 2 # 2 3 # Makefile for the 7751 SolutionEngine specific parts of the kernel 3 4 #
+1
arch/sh/boards/mach-se/7780/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 1 2 # 2 3 # Makefile for the HITACHI UL SolutionEngine 7780 specific parts of the kernel 3 4 #
+1 -4
arch/sh/boards/mach-se/7780/irq.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * linux/arch/sh/boards/se/7780/irq.c 3 4 * 4 5 * Copyright (C) 2006,2007 Nobuhiro Iwamatsu 5 6 * 6 7 * Hitachi UL SolutionEngine 7780 Support. 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 #include <linux/init.h> 13 10 #include <linux/irq.h>
+1 -4
arch/sh/boards/mach-se/7780/setup.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * linux/arch/sh/boards/se/7780/setup.c 3 4 * 4 5 * Copyright (C) 2006,2007 Nobuhiro Iwamatsu 5 6 * 6 7 * Hitachi UL SolutionEngine 7780 Support. 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 #include <linux/init.h> 13 10 #include <linux/platform_device.h>
+1
arch/sh/boards/mach-sh03/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 1 2 # 2 3 # Makefile for the Interface (CTP/PCI-SH03) specific parts of the kernel 3 4 #
+1
arch/sh/boards/mach-sh7763rdp/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 1 2 obj-y := setup.o irq.o
+1 -4
arch/sh/boards/mach-sh7763rdp/irq.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * linux/arch/sh/boards/renesas/sh7763rdp/irq.c 3 4 * ··· 6 5 * 7 6 * Copyright (C) 2008 Renesas Solutions Corp. 8 7 * Copyright (C) 2008 Nobuhiro Iwamatsu <iwamatsu.nobuhiro@renesas.com> 9 - * 10 - * This file is subject to the terms and conditions of the GNU General Public 11 - * License. See the file "COPYING" in the main directory of this archive 12 - * for more details. 13 8 */ 14 9 15 10 #include <linux/init.h>
+1 -4
arch/sh/boards/mach-sh7763rdp/setup.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * linux/arch/sh/boards/renesas/sh7763rdp/setup.c 3 4 * ··· 6 5 * 7 6 * Copyright (C) 2008 Renesas Solutions Corp. 8 7 * Copyright (C) 2008 Nobuhiro Iwamatsu <iwamatsu.nobuhiro@renesas.com> 9 - * 10 - * This file is subject to the terms and conditions of the GNU General Public 11 - * License. See the file "COPYING" in the main directory of this archive 12 - * for more details. 13 8 */ 14 9 #include <linux/init.h> 15 10 #include <linux/platform_device.h>
+1
arch/sh/boards/mach-x3proto/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 1 2 obj-y += setup.o ilsel.o 2 3 3 4 obj-$(CONFIG_GPIOLIB) += gpio.o
+1 -4
arch/sh/boards/mach-x3proto/gpio.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/boards/mach-x3proto/gpio.c 3 4 * 4 5 * Renesas SH-X3 Prototype Baseboard GPIO Support. 5 6 * 6 7 * Copyright (C) 2010 - 2012 Paul Mundt 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 13 10
+1 -4
arch/sh/boards/mach-x3proto/ilsel.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/boards/mach-x3proto/ilsel.c 3 4 * 4 5 * Helper routines for SH-X3 proto board ILSEL. 5 6 * 6 7 * Copyright (C) 2007 - 2010 Paul Mundt 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 13 10
+1 -4
arch/sh/boards/mach-x3proto/setup.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/boards/mach-x3proto/setup.c 3 4 * 4 5 * Renesas SH-X3 Prototype Board Support. 5 6 * 6 7 * Copyright (C) 2007 - 2010 Paul Mundt 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 #include <linux/init.h> 13 10 #include <linux/platform_device.h>
+1 -4
arch/sh/boards/of-generic.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * SH generic board support, using device tree 3 4 * 4 5 * Copyright (C) 2015-2016 Smart Energy Instruments, Inc. 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 11 8 #include <linux/of.h>
+1
arch/sh/drivers/dma/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 1 2 # 2 3 # Makefile for the SuperH DMA specific kernel interface routines under Linux. 3 4 #
+2 -5
arch/sh/drivers/dma/dma-api.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/drivers/dma/dma-api.c 3 4 * 4 5 * SuperH-specific DMA management API 5 6 * 6 7 * Copyright (C) 2003, 2004, 2005 Paul Mundt 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 #include <linux/init.h> 13 10 #include <linux/module.h> ··· 414 417 415 418 MODULE_AUTHOR("Paul Mundt <lethal@linux-sh.org>"); 416 419 MODULE_DESCRIPTION("DMA API for SuperH"); 417 - MODULE_LICENSE("GPL"); 420 + MODULE_LICENSE("GPL v2");
+2 -5
arch/sh/drivers/dma/dma-g2.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/drivers/dma/dma-g2.c 3 4 * 4 5 * G2 bus DMA support 5 6 * 6 7 * Copyright (C) 2003 - 2006 Paul Mundt 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 #include <linux/init.h> 13 10 #include <linux/kernel.h> ··· 194 197 195 198 MODULE_AUTHOR("Paul Mundt <lethal@linux-sh.org>"); 196 199 MODULE_DESCRIPTION("G2 bus DMA driver"); 197 - MODULE_LICENSE("GPL"); 200 + MODULE_LICENSE("GPL v2");
+2 -5
arch/sh/drivers/dma/dma-pvr2.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/drivers/dma/dma-pvr2.c 3 4 * 4 5 * NEC PowerVR 2 (Dreamcast) DMA support 5 6 * 6 7 * Copyright (C) 2003, 2004 Paul Mundt 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 #include <linux/init.h> 13 10 #include <linux/kernel.h> ··· 102 105 103 106 MODULE_AUTHOR("Paul Mundt <lethal@linux-sh.org>"); 104 107 MODULE_DESCRIPTION("NEC PowerVR 2 DMA driver"); 105 - MODULE_LICENSE("GPL"); 108 + MODULE_LICENSE("GPL v2");
+2 -5
arch/sh/drivers/dma/dma-sh.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/drivers/dma/dma-sh.c 3 4 * ··· 7 6 * Copyright (C) 2000 Takashi YOSHII 8 7 * Copyright (C) 2003, 2004 Paul Mundt 9 8 * Copyright (C) 2005 Andriy Skulysh 10 - * 11 - * This file is subject to the terms and conditions of the GNU General Public 12 - * License. See the file "COPYING" in the main directory of this archive 13 - * for more details. 14 9 */ 15 10 #include <linux/init.h> 16 11 #include <linux/interrupt.h> ··· 411 414 412 415 MODULE_AUTHOR("Takashi YOSHII, Paul Mundt, Andriy Skulysh"); 413 416 MODULE_DESCRIPTION("SuperH On-Chip DMAC Support"); 414 - MODULE_LICENSE("GPL"); 417 + MODULE_LICENSE("GPL v2");
+1 -4
arch/sh/drivers/dma/dma-sysfs.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/drivers/dma/dma-sysfs.c 3 4 * 4 5 * sysfs interface for SH DMA API 5 6 * 6 7 * Copyright (C) 2004 - 2006 Paul Mundt 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 #include <linux/kernel.h> 13 10 #include <linux/init.h>
+1 -2
arch/sh/drivers/dma/dmabrg.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * SH7760 DMABRG IRQ handling 3 4 * 4 5 * (c) 2007 MSC Vertriebsges.m.b.H, Manuel Lauss <mlau@msc-ge.com> 5 - * licensed under the GPLv2. 6 - * 7 6 */ 8 7 9 8 #include <linux/interrupt.h>
+1 -4
arch/sh/drivers/heartbeat.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Generic heartbeat driver for regular LED banks 3 4 * ··· 14 13 * traditionally used for strobing the load average. This use case is 15 14 * handled by this driver, rather than giving each LED bit position its 16 15 * own struct device. 17 - * 18 - * This file is subject to the terms and conditions of the GNU General Public 19 - * License. See the file "COPYING" in the main directory of this archive 20 - * for more details. 21 16 */ 22 17 #include <linux/init.h> 23 18 #include <linux/platform_device.h>
+1 -4
arch/sh/drivers/pci/fixups-dreamcast.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/drivers/pci/fixups-dreamcast.c 3 4 * ··· 10 9 * This file originally bore the message (with enclosed-$): 11 10 * Id: pci.c,v 1.3 2003/05/04 19:29:46 lethal Exp 12 11 * Dreamcast PCI: Supports SEGA Broadband Adaptor only. 13 - * 14 - * This file is subject to the terms and conditions of the GNU General Public 15 - * License. See the file "COPYING" in the main directory of this archive 16 - * for more details. 17 12 */ 18 13 19 14 #include <linux/sched.h>
+1 -3
arch/sh/drivers/pci/fixups-landisk.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/drivers/pci/fixups-landisk.c 3 4 * ··· 6 5 * 7 6 * Copyright (C) 2006 kogiidena 8 7 * Copyright (C) 2010 Nobuhiro Iwamatsu 9 - * 10 - * May be copied or modified under the terms of the GNU General Public 11 - * License. See linux/COPYING for more information. 12 8 */ 13 9 #include <linux/kernel.h> 14 10 #include <linux/types.h>
+1 -4
arch/sh/drivers/pci/fixups-r7780rp.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/drivers/pci/fixups-r7780rp.c 3 4 * ··· 6 5 * 7 6 * Copyright (C) 2003 Lineo uSolutions, Inc. 8 7 * Copyright (C) 2004 - 2006 Paul Mundt 9 - * 10 - * This file is subject to the terms and conditions of the GNU General Public 11 - * License. See the file "COPYING" in the main directory of this archive 12 - * for more details. 13 8 */ 14 9 #include <linux/pci.h> 15 10 #include <linux/io.h>
+1 -4
arch/sh/drivers/pci/fixups-rts7751r2d.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/drivers/pci/fixups-rts7751r2d.c 3 4 * ··· 7 6 * Copyright (C) 2003 Lineo uSolutions, Inc. 8 7 * Copyright (C) 2004 Paul Mundt 9 8 * Copyright (C) 2007 Nobuhiro Iwamatsu 10 - * 11 - * This file is subject to the terms and conditions of the GNU General Public 12 - * License. See the file "COPYING" in the main directory of this archive 13 - * for more details. 14 9 */ 15 10 #include <linux/pci.h> 16 11 #include <mach/lboxre2.h>
+1 -4
arch/sh/drivers/pci/fixups-sdk7780.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/drivers/pci/fixups-sdk7780.c 3 4 * ··· 7 6 * Copyright (C) 2003 Lineo uSolutions, Inc. 8 7 * Copyright (C) 2004 - 2006 Paul Mundt 9 8 * Copyright (C) 2006 Nobuhiro Iwamatsu 10 - * 11 - * This file is subject to the terms and conditions of the GNU General Public 12 - * License. See the file "COPYING" in the main directory of this archive 13 - * for more details. 14 9 */ 15 10 #include <linux/pci.h> 16 11 #include <linux/io.h>
+1 -4
arch/sh/drivers/pci/fixups-sdk7786.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * SDK7786 FPGA PCIe mux handling 3 4 * 4 5 * Copyright (C) 2010 Paul Mundt 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #define pr_fmt(fmt) "PCI: " fmt 11 8
+1 -3
arch/sh/drivers/pci/fixups-snapgear.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/drivers/pci/ops-snapgear.c 3 4 * ··· 7 6 * Ported to new API by Paul Mundt <lethal@linux-sh.org> 8 7 * 9 8 * Highly leveraged from pci-bigsur.c, written by Dustin McIntire. 10 - * 11 - * May be copied or modified under the terms of the GNU General Public 12 - * License. See linux/COPYING for more information. 13 9 * 14 10 * PCI initialization for the SnapGear boards 15 11 */
+1 -3
arch/sh/drivers/pci/fixups-titan.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/drivers/pci/ops-titan.c 3 4 * ··· 6 5 * 7 6 * Modified from ops-snapgear.c written by David McCullough 8 7 * Highly leveraged from pci-bigsur.c, written by Dustin McIntire. 9 - * 10 - * May be copied or modified under the terms of the GNU General Public 11 - * License. See linux/COPYING for more information. 12 8 * 13 9 * PCI initialization for the Titan boards 14 10 */
+1 -4
arch/sh/drivers/pci/ops-dreamcast.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * PCI operations for the Sega Dreamcast 3 4 * 4 5 * Copyright (C) 2001, 2002 M. R. Brown 5 6 * Copyright (C) 2002, 2003 Paul Mundt 6 - * 7 - * This file is subject to the terms and conditions of the GNU General Public 8 - * License. See the file "COPYING" in the main directory of this archive 9 - * for more details. 10 7 */ 11 8 12 9 #include <linux/sched.h>
+1 -4
arch/sh/drivers/pci/ops-sh4.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Generic SH-4 / SH-4A PCIC operations (SH7751, SH7780). 3 4 * 4 5 * Copyright (C) 2002 - 2009 Paul Mundt 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License v2. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #include <linux/pci.h> 11 8 #include <linux/io.h>
+1 -3
arch/sh/drivers/pci/ops-sh5.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Support functions for the SH5 PCI hardware. 3 4 * 4 5 * Copyright (C) 2001 David J. Mckay (david.mckay@st.com) 5 6 * Copyright (C) 2003, 2004 Paul Mundt 6 7 * Copyright (C) 2004 Richard Curnow 7 - * 8 - * May be copied or modified under the terms of the GNU General Public 9 - * License. See linux/COPYING for more information. 10 8 */ 11 9 #include <linux/kernel.h> 12 10 #include <linux/rwsem.h>
+1 -4
arch/sh/drivers/pci/ops-sh7786.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Generic SH7786 PCI-Express operations. 3 4 * 4 5 * Copyright (C) 2009 - 2010 Paul Mundt 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License v2. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #include <linux/kernel.h> 11 8 #include <linux/init.h>
+1 -4
arch/sh/drivers/pci/pci-dreamcast.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * PCI support for the Sega Dreamcast 3 4 * ··· 8 7 * This file originally bore the message (with enclosed-$): 9 8 * Id: pci.c,v 1.3 2003/05/04 19:29:46 lethal Exp 10 9 * Dreamcast PCI: Supports SEGA Broadband Adaptor only. 11 - * 12 - * This file is subject to the terms and conditions of the GNU General Public 13 - * License. See the file "COPYING" in the main directory of this archive 14 - * for more details. 15 10 */ 16 11 17 12 #include <linux/sched.h>
+1 -3
arch/sh/drivers/pci/pci-sh5.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Copyright (C) 2001 David J. Mckay (david.mckay@st.com) 3 4 * Copyright (C) 2003, 2004 Paul Mundt 4 5 * Copyright (C) 2004 Richard Curnow 5 - * 6 - * May be copied or modified under the terms of the GNU General Public 7 - * License. See linux/COPYING for more information. 8 6 * 9 7 * Support functions for the SH5 PCI hardware. 10 8 */
+2 -4
arch/sh/drivers/pci/pci-sh5.h
··· 1 - /* 2 - * Copyright (C) 2001 David J. Mckay (david.mckay@st.com) 1 + /* SPDX-License-Identifier: GPL-2.0 3 2 * 4 - * May be copied or modified under the terms of the GNU General Public 5 - * License. See linux/COPYING for more information. 3 + * Copyright (C) 2001 David J. Mckay (david.mckay@st.com) 6 4 * 7 5 * Definitions for the SH5 PCI hardware. 8 6 */
+1 -4
arch/sh/drivers/pci/pci-sh7751.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Low-Level PCI Support for the SH7751 3 4 * ··· 6 5 * Copyright (C) 2001 Dustin McIntire 7 6 * 8 7 * With cleanup by Paul van Gool <pvangool@mimotech.com>, 2003. 9 - * 10 - * This file is subject to the terms and conditions of the GNU General Public 11 - * License. See the file "COPYING" in the main directory of this archive 12 - * for more details. 13 8 */ 14 9 #include <linux/init.h> 15 10 #include <linux/pci.h>
+2 -5
arch/sh/drivers/pci/pci-sh7751.h
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * Low-Level PCI Support for SH7751 targets 3 4 * 4 5 * Dustin McIntire (dustin@sensoria.com) (c) 2001 5 6 * Paul Mundt (lethal@linux-sh.org) (c) 2003 6 - * 7 - * May be copied or modified under the terms of the GNU General Public 8 - * License. See linux/COPYING for more information. 9 - * 10 7 */ 11 8 12 9 #ifndef _PCI_SH7751_H_
+1 -4
arch/sh/drivers/pci/pci-sh7780.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Low-Level PCI Support for the SH7780 3 4 * 4 5 * Copyright (C) 2005 - 2010 Paul Mundt 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #include <linux/types.h> 11 8 #include <linux/kernel.h>
+2 -5
arch/sh/drivers/pci/pci-sh7780.h
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * Low-Level PCI Support for SH7780 targets 3 4 * 4 5 * Dustin McIntire (dustin@sensoria.com) (c) 2001 5 6 * Paul Mundt (lethal@linux-sh.org) (c) 2003 6 - * 7 - * May be copied or modified under the terms of the GNU General Public 8 - * License. See linux/COPYING for more information. 9 - * 10 7 */ 11 8 12 9 #ifndef _PCI_SH7780_H_
+1 -4
arch/sh/drivers/pci/pci.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * New-style PCI core. 3 4 * ··· 7 6 * 8 7 * Modelled after arch/mips/pci/pci.c: 9 8 * Copyright (C) 2003, 04 Ralf Baechle (ralf@linux-mips.org) 10 - * 11 - * This file is subject to the terms and conditions of the GNU General Public 12 - * License. See the file "COPYING" in the main directory of this archive 13 - * for more details. 14 9 */ 15 10 #include <linux/kernel.h> 16 11 #include <linux/mm.h>
+1 -4
arch/sh/drivers/pci/pcie-sh7786.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Low-Level PCI Express Support for the SH7786 3 4 * 4 5 * Copyright (C) 2009 - 2011 Paul Mundt 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #define pr_fmt(fmt) "PCI: " fmt 11 8
+2 -5
arch/sh/drivers/pci/pcie-sh7786.h
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * SH7786 PCI-Express controller definitions. 3 4 * 4 5 * Copyright (C) 2008, 2009 Renesas Technology Corp. 5 6 * All rights reserved. 6 - * 7 - * This file is subject to the terms and conditions of the GNU General Public 8 - * License. See the file "COPYING" in the main directory of this archive 9 - * for more details. 10 7 */ 11 8 #ifndef __PCI_SH7786_H 12 9 #define __PCI_SH7786_H
+1 -4
arch/sh/drivers/push-switch.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Generic push-switch framework 3 4 * 4 5 * Copyright (C) 2006 Paul Mundt 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #include <linux/init.h> 11 8 #include <linux/slab.h>
+1
arch/sh/drivers/superhyway/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 1 2 # 2 3 # Makefile for the SuperHyway specific kernel interface routines under Linux. 3 4 #
+1 -4
arch/sh/drivers/superhyway/ops-sh4-202.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/drivers/superhyway/ops-sh4-202.c 3 4 * 4 5 * SuperHyway bus support for SH4-202 5 6 * 6 7 * Copyright (C) 2005 Paul Mundt 7 - * 8 - * This file is subject to the terms and conditions of the GNU 9 - * General Public License. See the file "COPYING" in the main 10 - * directory of this archive for more details. 11 8 */ 12 9 #include <linux/kernel.h> 13 10 #include <linux/init.h>
+1
arch/sh/include/asm/Kbuild
··· 1 + # SPDX-License-Identifier: GPL-2.0 1 2 generated-y += syscall_table.h 2 3 generic-y += compat.h 3 4 generic-y += current.h
+1 -4
arch/sh/include/asm/addrspace.h
··· 1 - /* 2 - * This file is subject to the terms and conditions of the GNU General Public 3 - * License. See the file "COPYING" in the main directory of this archive 4 - * for more details. 1 + /* SPDX-License-Identifier: GPL-2.0 5 2 * 6 3 * Copyright (C) 1999 by Kaz Kojima 7 4 *
+1
arch/sh/include/asm/asm-offsets.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 1 2 #include <generated/asm-offsets.h>
+2 -5
arch/sh/include/asm/bl_bit_64.h
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * Copyright (C) 2000, 2001 Paolo Alberelli 3 4 * Copyright (C) 2003 Paul Mundt 4 5 * Copyright (C) 2004 Richard Curnow 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #ifndef __ASM_SH_BL_BIT_64_H 11 8 #define __ASM_SH_BL_BIT_64_H
+2 -5
arch/sh/include/asm/cache_insns_64.h
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * Copyright (C) 2000, 2001 Paolo Alberelli 3 4 * Copyright (C) 2003 Paul Mundt 4 5 * Copyright (C) 2004 Richard Curnow 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #ifndef __ASM_SH_CACHE_INSNS_64_H 11 8 #define __ASM_SH_CACHE_INSNS_64_H
+1 -4
arch/sh/include/asm/checksum_32.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 1 2 #ifndef __ASM_SH_CHECKSUM_H 2 3 #define __ASM_SH_CHECKSUM_H 3 4 4 5 /* 5 - * This file is subject to the terms and conditions of the GNU General Public 6 - * License. See the file "COPYING" in the main directory of this archive 7 - * for more details. 8 - * 9 6 * Copyright (C) 1999 by Kaz Kojima & Niibe Yutaka 10 7 */ 11 8
+1 -3
arch/sh/include/asm/cmpxchg-xchg.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 1 2 #ifndef __ASM_SH_CMPXCHG_XCHG_H 2 3 #define __ASM_SH_CMPXCHG_XCHG_H 3 4 4 5 /* 5 6 * Copyright (C) 2016 Red Hat, Inc. 6 7 * Author: Michael S. Tsirkin <mst@redhat.com> 7 - * 8 - * This work is licensed under the terms of the GNU GPL, version 2. See the 9 - * file "COPYING" in the main directory of this archive for more details. 10 8 */ 11 9 #include <linux/bits.h> 12 10 #include <linux/compiler.h>
+2 -3
arch/sh/include/asm/device.h
··· 1 - /* 2 - * Arch specific extensions to struct device 1 + /* SPDX-License-Identifier: GPL-2.0 3 2 * 4 - * This file is released under the GPLv2 3 + * Arch specific extensions to struct device 5 4 */ 6 5 #ifndef __ASM_SH_DEVICE_H 7 6 #define __ASM_SH_DEVICE_H
+2 -5
arch/sh/include/asm/dma-register.h
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * Common header for the legacy SH DMA driver and the new dmaengine driver 3 4 * 4 5 * extracted from arch/sh/include/asm/dma-sh.h: 5 6 * 6 7 * Copyright (C) 2000 Takashi YOSHII 7 8 * Copyright (C) 2003 Paul Mundt 8 - * 9 - * This file is subject to the terms and conditions of the GNU General Public 10 - * License. See the file "COPYING" in the main directory of this archive 11 - * for more details. 12 9 */ 13 10 #ifndef DMA_REGISTER_H 14 11 #define DMA_REGISTER_H
+2 -5
arch/sh/include/asm/dma.h
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * include/asm-sh/dma.h 3 4 * 4 5 * Copyright (C) 2003, 2004 Paul Mundt 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #ifndef __ASM_SH_DMA_H 11 8 #define __ASM_SH_DMA_H
+2 -6
arch/sh/include/asm/dwarf.h
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * Copyright (C) 2009 Matt Fleming <matt@console-pimps.org> 3 - * 4 - * This file is subject to the terms and conditions of the GNU General Public 5 - * License. See the file "COPYING" in the main directory of this archive 6 - * for more details. 7 - * 8 4 */ 9 5 #ifndef __ASM_SH_DWARF_H 10 6 #define __ASM_SH_DWARF_H
+1
arch/sh/include/asm/fb.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 1 2 #ifndef _ASM_FB_H_ 2 3 #define _ASM_FB_H_ 3 4
+2 -5
arch/sh/include/asm/fixmap.h
··· 1 - /* 2 - * fixmap.h: compile-time virtual memory allocation 1 + /* SPDX-License-Identifier: GPL-2.0 3 2 * 4 - * This file is subject to the terms and conditions of the GNU General Public 5 - * License. See the file "COPYING" in the main directory of this archive 6 - * for more details. 3 + * fixmap.h: compile-time virtual memory allocation 7 4 * 8 5 * Copyright (C) 1998 Ingo Molnar 9 6 *
+2 -5
arch/sh/include/asm/flat.h
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * include/asm-sh/flat.h 3 4 * 4 5 * uClinux flat-format executables 5 6 * 6 7 * Copyright (C) 2003 Paul Mundt 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive for 10 - * more details. 11 8 */ 12 9 #ifndef __ASM_SH_FLAT_H 13 10 #define __ASM_SH_FLAT_H
+2 -6
arch/sh/include/asm/freq.h
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0+ 2 + * 2 3 * include/asm-sh/freq.h 3 4 * 4 5 * Copyright (C) 2002, 2003 Paul Mundt 5 - * 6 - * This program is free software; you can redistribute it and/or modify it 7 - * under the terms of the GNU General Public License as published by the 8 - * Free Software Foundation; either version 2 of the License, or (at your 9 - * option) any later version. 10 6 */ 11 7 #ifndef __ASM_SH_FREQ_H 12 8 #define __ASM_SH_FREQ_H
+2 -5
arch/sh/include/asm/gpio.h
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * include/asm-sh/gpio.h 3 4 * 4 5 * Generic GPIO API and pinmux table support for SuperH. 5 6 * 6 7 * Copyright (c) 2008 Magnus Damm 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 #ifndef __ASM_SH_GPIO_H 13 10 #define __ASM_SH_GPIO_H
+2 -4
arch/sh/include/asm/machvec.h
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * include/asm-sh/machvec.h 3 4 * 4 5 * Copyright 2000 Stuart Menefy (stuart.menefy@st.com) 5 - * 6 - * May be copied or modified under the terms of the GNU General Public 7 - * License. See linux/COPYING for more information. 8 6 */ 9 7 10 8 #ifndef _ASM_SH_MACHVEC_H
+1 -4
arch/sh/include/asm/mmu_context_64.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 1 2 #ifndef __ASM_SH_MMU_CONTEXT_64_H 2 3 #define __ASM_SH_MMU_CONTEXT_64_H 3 4 ··· 7 6 * 8 7 * Copyright (C) 2000, 2001 Paolo Alberelli 9 8 * Copyright (C) 2003 - 2007 Paul Mundt 10 - * 11 - * This file is subject to the terms and conditions of the GNU General Public 12 - * License. See the file "COPYING" in the main directory of this archive 13 - * for more details. 14 9 */ 15 10 #include <cpu/registers.h> 16 11 #include <asm/cacheflush.h>
+2 -5
arch/sh/include/asm/pgtable.h
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * This file contains the functions and defines necessary to modify and 3 4 * use the SuperH page table tree. 4 5 * 5 6 * Copyright (C) 1999 Niibe Yutaka 6 7 * Copyright (C) 2002 - 2007 Paul Mundt 7 - * 8 - * This file is subject to the terms and conditions of the GNU General 9 - * Public License. See the file "COPYING" in the main directory of this 10 - * archive for more details. 11 8 */ 12 9 #ifndef __ASM_SH_PGTABLE_H 13 10 #define __ASM_SH_PGTABLE_H
+1 -4
arch/sh/include/asm/pgtable_64.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 1 2 #ifndef __ASM_SH_PGTABLE_64_H 2 3 #define __ASM_SH_PGTABLE_64_H 3 4 ··· 11 10 * Copyright (C) 2000, 2001 Paolo Alberelli 12 11 * Copyright (C) 2003, 2004 Paul Mundt 13 12 * Copyright (C) 2003, 2004 Richard Curnow 14 - * 15 - * This file is subject to the terms and conditions of the GNU General Public 16 - * License. See the file "COPYING" in the main directory of this archive 17 - * for more details. 18 13 */ 19 14 #include <linux/threads.h> 20 15 #include <asm/processor.h>
+1 -4
arch/sh/include/asm/processor_64.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 1 2 #ifndef __ASM_SH_PROCESSOR_64_H 2 3 #define __ASM_SH_PROCESSOR_64_H 3 4 ··· 8 7 * Copyright (C) 2000, 2001 Paolo Alberelli 9 8 * Copyright (C) 2003 Paul Mundt 10 9 * Copyright (C) 2004 Richard Curnow 11 - * 12 - * This file is subject to the terms and conditions of the GNU General Public 13 - * License. See the file "COPYING" in the main directory of this archive 14 - * for more details. 15 10 */ 16 11 #ifndef __ASSEMBLY__ 17 12
+4 -16
arch/sh/include/asm/sfp-machine.h
··· 1 - /* Machine-dependent software floating-point definitions. 1 + /* SPDX-License-Identifier: GPL-2.0+ 2 + * 3 + * Machine-dependent software floating-point definitions. 2 4 SuperH kernel version. 3 5 Copyright (C) 1997,1998,1999 Free Software Foundation, Inc. 4 6 This file is part of the GNU C Library. ··· 8 6 Jakub Jelinek (jj@ultra.linux.cz), 9 7 David S. Miller (davem@redhat.com) and 10 8 Peter Maydell (pmaydell@chiark.greenend.org.uk). 11 - 12 - The GNU C Library is free software; you can redistribute it and/or 13 - modify it under the terms of the GNU Library General Public License as 14 - published by the Free Software Foundation; either version 2 of the 15 - License, or (at your option) any later version. 16 - 17 - The GNU C Library is distributed in the hope that it will be useful, 18 - but WITHOUT ANY WARRANTY; without even the implied warranty of 19 - MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 20 - Library General Public License for more details. 21 - 22 - You should have received a copy of the GNU Library General Public 23 - License along with the GNU C Library; see the file COPYING.LIB. If 24 - not, write to the Free Software Foundation, Inc., 25 - 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. */ 9 + */ 26 10 27 11 #ifndef _SFP_MACHINE_H 28 12 #define _SFP_MACHINE_H
+2 -5
arch/sh/include/asm/shmparam.h
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * include/asm-sh/shmparam.h 3 4 * 4 5 * Copyright (C) 1999 Niibe Yutaka 5 6 * Copyright (C) 2006 Paul Mundt 6 - * 7 - * This file is subject to the terms and conditions of the GNU General Public 8 - * License. See the file "COPYING" in the main directory of this archive 9 - * for more details. 10 7 */ 11 8 #ifndef __ASM_SH_SHMPARAM_H 12 9 #define __ASM_SH_SHMPARAM_H
+2 -5
arch/sh/include/asm/siu.h
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * platform header for the SIU ASoC driver 3 4 * 4 5 * Copyright (C) 2009-2010 Guennadi Liakhovetski <g.liakhovetski@gmx.de> 5 - * 6 - * This program is free software; you can redistribute it and/or modify 7 - * it under the terms of the GNU General Public License version 2 as 8 - * published by the Free Software Foundation. 9 6 */ 10 7 11 8 #ifndef ASM_SIU_H
+2 -5
arch/sh/include/asm/spinlock-cas.h
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * include/asm-sh/spinlock-cas.h 3 4 * 4 5 * Copyright (C) 2015 SEI 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #ifndef __ASM_SH_SPINLOCK_CAS_H 11 8 #define __ASM_SH_SPINLOCK_CAS_H
+2 -5
arch/sh/include/asm/spinlock-llsc.h
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * include/asm-sh/spinlock-llsc.h 3 4 * 4 5 * Copyright (C) 2002, 2003 Paul Mundt 5 6 * Copyright (C) 2006, 2007 Akio Idehara 6 - * 7 - * This file is subject to the terms and conditions of the GNU General Public 8 - * License. See the file "COPYING" in the main directory of this archive 9 - * for more details. 10 7 */ 11 8 #ifndef __ASM_SH_SPINLOCK_LLSC_H 12 9 #define __ASM_SH_SPINLOCK_LLSC_H
+2 -5
arch/sh/include/asm/spinlock.h
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * include/asm-sh/spinlock.h 3 4 * 4 5 * Copyright (C) 2002, 2003 Paul Mundt 5 6 * Copyright (C) 2006, 2007 Akio Idehara 6 - * 7 - * This file is subject to the terms and conditions of the GNU General Public 8 - * License. See the file "COPYING" in the main directory of this archive 9 - * for more details. 10 7 */ 11 8 #ifndef __ASM_SH_SPINLOCK_H 12 9 #define __ASM_SH_SPINLOCK_H
+1
arch/sh/include/asm/string_32.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 1 2 #ifndef __ASM_SH_STRING_H 2 3 #define __ASM_SH_STRING_H 3 4
+2 -5
arch/sh/include/asm/switch_to.h
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * Copyright (C) 2000, 2001 Paolo Alberelli 3 4 * Copyright (C) 2003 Paul Mundt 4 5 * Copyright (C) 2004 Richard Curnow 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #ifndef __ASM_SH_SWITCH_TO_H 11 8 #define __ASM_SH_SWITCH_TO_H
+2 -5
arch/sh/include/asm/switch_to_64.h
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * Copyright (C) 2000, 2001 Paolo Alberelli 3 4 * Copyright (C) 2003 Paul Mundt 4 5 * Copyright (C) 2004 Richard Curnow 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #ifndef __ASM_SH_SWITCH_TO_64_H 11 8 #define __ASM_SH_SWITCH_TO_64_H
+2 -5
arch/sh/include/asm/tlb_64.h
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * include/asm-sh/tlb_64.h 3 4 * 4 5 * Copyright (C) 2003 Paul Mundt 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #ifndef __ASM_SH_TLB_64_H 11 8 #define __ASM_SH_TLB_64_H
+2 -5
arch/sh/include/asm/traps_64.h
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * Copyright (C) 2000, 2001 Paolo Alberelli 3 4 * Copyright (C) 2003 Paul Mundt 4 5 * Copyright (C) 2004 Richard Curnow 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #ifndef __ASM_SH_TRAPS_64_H 11 8 #define __ASM_SH_TRAPS_64_H
+1 -4
arch/sh/include/asm/uaccess_64.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 1 2 #ifndef __ASM_SH_UACCESS_64_H 2 3 #define __ASM_SH_UACCESS_64_H 3 4 ··· 16 15 * MIPS implementation version 1.15 by 17 16 * Copyright (C) 1996, 1997, 1998 by Ralf Baechle 18 17 * and i386 version. 19 - * 20 - * This file is subject to the terms and conditions of the GNU General Public 21 - * License. See the file "COPYING" in the main directory of this archive 22 - * for more details. 23 18 */ 24 19 25 20 #define __get_user_size(x,ptr,size,retval) \
+1
arch/sh/include/asm/vga.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 1 2 #ifndef __ASM_SH_VGA_H 2 3 #define __ASM_SH_VGA_H 3 4
+2 -6
arch/sh/include/asm/watchdog.h
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0+ 2 + * 2 3 * include/asm-sh/watchdog.h 3 4 * 4 5 * Copyright (C) 2002, 2003 Paul Mundt 5 6 * Copyright (C) 2009 Siemens AG 6 7 * Copyright (C) 2009 Valentin Sitdikov 7 - * 8 - * This program is free software; you can redistribute it and/or modify it 9 - * under the terms of the GNU General Public License as published by the 10 - * Free Software Foundation; either version 2 of the License, or (at your 11 - * option) any later version. 12 8 */ 13 9 #ifndef __ASM_SH_WATCHDOG_H 14 10 #define __ASM_SH_WATCHDOG_H
+2 -5
arch/sh/include/cpu-common/cpu/addrspace.h
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * Definitions for the address spaces of the SH-2 CPUs. 3 4 * 4 5 * Copyright (C) 2003 Paul Mundt 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #ifndef __ASM_CPU_SH2_ADDRSPACE_H 11 8 #define __ASM_CPU_SH2_ADDRSPACE_H
+2 -5
arch/sh/include/cpu-common/cpu/mmu_context.h
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * include/asm-sh/cpu-sh2/mmu_context.h 3 4 * 4 5 * Copyright (C) 2003 Paul Mundt 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #ifndef __ASM_CPU_SH2_MMU_CONTEXT_H 11 8 #define __ASM_CPU_SH2_MMU_CONTEXT_H
+2 -10
arch/sh/include/cpu-common/cpu/pfc.h
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * SH Pin Function Control Initialization 3 4 * 4 5 * Copyright (C) 2012 Renesas Solutions Corp. 5 - * 6 - * This program is free software; you can redistribute it and/or modify 7 - * it under the terms of the GNU General Public License as published by 8 - * the Free Software Foundation; version 2 of the License. 9 - * 10 - * This program is distributed in the hope that it will be useful, 11 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 12 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 13 - * GNU General Public License for more details. 14 6 */ 15 7 16 8 #ifndef __ARCH_SH_CPU_PFC_H__
+1
arch/sh/include/cpu-common/cpu/timer.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 1 2 #ifndef __ASM_CPU_SH2_TIMER_H 2 3 #define __ASM_CPU_SH2_TIMER_H 3 4
+2 -5
arch/sh/include/cpu-sh2/cpu/cache.h
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * include/asm-sh/cpu-sh2/cache.h 3 4 * 4 5 * Copyright (C) 2003 Paul Mundt 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #ifndef __ASM_CPU_SH2_CACHE_H 11 8 #define __ASM_CPU_SH2_CACHE_H
+2 -5
arch/sh/include/cpu-sh2/cpu/freq.h
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * include/asm-sh/cpu-sh2/freq.h 3 4 * 4 5 * Copyright (C) 2006 Yoshinori Sato 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #ifndef __ASM_CPU_SH2_FREQ_H 11 8 #define __ASM_CPU_SH2_FREQ_H
+2 -5
arch/sh/include/cpu-sh2/cpu/watchdog.h
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * include/asm-sh/cpu-sh2/watchdog.h 3 4 * 4 5 * Copyright (C) 2002, 2003 Paul Mundt 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #ifndef __ASM_CPU_SH2_WATCHDOG_H 11 8 #define __ASM_CPU_SH2_WATCHDOG_H
+2 -5
arch/sh/include/cpu-sh2a/cpu/cache.h
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * include/asm-sh/cpu-sh2a/cache.h 3 4 * 4 5 * Copyright (C) 2004 Paul Mundt 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #ifndef __ASM_CPU_SH2A_CACHE_H 11 8 #define __ASM_CPU_SH2A_CACHE_H
+2 -5
arch/sh/include/cpu-sh2a/cpu/freq.h
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * include/asm-sh/cpu-sh2a/freq.h 3 4 * 4 5 * Copyright (C) 2006 Yoshinori Sato 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #ifndef __ASM_CPU_SH2A_FREQ_H 11 8 #define __ASM_CPU_SH2A_FREQ_H
+1
arch/sh/include/cpu-sh2a/cpu/watchdog.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 1 2 #include <cpu-sh2/cpu/watchdog.h>
+2 -5
arch/sh/include/cpu-sh3/cpu/cache.h
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * include/asm-sh/cpu-sh3/cache.h 3 4 * 4 5 * Copyright (C) 1999 Niibe Yutaka 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #ifndef __ASM_CPU_SH3_CACHE_H 11 8 #define __ASM_CPU_SH3_CACHE_H
+2 -5
arch/sh/include/cpu-sh3/cpu/dma-register.h
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * SH3 CPU-specific DMA definitions, used by both DMA drivers 3 4 * 4 5 * Copyright (C) 2010 Guennadi Liakhovetski <g.liakhovetski@gmx.de> 5 - * 6 - * This program is free software; you can redistribute it and/or modify 7 - * it under the terms of the GNU General Public License version 2 as 8 - * published by the Free Software Foundation. 9 6 */ 10 7 #ifndef CPU_DMA_REGISTER_H 11 8 #define CPU_DMA_REGISTER_H
+2 -5
arch/sh/include/cpu-sh3/cpu/freq.h
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * include/asm-sh/cpu-sh3/freq.h 3 4 * 4 5 * Copyright (C) 2002, 2003 Paul Mundt 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #ifndef __ASM_CPU_SH3_FREQ_H 11 8 #define __ASM_CPU_SH3_FREQ_H
+2 -5
arch/sh/include/cpu-sh3/cpu/gpio.h
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * include/asm-sh/cpu-sh3/gpio.h 3 4 * 4 5 * Copyright (C) 2007 Markus Brunner, Mark Jonas 5 6 * 6 7 * Addresses for the Pin Function Controller 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 #ifndef _CPU_SH3_GPIO_H 13 10 #define _CPU_SH3_GPIO_H
+2 -5
arch/sh/include/cpu-sh3/cpu/mmu_context.h
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * include/asm-sh/cpu-sh3/mmu_context.h 3 4 * 4 5 * Copyright (C) 1999 Niibe Yutaka 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #ifndef __ASM_CPU_SH3_MMU_CONTEXT_H 11 8 #define __ASM_CPU_SH3_MMU_CONTEXT_H
+2 -5
arch/sh/include/cpu-sh3/cpu/watchdog.h
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * include/asm-sh/cpu-sh3/watchdog.h 3 4 * 4 5 * Copyright (C) 2002, 2003 Paul Mundt 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #ifndef __ASM_CPU_SH3_WATCHDOG_H 11 8 #define __ASM_CPU_SH3_WATCHDOG_H
+1 -4
arch/sh/include/cpu-sh4/cpu/addrspace.h
··· 1 - /* 2 - * This file is subject to the terms and conditions of the GNU General Public 3 - * License. See the file "COPYING" in the main directory of this archive 4 - * for more details. 1 + /* SPDX-License-Identifier: GPL-2.0 5 2 * 6 3 * Copyright (C) 1999 by Kaz Kojima 7 4 *
+2 -5
arch/sh/include/cpu-sh4/cpu/cache.h
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * include/asm-sh/cpu-sh4/cache.h 3 4 * 4 5 * Copyright (C) 1999 Niibe Yutaka 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #ifndef __ASM_CPU_SH4_CACHE_H 11 8 #define __ASM_CPU_SH4_CACHE_H
+2 -5
arch/sh/include/cpu-sh4/cpu/dma-register.h
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * SH4 CPU-specific DMA definitions, used by both DMA drivers 3 4 * 4 5 * Copyright (C) 2010 Guennadi Liakhovetski <g.liakhovetski@gmx.de> 5 - * 6 - * This program is free software; you can redistribute it and/or modify 7 - * it under the terms of the GNU General Public License version 2 as 8 - * published by the Free Software Foundation. 9 6 */ 10 7 #ifndef CPU_DMA_REGISTER_H 11 8 #define CPU_DMA_REGISTER_H
+2 -4
arch/sh/include/cpu-sh4/cpu/fpu.h
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * linux/arch/sh/kernel/cpu/sh4/sh4_fpu.h 3 4 * 4 5 * Copyright (C) 2006 STMicroelectronics Limited 5 6 * Author: Carl Shaw <carl.shaw@st.com> 6 - * 7 - * May be copied or modified under the terms of the GNU General Public 8 - * License Version 2. See linux/COPYING for more information. 9 7 * 10 8 * Definitions for SH4 FPU operations 11 9 */
+2 -5
arch/sh/include/cpu-sh4/cpu/freq.h
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * include/asm-sh/cpu-sh4/freq.h 3 4 * 4 5 * Copyright (C) 2002, 2003 Paul Mundt 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #ifndef __ASM_CPU_SH4_FREQ_H 11 8 #define __ASM_CPU_SH4_FREQ_H
+2 -5
arch/sh/include/cpu-sh4/cpu/mmu_context.h
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * include/asm-sh/cpu-sh4/mmu_context.h 3 4 * 4 5 * Copyright (C) 1999 Niibe Yutaka 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #ifndef __ASM_CPU_SH4_MMU_CONTEXT_H 11 8 #define __ASM_CPU_SH4_MMU_CONTEXT_H
+2 -5
arch/sh/include/cpu-sh4/cpu/sh7786.h
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * SH7786 Pinmux 3 4 * 4 5 * Copyright (C) 2008, 2009 Renesas Solutions Corp. 5 6 * Kuninori Morimoto <morimoto.kuninori@renesas.com> 6 7 * 7 8 * Based on sh7785.h 8 - * 9 - * This file is subject to the terms and conditions of the GNU General Public 10 - * License. See the file "COPYING" in the main directory of this archive 11 - * for more details. 12 9 */ 13 10 14 11 #ifndef __CPU_SH7786_H__
+2 -5
arch/sh/include/cpu-sh4/cpu/sq.h
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * include/asm-sh/cpu-sh4/sq.h 3 4 * 4 5 * Copyright (C) 2001, 2002, 2003 Paul Mundt 5 6 * Copyright (C) 2001, 2002 M. R. Brown 6 - * 7 - * This file is subject to the terms and conditions of the GNU General Public 8 - * License. See the file "COPYING" in the main directory of this archive 9 - * for more details. 10 7 */ 11 8 #ifndef __ASM_CPU_SH4_SQ_H 12 9 #define __ASM_CPU_SH4_SQ_H
+2 -5
arch/sh/include/cpu-sh4/cpu/watchdog.h
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * include/asm-sh/cpu-sh4/watchdog.h 3 4 * 4 5 * Copyright (C) 2002, 2003 Paul Mundt 5 6 * Copyright (C) 2009 Siemens AG 6 7 * Copyright (C) 2009 Sitdikov Valentin 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 #ifndef __ASM_CPU_SH4_WATCHDOG_H 13 10 #define __ASM_CPU_SH4_WATCHDOG_H
+1 -4
arch/sh/include/cpu-sh5/cpu/cache.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 1 2 #ifndef __ASM_SH_CPU_SH5_CACHE_H 2 3 #define __ASM_SH_CPU_SH5_CACHE_H 3 4 ··· 7 6 * 8 7 * Copyright (C) 2000, 2001 Paolo Alberelli 9 8 * Copyright (C) 2003, 2004 Paul Mundt 10 - * 11 - * This file is subject to the terms and conditions of the GNU General Public 12 - * License. See the file "COPYING" in the main directory of this archive 13 - * for more details. 14 9 */ 15 10 16 11 #define L1_CACHE_SHIFT 5
+1 -4
arch/sh/include/cpu-sh5/cpu/irq.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 1 2 #ifndef __ASM_SH_CPU_SH5_IRQ_H 2 3 #define __ASM_SH_CPU_SH5_IRQ_H 3 4 ··· 6 5 * include/asm-sh/cpu-sh5/irq.h 7 6 * 8 7 * Copyright (C) 2000, 2001 Paolo Alberelli 9 - * 10 - * This file is subject to the terms and conditions of the GNU General Public 11 - * License. See the file "COPYING" in the main directory of this archive 12 - * for more details. 13 8 */ 14 9 15 10
+1 -4
arch/sh/include/cpu-sh5/cpu/registers.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 1 2 #ifndef __ASM_SH_CPU_SH5_REGISTERS_H 2 3 #define __ASM_SH_CPU_SH5_REGISTERS_H 3 4 ··· 7 6 * 8 7 * Copyright (C) 2000, 2001 Paolo Alberelli 9 8 * Copyright (C) 2004 Richard Curnow 10 - * 11 - * This file is subject to the terms and conditions of the GNU General Public 12 - * License. See the file "COPYING" in the main directory of this archive 13 - * for more details. 14 9 */ 15 10 16 11 #ifdef __ASSEMBLY__
+4 -8
arch/sh/include/mach-common/mach/hp6xx.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 3 + * Copyright (C) 2003, 2004, 2005 Andriy Skulysh 4 + */ 1 5 #ifndef __ASM_SH_HP6XX_H 2 6 #define __ASM_SH_HP6XX_H 3 7 4 - /* 5 - * Copyright (C) 2003, 2004, 2005 Andriy Skulysh 6 - * 7 - * This file is subject to the terms and conditions of the GNU General Public 8 - * License. See the file "COPYING" in the main directory of this archive 9 - * for more details. 10 - * 11 - */ 12 8 #include <linux/sh_intc.h> 13 9 14 10 #define HP680_BTN_IRQ evt2irq(0x600) /* IRQ0_IRQ */
+1 -5
arch/sh/include/mach-common/mach/lboxre2.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 1 2 #ifndef __ASM_SH_LBOXRE2_H 2 3 #define __ASM_SH_LBOXRE2_H 3 4 ··· 6 5 * Copyright (C) 2007 Nobuhiro Iwamatsu 7 6 * 8 7 * NTT COMWARE L-BOX RE2 support 9 - * 10 - * This file is subject to the terms and conditions of the GNU General Public 11 - * License. See the file "COPYING" in the main directory of this archive 12 - * for more details. 13 - * 14 8 */ 15 9 #include <linux/sh_intc.h> 16 10
+2 -5
arch/sh/include/mach-common/mach/magicpanelr2.h
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * include/asm-sh/magicpanelr2.h 3 4 * 4 5 * Copyright (C) 2007 Markus Brunner, Mark Jonas 5 6 * 6 7 * I/O addresses and bitmasks for Magic Panel Release 2 board 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 13 10 #ifndef __ASM_SH_MAGICPANELR2_H
+2 -5
arch/sh/include/mach-common/mach/mangle-port.h
··· 1 - /* 2 - * SH version cribbed from the MIPS copy: 1 + /* SPDX-License-Identifier: GPL-2.0 3 2 * 4 - * This file is subject to the terms and conditions of the GNU General Public 5 - * License. See the file "COPYING" in the main directory of this archive 6 - * for more details. 3 + * SH version cribbed from the MIPS copy: 7 4 * 8 5 * Copyright (C) 2003, 2004 Ralf Baechle 9 6 */
+2 -4
arch/sh/include/mach-common/mach/microdev.h
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * linux/include/asm-sh/microdev.h 3 4 * 4 5 * Copyright (C) 2003 Sean McGoogan (Sean.McGoogan@superh.com) 5 6 * 6 7 * Definitions for the SuperH SH4-202 MicroDev board. 7 - * 8 - * May be copied or modified under the terms of the GNU General Public 9 - * License. See linux/COPYING for more information. 10 8 */ 11 9 #ifndef __ASM_SH_MICRODEV_H 12 10 #define __ASM_SH_MICRODEV_H
+1 -4
arch/sh/include/mach-common/mach/sdk7780.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 1 2 #ifndef __ASM_SH_RENESAS_SDK7780_H 2 3 #define __ASM_SH_RENESAS_SDK7780_H 3 4 ··· 7 6 * 8 7 * Renesas Solutions SH7780 SDK Support 9 8 * Copyright (C) 2008 Nicholas Beck <nbeck@mpc-data.co.uk> 10 - * 11 - * This file is subject to the terms and conditions of the GNU General Public 12 - * License. See the file "COPYING" in the main directory of this archive 13 - * for more details. 14 9 */ 15 10 #include <linux/sh_intc.h> 16 11 #include <asm/addrspace.h>
+2 -4
arch/sh/include/mach-common/mach/secureedge5410.h
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * include/asm-sh/snapgear.h 3 4 * 4 5 * Modified version of io_se.h for the snapgear-specific functions. 5 - * 6 - * May be copied or modified under the terms of the GNU General Public 7 - * License. See linux/COPYING for more information. 8 6 * 9 7 * IO functions for a SnapGear 10 8 */
+1 -5
arch/sh/include/mach-common/mach/sh7763rdp.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 1 2 #ifndef __ASM_SH_SH7763RDP_H 2 3 #define __ASM_SH_SH7763RDP_H 3 4 ··· 7 6 * 8 7 * Copyright (C) 2008 Renesas Solutions 9 8 * Copyright (C) 2008 Nobuhiro Iwamatsu <iwamatsu.nobuhiro@renesas.com> 10 - * 11 - * This file is subject to the terms and conditions of the GNU General Public 12 - * License. See the file "COPYING" in the main directory of this archive 13 - * for more details. 14 - * 15 9 */ 16 10 #include <asm/addrspace.h> 17 11
+2 -5
arch/sh/include/mach-dreamcast/mach/dma.h
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * include/asm-sh/dreamcast/dma.h 3 4 * 4 5 * Copyright (C) 2003 Paul Mundt 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #ifndef __ASM_SH_DREAMCAST_DMA_H 11 8 #define __ASM_SH_DREAMCAST_DMA_H
+2 -5
arch/sh/include/mach-dreamcast/mach/pci.h
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * include/asm-sh/dreamcast/pci.h 3 4 * 4 5 * Copyright (C) 2001, 2002 M. R. Brown 5 6 * Copyright (C) 2002, 2003 Paul Mundt 6 - * 7 - * This file is subject to the terms and conditions of the GNU General Public 8 - * License. See the file "COPYING" in the main directory of this archive 9 - * for more details. 10 7 */ 11 8 #ifndef __ASM_SH_DREAMCAST_PCI_H 12 9 #define __ASM_SH_DREAMCAST_PCI_H
+3 -4
arch/sh/include/mach-dreamcast/mach/sysasic.h
··· 1 - /* include/asm-sh/dreamcast/sysasic.h 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 3 + * include/asm-sh/dreamcast/sysasic.h 2 4 * 3 5 * Definitions for the Dreamcast System ASIC and related peripherals. 4 6 * ··· 8 6 * Copyright (C) 2003 Paul Mundt <lethal@linux-sh.org> 9 7 * 10 8 * This file is part of the LinuxDC project (www.linuxdc.org) 11 - * 12 - * Released under the terms of the GNU GPL v2.0. 13 - * 14 9 */ 15 10 #ifndef __ASM_SH_DREAMCAST_SYSASIC_H 16 11 #define __ASM_SH_DREAMCAST_SYSASIC_H
+1
arch/sh/include/mach-ecovec24/mach/partner-jet-setup.txt
··· 1 + LIST "SPDX-License-Identifier: GPL-2.0" 1 2 LIST "partner-jet-setup.txt" 2 3 LIST "(C) Copyright 2009 Renesas Solutions Corp" 3 4 LIST "Kuninori Morimoto <morimoto.kuninori@renesas.com>"
+1
arch/sh/include/mach-kfr2r09/mach/partner-jet-setup.txt
··· 1 + LIST "SPDX-License-Identifier: GPL-2.0" 1 2 LIST "partner-jet-setup.txt - 20090729 Magnus Damm" 2 3 LIST "set up enough of the kfr2r09 hardware to boot the kernel" 3 4
+2 -6
arch/sh/include/mach-se/mach/se7721.h
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * Copyright (C) 2008 Renesas Solutions Corp. 3 4 * 4 5 * Hitachi UL SolutionEngine 7721 Support. 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 - * 10 6 */ 11 7 12 8 #ifndef __ASM_SH_SE7721_H
+1 -5
arch/sh/include/mach-se/mach/se7722.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 1 2 #ifndef __ASM_SH_SE7722_H 2 3 #define __ASM_SH_SE7722_H 3 4 ··· 8 7 * Copyright (C) 2007 Nobuhiro Iwamatsu 9 8 * 10 9 * Hitachi UL SolutionEngine 7722 Support. 11 - * 12 - * This file is subject to the terms and conditions of the GNU General Public 13 - * License. See the file "COPYING" in the main directory of this archive 14 - * for more details. 15 - * 16 10 */ 17 11 #include <linux/sh_intc.h> 18 12 #include <asm/addrspace.h>
+1 -5
arch/sh/include/mach-se/mach/se7724.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 1 2 #ifndef __ASM_SH_SE7724_H 2 3 #define __ASM_SH_SE7724_H 3 4 ··· 13 12 * 14 13 * Based on se7722.h 15 14 * Copyright (C) 2007 Nobuhiro Iwamatsu 16 - * 17 - * This file is subject to the terms and conditions of the GNU General Public 18 - * License. See the file "COPYING" in the main directory of this archive 19 - * for more details. 20 - * 21 15 */ 22 16 #include <linux/sh_intc.h> 23 17 #include <asm/addrspace.h>
+1 -4
arch/sh/include/mach-se/mach/se7780.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 1 2 #ifndef __ASM_SH_SE7780_H 2 3 #define __ASM_SH_SE7780_H 3 4 ··· 8 7 * Copyright (C) 2006,2007 Nobuhiro Iwamatsu 9 8 * 10 9 * Hitachi UL SolutionEngine 7780 Support. 11 - * 12 - * This file is subject to the terms and conditions of the GNU General Public 13 - * License. See the file "COPYING" in the main directory of this archive 14 - * for more details. 15 10 */ 16 11 #include <linux/sh_intc.h> 17 12 #include <asm/addrspace.h>
+1
arch/sh/include/uapi/asm/Kbuild
··· 1 + # SPDX-License-Identifier: GPL-2.0 1 2 # UAPI Header export list 2 3 include include/uapi/asm-generic/Kbuild.asm 3 4
+1
arch/sh/include/uapi/asm/setup.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 1 2 #include <asm-generic/setup.h>
+1
arch/sh/include/uapi/asm/types.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 1 2 #include <asm-generic/types.h>
+1 -4
arch/sh/kernel/cpu/clock.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/cpu/clock.c - SuperH clock framework 3 4 * ··· 10 9 * Written by Tuukka Tikkanen <tuukka.tikkanen@elektrobit.com> 11 10 * 12 11 * Modified for omap shared clock framework by Tony Lindgren <tony@atomide.com> 13 - * 14 - * This file is subject to the terms and conditions of the GNU General Public 15 - * License. See the file "COPYING" in the main directory of this archive 16 - * for more details. 17 12 */ 18 13 #include <linux/kernel.h> 19 14 #include <linux/init.h>
+1 -4
arch/sh/kernel/cpu/init.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/cpu/init.c 3 4 * ··· 6 5 * 7 6 * Copyright (C) 2002 - 2009 Paul Mundt 8 7 * Copyright (C) 2003 Richard Curnow 9 - * 10 - * This file is subject to the terms and conditions of the GNU General Public 11 - * License. See the file "COPYING" in the main directory of this archive 12 - * for more details. 13 8 */ 14 9 #include <linux/init.h> 15 10 #include <linux/kernel.h>
+1
arch/sh/kernel/cpu/irq/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 1 2 # 2 3 # Makefile for the Linux/SuperH CPU-specific IRQ handlers. 3 4 #
+1 -4
arch/sh/kernel/cpu/irq/intc-sh5.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/cpu/irq/intc-sh5.c 3 4 * ··· 10 9 * Per-interrupt selective. IRLM=0 (Fixed priority) is not 11 10 * supported being useless without a cascaded interrupt 12 11 * controller. 13 - * 14 - * This file is subject to the terms and conditions of the GNU General Public 15 - * License. See the file "COPYING" in the main directory of this archive 16 - * for more details. 17 12 */ 18 13 #include <linux/init.h> 19 14 #include <linux/interrupt.h>
+1 -4
arch/sh/kernel/cpu/irq/ipr.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Interrupt handling for IPR-based IRQ. 3 4 * ··· 12 11 * On-chip supporting modules for SH7709/SH7709A/SH7729. 13 12 * Hitachi SolutionEngine external I/O: 14 13 * MS7709SE01, MS7709ASE01, and MS7750SE01 15 - * 16 - * This file is subject to the terms and conditions of the GNU General Public 17 - * License. See the file "COPYING" in the main directory of this archive 18 - * for more details. 19 14 */ 20 15 #include <linux/init.h> 21 16 #include <linux/interrupt.h>
+1 -9
arch/sh/kernel/cpu/pfc.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * SH Pin Function Control Initialization 3 4 * 4 5 * Copyright (C) 2012 Renesas Solutions Corp. 5 - * 6 - * This program is free software; you can redistribute it and/or modify 7 - * it under the terms of the GNU General Public License as published by 8 - * the Free Software Foundation; version 2 of the License. 9 - * 10 - * This program is distributed in the hope that it will be useful, 11 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 12 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 13 - * GNU General Public License for more details. 14 6 */ 15 7 16 8 #include <linux/init.h>
+1
arch/sh/kernel/cpu/sh2/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 1 2 # 2 3 # Makefile for the Linux/SuperH SH-2 backends. 3 4 #
+1 -4
arch/sh/kernel/cpu/sh2/clock-sh7619.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/cpu/sh2/clock-sh7619.c 3 4 * ··· 8 7 * 9 8 * Based on clock-sh4.c 10 9 * Copyright (C) 2005 Paul Mundt 11 - * 12 - * This file is subject to the terms and conditions of the GNU General Public 13 - * License. See the file "COPYING" in the main directory of this archive 14 - * for more details. 15 10 */ 16 11 #include <linux/init.h> 17 12 #include <linux/kernel.h>
+2 -5
arch/sh/kernel/cpu/sh2/entry.S
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * arch/sh/kernel/cpu/sh2/entry.S 3 4 * 4 5 * The SH-2 exception entry 5 6 * 6 7 * Copyright (C) 2005-2008 Yoshinori Sato 7 8 * Copyright (C) 2005 AXE,Inc. 8 - * 9 - * This file is subject to the terms and conditions of the GNU General Public 10 - * License. See the file "COPYING" in the main directory of this archive 11 - * for more details. 12 9 */ 13 10 14 11 #include <linux/linkage.h>
+2 -5
arch/sh/kernel/cpu/sh2/ex.S
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * arch/sh/kernel/cpu/sh2/ex.S 3 4 * 4 5 * The SH-2 exception vector table 5 6 * 6 7 * Copyright (C) 2005 Yoshinori Sato 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 13 10 #include <linux/linkage.h>
+1 -4
arch/sh/kernel/cpu/sh2/probe.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/cpu/sh2/probe.c 3 4 * 4 5 * CPU Subtype Probing for SH-2. 5 6 * 6 7 * Copyright (C) 2002 Paul Mundt 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 #include <linux/init.h> 13 10 #include <linux/of_fdt.h>
+1 -4
arch/sh/kernel/cpu/sh2/setup-sh7619.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * SH7619 Setup 3 4 * 4 5 * Copyright (C) 2006 Yoshinori Sato 5 6 * Copyright (C) 2009 Paul Mundt 6 - * 7 - * This file is subject to the terms and conditions of the GNU General Public 8 - * License. See the file "COPYING" in the main directory of this archive 9 - * for more details. 10 7 */ 11 8 #include <linux/platform_device.h> 12 9 #include <linux/init.h>
+1 -4
arch/sh/kernel/cpu/sh2/smp-j2.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * SMP support for J2 processor 3 4 * 4 5 * Copyright (C) 2015-2016 Smart Energy Instruments, Inc. 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 11 8 #include <linux/smp.h>
+1 -4
arch/sh/kernel/cpu/sh2a/clock-sh7201.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/cpu/sh2a/clock-sh7201.c 3 4 * ··· 8 7 * 9 8 * Based on clock-sh4.c 10 9 * Copyright (C) 2005 Paul Mundt 11 - * 12 - * This file is subject to the terms and conditions of the GNU General Public 13 - * License. See the file "COPYING" in the main directory of this archive 14 - * for more details. 15 10 */ 16 11 #include <linux/init.h> 17 12 #include <linux/kernel.h>
+1 -4
arch/sh/kernel/cpu/sh2a/clock-sh7203.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/cpu/sh2a/clock-sh7203.c 3 4 * ··· 11 10 * 12 11 * Based on clock-sh4.c 13 12 * Copyright (C) 2005 Paul Mundt 14 - * 15 - * This file is subject to the terms and conditions of the GNU General Public 16 - * License. See the file "COPYING" in the main directory of this archive 17 - * for more details. 18 13 */ 19 14 #include <linux/init.h> 20 15 #include <linux/kernel.h>
+1 -4
arch/sh/kernel/cpu/sh2a/clock-sh7206.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/cpu/sh2a/clock-sh7206.c 3 4 * ··· 8 7 * 9 8 * Based on clock-sh4.c 10 9 * Copyright (C) 2005 Paul Mundt 11 - * 12 - * This file is subject to the terms and conditions of the GNU General Public 13 - * License. See the file "COPYING" in the main directory of this archive 14 - * for more details. 15 10 */ 16 11 #include <linux/init.h> 17 12 #include <linux/kernel.h>
+1 -4
arch/sh/kernel/cpu/sh2a/clock-sh7264.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/cpu/sh2a/clock-sh7264.c 3 4 * 4 5 * SH7264 clock framework support 5 6 * 6 7 * Copyright (C) 2012 Phil Edworthy 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 #include <linux/init.h> 13 10 #include <linux/kernel.h>
+1 -4
arch/sh/kernel/cpu/sh2a/clock-sh7269.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/cpu/sh2a/clock-sh7269.c 3 4 * 4 5 * SH7269 clock framework support 5 6 * 6 7 * Copyright (C) 2012 Phil Edworthy 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 #include <linux/init.h> 13 10 #include <linux/kernel.h>
+2 -5
arch/sh/kernel/cpu/sh2a/entry.S
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * arch/sh/kernel/cpu/sh2a/entry.S 3 4 * 4 5 * The SH-2A exception entry 5 6 * 6 7 * Copyright (C) 2008 Yoshinori Sato 7 8 * Based on arch/sh/kernel/cpu/sh2/entry.S 8 - * 9 - * This file is subject to the terms and conditions of the GNU General Public 10 - * License. See the file "COPYING" in the main directory of this archive 11 - * for more details. 12 9 */ 13 10 14 11 #include <linux/linkage.h>
+2 -5
arch/sh/kernel/cpu/sh2a/ex.S
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * arch/sh/kernel/cpu/sh2a/ex.S 3 4 * 4 5 * The SH-2A exception vector table 5 6 * 6 7 * Copyright (C) 2008 Yoshinori Sato 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 13 10 #include <linux/linkage.h>
+1 -4
arch/sh/kernel/cpu/sh2a/fpu.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Save/restore floating point context for signal handlers. 3 4 * 4 5 * Copyright (C) 1999, 2000 Kaz Kojima & Niibe Yutaka 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 * 10 7 * FIXME! These routines can be optimized in big endian case. 11 8 */
+1 -4
arch/sh/kernel/cpu/sh2a/opcode_helper.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/cpu/sh2a/opcode_helper.c 3 4 * 4 5 * Helper for the SH-2A 32-bit opcodes. 5 6 * 6 7 * Copyright (C) 2007 Paul Mundt 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 #include <linux/kernel.h> 13 10
+1 -4
arch/sh/kernel/cpu/sh2a/pinmux-sh7203.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * SH7203 Pinmux 3 4 * 4 5 * Copyright (C) 2008 Magnus Damm 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 11 8 #include <linux/bug.h>
+1 -4
arch/sh/kernel/cpu/sh2a/pinmux-sh7264.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * SH7264 Pinmux 3 4 * 4 5 * Copyright (C) 2012 Renesas Electronics Europe Ltd 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 11 8 #include <linux/bug.h>
+1 -4
arch/sh/kernel/cpu/sh2a/pinmux-sh7269.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * SH7269 Pinmux 3 4 * 4 5 * Copyright (C) 2012 Renesas Electronics Europe Ltd 5 6 * Copyright (C) 2012 Phil Edworthy 6 - * 7 - * This file is subject to the terms and conditions of the GNU General Public 8 - * License. See the file "COPYING" in the main directory of this archive 9 - * for more details. 10 7 */ 11 8 12 9 #include <linux/bug.h>
+1 -4
arch/sh/kernel/cpu/sh2a/probe.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/cpu/sh2a/probe.c 3 4 * 4 5 * CPU Subtype Probing for SH-2A. 5 6 * 6 7 * Copyright (C) 2004 - 2007 Paul Mundt 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 #include <linux/init.h> 13 10 #include <asm/processor.h>
+1 -4
arch/sh/kernel/cpu/sh2a/setup-mxg.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Renesas MX-G (R8A03022BG) Setup 3 4 * 4 5 * Copyright (C) 2008, 2009 Paul Mundt 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #include <linux/platform_device.h> 11 8 #include <linux/init.h>
+1 -4
arch/sh/kernel/cpu/sh2a/setup-sh7201.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * SH7201 setup 3 4 * 4 5 * Copyright (C) 2008 Peter Griffin pgriffin@mpc-data.co.uk 5 6 * Copyright (C) 2009 Paul Mundt 6 - * 7 - * This file is subject to the terms and conditions of the GNU General Public 8 - * License. See the file "COPYING" in the main directory of this archive 9 - * for more details. 10 7 */ 11 8 #include <linux/platform_device.h> 12 9 #include <linux/init.h>
+1 -4
arch/sh/kernel/cpu/sh2a/setup-sh7203.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * SH7203 and SH7263 Setup 3 4 * 4 5 * Copyright (C) 2007 - 2009 Paul Mundt 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #include <linux/platform_device.h> 11 8 #include <linux/init.h>
+1 -4
arch/sh/kernel/cpu/sh2a/setup-sh7206.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * SH7206 Setup 3 4 * 4 5 * Copyright (C) 2006 Yoshinori Sato 5 6 * Copyright (C) 2009 Paul Mundt 6 - * 7 - * This file is subject to the terms and conditions of the GNU General Public 8 - * License. See the file "COPYING" in the main directory of this archive 9 - * for more details. 10 7 */ 11 8 #include <linux/platform_device.h> 12 9 #include <linux/init.h>
+1 -4
arch/sh/kernel/cpu/sh2a/setup-sh7264.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * SH7264 Setup 3 4 * 4 5 * Copyright (C) 2012 Renesas Electronics Europe Ltd 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #include <linux/platform_device.h> 11 8 #include <linux/init.h>
+1 -4
arch/sh/kernel/cpu/sh2a/setup-sh7269.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * SH7269 Setup 3 4 * 4 5 * Copyright (C) 2012 Renesas Electronics Europe Ltd 5 6 * Copyright (C) 2012 Phil Edworthy 6 - * 7 - * This file is subject to the terms and conditions of the GNU General Public 8 - * License. See the file "COPYING" in the main directory of this archive 9 - * for more details. 10 7 */ 11 8 #include <linux/platform_device.h> 12 9 #include <linux/init.h>
+1 -4
arch/sh/kernel/cpu/sh3/clock-sh3.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/cpu/sh3/clock-sh3.c 3 4 * ··· 12 11 * Copyright (C) 2000 Philipp Rumpf <prumpf@tux.org> 13 12 * Copyright (C) 2002, 2003, 2004 Paul Mundt 14 13 * Copyright (C) 2002 M. R. Brown <mrbrown@linux-sh.org> 15 - * 16 - * This file is subject to the terms and conditions of the GNU General Public 17 - * License. See the file "COPYING" in the main directory of this archive 18 - * for more details. 19 14 */ 20 15 #include <linux/init.h> 21 16 #include <linux/kernel.h>
+1 -4
arch/sh/kernel/cpu/sh3/clock-sh7705.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/cpu/sh3/clock-sh7705.c 3 4 * ··· 12 11 * Copyright (C) 2000 Philipp Rumpf <prumpf@tux.org> 13 12 * Copyright (C) 2002, 2003, 2004 Paul Mundt 14 13 * Copyright (C) 2002 M. R. Brown <mrbrown@linux-sh.org> 15 - * 16 - * This file is subject to the terms and conditions of the GNU General Public 17 - * License. See the file "COPYING" in the main directory of this archive 18 - * for more details. 19 14 */ 20 15 #include <linux/init.h> 21 16 #include <linux/kernel.h>
+1 -4
arch/sh/kernel/cpu/sh3/clock-sh7706.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/cpu/sh3/clock-sh7706.c 3 4 * ··· 8 7 * 9 8 * Based on arch/sh/kernel/cpu/sh3/clock-sh7709.c 10 9 * Copyright (C) 2005 Andriy Skulysh 11 - * 12 - * This file is subject to the terms and conditions of the GNU General Public 13 - * License. See the file "COPYING" in the main directory of this archive 14 - * for more details. 15 10 */ 16 11 #include <linux/init.h> 17 12 #include <linux/kernel.h>
+1 -4
arch/sh/kernel/cpu/sh3/clock-sh7709.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/cpu/sh3/clock-sh7709.c 3 4 * ··· 8 7 * 9 8 * Based on arch/sh/kernel/cpu/sh3/clock-sh7705.c 10 9 * Copyright (C) 2005 Paul Mundt 11 - * 12 - * This file is subject to the terms and conditions of the GNU General Public 13 - * License. See the file "COPYING" in the main directory of this archive 14 - * for more details. 15 10 */ 16 11 #include <linux/init.h> 17 12 #include <linux/kernel.h>
+1 -4
arch/sh/kernel/cpu/sh3/clock-sh7710.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/cpu/sh3/clock-sh7710.c 3 4 * ··· 12 11 * Copyright (C) 2000 Philipp Rumpf <prumpf@tux.org> 13 12 * Copyright (C) 2002, 2003, 2004 Paul Mundt 14 13 * Copyright (C) 2002 M. R. Brown <mrbrown@linux-sh.org> 15 - * 16 - * This file is subject to the terms and conditions of the GNU General Public 17 - * License. See the file "COPYING" in the main directory of this archive 18 - * for more details. 19 14 */ 20 15 #include <linux/init.h> 21 16 #include <linux/kernel.h>
+1 -4
arch/sh/kernel/cpu/sh3/clock-sh7712.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/cpu/sh3/clock-sh7712.c 3 4 * ··· 8 7 * 9 8 * Based on arch/sh/kernel/cpu/sh3/clock-sh3.c 10 9 * Copyright (C) 2005 Paul Mundt 11 - * 12 - * This file is subject to the terms and conditions of the GNU General Public 13 - * License. See the file "COPYING" in the main directory of this archive 14 - * for more details. 15 10 */ 16 11 #include <linux/init.h> 17 12 #include <linux/kernel.h>
+2 -5
arch/sh/kernel/cpu/sh3/entry.S
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * arch/sh/kernel/cpu/sh3/entry.S 3 4 * 4 5 * Copyright (C) 1999, 2000, 2002 Niibe Yutaka 5 6 * Copyright (C) 2003 - 2012 Paul Mundt 6 - * 7 - * This file is subject to the terms and conditions of the GNU General Public 8 - * License. See the file "COPYING" in the main directory of this archive 9 - * for more details. 10 7 */ 11 8 #include <linux/sys.h> 12 9 #include <linux/errno.h>
+3 -6
arch/sh/kernel/cpu/sh3/ex.S
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * arch/sh/kernel/cpu/sh3/ex.S 3 4 * 4 5 * The SH-3 and SH-4 exception vector table. 5 - 6 + * 6 7 * Copyright (C) 1999, 2000, 2002 Niibe Yutaka 7 8 * Copyright (C) 2003 - 2008 Paul Mundt 8 - * 9 - * This file is subject to the terms and conditions of the GNU General Public 10 - * License. See the file "COPYING" in the main directory of this archive 11 - * for more details. 12 9 */ 13 10 #include <linux/linkage.h> 14 11
+1 -4
arch/sh/kernel/cpu/sh3/pinmux-sh7720.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * SH7720 Pinmux 3 4 * 4 5 * Copyright (C) 2008 Magnus Damm 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 11 8 #include <linux/bug.h>
+1 -4
arch/sh/kernel/cpu/sh3/probe.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/cpu/sh3/probe.c 3 4 * ··· 6 5 * 7 6 * Copyright (C) 1999, 2000 Niibe Yutaka 8 7 * Copyright (C) 2002 Paul Mundt 9 - * 10 - * This file is subject to the terms and conditions of the GNU General Public 11 - * License. See the file "COPYING" in the main directory of this archive 12 - * for more details. 13 8 */ 14 9 15 10 #include <linux/init.h>
+1 -4
arch/sh/kernel/cpu/sh3/setup-sh3.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Shared SH3 Setup code 3 4 * 4 5 * Copyright (C) 2008 Magnus Damm 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 11 8 #include <linux/init.h>
+1 -4
arch/sh/kernel/cpu/sh3/setup-sh7705.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * SH7705 Setup 3 4 * 4 5 * Copyright (C) 2006 - 2009 Paul Mundt 5 6 * Copyright (C) 2007 Nobuhiro Iwamatsu 6 - * 7 - * This file is subject to the terms and conditions of the GNU General Public 8 - * License. See the file "COPYING" in the main directory of this archive 9 - * for more details. 10 7 */ 11 8 #include <linux/platform_device.h> 12 9 #include <linux/init.h>
+1 -4
arch/sh/kernel/cpu/sh3/setup-sh770x.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * SH3 Setup code for SH7706, SH7707, SH7708, SH7709 3 4 * ··· 8 7 * Based on setup-sh7709.c 9 8 * 10 9 * Copyright (C) 2006 Paul Mundt 11 - * 12 - * This file is subject to the terms and conditions of the GNU General Public 13 - * License. See the file "COPYING" in the main directory of this archive 14 - * for more details. 15 10 */ 16 11 #include <linux/init.h> 17 12 #include <linux/io.h>
+1 -4
arch/sh/kernel/cpu/sh3/setup-sh7710.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * SH3 Setup code for SH7710, SH7712 3 4 * 4 5 * Copyright (C) 2006 - 2009 Paul Mundt 5 6 * Copyright (C) 2007 Nobuhiro Iwamatsu 6 - * 7 - * This file is subject to the terms and conditions of the GNU General Public 8 - * License. See the file "COPYING" in the main directory of this archive 9 - * for more details. 10 7 */ 11 8 #include <linux/platform_device.h> 12 9 #include <linux/init.h>
+1 -4
arch/sh/kernel/cpu/sh3/setup-sh7720.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Setup code for SH7720, SH7721. 3 4 * ··· 9 8 * 10 9 * Copyright (C) 2006 Paul Mundt 11 10 * Copyright (C) 2006 Jamie Lenehan 12 - * 13 - * This file is subject to the terms and conditions of the GNU General Public 14 - * License. See the file "COPYING" in the main directory of this archive 15 - * for more details. 16 11 */ 17 12 #include <linux/platform_device.h> 18 13 #include <linux/init.h>
+2 -5
arch/sh/kernel/cpu/sh3/swsusp.S
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * arch/sh/kernel/cpu/sh3/swsusp.S 3 4 * 4 5 * Copyright (C) 2009 Magnus Damm 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #include <linux/sys.h> 11 8 #include <linux/errno.h>
+1 -4
arch/sh/kernel/cpu/sh4/clock-sh4-202.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/cpu/sh4/clock-sh4-202.c 3 4 * 4 5 * Additional SH4-202 support for the clock framework 5 6 * 6 7 * Copyright (C) 2005 Paul Mundt 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 #include <linux/init.h> 13 10 #include <linux/kernel.h>
+1 -4
arch/sh/kernel/cpu/sh4/clock-sh4.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/cpu/sh4/clock-sh4.c 3 4 * ··· 12 11 * Copyright (C) 2000 Philipp Rumpf <prumpf@tux.org> 13 12 * Copyright (C) 2002, 2003, 2004 Paul Mundt 14 13 * Copyright (C) 2002 M. R. Brown <mrbrown@linux-sh.org> 15 - * 16 - * This file is subject to the terms and conditions of the GNU General Public 17 - * License. See the file "COPYING" in the main directory of this archive 18 - * for more details. 19 14 */ 20 15 #include <linux/init.h> 21 16 #include <linux/kernel.h>
+1 -4
arch/sh/kernel/cpu/sh4/fpu.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Save/restore floating point context for signal handlers. 3 - * 4 - * This file is subject to the terms and conditions of the GNU General Public 5 - * License. See the file "COPYING" in the main directory of this archive 6 - * for more details. 7 4 * 8 5 * Copyright (C) 1999, 2000 Kaz Kojima & Niibe Yutaka 9 6 * Copyright (C) 2006 ST Microelectronics Ltd. (denorm support)
+1 -4
arch/sh/kernel/cpu/sh4/perf_event.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Performance events support for SH7750-style performance counters 3 4 * 4 5 * Copyright (C) 2009 Paul Mundt 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #include <linux/kernel.h> 11 8 #include <linux/init.h>
+1 -4
arch/sh/kernel/cpu/sh4/probe.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/cpu/sh4/probe.c 3 4 * ··· 6 5 * 7 6 * Copyright (C) 2001 - 2007 Paul Mundt 8 7 * Copyright (C) 2003 Richard Curnow 9 - * 10 - * This file is subject to the terms and conditions of the GNU General Public 11 - * License. See the file "COPYING" in the main directory of this archive 12 - * for more details. 13 8 */ 14 9 #include <linux/init.h> 15 10 #include <linux/io.h>
+1 -4
arch/sh/kernel/cpu/sh4/setup-sh4-202.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * SH4-202 Setup 3 4 * 4 5 * Copyright (C) 2006 Paul Mundt 5 6 * Copyright (C) 2009 Magnus Damm 6 - * 7 - * This file is subject to the terms and conditions of the GNU General Public 8 - * License. See the file "COPYING" in the main directory of this archive 9 - * for more details. 10 7 */ 11 8 #include <linux/platform_device.h> 12 9 #include <linux/init.h>
+1 -4
arch/sh/kernel/cpu/sh4/setup-sh7750.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * SH7091/SH7750/SH7750S/SH7750R/SH7751/SH7751R Setup 3 4 * 4 5 * Copyright (C) 2006 Paul Mundt 5 6 * Copyright (C) 2006 Jamie Lenehan 6 - * 7 - * This file is subject to the terms and conditions of the GNU General Public 8 - * License. See the file "COPYING" in the main directory of this archive 9 - * for more details. 10 7 */ 11 8 #include <linux/platform_device.h> 12 9 #include <linux/init.h>
+1 -4
arch/sh/kernel/cpu/sh4/setup-sh7760.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * SH7760 Setup 3 4 * 4 5 * Copyright (C) 2006 Paul Mundt 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #include <linux/platform_device.h> 11 8 #include <linux/init.h>
+1 -4
arch/sh/kernel/cpu/sh4/sq.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/cpu/sh4/sq.c 3 4 * ··· 6 5 * 7 6 * Copyright (C) 2001 - 2006 Paul Mundt 8 7 * Copyright (C) 2001, 2002 M. R. Brown 9 - * 10 - * This file is subject to the terms and conditions of the GNU General Public 11 - * License. See the file "COPYING" in the main directory of this archive 12 - * for more details. 13 8 */ 14 9 #include <linux/init.h> 15 10 #include <linux/cpu.h>
+1 -13
arch/sh/kernel/cpu/sh4a/clock-sh7343.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/cpu/sh4a/clock-sh7343.c 3 4 * 4 5 * SH7343 clock framework support 5 6 * 6 7 * Copyright (C) 2009 Magnus Damm 7 - * 8 - * This program is free software; you can redistribute it and/or modify 9 - * it under the terms of the GNU General Public License as published by 10 - * the Free Software Foundation; either version 2 of the License 11 - * 12 - * This program is distributed in the hope that it will be useful, 13 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 14 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 15 - * GNU General Public License for more details. 16 - * 17 - * You should have received a copy of the GNU General Public License 18 - * along with this program; if not, write to the Free Software 19 - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 20 8 */ 21 9 #include <linux/init.h> 22 10 #include <linux/kernel.h>
+1 -13
arch/sh/kernel/cpu/sh4a/clock-sh7366.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/cpu/sh4a/clock-sh7366.c 3 4 * 4 5 * SH7366 clock framework support 5 6 * 6 7 * Copyright (C) 2009 Magnus Damm 7 - * 8 - * This program is free software; you can redistribute it and/or modify 9 - * it under the terms of the GNU General Public License as published by 10 - * the Free Software Foundation; either version 2 of the License 11 - * 12 - * This program is distributed in the hope that it will be useful, 13 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 14 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 15 - * GNU General Public License for more details. 16 - * 17 - * You should have received a copy of the GNU General Public License 18 - * along with this program; if not, write to the Free Software 19 - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 20 8 */ 21 9 #include <linux/init.h> 22 10 #include <linux/kernel.h>
+1 -13
arch/sh/kernel/cpu/sh4a/clock-sh7722.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/cpu/sh4a/clock-sh7722.c 3 4 * 4 5 * SH7722 clock framework support 5 6 * 6 7 * Copyright (C) 2009 Magnus Damm 7 - * 8 - * This program is free software; you can redistribute it and/or modify 9 - * it under the terms of the GNU General Public License as published by 10 - * the Free Software Foundation; either version 2 of the License 11 - * 12 - * This program is distributed in the hope that it will be useful, 13 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 14 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 15 - * GNU General Public License for more details. 16 - * 17 - * You should have received a copy of the GNU General Public License 18 - * along with this program; if not, write to the Free Software 19 - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 20 8 */ 21 9 #include <linux/init.h> 22 10 #include <linux/kernel.h>
+1 -13
arch/sh/kernel/cpu/sh4a/clock-sh7723.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/cpu/sh4a/clock-sh7723.c 3 4 * 4 5 * SH7723 clock framework support 5 6 * 6 7 * Copyright (C) 2009 Magnus Damm 7 - * 8 - * This program is free software; you can redistribute it and/or modify 9 - * it under the terms of the GNU General Public License as published by 10 - * the Free Software Foundation; either version 2 of the License 11 - * 12 - * This program is distributed in the hope that it will be useful, 13 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 14 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 15 - * GNU General Public License for more details. 16 - * 17 - * You should have received a copy of the GNU General Public License 18 - * along with this program; if not, write to the Free Software 19 - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 20 8 */ 21 9 #include <linux/init.h> 22 10 #include <linux/kernel.h>
+1 -13
arch/sh/kernel/cpu/sh4a/clock-sh7724.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/cpu/sh4a/clock-sh7724.c 3 4 * 4 5 * SH7724 clock framework support 5 6 * 6 7 * Copyright (C) 2009 Magnus Damm 7 - * 8 - * This program is free software; you can redistribute it and/or modify 9 - * it under the terms of the GNU General Public License as published by 10 - * the Free Software Foundation; either version 2 of the License 11 - * 12 - * This program is distributed in the hope that it will be useful, 13 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 14 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 15 - * GNU General Public License for more details. 16 - * 17 - * You should have received a copy of the GNU General Public License 18 - * along with this program; if not, write to the Free Software 19 - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 20 8 */ 21 9 #include <linux/init.h> 22 10 #include <linux/kernel.h>
+1 -4
arch/sh/kernel/cpu/sh4a/clock-sh7734.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/cpu/sh4a/clock-sh7734.c 3 4 * ··· 6 5 * 7 6 * Copyright (C) 2011, 2012 Nobuhiro Iwamatsu <nobuhiro.iwamatsu.yj@renesas.com> 8 7 * Copyright (C) 2011, 2012 Renesas Solutions Corp. 9 - * 10 - * This file is subject to the terms and conditions of the GNU General Public 11 - * License. See the file "COPYING" in the main directory of this archive 12 - * for more details. 13 8 */ 14 9 15 10 #include <linux/init.h>
+1 -4
arch/sh/kernel/cpu/sh4a/clock-sh7757.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/cpu/sh4/clock-sh7757.c 3 4 * 4 5 * SH7757 support for the clock framework 5 6 * 6 7 * Copyright (C) 2009-2010 Renesas Solutions Corp. 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 #include <linux/init.h> 13 10 #include <linux/kernel.h>
+1 -4
arch/sh/kernel/cpu/sh4a/clock-sh7763.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/cpu/sh4a/clock-sh7763.c 3 4 * ··· 6 5 * 7 6 * Copyright (C) 2005 Paul Mundt 8 7 * Copyright (C) 2007 Yoshihiro Shimoda 9 - * 10 - * This file is subject to the terms and conditions of the GNU General Public 11 - * License. See the file "COPYING" in the main directory of this archive 12 - * for more details. 13 8 */ 14 9 #include <linux/init.h> 15 10 #include <linux/kernel.h>
+1 -4
arch/sh/kernel/cpu/sh4a/clock-sh7770.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/cpu/sh4a/clock-sh7770.c 3 4 * 4 5 * SH7770 support for the clock framework 5 6 * 6 7 * Copyright (C) 2005 Paul Mundt 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 #include <linux/init.h> 13 10 #include <linux/kernel.h>
+1 -4
arch/sh/kernel/cpu/sh4a/clock-sh7780.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/cpu/sh4a/clock-sh7780.c 3 4 * 4 5 * SH7780 support for the clock framework 5 6 * 6 7 * Copyright (C) 2005 Paul Mundt 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 #include <linux/init.h> 13 10 #include <linux/kernel.h>
+1 -4
arch/sh/kernel/cpu/sh4a/clock-sh7785.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/cpu/sh4a/clock-sh7785.c 3 4 * 4 5 * SH7785 support for the clock framework 5 6 * 6 7 * Copyright (C) 2007 - 2010 Paul Mundt 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 #include <linux/init.h> 13 10 #include <linux/kernel.h>
+1 -4
arch/sh/kernel/cpu/sh4a/clock-sh7786.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/cpu/sh4a/clock-sh7786.c 3 4 * 4 5 * SH7786 support for the clock framework 5 6 * 6 7 * Copyright (C) 2010 Paul Mundt 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 #include <linux/init.h> 13 10 #include <linux/kernel.h>
+1 -4
arch/sh/kernel/cpu/sh4a/clock-shx3.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/cpu/sh4/clock-shx3.c 3 4 * ··· 7 6 * Copyright (C) 2006-2007 Renesas Technology Corp. 8 7 * Copyright (C) 2006-2007 Renesas Solutions Corp. 9 8 * Copyright (C) 2006-2010 Paul Mundt 10 - * 11 - * This file is subject to the terms and conditions of the GNU General Public 12 - * License. See the file "COPYING" in the main directory of this archive 13 - * for more details. 14 9 */ 15 10 #include <linux/init.h> 16 11 #include <linux/kernel.h>
+1 -4
arch/sh/kernel/cpu/sh4a/intc-shx3.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Shared support for SH-X3 interrupt controllers. 3 4 * 4 5 * Copyright (C) 2009 - 2010 Paul Mundt 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #include <linux/irq.h> 11 8 #include <linux/io.h>
+1 -4
arch/sh/kernel/cpu/sh4a/perf_event.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Performance events support for SH-4A performance counters 3 4 * 4 5 * Copyright (C) 2009, 2010 Paul Mundt 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #include <linux/kernel.h> 11 8 #include <linux/init.h>
+1 -4
arch/sh/kernel/cpu/sh4a/pinmux-sh7723.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * SH7723 Pinmux 3 4 * 4 5 * Copyright (C) 2008 Magnus Damm 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 11 8 #include <linux/bug.h>
+1 -4
arch/sh/kernel/cpu/sh4a/pinmux-sh7724.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * SH7724 Pinmux 3 4 * ··· 8 7 * 9 8 * Based on SH7723 Pinmux 10 9 * Copyright (C) 2008 Magnus Damm 11 - * 12 - * This file is subject to the terms and conditions of the GNU General Public 13 - * License. See the file "COPYING" in the main directory of this archive 14 - * for more details. 15 10 */ 16 11 17 12 #include <linux/bug.h>
+1 -4
arch/sh/kernel/cpu/sh4a/pinmux-sh7734.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * SH7734 processor support - PFC hardware block 3 4 * 4 5 * Copyright (C) 2012 Renesas Solutions Corp. 5 6 * Copyright (C) 2012 Nobuhiro Iwamatsu <nobuhiro.iwamatsu.yj@renesas.com> 6 - * 7 - * This file is subject to the terms and conditions of the GNU General Public 8 - * License. See the file "COPYING" in the main directory of this archive 9 - * for more details. 10 7 */ 11 8 #include <linux/bug.h> 12 9 #include <linux/init.h>
+1 -4
arch/sh/kernel/cpu/sh4a/pinmux-sh7757.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * SH7757 (B0 step) Pinmux 3 4 * ··· 8 7 * 9 8 * Based on SH7723 Pinmux 10 9 * Copyright (C) 2008 Magnus Damm 11 - * 12 - * This file is subject to the terms and conditions of the GNU General Public 13 - * License. See the file "COPYING" in the main directory of this archive 14 - * for more details. 15 10 */ 16 11 17 12 #include <linux/bug.h>
+1 -4
arch/sh/kernel/cpu/sh4a/pinmux-sh7785.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * SH7785 Pinmux 3 4 * 4 5 * Copyright (C) 2008 Magnus Damm 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 11 8 #include <linux/bug.h>
+1 -4
arch/sh/kernel/cpu/sh4a/pinmux-sh7786.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * SH7786 Pinmux 3 4 * ··· 8 7 * Based on SH7785 pinmux 9 8 * 10 9 * Copyright (C) 2008 Magnus Damm 11 - * 12 - * This file is subject to the terms and conditions of the GNU General Public 13 - * License. See the file "COPYING" in the main directory of this archive 14 - * for more details. 15 10 */ 16 11 17 12 #include <linux/bug.h>
+1 -4
arch/sh/kernel/cpu/sh4a/pinmux-shx3.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * SH-X3 prototype CPU pinmux 3 4 * 4 5 * Copyright (C) 2010 Paul Mundt 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #include <linux/bug.h> 11 8 #include <linux/init.h>
+1 -4
arch/sh/kernel/cpu/sh4a/setup-sh7343.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * SH7343 Setup 3 4 * 4 5 * Copyright (C) 2006 Paul Mundt 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #include <linux/platform_device.h> 11 8 #include <linux/init.h>
+1 -4
arch/sh/kernel/cpu/sh4a/setup-sh7366.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * SH7366 Setup 3 4 * 4 5 * Copyright (C) 2008 Renesas Solutions 5 6 * 6 7 * Based on linux/arch/sh/kernel/cpu/sh4a/setup-sh7722.c 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 #include <linux/platform_device.h> 13 10 #include <linux/init.h>
+1 -4
arch/sh/kernel/cpu/sh4a/setup-sh7722.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * SH7722 Setup 3 4 * 4 5 * Copyright (C) 2006 - 2008 Paul Mundt 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #include <linux/init.h> 11 8 #include <linux/mm.h>
+1 -4
arch/sh/kernel/cpu/sh4a/setup-sh7723.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * SH7723 Setup 3 4 * 4 5 * Copyright (C) 2008 Paul Mundt 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #include <linux/platform_device.h> 11 8 #include <linux/init.h>
+1 -4
arch/sh/kernel/cpu/sh4a/setup-sh7724.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * SH7724 Setup 3 4 * ··· 8 7 * 9 8 * Based on SH7723 Setup 10 9 * Copyright (C) 2008 Paul Mundt 11 - * 12 - * This file is subject to the terms and conditions of the GNU General Public 13 - * License. See the file "COPYING" in the main directory of this archive 14 - * for more details. 15 10 */ 16 11 #include <linux/platform_device.h> 17 12 #include <linux/init.h>
+2 -5
arch/sh/kernel/cpu/sh4a/setup-sh7734.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/cpu/sh4a/setup-sh7734.c 3 - 4 + * 4 5 * SH7734 Setup 5 6 * 6 7 * Copyright (C) 2011,2012 Nobuhiro Iwamatsu <nobuhiro.iwamatsu.yj@renesas.com> 7 8 * Copyright (C) 2011,2012 Renesas Solutions Corp. 8 - * 9 - * This file is subject to the terms and conditions of the GNU General Public 10 - * License. See the file "COPYING" in the main directory of this archive 11 - * for more details. 12 9 */ 13 10 14 11 #include <linux/platform_device.h>
+1 -4
arch/sh/kernel/cpu/sh4a/setup-sh7757.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * SH7757 Setup 3 4 * 4 5 * Copyright (C) 2009, 2011 Renesas Solutions Corp. 5 6 * 6 7 * based on setup-sh7785.c : Copyright (C) 2007 Paul Mundt 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 #include <linux/platform_device.h> 13 10 #include <linux/init.h>
+1 -4
arch/sh/kernel/cpu/sh4a/setup-sh7763.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * SH7763 Setup 3 4 * 4 5 * Copyright (C) 2006 Paul Mundt 5 6 * Copyright (C) 2007 Yoshihiro Shimoda 6 7 * Copyright (C) 2008, 2009 Nobuhiro Iwamatsu 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 #include <linux/platform_device.h> 13 10 #include <linux/init.h>
+1 -4
arch/sh/kernel/cpu/sh4a/setup-sh7770.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * SH7770 Setup 3 4 * 4 5 * Copyright (C) 2006 - 2008 Paul Mundt 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #include <linux/platform_device.h> 11 8 #include <linux/init.h>
+1 -4
arch/sh/kernel/cpu/sh4a/setup-sh7780.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * SH7780 Setup 3 4 * 4 5 * Copyright (C) 2006 Paul Mundt 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #include <linux/platform_device.h> 11 8 #include <linux/init.h>
+1 -4
arch/sh/kernel/cpu/sh4a/setup-sh7785.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * SH7785 Setup 3 4 * 4 5 * Copyright (C) 2007 Paul Mundt 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #include <linux/platform_device.h> 11 8 #include <linux/init.h>
+1 -4
arch/sh/kernel/cpu/sh4a/setup-sh7786.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * SH7786 Setup 3 4 * ··· 9 8 * Based on SH7785 Setup 10 9 * 11 10 * Copyright (C) 2007 Paul Mundt 12 - * 13 - * This file is subject to the terms and conditions of the GNU General Public 14 - * License. See the file "COPYING" in the main directory of this archive 15 - * for more details. 16 11 */ 17 12 #include <linux/platform_device.h> 18 13 #include <linux/init.h>
+1 -4
arch/sh/kernel/cpu/sh4a/setup-shx3.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * SH-X3 Prototype Setup 3 4 * 4 5 * Copyright (C) 2007 - 2010 Paul Mundt 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #include <linux/platform_device.h> 11 8 #include <linux/init.h>
+1 -4
arch/sh/kernel/cpu/sh4a/smp-shx3.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * SH-X3 SMP 3 4 * 4 5 * Copyright (C) 2007 - 2010 Paul Mundt 5 6 * Copyright (C) 2007 Magnus Damm 6 - * 7 - * This file is subject to the terms and conditions of the GNU General Public 8 - * License. See the file "COPYING" in the main directory of this archive 9 - * for more details. 10 7 */ 11 8 #include <linux/init.h> 12 9 #include <linux/kernel.h>
+1 -4
arch/sh/kernel/cpu/sh4a/ubc.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/cpu/sh4a/ubc.c 3 4 * 4 5 * On-chip UBC support for SH-4A CPUs. 5 6 * 6 7 * Copyright (C) 2009 - 2010 Paul Mundt 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 #include <linux/init.h> 13 10 #include <linux/err.h>
+1 -4
arch/sh/kernel/cpu/sh5/clock-sh5.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/cpu/sh5/clock-sh5.c 3 4 * 4 5 * SH-5 support for the clock framework 5 6 * 6 7 * Copyright (C) 2008 Paul Mundt 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 #include <linux/init.h> 13 10 #include <linux/kernel.h>
+2 -5
arch/sh/kernel/cpu/sh5/entry.S
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * arch/sh/kernel/cpu/sh5/entry.S 3 4 * 4 5 * Copyright (C) 2000, 2001 Paolo Alberelli 5 6 * Copyright (C) 2004 - 2008 Paul Mundt 6 7 * Copyright (C) 2003, 2004 Richard Curnow 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 #include <linux/errno.h> 13 10 #include <linux/init.h>
+1 -4
arch/sh/kernel/cpu/sh5/fpu.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/cpu/sh5/fpu.c 3 4 * ··· 8 7 * 9 8 * Started from SH4 version: 10 9 * Copyright (C) 1999, 2000 Kaz Kojima & Niibe Yutaka 11 - * 12 - * This file is subject to the terms and conditions of the GNU General Public 13 - * License. See the file "COPYING" in the main directory of this archive 14 - * for more details. 15 10 */ 16 11 #include <linux/sched.h> 17 12 #include <linux/signal.h>
+1 -4
arch/sh/kernel/cpu/sh5/probe.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/cpu/sh5/probe.c 3 4 * ··· 6 5 * 7 6 * Copyright (C) 2000, 2001 Paolo Alberelli 8 7 * Copyright (C) 2003 - 2007 Paul Mundt 9 - * 10 - * This file is subject to the terms and conditions of the GNU General Public 11 - * License. See the file "COPYING" in the main directory of this archive 12 - * for more details. 13 8 */ 14 9 #include <linux/init.h> 15 10 #include <linux/io.h>
+1 -4
arch/sh/kernel/cpu/sh5/setup-sh5.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * SH5-101/SH5-103 CPU Setup 3 4 * 4 5 * Copyright (C) 2009 Paul Mundt 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #include <linux/platform_device.h> 11 8 #include <linux/init.h>
+2 -5
arch/sh/kernel/cpu/sh5/switchto.S
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * arch/sh/kernel/cpu/sh5/switchto.S 3 4 * 4 5 * sh64 context switch 5 6 * 6 7 * Copyright (C) 2004 Richard Curnow 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 13 10 .section .text..SHmedia32,"ax"
+1 -4
arch/sh/kernel/cpu/sh5/unwind.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/cpu/sh5/unwind.c 3 4 * 4 5 * Copyright (C) 2004 Paul Mundt 5 6 * Copyright (C) 2004 Richard Curnow 6 - * 7 - * This file is subject to the terms and conditions of the GNU General Public 8 - * License. See the file "COPYING" in the main directory of this archive 9 - * for more details. 10 7 */ 11 8 #include <linux/kallsyms.h> 12 9 #include <linux/kernel.h>
+1
arch/sh/kernel/cpu/shmobile/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 1 2 # 2 3 # Makefile for the Linux/SuperH SH-Mobile backends. 3 4 #
+1 -4
arch/sh/kernel/cpu/shmobile/cpuidle.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/cpu/shmobile/cpuidle.c 3 4 * 4 5 * Cpuidle support code for SuperH Mobile 5 6 * 6 7 * Copyright (C) 2009 Magnus Damm 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 #include <linux/init.h> 13 10 #include <linux/kernel.h>
+1 -4
arch/sh/kernel/cpu/shmobile/pm.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/cpu/shmobile/pm.c 3 4 * 4 5 * Power management support code for SuperH Mobile 5 6 * 6 7 * Copyright (C) 2009 Magnus Damm 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 #include <linux/init.h> 13 10 #include <linux/kernel.h>
+2 -5
arch/sh/kernel/cpu/shmobile/sleep.S
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * arch/sh/kernel/cpu/sh4a/sleep-sh_mobile.S 3 4 * 4 5 * Sleep mode and Standby modes support for SuperH Mobile 5 6 * 6 7 * Copyright (C) 2009 Magnus Damm 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 13 10 #include <linux/sys.h>
+2 -5
arch/sh/kernel/debugtraps.S
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * arch/sh/kernel/debugtraps.S 3 4 * 4 5 * Debug trap jump tables for SuperH 5 6 * 6 7 * Copyright (C) 2006 - 2008 Paul Mundt 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 #include <linux/sys.h> 13 10 #include <linux/linkage.h>
+1 -4
arch/sh/kernel/disassemble.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Disassemble SuperH instructions. 3 4 * 4 5 * Copyright (C) 1999 kaz Kojima 5 6 * Copyright (C) 2008 Paul Mundt 6 - * 7 - * This file is subject to the terms and conditions of the GNU General Public 8 - * License. See the file "COPYING" in the main directory of this archive 9 - * for more details. 10 7 */ 11 8 #include <linux/kernel.h> 12 9 #include <linux/string.h>
+1 -4
arch/sh/kernel/dma-coherent.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Copyright (C) 2004 - 2007 Paul Mundt 3 - * 4 - * This file is subject to the terms and conditions of the GNU General Public 5 - * License. See the file "COPYING" in the main directory of this archive 6 - * for more details. 7 4 */ 8 5 #include <linux/mm.h> 9 6 #include <linux/init.h>
+1 -4
arch/sh/kernel/dumpstack.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Copyright (C) 1991, 1992 Linus Torvalds 3 4 * Copyright (C) 2000, 2001, 2002 Andi Kleen, SuSE Labs 4 5 * Copyright (C) 2009 Matt Fleming 5 6 * Copyright (C) 2002 - 2012 Paul Mundt 6 - * 7 - * This file is subject to the terms and conditions of the GNU General Public 8 - * License. See the file "COPYING" in the main directory of this archive 9 - * for more details. 10 7 */ 11 8 #include <linux/kallsyms.h> 12 9 #include <linux/ftrace.h>
+1 -4
arch/sh/kernel/dwarf.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Copyright (C) 2009 Matt Fleming <matt@console-pimps.org> 3 - * 4 - * This file is subject to the terms and conditions of the GNU General Public 5 - * License. See the file "COPYING" in the main directory of this archive 6 - * for more details. 7 4 * 8 5 * This is an implementation of a DWARF unwinder. Its main purpose is 9 6 * for generating stacktrace information. Based on the DWARF 3
+2 -6
arch/sh/kernel/entry-common.S
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * Copyright (C) 1999, 2000, 2002 Niibe Yutaka 3 4 * Copyright (C) 2003 - 2008 Paul Mundt 4 - * 5 - * This file is subject to the terms and conditions of the GNU General Public 6 - * License. See the file "COPYING" in the main directory of this archive 7 - * for more details. 8 - * 9 5 */ 10 6 11 7 ! NOTE:
+2 -5
arch/sh/kernel/head_32.S
··· 1 - /* $Id: head.S,v 1.7 2003/09/01 17:58:19 lethal Exp $ 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * $Id: head.S,v 1.7 2003/09/01 17:58:19 lethal Exp $ 2 3 * 3 4 * arch/sh/kernel/head.S 4 5 * 5 6 * Copyright (C) 1999, 2000 Niibe Yutaka & Kaz Kojima 6 7 * Copyright (C) 2010 Matt Fleming 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 * 12 9 * Head.S contains the SH exception handlers and startup code. 13 10 */
+2 -5
arch/sh/kernel/head_64.S
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * arch/sh/kernel/head_64.S 3 4 * 4 5 * Copyright (C) 2000, 2001 Paolo Alberelli 5 6 * Copyright (C) 2003, 2004 Paul Mundt 6 - * 7 - * This file is subject to the terms and conditions of the GNU General Public 8 - * License. See the file "COPYING" in the main directory of this archive 9 - * for more details. 10 7 */ 11 8 12 9 #include <linux/init.h>
+1 -4
arch/sh/kernel/hw_breakpoint.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/hw_breakpoint.c 3 4 * 4 5 * Unified kernel/user-space hardware breakpoint facility for the on-chip UBC. 5 6 * 6 7 * Copyright (C) 2009 - 2010 Paul Mundt 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 #include <linux/init.h> 13 10 #include <linux/perf_event.h>
+1 -4
arch/sh/kernel/idle.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * The idle loop for all SuperH platforms. 3 4 * 4 5 * Copyright (C) 2002 - 2009 Paul Mundt 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #include <linux/module.h> 11 8 #include <linux/init.h>
+1 -4
arch/sh/kernel/io.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/io.c - Machine independent I/O functions. 3 4 * 4 5 * Copyright (C) 2000 - 2009 Stuart Menefy 5 6 * Copyright (C) 2005 Paul Mundt 6 - * 7 - * This file is subject to the terms and conditions of the GNU General Public 8 - * License. See the file "COPYING" in the main directory of this archive 9 - * for more details. 10 7 */ 11 8 #include <linux/module.h> 12 9 #include <linux/pci.h>
+1 -4
arch/sh/kernel/io_trapped.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Trapped io support 3 4 * 4 5 * Copyright (C) 2008 Magnus Damm 5 6 * 6 7 * Intercept io operations by trapping. 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 #include <linux/kernel.h> 13 10 #include <linux/mm.h>
+1 -4
arch/sh/kernel/iomap.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/iomap.c 3 4 * 4 5 * Copyright (C) 2000 Niibe Yutaka 5 6 * Copyright (C) 2005 - 2007 Paul Mundt 6 - * 7 - * This file is subject to the terms and conditions of the GNU General Public 8 - * License. See the file "COPYING" in the main directory of this archive 9 - * for more details. 10 7 */ 11 8 #include <linux/module.h> 12 9 #include <linux/io.h>
+1 -4
arch/sh/kernel/ioport.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/ioport.c 3 4 * 4 5 * Copyright (C) 2000 Niibe Yutaka 5 6 * Copyright (C) 2005 - 2007 Paul Mundt 6 - * 7 - * This file is subject to the terms and conditions of the GNU General Public 8 - * License. See the file "COPYING" in the main directory of this archive 9 - * for more details. 10 7 */ 11 8 #include <linux/module.h> 12 9 #include <linux/io.h>
+1 -4
arch/sh/kernel/irq_32.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * SHcompact irqflags support 3 4 * 4 5 * Copyright (C) 2006 - 2009 Paul Mundt 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #include <linux/irqflags.h> 11 8 #include <linux/module.h>
+1 -4
arch/sh/kernel/irq_64.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * SHmedia irqflags support 3 4 * 4 5 * Copyright (C) 2006 - 2009 Paul Mundt 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #include <linux/irqflags.h> 11 8 #include <linux/module.h>
+1 -4
arch/sh/kernel/kgdb.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * SuperH KGDB support 3 4 * 4 5 * Copyright (C) 2008 - 2012 Paul Mundt 5 6 * 6 7 * Single stepping taken from the old stub by Henry Bell and Jeremy Siegel. 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 #include <linux/kgdb.h> 13 10 #include <linux/kdebug.h>
+1 -4
arch/sh/kernel/kprobes.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Kernel probes (kprobes) for SuperH 3 4 * 4 5 * Copyright (C) 2007 Chris Smith <chris.smith@st.com> 5 6 * Copyright (C) 2006 Lineo Solutions, Inc. 6 - * 7 - * This file is subject to the terms and conditions of the GNU General Public 8 - * License. See the file "COPYING" in the main directory of this archive 9 - * for more details. 10 7 */ 11 8 #include <linux/kprobes.h> 12 9 #include <linux/extable.h>
+1 -3
arch/sh/kernel/machine_kexec.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * machine_kexec.c - handle transition of Linux booting another kernel 3 4 * Copyright (C) 2002-2003 Eric Biederman <ebiederm@xmission.com> 4 5 * 5 6 * GameCube/ppc32 port Copyright (C) 2004 Albert Herranz 6 7 * LANDISK/sh4 supported by kogiidena 7 - * 8 - * This source code is licensed under the GNU General Public License, 9 - * Version 2. See the file COPYING for more details. 10 8 */ 11 9 #include <linux/mm.h> 12 10 #include <linux/kexec.h>
+1 -4
arch/sh/kernel/machvec.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/machvec.c 3 4 * ··· 6 5 * 7 6 * Copyright (C) 1999 Niibe Yutaka 8 7 * Copyright (C) 2002 - 2007 Paul Mundt 9 - * 10 - * This file is subject to the terms and conditions of the GNU General Public 11 - * License. See the file "COPYING" in the main directory of this archive 12 - * for more details. 13 8 */ 14 9 #include <linux/init.h> 15 10 #include <linux/string.h>
+1 -14
arch/sh/kernel/module.c
··· 1 + // SPDX-License-Identifier: GPL-2.0+ 1 2 /* Kernel module help for SH. 2 3 3 4 SHcompact version by Kaz Kojima and Paul Mundt. ··· 10 9 11 10 Based on the sh version, and on code from the sh64-specific parts of 12 11 modutils, originally written by Richard Curnow and Ben Gaster. 13 - 14 - This program is free software; you can redistribute it and/or modify 15 - it under the terms of the GNU General Public License as published by 16 - the Free Software Foundation; either version 2 of the License, or 17 - (at your option) any later version. 18 - 19 - This program is distributed in the hope that it will be useful, 20 - but WITHOUT ANY WARRANTY; without even the implied warranty of 21 - MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 22 - GNU General Public License for more details. 23 - 24 - You should have received a copy of the GNU General Public License 25 - along with this program; if not, write to the Free Software 26 - Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 27 12 */ 28 13 #include <linux/moduleloader.h> 29 14 #include <linux/elf.h>
+1 -4
arch/sh/kernel/nmi_debug.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Copyright (C) 2007 Atmel Corporation 3 - * 4 - * This program is free software; you can redistribute it and/or modify 5 - * it under the terms of the GNU General Public License version 2 as 6 - * published by the Free Software Foundation. 7 4 */ 8 5 #include <linux/delay.h> 9 6 #include <linux/kdebug.h>
+1 -4
arch/sh/kernel/perf_callchain.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Performance event callchain support - SuperH architecture code 3 4 * 4 5 * Copyright (C) 2009 Paul Mundt 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #include <linux/kernel.h> 11 8 #include <linux/sched.h>
+1 -4
arch/sh/kernel/perf_event.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Performance event support framework for SuperH hardware counters. 3 4 * ··· 16 15 * 17 16 * ppc: 18 17 * Copyright 2008-2009 Paul Mackerras, IBM Corporation. 19 - * 20 - * This file is subject to the terms and conditions of the GNU General Public 21 - * License. See the file "COPYING" in the main directory of this archive 22 - * for more details. 23 18 */ 24 19 #include <linux/kernel.h> 25 20 #include <linux/init.h>
+1 -4
arch/sh/kernel/process_32.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/process.c 3 4 * ··· 9 8 * SuperH version: Copyright (C) 1999, 2000 Niibe Yutaka & Kaz Kojima 10 9 * Copyright (C) 2006 Lineo Solutions Inc. support SH4A UBC 11 10 * Copyright (C) 2002 - 2008 Paul Mundt 12 - * 13 - * This file is subject to the terms and conditions of the GNU General Public 14 - * License. See the file "COPYING" in the main directory of this archive 15 - * for more details. 16 11 */ 17 12 #include <linux/module.h> 18 13 #include <linux/mm.h>
+1 -4
arch/sh/kernel/process_64.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/process_64.c 3 4 * ··· 13 12 * 14 13 * In turn started from i386 version: 15 14 * Copyright (C) 1995 Linus Torvalds 16 - * 17 - * This file is subject to the terms and conditions of the GNU General Public 18 - * License. See the file "COPYING" in the main directory of this archive 19 - * for more details. 20 15 */ 21 16 #include <linux/mm.h> 22 17 #include <linux/fs.h>
+1 -4
arch/sh/kernel/ptrace_32.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * SuperH process tracing 3 4 * ··· 6 5 * Copyright (C) 2002 - 2009 Paul Mundt 7 6 * 8 7 * Audit support by Yuichi Nakamura <ynakam@hitachisoft.jp> 9 - * 10 - * This file is subject to the terms and conditions of the GNU General Public 11 - * License. See the file "COPYING" in the main directory of this archive 12 - * for more details. 13 8 */ 14 9 #include <linux/kernel.h> 15 10 #include <linux/sched.h>
+1 -4
arch/sh/kernel/ptrace_64.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/ptrace_64.c 3 4 * ··· 11 10 * Original x86 implementation: 12 11 * By Ross Biro 1/23/92 13 12 * edited by Linus Torvalds 14 - * 15 - * This file is subject to the terms and conditions of the GNU General Public 16 - * License. See the file "COPYING" in the main directory of this archive 17 - * for more details. 18 13 */ 19 14 #include <linux/kernel.h> 20 15 #include <linux/rwsem.h>
+2 -4
arch/sh/kernel/relocate_kernel.S
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * relocate_kernel.S - put the kernel image in place to boot 3 4 * 2005.9.17 kogiidena@eggplant.ddo.jp 4 5 * 5 6 * LANDISK/sh4 is supported. Maybe, SH archtecture works well. 6 7 * 7 8 * 2009-03-18 Magnus Damm - Added Kexec Jump support 8 - * 9 - * This source code is licensed under the GNU General Public License, 10 - * Version 2. See the file COPYING for more details. 11 9 */ 12 10 #include <linux/linkage.h> 13 11 #include <asm/addrspace.h>
+1 -4
arch/sh/kernel/return_address.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/return_address.c 3 4 * 4 5 * Copyright (C) 2009 Matt Fleming 5 6 * Copyright (C) 2009 Paul Mundt 6 - * 7 - * This file is subject to the terms and conditions of the GNU General Public 8 - * License. See the file "COPYING" in the main directory of this archive 9 - * for more details. 10 7 */ 11 8 #include <linux/kernel.h> 12 9 #include <linux/module.h>
+1 -4
arch/sh/kernel/sh_bios.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * C interface for trapping into the standard LinuxSH BIOS. 3 4 * ··· 6 5 * Copyright (C) 1999, 2000 Niibe Yutaka 7 6 * Copyright (C) 2002 M. R. Brown 8 7 * Copyright (C) 2004 - 2010 Paul Mundt 9 - * 10 - * This file is subject to the terms and conditions of the GNU General Public 11 - * License. See the file "COPYING" in the main directory of this archive 12 - * for more details. 13 8 */ 14 9 #include <linux/module.h> 15 10 #include <linux/console.h>
+1 -4
arch/sh/kernel/sh_ksyms_64.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/sh_ksyms_64.c 3 4 * 4 5 * Copyright (C) 2000, 2001 Paolo Alberelli 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #include <linux/rwsem.h> 11 8 #include <linux/module.h>
+1 -4
arch/sh/kernel/signal_64.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/signal_64.c 3 4 * 4 5 * Copyright (C) 2000, 2001 Paolo Alberelli 5 6 * Copyright (C) 2003 - 2008 Paul Mundt 6 7 * Copyright (C) 2004 Richard Curnow 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 #include <linux/rwsem.h> 13 10 #include <linux/sched.h>
+1 -4
arch/sh/kernel/smp.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/smp.c 3 4 * ··· 6 5 * 7 6 * Copyright (C) 2002 - 2010 Paul Mundt 8 7 * Copyright (C) 2006 - 2007 Akio Idehara 9 - * 10 - * This file is subject to the terms and conditions of the GNU General Public 11 - * License. See the file "COPYING" in the main directory of this archive 12 - * for more details. 13 8 */ 14 9 #include <linux/err.h> 15 10 #include <linux/cache.h>
+1 -4
arch/sh/kernel/stacktrace.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/stacktrace.c 3 4 * 4 5 * Stack trace management functions 5 6 * 6 7 * Copyright (C) 2006 - 2008 Paul Mundt 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 #include <linux/sched.h> 13 10 #include <linux/sched/debug.h>
+1 -4
arch/sh/kernel/swsusp.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * swsusp.c - SuperH hibernation support 3 4 * 4 5 * Copyright (C) 2009 Magnus Damm 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 11 8 #include <linux/mm.h>
+2 -6
arch/sh/kernel/syscalls_32.S
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * arch/sh/kernel/syscalls.S 3 4 * 4 5 * System call table for SuperH 5 6 * 6 7 * Copyright (C) 1999, 2000, 2002 Niibe Yutaka 7 8 * Copyright (C) 2003 Paul Mundt 8 - * 9 - * This file is subject to the terms and conditions of the GNU General Public 10 - * License. See the file "COPYING" in the main directory of this archive 11 - * for more details. 12 - * 13 9 */ 14 10 #include <linux/sys.h> 15 11 #include <linux/linkage.h>
+2 -5
arch/sh/kernel/syscalls_64.S
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * arch/sh/kernel/syscalls_64.S 3 4 * 4 5 * Copyright (C) 2000, 2001 Paolo Alberelli 5 6 * Copyright (C) 2004 - 2007 Paul Mundt 6 7 * Copyright (C) 2003, 2004 Richard Curnow 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 13 10 #include <linux/sys.h>
+1 -4
arch/sh/kernel/time.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/time.c 3 4 * ··· 6 5 * Copyright (C) 2000 Philipp Rumpf <prumpf@tux.org> 7 6 * Copyright (C) 2002 - 2009 Paul Mundt 8 7 * Copyright (C) 2002 M. R. Brown <mrbrown@linux-sh.org> 9 - * 10 - * This file is subject to the terms and conditions of the GNU General Public 11 - * License. See the file "COPYING" in the main directory of this archive 12 - * for more details. 13 8 */ 14 9 #include <linux/kernel.h> 15 10 #include <linux/init.h>
+1 -4
arch/sh/kernel/topology.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/topology.c 3 4 * 4 5 * Copyright (C) 2007 Paul Mundt 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 6 */ 10 7 #include <linux/cpu.h> 11 8 #include <linux/cpumask.h>
+1 -4
arch/sh/kernel/traps_32.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * 'traps.c' handles hardware traps and faults after we have saved some 3 4 * state in 'entry.S'. ··· 7 6 * Copyright (C) 2000 Philipp Rumpf 8 7 * Copyright (C) 2000 David Howells 9 8 * Copyright (C) 2002 - 2010 Paul Mundt 10 - * 11 - * This file is subject to the terms and conditions of the GNU General Public 12 - * License. See the file "COPYING" in the main directory of this archive 13 - * for more details. 14 9 */ 15 10 #include <linux/kernel.h> 16 11 #include <linux/ptrace.h>
+1 -4
arch/sh/kernel/traps_64.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/traps_64.c 3 4 * 4 5 * Copyright (C) 2000, 2001 Paolo Alberelli 5 6 * Copyright (C) 2003, 2004 Paul Mundt 6 7 * Copyright (C) 2003, 2004 Richard Curnow 7 - * 8 - * This file is subject to the terms and conditions of the GNU General Public 9 - * License. See the file "COPYING" in the main directory of this archive 10 - * for more details. 11 8 */ 12 9 #include <linux/sched.h> 13 10 #include <linux/sched/debug.h>
+1
arch/sh/kernel/unwinder.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Copyright (C) 2009 Matt Fleming 3 4 *
+1 -4
arch/sh/kernel/vsyscall/vsyscall.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/kernel/vsyscall/vsyscall.c 3 4 * ··· 6 5 * 7 6 * vDSO randomization 8 7 * Copyright(C) 2005-2006, Red Hat, Inc., Ingo Molnar 9 - * 10 - * This file is subject to the terms and conditions of the GNU General Public 11 - * License. See the file "COPYING" in the main directory of this archive 12 - * for more details. 13 8 */ 14 9 #include <linux/mm.h> 15 10 #include <linux/kernel.h>
+4 -25
arch/sh/lib/ashiftrt.S
··· 1 - /* Copyright (C) 1994, 1995, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 1 + /* SPDX-License-Identifier: GPL-2.0+ WITH GCC-exception-2.0 2 + 3 + Copyright (C) 1994, 1995, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2 4 2004, 2005, 2006 3 5 Free Software Foundation, Inc. 4 - 5 - This file is free software; you can redistribute it and/or modify it 6 - under the terms of the GNU General Public License as published by the 7 - Free Software Foundation; either version 2, or (at your option) any 8 - later version. 9 - 10 - In addition to the permissions in the GNU General Public License, the 11 - Free Software Foundation gives you unlimited permission to link the 12 - compiled version of this file into combinations with other programs, 13 - and to distribute those combinations without any restriction coming 14 - from the use of this file. (The General Public License restrictions 15 - do apply in other respects; for example, they cover modification of 16 - the file, and distribution when not linked into a combine 17 - executable.) 18 - 19 - This file is distributed in the hope that it will be useful, but 20 - WITHOUT ANY WARRANTY; without even the implied warranty of 21 - MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 22 - General Public License for more details. 23 - 24 - You should have received a copy of the GNU General Public License 25 - along with this program; see the file COPYING. If not, write to 26 - the Free Software Foundation, 51 Franklin Street, Fifth Floor, 27 - Boston, MA 02110-1301, USA. */ 6 + */ 28 7 29 8 !! libgcc routines for the Renesas / SuperH SH CPUs. 30 9 !! Contributed by Steve Chamberlain.
+4 -25
arch/sh/lib/ashlsi3.S
··· 1 - /* Copyright (C) 1994, 1995, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 1 + /* SPDX-License-Identifier: GPL-2.0+ WITH GCC-exception-2.0 2 + 3 + Copyright (C) 1994, 1995, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2 4 2004, 2005, 2006 3 5 Free Software Foundation, Inc. 4 - 5 - This file is free software; you can redistribute it and/or modify it 6 - under the terms of the GNU General Public License as published by the 7 - Free Software Foundation; either version 2, or (at your option) any 8 - later version. 9 - 10 - In addition to the permissions in the GNU General Public License, the 11 - Free Software Foundation gives you unlimited permission to link the 12 - compiled version of this file into combinations with other programs, 13 - and to distribute those combinations without any restriction coming 14 - from the use of this file. (The General Public License restrictions 15 - do apply in other respects; for example, they cover modification of 16 - the file, and distribution when not linked into a combine 17 - executable.) 18 - 19 - This file is distributed in the hope that it will be useful, but 20 - WITHOUT ANY WARRANTY; without even the implied warranty of 21 - MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 22 - General Public License for more details. 23 - 24 - You should have received a copy of the GNU General Public License 25 - along with this program; see the file COPYING. If not, write to 26 - the Free Software Foundation, 51 Franklin Street, Fifth Floor, 27 - Boston, MA 02110-1301, USA. */ 6 + */ 28 7 29 8 !! libgcc routines for the Renesas / SuperH SH CPUs. 30 9 !! Contributed by Steve Chamberlain.
+4 -25
arch/sh/lib/ashrsi3.S
··· 1 - /* Copyright (C) 1994, 1995, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 1 + /* SPDX-License-Identifier: GPL-2.0+ WITH GCC-exception-2.0 2 + 3 + Copyright (C) 1994, 1995, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2 4 2004, 2005, 2006 3 5 Free Software Foundation, Inc. 4 - 5 - This file is free software; you can redistribute it and/or modify it 6 - under the terms of the GNU General Public License as published by the 7 - Free Software Foundation; either version 2, or (at your option) any 8 - later version. 9 - 10 - In addition to the permissions in the GNU General Public License, the 11 - Free Software Foundation gives you unlimited permission to link the 12 - compiled version of this file into combinations with other programs, 13 - and to distribute those combinations without any restriction coming 14 - from the use of this file. (The General Public License restrictions 15 - do apply in other respects; for example, they cover modification of 16 - the file, and distribution when not linked into a combine 17 - executable.) 18 - 19 - This file is distributed in the hope that it will be useful, but 20 - WITHOUT ANY WARRANTY; without even the implied warranty of 21 - MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 22 - General Public License for more details. 23 - 24 - You should have received a copy of the GNU General Public License 25 - along with this program; see the file COPYING. If not, write to 26 - the Free Software Foundation, 51 Franklin Street, Fifth Floor, 27 - Boston, MA 02110-1301, USA. */ 6 + */ 28 7 29 8 !! libgcc routines for the Renesas / SuperH SH CPUs. 30 9 !! Contributed by Steve Chamberlain.
+3 -6
arch/sh/lib/checksum.S
··· 1 - /* $Id: checksum.S,v 1.10 2001/07/06 13:11:32 gniibe Exp $ 1 + /* SPDX-License-Identifier: GPL-2.0+ 2 + * 3 + * $Id: checksum.S,v 1.10 2001/07/06 13:11:32 gniibe Exp $ 2 4 * 3 5 * INET An implementation of the TCP/IP protocol suite for the LINUX 4 6 * operating system. INET is implemented using the BSD Socket ··· 23 21 * converted to pure assembler 24 22 * 25 23 * SuperH version: Copyright (C) 1999 Niibe Yutaka 26 - * 27 - * This program is free software; you can redistribute it and/or 28 - * modify it under the terms of the GNU General Public License 29 - * as published by the Free Software Foundation; either version 30 - * 2 of the License, or (at your option) any later version. 31 24 */ 32 25 33 26 #include <asm/errno.h>
+1 -4
arch/sh/lib/io.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * arch/sh/lib/io.c - SH32 optimized I/O routines 3 4 * ··· 7 6 * 8 7 * Provide real functions which expand to whatever the header file defined. 9 8 * Also definitions of machine independent IO functions. 10 - * 11 - * This file is subject to the terms and conditions of the GNU General Public 12 - * License. See the file "COPYING" in the main directory of this archive 13 - * for more details. 14 9 */ 15 10 #include <linux/module.h> 16 11 #include <linux/io.h>
+2
arch/sh/lib/libgcc.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + 1 3 #ifndef __ASM_LIBGCC_H 2 4 #define __ASM_LIBGCC_H 3 5
+4 -25
arch/sh/lib/lshrsi3.S
··· 1 - /* Copyright (C) 1994, 1995, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 1 + /* SPDX-License-Identifier: GPL-2.0+ WITH GCC-exception-2.0 2 + 3 + Copyright (C) 1994, 1995, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2 4 2004, 2005, 2006 3 5 Free Software Foundation, Inc. 4 - 5 - This file is free software; you can redistribute it and/or modify it 6 - under the terms of the GNU General Public License as published by the 7 - Free Software Foundation; either version 2, or (at your option) any 8 - later version. 9 - 10 - In addition to the permissions in the GNU General Public License, the 11 - Free Software Foundation gives you unlimited permission to link the 12 - compiled version of this file into combinations with other programs, 13 - and to distribute those combinations without any restriction coming 14 - from the use of this file. (The General Public License restrictions 15 - do apply in other respects; for example, they cover modification of 16 - the file, and distribution when not linked into a combine 17 - executable.) 18 - 19 - This file is distributed in the hope that it will be useful, but 20 - WITHOUT ANY WARRANTY; without even the implied warranty of 21 - MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 22 - General Public License for more details. 23 - 24 - You should have received a copy of the GNU General Public License 25 - along with this program; see the file COPYING. If not, write to 26 - the Free Software Foundation, 51 Franklin Street, Fifth Floor, 27 - Boston, MA 02110-1301, USA. */ 6 + */ 28 7 29 8 !! libgcc routines for the Renesas / SuperH SH CPUs. 30 9 !! Contributed by Steve Chamberlain.
+2 -5
arch/sh/lib/mcount.S
··· 1 - /* 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 2 3 * arch/sh/lib/mcount.S 3 4 * 4 5 * Copyright (C) 2008, 2009 Paul Mundt 5 6 * Copyright (C) 2008, 2009 Matt Fleming 6 - * 7 - * This file is subject to the terms and conditions of the GNU General Public 8 - * License. See the file "COPYING" in the main directory of this archive 9 - * for more details. 10 7 */ 11 8 #include <asm/ftrace.h> 12 9 #include <asm/thread_info.h>
+4 -25
arch/sh/lib/movmem.S
··· 1 - /* Copyright (C) 1994, 1995, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 1 + /* SPDX-License-Identifier: GPL-2.0+ WITH GCC-exception-2.0 2 + 3 + Copyright (C) 1994, 1995, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2 4 2004, 2005, 2006 3 5 Free Software Foundation, Inc. 4 - 5 - This file is free software; you can redistribute it and/or modify it 6 - under the terms of the GNU General Public License as published by the 7 - Free Software Foundation; either version 2, or (at your option) any 8 - later version. 9 - 10 - In addition to the permissions in the GNU General Public License, the 11 - Free Software Foundation gives you unlimited permission to link the 12 - compiled version of this file into combinations with other programs, 13 - and to distribute those combinations without any restriction coming 14 - from the use of this file. (The General Public License restrictions 15 - do apply in other respects; for example, they cover modification of 16 - the file, and distribution when not linked into a combine 17 - executable.) 18 - 19 - This file is distributed in the hope that it will be useful, but 20 - WITHOUT ANY WARRANTY; without even the implied warranty of 21 - MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 22 - General Public License for more details. 23 - 24 - You should have received a copy of the GNU General Public License 25 - along with this program; see the file COPYING. If not, write to 26 - the Free Software Foundation, 51 Franklin Street, Fifth Floor, 27 - Boston, MA 02110-1301, USA. */ 6 + */ 28 7 29 8 !! libgcc routines for the Renesas / SuperH SH CPUs. 30 9 !! Contributed by Steve Chamberlain.
+4 -25
arch/sh/lib/udiv_qrnnd.S
··· 1 - /* Copyright (C) 1994, 1995, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 1 + /* SPDX-License-Identifier: GPL-2.0+ WITH GCC-exception-2.0 2 + 3 + Copyright (C) 1994, 1995, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2 4 2004, 2005, 2006 3 5 Free Software Foundation, Inc. 4 - 5 - This file is free software; you can redistribute it and/or modify it 6 - under the terms of the GNU General Public License as published by the 7 - Free Software Foundation; either version 2, or (at your option) any 8 - later version. 9 - 10 - In addition to the permissions in the GNU General Public License, the 11 - Free Software Foundation gives you unlimited permission to link the 12 - compiled version of this file into combinations with other programs, 13 - and to distribute those combinations without any restriction coming 14 - from the use of this file. (The General Public License restrictions 15 - do apply in other respects; for example, they cover modification of 16 - the file, and distribution when not linked into a combine 17 - executable.) 18 - 19 - This file is distributed in the hope that it will be useful, but 20 - WITHOUT ANY WARRANTY; without even the implied warranty of 21 - MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 22 - General Public License for more details. 23 - 24 - You should have received a copy of the GNU General Public License 25 - along with this program; see the file COPYING. If not, write to 26 - the Free Software Foundation, 51 Franklin Street, Fifth Floor, 27 - Boston, MA 02110-1301, USA. */ 6 + */ 28 7 29 8 !! libgcc routines for the Renesas / SuperH SH CPUs. 30 9 !! Contributed by Steve Chamberlain.
+4 -25
arch/sh/lib/udivsi3.S
··· 1 - /* Copyright (C) 1994, 1995, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 1 + /* SPDX-License-Identifier: GPL-2.0+ WITH GCC-exception-2.0 2 + 3 + Copyright (C) 1994, 1995, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2 4 2004, 2005 3 5 Free Software Foundation, Inc. 4 - 5 - This file is free software; you can redistribute it and/or modify it 6 - under the terms of the GNU General Public License as published by the 7 - Free Software Foundation; either version 2, or (at your option) any 8 - later version. 9 - 10 - In addition to the permissions in the GNU General Public License, the 11 - Free Software Foundation gives you unlimited permission to link the 12 - compiled version of this file into combinations with other programs, 13 - and to distribute those combinations without any restriction coming 14 - from the use of this file. (The General Public License restrictions 15 - do apply in other respects; for example, they cover modification of 16 - the file, and distribution when not linked into a combine 17 - executable.) 18 - 19 - This file is distributed in the hope that it will be useful, but 20 - WITHOUT ANY WARRANTY; without even the implied warranty of 21 - MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 22 - General Public License for more details. 23 - 24 - You should have received a copy of the GNU General Public License 25 - along with this program; see the file COPYING. If not, write to 26 - the Free Software Foundation, 51 Franklin Street, Fifth Floor, 27 - Boston, MA 02110-1301, USA. */ 6 + */ 28 7 29 8 !! libgcc routines for the Renesas / SuperH SH CPUs. 30 9 !! Contributed by Steve Chamberlain.
+4 -25
arch/sh/lib/udivsi3_i4i-Os.S
··· 1 - /* Copyright (C) 2006 Free Software Foundation, Inc. 2 - 3 - This file is free software; you can redistribute it and/or modify it 4 - under the terms of the GNU General Public License as published by the 5 - Free Software Foundation; either version 2, or (at your option) any 6 - later version. 7 - 8 - In addition to the permissions in the GNU General Public License, the 9 - Free Software Foundation gives you unlimited permission to link the 10 - compiled version of this file into combinations with other programs, 11 - and to distribute those combinations without any restriction coming 12 - from the use of this file. (The General Public License restrictions 13 - do apply in other respects; for example, they cover modification of 14 - the file, and distribution when not linked into a combine 15 - executable.) 16 - 17 - This file is distributed in the hope that it will be useful, but 18 - WITHOUT ANY WARRANTY; without even the implied warranty of 19 - MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 20 - General Public License for more details. 21 - 22 - You should have received a copy of the GNU General Public License 23 - along with this program; see the file COPYING. If not, write to 24 - the Free Software Foundation, 51 Franklin Street, Fifth Floor, 25 - Boston, MA 02110-1301, USA. */ 1 + /* SPDX-License-Identifier: GPL-2.0+ WITH GCC-exception-2.0 2 + * 3 + * Copyright (C) 2006 Free Software Foundation, Inc. 4 + */ 26 5 27 6 /* Moderately Space-optimized libgcc routines for the Renesas SH / 28 7 STMicroelectronics ST40 CPUs.
+4 -25
arch/sh/lib/udivsi3_i4i.S
··· 1 - /* Copyright (C) 1994, 1995, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 1 + /* SPDX-License-Identifier: GPL-2.0+ WITH GCC-exception-2.0 2 + 3 + Copyright (C) 1994, 1995, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2 4 2004, 2005, 2006 3 5 Free Software Foundation, Inc. 4 - 5 - This file is free software; you can redistribute it and/or modify it 6 - under the terms of the GNU General Public License as published by the 7 - Free Software Foundation; either version 2, or (at your option) any 8 - later version. 9 - 10 - In addition to the permissions in the GNU General Public License, the 11 - Free Software Foundation gives you unlimited permission to link the 12 - compiled version of this file into combinations with other programs, 13 - and to distribute those combinations without any restriction coming 14 - from the use of this file. (The General Public License restrictions 15 - do apply in other respects; for example, they cover modification of 16 - the file, and distribution when not linked into a combine 17 - executable.) 18 - 19 - This file is distributed in the hope that it will be useful, but 20 - WITHOUT ANY WARRANTY; without even the implied warranty of 21 - MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 22 - General Public License for more details. 23 - 24 - You should have received a copy of the GNU General Public License 25 - along with this program; see the file COPYING. If not, write to 26 - the Free Software Foundation, 51 Franklin Street, Fifth Floor, 27 - Boston, MA 02110-1301, USA. */ 6 + */ 28 7 29 8 !! libgcc routines for the Renesas / SuperH SH CPUs. 30 9 !! Contributed by Steve Chamberlain.
+1 -1
arch/sh/mm/init.c
··· 443 443 #endif 444 444 445 445 #ifdef CONFIG_MEMORY_HOTREMOVE 446 - int arch_remove_memory(u64 start, u64 size, struct vmem_altmap *altmap) 446 + int arch_remove_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap) 447 447 { 448 448 unsigned long start_pfn = PFN_DOWN(start); 449 449 unsigned long nr_pages = size >> PAGE_SHIFT;
+2 -2
arch/um/kernel/mem.c
··· 51 51 52 52 /* this will put all low memory onto the freelists */ 53 53 memblock_free_all(); 54 - max_low_pfn = totalram_pages; 55 - max_pfn = totalram_pages; 54 + max_low_pfn = totalram_pages(); 55 + max_pfn = max_low_pfn; 56 56 mem_init_print_info(NULL); 57 57 kmalloc_ok = 1; 58 58 }
+1 -1
arch/x86/include/asm/processor.h
··· 967 967 } 968 968 969 969 extern unsigned long arch_align_stack(unsigned long sp); 970 - extern void free_init_pages(char *what, unsigned long begin, unsigned long end); 970 + void free_init_pages(const char *what, unsigned long begin, unsigned long end); 971 971 extern void free_kernel_image_pages(void *begin, void *end); 972 972 973 973 void default_idle(void);
+3 -2
arch/x86/kernel/cpu/microcode/core.c
··· 434 434 size_t len, loff_t *ppos) 435 435 { 436 436 ssize_t ret = -EINVAL; 437 + unsigned long nr_pages = totalram_pages(); 437 438 438 - if ((len >> PAGE_SHIFT) > totalram_pages) { 439 - pr_err("too much data (max %ld pages)\n", totalram_pages); 439 + if ((len >> PAGE_SHIFT) > nr_pages) { 440 + pr_err("too much data (max %ld pages)\n", nr_pages); 440 441 return ret; 441 442 } 442 443
+6 -5
arch/x86/mm/dump_pagetables.c
··· 377 377 378 378 /* 379 379 * This is an optimization for KASAN=y case. Since all kasan page tables 380 - * eventually point to the kasan_zero_page we could call note_page() 380 + * eventually point to the kasan_early_shadow_page we could call note_page() 381 381 * right away without walking through lower level page tables. This saves 382 382 * us dozens of seconds (minutes for 5-level config) while checking for 383 383 * W+X mapping or reading kernel_page_tables debugfs file. ··· 385 385 static inline bool kasan_page_table(struct seq_file *m, struct pg_state *st, 386 386 void *pt) 387 387 { 388 - if (__pa(pt) == __pa(kasan_zero_pmd) || 389 - (pgtable_l5_enabled() && __pa(pt) == __pa(kasan_zero_p4d)) || 390 - __pa(pt) == __pa(kasan_zero_pud)) { 391 - pgprotval_t prot = pte_flags(kasan_zero_pte[0]); 388 + if (__pa(pt) == __pa(kasan_early_shadow_pmd) || 389 + (pgtable_l5_enabled() && 390 + __pa(pt) == __pa(kasan_early_shadow_p4d)) || 391 + __pa(pt) == __pa(kasan_early_shadow_pud)) { 392 + pgprotval_t prot = pte_flags(kasan_early_shadow_pte[0]); 392 393 note_page(m, st, __pgprot(prot), 0, 5); 393 394 return true; 394 395 }
+1 -1
arch/x86/mm/init.c
··· 742 742 return 1; 743 743 } 744 744 745 - void free_init_pages(char *what, unsigned long begin, unsigned long end) 745 + void free_init_pages(const char *what, unsigned long begin, unsigned long end) 746 746 { 747 747 unsigned long begin_aligned, end_aligned; 748 748
+1 -1
arch/x86/mm/init_32.c
··· 860 860 } 861 861 862 862 #ifdef CONFIG_MEMORY_HOTREMOVE 863 - int arch_remove_memory(u64 start, u64 size, struct vmem_altmap *altmap) 863 + int arch_remove_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap) 864 864 { 865 865 unsigned long start_pfn = start >> PAGE_SHIFT; 866 866 unsigned long nr_pages = size >> PAGE_SHIFT;
+2 -1
arch/x86/mm/init_64.c
··· 1141 1141 remove_pagetable(start, end, true, NULL); 1142 1142 } 1143 1143 1144 - int __ref arch_remove_memory(u64 start, u64 size, struct vmem_altmap *altmap) 1144 + int __ref arch_remove_memory(int nid, u64 start, u64 size, 1145 + struct vmem_altmap *altmap) 1145 1146 { 1146 1147 unsigned long start_pfn = start >> PAGE_SHIFT; 1147 1148 unsigned long nr_pages = size >> PAGE_SHIFT;
+29 -26
arch/x86/mm/kasan_init_64.c
··· 211 211 unsigned long next; 212 212 213 213 if (pgd_none(*pgd)) { 214 - pgd_entry = __pgd(_KERNPG_TABLE | __pa_nodebug(kasan_zero_p4d)); 214 + pgd_entry = __pgd(_KERNPG_TABLE | 215 + __pa_nodebug(kasan_early_shadow_p4d)); 215 216 set_pgd(pgd, pgd_entry); 216 217 } 217 218 ··· 223 222 if (!p4d_none(*p4d)) 224 223 continue; 225 224 226 - p4d_entry = __p4d(_KERNPG_TABLE | __pa_nodebug(kasan_zero_pud)); 225 + p4d_entry = __p4d(_KERNPG_TABLE | 226 + __pa_nodebug(kasan_early_shadow_pud)); 227 227 set_p4d(p4d, p4d_entry); 228 228 } while (p4d++, addr = next, addr != end && p4d_none(*p4d)); 229 229 } ··· 263 261 void __init kasan_early_init(void) 264 262 { 265 263 int i; 266 - pteval_t pte_val = __pa_nodebug(kasan_zero_page) | __PAGE_KERNEL | _PAGE_ENC; 267 - pmdval_t pmd_val = __pa_nodebug(kasan_zero_pte) | _KERNPG_TABLE; 268 - pudval_t pud_val = __pa_nodebug(kasan_zero_pmd) | _KERNPG_TABLE; 269 - p4dval_t p4d_val = __pa_nodebug(kasan_zero_pud) | _KERNPG_TABLE; 264 + pteval_t pte_val = __pa_nodebug(kasan_early_shadow_page) | 265 + __PAGE_KERNEL | _PAGE_ENC; 266 + pmdval_t pmd_val = __pa_nodebug(kasan_early_shadow_pte) | _KERNPG_TABLE; 267 + pudval_t pud_val = __pa_nodebug(kasan_early_shadow_pmd) | _KERNPG_TABLE; 268 + p4dval_t p4d_val = __pa_nodebug(kasan_early_shadow_pud) | _KERNPG_TABLE; 270 269 271 270 /* Mask out unsupported __PAGE_KERNEL bits: */ 272 271 pte_val &= __default_kernel_pte_mask; ··· 276 273 p4d_val &= __default_kernel_pte_mask; 277 274 278 275 for (i = 0; i < PTRS_PER_PTE; i++) 279 - kasan_zero_pte[i] = __pte(pte_val); 276 + kasan_early_shadow_pte[i] = __pte(pte_val); 280 277 281 278 for (i = 0; i < PTRS_PER_PMD; i++) 282 - kasan_zero_pmd[i] = __pmd(pmd_val); 279 + kasan_early_shadow_pmd[i] = __pmd(pmd_val); 283 280 284 281 for (i = 0; i < PTRS_PER_PUD; i++) 285 - kasan_zero_pud[i] = __pud(pud_val); 282 + kasan_early_shadow_pud[i] = __pud(pud_val); 286 283 287 284 for (i = 0; pgtable_l5_enabled() && i < PTRS_PER_P4D; i++) 288 - kasan_zero_p4d[i] = 
__p4d(p4d_val); 285 + kasan_early_shadow_p4d[i] = __p4d(p4d_val); 289 286 290 287 kasan_map_early_shadow(early_top_pgt); 291 288 kasan_map_early_shadow(init_top_pgt); ··· 329 326 330 327 clear_pgds(KASAN_SHADOW_START & PGDIR_MASK, KASAN_SHADOW_END); 331 328 332 - kasan_populate_zero_shadow((void *)(KASAN_SHADOW_START & PGDIR_MASK), 329 + kasan_populate_early_shadow((void *)(KASAN_SHADOW_START & PGDIR_MASK), 333 330 kasan_mem_to_shadow((void *)PAGE_OFFSET)); 334 331 335 332 for (i = 0; i < E820_MAX_ENTRIES; i++) { ··· 341 338 342 339 shadow_cpu_entry_begin = (void *)CPU_ENTRY_AREA_BASE; 343 340 shadow_cpu_entry_begin = kasan_mem_to_shadow(shadow_cpu_entry_begin); 344 - shadow_cpu_entry_begin = (void *)round_down((unsigned long)shadow_cpu_entry_begin, 345 - PAGE_SIZE); 341 + shadow_cpu_entry_begin = (void *)round_down( 342 + (unsigned long)shadow_cpu_entry_begin, PAGE_SIZE); 346 343 347 344 shadow_cpu_entry_end = (void *)(CPU_ENTRY_AREA_BASE + 348 345 CPU_ENTRY_AREA_MAP_SIZE); 349 346 shadow_cpu_entry_end = kasan_mem_to_shadow(shadow_cpu_entry_end); 350 - shadow_cpu_entry_end = (void *)round_up((unsigned long)shadow_cpu_entry_end, 351 - PAGE_SIZE); 347 + shadow_cpu_entry_end = (void *)round_up( 348 + (unsigned long)shadow_cpu_entry_end, PAGE_SIZE); 352 349 353 - kasan_populate_zero_shadow( 350 + kasan_populate_early_shadow( 354 351 kasan_mem_to_shadow((void *)PAGE_OFFSET + MAXMEM), 355 352 shadow_cpu_entry_begin); 356 353 357 354 kasan_populate_shadow((unsigned long)shadow_cpu_entry_begin, 358 355 (unsigned long)shadow_cpu_entry_end, 0); 359 356 360 - kasan_populate_zero_shadow(shadow_cpu_entry_end, 361 - kasan_mem_to_shadow((void *)__START_KERNEL_map)); 357 + kasan_populate_early_shadow(shadow_cpu_entry_end, 358 + kasan_mem_to_shadow((void *)__START_KERNEL_map)); 362 359 363 360 kasan_populate_shadow((unsigned long)kasan_mem_to_shadow(_stext), 364 361 (unsigned long)kasan_mem_to_shadow(_end), 365 362 early_pfn_to_nid(__pa(_stext))); 366 363 367 - 
kasan_populate_zero_shadow(kasan_mem_to_shadow((void *)MODULES_END), 368 - (void *)KASAN_SHADOW_END); 364 + kasan_populate_early_shadow(kasan_mem_to_shadow((void *)MODULES_END), 365 + (void *)KASAN_SHADOW_END); 369 366 370 367 load_cr3(init_top_pgt); 371 368 __flush_tlb_all(); 372 369 373 370 /* 374 - * kasan_zero_page has been used as early shadow memory, thus it may 375 - * contain some garbage. Now we can clear and write protect it, since 376 - * after the TLB flush no one should write to it. 371 + * kasan_early_shadow_page has been used as early shadow memory, thus 372 + * it may contain some garbage. Now we can clear and write protect it, 373 + * since after the TLB flush no one should write to it. 377 374 */ 378 - memset(kasan_zero_page, 0, PAGE_SIZE); 375 + memset(kasan_early_shadow_page, 0, PAGE_SIZE); 379 376 for (i = 0; i < PTRS_PER_PTE; i++) { 380 377 pte_t pte; 381 378 pgprot_t prot; ··· 383 380 prot = __pgprot(__PAGE_KERNEL_RO | _PAGE_ENC); 384 381 pgprot_val(prot) &= __default_kernel_pte_mask; 385 382 386 - pte = __pte(__pa(kasan_zero_page) | pgprot_val(prot)); 387 - set_pte(&kasan_zero_pte[i], pte); 383 + pte = __pte(__pa(kasan_early_shadow_page) | pgprot_val(prot)); 384 + set_pte(&kasan_early_shadow_pte[i], pte); 388 385 } 389 386 /* Flush TLBs again to be sure that write protection applied. */ 390 387 __flush_tlb_all();
+8 -6
arch/x86/mm/pgtable.c
··· 794 794 return 0; 795 795 } 796 796 797 + /* 798 + * Until we support 512GB pages, skip them in the vmap area. 799 + */ 800 + int p4d_free_pud_page(p4d_t *p4d, unsigned long addr) 801 + { 802 + return 0; 803 + } 804 + 797 805 #ifdef CONFIG_X86_64 798 806 /** 799 807 * pud_free_pmd_page - Clear pud entry and free pmd page. ··· 818 810 pmd_t *pmd, *pmd_sv; 819 811 pte_t *pte; 820 812 int i; 821 - 822 - if (pud_none(*pud)) 823 - return 1; 824 813 825 814 pmd = (pmd_t *)pud_page_vaddr(*pud); 826 815 pmd_sv = (pmd_t *)__get_free_page(GFP_KERNEL); ··· 859 854 int pmd_free_pte_page(pmd_t *pmd, unsigned long addr) 860 855 { 861 856 pte_t *pte; 862 - 863 - if (pmd_none(*pmd)) 864 - return 1; 865 857 866 858 pte = (pte_t *)pmd_page_vaddr(*pmd); 867 859 pmd_clear(pmd);
+11 -7
arch/xtensa/mm/kasan_init.c
··· 24 24 int i; 25 25 26 26 for (i = 0; i < PTRS_PER_PTE; ++i) 27 - set_pte(kasan_zero_pte + i, 28 - mk_pte(virt_to_page(kasan_zero_page), PAGE_KERNEL)); 27 + set_pte(kasan_early_shadow_pte + i, 28 + mk_pte(virt_to_page(kasan_early_shadow_page), 29 + PAGE_KERNEL)); 29 30 30 31 for (vaddr = 0; vaddr < KASAN_SHADOW_SIZE; vaddr += PMD_SIZE, ++pmd) { 31 32 BUG_ON(!pmd_none(*pmd)); 32 - set_pmd(pmd, __pmd((unsigned long)kasan_zero_pte)); 33 + set_pmd(pmd, __pmd((unsigned long)kasan_early_shadow_pte)); 33 34 } 34 35 early_trap_init(); 35 36 } ··· 81 80 populate(kasan_mem_to_shadow((void *)VMALLOC_START), 82 81 kasan_mem_to_shadow((void *)XCHAL_KSEG_BYPASS_VADDR)); 83 82 84 - /* Write protect kasan_zero_page and zero-initialize it again. */ 83 + /* 84 + * Write protect kasan_early_shadow_page and zero-initialize it again. 85 + */ 85 86 for (i = 0; i < PTRS_PER_PTE; ++i) 86 - set_pte(kasan_zero_pte + i, 87 - mk_pte(virt_to_page(kasan_zero_page), PAGE_KERNEL_RO)); 87 + set_pte(kasan_early_shadow_pte + i, 88 + mk_pte(virt_to_page(kasan_early_shadow_page), 89 + PAGE_KERNEL_RO)); 88 90 89 91 local_flush_tlb_all(); 90 - memset(kasan_zero_page, 0, PAGE_SIZE); 92 + memset(kasan_early_shadow_page, 0, PAGE_SIZE); 91 93 92 94 /* At this point kasan is fully initialized. Enable error messages. */ 93 95 current->kasan_depth = 0;
+4 -4
drivers/base/memory.c
··· 207 207 return false; 208 208 209 209 if (!present_section_nr(section_nr)) { 210 - pr_warn("section %ld pfn[%lx, %lx) not present", 210 + pr_warn("section %ld pfn[%lx, %lx) not present\n", 211 211 section_nr, pfn, pfn + PAGES_PER_SECTION); 212 212 return false; 213 213 } else if (!valid_section_nr(section_nr)) { 214 - pr_warn("section %ld pfn[%lx, %lx) no valid memmap", 214 + pr_warn("section %ld pfn[%lx, %lx) no valid memmap\n", 215 215 section_nr, pfn, pfn + PAGES_PER_SECTION); 216 216 return false; 217 217 } else if (online_section_nr(section_nr)) { 218 - pr_warn("section %ld pfn[%lx, %lx) is already online", 218 + pr_warn("section %ld pfn[%lx, %lx) is already online\n", 219 219 section_nr, pfn, pfn + PAGES_PER_SECTION); 220 220 return false; 221 221 } ··· 688 688 int i, ret, section_count = 0, section_nr; 689 689 690 690 for (i = base_section_nr; 691 - (i < base_section_nr + sections_per_block) && i < NR_MEM_SECTIONS; 691 + i < base_section_nr + sections_per_block; 692 692 i++) { 693 693 if (!present_section_nr(i)) 694 694 continue;
+4 -1
drivers/block/zram/Kconfig
··· 15 15 See Documentation/blockdev/zram.txt for more information. 16 16 17 17 config ZRAM_WRITEBACK 18 - bool "Write back incompressible page to backing device" 18 + bool "Write back incompressible or idle page to backing device" 19 19 depends on ZRAM 20 20 help 21 21 With incompressible page, there is no memory saving to keep it 22 22 in memory. Instead, write it out to backing device. 23 23 For this feature, admin should set up backing device via 24 24 /sys/block/zramX/backing_dev. 25 + 26 + With /sys/block/zramX/{idle,writeback}, application could ask 27 + idle page's writeback to the backing device to save in memory. 25 28 26 29 See Documentation/blockdev/zram.txt for more information. 27 30
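The `idle_store`/`writeback_store` handlers added in zram_drv.c below map onto a short sysfs recipe. A hedged sketch, assuming a `zram0` device with CONFIG_ZRAM_WRITEBACK enabled and `backing_dev` already configured; the device name and the limit value are illustrative:

```shell
# Mark every allocated slot idle ("all" is the only accepted keyword).
echo all > /sys/block/zram0/idle

# Later, write idle (or huge) pages out to the backing device.
echo idle > /sys/block/zram0/writeback

# Optionally cap how much writeback may happen; writing 0 disables the limit.
echo 4096 > /sys/block/zram0/writeback_limit

# Backing-device statistics (bd_count, bd_reads, bd_writes), per the ABI entry.
cat /sys/block/zram0/bd_stat
```

With debugfs mounted, `/sys/kernel/debug/zram/zram0/block_state` shows which slots were marked idle.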
+355 -149
drivers/block/zram/zram_drv.c
··· 52 52 static size_t huge_class_size; 53 53 54 54 static void zram_free_page(struct zram *zram, size_t index); 55 + static int zram_bvec_read(struct zram *zram, struct bio_vec *bvec, 56 + u32 index, int offset, struct bio *bio); 57 + 58 + 59 + static int zram_slot_trylock(struct zram *zram, u32 index) 60 + { 61 + return bit_spin_trylock(ZRAM_LOCK, &zram->table[index].flags); 62 + } 55 63 56 64 static void zram_slot_lock(struct zram *zram, u32 index) 57 65 { 58 - bit_spin_lock(ZRAM_LOCK, &zram->table[index].value); 66 + bit_spin_lock(ZRAM_LOCK, &zram->table[index].flags); 59 67 } 60 68 61 69 static void zram_slot_unlock(struct zram *zram, u32 index) 62 70 { 63 - bit_spin_unlock(ZRAM_LOCK, &zram->table[index].value); 71 + bit_spin_unlock(ZRAM_LOCK, &zram->table[index].flags); 64 72 } 65 73 66 74 static inline bool init_done(struct zram *zram) 67 75 { 68 76 return zram->disksize; 69 - } 70 - 71 - static inline bool zram_allocated(struct zram *zram, u32 index) 72 - { 73 - 74 - return (zram->table[index].value >> (ZRAM_FLAG_SHIFT + 1)) || 75 - zram->table[index].handle; 76 77 } 77 78 78 79 static inline struct zram *dev_to_zram(struct device *dev) ··· 95 94 static bool zram_test_flag(struct zram *zram, u32 index, 96 95 enum zram_pageflags flag) 97 96 { 98 - return zram->table[index].value & BIT(flag); 97 + return zram->table[index].flags & BIT(flag); 99 98 } 100 99 101 100 static void zram_set_flag(struct zram *zram, u32 index, 102 101 enum zram_pageflags flag) 103 102 { 104 - zram->table[index].value |= BIT(flag); 103 + zram->table[index].flags |= BIT(flag); 105 104 } 106 105 107 106 static void zram_clear_flag(struct zram *zram, u32 index, 108 107 enum zram_pageflags flag) 109 108 { 110 - zram->table[index].value &= ~BIT(flag); 109 + zram->table[index].flags &= ~BIT(flag); 111 110 } 112 111 113 112 static inline void zram_set_element(struct zram *zram, u32 index, ··· 123 122 124 123 static size_t zram_get_obj_size(struct zram *zram, u32 index) 125 124 { 126 - 
return zram->table[index].value & (BIT(ZRAM_FLAG_SHIFT) - 1); 125 + return zram->table[index].flags & (BIT(ZRAM_FLAG_SHIFT) - 1); 127 126 } 128 127 129 128 static void zram_set_obj_size(struct zram *zram, 130 129 u32 index, size_t size) 131 130 { 132 - unsigned long flags = zram->table[index].value >> ZRAM_FLAG_SHIFT; 131 + unsigned long flags = zram->table[index].flags >> ZRAM_FLAG_SHIFT; 133 132 134 - zram->table[index].value = (flags << ZRAM_FLAG_SHIFT) | size; 133 + zram->table[index].flags = (flags << ZRAM_FLAG_SHIFT) | size; 134 + } 135 + 136 + static inline bool zram_allocated(struct zram *zram, u32 index) 137 + { 138 + return zram_get_obj_size(zram, index) || 139 + zram_test_flag(zram, index, ZRAM_SAME) || 140 + zram_test_flag(zram, index, ZRAM_WB); 135 141 } 136 142 137 143 #if PAGE_SIZE != 4096 ··· 284 276 return len; 285 277 } 286 278 287 - #ifdef CONFIG_ZRAM_WRITEBACK 288 - static bool zram_wb_enabled(struct zram *zram) 279 + static ssize_t idle_store(struct device *dev, 280 + struct device_attribute *attr, const char *buf, size_t len) 289 281 { 290 - return zram->backing_dev; 282 + struct zram *zram = dev_to_zram(dev); 283 + unsigned long nr_pages = zram->disksize >> PAGE_SHIFT; 284 + int index; 285 + char mode_buf[8]; 286 + ssize_t sz; 287 + 288 + sz = strscpy(mode_buf, buf, sizeof(mode_buf)); 289 + if (sz <= 0) 290 + return -EINVAL; 291 + 292 + /* ignore trailing new line */ 293 + if (mode_buf[sz - 1] == '\n') 294 + mode_buf[sz - 1] = 0x00; 295 + 296 + if (strcmp(mode_buf, "all")) 297 + return -EINVAL; 298 + 299 + down_read(&zram->init_lock); 300 + if (!init_done(zram)) { 301 + up_read(&zram->init_lock); 302 + return -EINVAL; 303 + } 304 + 305 + for (index = 0; index < nr_pages; index++) { 306 + /* 307 + * Do not mark ZRAM_UNDER_WB slot as ZRAM_IDLE to close race. 308 + * See the comment in writeback_store. 
309 + */ 310 + zram_slot_lock(zram, index); 311 + if (!zram_allocated(zram, index) || 312 + zram_test_flag(zram, index, ZRAM_UNDER_WB)) 313 + goto next; 314 + zram_set_flag(zram, index, ZRAM_IDLE); 315 + next: 316 + zram_slot_unlock(zram, index); 317 + } 318 + 319 + up_read(&zram->init_lock); 320 + 321 + return len; 322 + } 323 + 324 + #ifdef CONFIG_ZRAM_WRITEBACK 325 + static ssize_t writeback_limit_store(struct device *dev, 326 + struct device_attribute *attr, const char *buf, size_t len) 327 + { 328 + struct zram *zram = dev_to_zram(dev); 329 + u64 val; 330 + ssize_t ret = -EINVAL; 331 + 332 + if (kstrtoull(buf, 10, &val)) 333 + return ret; 334 + 335 + down_read(&zram->init_lock); 336 + atomic64_set(&zram->stats.bd_wb_limit, val); 337 + if (val == 0) 338 + zram->stop_writeback = false; 339 + up_read(&zram->init_lock); 340 + ret = len; 341 + 342 + return ret; 343 + } 344 + 345 + static ssize_t writeback_limit_show(struct device *dev, 346 + struct device_attribute *attr, char *buf) 347 + { 348 + u64 val; 349 + struct zram *zram = dev_to_zram(dev); 350 + 351 + down_read(&zram->init_lock); 352 + val = atomic64_read(&zram->stats.bd_wb_limit); 353 + up_read(&zram->init_lock); 354 + 355 + return scnprintf(buf, PAGE_SIZE, "%llu\n", val); 291 356 } 292 357 293 358 static void reset_bdev(struct zram *zram) 294 359 { 295 360 struct block_device *bdev; 296 361 297 - if (!zram_wb_enabled(zram)) 362 + if (!zram->backing_dev) 298 363 return; 299 364 300 365 bdev = zram->bdev; ··· 394 313 ssize_t ret; 395 314 396 315 down_read(&zram->init_lock); 397 - if (!zram_wb_enabled(zram)) { 316 + if (!zram->backing_dev) { 398 317 memcpy(buf, "none\n", 5); 399 318 up_read(&zram->init_lock); 400 319 return 5; ··· 463 382 464 383 bdev = bdgrab(I_BDEV(inode)); 465 384 err = blkdev_get(bdev, FMODE_READ | FMODE_WRITE | FMODE_EXCL, zram); 466 - if (err < 0) 385 + if (err < 0) { 386 + bdev = NULL; 467 387 goto out; 388 + } 468 389 469 390 nr_pages = i_size_read(inode) >> PAGE_SHIFT; 470 391 
bitmap_sz = BITS_TO_LONGS(nr_pages) * sizeof(long); ··· 482 399 goto out; 483 400 484 401 reset_bdev(zram); 485 - spin_lock_init(&zram->bitmap_lock); 486 402 487 403 zram->old_block_size = old_block_size; 488 404 zram->bdev = bdev; ··· 523 441 return err; 524 442 } 525 443 526 - static unsigned long get_entry_bdev(struct zram *zram) 444 + static unsigned long alloc_block_bdev(struct zram *zram) 527 445 { 528 - unsigned long entry; 529 - 530 - spin_lock(&zram->bitmap_lock); 446 + unsigned long blk_idx = 1; 447 + retry: 531 448 /* skip 0 bit to confuse zram.handle = 0 */ 532 - entry = find_next_zero_bit(zram->bitmap, zram->nr_pages, 1); 533 - if (entry == zram->nr_pages) { 534 - spin_unlock(&zram->bitmap_lock); 449 + blk_idx = find_next_zero_bit(zram->bitmap, zram->nr_pages, blk_idx); 450 + if (blk_idx == zram->nr_pages) 535 451 return 0; 536 - } 537 452 538 - set_bit(entry, zram->bitmap); 539 - spin_unlock(&zram->bitmap_lock); 453 + if (test_and_set_bit(blk_idx, zram->bitmap)) 454 + goto retry; 540 455 541 - return entry; 456 + atomic64_inc(&zram->stats.bd_count); 457 + return blk_idx; 542 458 } 543 459 544 - static void put_entry_bdev(struct zram *zram, unsigned long entry) 460 + static void free_block_bdev(struct zram *zram, unsigned long blk_idx) 545 461 { 546 462 int was_set; 547 463 548 - spin_lock(&zram->bitmap_lock); 549 - was_set = test_and_clear_bit(entry, zram->bitmap); 550 - spin_unlock(&zram->bitmap_lock); 464 + was_set = test_and_clear_bit(blk_idx, zram->bitmap); 551 465 WARN_ON_ONCE(!was_set); 466 + atomic64_dec(&zram->stats.bd_count); 552 467 } 553 468 554 469 static void zram_page_end_io(struct bio *bio) ··· 586 507 587 508 submit_bio(bio); 588 509 return 1; 510 + } 511 + 512 + #define HUGE_WRITEBACK 0x1 513 + #define IDLE_WRITEBACK 0x2 514 + 515 + static ssize_t writeback_store(struct device *dev, 516 + struct device_attribute *attr, const char *buf, size_t len) 517 + { 518 + struct zram *zram = dev_to_zram(dev); 519 + unsigned long nr_pages = 
zram->disksize >> PAGE_SHIFT; 520 + unsigned long index; 521 + struct bio bio; 522 + struct bio_vec bio_vec; 523 + struct page *page; 524 + ssize_t ret, sz; 525 + char mode_buf[8]; 526 + unsigned long mode = -1UL; 527 + unsigned long blk_idx = 0; 528 + 529 + sz = strscpy(mode_buf, buf, sizeof(mode_buf)); 530 + if (sz <= 0) 531 + return -EINVAL; 532 + 533 + /* ignore trailing newline */ 534 + if (mode_buf[sz - 1] == '\n') 535 + mode_buf[sz - 1] = 0x00; 536 + 537 + if (!strcmp(mode_buf, "idle")) 538 + mode = IDLE_WRITEBACK; 539 + else if (!strcmp(mode_buf, "huge")) 540 + mode = HUGE_WRITEBACK; 541 + 542 + if (mode == -1UL) 543 + return -EINVAL; 544 + 545 + down_read(&zram->init_lock); 546 + if (!init_done(zram)) { 547 + ret = -EINVAL; 548 + goto release_init_lock; 549 + } 550 + 551 + if (!zram->backing_dev) { 552 + ret = -ENODEV; 553 + goto release_init_lock; 554 + } 555 + 556 + page = alloc_page(GFP_KERNEL); 557 + if (!page) { 558 + ret = -ENOMEM; 559 + goto release_init_lock; 560 + } 561 + 562 + for (index = 0; index < nr_pages; index++) { 563 + struct bio_vec bvec; 564 + 565 + bvec.bv_page = page; 566 + bvec.bv_len = PAGE_SIZE; 567 + bvec.bv_offset = 0; 568 + 569 + if (zram->stop_writeback) { 570 + ret = -EIO; 571 + break; 572 + } 573 + 574 + if (!blk_idx) { 575 + blk_idx = alloc_block_bdev(zram); 576 + if (!blk_idx) { 577 + ret = -ENOSPC; 578 + break; 579 + } 580 + } 581 + 582 + zram_slot_lock(zram, index); 583 + if (!zram_allocated(zram, index)) 584 + goto next; 585 + 586 + if (zram_test_flag(zram, index, ZRAM_WB) || 587 + zram_test_flag(zram, index, ZRAM_SAME) || 588 + zram_test_flag(zram, index, ZRAM_UNDER_WB)) 589 + goto next; 590 + 591 + if ((mode & IDLE_WRITEBACK && 592 + !zram_test_flag(zram, index, ZRAM_IDLE)) && 593 + (mode & HUGE_WRITEBACK && 594 + !zram_test_flag(zram, index, ZRAM_HUGE))) 595 + goto next; 596 + /* 597 + * Clearing ZRAM_UNDER_WB is duty of caller. 598 + * IOW, zram_free_page never clear it. 
599 + */ 600 + zram_set_flag(zram, index, ZRAM_UNDER_WB); 601 + /* Need for hugepage writeback racing */ 602 + zram_set_flag(zram, index, ZRAM_IDLE); 603 + zram_slot_unlock(zram, index); 604 + if (zram_bvec_read(zram, &bvec, index, 0, NULL)) { 605 + zram_slot_lock(zram, index); 606 + zram_clear_flag(zram, index, ZRAM_UNDER_WB); 607 + zram_clear_flag(zram, index, ZRAM_IDLE); 608 + zram_slot_unlock(zram, index); 609 + continue; 610 + } 611 + 612 + bio_init(&bio, &bio_vec, 1); 613 + bio_set_dev(&bio, zram->bdev); 614 + bio.bi_iter.bi_sector = blk_idx * (PAGE_SIZE >> 9); 615 + bio.bi_opf = REQ_OP_WRITE | REQ_SYNC; 616 + 617 + bio_add_page(&bio, bvec.bv_page, bvec.bv_len, 618 + bvec.bv_offset); 619 + /* 620 + * XXX: A single page IO would be inefficient for write 621 + * but it would be not bad as starter. 622 + */ 623 + ret = submit_bio_wait(&bio); 624 + if (ret) { 625 + zram_slot_lock(zram, index); 626 + zram_clear_flag(zram, index, ZRAM_UNDER_WB); 627 + zram_clear_flag(zram, index, ZRAM_IDLE); 628 + zram_slot_unlock(zram, index); 629 + continue; 630 + } 631 + 632 + atomic64_inc(&zram->stats.bd_writes); 633 + /* 634 + * We released zram_slot_lock so need to check if the slot was 635 + * changed. If there is freeing for the slot, we can catch it 636 + * easily by zram_allocated. 637 + * A subtle case is the slot is freed/reallocated/marked as 638 + * ZRAM_IDLE again. To close the race, idle_store doesn't 639 + * mark ZRAM_IDLE once it found the slot was ZRAM_UNDER_WB. 640 + * Thus, we could close the race by checking ZRAM_IDLE bit. 
641 + */ 642 + zram_slot_lock(zram, index); 643 + if (!zram_allocated(zram, index) || 644 + !zram_test_flag(zram, index, ZRAM_IDLE)) { 645 + zram_clear_flag(zram, index, ZRAM_UNDER_WB); 646 + zram_clear_flag(zram, index, ZRAM_IDLE); 647 + goto next; 648 + } 649 + 650 + zram_free_page(zram, index); 651 + zram_clear_flag(zram, index, ZRAM_UNDER_WB); 652 + zram_set_flag(zram, index, ZRAM_WB); 653 + zram_set_element(zram, index, blk_idx); 654 + blk_idx = 0; 655 + atomic64_inc(&zram->stats.pages_stored); 656 + if (atomic64_add_unless(&zram->stats.bd_wb_limit, 657 + -1 << (PAGE_SHIFT - 12), 0)) { 658 + if (atomic64_read(&zram->stats.bd_wb_limit) == 0) 659 + zram->stop_writeback = true; 660 + } 661 + next: 662 + zram_slot_unlock(zram, index); 663 + } 664 + 665 + if (blk_idx) 666 + free_block_bdev(zram, blk_idx); 667 + ret = len; 668 + __free_page(page); 669 + release_init_lock: 670 + up_read(&zram->init_lock); 671 + 672 + return ret; 589 673 } 590 674 591 675 struct zram_work { ··· 803 561 static int read_from_bdev(struct zram *zram, struct bio_vec *bvec, 804 562 unsigned long entry, struct bio *parent, bool sync) 805 563 { 564 + atomic64_inc(&zram->stats.bd_reads); 806 565 if (sync) 807 566 return read_from_bdev_sync(zram, bvec, entry, parent); 808 567 else 809 568 return read_from_bdev_async(zram, bvec, entry, parent); 810 569 } 811 - 812 - static int write_to_bdev(struct zram *zram, struct bio_vec *bvec, 813 - u32 index, struct bio *parent, 814 - unsigned long *pentry) 815 - { 816 - struct bio *bio; 817 - unsigned long entry; 818 - 819 - bio = bio_alloc(GFP_ATOMIC, 1); 820 - if (!bio) 821 - return -ENOMEM; 822 - 823 - entry = get_entry_bdev(zram); 824 - if (!entry) { 825 - bio_put(bio); 826 - return -ENOSPC; 827 - } 828 - 829 - bio->bi_iter.bi_sector = entry * (PAGE_SIZE >> 9); 830 - bio_set_dev(bio, zram->bdev); 831 - if (!bio_add_page(bio, bvec->bv_page, bvec->bv_len, 832 - bvec->bv_offset)) { 833 - bio_put(bio); 834 - put_entry_bdev(zram, entry); 835 - return -EIO; 
836 - } 837 - 838 - if (!parent) { 839 - bio->bi_opf = REQ_OP_WRITE | REQ_SYNC; 840 - bio->bi_end_io = zram_page_end_io; 841 - } else { 842 - bio->bi_opf = parent->bi_opf; 843 - bio_chain(bio, parent); 844 - } 845 - 846 - submit_bio(bio); 847 - *pentry = entry; 848 - 849 - return 0; 850 - } 851 - 852 - static void zram_wb_clear(struct zram *zram, u32 index) 853 - { 854 - unsigned long entry; 855 - 856 - zram_clear_flag(zram, index, ZRAM_WB); 857 - entry = zram_get_element(zram, index); 858 - zram_set_element(zram, index, 0); 859 - put_entry_bdev(zram, entry); 860 - } 861 - 862 570 #else 863 - static bool zram_wb_enabled(struct zram *zram) { return false; } 864 571 static inline void reset_bdev(struct zram *zram) {}; 865 - static int write_to_bdev(struct zram *zram, struct bio_vec *bvec, 866 - u32 index, struct bio *parent, 867 - unsigned long *pentry) 868 - 869 - { 870 - return -EIO; 871 - } 872 - 873 572 static int read_from_bdev(struct zram *zram, struct bio_vec *bvec, 874 573 unsigned long entry, struct bio *parent, bool sync) 875 574 { 876 575 return -EIO; 877 576 } 878 - static void zram_wb_clear(struct zram *zram, u32 index) {} 577 + 578 + static void free_block_bdev(struct zram *zram, unsigned long blk_idx) {}; 879 579 #endif 880 580 881 581 #ifdef CONFIG_ZRAM_MEMORY_TRACKING ··· 836 652 837 653 static void zram_accessed(struct zram *zram, u32 index) 838 654 { 655 + zram_clear_flag(zram, index, ZRAM_IDLE); 839 656 zram->table[index].ac_time = ktime_get_boottime(); 840 - } 841 - 842 - static void zram_reset_access(struct zram *zram, u32 index) 843 - { 844 - zram->table[index].ac_time = 0; 845 657 } 846 658 847 659 static ssize_t read_block_state(struct file *file, char __user *buf, ··· 869 689 870 690 ts = ktime_to_timespec64(zram->table[index].ac_time); 871 691 copied = snprintf(kbuf + written, count, 872 - "%12zd %12lld.%06lu %c%c%c\n", 692 + "%12zd %12lld.%06lu %c%c%c%c\n", 873 693 index, (s64)ts.tv_sec, 874 694 ts.tv_nsec / NSEC_PER_USEC, 875 695 
zram_test_flag(zram, index, ZRAM_SAME) ? 's' : '.', 876 696 zram_test_flag(zram, index, ZRAM_WB) ? 'w' : '.', 877 - zram_test_flag(zram, index, ZRAM_HUGE) ? 'h' : '.'); 697 + zram_test_flag(zram, index, ZRAM_HUGE) ? 'h' : '.', 698 + zram_test_flag(zram, index, ZRAM_IDLE) ? 'i' : '.'); 878 699 879 700 if (count < copied) { 880 701 zram_slot_unlock(zram, index); ··· 920 739 #else 921 740 static void zram_debugfs_create(void) {}; 922 741 static void zram_debugfs_destroy(void) {}; 923 - static void zram_accessed(struct zram *zram, u32 index) {}; 924 - static void zram_reset_access(struct zram *zram, u32 index) {}; 742 + static void zram_accessed(struct zram *zram, u32 index) 743 + { 744 + zram_clear_flag(zram, index, ZRAM_IDLE); 745 + }; 925 746 static void zram_debugfs_register(struct zram *zram) {}; 926 747 static void zram_debugfs_unregister(struct zram *zram) {}; 927 748 #endif ··· 1060 877 return ret; 1061 878 } 1062 879 880 + #ifdef CONFIG_ZRAM_WRITEBACK 881 + #define FOUR_K(x) ((x) * (1 << (PAGE_SHIFT - 12))) 882 + static ssize_t bd_stat_show(struct device *dev, 883 + struct device_attribute *attr, char *buf) 884 + { 885 + struct zram *zram = dev_to_zram(dev); 886 + ssize_t ret; 887 + 888 + down_read(&zram->init_lock); 889 + ret = scnprintf(buf, PAGE_SIZE, 890 + "%8llu %8llu %8llu\n", 891 + FOUR_K((u64)atomic64_read(&zram->stats.bd_count)), 892 + FOUR_K((u64)atomic64_read(&zram->stats.bd_reads)), 893 + FOUR_K((u64)atomic64_read(&zram->stats.bd_writes))); 894 + up_read(&zram->init_lock); 895 + 896 + return ret; 897 + } 898 + #endif 899 + 1063 900 static ssize_t debug_stat_show(struct device *dev, 1064 901 struct device_attribute *attr, char *buf) 1065 902 { ··· 1089 886 1090 887 down_read(&zram->init_lock); 1091 888 ret = scnprintf(buf, PAGE_SIZE, 1092 - "version: %d\n%8llu\n", 889 + "version: %d\n%8llu %8llu\n", 1093 890 version, 1094 - (u64)atomic64_read(&zram->stats.writestall)); 891 + (u64)atomic64_read(&zram->stats.writestall), 892 + 
(u64)atomic64_read(&zram->stats.miss_free)); 1095 893 up_read(&zram->init_lock); 1096 894 1097 895 return ret; ··· 1100 896 1101 897 static DEVICE_ATTR_RO(io_stat); 1102 898 static DEVICE_ATTR_RO(mm_stat); 899 + #ifdef CONFIG_ZRAM_WRITEBACK 900 + static DEVICE_ATTR_RO(bd_stat); 901 + #endif 1103 902 static DEVICE_ATTR_RO(debug_stat); 1104 903 1105 904 static void zram_meta_free(struct zram *zram, u64 disksize) ··· 1147 940 { 1148 941 unsigned long handle; 1149 942 1150 - zram_reset_access(zram, index); 943 + #ifdef CONFIG_ZRAM_MEMORY_TRACKING 944 + zram->table[index].ac_time = 0; 945 + #endif 946 + if (zram_test_flag(zram, index, ZRAM_IDLE)) 947 + zram_clear_flag(zram, index, ZRAM_IDLE); 1151 948 1152 949 if (zram_test_flag(zram, index, ZRAM_HUGE)) { 1153 950 zram_clear_flag(zram, index, ZRAM_HUGE); 1154 951 atomic64_dec(&zram->stats.huge_pages); 1155 952 } 1156 953 1157 - if (zram_wb_enabled(zram) && zram_test_flag(zram, index, ZRAM_WB)) { 1158 - zram_wb_clear(zram, index); 1159 - atomic64_dec(&zram->stats.pages_stored); 1160 - return; 954 + if (zram_test_flag(zram, index, ZRAM_WB)) { 955 + zram_clear_flag(zram, index, ZRAM_WB); 956 + free_block_bdev(zram, zram_get_element(zram, index)); 957 + goto out; 1161 958 } 1162 959 1163 960 /* ··· 1170 959 */ 1171 960 if (zram_test_flag(zram, index, ZRAM_SAME)) { 1172 961 zram_clear_flag(zram, index, ZRAM_SAME); 1173 - zram_set_element(zram, index, 0); 1174 962 atomic64_dec(&zram->stats.same_pages); 1175 - atomic64_dec(&zram->stats.pages_stored); 1176 - return; 963 + goto out; 1177 964 } 1178 965 1179 966 handle = zram_get_handle(zram, index); ··· 1182 973 1183 974 atomic64_sub(zram_get_obj_size(zram, index), 1184 975 &zram->stats.compr_data_size); 976 + out: 1185 977 atomic64_dec(&zram->stats.pages_stored); 1186 - 1187 978 zram_set_handle(zram, index, 0); 1188 979 zram_set_obj_size(zram, index, 0); 980 + WARN_ON_ONCE(zram->table[index].flags & 981 + ~(1UL << ZRAM_LOCK | 1UL << ZRAM_UNDER_WB)); 1189 982 } 1190 983 1191 984 
static int __zram_bvec_read(struct zram *zram, struct page *page, u32 index, ··· 1198 987 unsigned int size; 1199 988 void *src, *dst; 1200 989 1201 - if (zram_wb_enabled(zram)) { 1202 - zram_slot_lock(zram, index); 1203 - if (zram_test_flag(zram, index, ZRAM_WB)) { 1204 - struct bio_vec bvec; 990 + zram_slot_lock(zram, index); 991 + if (zram_test_flag(zram, index, ZRAM_WB)) { 992 + struct bio_vec bvec; 1205 993 1206 - zram_slot_unlock(zram, index); 1207 - 1208 - bvec.bv_page = page; 1209 - bvec.bv_len = PAGE_SIZE; 1210 - bvec.bv_offset = 0; 1211 - return read_from_bdev(zram, &bvec, 1212 - zram_get_element(zram, index), 1213 - bio, partial_io); 1214 - } 1215 994 zram_slot_unlock(zram, index); 995 + 996 + bvec.bv_page = page; 997 + bvec.bv_len = PAGE_SIZE; 998 + bvec.bv_offset = 0; 999 + return read_from_bdev(zram, &bvec, 1000 + zram_get_element(zram, index), 1001 + bio, partial_io); 1216 1002 } 1217 1003 1218 - zram_slot_lock(zram, index); 1219 1004 handle = zram_get_handle(zram, index); 1220 1005 if (!handle || zram_test_flag(zram, index, ZRAM_SAME)) { 1221 1006 unsigned long value; ··· 1296 1089 struct page *page = bvec->bv_page; 1297 1090 unsigned long element = 0; 1298 1091 enum zram_pageflags flags = 0; 1299 - bool allow_wb = true; 1300 1092 1301 1093 mem = kmap_atomic(page); 1302 1094 if (page_same_filled(mem, &element)) { ··· 1320 1114 return ret; 1321 1115 } 1322 1116 1323 - if (unlikely(comp_len >= huge_class_size)) { 1117 + if (comp_len >= huge_class_size) 1324 1118 comp_len = PAGE_SIZE; 1325 - if (zram_wb_enabled(zram) && allow_wb) { 1326 - zcomp_stream_put(zram->comp); 1327 - ret = write_to_bdev(zram, bvec, index, bio, &element); 1328 - if (!ret) { 1329 - flags = ZRAM_WB; 1330 - ret = 1; 1331 - goto out; 1332 - } 1333 - allow_wb = false; 1334 - goto compress_again; 1335 - } 1336 - } 1337 - 1338 1119 /* 1339 1120 * handle allocation has 2 paths: 1340 1121 * a) fast path is executed with preemption disabled (for ··· 1593 1400 1594 1401 zram = 
bdev->bd_disk->private_data; 1595 1402 1596 - zram_slot_lock(zram, index); 1403 + atomic64_inc(&zram->stats.notify_free); 1404 + if (!zram_slot_trylock(zram, index)) { 1405 + atomic64_inc(&zram->stats.miss_free); 1406 + return; 1407 + } 1408 + 1597 1409 zram_free_page(zram, index); 1598 1410 zram_slot_unlock(zram, index); 1599 - atomic64_inc(&zram->stats.notify_free); 1600 1411 } 1601 1412 1602 1413 static int zram_rw_page(struct block_device *bdev, sector_t sector, ··· 1805 1608 static DEVICE_ATTR_WO(reset); 1806 1609 static DEVICE_ATTR_WO(mem_limit); 1807 1610 static DEVICE_ATTR_WO(mem_used_max); 1611 + static DEVICE_ATTR_WO(idle); 1808 1612 static DEVICE_ATTR_RW(max_comp_streams); 1809 1613 static DEVICE_ATTR_RW(comp_algorithm); 1810 1614 #ifdef CONFIG_ZRAM_WRITEBACK 1811 1615 static DEVICE_ATTR_RW(backing_dev); 1616 + static DEVICE_ATTR_WO(writeback); 1617 + static DEVICE_ATTR_RW(writeback_limit); 1812 1618 #endif 1813 1619 1814 1620 static struct attribute *zram_disk_attrs[] = { ··· 1821 1621 &dev_attr_compact.attr, 1822 1622 &dev_attr_mem_limit.attr, 1823 1623 &dev_attr_mem_used_max.attr, 1624 + &dev_attr_idle.attr, 1824 1625 &dev_attr_max_comp_streams.attr, 1825 1626 &dev_attr_comp_algorithm.attr, 1826 1627 #ifdef CONFIG_ZRAM_WRITEBACK 1827 1628 &dev_attr_backing_dev.attr, 1629 + &dev_attr_writeback.attr, 1630 + &dev_attr_writeback_limit.attr, 1828 1631 #endif 1829 1632 &dev_attr_io_stat.attr, 1830 1633 &dev_attr_mm_stat.attr, 1634 + #ifdef CONFIG_ZRAM_WRITEBACK 1635 + &dev_attr_bd_stat.attr, 1636 + #endif 1831 1637 &dev_attr_debug_stat.attr, 1832 1638 NULL, 1833 1639 };
+14 -5
drivers/block/zram/zram_drv.h
··· 30 30 31 31 32 32 /* 33 - * The lower ZRAM_FLAG_SHIFT bits of table.value is for 33 + * The lower ZRAM_FLAG_SHIFT bits of table.flags is for 34 34 * object size (excluding header), the higher bits is for 35 35 * zram_pageflags. 36 36 * ··· 41 41 */ 42 42 #define ZRAM_FLAG_SHIFT 24 43 43 44 - /* Flags for zram pages (table[page_no].value) */ 44 + /* Flags for zram pages (table[page_no].flags) */ 45 45 enum zram_pageflags { 46 46 /* zram slot is locked */ 47 47 ZRAM_LOCK = ZRAM_FLAG_SHIFT, 48 48 ZRAM_SAME, /* Page consists the same element */ 49 49 ZRAM_WB, /* page is stored on backing_device */ 50 + ZRAM_UNDER_WB, /* page is under writeback */ 50 51 ZRAM_HUGE, /* Incompressible page */ 52 + ZRAM_IDLE, /* not accessed page since last idle marking */ 51 53 52 54 __NR_ZRAM_PAGEFLAGS, 53 55 }; ··· 62 60 unsigned long handle; 63 61 unsigned long element; 64 62 }; 65 - unsigned long value; 63 + unsigned long flags; 66 64 #ifdef CONFIG_ZRAM_MEMORY_TRACKING 67 65 ktime_t ac_time; 68 66 #endif ··· 81 79 atomic64_t pages_stored; /* no. of pages currently stored */ 82 80 atomic_long_t max_used_pages; /* no. of maximum pages stored */ 83 81 atomic64_t writestall; /* no. of write slow paths */ 82 + atomic64_t miss_free; /* no. of missed free */ 83 + #ifdef CONFIG_ZRAM_WRITEBACK 84 + atomic64_t bd_count; /* no. of pages in backing device */ 85 + atomic64_t bd_reads; /* no. of reads from backing device */ 86 + atomic64_t bd_writes; /* no. 
of writes from backing device */ 87 + atomic64_t bd_wb_limit; /* writeback limit of backing device */ 88 + #endif 84 89 }; 85 90 86 91 struct zram { ··· 113 104 * zram is claimed so open request will be failed 114 105 */ 115 106 bool claim; /* Protected by bdev->bd_mutex */ 116 - #ifdef CONFIG_ZRAM_WRITEBACK 117 107 struct file *backing_dev; 108 + bool stop_writeback; 109 + #ifdef CONFIG_ZRAM_WRITEBACK 118 110 struct block_device *bdev; 119 111 unsigned int old_block_size; 120 112 unsigned long *bitmap; 121 113 unsigned long nr_pages; 122 - spinlock_t bitmap_lock; 123 114 #endif 124 115 #ifdef CONFIG_ZRAM_MEMORY_TRACKING 125 116 struct dentry *debugfs_dir;
+2 -2
drivers/char/agp/backend.c
··· 115 115 long memory, index, result; 116 116 117 117 #if PAGE_SHIFT < 20 118 - memory = totalram_pages >> (20 - PAGE_SHIFT); 118 + memory = totalram_pages() >> (20 - PAGE_SHIFT); 119 119 #else 120 - memory = totalram_pages << (PAGE_SHIFT - 20); 120 + memory = totalram_pages() << (PAGE_SHIFT - 20); 121 121 #endif 122 122 index = 1; 123 123
+3 -11
drivers/dax/pmem.c
··· 48 48 percpu_ref_exit(ref); 49 49 } 50 50 51 - static void dax_pmem_percpu_kill(void *data) 51 + static void dax_pmem_percpu_kill(struct percpu_ref *ref) 52 52 { 53 - struct percpu_ref *ref = data; 54 53 struct dax_pmem *dax_pmem = to_dax_pmem(ref); 55 54 56 55 dev_dbg(dax_pmem->dev, "trace\n"); ··· 111 112 } 112 113 113 114 dax_pmem->pgmap.ref = &dax_pmem->ref; 115 + dax_pmem->pgmap.kill = dax_pmem_percpu_kill; 114 116 addr = devm_memremap_pages(dev, &dax_pmem->pgmap); 115 - if (IS_ERR(addr)) { 116 - devm_remove_action(dev, dax_pmem_percpu_exit, &dax_pmem->ref); 117 - percpu_ref_exit(&dax_pmem->ref); 117 + if (IS_ERR(addr)) 118 118 return PTR_ERR(addr); 119 - } 120 - 121 - rc = devm_add_action_or_reset(dev, dax_pmem_percpu_kill, 122 - &dax_pmem->ref); 123 - if (rc) 124 - return rc; 125 119 126 120 /* adjust the dax_region resource to the start of data */ 127 121 memcpy(&res, &dax_pmem->pgmap.res, sizeof(res));
+20 -27
drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
··· 238 238 * amdgpu_mn_invalidate_range_start_gfx - callback to notify about mm change 239 239 * 240 240 * @mn: our notifier 241 - * @mm: the mm this callback is about 242 - * @start: start of updated range 243 - * @end: end of updated range 241 + * @range: mmu notifier context 244 242 * 245 243 * Block for operations on BOs to finish and mark pages as accessed and 246 244 * potentially dirty. 247 245 */ 248 246 static int amdgpu_mn_invalidate_range_start_gfx(struct mmu_notifier *mn, 249 - struct mm_struct *mm, 250 - unsigned long start, 251 - unsigned long end, 252 - bool blockable) 247 + const struct mmu_notifier_range *range) 253 248 { 254 249 struct amdgpu_mn *amn = container_of(mn, struct amdgpu_mn, mn); 255 250 struct interval_tree_node *it; 251 + unsigned long end; 256 252 257 253 /* notification is exclusive, but interval is inclusive */ 258 - end -= 1; 254 + end = range->end - 1; 259 255 260 256 /* TODO we should be able to split locking for interval tree and 261 257 * amdgpu_mn_invalidate_node 262 258 */ 263 - if (amdgpu_mn_read_lock(amn, blockable)) 259 + if (amdgpu_mn_read_lock(amn, range->blockable)) 264 260 return -EAGAIN; 265 261 266 - it = interval_tree_iter_first(&amn->objects, start, end); 262 + it = interval_tree_iter_first(&amn->objects, range->start, end); 267 263 while (it) { 268 264 struct amdgpu_mn_node *node; 269 265 270 - if (!blockable) { 266 + if (!range->blockable) { 271 267 amdgpu_mn_read_unlock(amn); 272 268 return -EAGAIN; 273 269 } 274 270 275 271 node = container_of(it, struct amdgpu_mn_node, it); 276 - it = interval_tree_iter_next(it, start, end); 272 + it = interval_tree_iter_next(it, range->start, end); 277 273 278 - amdgpu_mn_invalidate_node(node, start, end); 274 + amdgpu_mn_invalidate_node(node, range->start, end); 279 275 } 280 276 281 277 return 0; ··· 290 294 * are restorted in amdgpu_mn_invalidate_range_end_hsa. 
291 295 */ 292 296 static int amdgpu_mn_invalidate_range_start_hsa(struct mmu_notifier *mn, 293 - struct mm_struct *mm, 294 - unsigned long start, 295 - unsigned long end, 296 - bool blockable) 297 + const struct mmu_notifier_range *range) 297 298 { 298 299 struct amdgpu_mn *amn = container_of(mn, struct amdgpu_mn, mn); 299 300 struct interval_tree_node *it; 301 + unsigned long end; 300 302 301 303 /* notification is exclusive, but interval is inclusive */ 302 - end -= 1; 304 + end = range->end - 1; 303 305 304 - if (amdgpu_mn_read_lock(amn, blockable)) 306 + if (amdgpu_mn_read_lock(amn, range->blockable)) 305 307 return -EAGAIN; 306 308 307 - it = interval_tree_iter_first(&amn->objects, start, end); 309 + it = interval_tree_iter_first(&amn->objects, range->start, end); 308 310 while (it) { 309 311 struct amdgpu_mn_node *node; 310 312 struct amdgpu_bo *bo; 311 313 312 - if (!blockable) { 314 + if (!range->blockable) { 313 315 amdgpu_mn_read_unlock(amn); 314 316 return -EAGAIN; 315 317 } 316 318 317 319 node = container_of(it, struct amdgpu_mn_node, it); 318 - it = interval_tree_iter_next(it, start, end); 320 + it = interval_tree_iter_next(it, range->start, end); 319 321 320 322 list_for_each_entry(bo, &node->bos, mn_list) { 321 323 struct kgd_mem *mem = bo->kfd_bo; 322 324 323 325 if (amdgpu_ttm_tt_affect_userptr(bo->tbo.ttm, 324 - start, end)) 325 - amdgpu_amdkfd_evict_userptr(mem, mm); 326 + range->start, 327 + end)) 328 + amdgpu_amdkfd_evict_userptr(mem, range->mm); 326 329 } 327 330 } 328 331 ··· 339 344 * Release the lock again to allow new command submissions. 340 345 */ 341 346 static void amdgpu_mn_invalidate_range_end(struct mmu_notifier *mn, 342 - struct mm_struct *mm, 343 - unsigned long start, 344 - unsigned long end) 347 + const struct mmu_notifier_range *range) 345 348 { 346 349 struct amdgpu_mn *amn = container_of(mn, struct amdgpu_mn, mn); 347 350
+1 -1
drivers/gpu/drm/amd/amdkfd/kfd_crat.c
··· 853 853 */ 854 854 pgdat = NODE_DATA(numa_node_id); 855 855 for (zone_type = 0; zone_type < MAX_NR_ZONES; zone_type++) 856 - mem_in_bytes += pgdat->node_zones[zone_type].managed_pages; 856 + mem_in_bytes += zone_managed_pages(&pgdat->node_zones[zone_type]); 857 857 mem_in_bytes <<= PAGE_SHIFT; 858 858 859 859 sub_type_hdr->length_low = lower_32_bits(mem_in_bytes);
+1 -1
drivers/gpu/drm/i915/i915_gem.c
··· 2559 2559 * If there's no chance of allocating enough pages for the whole 2560 2560 * object, bail early. 2561 2561 */ 2562 - if (page_count > totalram_pages) 2562 + if (page_count > totalram_pages()) 2563 2563 return -ENOMEM; 2564 2564 2565 2565 st = kmalloc(sizeof(*st), GFP_KERNEL);
+6 -8
drivers/gpu/drm/i915/i915_gem_userptr.c
··· 113 113 } 114 114 115 115 static int i915_gem_userptr_mn_invalidate_range_start(struct mmu_notifier *_mn, 116 - struct mm_struct *mm, 117 - unsigned long start, 118 - unsigned long end, 119 - bool blockable) 116 + const struct mmu_notifier_range *range) 120 117 { 121 118 struct i915_mmu_notifier *mn = 122 119 container_of(_mn, struct i915_mmu_notifier, mn); 123 120 struct i915_mmu_object *mo; 124 121 struct interval_tree_node *it; 125 122 LIST_HEAD(cancelled); 123 + unsigned long end; 126 124 127 125 if (RB_EMPTY_ROOT(&mn->objects.rb_root)) 128 126 return 0; 129 127 130 128 /* interval ranges are inclusive, but invalidate range is exclusive */ 131 - end--; 129 + end = range->end - 1; 132 130 133 131 spin_lock(&mn->lock); 134 - it = interval_tree_iter_first(&mn->objects, start, end); 132 + it = interval_tree_iter_first(&mn->objects, range->start, end); 135 133 while (it) { 136 - if (!blockable) { 134 + if (!range->blockable) { 137 135 spin_unlock(&mn->lock); 138 136 return -EAGAIN; 139 137 } ··· 149 151 queue_work(mn->wq, &mo->work); 150 152 151 153 list_add(&mo->link, &cancelled); 152 - it = interval_tree_iter_next(it, start, end); 154 + it = interval_tree_iter_next(it, range->start, end); 153 155 } 154 156 list_for_each_entry(mo, &cancelled, link) 155 157 del_object(mo);
+2 -2
drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
··· 170 170 * This should ensure that we do not run into the oomkiller during 171 171 * the test and take down the machine wilfully. 172 172 */ 173 - limit = totalram_pages << PAGE_SHIFT; 173 + limit = totalram_pages() << PAGE_SHIFT; 174 174 limit = min(ppgtt->vm.total, limit); 175 175 176 176 /* Check we can allocate the entire range */ ··· 1244 1244 u64 hole_start, u64 hole_end, 1245 1245 unsigned long end_time)) 1246 1246 { 1247 - const u64 limit = totalram_pages << PAGE_SHIFT; 1247 + const u64 limit = totalram_pages() << PAGE_SHIFT; 1248 1248 struct i915_gem_context *ctx; 1249 1249 struct i915_hw_ppgtt *ppgtt; 1250 1250 IGT_TIMEOUT(end_time);
+7 -9
drivers/gpu/drm/radeon/radeon_mn.c
··· 119 119 * unmap them by move them into system domain again. 120 120 */ 121 121 static int radeon_mn_invalidate_range_start(struct mmu_notifier *mn, 122 - struct mm_struct *mm, 123 - unsigned long start, 124 - unsigned long end, 125 - bool blockable) 122 + const struct mmu_notifier_range *range) 126 123 { 127 124 struct radeon_mn *rmn = container_of(mn, struct radeon_mn, mn); 128 125 struct ttm_operation_ctx ctx = { false, false }; 129 126 struct interval_tree_node *it; 127 + unsigned long end; 130 128 int ret = 0; 131 129 132 130 /* notification is exclusive, but interval is inclusive */ 133 - end -= 1; 131 + end = range->end - 1; 134 132 135 133 /* TODO we should be able to split locking for interval tree and 136 134 * the tear down. 137 135 */ 138 - if (blockable) 136 + if (range->blockable) 139 137 mutex_lock(&rmn->lock); 140 138 else if (!mutex_trylock(&rmn->lock)) 141 139 return -EAGAIN; 142 140 143 - it = interval_tree_iter_first(&rmn->objects, start, end); 141 + it = interval_tree_iter_first(&rmn->objects, range->start, end); 144 142 while (it) { 145 143 struct radeon_mn_node *node; 146 144 struct radeon_bo *bo; 147 145 long r; 148 146 149 - if (!blockable) { 147 + if (!range->blockable) { 150 148 ret = -EAGAIN; 151 149 goto out_unlock; 152 150 } 153 151 154 152 node = container_of(it, struct radeon_mn_node, it); 155 - it = interval_tree_iter_next(it, start, end); 153 + it = interval_tree_iter_next(it, range->start, end); 156 154 157 155 list_for_each_entry(bo, &node->bos, mn_list) { 158 156
+10 -9
drivers/hv/hv_balloon.c
··· 1090 1090 static unsigned long compute_balloon_floor(void) 1091 1091 { 1092 1092 unsigned long min_pages; 1093 + unsigned long nr_pages = totalram_pages(); 1093 1094 #define MB2PAGES(mb) ((mb) << (20 - PAGE_SHIFT)) 1094 1095 /* Simple continuous piecewiese linear function: 1095 1096 * max MiB -> min MiB gradient ··· 1103 1102 * 8192 744 (1/16) 1104 1103 * 32768 1512 (1/32) 1105 1104 */ 1106 - if (totalram_pages < MB2PAGES(128)) 1107 - min_pages = MB2PAGES(8) + (totalram_pages >> 1); 1108 - else if (totalram_pages < MB2PAGES(512)) 1109 - min_pages = MB2PAGES(40) + (totalram_pages >> 2); 1110 - else if (totalram_pages < MB2PAGES(2048)) 1111 - min_pages = MB2PAGES(104) + (totalram_pages >> 3); 1112 - else if (totalram_pages < MB2PAGES(8192)) 1113 - min_pages = MB2PAGES(232) + (totalram_pages >> 4); 1105 + if (nr_pages < MB2PAGES(128)) 1106 + min_pages = MB2PAGES(8) + (nr_pages >> 1); 1107 + else if (nr_pages < MB2PAGES(512)) 1108 + min_pages = MB2PAGES(40) + (nr_pages >> 2); 1109 + else if (nr_pages < MB2PAGES(2048)) 1110 + min_pages = MB2PAGES(104) + (nr_pages >> 3); 1111 + else if (nr_pages < MB2PAGES(8192)) 1112 + min_pages = MB2PAGES(232) + (nr_pages >> 4); 1114 1113 else 1115 - min_pages = MB2PAGES(488) + (totalram_pages >> 5); 1114 + min_pages = MB2PAGES(488) + (nr_pages >> 5); 1116 1115 #undef MB2PAGES 1117 1116 return min_pages; 1118 1117 }
+8 -12
drivers/infiniband/core/umem_odp.c
··· 146 146 } 147 147 148 148 static int ib_umem_notifier_invalidate_range_start(struct mmu_notifier *mn, 149 - struct mm_struct *mm, 150 - unsigned long start, 151 - unsigned long end, 152 - bool blockable) 149 + const struct mmu_notifier_range *range) 153 150 { 154 151 struct ib_ucontext_per_mm *per_mm = 155 152 container_of(mn, struct ib_ucontext_per_mm, mn); 156 153 157 - if (blockable) 154 + if (range->blockable) 158 155 down_read(&per_mm->umem_rwsem); 159 156 else if (!down_read_trylock(&per_mm->umem_rwsem)) 160 157 return -EAGAIN; ··· 166 169 return 0; 167 170 } 168 171 169 - return rbt_ib_umem_for_each_in_range(&per_mm->umem_tree, start, end, 172 + return rbt_ib_umem_for_each_in_range(&per_mm->umem_tree, range->start, 173 + range->end, 170 174 invalidate_range_start_trampoline, 171 - blockable, NULL); 175 + range->blockable, NULL); 172 176 } 173 177 174 178 static int invalidate_range_end_trampoline(struct ib_umem_odp *item, u64 start, ··· 180 182 } 181 183 182 184 static void ib_umem_notifier_invalidate_range_end(struct mmu_notifier *mn, 183 - struct mm_struct *mm, 184 - unsigned long start, 185 - unsigned long end) 185 + const struct mmu_notifier_range *range) 186 186 { 187 187 struct ib_ucontext_per_mm *per_mm = 188 188 container_of(mn, struct ib_ucontext_per_mm, mn); ··· 188 192 if (unlikely(!per_mm->active)) 189 193 return; 190 194 191 - rbt_ib_umem_for_each_in_range(&per_mm->umem_tree, start, 192 - end, 195 + rbt_ib_umem_for_each_in_range(&per_mm->umem_tree, range->start, 196 + range->end, 193 197 invalidate_range_end_trampoline, true, NULL); 194 198 up_read(&per_mm->umem_rwsem); 195 199 }
+5 -8
drivers/infiniband/hw/hfi1/mmu_rb.c
··· 68 68 static unsigned long mmu_node_start(struct mmu_rb_node *); 69 69 static unsigned long mmu_node_last(struct mmu_rb_node *); 70 70 static int mmu_notifier_range_start(struct mmu_notifier *, 71 - struct mm_struct *, 72 - unsigned long, unsigned long, bool); 71 + const struct mmu_notifier_range *); 73 72 static struct mmu_rb_node *__mmu_rb_search(struct mmu_rb_handler *, 74 73 unsigned long, unsigned long); 75 74 static void do_remove(struct mmu_rb_handler *handler, ··· 283 284 } 284 285 285 286 static int mmu_notifier_range_start(struct mmu_notifier *mn, 286 - struct mm_struct *mm, 287 - unsigned long start, 288 - unsigned long end, 289 - bool blockable) 287 + const struct mmu_notifier_range *range) 290 288 { 291 289 struct mmu_rb_handler *handler = 292 290 container_of(mn, struct mmu_rb_handler, mn); ··· 293 297 bool added = false; 294 298 295 299 spin_lock_irqsave(&handler->lock, flags); 296 - for (node = __mmu_int_rb_iter_first(root, start, end - 1); 300 + for (node = __mmu_int_rb_iter_first(root, range->start, range->end-1); 297 301 node; node = ptr) { 298 302 /* Guard against node removal. */ 299 - ptr = __mmu_int_rb_iter_next(node, start, end - 1); 303 + ptr = __mmu_int_rb_iter_next(node, range->start, 304 + range->end - 1); 300 305 trace_hfi1_mmu_mem_invalidate(node->addr, node->len); 301 306 if (handler->ops->invalidate(handler->ops_arg, node)) { 302 307 __mmu_int_rb_remove(node, root);
+1 -1
drivers/md/dm-bufio.c
··· 1887 1887 dm_bufio_allocated_vmalloc = 0; 1888 1888 dm_bufio_current_allocated = 0; 1889 1889 1890 - mem = (__u64)mult_frac(totalram_pages - totalhigh_pages, 1890 + mem = (__u64)mult_frac(totalram_pages() - totalhigh_pages(), 1891 1891 DM_BUFIO_MEMORY_PERCENT, 100) << PAGE_SHIFT; 1892 1892 1893 1893 if (mem > ULONG_MAX)
+1 -1
drivers/md/dm-crypt.c
··· 2167 2167 2168 2168 static void crypt_calculate_pages_per_client(void) 2169 2169 { 2170 - unsigned long pages = (totalram_pages - totalhigh_pages) * DM_CRYPT_MEMORY_PERCENT / 100; 2170 + unsigned long pages = (totalram_pages() - totalhigh_pages()) * DM_CRYPT_MEMORY_PERCENT / 100; 2171 2171 2172 2172 if (!dm_crypt_clients_n) 2173 2173 return;
+1 -1
drivers/md/dm-integrity.c
··· 2843 2843 journal_pages = roundup((__u64)ic->journal_sections * ic->journal_section_sectors, 2844 2844 PAGE_SIZE >> SECTOR_SHIFT) >> (PAGE_SHIFT - SECTOR_SHIFT); 2845 2845 journal_desc_size = journal_pages * sizeof(struct page_list); 2846 - if (journal_pages >= totalram_pages - totalhigh_pages || journal_desc_size > ULONG_MAX) { 2846 + if (journal_pages >= totalram_pages() - totalhigh_pages() || journal_desc_size > ULONG_MAX) { 2847 2847 *error = "Journal doesn't fit into memory"; 2848 2848 r = -ENOMEM; 2849 2849 goto bad;
+1 -1
drivers/md/dm-stats.c
··· 85 85 a = shared_memory_amount + alloc_size; 86 86 if (a < shared_memory_amount) 87 87 return false; 88 - if (a >> PAGE_SHIFT > totalram_pages / DM_STATS_MEMORY_FACTOR) 88 + if (a >> PAGE_SHIFT > totalram_pages() / DM_STATS_MEMORY_FACTOR) 89 89 return false; 90 90 #ifdef CONFIG_MMU 91 91 if (a > (VMALLOC_END - VMALLOC_START) / DM_STATS_VMALLOC_FACTOR)
+1 -1
drivers/media/platform/mtk-vpu/mtk_vpu.c
··· 855 855 /* Set PTCM to 96K and DTCM to 32K */ 856 856 vpu_cfg_writel(vpu, 0x2, VPU_TCM_CFG); 857 857 858 - vpu->enable_4GB = !!(totalram_pages > (SZ_2G >> PAGE_SHIFT)); 858 + vpu->enable_4GB = !!(totalram_pages() > (SZ_2G >> PAGE_SHIFT)); 859 859 dev_info(dev, "4GB mode %u\n", vpu->enable_4GB); 860 860 861 861 if (vpu->enable_4GB) {
+3 -8
drivers/misc/mic/scif/scif_dma.c
··· 201 201 } 202 202 203 203 static int scif_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn, 204 - struct mm_struct *mm, 205 - unsigned long start, 206 - unsigned long end, 207 - bool blockable) 204 + const struct mmu_notifier_range *range) 208 205 { 209 206 struct scif_mmu_notif *mmn; 210 207 211 208 mmn = container_of(mn, struct scif_mmu_notif, ep_mmu_notifier); 212 - scif_rma_destroy_tcw(mmn, start, end - start); 209 + scif_rma_destroy_tcw(mmn, range->start, range->end - range->start); 213 210 214 211 return 0; 215 212 } 216 213 217 214 static void scif_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn, 218 - struct mm_struct *mm, 219 - unsigned long start, 220 - unsigned long end) 215 + const struct mmu_notifier_range *range) 221 216 { 222 217 /* 223 218 * Nothing to do here, everything needed was done in
+6 -8
drivers/misc/sgi-gru/grutlbpurge.c
··· 220 220 * MMUOPS notifier callout functions 221 221 */ 222 222 static int gru_invalidate_range_start(struct mmu_notifier *mn, 223 - struct mm_struct *mm, 224 - unsigned long start, unsigned long end, 225 - bool blockable) 223 + const struct mmu_notifier_range *range) 226 224 { 227 225 struct gru_mm_struct *gms = container_of(mn, struct gru_mm_struct, 228 226 ms_notifier); ··· 228 230 STAT(mmu_invalidate_range); 229 231 atomic_inc(&gms->ms_range_active); 230 232 gru_dbg(grudev, "gms %p, start 0x%lx, end 0x%lx, act %d\n", gms, 231 - start, end, atomic_read(&gms->ms_range_active)); 232 - gru_flush_tlb_range(gms, start, end - start); 233 + range->start, range->end, atomic_read(&gms->ms_range_active)); 234 + gru_flush_tlb_range(gms, range->start, range->end - range->start); 233 235 234 236 return 0; 235 237 } 236 238 237 239 static void gru_invalidate_range_end(struct mmu_notifier *mn, 238 - struct mm_struct *mm, unsigned long start, 239 - unsigned long end) 240 + const struct mmu_notifier_range *range) 240 241 { 241 242 struct gru_mm_struct *gms = container_of(mn, struct gru_mm_struct, 242 243 ms_notifier); ··· 244 247 (void)atomic_dec_and_test(&gms->ms_range_active); 245 248 246 249 wake_up_all(&gms->ms_wait_queue); 247 - gru_dbg(grudev, "gms %p, start 0x%lx, end 0x%lx\n", gms, start, end); 250 + gru_dbg(grudev, "gms %p, start 0x%lx, end 0x%lx\n", 251 + gms, range->start, range->end); 248 252 } 249 253 250 254 static void gru_release(struct mmu_notifier *mn, struct mm_struct *mm)
+1 -1
drivers/misc/vmw_balloon.c
··· 570 570 unsigned long status; 571 571 unsigned long limit; 572 572 573 - limit = totalram_pages; 573 + limit = totalram_pages(); 574 574 575 575 /* Ensure limit fits in 32-bits */ 576 576 if (limit != (u32)limit)
+5 -8
drivers/nvdimm/pmem.c
··· 309 309 blk_cleanup_queue(q); 310 310 } 311 311 312 - static void pmem_freeze_queue(void *q) 312 + static void pmem_freeze_queue(struct percpu_ref *ref) 313 313 { 314 + struct request_queue *q; 315 + 316 + q = container_of(ref, typeof(*q), q_usage_counter); 314 317 blk_freeze_queue_start(q); 315 318 } 316 319 ··· 405 402 406 403 pmem->pfn_flags = PFN_DEV; 407 404 pmem->pgmap.ref = &q->q_usage_counter; 405 + pmem->pgmap.kill = pmem_freeze_queue; 408 406 if (is_nd_pfn(dev)) { 409 407 if (setup_pagemap_fsdax(dev, &pmem->pgmap)) 410 408 return -ENOMEM; ··· 430 426 pmem->size, ARCH_MEMREMAP_PMEM); 431 427 memcpy(&bb_res, &nsio->res, sizeof(bb_res)); 432 428 } 433 - 434 - /* 435 - * At release time the queue must be frozen before 436 - * devm_memremap_pages is unwound 437 - */ 438 - if (devm_add_action_or_reset(dev, pmem_freeze_queue, q)) 439 - return -ENOMEM; 440 429 441 430 if (IS_ERR(addr)) 442 431 return PTR_ERR(addr);
+2 -2
drivers/parisc/ccio-dma.c
··· 1243 1243 ** Hot-Plug/Removal of PCI cards. (aka PCI OLARD). 1244 1244 */ 1245 1245 1246 - iova_space_size = (u32) (totalram_pages / count_parisc_driver(&ccio_driver)); 1246 + iova_space_size = (u32) (totalram_pages() / count_parisc_driver(&ccio_driver)); 1247 1247 1248 1248 /* limit IOVA space size to 1MB-1GB */ 1249 1249 ··· 1282 1282 1283 1283 DBG_INIT("%s() hpa 0x%p mem %luMB IOV %dMB (%d bits)\n", 1284 1284 __func__, ioc->ioc_regs, 1285 - (unsigned long) totalram_pages >> (20 - PAGE_SHIFT), 1285 + (unsigned long) totalram_pages() >> (20 - PAGE_SHIFT), 1286 1286 iova_space_size>>20, 1287 1287 iov_order + PAGE_SHIFT); 1288 1288
+2 -2
drivers/parisc/sba_iommu.c
··· 1406 1406 ** for DMA hints - ergo only 30 bits max. 1407 1407 */ 1408 1408 1409 - iova_space_size = (u32) (totalram_pages/global_ioc_cnt); 1409 + iova_space_size = (u32) (totalram_pages()/global_ioc_cnt); 1410 1410 1411 1411 /* limit IOVA space size to 1MB-1GB */ 1412 1412 if (iova_space_size < (1 << (20 - PAGE_SHIFT))) { ··· 1431 1431 DBG_INIT("%s() hpa 0x%lx mem %ldMB IOV %dMB (%d bits)\n", 1432 1432 __func__, 1433 1433 ioc->ioc_hpa, 1434 - (unsigned long) totalram_pages >> (20 - PAGE_SHIFT), 1434 + (unsigned long) totalram_pages() >> (20 - PAGE_SHIFT), 1435 1435 iova_space_size>>20, 1436 1436 iov_order + PAGE_SHIFT); 1437 1437
+2 -8
drivers/pci/p2pdma.c
··· 82 82 complete_all(&p2p->devmap_ref_done); 83 83 } 84 84 85 - static void pci_p2pdma_percpu_kill(void *data) 85 + static void pci_p2pdma_percpu_kill(struct percpu_ref *ref) 86 86 { 87 - struct percpu_ref *ref = data; 88 - 89 87 /* 90 88 * pci_p2pdma_add_resource() may be called multiple times 91 89 * by a driver and may register the percpu_kill devm action multiple ··· 196 198 pgmap->type = MEMORY_DEVICE_PCI_P2PDMA; 197 199 pgmap->pci_p2pdma_bus_offset = pci_bus_address(pdev, bar) - 198 200 pci_resource_start(pdev, bar); 201 + pgmap->kill = pci_p2pdma_percpu_kill; 199 202 200 203 addr = devm_memremap_pages(&pdev->dev, pgmap); 201 204 if (IS_ERR(addr)) { ··· 207 208 error = gen_pool_add_virt(pdev->p2pdma->pool, (unsigned long)addr, 208 209 pci_bus_address(pdev, bar) + offset, 209 210 resource_size(&pgmap->res), dev_to_node(&pdev->dev)); 210 - if (error) 211 - goto pgmap_free; 212 - 213 - error = devm_add_action_or_reset(&pdev->dev, pci_p2pdma_percpu_kill, 214 - &pdev->p2pdma->devmap_ref); 215 211 if (error) 216 212 goto pgmap_free; 217 213
+1 -1
drivers/staging/android/ion/ion_system_heap.c
··· 110 110 unsigned long size_remaining = PAGE_ALIGN(size); 111 111 unsigned int max_order = orders[0]; 112 112 113 - if (size / PAGE_SIZE > totalram_pages / 2) 113 + if (size / PAGE_SIZE > totalram_pages() / 2) 114 114 return -ENOMEM; 115 115 116 116 INIT_LIST_HEAD(&pages);
+1 -1
drivers/xen/balloon.c
··· 352 352 mutex_unlock(&balloon_mutex); 353 353 /* add_memory_resource() requires the device_hotplug lock */ 354 354 lock_device_hotplug(); 355 - rc = add_memory_resource(nid, resource, memhp_auto_online); 355 + rc = add_memory_resource(nid, resource); 356 356 unlock_device_hotplug(); 357 357 mutex_lock(&balloon_mutex); 358 358
+6 -6
drivers/xen/gntdev.c
··· 520 520 } 521 521 522 522 static int mn_invl_range_start(struct mmu_notifier *mn, 523 - struct mm_struct *mm, 524 - unsigned long start, unsigned long end, 525 - bool blockable) 523 + const struct mmu_notifier_range *range) 526 524 { 527 525 struct gntdev_priv *priv = container_of(mn, struct gntdev_priv, mn); 528 526 struct gntdev_grant_map *map; 529 527 int ret = 0; 530 528 531 - if (blockable) 529 + if (range->blockable) 532 530 mutex_lock(&priv->lock); 533 531 else if (!mutex_trylock(&priv->lock)) 534 532 return -EAGAIN; 535 533 536 534 list_for_each_entry(map, &priv->maps, next) { 537 - ret = unmap_if_in_range(map, start, end, blockable); 535 + ret = unmap_if_in_range(map, range->start, range->end, 536 + range->blockable); 538 537 if (ret) 539 538 goto out_unlock; 540 539 } 541 540 list_for_each_entry(map, &priv->freeable_maps, next) { 542 - ret = unmap_if_in_range(map, start, end, blockable); 541 + ret = unmap_if_in_range(map, range->start, range->end, 542 + range->blockable); 543 543 if (ret) 544 544 goto out_unlock; 545 545 }
+3 -3
drivers/xen/xen-selfballoon.c
··· 189 189 bool reset_timer = false; 190 190 191 191 if (xen_selfballooning_enabled) { 192 - cur_pages = totalram_pages; 192 + cur_pages = totalram_pages(); 193 193 tgt_pages = cur_pages; /* default is no change */ 194 194 goal_pages = vm_memory_committed() + 195 195 totalreserve_pages + ··· 227 227 if (tgt_pages < floor_pages) 228 228 tgt_pages = floor_pages; 229 229 balloon_set_new_target(tgt_pages + 230 - balloon_stats.current_pages - totalram_pages); 230 + balloon_stats.current_pages - totalram_pages()); 231 231 reset_timer = true; 232 232 } 233 233 #ifdef CONFIG_FRONTSWAP ··· 569 569 * much more reliably and response faster in some cases. 570 570 */ 571 571 if (!selfballoon_reserved_mb) { 572 - reserve_pages = totalram_pages / 10; 572 + reserve_pages = totalram_pages() / 10; 573 573 selfballoon_reserved_mb = PAGES2MB(reserve_pages); 574 574 } 575 575 schedule_delayed_work(&selfballoon_worker, selfballoon_interval * HZ);
+1 -1
fs/aio.c
··· 415 415 BUG_ON(PageWriteback(old)); 416 416 get_page(new); 417 417 418 - rc = migrate_page_move_mapping(mapping, new, old, NULL, mode, 1); 418 + rc = migrate_page_move_mapping(mapping, new, old, mode, 1); 419 419 if (rc != MIGRATEPAGE_SUCCESS) { 420 420 put_page(new); 421 421 goto out_unlock;
+1
fs/block_dev.c
··· 1992 1992 .writepages = blkdev_writepages, 1993 1993 .releasepage = blkdev_releasepage, 1994 1994 .direct_IO = blkdev_direct_IO, 1995 + .migratepage = buffer_migrate_page_norefs, 1995 1996 .is_dirty_writeback = buffer_check_dirty_writeback, 1996 1997 }; 1997 1998
+1 -1
fs/ceph/super.h
··· 810 810 * This allows larger machines to have larger/more transfers. 811 811 * Limit the default to 256M 812 812 */ 813 - congestion_kb = (16*int_sqrt(totalram_pages)) << (PAGE_SHIFT-10); 813 + congestion_kb = (16*int_sqrt(totalram_pages())) << (PAGE_SHIFT-10); 814 814 if (congestion_kb > 256*1024) 815 815 congestion_kb = 256*1024; 816 816
+5 -3
fs/dax.c
··· 779 779 780 780 i_mmap_lock_read(mapping); 781 781 vma_interval_tree_foreach(vma, &mapping->i_mmap, index, index) { 782 - unsigned long address, start, end; 782 + struct mmu_notifier_range range; 783 + unsigned long address; 783 784 784 785 cond_resched(); 785 786 ··· 794 793 * call mmu_notifier_invalidate_range_start() on our behalf 795 794 * before taking any lock. 796 795 */ 797 - if (follow_pte_pmd(vma->vm_mm, address, &start, &end, &ptep, &pmdp, &ptl)) 796 + if (follow_pte_pmd(vma->vm_mm, address, &range, 797 + &ptep, &pmdp, &ptl)) 798 798 continue; 799 799 800 800 /* ··· 837 835 pte_unmap_unlock(ptep, ptl); 838 836 } 839 837 840 - mmu_notifier_invalidate_range_end(vma->vm_mm, start, end); 838 + mmu_notifier_invalidate_range_end(&range); 841 839 } 842 840 i_mmap_unlock_read(mapping); 843 841 }
+1 -1
fs/f2fs/data.c
··· 2738 2738 */ 2739 2739 extra_count = (atomic_written ? 1 : 0) - page_has_private(page); 2740 2740 rc = migrate_page_move_mapping(mapping, newpage, 2741 - page, NULL, mode, extra_count); 2741 + page, mode, extra_count); 2742 2742 if (rc != MIGRATEPAGE_SUCCESS) { 2743 2743 if (atomic_written) 2744 2744 mutex_unlock(&fi->inmem_lock);
+4 -3
fs/file_table.c
··· 380 380 void __init files_maxfiles_init(void) 381 381 { 382 382 unsigned long n; 383 - unsigned long memreserve = (totalram_pages - nr_free_pages()) * 3/2; 383 + unsigned long nr_pages = totalram_pages(); 384 + unsigned long memreserve = (nr_pages - nr_free_pages()) * 3/2; 384 385 385 - memreserve = min(memreserve, totalram_pages - 1); 386 - n = ((totalram_pages - memreserve) * (PAGE_SIZE / 1024)) / 10; 386 + memreserve = min(memreserve, nr_pages - 1); 387 + n = ((nr_pages - memreserve) * (PAGE_SIZE / 1024)) / 10; 387 388 388 389 files_stat.max_files = max_t(unsigned long, n, NR_FILE); 389 390 }
+1 -1
fs/fuse/inode.c
··· 824 824 static void sanitize_global_limit(unsigned *limit) 825 825 { 826 826 if (*limit == 0) 827 - *limit = ((totalram_pages << PAGE_SHIFT) >> 13) / 827 + *limit = ((totalram_pages() << PAGE_SHIFT) >> 13) / 828 828 sizeof(struct fuse_req); 829 829 830 830 if (*limit >= 1 << 16)
+28 -33
fs/hugetlbfs/inode.c
··· 383 383 * truncation is indicated by end of range being LLONG_MAX 384 384 * In this case, we first scan the range and release found pages. 385 385 * After releasing pages, hugetlb_unreserve_pages cleans up region/reserv 386 - * maps and global counts. Page faults can not race with truncation 387 - * in this routine. hugetlb_no_page() prevents page faults in the 388 - * truncated range. It checks i_size before allocation, and again after 389 - * with the page table lock for the page held. The same lock must be 390 - * acquired to unmap a page. 386 + * maps and global counts. 391 387 * hole punch is indicated if end is not LLONG_MAX 392 388 * In the hole punch case we scan the range and release found pages. 393 389 * Only when releasing a page is the associated region/reserv map 394 390 * deleted. The region/reserv map for ranges without associated 395 - * pages are not modified. Page faults can race with hole punch. 396 - * This is indicated if we find a mapped page. 391 + * pages are not modified. 392 + * 393 + * Callers of this routine must hold the i_mmap_rwsem in write mode to prevent 394 + * races with page faults. 395 + * 397 396 * Note: If the passed end of range value is beyond the end of file, but 398 397 * not LLONG_MAX this routine still performs a hole punch operation. 399 398 */ ··· 422 423 423 424 for (i = 0; i < pagevec_count(&pvec); ++i) { 424 425 struct page *page = pvec.pages[i]; 425 - u32 hash; 426 426 427 427 index = page->index; 428 - hash = hugetlb_fault_mutex_hash(h, current->mm, 429 - &pseudo_vma, 430 - mapping, index, 0); 431 - mutex_lock(&hugetlb_fault_mutex_table[hash]); 432 - 433 428 /* 434 - * If page is mapped, it was faulted in after being 435 - * unmapped in caller. Unmap (again) now after taking 436 - * the fault mutex. The mutex will prevent faults 437 - * until we finish removing the page. 438 - * 439 - * This race can only happen in the hole punch case. 440 - * Getting here in a truncate operation is a bug. 
429 + * A mapped page is impossible as callers should unmap 430 + * all references before calling. And, i_mmap_rwsem 431 + * prevents the creation of additional mappings. 441 432 */ 442 - if (unlikely(page_mapped(page))) { 443 - BUG_ON(truncate_op); 444 - 445 - i_mmap_lock_write(mapping); 446 - hugetlb_vmdelete_list(&mapping->i_mmap, 447 - index * pages_per_huge_page(h), 448 - (index + 1) * pages_per_huge_page(h)); 449 - i_mmap_unlock_write(mapping); 450 - } 433 + VM_BUG_ON(page_mapped(page)); 451 434 452 435 lock_page(page); 453 436 /* ··· 451 470 } 452 471 453 472 unlock_page(page); 454 - mutex_unlock(&hugetlb_fault_mutex_table[hash]); 455 473 } 456 474 huge_pagevec_release(&pvec); 457 475 cond_resched(); ··· 462 482 463 483 static void hugetlbfs_evict_inode(struct inode *inode) 464 484 { 485 + struct address_space *mapping = inode->i_mapping; 465 486 struct resv_map *resv_map; 466 487 488 + /* 489 + * The vfs layer guarantees that there are no other users of this 490 + * inode. Therefore, it would be safe to call remove_inode_hugepages 491 + * without holding i_mmap_rwsem. We acquire and hold here to be 492 + * consistent with other callers. Since there will be no contention 493 + * on the semaphore, overhead is negligible. 
494 + */ 495 + i_mmap_lock_write(mapping); 467 496 remove_inode_hugepages(inode, 0, LLONG_MAX); 497 + i_mmap_unlock_write(mapping); 498 + 468 499 resv_map = (struct resv_map *)inode->i_mapping->private_data; 469 500 /* root inode doesn't have the resv_map, so we should check it */ 470 501 if (resv_map) ··· 496 505 i_mmap_lock_write(mapping); 497 506 if (!RB_EMPTY_ROOT(&mapping->i_mmap.rb_root)) 498 507 hugetlb_vmdelete_list(&mapping->i_mmap, pgoff, 0); 499 - i_mmap_unlock_write(mapping); 500 508 remove_inode_hugepages(inode, offset, LLONG_MAX); 509 + i_mmap_unlock_write(mapping); 501 510 return 0; 502 511 } 503 512 ··· 531 540 hugetlb_vmdelete_list(&mapping->i_mmap, 532 541 hole_start >> PAGE_SHIFT, 533 542 hole_end >> PAGE_SHIFT); 534 - i_mmap_unlock_write(mapping); 535 543 remove_inode_hugepages(inode, hole_start, hole_end); 544 + i_mmap_unlock_write(mapping); 536 545 inode_unlock(inode); 537 546 } 538 547 ··· 615 624 /* addr is the offset within the file (zero based) */ 616 625 addr = index * hpage_size; 617 626 618 - /* mutex taken here, fault path and hole punch */ 627 + /* 628 + * fault mutex taken here, protects against fault path 629 + * and hole punch. inode_lock previously taken protects 630 + * against truncation. 631 + */ 619 632 hash = hugetlb_fault_mutex_hash(h, mm, &pseudo_vma, mapping, 620 633 index, addr); 621 634 mutex_lock(&hugetlb_fault_mutex_table[hash]);
+1 -1
fs/iomap.c
··· 563 563 { 564 564 int ret; 565 565 566 - ret = migrate_page_move_mapping(mapping, newpage, page, NULL, mode, 0); 566 + ret = migrate_page_move_mapping(mapping, newpage, page, mode, 0); 567 567 if (ret != MIGRATEPAGE_SUCCESS) 568 568 return ret; 569 569
+1 -1
fs/nfs/write.c
··· 2121 2121 * This allows larger machines to have larger/more transfers. 2122 2122 * Limit the default to 256M 2123 2123 */ 2124 - nfs_congestion_kb = (16*int_sqrt(totalram_pages)) << (PAGE_SHIFT-10); 2124 + nfs_congestion_kb = (16*int_sqrt(totalram_pages())) << (PAGE_SHIFT-10); 2125 2125 if (nfs_congestion_kb > 256*1024) 2126 2126 nfs_congestion_kb = 256*1024; 2127 2127
+1 -1
fs/nfsd/nfscache.c
··· 99 99 nfsd_cache_size_limit(void) 100 100 { 101 101 unsigned int limit; 102 - unsigned long low_pages = totalram_pages - totalhigh_pages; 102 + unsigned long low_pages = totalram_pages() - totalhigh_pages(); 103 103 104 104 limit = (16 * int_sqrt(low_pages)) << (PAGE_SHIFT-10); 105 105 return min_t(unsigned int, limit, 256*1024);
+1 -1
fs/ntfs/malloc.h
··· 47 47 return kmalloc(PAGE_SIZE, gfp_mask & ~__GFP_HIGHMEM); 48 48 /* return (void *)__get_free_page(gfp_mask); */ 49 49 } 50 - if (likely((size >> PAGE_SHIFT) < totalram_pages)) 50 + if (likely((size >> PAGE_SHIFT) < totalram_pages())) 51 51 return __vmalloc(size, gfp_mask, PAGE_KERNEL); 52 52 return NULL; 53 53 }
+1 -1
fs/ocfs2/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 - ccflags-y := -Ifs/ocfs2 2 + ccflags-y := -I$(src) 3 3 4 4 obj-$(CONFIG_OCFS2_FS) += \ 5 5 ocfs2.o \
-2
fs/ocfs2/buffer_head_io.c
··· 161 161 #endif 162 162 } 163 163 164 - clear_buffer_uptodate(bh); 165 164 get_bh(bh); /* for end_buffer_read_sync() */ 166 165 bh->b_end_io = end_buffer_read_sync; 167 166 submit_bh(REQ_OP_READ, 0, bh); ··· 340 341 continue; 341 342 } 342 343 343 - clear_buffer_uptodate(bh); 344 344 get_bh(bh); /* for end_buffer_read_sync() */ 345 345 if (validate) 346 346 set_buffer_needs_validate(bh);
+12 -5
fs/ocfs2/cluster/heartbeat.c
··· 582 582 } 583 583 584 584 static int o2hb_read_slots(struct o2hb_region *reg, 585 + unsigned int begin_slot, 585 586 unsigned int max_slots) 586 587 { 587 - unsigned int current_slot=0; 588 + unsigned int current_slot = begin_slot; 588 589 int status; 589 590 struct o2hb_bio_wait_ctxt wc; 590 591 struct bio *bio; ··· 1094 1093 return find_last_bit(nodes, numbits); 1095 1094 } 1096 1095 1096 + static int o2hb_lowest_node(unsigned long *nodes, int numbits) 1097 + { 1098 + return find_first_bit(nodes, numbits); 1099 + } 1100 + 1097 1101 static int o2hb_do_disk_heartbeat(struct o2hb_region *reg) 1098 1102 { 1099 - int i, ret, highest_node; 1103 + int i, ret, highest_node, lowest_node; 1100 1104 int membership_change = 0, own_slot_ok = 0; 1101 1105 unsigned long configured_nodes[BITS_TO_LONGS(O2NM_MAX_NODES)]; 1102 1106 unsigned long live_node_bitmap[BITS_TO_LONGS(O2NM_MAX_NODES)]; ··· 1126 1120 } 1127 1121 1128 1122 highest_node = o2hb_highest_node(configured_nodes, O2NM_MAX_NODES); 1129 - if (highest_node >= O2NM_MAX_NODES) { 1123 + lowest_node = o2hb_lowest_node(configured_nodes, O2NM_MAX_NODES); 1124 + if (highest_node >= O2NM_MAX_NODES || lowest_node >= O2NM_MAX_NODES) { 1130 1125 mlog(ML_NOTICE, "o2hb: No configured nodes found!\n"); 1131 1126 ret = -EINVAL; 1132 1127 goto bail; ··· 1137 1130 * yet. Of course, if the node definitions have holes in them 1138 1131 * then we're reading an empty slot anyway... Consider this 1139 1132 * best-effort. */ 1140 - ret = o2hb_read_slots(reg, highest_node + 1); 1133 + ret = o2hb_read_slots(reg, lowest_node, highest_node + 1); 1141 1134 if (ret < 0) { 1142 1135 mlog_errno(ret); 1143 1136 goto bail; ··· 1808 1801 struct o2hb_disk_slot *slot; 1809 1802 struct o2hb_disk_heartbeat_block *hb_block; 1810 1803 1811 - ret = o2hb_read_slots(reg, reg->hr_blocks); 1804 + ret = o2hb_read_slots(reg, 0, reg->hr_blocks); 1812 1805 if (ret) 1813 1806 goto out; 1814 1807
+1 -1
fs/ocfs2/dlm/Makefile
··· 1 - ccflags-y := -Ifs/ocfs2 1 + ccflags-y := -I$(src)/.. 2 2 3 3 obj-$(CONFIG_OCFS2_FS_O2CB) += ocfs2_dlm.o 4 4
+1 -1
fs/ocfs2/dlmfs/Makefile
··· 1 - ccflags-y := -Ifs/ocfs2 1 + ccflags-y := -I$(src)/.. 2 2 3 3 obj-$(CONFIG_OCFS2_FS) += ocfs2_dlmfs.o 4 4
+1 -2
fs/ocfs2/dlmfs/dlmfs.c
··· 179 179 static int dlmfs_file_release(struct inode *inode, 180 180 struct file *file) 181 181 { 182 - int level, status; 182 + int level; 183 183 struct dlmfs_inode_private *ip = DLMFS_I(inode); 184 184 struct dlmfs_filp_private *fp = file->private_data; 185 185 ··· 188 188 189 189 mlog(0, "close called on inode %lu\n", inode->i_ino); 190 190 191 - status = 0; 192 191 if (fp) { 193 192 level = fp->fp_lock_level; 194 193 if (level != DLM_LOCK_IV)
+2 -4
fs/ocfs2/journal.c
··· 1017 1017 mlog_errno(status); 1018 1018 } 1019 1019 1020 - if (status == 0) { 1020 + /* Shutdown the kernel journal system */ 1021 + if (!jbd2_journal_destroy(journal->j_journal) && !status) { 1021 1022 /* 1022 1023 * Do not toggle if flush was unsuccessful otherwise 1023 1024 * will leave dirty metadata in a "clean" journal ··· 1027 1026 if (status < 0) 1028 1027 mlog_errno(status); 1029 1028 } 1030 - 1031 - /* Shutdown the kernel journal system */ 1032 - jbd2_journal_destroy(journal->j_journal); 1033 1029 journal->j_journal = NULL; 1034 1030 1035 1031 OCFS2_I(inode)->ip_open_count--;
+8 -4
fs/ocfs2/localalloc.c
··· 345 345 if (num_used 346 346 || alloc->id1.bitmap1.i_used 347 347 || alloc->id1.bitmap1.i_total 348 - || la->la_bm_off) 349 - mlog(ML_ERROR, "Local alloc hasn't been recovered!\n" 348 + || la->la_bm_off) { 349 + mlog(ML_ERROR, "inconsistency detected, clean journal with" 350 + " unrecovered local alloc, please run fsck.ocfs2!\n" 350 351 "found = %u, set = %u, taken = %u, off = %u\n", 351 352 num_used, le32_to_cpu(alloc->id1.bitmap1.i_used), 352 353 le32_to_cpu(alloc->id1.bitmap1.i_total), 353 354 OCFS2_LOCAL_ALLOC(alloc)->la_bm_off); 355 + 356 + status = -EINVAL; 357 + goto bail; 358 + } 354 359 355 360 osb->local_alloc_bh = alloc_bh; 356 361 osb->local_alloc_state = OCFS2_LA_ENABLED; ··· 840 835 u32 *numbits, 841 836 struct ocfs2_alloc_reservation *resv) 842 837 { 843 - int numfound = 0, bitoff, left, startoff, lastzero; 838 + int numfound = 0, bitoff, left, startoff; 844 839 int local_resv = 0; 845 840 struct ocfs2_alloc_reservation r; 846 841 void *bitmap = NULL; ··· 878 873 bitmap = OCFS2_LOCAL_ALLOC(alloc)->la_bitmap; 879 874 880 875 numfound = bitoff = startoff = 0; 881 - lastzero = -1; 882 876 left = le32_to_cpu(alloc->id1.bitmap1.i_total); 883 877 while ((bitoff = ocfs2_find_next_zero_bit(bitmap, left, startoff)) != -1) { 884 878 if (bitoff == left) {
+10
fs/proc/array.c
··· 392 392 seq_putc(m, '\n'); 393 393 } 394 394 395 + static inline void task_thp_status(struct seq_file *m, struct mm_struct *mm) 396 + { 397 + bool thp_enabled = IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE); 398 + 399 + if (thp_enabled) 400 + thp_enabled = !test_bit(MMF_DISABLE_THP, &mm->flags); 401 + seq_printf(m, "THP_enabled:\t%d\n", thp_enabled); 402 + } 403 + 395 404 int proc_pid_status(struct seq_file *m, struct pid_namespace *ns, 396 405 struct pid *pid, struct task_struct *task) 397 406 { ··· 415 406 if (mm) { 416 407 task_mem(m, mm); 417 408 task_core_dumping(m, mm); 409 + task_thp_status(m, mm); 418 410 mmput(mm); 419 411 } 420 412 task_sig(m, task);
+1 -1
fs/proc/base.c
··· 530 530 static int proc_oom_score(struct seq_file *m, struct pid_namespace *ns, 531 531 struct pid *pid, struct task_struct *task) 532 532 { 533 - unsigned long totalpages = totalram_pages + total_swap_pages; 533 + unsigned long totalpages = totalram_pages() + total_swap_pages; 534 534 unsigned long points = 0; 535 535 536 536 points = oom_badness(task, NULL, NULL, totalpages) *
+1 -1
fs/proc/page.c
··· 46 46 ppage = pfn_to_page(pfn); 47 47 else 48 48 ppage = NULL; 49 - if (!ppage || PageSlab(ppage)) 49 + if (!ppage || PageSlab(ppage) || page_has_type(ppage)) 50 50 pcount = 0; 51 51 else 52 52 pcount = page_mapcount(ppage);
+7 -2
fs/proc/task_mmu.c
··· 790 790 791 791 __show_smap(m, &mss); 792 792 793 + seq_printf(m, "THPeligible: %d\n", transparent_hugepage_enabled(vma)); 794 + 793 795 if (arch_pkeys_enabled()) 794 796 seq_printf(m, "ProtectionKey: %8u\n", vma_pkey(vma)); 795 797 show_smap_vma_flags(m, vma); ··· 1098 1096 return -ESRCH; 1099 1097 mm = get_task_mm(task); 1100 1098 if (mm) { 1099 + struct mmu_notifier_range range; 1101 1100 struct clear_refs_private cp = { 1102 1101 .type = type, 1103 1102 }; ··· 1142 1139 downgrade_write(&mm->mmap_sem); 1143 1140 break; 1144 1141 } 1145 - mmu_notifier_invalidate_range_start(mm, 0, -1); 1142 + 1143 + mmu_notifier_range_init(&range, mm, 0, -1UL); 1144 + mmu_notifier_invalidate_range_start(&range); 1146 1145 } 1147 1146 walk_page_range(0, mm->highest_vm_end, &clear_refs_walk); 1148 1147 if (type == CLEAR_REFS_SOFT_DIRTY) 1149 - mmu_notifier_invalidate_range_end(mm, 0, -1); 1148 + mmu_notifier_invalidate_range_end(&range); 1150 1149 tlb_finish_mmu(&tlb, 0, -1); 1151 1150 up_read(&mm->mmap_sem); 1152 1151 out_mm:
+1 -1
fs/ubifs/file.c
··· 1481 1481 { 1482 1482 int rc; 1483 1483 1484 - rc = migrate_page_move_mapping(mapping, newpage, page, NULL, mode, 0); 1484 + rc = migrate_page_move_mapping(mapping, newpage, page, mode, 0); 1485 1485 if (rc != MIGRATEPAGE_SUCCESS) 1486 1486 return rc; 1487 1487
+14 -7
fs/userfaultfd.c
··· 53 53 /* a refile sequence protected by fault_pending_wqh lock */ 54 54 struct seqcount refile_seq; 55 55 /* pseudo fd refcounting */ 56 - atomic_t refcount; 56 + refcount_t refcount; 57 57 /* userfaultfd syscall flags */ 58 58 unsigned int flags; 59 59 /* features requested from the userspace */ ··· 140 140 */ 141 141 static void userfaultfd_ctx_get(struct userfaultfd_ctx *ctx) 142 142 { 143 - if (!atomic_inc_not_zero(&ctx->refcount)) 144 - BUG(); 143 + refcount_inc(&ctx->refcount); 145 144 } 146 145 147 146 /** ··· 153 154 */ 154 155 static void userfaultfd_ctx_put(struct userfaultfd_ctx *ctx) 155 156 { 156 - if (atomic_dec_and_test(&ctx->refcount)) { 157 + if (refcount_dec_and_test(&ctx->refcount)) { 157 158 VM_BUG_ON(spin_is_locked(&ctx->fault_pending_wqh.lock)); 158 159 VM_BUG_ON(waitqueue_active(&ctx->fault_pending_wqh)); 159 160 VM_BUG_ON(spin_is_locked(&ctx->fault_wqh.lock)); ··· 685 686 return -ENOMEM; 686 687 } 687 688 688 - atomic_set(&ctx->refcount, 1); 689 + refcount_set(&ctx->refcount, 1); 689 690 ctx->flags = octx->flags; 690 691 ctx->state = UFFD_STATE_RUNNING; 691 692 ctx->features = octx->features; ··· 735 736 struct userfaultfd_ctx *ctx; 736 737 737 738 ctx = vma->vm_userfaultfd_ctx.ctx; 738 - if (ctx && (ctx->features & UFFD_FEATURE_EVENT_REMAP)) { 739 + 740 + if (!ctx) 741 + return; 742 + 743 + if (ctx->features & UFFD_FEATURE_EVENT_REMAP) { 739 744 vm_ctx->ctx = ctx; 740 745 userfaultfd_ctx_get(ctx); 741 746 WRITE_ONCE(ctx->mmap_changing, true); 747 + } else { 748 + /* Drop uffd context if remap feature not enabled */ 749 + vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX; 750 + vma->vm_flags &= ~(VM_UFFD_WP | VM_UFFD_MISSING); 742 751 } 743 752 } 744 753 ··· 1934 1927 if (!ctx) 1935 1928 return -ENOMEM; 1936 1929 1937 - atomic_set(&ctx->refcount, 1); 1930 + refcount_set(&ctx->refcount, 1); 1938 1931 ctx->flags = flags; 1939 1932 ctx->features = 0; 1940 1933 ctx->state = UFFD_STATE_WAIT_API;
+1
include/asm-generic/error-injection.h
··· 8 8 EI_ETYPE_NULL, /* Return NULL if failure */ 9 9 EI_ETYPE_ERRNO, /* Return -ERRNO if failure */ 10 10 EI_ETYPE_ERRNO_NULL, /* Return -ERRNO or NULL if failure */ 11 + EI_ETYPE_TRUE, /* Return true if failure */ 11 12 }; 12 13 13 14 struct error_injection_entry {
+5
include/asm-generic/pgtable.h
··· 1057 1057 int pmd_set_huge(pmd_t *pmd, phys_addr_t addr, pgprot_t prot); 1058 1058 int pud_clear_huge(pud_t *pud); 1059 1059 int pmd_clear_huge(pmd_t *pmd); 1060 + int p4d_free_pud_page(p4d_t *p4d, unsigned long addr); 1060 1061 int pud_free_pmd_page(pud_t *pud, unsigned long addr); 1061 1062 int pmd_free_pte_page(pmd_t *pmd, unsigned long addr); 1062 1063 #else /* !CONFIG_HAVE_ARCH_HUGE_VMAP */ ··· 1082 1081 return 0; 1083 1082 } 1084 1083 static inline int pmd_clear_huge(pmd_t *pmd) 1084 + { 1085 + return 0; 1086 + } 1087 + static inline int p4d_free_pud_page(p4d_t *p4d, unsigned long addr) 1085 1088 { 1086 1089 return 0; 1087 1090 }
+8
include/linux/backing-dev-defs.h
··· 258 258 */ 259 259 static inline void wb_put(struct bdi_writeback *wb) 260 260 { 261 + if (WARN_ON_ONCE(!wb->bdi)) { 262 + /* 263 + * A driver bug might cause a file to be removed before bdi was 264 + * initialized. 265 + */ 266 + return; 267 + } 268 + 261 269 if (wb != &wb->bdi->wb) 262 270 percpu_ref_put(&wb->refcnt); 263 271 }
+5 -1
include/linux/compiler-clang.h
··· 16 16 /* all clang versions usable with the kernel support KASAN ABI version 5 */ 17 17 #define KASAN_ABI_VERSION 5 18 18 19 + #if __has_feature(address_sanitizer) || __has_feature(hwaddress_sanitizer) 19 20 /* emulate gcc's __SANITIZE_ADDRESS__ flag */ 20 - #if __has_feature(address_sanitizer) 21 21 #define __SANITIZE_ADDRESS__ 22 + #define __no_sanitize_address \ 23 + __attribute__((no_sanitize("address", "hwaddress"))) 24 + #else 25 + #define __no_sanitize_address 22 26 #endif 23 27 24 28 /*
+6
include/linux/compiler-gcc.h
··· 143 143 #define KASAN_ABI_VERSION 3 144 144 #endif 145 145 146 + #if __has_attribute(__no_sanitize_address__) 147 + #define __no_sanitize_address __attribute__((no_sanitize_address)) 148 + #else 149 + #define __no_sanitize_address 150 + #endif 151 + 146 152 #if GCC_VERSION >= 50100 147 153 #define COMPILER_HAS_GENERIC_BUILTIN_OVERFLOW 1 148 154 #endif
-13
include/linux/compiler_attributes.h
··· 200 200 #define __noreturn __attribute__((__noreturn__)) 201 201 202 202 /* 203 - * Optional: only supported since gcc >= 4.8 204 - * Optional: not supported by icc 205 - * 206 - * gcc: https://gcc.gnu.org/onlinedocs/gcc/Common-Function-Attributes.html#index-no_005fsanitize_005faddress-function-attribute 207 - * clang: https://clang.llvm.org/docs/AttributeReference.html#no-sanitize-address-no-address-safety-analysis 208 - */ 209 - #if __has_attribute(__no_sanitize_address__) 210 - # define __no_sanitize_address __attribute__((__no_sanitize_address__)) 211 - #else 212 - # define __no_sanitize_address 213 - #endif 214 - 215 - /* 216 203 * gcc: https://gcc.gnu.org/onlinedocs/gcc/Common-Type-Attributes.html#index-packed-type-attribute 217 204 * clang: https://gcc.gnu.org/onlinedocs/gcc/Common-Variable-Attributes.html#index-packed-variable-attribute 218 205 */
+4
include/linux/fs.h
··· 3269 3269 extern int buffer_migrate_page(struct address_space *, 3270 3270 struct page *, struct page *, 3271 3271 enum migrate_mode); 3272 + extern int buffer_migrate_page_norefs(struct address_space *, 3273 + struct page *, struct page *, 3274 + enum migrate_mode); 3272 3275 #else 3273 3276 #define buffer_migrate_page NULL 3277 + #define buffer_migrate_page_norefs NULL 3274 3278 #endif 3275 3279 3276 3280 extern int setattr_prepare(struct dentry *, struct iattr *);
+1 -1
include/linux/gfp.h
··· 81 81 * 82 82 * %__GFP_HARDWALL enforces the cpuset memory allocation policy. 83 83 * 84 - * %__GFP_THISNODE forces the allocation to be satisified from the requested 84 + * %__GFP_THISNODE forces the allocation to be satisfied from the requested 85 85 * node with no fallbacks or placement policy enforcements. 86 86 * 87 87 * %__GFP_ACCOUNT causes the allocation to be accounted to kmemcg.
+26 -2
include/linux/highmem.h
··· 36 36 37 37 /* declarations for linux/mm/highmem.c */ 38 38 unsigned int nr_free_highpages(void); 39 - extern unsigned long totalhigh_pages; 39 + extern atomic_long_t _totalhigh_pages; 40 + static inline unsigned long totalhigh_pages(void) 41 + { 42 + return (unsigned long)atomic_long_read(&_totalhigh_pages); 43 + } 44 + 45 + static inline void totalhigh_pages_inc(void) 46 + { 47 + atomic_long_inc(&_totalhigh_pages); 48 + } 49 + 50 + static inline void totalhigh_pages_dec(void) 51 + { 52 + atomic_long_dec(&_totalhigh_pages); 53 + } 54 + 55 + static inline void totalhigh_pages_add(long count) 56 + { 57 + atomic_long_add(count, &_totalhigh_pages); 58 + } 59 + 60 + static inline void totalhigh_pages_set(long val) 61 + { 62 + atomic_long_set(&_totalhigh_pages, val); 63 + } 40 64 41 65 void kmap_flush_unused(void); 42 66 ··· 75 51 return virt_to_page(addr); 76 52 } 77 53 78 - #define totalhigh_pages 0UL 54 + static inline unsigned long totalhigh_pages(void) { return 0UL; } 79 55 80 56 #ifndef ARCH_HAS_KMAP 81 57 static inline void *kmap(struct page *page)
+25 -3
include/linux/hmm.h
··· 69 69 #define LINUX_HMM_H 70 70 71 71 #include <linux/kconfig.h> 72 + #include <asm/pgtable.h> 72 73 73 74 #if IS_ENABLED(CONFIG_HMM) 74 75 ··· 487 486 * @device: device to bind resource to 488 487 * @ops: memory operations callback 489 488 * @ref: per CPU refcount 489 + * @page_fault: callback when CPU fault on an unaddressable device page 490 490 * 491 491 * This an helper structure for device drivers that do not wish to implement 492 492 * the gory details related to hotplugging new memoy and allocating struct ··· 495 493 * 496 494 * Device drivers can directly use ZONE_DEVICE memory on their own if they 497 495 * wish to do so. 496 + * 497 + * The page_fault() callback must migrate page back, from device memory to 498 + * system memory, so that the CPU can access it. This might fail for various 499 + * reasons (device issues, device have been unplugged, ...). When such error 500 + * conditions happen, the page_fault() callback must return VM_FAULT_SIGBUS and 501 + * set the CPU page table entry to "poisoned". 502 + * 503 + * Note that because memory cgroup charges are transferred to the device memory, 504 + * this should never fail due to memory restrictions. However, allocation 505 + * of a regular system page might still fail because we are out of memory. If 506 + * that happens, the page_fault() callback must return VM_FAULT_OOM. 507 + * 508 + * The page_fault() callback can also try to migrate back multiple pages in one 509 + * chunk, as an optimization. It must, however, prioritize the faulting address 510 + * over all the others. 
498 511 */ 512 + typedef int (*dev_page_fault_t)(struct vm_area_struct *vma, 513 + unsigned long addr, 514 + const struct page *page, 515 + unsigned int flags, 516 + pmd_t *pmdp); 517 + 499 518 struct hmm_devmem { 500 519 struct completion completion; 501 520 unsigned long pfn_first; ··· 526 503 struct dev_pagemap pagemap; 527 504 const struct hmm_devmem_ops *ops; 528 505 struct percpu_ref ref; 506 + dev_page_fault_t page_fault; 529 507 }; 530 508 531 509 /* ··· 536 512 * enough and allocate struct page for it. 537 513 * 538 514 * The device driver can wrap the hmm_devmem struct inside a private device 539 - * driver struct. The device driver must call hmm_devmem_remove() before the 540 - * device goes away and before freeing the hmm_devmem struct memory. 515 + * driver struct. 541 516 */ 542 517 struct hmm_devmem *hmm_devmem_add(const struct hmm_devmem_ops *ops, 543 518 struct device *device, ··· 544 521 struct hmm_devmem *hmm_devmem_add_resource(const struct hmm_devmem_ops *ops, 545 522 struct device *device, 546 523 struct resource *res); 547 - void hmm_devmem_remove(struct hmm_devmem *devmem); 548 524 549 525 /* 550 526 * hmm_devmem_page_set_drvdata - set per-page driver data field
+12 -1
include/linux/huge_mm.h
··· 93 93 94 94 extern unsigned long transparent_hugepage_flags; 95 95 96 - static inline bool transparent_hugepage_enabled(struct vm_area_struct *vma) 96 + /* 97 + * to be used on vmas which are known to support THP. 98 + * Use transparent_hugepage_enabled otherwise 99 + */ 100 + static inline bool __transparent_hugepage_enabled(struct vm_area_struct *vma) 97 101 { 98 102 if (vma->vm_flags & VM_NOHUGEPAGE) 99 103 return false; ··· 120 116 121 117 return false; 122 118 } 119 + 120 + bool transparent_hugepage_enabled(struct vm_area_struct *vma); 123 121 124 122 #define transparent_hugepage_use_zero_page() \ 125 123 (transparent_hugepage_flags & \ ··· 262 256 #define HPAGE_PUD_SIZE ({ BUILD_BUG(); 0; }) 263 257 264 258 #define hpage_nr_pages(x) 1 259 + 260 + static inline bool __transparent_hugepage_enabled(struct vm_area_struct *vma) 261 + { 262 + return false; 263 + } 265 264 266 265 static inline bool transparent_hugepage_enabled(struct vm_area_struct *vma) 267 266 {
+76 -25
include/linux/kasan.h
··· 14 14 #include <asm/kasan.h> 15 15 #include <asm/pgtable.h> 16 16 17 - extern unsigned char kasan_zero_page[PAGE_SIZE]; 18 - extern pte_t kasan_zero_pte[PTRS_PER_PTE]; 19 - extern pmd_t kasan_zero_pmd[PTRS_PER_PMD]; 20 - extern pud_t kasan_zero_pud[PTRS_PER_PUD]; 21 - extern p4d_t kasan_zero_p4d[MAX_PTRS_PER_P4D]; 17 + extern unsigned char kasan_early_shadow_page[PAGE_SIZE]; 18 + extern pte_t kasan_early_shadow_pte[PTRS_PER_PTE]; 19 + extern pmd_t kasan_early_shadow_pmd[PTRS_PER_PMD]; 20 + extern pud_t kasan_early_shadow_pud[PTRS_PER_PUD]; 21 + extern p4d_t kasan_early_shadow_p4d[MAX_PTRS_PER_P4D]; 22 22 23 - int kasan_populate_zero_shadow(const void *shadow_start, 23 + int kasan_populate_early_shadow(const void *shadow_start, 24 24 const void *shadow_end); 25 25 26 26 static inline void *kasan_mem_to_shadow(const void *addr) ··· 45 45 46 46 void kasan_cache_create(struct kmem_cache *cache, unsigned int *size, 47 47 slab_flags_t *flags); 48 - void kasan_cache_shrink(struct kmem_cache *cache); 49 - void kasan_cache_shutdown(struct kmem_cache *cache); 50 48 51 49 void kasan_poison_slab(struct page *page); 52 50 void kasan_unpoison_object_data(struct kmem_cache *cache, void *object); 53 51 void kasan_poison_object_data(struct kmem_cache *cache, void *object); 54 - void kasan_init_slab_obj(struct kmem_cache *cache, const void *object); 52 + void * __must_check kasan_init_slab_obj(struct kmem_cache *cache, 53 + const void *object); 55 54 56 - void kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags); 55 + void * __must_check kasan_kmalloc_large(const void *ptr, size_t size, 56 + gfp_t flags); 57 57 void kasan_kfree_large(void *ptr, unsigned long ip); 58 58 void kasan_poison_kfree(void *ptr, unsigned long ip); 59 - void kasan_kmalloc(struct kmem_cache *s, const void *object, size_t size, 60 - gfp_t flags); 61 - void kasan_krealloc(const void *object, size_t new_size, gfp_t flags); 59 + void * __must_check kasan_kmalloc(struct kmem_cache *s, const void *object,
60 + size_t size, gfp_t flags); 61 + void * __must_check kasan_krealloc(const void *object, size_t new_size, 62 + gfp_t flags); 62 63 63 - void kasan_slab_alloc(struct kmem_cache *s, void *object, gfp_t flags); 64 + void * __must_check kasan_slab_alloc(struct kmem_cache *s, void *object, 65 + gfp_t flags); 64 66 bool kasan_slab_free(struct kmem_cache *s, void *object, unsigned long ip); 65 67 66 68 struct kasan_cache { ··· 99 97 static inline void kasan_cache_create(struct kmem_cache *cache, 100 98 unsigned int *size, 101 99 slab_flags_t *flags) {} 102 - static inline void kasan_cache_shrink(struct kmem_cache *cache) {} 103 - static inline void kasan_cache_shutdown(struct kmem_cache *cache) {} 104 100 105 101 static inline void kasan_poison_slab(struct page *page) {} 106 102 static inline void kasan_unpoison_object_data(struct kmem_cache *cache, 107 103 void *object) {} 108 104 static inline void kasan_poison_object_data(struct kmem_cache *cache, 109 105 void *object) {} 110 - static inline void kasan_init_slab_obj(struct kmem_cache *cache, 111 - const void *object) {} 106 + static inline void *kasan_init_slab_obj(struct kmem_cache *cache, 107 + const void *object) 108 + { 109 + return (void *)object; 110 + } 112 111 113 - static inline void kasan_kmalloc_large(void *ptr, size_t size, gfp_t flags) {} 112 + static inline void *kasan_kmalloc_large(void *ptr, size_t size, gfp_t flags) 113 + { 114 + return ptr; 115 + } 114 116 static inline void kasan_kfree_large(void *ptr, unsigned long ip) {} 115 117 static inline void kasan_poison_kfree(void *ptr, unsigned long ip) {} 116 - static inline void kasan_kmalloc(struct kmem_cache *s, const void *object, 117 - size_t size, gfp_t flags) {} 118 - static inline void kasan_krealloc(const void *object, size_t new_size, 119 - gfp_t flags) {} 118 + static inline void *kasan_kmalloc(struct kmem_cache *s, const void *object, 119 + size_t size, gfp_t flags) 120 + { 121 + return (void *)object; 122 + } 123 + static inline void *kasan_krealloc(const void *object, size_t new_size,
124 + gfp_t flags) 125 + { 126 + return (void *)object; 127 + } 120 128 121 - static inline void kasan_slab_alloc(struct kmem_cache *s, void *object, 122 - gfp_t flags) {} 129 + static inline void *kasan_slab_alloc(struct kmem_cache *s, void *object, 130 + gfp_t flags) 131 + { 132 + return object; 133 + } 123 134 static inline bool kasan_slab_free(struct kmem_cache *s, void *object, 124 135 unsigned long ip) 125 136 { ··· 154 139 static inline size_t kasan_metadata_size(struct kmem_cache *cache) { return 0; } 155 140 156 141 #endif /* CONFIG_KASAN */ 142 + 143 + #ifdef CONFIG_KASAN_GENERIC 144 + 145 + #define KASAN_SHADOW_INIT 0 146 + 147 + void kasan_cache_shrink(struct kmem_cache *cache); 148 + void kasan_cache_shutdown(struct kmem_cache *cache); 149 + 150 + #else /* CONFIG_KASAN_GENERIC */ 151 + 152 + static inline void kasan_cache_shrink(struct kmem_cache *cache) {} 153 + static inline void kasan_cache_shutdown(struct kmem_cache *cache) {} 154 + 155 + #endif /* CONFIG_KASAN_GENERIC */ 156 + 157 + #ifdef CONFIG_KASAN_SW_TAGS 158 + 159 + #define KASAN_SHADOW_INIT 0xFF 160 + 161 + void kasan_init_tags(void); 162 + 163 + void *kasan_reset_tag(const void *addr); 164 + 165 + void kasan_report(unsigned long addr, size_t size, 166 + bool is_write, unsigned long ip); 167 + 168 + #else /* CONFIG_KASAN_SW_TAGS */ 169 + 170 + static inline void kasan_init_tags(void) { } 171 + 172 + static inline void *kasan_reset_tag(const void *addr) 173 + { 174 + return (void *)addr; 175 + } 176 + 177 + #endif /* CONFIG_KASAN_SW_TAGS */ 157 178 158 179 #endif /* LINUX_KASAN_H */
+3 -3
include/linux/memblock.h
··· 154 154 void __next_reserved_mem_region(u64 *idx, phys_addr_t *out_start, 155 155 phys_addr_t *out_end); 156 156 157 - void __memblock_free_early(phys_addr_t base, phys_addr_t size); 158 157 void __memblock_free_late(phys_addr_t base, phys_addr_t size); 159 158 160 159 /** ··· 319 320 /* Flags for memblock allocation APIs */ 320 321 #define MEMBLOCK_ALLOC_ANYWHERE (~(phys_addr_t)0) 321 322 #define MEMBLOCK_ALLOC_ACCESSIBLE 0 323 + #define MEMBLOCK_ALLOC_KASAN 1 322 324 323 325 /* We are using top down, so it is safe to use 0 here */ 324 326 #define MEMBLOCK_LOW_LIMIT 0 ··· 414 414 static inline void __init memblock_free_early(phys_addr_t base, 415 415 phys_addr_t size) 416 416 { 417 - __memblock_free_early(base, size); 417 + memblock_free(base, size); 418 418 } 419 419 420 420 static inline void __init memblock_free_early_nid(phys_addr_t base, 421 421 phys_addr_t size, int nid) 422 422 { 423 - __memblock_free_early(base, size); 423 + memblock_free(base, size); 424 424 } 425 425 426 426 static inline void __init memblock_free_late(phys_addr_t base, phys_addr_t size)
+9 -2
include/linux/memcontrol.h
··· 526 526 527 527 unsigned long mem_cgroup_get_max(struct mem_cgroup *memcg); 528 528 529 - void mem_cgroup_print_oom_info(struct mem_cgroup *memcg, 529 + void mem_cgroup_print_oom_context(struct mem_cgroup *memcg, 530 530 struct task_struct *p); 531 + 532 + void mem_cgroup_print_oom_meminfo(struct mem_cgroup *memcg); 531 533 532 534 static inline void mem_cgroup_enter_user_fault(void) 533 535 { ··· 972 970 } 973 971 974 972 static inline void 975 - mem_cgroup_print_oom_info(struct mem_cgroup *memcg, struct task_struct *p) 973 + mem_cgroup_print_oom_context(struct mem_cgroup *memcg, struct task_struct *p) 974 + { 975 + } 976 + 977 + static inline void 978 + mem_cgroup_print_oom_meminfo(struct mem_cgroup *memcg) 976 979 { 977 980 } 978 981
+5 -6
include/linux/memory_hotplug.h
··· 107 107 } 108 108 109 109 #ifdef CONFIG_MEMORY_HOTREMOVE 110 - extern int arch_remove_memory(u64 start, u64 size, 111 - struct vmem_altmap *altmap); 110 + extern int arch_remove_memory(int nid, u64 start, u64 size, 111 + struct vmem_altmap *altmap); 112 112 extern int __remove_pages(struct zone *zone, unsigned long start_pfn, 113 113 unsigned long nr_pages, struct vmem_altmap *altmap); 114 114 #endif /* CONFIG_MEMORY_HOTREMOVE */ ··· 326 326 void *arg, int (*func)(struct memory_block *, void *)); 327 327 extern int __add_memory(int nid, u64 start, u64 size); 328 328 extern int add_memory(int nid, u64 start, u64 size); 329 - extern int add_memory_resource(int nid, struct resource *resource, bool online); 329 + extern int add_memory_resource(int nid, struct resource *resource); 330 330 extern int arch_add_memory(int nid, u64 start, u64 size, 331 331 struct vmem_altmap *altmap, bool want_memblock); 332 332 extern void move_pfn_range_to_zone(struct zone *zone, unsigned long start_pfn, 333 333 unsigned long nr_pages, struct vmem_altmap *altmap); 334 - extern int offline_pages(unsigned long start_pfn, unsigned long nr_pages); 335 334 extern bool is_memblock_offlined(struct memory_block *mem); 336 - extern int sparse_add_one_section(struct pglist_data *pgdat, 337 - unsigned long start_pfn, struct vmem_altmap *altmap); 335 + extern int sparse_add_one_section(int nid, unsigned long start_pfn, 336 + struct vmem_altmap *altmap); 338 337 extern void sparse_remove_one_section(struct zone *zone, struct mem_section *ms, 339 338 unsigned long map_offset, struct vmem_altmap *altmap); 340 339 extern struct page *sparse_decode_mem_map(unsigned long coded_mem_map,
+2 -32
include/linux/memremap.h
··· 4 4 #include <linux/ioport.h> 5 5 #include <linux/percpu-refcount.h> 6 6 7 - #include <asm/pgtable.h> 8 - 9 7 struct resource; 10 8 struct device; 11 9 ··· 64 66 }; 65 67 66 68 /* 67 - * For MEMORY_DEVICE_PRIVATE we use ZONE_DEVICE and extend it with two 68 - * callbacks: 69 - * page_fault() 70 - * page_free() 71 - * 72 69 * Additional notes about MEMORY_DEVICE_PRIVATE may be found in 73 70 * include/linux/hmm.h and Documentation/vm/hmm.rst. There is also a brief 74 71 * explanation in include/linux/memory_hotplug.h. 75 72 * 76 - * The page_fault() callback must migrate page back, from device memory to 77 - * system memory, so that the CPU can access it. This might fail for various 78 - * reasons (device issues, device have been unplugged, ...). When such error 79 - * conditions happen, the page_fault() callback must return VM_FAULT_SIGBUS and 80 - * set the CPU page table entry to "poisoned". 81 - * 82 - * Note that because memory cgroup charges are transferred to the device memory, 83 - * this should never fail due to memory restrictions. However, allocation 84 - * of a regular system page might still fail because we are out of memory. If 85 - * that happens, the page_fault() callback must return VM_FAULT_OOM. 86 - * 87 - * The page_fault() callback can also try to migrate back multiple pages in one 88 - * chunk, as an optimization. It must, however, prioritize the faulting address 89 - * over all the others. 90 - * 91 - * 92 73 * The page_free() callback is called once the page refcount reaches 1 93 74 * (ZONE_DEVICE pages never reach 0 refcount unless there is a refcount bug. 94 75 * This allows the device driver to implement its own memory management.) 95 - * 96 - * For MEMORY_DEVICE_PUBLIC only the page_free() callback matter. 
97 76 */ 98 - typedef int (*dev_page_fault_t)(struct vm_area_struct *vma, 99 - unsigned long addr, 100 - const struct page *page, 101 - unsigned int flags, 102 - pmd_t *pmdp); 103 77 typedef void (*dev_page_free_t)(struct page *page, void *data); 104 78 105 79 /** 106 80 * struct dev_pagemap - metadata for ZONE_DEVICE mappings 107 - * @page_fault: callback when CPU fault on an unaddressable device page 108 81 * @page_free: free page callback when page refcount reaches 1 109 82 * @altmap: pre-allocated/reserved memory for vmemmap allocations 110 83 * @res: physical address range covered by @ref 111 84 * @ref: reference count that pins the devm_memremap_pages() mapping 85 + * @kill: callback to transition @ref to the dead state 112 86 * @dev: host device of the mapping for debug 113 87 * @data: private data pointer for page_free() 114 88 * @type: memory type: see MEMORY_* in memory_hotplug.h 115 89 */ 116 90 struct dev_pagemap { 117 - dev_page_fault_t page_fault; 118 91 dev_page_free_t page_free; 119 92 struct vmem_altmap altmap; 120 93 bool altmap_valid; 121 94 struct resource res; 122 95 struct percpu_ref *ref; 96 + void (*kill)(struct percpu_ref *ref); 123 97 struct device *dev; 124 98 void *data; 125 99 enum memory_type type;
+2 -3
include/linux/migrate.h
··· 29 29 }; 30 30 31 31 /* In mm/debug.c; also keep sync with include/trace/events/migrate.h */ 32 - extern char *migrate_reason_names[MR_TYPES]; 32 + extern const char *migrate_reason_names[MR_TYPES]; 33 33 34 34 static inline struct page *new_page_nodemask(struct page *page, 35 35 int preferred_nid, nodemask_t *nodemask) ··· 77 77 extern int migrate_huge_page_move_mapping(struct address_space *mapping, 78 78 struct page *newpage, struct page *page); 79 79 extern int migrate_page_move_mapping(struct address_space *mapping, 80 - struct page *newpage, struct page *page, 81 - struct buffer_head *head, enum migrate_mode mode, 80 + struct page *newpage, struct page *page, enum migrate_mode mode, 82 81 int extra_count); 83 82 #else 84 83
+63 -13
include/linux/mm.h
··· 48 48 static inline void set_max_mapnr(unsigned long limit) { } 49 49 #endif 50 50 51 - extern unsigned long totalram_pages; 51 + extern atomic_long_t _totalram_pages; 52 + static inline unsigned long totalram_pages(void) 53 + { 54 + return (unsigned long)atomic_long_read(&_totalram_pages); 55 + } 56 + 57 + static inline void totalram_pages_inc(void) 58 + { 59 + atomic_long_inc(&_totalram_pages); 60 + } 61 + 62 + static inline void totalram_pages_dec(void) 63 + { 64 + atomic_long_dec(&_totalram_pages); 65 + } 66 + 67 + static inline void totalram_pages_add(long count) 68 + { 69 + atomic_long_add(count, &_totalram_pages); 70 + } 71 + 72 + static inline void totalram_pages_set(long val) 73 + { 74 + atomic_long_set(&_totalram_pages, val); 75 + } 76 + 52 77 extern void * high_memory; 53 78 extern int page_cluster; 54 79 ··· 829 804 #define NODES_PGOFF (SECTIONS_PGOFF - NODES_WIDTH) 830 805 #define ZONES_PGOFF (NODES_PGOFF - ZONES_WIDTH) 831 806 #define LAST_CPUPID_PGOFF (ZONES_PGOFF - LAST_CPUPID_WIDTH) 807 + #define KASAN_TAG_PGOFF (LAST_CPUPID_PGOFF - KASAN_TAG_WIDTH) 832 808 833 809 /* 834 810 * Define the bit shifts to access each section. For non-existent
··· 840 814 #define NODES_PGSHIFT (NODES_PGOFF * (NODES_WIDTH != 0)) 841 815 #define ZONES_PGSHIFT (ZONES_PGOFF * (ZONES_WIDTH != 0)) 842 816 #define LAST_CPUPID_PGSHIFT (LAST_CPUPID_PGOFF * (LAST_CPUPID_WIDTH != 0)) 817 + #define KASAN_TAG_PGSHIFT (KASAN_TAG_PGOFF * (KASAN_TAG_WIDTH != 0)) 843 818 844 819 /* NODE:ZONE or SECTION:ZONE is used to ID a zone for the buddy allocator */ 845 820 #ifdef NODE_NOT_IN_PAGE_FLAGS ··· 863 836 #define NODES_MASK ((1UL << NODES_WIDTH) - 1) 864 837 #define SECTIONS_MASK ((1UL << SECTIONS_WIDTH) - 1) 865 838 #define LAST_CPUPID_MASK ((1UL << LAST_CPUPID_SHIFT) - 1) 839 + #define KASAN_TAG_MASK ((1UL << KASAN_TAG_WIDTH) - 1) 866 840 #define ZONEID_MASK ((1UL << ZONEID_SHIFT) - 1) 867 841 868 842 static inline enum zone_type page_zonenum(const struct page *page) ··· 1128 1100 return false; 1129 1101 } 1130 1102 #endif /* CONFIG_NUMA_BALANCING */ 1103 + 1104 + #ifdef CONFIG_KASAN_SW_TAGS 1105 + static inline u8 page_kasan_tag(const struct page *page) 1106 + { 1107 + return (page->flags >> KASAN_TAG_PGSHIFT) & KASAN_TAG_MASK; 1108 + } 1109 + 1110 + static inline void page_kasan_tag_set(struct page *page, u8 tag) 1111 + { 1112 + page->flags &= ~(KASAN_TAG_MASK << KASAN_TAG_PGSHIFT); 1113 + page->flags |= (tag & KASAN_TAG_MASK) << KASAN_TAG_PGSHIFT; 1114 + } 1115 + 1116 + static inline void page_kasan_tag_reset(struct page *page) 1117 + { 1118 + page_kasan_tag_set(page, 0xff); 1119 + } 1120 + #else 1121 + static inline u8 page_kasan_tag(const struct page *page) 1122 + { 1123 + return 0xff; 1124 + } 1125 + 1126 + static inline void page_kasan_tag_set(struct page *page, u8 tag) { } 1127 + static inline void page_kasan_tag_reset(struct page *page) { } 1128 + #endif 1131 1129 1132 1130 static inline struct zone *page_zone(const struct page *page) 1133 1131 { ··· 1451 1397 void *private; 1452 1398 }; 1453 1399 1400 + struct mmu_notifier_range; 1401 + 1454 1402 int walk_page_range(unsigned long addr, unsigned long end,
1455 1403 struct mm_walk *walk); 1456 1404 int walk_page_vma(struct vm_area_struct *vma, struct mm_walk *walk); ··· 1461 1405 int copy_page_range(struct mm_struct *dst, struct mm_struct *src, 1462 1406 struct vm_area_struct *vma); 1463 1407 int follow_pte_pmd(struct mm_struct *mm, unsigned long address, 1464 - unsigned long *start, unsigned long *end, 1465 - pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp); 1408 + struct mmu_notifier_range *range, 1409 + pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp); 1466 1410 int follow_pfn(struct vm_area_struct *vma, unsigned long address, 1467 1411 unsigned long *pfn); 1468 1412 int follow_phys(struct vm_area_struct *vma, unsigned long address, ··· 1956 1900 return true; 1957 1901 } 1958 1902 1959 - /* Reset page->mapping so free_pages_check won't complain. */ 1960 - static inline void pte_lock_deinit(struct page *page) 1961 - { 1962 - page->mapping = NULL; 1963 - ptlock_free(page); 1964 - } 1965 - 1966 1903 #else /* !USE_SPLIT_PTE_PTLOCKS */ 1967 1904 /* 1968 1905 * We use mm->page_table_lock to guard all pagetable pages of the mm. ··· 1966 1917 } 1967 1918 static inline void ptlock_cache_init(void) {} 1968 1919 static inline bool ptlock_init(struct page *page) { return true; } 1969 - static inline void pte_lock_deinit(struct page *page) {} 1920 + static inline void ptlock_free(struct page *page) {} 1970 1921 #endif /* USE_SPLIT_PTE_PTLOCKS */ 1971 1922 1972 1923 static inline void pgtable_init(void) ··· 1986 1937 1987 1938 static inline void pgtable_page_dtor(struct page *page) 1988 1939 { 1989 - pte_lock_deinit(page); 1940 + ptlock_free(page); 1990 1941 __ClearPageTable(page); 1991 1942 dec_zone_page_state(page, NR_PAGETABLE); 1992 1943 } ··· 2103 2054 * Return pages freed into the buddy system.
2104 2055 */ 2105 2056 extern unsigned long free_reserved_area(void *start, void *end, 2106 - int poison, char *s); 2057 + int poison, const char *s); 2107 2058 2108 2059 #ifdef CONFIG_HIGHMEM 2109 2060 /* ··· 2251 2202 2252 2203 /* page_alloc.c */ 2253 2204 extern int min_free_kbytes; 2205 + extern int watermark_boost_factor; 2254 2206 extern int watermark_scale_factor; 2255 2207 2256 2208 /* nommu.c */
+67 -35
include/linux/mmu_notifier.h
··· 25 25 spinlock_t lock; 26 26 }; 27 27 28 + struct mmu_notifier_range { 29 + struct mm_struct *mm; 30 + unsigned long start; 31 + unsigned long end; 32 + bool blockable; 33 + }; 34 + 28 35 struct mmu_notifier_ops { 29 36 /* 30 37 * Called either by mmu_notifier_unregister or when the mm is ··· 153 146 * 154 147 */ 155 148 int (*invalidate_range_start)(struct mmu_notifier *mn, 156 - struct mm_struct *mm, 157 - unsigned long start, unsigned long end, 158 - bool blockable); 149 + const struct mmu_notifier_range *range); 159 150 void (*invalidate_range_end)(struct mmu_notifier *mn, 160 - struct mm_struct *mm, 161 - unsigned long start, unsigned long end); 151 + const struct mmu_notifier_range *range); 162 152 163 153 /* 164 154 * invalidate_range() is either called between ··· 220 216 unsigned long address); 221 217 extern void __mmu_notifier_change_pte(struct mm_struct *mm, 222 218 unsigned long address, pte_t pte); 223 - extern int __mmu_notifier_invalidate_range_start(struct mm_struct *mm, 224 - unsigned long start, unsigned long end, 225 - bool blockable); 226 - extern void __mmu_notifier_invalidate_range_end(struct mm_struct *mm, 227 - unsigned long start, unsigned long end, 219 + extern int __mmu_notifier_invalidate_range_start(struct mmu_notifier_range *r); 220 + extern void __mmu_notifier_invalidate_range_end(struct mmu_notifier_range *r, 228 221 bool only_end); 229 222 extern void __mmu_notifier_invalidate_range(struct mm_struct *mm, 230 223 unsigned long start, unsigned long end); ··· 265 264 __mmu_notifier_change_pte(mm, address, pte); 266 265 } 267 266 268 - static inline void mmu_notifier_invalidate_range_start(struct mm_struct *mm, 269 - unsigned long start, unsigned long end) 267 + static inline void 268 + mmu_notifier_invalidate_range_start(struct mmu_notifier_range *range) 270 269 { 271 - if (mm_has_notifiers(mm)) 272 - __mmu_notifier_invalidate_range_start(mm, start, end, true); 270 + if (mm_has_notifiers(range->mm)) { 271 + range->blockable = true;
272 + __mmu_notifier_invalidate_range_start(range); 273 + } 273 274 } 274 275 275 - static inline int mmu_notifier_invalidate_range_start_nonblock(struct mm_struct *mm, 276 - unsigned long start, unsigned long end) 276 + static inline int 277 + mmu_notifier_invalidate_range_start_nonblock(struct mmu_notifier_range *range) 277 278 { 278 - if (mm_has_notifiers(mm)) 279 - return __mmu_notifier_invalidate_range_start(mm, start, end, false); 279 + if (mm_has_notifiers(range->mm)) { 280 + range->blockable = false; 281 + return __mmu_notifier_invalidate_range_start(range); 282 + } 280 283 return 0; 281 284 } 282 285 283 - static inline void mmu_notifier_invalidate_range_end(struct mm_struct *mm, 284 - unsigned long start, unsigned long end) 286 + static inline void 287 + mmu_notifier_invalidate_range_end(struct mmu_notifier_range *range) 285 288 { 286 - if (mm_has_notifiers(mm)) 287 - __mmu_notifier_invalidate_range_end(mm, start, end, false); 289 + if (mm_has_notifiers(range->mm)) 290 + __mmu_notifier_invalidate_range_end(range, false); 288 291 } 289 292 290 - static inline void mmu_notifier_invalidate_range_only_end(struct mm_struct *mm, 291 - unsigned long start, unsigned long end) 293 + static inline void 294 + mmu_notifier_invalidate_range_only_end(struct mmu_notifier_range *range) 292 295 { 293 - if (mm_has_notifiers(mm)) 294 - __mmu_notifier_invalidate_range_end(mm, start, end, true); 296 + if (mm_has_notifiers(range->mm)) 297 + __mmu_notifier_invalidate_range_end(range, true); 295 298 } 296 299 297 300 static inline void mmu_notifier_invalidate_range(struct mm_struct *mm, ··· 314 309 { 315 310 if (mm_has_notifiers(mm)) 316 311 __mmu_notifier_mm_destroy(mm); 312 + } 313 + 314 + 315 + static inline void mmu_notifier_range_init(struct mmu_notifier_range *range, 316 + struct mm_struct *mm, 317 + unsigned long start, 318 + unsigned long end) 319 + { 320 + range->mm = mm; 321 + range->start = start; 322 + range->end = end; 317 323 } 318 324 319 325 #define ptep_clear_flush_young_notify(__vma, __address, __ptep) \
··· 436 420 437 421 extern void mmu_notifier_call_srcu(struct rcu_head *rcu, 438 422 void (*func)(struct rcu_head *rcu)); 439 - extern void mmu_notifier_synchronize(void); 440 423 441 424 #else /* CONFIG_MMU_NOTIFIER */ 425 + 426 + struct mmu_notifier_range { 427 + unsigned long start; 428 + unsigned long end; 429 + }; 430 + 431 + static inline void _mmu_notifier_range_init(struct mmu_notifier_range *range, 432 + unsigned long start, 433 + unsigned long end) 434 + { 435 + range->start = start; 436 + range->end = end; 437 + } 438 + 439 + #define mmu_notifier_range_init(range, mm, start, end) \ 440 + _mmu_notifier_range_init(range, start, end) 441 + 442 442 443 443 static inline int mm_has_notifiers(struct mm_struct *mm) 444 444 { ··· 483 451 { 484 452 } 485 453 486 - static inline void mmu_notifier_invalidate_range_start(struct mm_struct *mm, 487 - unsigned long start, unsigned long end) 454 + static inline void 455 + mmu_notifier_invalidate_range_start(struct mmu_notifier_range *range) 488 456 { 489 457 } 490 458 491 - static inline int mmu_notifier_invalidate_range_start_nonblock(struct mm_struct *mm, 492 - unsigned long start, unsigned long end) 459 + static inline int 460 + mmu_notifier_invalidate_range_start_nonblock(struct mmu_notifier_range *range) 493 461 { 494 462 return 0; 495 463 } 496 464 497 - static inline void mmu_notifier_invalidate_range_end(struct mm_struct *mm, 498 - unsigned long start, unsigned long end) 465 + static inline 466 + void mmu_notifier_invalidate_range_end(struct mmu_notifier_range *range) 499 467 { 500 468 } 501 469 502 - static inline void mmu_notifier_invalidate_range_only_end(struct mm_struct *mm, 503 - unsigned long start, unsigned long end) 470 + static inline void 471 + mmu_notifier_invalidate_range_only_end(struct mmu_notifier_range *range) 504 472 { 505 473 }
+18 -18
include/linux/mmzone.h
··· 65 65 }; 66 66 67 67 /* In mm/page_alloc.c; keep in sync also with show_migration_types() there */ 68 - extern char * const migratetype_names[MIGRATE_TYPES]; 68 + extern const char * const migratetype_names[MIGRATE_TYPES]; 69 69 70 70 #ifdef CONFIG_CMA 71 71 # define is_migrate_cma(migratetype) unlikely((migratetype) == MIGRATE_CMA) ··· 269 269 NR_WMARK 270 270 }; 271 271 272 - #define min_wmark_pages(z) (z->watermark[WMARK_MIN]) 273 - #define low_wmark_pages(z) (z->watermark[WMARK_LOW]) 274 - #define high_wmark_pages(z) (z->watermark[WMARK_HIGH]) 272 + #define min_wmark_pages(z) (z->_watermark[WMARK_MIN] + z->watermark_boost) 273 + #define low_wmark_pages(z) (z->_watermark[WMARK_LOW] + z->watermark_boost) 274 + #define high_wmark_pages(z) (z->_watermark[WMARK_HIGH] + z->watermark_boost) 275 + #define wmark_pages(z, i) (z->_watermark[i] + z->watermark_boost) 275 276 276 277 struct per_cpu_pages { 277 278 int count; /* number of pages in the list */ ··· 363 362 /* Read-mostly fields */ 364 363 365 364 /* zone watermarks, access with *_wmark_pages(zone) macros */ 366 - unsigned long watermark[NR_WMARK]; 365 + unsigned long _watermark[NR_WMARK]; 366 + unsigned long watermark_boost; 367 367 368 368 unsigned long nr_reserved_highatomic; 369 369 ··· 430 428 * Write access to present_pages at runtime should be protected by 431 429 * mem_hotplug_begin/end(). Any reader who can't tolerant drift of 432 430 * present_pages should get_online_mems() to get a stable value. 433 - * 434 - * Read access to managed_pages should be safe because it's unsigned 435 - * long. Write access to zone->managed_pages and totalram_pages are 436 - * protected by managed_page_count_lock at runtime. Idealy only 437 - * adjust_managed_page_count() should be used instead of directly 438 - * touching zone->managed_pages and totalram_pages. 
439 431 */ 440 - unsigned long managed_pages; 432 + atomic_long_t managed_pages; 441 433 unsigned long spanned_pages; 442 434 unsigned long present_pages; 443 435 ··· 519 523 */ 520 524 PGDAT_RECLAIM_LOCKED, /* prevents concurrent reclaim */ 521 525 }; 526 + 527 + static inline unsigned long zone_managed_pages(struct zone *zone) 528 + { 529 + return (unsigned long)atomic_long_read(&zone->managed_pages); 530 + } 522 531 523 532 static inline unsigned long zone_end_pfn(const struct zone *zone) 524 533 { ··· 636 635 #endif 637 636 #if defined(CONFIG_MEMORY_HOTPLUG) || defined(CONFIG_DEFERRED_STRUCT_PAGE_INIT) 638 637 /* 639 - * Must be held any time you expect node_start_pfn, node_present_pages 640 - * or node_spanned_pages stay constant. Holding this will also 641 - * guarantee that any pfn_valid() stays that way. 638 + * Must be held any time you expect node_start_pfn, 639 + * node_present_pages, node_spanned_pages or nr_zones to stay constant. 642 640 * 643 641 * pgdat_resize_lock() and pgdat_resize_unlock() are provided to 644 642 * manipulate node_size_lock without checking for CONFIG_MEMORY_HOTPLUG ··· 691 691 * is the first PFN that needs to be initialised. 
692 692 */ 693 693 unsigned long first_deferred_pfn; 694 - /* Number of non-deferred pages */ 695 - unsigned long static_init_pgcnt; 696 694 #endif /* CONFIG_DEFERRED_STRUCT_PAGE_INIT */ 697 695 698 696 #ifdef CONFIG_TRANSPARENT_HUGEPAGE ··· 818 820 */ 819 821 static inline bool managed_zone(struct zone *zone) 820 822 { 821 - return zone->managed_pages; 823 + return zone_managed_pages(zone); 822 824 } 823 825 824 826 /* Returns true if a zone has memory */ ··· 887 889 /* These two functions are used to setup the per zone pages min values */ 888 890 struct ctl_table; 889 891 int min_free_kbytes_sysctl_handler(struct ctl_table *, int, 892 + void __user *, size_t *, loff_t *); 893 + int watermark_boost_factor_sysctl_handler(struct ctl_table *, int, 890 894 void __user *, size_t *, loff_t *); 891 895 int watermark_scale_factor_sysctl_handler(struct ctl_table *, int, 892 896 void __user *, size_t *, loff_t *);
+10
include/linux/oom.h
··· 15 15 struct mem_cgroup; 16 16 struct task_struct; 17 17 18 + enum oom_constraint { 19 + CONSTRAINT_NONE, 20 + CONSTRAINT_CPUSET, 21 + CONSTRAINT_MEMORY_POLICY, 22 + CONSTRAINT_MEMCG, 23 + }; 24 + 18 25 /* 19 26 * Details of the page allocation that triggered the oom killer that are used to 20 27 * determine what should be killed. ··· 49 42 unsigned long totalpages; 50 43 struct task_struct *chosen; 51 44 unsigned long chosen_points; 45 + 46 + /* Used to print the constraint info. */ 47 + enum oom_constraint constraint; 52 48 }; 53 49 54 50 extern struct mutex oom_lock;
+10
include/linux/page-flags-layout.h
··· 82 82 #define LAST_CPUPID_WIDTH 0 83 83 #endif 84 84 85 + #ifdef CONFIG_KASAN_SW_TAGS 86 + #define KASAN_TAG_WIDTH 8 87 + #if SECTIONS_WIDTH+NODES_WIDTH+ZONES_WIDTH+LAST_CPUPID_WIDTH+KASAN_TAG_WIDTH \ 88 + > BITS_PER_LONG - NR_PAGEFLAGS 89 + #error "KASAN: not enough bits in page flags for tag" 90 + #endif 91 + #else 92 + #define KASAN_TAG_WIDTH 0 93 + #endif 94 + 85 95 /* 86 96 * We are going to use the flags for the page to node mapping if its in 87 97 * there. This includes the case where there is no node, so it is implicit.
+6
include/linux/page-flags.h
··· 669 669 670 670 #define PAGE_TYPE_BASE 0xf0000000 671 671 /* Reserve 0x0000007f to catch underflows of page_mapcount */ 672 + #define PAGE_MAPCOUNT_RESERVE -128 672 673 #define PG_buddy 0x00000080 673 674 #define PG_balloon 0x00000100 674 675 #define PG_kmemcg 0x00000200 ··· 677 676 678 677 #define PageType(page, flag) \ 679 678 ((page->page_type & (PAGE_TYPE_BASE | flag)) == PAGE_TYPE_BASE) 679 + 680 + static inline int page_has_type(struct page *page) 681 + { 682 + return (int)page->page_type < PAGE_MAPCOUNT_RESERVE; 683 + } 680 684 681 685 #define PAGE_TYPE_OPS(uname, lname) \ 682 686 static __always_inline int Page##uname(struct page *page) \
+9 -2
include/linux/page-isolation.h
··· 30 30 } 31 31 #endif 32 32 33 + #define SKIP_HWPOISON 0x1 34 + #define REPORT_FAILURE 0x2 35 + 33 36 bool has_unmovable_pages(struct zone *zone, struct page *page, int count, 34 - int migratetype, bool skip_hwpoisoned_pages); 37 + int migratetype, int flags); 35 38 void set_pageblock_migratetype(struct page *page, int migratetype); 36 39 int move_freepages_block(struct zone *zone, struct page *page, 37 40 int migratetype, int *num_movable); ··· 47 44 * For isolating all pages in the range finally, the caller have to 48 45 * free all pages in the range. test_page_isolated() can be used for 49 46 * test it. 47 + * 48 + * The following flags are allowed (they can be combined in a bit mask) 49 + * SKIP_HWPOISON - ignore hwpoison pages 50 + * REPORT_FAILURE - report details about the failure to isolate the range 50 51 */ 51 52 int 52 53 start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn, 53 - unsigned migratetype, bool skip_hwpoisoned_pages); 54 + unsigned migratetype, int flags); 54 55 55 56 /* 56 57 * Changes MIGRATE_ISOLATE to MIGRATE_MOVABLE.
+2 -1
include/linux/pageblock-flags.h
··· 25 25 26 26 #include <linux/types.h> 27 27 28 + #define PB_migratetype_bits 3 28 29 /* Bit indices that affect a whole block of pages */ 29 30 enum pageblock_bits { 30 31 PB_migrate, 31 - PB_migrate_end = PB_migrate + 3 - 1, 32 + PB_migrate_end = PB_migrate + PB_migratetype_bits - 1, 32 33 /* 3 bits required for migrate types */ 33 34 PB_migrate_skip,/* If set the block is skipped by compaction */ 34 35
+2
include/linux/pagemap.h
··· 537 537 return wait_on_page_bit_killable(compound_head(page), PG_locked); 538 538 } 539 539 540 + extern void put_and_wait_on_page_locked(struct page *page); 541 + 540 542 /* 541 543 * Wait for a page to complete writeback 542 544 */
+14 -14
include/linux/slab.h
··· 314 314 315 315 static __always_inline enum kmalloc_cache_type kmalloc_type(gfp_t flags) 316 316 { 317 - int is_dma = 0; 318 - int type_dma = 0; 319 - int is_reclaimable; 320 - 321 317 #ifdef CONFIG_ZONE_DMA 322 - is_dma = !!(flags & __GFP_DMA); 323 - type_dma = is_dma * KMALLOC_DMA; 324 - #endif 325 - 326 - is_reclaimable = !!(flags & __GFP_RECLAIMABLE); 318 + /* 319 + * The most common case is KMALLOC_NORMAL, so test for it 320 + * with a single branch for both flags. 321 + */ 322 + if (likely((flags & (__GFP_DMA | __GFP_RECLAIMABLE)) == 0)) 323 + return KMALLOC_NORMAL; 327 324 328 325 /* 329 - * If an allocation is both __GFP_DMA and __GFP_RECLAIMABLE, return 330 - * KMALLOC_DMA and effectively ignore __GFP_RECLAIMABLE 326 + * At least one of the flags has to be set. If both are, __GFP_DMA 327 + * is more important. 331 328 */ 332 - return type_dma + (is_reclaimable & !is_dma) * KMALLOC_RECLAIM; 329 + return flags & __GFP_DMA ? KMALLOC_DMA : KMALLOC_RECLAIM; 330 + #else 331 + return flags & __GFP_RECLAIMABLE ? KMALLOC_RECLAIM : KMALLOC_NORMAL; 332 + #endif 333 333 } 334 334 335 335 /* ··· 444 444 { 445 445 void *ret = kmem_cache_alloc(s, flags); 446 446 447 - kasan_kmalloc(s, ret, size, flags); 447 + ret = kasan_kmalloc(s, ret, size, flags); 448 448 return ret; 449 449 } 450 450 ··· 455 455 { 456 456 void *ret = kmem_cache_alloc_node(s, gfpflags, node); 457 457 458 - kasan_kmalloc(s, ret, size, gfpflags); 458 + ret = kasan_kmalloc(s, ret, size, gfpflags); 459 459 return ret; 460 460 } 461 461 #endif /* CONFIG_TRACING */
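The rewritten `kmalloc_type()` tests both flags with a single masked compare so the common `KMALLOC_NORMAL` path costs one branch. The pattern in isolation, with hypothetical flag values (the real `__GFP_*` masks differ):

```c
#include <assert.h>

/* Hypothetical flag values for illustration only. */
#define GFP_DMA         0x01u
#define GFP_RECLAIMABLE 0x10u

enum cache_type { CACHE_NORMAL, CACHE_RECLAIM, CACHE_DMA };

static enum cache_type pick_cache_type(unsigned int flags)
{
    /* One AND plus one compare decides the common "neither flag set"
     * case, instead of a branch per flag. */
    if ((flags & (GFP_DMA | GFP_RECLAIMABLE)) == 0)
        return CACHE_NORMAL;

    /* At least one flag is set; DMA takes priority when both are. */
    return (flags & GFP_DMA) ? CACHE_DMA : CACHE_RECLAIM;
}
```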
+13
include/linux/slab_def.h
··· 104 104 return object; 105 105 } 106 106 107 + /* 108 + * We want to avoid an expensive divide : (offset / cache->size) 109 + * Using the fact that size is a constant for a particular cache, 110 + * we can replace (offset / cache->size) by 111 + * reciprocal_divide(offset, cache->reciprocal_buffer_size) 112 + */ 113 + static inline unsigned int obj_to_index(const struct kmem_cache *cache, 114 + const struct page *page, void *obj) 115 + { 116 + u32 offset = (obj - page->s_mem); 117 + return reciprocal_divide(offset, cache->reciprocal_buffer_size); 118 + } 119 + 107 120 #endif /* _LINUX_SLAB_DEF_H */
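The comment above `obj_to_index()` refers to the reciprocal-divide trick: since `cache->size` is constant for a given cache, the division can be replaced by a multiply and a shift using a precomputed reciprocal. A user-space sketch of the same arithmetic (helper names here are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Precompute once per constant divisor d: r = ceil(2^32 / d). */
static uint32_t reciprocal_val(uint32_t d)
{
    return (uint32_t)(((1ULL << 32) + d - 1) / d);
}

/* Then a / d becomes a multiply and a shift. Exact for the small
 * offsets that occur inside a slab page; not a general-purpose
 * replacement for division over the full u32 range. */
static uint32_t reciprocal_div(uint32_t a, uint32_t r)
{
    return (uint32_t)(((uint64_t)a * r) >> 32);
}
```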
+10 -8
include/linux/swap.h
··· 235 235 unsigned long flags; /* SWP_USED etc: see above */ 236 236 signed short prio; /* swap priority of this type */ 237 237 struct plist_node list; /* entry in swap_active_head */ 238 - struct plist_node avail_lists[MAX_NUMNODES];/* entry in swap_avail_heads */ 239 238 signed char type; /* strange name for an index */ 240 239 unsigned int max; /* extent of the swap_map */ 241 240 unsigned char *swap_map; /* vmalloc'ed array of usage counts */ ··· 275 276 */ 276 277 struct work_struct discard_work; /* discard worker */ 277 278 struct swap_cluster_list discard_clusters; /* discard clusters list */ 279 + struct plist_node avail_lists[0]; /* 280 + * entries in swap_avail_heads, one 281 + * entry per node. 282 + * Must be last as the number of the 283 + * array is nr_node_ids, which is not 284 + * a fixed value so have to allocate 285 + * dynamically. 286 + * And it has to be an array so that 287 + * plist_for_each_* can work. 288 + */ 278 289 }; 279 290 280 291 #ifdef CONFIG_64BIT ··· 319 310 } while (0) 320 311 321 312 /* linux/mm/page_alloc.c */ 322 - extern unsigned long totalram_pages; 323 313 extern unsigned long totalreserve_pages; 324 314 extern unsigned long nr_free_buffer_pages(void); 325 315 extern unsigned long nr_free_pagecache_pages(void); ··· 368 360 extern int node_reclaim_mode; 369 361 extern int sysctl_min_unmapped_ratio; 370 362 extern int sysctl_min_slab_ratio; 371 - extern int node_reclaim(struct pglist_data *, gfp_t, unsigned int); 372 363 #else 373 364 #define node_reclaim_mode 0 374 - static inline int node_reclaim(struct pglist_data *pgdat, gfp_t mask, 375 - unsigned int order) 376 - { 377 - return 0; 378 - } 379 365 #endif 380 366 381 367 extern int page_evictable(struct page *page);
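Moving `avail_lists` to the end of `swap_info_struct` turns it into a trailing array sized at allocation time (`nr_node_ids` is not a compile-time constant). A miniature of the pattern, with hypothetical type names, using the C99 flexible-array-member spelling rather than the kernel's `[0]` form:

```c
#include <assert.h>
#include <stdlib.h>

struct node_plist { int prio; };

struct swap_info {
    int type;
    struct node_plist avail_lists[];  /* one entry per NUMA node */
};

/* One allocation covers the struct header plus nr_node_ids entries,
 * so the array can still be walked like a normal member. */
static struct swap_info *alloc_swap_info(int nr_node_ids)
{
    return calloc(1, sizeof(struct swap_info) +
                     nr_node_ids * sizeof(struct node_plist));
}
```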
-5
include/linux/vmstat.h
··· 239 239 #define node_page_state(node, item) global_node_page_state(item) 240 240 #endif /* CONFIG_NUMA */ 241 241 242 - #define add_zone_page_state(__z, __i, __d) mod_zone_page_state(__z, __i, __d) 243 - #define sub_zone_page_state(__z, __i, __d) mod_zone_page_state(__z, __i, -(__d)) 244 - #define add_node_page_state(__p, __i, __d) mod_node_page_state(__p, __i, __d) 245 - #define sub_node_page_state(__p, __i, __d) mod_node_page_state(__p, __i, -(__d)) 246 - 247 242 #ifdef CONFIG_SMP 248 243 void __mod_zone_page_state(struct zone *, enum zone_stat_item item, long); 249 244 void __inc_zone_page_state(struct page *, enum zone_stat_item);
+23
include/linux/xxhash.h
··· 107 107 */ 108 108 uint64_t xxh64(const void *input, size_t length, uint64_t seed); 109 109 110 + /** 111 + * xxhash() - calculate wordsize hash of the input with a given seed 112 + * @input: The data to hash. 113 + * @length: The length of the data to hash. 114 + * @seed: The seed can be used to alter the result predictably. 115 + * 116 + * If the hash does not need to be comparable between machines with 117 + * different word sizes, this function will call whichever of xxh32() 118 + * or xxh64() is faster. 119 + * 120 + * Return: wordsize hash of the data. 121 + */ 122 + 123 + static inline unsigned long xxhash(const void *input, size_t length, 124 + uint64_t seed) 125 + { 126 + #if BITS_PER_LONG == 64 127 + return xxh64(input, length, seed); 128 + #else 129 + return xxh32(input, length, seed); 130 + #endif 131 + } 132 + 110 133 /*-**************************** 111 134 * Streaming Hash Functions 112 135 *****************************/
+1 -1
init/main.c
··· 521 521 mem_init(); 522 522 kmem_cache_init(); 523 523 pgtable_init(); 524 + debug_objects_mem_init(); 524 525 vmalloc_init(); 525 526 ioremap_huge_init(); 526 527 /* Should be run before the first non-init thread is created */ ··· 698 697 #endif 699 698 page_ext_init(); 700 699 kmemleak_init(); 701 - debug_objects_mem_init(); 702 700 setup_per_cpu_pageset(); 703 701 numa_policy_init(); 704 702 acpi_early_init();
+2 -2
kernel/cgroup/cpuset.c
··· 2666 2666 rcu_read_lock(); 2667 2667 2668 2668 cgrp = task_cs(current)->css.cgroup; 2669 - pr_info("%s cpuset=", current->comm); 2669 + pr_cont(",cpuset="); 2670 2670 pr_cont_cgroup_name(cgrp); 2671 - pr_cont(" mems_allowed=%*pbl\n", 2671 + pr_cont(",mems_allowed=%*pbl", 2672 2672 nodemask_pr_args(&current->mems_allowed)); 2673 2673 2674 2674 rcu_read_unlock();
+5 -5
kernel/events/uprobes.c
··· 171 171 .address = addr, 172 172 }; 173 173 int err; 174 - /* For mmu_notifiers */ 175 - const unsigned long mmun_start = addr; 176 - const unsigned long mmun_end = addr + PAGE_SIZE; 174 + struct mmu_notifier_range range; 177 175 struct mem_cgroup *memcg; 176 + 177 + mmu_notifier_range_init(&range, mm, addr, addr + PAGE_SIZE); 178 178 179 179 VM_BUG_ON_PAGE(PageTransHuge(old_page), old_page); 180 180 ··· 186 186 /* For try_to_free_swap() and munlock_vma_page() below */ 187 187 lock_page(old_page); 188 188 189 - mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end); 189 + mmu_notifier_invalidate_range_start(&range); 190 190 err = -EAGAIN; 191 191 if (!page_vma_mapped_walk(&pvmw)) { 192 192 mem_cgroup_cancel_charge(new_page, memcg, false); ··· 220 220 221 221 err = 0; 222 222 unlock: 223 - mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end); 223 + mmu_notifier_invalidate_range_end(&range); 224 224 unlock_page(old_page); 225 225 return err; 226 226 }
+4 -3
kernel/fork.c
··· 744 744 static void set_max_threads(unsigned int max_threads_suggested) 745 745 { 746 746 u64 threads; 747 + unsigned long nr_pages = totalram_pages(); 747 748 748 749 /* 749 750 * The number of threads shall be limited such that the thread 750 751 * structures may only consume a small part of the available memory. 751 752 */ 752 - if (fls64(totalram_pages) + fls64(PAGE_SIZE) > 64) 753 + if (fls64(nr_pages) + fls64(PAGE_SIZE) > 64) 753 754 threads = MAX_THREADS; 754 755 else 755 - threads = div64_u64((u64) totalram_pages * (u64) PAGE_SIZE, 756 + threads = div64_u64((u64) nr_pages * (u64) PAGE_SIZE, 756 757 (u64) THREAD_SIZE * 8UL); 757 758 758 759 if (threads > max_threads_suggested) ··· 841 840 { 842 841 struct task_struct *tsk; 843 842 unsigned long *stack; 844 - struct vm_struct *stack_vm_area; 843 + struct vm_struct *stack_vm_area __maybe_unused; 845 844 int err; 846 845 847 846 if (node == NUMA_NO_NODE)
+3 -2
kernel/kexec_core.c
··· 152 152 int i; 153 153 unsigned long nr_segments = image->nr_segments; 154 154 unsigned long total_pages = 0; 155 + unsigned long nr_pages = totalram_pages(); 155 156 156 157 /* 157 158 * Verify we have good destination addresses. The caller is ··· 218 217 * wasted allocating pages, which can cause a soft lockup. 219 218 */ 220 219 for (i = 0; i < nr_segments; i++) { 221 - if (PAGE_COUNT(image->segment[i].memsz) > totalram_pages / 2) 220 + if (PAGE_COUNT(image->segment[i].memsz) > nr_pages / 2) 222 221 return -EINVAL; 223 222 224 223 total_pages += PAGE_COUNT(image->segment[i].memsz); 225 224 } 226 225 227 - if (total_pages > totalram_pages / 2) 226 + if (total_pages > nr_pages / 2) 228 227 return -EINVAL; 229 228 230 229 /*
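The kexec hunk caches `totalram_pages()` once and then applies a two-level sanity check: no single segment, and no sum of segments, may claim more than half of RAM. A self-contained sketch of that check (`PAGE_COUNT` here mirrors kexec's bytes-to-pages macro; the function name is illustrative):

```c
#include <assert.h>

#define PAGE_SHIFT 12
/* bytes rounded up to whole pages, as kexec's PAGE_COUNT() does */
#define PAGE_COUNT(x) (((x) + (1UL << PAGE_SHIFT) - 1) >> PAGE_SHIFT)

static int verify_segment_sizes(const unsigned long *memsz, int nr_segments,
                                unsigned long nr_pages)
{
    unsigned long total_pages = 0;
    int i;

    for (i = 0; i < nr_segments; i++) {
        /* no single segment may claim more than half of RAM ... */
        if (PAGE_COUNT(memsz[i]) > nr_pages / 2)
            return -1;
        total_pages += PAGE_COUNT(memsz[i]);
    }
    /* ... and neither may all segments combined */
    return total_pages > nr_pages / 2 ? -1 : 0;
}
```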
+67 -36
kernel/memremap.c
··· 11 11 #include <linux/types.h> 12 12 #include <linux/wait_bit.h> 13 13 #include <linux/xarray.h> 14 + #include <linux/hmm.h> 14 15 15 16 static DEFINE_XARRAY(pgmap_array); 16 17 #define SECTION_MASK ~((1UL << PA_SECTION_SHIFT) - 1) ··· 25 24 pmd_t *pmdp) 26 25 { 27 26 struct page *page = device_private_entry_to_page(entry); 27 + struct hmm_devmem *devmem; 28 + 29 + devmem = container_of(page->pgmap, typeof(*devmem), pagemap); 28 30 29 31 /* 30 32 * The page_fault() callback must migrate page back to system memory ··· 43 39 * There is a more in-depth description of what that callback can and 44 40 * cannot do, in include/linux/memremap.h 45 41 */ 46 - return page->pgmap->page_fault(vma, addr, page, flags, pmdp); 42 + return devmem->page_fault(vma, addr, page, flags, pmdp); 47 43 } 48 44 EXPORT_SYMBOL(device_private_entry_fault); 49 45 #endif /* CONFIG_DEVICE_PRIVATE */ ··· 91 87 struct resource *res = &pgmap->res; 92 88 resource_size_t align_start, align_size; 93 89 unsigned long pfn; 90 + int nid; 94 91 92 + pgmap->kill(pgmap->ref); 95 93 for_each_device_pfn(pfn, pgmap) 96 94 put_page(pfn_to_page(pfn)); 97 - 98 - if (percpu_ref_tryget_live(pgmap->ref)) { 99 - dev_WARN(dev, "%s: page mapping is still live!\n", __func__); 100 - percpu_ref_put(pgmap->ref); 101 - } 102 95 103 96 /* pages are dead and unused, undo the arch mapping */ 104 97 align_start = res->start & ~(SECTION_SIZE - 1); 105 98 align_size = ALIGN(res->start + resource_size(res), SECTION_SIZE) 106 99 - align_start; 107 100 101 + nid = page_to_nid(pfn_to_page(align_start >> PAGE_SHIFT)); 102 + 108 103 mem_hotplug_begin(); 109 - arch_remove_memory(align_start, align_size, pgmap->altmap_valid ? 
110 - &pgmap->altmap : NULL); 111 - kasan_remove_zero_shadow(__va(align_start), align_size); 104 + if (pgmap->type == MEMORY_DEVICE_PRIVATE) { 105 + pfn = align_start >> PAGE_SHIFT; 106 + __remove_pages(page_zone(pfn_to_page(pfn)), pfn, 107 + align_size >> PAGE_SHIFT, NULL); 108 + } else { 109 + arch_remove_memory(nid, align_start, align_size, 110 + pgmap->altmap_valid ? &pgmap->altmap : NULL); 111 + kasan_remove_zero_shadow(__va(align_start), align_size); 112 + } 112 113 mem_hotplug_done(); 113 114 114 115 untrack_pfn(NULL, PHYS_PFN(align_start), align_size); ··· 125 116 /** 126 117 * devm_memremap_pages - remap and provide memmap backing for the given resource 127 118 * @dev: hosting device for @res 128 - * @pgmap: pointer to a struct dev_pgmap 119 + * @pgmap: pointer to a struct dev_pagemap 129 120 * 130 121 * Notes: 131 122 * 1/ At a minimum the res, ref and type members of @pgmap must be initialized ··· 134 125 * 2/ The altmap field may optionally be initialized, in which case altmap_valid 135 126 * must be set to true 136 127 * 137 - * 3/ pgmap.ref must be 'live' on entry and 'dead' before devm_memunmap_pages() 138 - * time (or devm release event). The expected order of events is that ref has 139 - * been through percpu_ref_kill() before devm_memremap_pages_release(). The 140 - * wait for the completion of all references being dropped and 141 - * percpu_ref_exit() must occur after devm_memremap_pages_release(). 128 + * 3/ pgmap->ref must be 'live' on entry and will be killed at 129 + * devm_memremap_pages_release() time, or if this routine fails. 142 130 * 143 131 * 4/ res is expected to be a host memory range that could feasibly be 144 132 * treated as a "System RAM" range, i.e. 
not a device mmio range, but ··· 150 144 struct dev_pagemap *conflict_pgmap; 151 145 pgprot_t pgprot = PAGE_KERNEL; 152 146 int error, nid, is_ram; 147 + 148 + if (!pgmap->ref || !pgmap->kill) 149 + return ERR_PTR(-EINVAL); 153 150 154 151 align_start = res->start & ~(SECTION_SIZE - 1); 155 152 align_size = ALIGN(res->start + resource_size(res), SECTION_SIZE) ··· 176 167 is_ram = region_intersects(align_start, align_size, 177 168 IORESOURCE_SYSTEM_RAM, IORES_DESC_NONE); 178 169 179 - if (is_ram == REGION_MIXED) { 180 - WARN_ONCE(1, "%s attempted on mixed region %pr\n", 181 - __func__, res); 182 - return ERR_PTR(-ENXIO); 170 + if (is_ram != REGION_DISJOINT) { 171 + WARN_ONCE(1, "%s attempted on %s region %pr\n", __func__, 172 + is_ram == REGION_MIXED ? "mixed" : "ram", res); 173 + error = -ENXIO; 174 + goto err_array; 183 175 } 184 - 185 - if (is_ram == REGION_INTERSECTS) 186 - return __va(res->start); 187 - 188 - if (!pgmap->ref) 189 - return ERR_PTR(-EINVAL); 190 176 191 177 pgmap->dev = dev; 192 178 ··· 200 196 goto err_pfn_remap; 201 197 202 198 mem_hotplug_begin(); 203 - error = kasan_add_zero_shadow(__va(align_start), align_size); 204 - if (error) { 205 - mem_hotplug_done(); 206 - goto err_kasan; 199 + 200 + /* 201 + * For device private memory we call add_pages() as we only need to 202 + * allocate and initialize struct page for the device memory. More- 203 + * over the device memory is un-accessible thus we do not want to 204 + * create a linear mapping for the memory like arch_add_memory() 205 + * would do. 206 + * 207 + * For all other device memory types, which are accessible by 208 + * the CPU, we do want the linear mapping and thus use 209 + * arch_add_memory(). 
210 + */ 211 + if (pgmap->type == MEMORY_DEVICE_PRIVATE) { 212 + error = add_pages(nid, align_start >> PAGE_SHIFT, 213 + align_size >> PAGE_SHIFT, NULL, false); 214 + } else { 215 + error = kasan_add_zero_shadow(__va(align_start), align_size); 216 + if (error) { 217 + mem_hotplug_done(); 218 + goto err_kasan; 219 + } 220 + 221 + error = arch_add_memory(nid, align_start, align_size, altmap, 222 + false); 207 223 } 208 224 209 - error = arch_add_memory(nid, align_start, align_size, altmap, false); 210 - if (!error) 211 - move_pfn_range_to_zone(&NODE_DATA(nid)->node_zones[ZONE_DEVICE], 212 - align_start >> PAGE_SHIFT, 213 - align_size >> PAGE_SHIFT, altmap); 225 + if (!error) { 226 + struct zone *zone; 227 + 228 + zone = &NODE_DATA(nid)->node_zones[ZONE_DEVICE]; 229 + move_pfn_range_to_zone(zone, align_start >> PAGE_SHIFT, 230 + align_size >> PAGE_SHIFT, altmap); 231 + } 232 + 214 233 mem_hotplug_done(); 215 234 if (error) 216 235 goto err_add_memory; ··· 247 220 align_size >> PAGE_SHIFT, pgmap); 248 221 percpu_ref_get_many(pgmap->ref, pfn_end(pgmap) - pfn_first(pgmap)); 249 222 250 - devm_add_action(dev, devm_memremap_pages_release, pgmap); 223 + error = devm_add_action_or_reset(dev, devm_memremap_pages_release, 224 + pgmap); 225 + if (error) 226 + return ERR_PTR(error); 251 227 252 228 return __va(res->start); 253 229 ··· 261 231 err_pfn_remap: 262 232 pgmap_array_delete(res); 263 233 err_array: 234 + pgmap->kill(pgmap->ref); 264 235 return ERR_PTR(error); 265 236 } 266 - EXPORT_SYMBOL(devm_memremap_pages); 237 + EXPORT_SYMBOL_GPL(devm_memremap_pages); 267 238 268 239 unsigned long vmem_altmap_offset(struct vmem_altmap *altmap) 269 240 {
+1 -1
kernel/power/snapshot.c
··· 105 105 106 106 void __init hibernate_image_size_init(void) 107 107 { 108 - image_size = ((totalram_pages * 2) / 5) * PAGE_SIZE; 108 + image_size = ((totalram_pages() * 2) / 5) * PAGE_SIZE; 109 109 } 110 110 111 111 /*
+15
kernel/resource.c
··· 1256 1256 continue; 1257 1257 } 1258 1258 1259 + /* 1260 + * All memory regions added from memory-hotplug path have the 1261 + * flag IORESOURCE_SYSTEM_RAM. If the resource does not have 1262 + * this flag, we know that we are dealing with a resource coming 1263 + * from HMM/devm. HMM/devm use another mechanism to add/release 1264 + * a resource. This goes via devm_request_mem_region and 1265 + * devm_release_mem_region. 1266 + * HMM/devm take care to release their resources when they want, 1267 + * so if we are dealing with them, let us just back off here. 1268 + */ 1269 + if (!(res->flags & IORESOURCE_SYSRAM)) { 1270 + ret = 0; 1271 + break; 1272 + } 1273 + 1259 1274 if (!(res->flags & IORESOURCE_MEM)) 1260 1275 break; 1261 1276
+8
kernel/sysctl.c
··· 1463 1463 .extra1 = &zero, 1464 1464 }, 1465 1465 { 1466 + .procname = "watermark_boost_factor", 1467 + .data = &watermark_boost_factor, 1468 + .maxlen = sizeof(watermark_boost_factor), 1469 + .mode = 0644, 1470 + .proc_handler = watermark_boost_factor_sysctl_handler, 1471 + .extra1 = &zero, 1472 + }, 1473 + { 1466 1474 .procname = "watermark_scale_factor", 1467 1475 .data = &watermark_scale_factor, 1468 1476 .maxlen = sizeof(watermark_scale_factor),
+15
lib/Kconfig.debug
··· 593 593 Say Y here to disable kmemleak by default. It can then be enabled 594 594 on the command line via kmemleak=on. 595 595 596 + config DEBUG_KMEMLEAK_AUTO_SCAN 597 + bool "Enable kmemleak auto scan thread on boot up" 598 + default y 599 + depends on DEBUG_KMEMLEAK 600 + help 601 + Depending on the cpu, kmemleak scan may be cpu intensive and can 602 + stall user tasks at times. This option enables/disables automatic 603 + kmemleak scan at boot up. 604 + 605 + Say N here to disable kmemleak auto scan thread to stop automatic 606 + scanning. Disabling this option disables automatic reporting of 607 + memory leaks. 608 + 609 + If unsure, say Y. 610 + 596 611 config DEBUG_STACK_USAGE 597 612 bool "Stack utilization instrumentation" 598 613 depends on DEBUG_KERNEL && !IA64
+77 -23
lib/Kconfig.kasan
··· 1 + # This config refers to the generic KASAN mode. 1 2 config HAVE_ARCH_KASAN 2 3 bool 3 4 4 - if HAVE_ARCH_KASAN 5 + config HAVE_ARCH_KASAN_SW_TAGS 6 + bool 7 + 8 + config CC_HAS_KASAN_GENERIC 9 + def_bool $(cc-option, -fsanitize=kernel-address) 10 + 11 + config CC_HAS_KASAN_SW_TAGS 12 + def_bool $(cc-option, -fsanitize=kernel-hwaddress) 5 13 6 14 config KASAN 7 - bool "KASan: runtime memory debugger" 15 + bool "KASAN: runtime memory debugger" 16 + depends on (HAVE_ARCH_KASAN && CC_HAS_KASAN_GENERIC) || \ 17 + (HAVE_ARCH_KASAN_SW_TAGS && CC_HAS_KASAN_SW_TAGS) 18 + depends on (SLUB && SYSFS) || (SLAB && !DEBUG_SLAB) 19 + help 20 + Enables KASAN (KernelAddressSANitizer) - runtime memory debugger, 21 + designed to find out-of-bounds accesses and use-after-free bugs. 22 + See Documentation/dev-tools/kasan.rst for details. 23 + 24 + choice 25 + prompt "KASAN mode" 26 + depends on KASAN 27 + default KASAN_GENERIC 28 + help 29 + KASAN has two modes: generic KASAN (similar to userspace ASan, 30 + x86_64/arm64/xtensa, enabled with CONFIG_KASAN_GENERIC) and 31 + software tag-based KASAN (a version based on software memory 32 + tagging, arm64 only, similar to userspace HWASan, enabled with 33 + CONFIG_KASAN_SW_TAGS). 34 + Both generic and tag-based KASAN are strictly debugging features. 35 + 36 + config KASAN_GENERIC 37 + bool "Generic mode" 38 + depends on HAVE_ARCH_KASAN && CC_HAS_KASAN_GENERIC 8 39 depends on (SLUB && SYSFS) || (SLAB && !DEBUG_SLAB) 9 40 select SLUB_DEBUG if SLUB 10 41 select CONSTRUCTORS 11 42 select STACKDEPOT 12 43 help 13 - Enables kernel address sanitizer - runtime memory debugger, 14 - designed to find out-of-bounds accesses and use-after-free bugs. 15 - This is strictly a debugging feature and it requires a gcc version 16 - of 4.9.2 or later. Detection of out of bounds accesses to stack or 17 - global variables requires gcc 5.0 or later. 18 - This feature consumes about 1/8 of available memory and brings about 19 - ~x3 performance slowdown. 
44 + Enables generic KASAN mode. 45 + Supported in both GCC and Clang. With GCC it requires version 4.9.2 46 + or later for basic support and version 5.0 or later for detection of 47 + out-of-bounds accesses for stack and global variables and for inline 48 + instrumentation mode (CONFIG_KASAN_INLINE). With Clang it requires 49 + version 3.7.0 or later and it doesn't support detection of 50 + out-of-bounds accesses for global variables yet. 51 + This mode consumes about 1/8th of available memory at kernel start 52 + and introduces an overhead of ~x1.5 for the rest of the allocations. 53 + The performance slowdown is ~x3. 20 54 For better error detection enable CONFIG_STACKTRACE. 21 - Currently CONFIG_KASAN doesn't work with CONFIG_DEBUG_SLAB 55 + Currently CONFIG_KASAN_GENERIC doesn't work with CONFIG_DEBUG_SLAB 22 56 (the resulting kernel does not boot). 23 57 24 - config KASAN_EXTRA 25 - bool "KAsan: extra checks" 26 - depends on KASAN && DEBUG_KERNEL && !COMPILE_TEST 58 + config KASAN_SW_TAGS 59 + bool "Software tag-based mode" 60 + depends on HAVE_ARCH_KASAN_SW_TAGS && CC_HAS_KASAN_SW_TAGS 61 + depends on (SLUB && SYSFS) || (SLAB && !DEBUG_SLAB) 62 + select SLUB_DEBUG if SLUB 63 + select CONSTRUCTORS 64 + select STACKDEPOT 27 65 help 28 - This enables further checks in the kernel address sanitizer, for now 29 - it only includes the address-use-after-scope check that can lead 30 - to excessive kernel stack usage, frame size warnings and longer 31 - compile time. 32 - https://gcc.gnu.org/bugzilla/show_bug.cgi?id=81715 has more 66 + Enables software tag-based KASAN mode. 67 + This mode requires Top Byte Ignore support by the CPU and therefore 68 + is only supported for arm64. 69 + This mode requires Clang version 7.0.0 or later. 70 + This mode consumes about 1/16th of available memory at kernel start 71 + and introduces an overhead of ~20% for the rest of the allocations. 
72 + This mode may potentially introduce problems relating to pointer 73 + casting and comparison, as it embeds tags into the top byte of each 74 + pointer. 75 + For better error detection enable CONFIG_STACKTRACE. 76 + Currently CONFIG_KASAN_SW_TAGS doesn't work with CONFIG_DEBUG_SLAB 77 + (the resulting kernel does not boot). 33 78 79 + endchoice 80 + 81 + config KASAN_EXTRA 82 + bool "KASAN: extra checks" 83 + depends on KASAN_GENERIC && DEBUG_KERNEL && !COMPILE_TEST 84 + help 85 + This enables further checks in generic KASAN, for now it only 86 + includes the address-use-after-scope check that can lead to 87 + excessive kernel stack usage, frame size warnings and longer 88 + compile time. 89 + See https://gcc.gnu.org/bugzilla/show_bug.cgi?id=81715 34 90 35 91 choice 36 92 prompt "Instrumentation type" ··· 109 53 memory accesses. This is faster than outline (in some workloads 110 54 it gives about x2 boost over outline instrumentation), but 111 55 make kernel's .text size much bigger. 112 - This requires a gcc version of 5.0 or later. 56 + For CONFIG_KASAN_GENERIC this requires GCC 5.0 or later. 113 57 114 58 endchoice 115 59 ··· 123 67 4-level paging instead. 124 68 125 69 config TEST_KASAN 126 - tristate "Module for testing kasan for bug detection" 70 + tristate "Module for testing KASAN for bug detection" 127 71 depends on m && KASAN 128 72 help 129 73 This is a test module doing various nasty things like 130 74 out of bounds accesses, use after free. It is useful for testing 131 - kernel debugging features like kernel address sanitizer. 132 - 133 - endif 75 + kernel debugging features like KASAN.
+3 -5
lib/debugobjects.c
··· 1131 1131 } 1132 1132 1133 1133 /* 1134 - * When debug_objects_mem_init() is called we know that only 1135 - * one CPU is up, so disabling interrupts is enough 1136 - * protection. This avoids the lockdep hell of lock ordering. 1134 + * debug_objects_mem_init() is now called early that only one CPU is up 1135 + * and interrupts have been disabled, so it is safe to replace the 1136 + * active object references. 1137 1137 */ 1138 - local_irq_disable(); 1139 1138 1140 1139 /* Remove the statically allocated objects from the pool */ 1141 1140 hlist_for_each_entry_safe(obj, tmp, &obj_pool, node) ··· 1155 1156 cnt++; 1156 1157 } 1157 1158 } 1158 - local_irq_enable(); 1159 1159 1160 1160 pr_debug("%d of %d active objects replaced\n", 1161 1161 cnt, obj_pool_used);
+71 -32
lib/ioremap.c
··· 76 76 return 0; 77 77 } 78 78 79 + static int ioremap_try_huge_pmd(pmd_t *pmd, unsigned long addr, 80 + unsigned long end, phys_addr_t phys_addr, 81 + pgprot_t prot) 82 + { 83 + if (!ioremap_pmd_enabled()) 84 + return 0; 85 + 86 + if ((end - addr) != PMD_SIZE) 87 + return 0; 88 + 89 + if (!IS_ALIGNED(phys_addr, PMD_SIZE)) 90 + return 0; 91 + 92 + if (pmd_present(*pmd) && !pmd_free_pte_page(pmd, addr)) 93 + return 0; 94 + 95 + return pmd_set_huge(pmd, phys_addr, prot); 96 + } 97 + 79 98 static inline int ioremap_pmd_range(pud_t *pud, unsigned long addr, 80 99 unsigned long end, phys_addr_t phys_addr, pgprot_t prot) 81 100 { 82 101 pmd_t *pmd; 83 102 unsigned long next; 84 103 85 - phys_addr -= addr; 86 104 pmd = pmd_alloc(&init_mm, pud, addr); 87 105 if (!pmd) 88 106 return -ENOMEM; 89 107 do { 90 108 next = pmd_addr_end(addr, end); 91 109 92 - if (ioremap_pmd_enabled() && 93 - ((next - addr) == PMD_SIZE) && 94 - IS_ALIGNED(phys_addr + addr, PMD_SIZE) && 95 - pmd_free_pte_page(pmd, addr)) { 96 - if (pmd_set_huge(pmd, phys_addr + addr, prot)) 97 - continue; 98 - } 110 + if (ioremap_try_huge_pmd(pmd, addr, next, phys_addr, prot)) 111 + continue; 99 112 100 - if (ioremap_pte_range(pmd, addr, next, phys_addr + addr, prot)) 113 + if (ioremap_pte_range(pmd, addr, next, phys_addr, prot)) 101 114 return -ENOMEM; 102 - } while (pmd++, addr = next, addr != end); 115 + } while (pmd++, phys_addr += (next - addr), addr = next, addr != end); 103 116 return 0; 117 + } 118 + 119 + static int ioremap_try_huge_pud(pud_t *pud, unsigned long addr, 120 + unsigned long end, phys_addr_t phys_addr, 121 + pgprot_t prot) 122 + { 123 + if (!ioremap_pud_enabled()) 124 + return 0; 125 + 126 + if ((end - addr) != PUD_SIZE) 127 + return 0; 128 + 129 + if (!IS_ALIGNED(phys_addr, PUD_SIZE)) 130 + return 0; 131 + 132 + if (pud_present(*pud) && !pud_free_pmd_page(pud, addr)) 133 + return 0; 134 + 135 + return pud_set_huge(pud, phys_addr, prot); 104 136 } 105 137 106 138 static inline int 
ioremap_pud_range(p4d_t *p4d, unsigned long addr, ··· 141 109 pud_t *pud; 142 110 unsigned long next; 143 111 144 - phys_addr -= addr; 145 112 pud = pud_alloc(&init_mm, p4d, addr); 146 113 if (!pud) 147 114 return -ENOMEM; 148 115 do { 149 116 next = pud_addr_end(addr, end); 150 117 151 - if (ioremap_pud_enabled() && 152 - ((next - addr) == PUD_SIZE) && 153 - IS_ALIGNED(phys_addr + addr, PUD_SIZE) && 154 - pud_free_pmd_page(pud, addr)) { 155 - if (pud_set_huge(pud, phys_addr + addr, prot)) 156 - continue; 157 - } 118 + if (ioremap_try_huge_pud(pud, addr, next, phys_addr, prot)) 119 + continue; 158 120 159 - if (ioremap_pmd_range(pud, addr, next, phys_addr + addr, prot)) 121 + if (ioremap_pmd_range(pud, addr, next, phys_addr, prot)) 160 122 return -ENOMEM; 161 - } while (pud++, addr = next, addr != end); 123 + } while (pud++, phys_addr += (next - addr), addr = next, addr != end); 162 124 return 0; 125 + } 126 + 127 + static int ioremap_try_huge_p4d(p4d_t *p4d, unsigned long addr, 128 + unsigned long end, phys_addr_t phys_addr, 129 + pgprot_t prot) 130 + { 131 + if (!ioremap_p4d_enabled()) 132 + return 0; 133 + 134 + if ((end - addr) != P4D_SIZE) 135 + return 0; 136 + 137 + if (!IS_ALIGNED(phys_addr, P4D_SIZE)) 138 + return 0; 139 + 140 + if (p4d_present(*p4d) && !p4d_free_pud_page(p4d, addr)) 141 + return 0; 142 + 143 + return p4d_set_huge(p4d, phys_addr, prot); 163 144 } 164 145 165 146 static inline int ioremap_p4d_range(pgd_t *pgd, unsigned long addr, ··· 181 136 p4d_t *p4d; 182 137 unsigned long next; 183 138 184 - phys_addr -= addr; 185 139 p4d = p4d_alloc(&init_mm, pgd, addr); 186 140 if (!p4d) 187 141 return -ENOMEM; 188 142 do { 189 143 next = p4d_addr_end(addr, end); 190 144 191 - if (ioremap_p4d_enabled() && 192 - ((next - addr) == P4D_SIZE) && 193 - IS_ALIGNED(phys_addr + addr, P4D_SIZE)) { 194 - if (p4d_set_huge(p4d, phys_addr + addr, prot)) 195 - continue; 196 - } 145 + if (ioremap_try_huge_p4d(p4d, addr, next, phys_addr, prot)) 146 + continue; 197 147 
198 - if (ioremap_pud_range(p4d, addr, next, phys_addr + addr, prot)) 148 + if (ioremap_pud_range(p4d, addr, next, phys_addr, prot)) 199 149 return -ENOMEM; 200 - } while (p4d++, addr = next, addr != end); 150 + } while (p4d++, phys_addr += (next - addr), addr = next, addr != end); 201 151 return 0; 202 152 } 203 153 ··· 208 168 BUG_ON(addr >= end); 209 169 210 170 start = addr; 211 - phys_addr -= addr; 212 171 pgd = pgd_offset_k(addr); 213 172 do { 214 173 next = pgd_addr_end(addr, end); 215 - err = ioremap_p4d_range(pgd, addr, next, phys_addr+addr, prot); 174 + err = ioremap_p4d_range(pgd, addr, next, phys_addr, prot); 216 175 if (err) 217 176 break; 218 - } while (pgd++, addr = next, addr != end); 177 + } while (pgd++, phys_addr += (next - addr), addr = next, addr != end); 219 178 220 179 flush_cache_vmap(start, end); 221 180
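The ioremap rework does two things: it factors the huge-mapping preconditions into guard-clause helpers (`ioremap_try_huge_pmd()` and friends), and it replaces the `phys_addr -= addr` trick with advancing the physical cursor in lock-step with the virtual one (`phys_addr += (next - addr)`). A toy single-level model of that loop shape, with hypothetical names and sizes:

```c
#include <assert.h>
#include <stdint.h>

#define HUGE_SIZE 0x200000ULL  /* 2 MiB, stand-in for PMD_SIZE */
#define SMALL_SHIFT 12         /* 4 KiB base pages */

/* Guard-clause helper: return 0 unless every precondition for a
 * large mapping holds, in the style of ioremap_try_huge_pmd(). */
static int try_huge(uint64_t addr, uint64_t end, uint64_t phys)
{
    if ((end - addr) != HUGE_SIZE)
        return 0;
    if (phys & (HUGE_SIZE - 1))
        return 0;
    return 1;  /* a real implementation would install the entry here */
}

/* Count the page-table entries needed for [addr, end). */
static unsigned long map_range(uint64_t addr, uint64_t end, uint64_t phys)
{
    unsigned long entries = 0;

    while (addr != end) {
        uint64_t next = (addr + HUGE_SIZE) & ~(HUGE_SIZE - 1);

        if (next > end)
            next = end;
        if (try_huge(addr, next, phys))
            entries += 1;
        else
            entries += (next - addr) >> SMALL_SHIFT;
        phys += next - addr;   /* physical cursor advances with addr */
        addr = next;
    }
    return entries;
}
```

A misaligned physical address falls back to base pages (512 entries per 2 MiB), while an aligned one collapses to a single huge entry.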
+1 -4
lib/show_mem.c
··· 18 18 show_free_areas(filter, nodemask); 19 19 20 20 for_each_online_pgdat(pgdat) { 21 - unsigned long flags; 22 21 int zoneid; 23 22 24 - pgdat_resize_lock(pgdat, &flags); 25 23 for (zoneid = 0; zoneid < MAX_NR_ZONES; zoneid++) { 26 24 struct zone *zone = &pgdat->node_zones[zoneid]; 27 25 if (!populated_zone(zone)) 28 26 continue; 29 27 30 28 total += zone->present_pages; 31 - reserved += zone->present_pages - zone->managed_pages; 29 + reserved += zone->present_pages - zone_managed_pages(zone); 32 30 33 31 if (is_highmem_idx(zoneid)) 34 32 highmem += zone->present_pages; 35 33 } 36 - pgdat_resize_unlock(pgdat, &flags); 37 34 } 38 35 39 36 printk("%lu pages RAM\n", total);
+1
mm/Kconfig
··· 291 291 config KSM 292 292 bool "Enable KSM for page merging" 293 293 depends on MMU 294 + select XXHASH 294 295 help 295 296 Enable Kernel Samepage Merging: KSM periodically scans those areas 296 297 of an application's address space that an app has advised may be
+11
mm/cma.c
··· 407 407 unsigned long pfn = -1; 408 408 unsigned long start = 0; 409 409 unsigned long bitmap_maxno, bitmap_no, bitmap_count; 410 + size_t i; 410 411 struct page *page = NULL; 411 412 int ret = -ENOMEM; 412 413 ··· 466 465 } 467 466 468 467 trace_cma_alloc(pfn, page, count, align); 468 + 469 + /* 470 + * CMA can allocate multiple page blocks, which results in different 471 + * blocks being marked with different tags. Reset the tags to ignore 472 + * those page blocks. 473 + */ 474 + if (page) { 475 + for (i = 0; i < count; i++) 476 + page_kasan_tag_reset(page + i); 477 + } 469 478 470 479 if (ret && !no_warn) { 471 480 pr_err("%s: alloc failed, req-size: %zu pages, ret: %d\n",
+1 -1
mm/compaction.c
··· 1431 1431 if (is_via_compact_memory(order)) 1432 1432 return COMPACT_CONTINUE; 1433 1433 1434 - watermark = zone->watermark[alloc_flags & ALLOC_WMARK_MASK]; 1434 + watermark = wmark_pages(zone, alloc_flags & ALLOC_WMARK_MASK); 1435 1435 /* 1436 1436 * If watermarks for high-order allocation are already met, there 1437 1437 * should be no need for compaction at all.
+20 -7
mm/debug.c
··· 17 17 18 18 #include "internal.h" 19 19 20 - char *migrate_reason_names[MR_TYPES] = { 20 + const char *migrate_reason_names[MR_TYPES] = { 21 21 "compaction", 22 22 "memory_failure", 23 23 "memory_hotplug", ··· 44 44 45 45 void __dump_page(struct page *page, const char *reason) 46 46 { 47 + struct address_space *mapping = page_mapping(page); 47 48 bool page_poisoned = PagePoisoned(page); 48 49 int mapcount; 49 50 ··· 54 53 * dump_page() when detected. 55 54 */ 56 55 if (page_poisoned) { 57 - pr_emerg("page:%px is uninitialized and poisoned", page); 56 + pr_warn("page:%px is uninitialized and poisoned", page); 58 57 goto hex_only; 59 58 } 60 59 ··· 65 64 */ 66 65 mapcount = PageSlab(page) ? 0 : page_mapcount(page); 67 66 68 - pr_emerg("page:%px count:%d mapcount:%d mapping:%px index:%#lx", 67 + pr_warn("page:%px count:%d mapcount:%d mapping:%px index:%#lx", 69 68 page, page_ref_count(page), mapcount, 70 69 page->mapping, page_to_pgoff(page)); 71 70 if (PageCompound(page)) 72 71 pr_cont(" compound_mapcount: %d", compound_mapcount(page)); 73 72 pr_cont("\n"); 73 + if (PageAnon(page)) 74 + pr_warn("anon "); 75 + else if (PageKsm(page)) 76 + pr_warn("ksm "); 77 + else if (mapping) { 78 + pr_warn("%ps ", mapping->a_ops); 79 + if (mapping->host->i_dentry.first) { 80 + struct dentry *dentry; 81 + dentry = container_of(mapping->host->i_dentry.first, struct dentry, d_u.d_alias); 82 + pr_warn("name:\"%pd\" ", dentry); 83 + } 84 + } 74 85 BUILD_BUG_ON(ARRAY_SIZE(pageflag_names) != __NR_PAGEFLAGS + 1); 75 86 76 - pr_emerg("flags: %#lx(%pGp)\n", page->flags, &page->flags); 87 + pr_warn("flags: %#lx(%pGp)\n", page->flags, &page->flags); 77 88 78 89 hex_only: 79 - print_hex_dump(KERN_ALERT, "raw: ", DUMP_PREFIX_NONE, 32, 90 + print_hex_dump(KERN_WARNING, "raw: ", DUMP_PREFIX_NONE, 32, 80 91 sizeof(unsigned long), page, 81 92 sizeof(struct page), false); 82 93 83 94 if (reason) 84 - pr_alert("page dumped because: %s\n", reason); 95 + pr_warn("page dumped because: %s\n", reason); 
85 96 86 97 #ifdef CONFIG_MEMCG 87 98 if (!page_poisoned && page->mem_cgroup) 88 - pr_alert("page->mem_cgroup:%px\n", page->mem_cgroup); 99 + pr_warn("page->mem_cgroup:%px\n", page->mem_cgroup); 89 100 #endif 90 101 } 91 102
+83 -15
mm/filemap.c
··· 981 981 if (wait_page->bit_nr != key->bit_nr) 982 982 return 0; 983 983 984 - /* Stop walking if it's locked */ 984 + /* 985 + * Stop walking if it's locked. 986 + * Is this safe if put_and_wait_on_page_locked() is in use? 987 + * Yes: the waker must hold a reference to this page, and if PG_locked 988 + * has now already been set by another task, that task must also hold 989 + * a reference to the *same usage* of this page; so there is no need 990 + * to walk on to wake even the put_and_wait_on_page_locked() callers. 991 + */ 985 992 if (test_bit(key->bit_nr, &key->page->flags)) 986 993 return -1; 987 994 ··· 1056 1049 wake_up_page_bit(page, bit); 1057 1050 } 1058 1051 1052 + /* 1053 + * A choice of three behaviors for wait_on_page_bit_common(): 1054 + */ 1055 + enum behavior { 1056 + EXCLUSIVE, /* Hold ref to page and take the bit when woken, like 1057 + * __lock_page() waiting on then setting PG_locked. 1058 + */ 1059 + SHARED, /* Hold ref to page and check the bit when woken, like 1060 + * wait_on_page_writeback() waiting on PG_writeback. 1061 + */ 1062 + DROP, /* Drop ref to page before wait, no check when woken, 1063 + * like put_and_wait_on_page_locked() on PG_locked. 
1064 + */ 1065 + }; 1066 + 1059 1067 static inline int wait_on_page_bit_common(wait_queue_head_t *q, 1060 - struct page *page, int bit_nr, int state, bool lock) 1068 + struct page *page, int bit_nr, int state, enum behavior behavior) 1061 1069 { 1062 1070 struct wait_page_queue wait_page; 1063 1071 wait_queue_entry_t *wait = &wait_page.wait; 1072 + bool bit_is_set; 1064 1073 bool thrashing = false; 1074 + bool delayacct = false; 1065 1075 unsigned long pflags; 1066 1076 int ret = 0; 1067 1077 1068 1078 if (bit_nr == PG_locked && 1069 1079 !PageUptodate(page) && PageWorkingset(page)) { 1070 - if (!PageSwapBacked(page)) 1080 + if (!PageSwapBacked(page)) { 1071 1081 delayacct_thrashing_start(); 1082 + delayacct = true; 1083 + } 1072 1084 psi_memstall_enter(&pflags); 1073 1085 thrashing = true; 1074 1086 } 1075 1087 1076 1088 init_wait(wait); 1077 - wait->flags = lock ? WQ_FLAG_EXCLUSIVE : 0; 1089 + wait->flags = behavior == EXCLUSIVE ? WQ_FLAG_EXCLUSIVE : 0; 1078 1090 wait->func = wake_page_function; 1079 1091 wait_page.page = page; 1080 1092 wait_page.bit_nr = bit_nr; ··· 1110 1084 1111 1085 spin_unlock_irq(&q->lock); 1112 1086 1113 - if (likely(test_bit(bit_nr, &page->flags))) { 1114 - io_schedule(); 1115 - } 1087 + bit_is_set = test_bit(bit_nr, &page->flags); 1088 + if (behavior == DROP) 1089 + put_page(page); 1116 1090 1117 - if (lock) { 1091 + if (likely(bit_is_set)) 1092 + io_schedule(); 1093 + 1094 + if (behavior == EXCLUSIVE) { 1118 1095 if (!test_and_set_bit_lock(bit_nr, &page->flags)) 1119 1096 break; 1120 - } else { 1097 + } else if (behavior == SHARED) { 1121 1098 if (!test_bit(bit_nr, &page->flags)) 1122 1099 break; 1123 1100 } ··· 1129 1100 ret = -EINTR; 1130 1101 break; 1131 1102 } 1103 + 1104 + if (behavior == DROP) { 1105 + /* 1106 + * We can no longer safely access page->flags: 1107 + * even if CONFIG_MEMORY_HOTREMOVE is not enabled, 1108 + * there is a risk of waiting forever on a page reused 1109 + * for something that keeps it locked indefinitely. 
1110 + * But best check for -EINTR above before breaking. 1111 + */ 1112 + break; 1113 + } 1132 1114 } 1133 1115 1134 1116 finish_wait(q, wait); 1135 1117 1136 1118 if (thrashing) { 1137 - if (!PageSwapBacked(page)) 1119 + if (delayacct) 1138 1120 delayacct_thrashing_end(); 1139 1121 psi_memstall_leave(&pflags); 1140 1122 } ··· 1164 1124 void wait_on_page_bit(struct page *page, int bit_nr) 1165 1125 { 1166 1126 wait_queue_head_t *q = page_waitqueue(page); 1167 - wait_on_page_bit_common(q, page, bit_nr, TASK_UNINTERRUPTIBLE, false); 1127 + wait_on_page_bit_common(q, page, bit_nr, TASK_UNINTERRUPTIBLE, SHARED); 1168 1128 } 1169 1129 EXPORT_SYMBOL(wait_on_page_bit); 1170 1130 1171 1131 int wait_on_page_bit_killable(struct page *page, int bit_nr) 1172 1132 { 1173 1133 wait_queue_head_t *q = page_waitqueue(page); 1174 - return wait_on_page_bit_common(q, page, bit_nr, TASK_KILLABLE, false); 1134 + return wait_on_page_bit_common(q, page, bit_nr, TASK_KILLABLE, SHARED); 1175 1135 } 1176 1136 EXPORT_SYMBOL(wait_on_page_bit_killable); 1137 + 1138 + /** 1139 + * put_and_wait_on_page_locked - Drop a reference and wait for it to be unlocked 1140 + * @page: The page to wait for. 1141 + * 1142 + * The caller should hold a reference on @page. They expect the page to 1143 + * become unlocked relatively soon, but do not wish to hold up migration 1144 + * (for example) by holding the reference while waiting for the page to 1145 + * come unlocked. After this function returns, the caller should not 1146 + * dereference @page. 
1147 + */ 1148 + void put_and_wait_on_page_locked(struct page *page) 1149 + { 1150 + wait_queue_head_t *q; 1151 + 1152 + page = compound_head(page); 1153 + q = page_waitqueue(page); 1154 + wait_on_page_bit_common(q, page, PG_locked, TASK_UNINTERRUPTIBLE, DROP); 1155 + } 1177 1156 1178 1157 /** 1179 1158 * add_page_wait_queue - Add an arbitrary waiter to a page's wait queue ··· 1323 1264 { 1324 1265 struct page *page = compound_head(__page); 1325 1266 wait_queue_head_t *q = page_waitqueue(page); 1326 - wait_on_page_bit_common(q, page, PG_locked, TASK_UNINTERRUPTIBLE, true); 1267 + wait_on_page_bit_common(q, page, PG_locked, TASK_UNINTERRUPTIBLE, 1268 + EXCLUSIVE); 1327 1269 } 1328 1270 EXPORT_SYMBOL(__lock_page); 1329 1271 ··· 1332 1272 { 1333 1273 struct page *page = compound_head(__page); 1334 1274 wait_queue_head_t *q = page_waitqueue(page); 1335 - return wait_on_page_bit_common(q, page, PG_locked, TASK_KILLABLE, true); 1275 + return wait_on_page_bit_common(q, page, PG_locked, TASK_KILLABLE, 1276 + EXCLUSIVE); 1336 1277 } 1337 1278 EXPORT_SYMBOL_GPL(__lock_page_killable); 1338 1279 ··· 1601 1540 VM_BUG_ON_PAGE(page->index != offset, page); 1602 1541 } 1603 1542 1604 - if (page && (fgp_flags & FGP_ACCESSED)) 1543 + if (fgp_flags & FGP_ACCESSED) 1605 1544 mark_page_accessed(page); 1606 1545 1607 1546 no_page: ··· 2614 2553 goto next; 2615 2554 2616 2555 head = compound_head(page); 2556 + 2557 + /* 2558 + * Check for a locked page first, as a speculative 2559 + * reference may adversely influence page migration. 2560 + */ 2561 + if (PageLocked(head)) 2562 + goto next; 2617 2563 if (!page_cache_get_speculative(head)) 2618 2564 goto next; 2619 2565
+2 -3
mm/highmem.c
··· 105 105 } 106 106 #endif 107 107 108 - unsigned long totalhigh_pages __read_mostly; 109 - EXPORT_SYMBOL(totalhigh_pages); 110 - 108 + atomic_long_t _totalhigh_pages __read_mostly; 109 + EXPORT_SYMBOL(_totalhigh_pages); 111 110 112 111 EXPORT_PER_CPU_SYMBOL(__kmap_atomic_idx); 113 112
+50 -281
mm/hmm.c
··· 189 189 } 190 190 191 191 static int hmm_invalidate_range_start(struct mmu_notifier *mn, 192 - struct mm_struct *mm, 193 - unsigned long start, 194 - unsigned long end, 195 - bool blockable) 192 + const struct mmu_notifier_range *range) 196 193 { 197 194 struct hmm_update update; 198 - struct hmm *hmm = mm->hmm; 195 + struct hmm *hmm = range->mm->hmm; 199 196 200 197 VM_BUG_ON(!hmm); 201 198 202 - update.start = start; 203 - update.end = end; 199 + update.start = range->start; 200 + update.end = range->end; 204 201 update.event = HMM_UPDATE_INVALIDATE; 205 - update.blockable = blockable; 202 + update.blockable = range->blockable; 206 203 return hmm_invalidate_range(hmm, true, &update); 207 204 } 208 205 209 206 static void hmm_invalidate_range_end(struct mmu_notifier *mn, 210 - struct mm_struct *mm, 211 - unsigned long start, 212 - unsigned long end) 207 + const struct mmu_notifier_range *range) 213 208 { 214 209 struct hmm_update update; 215 - struct hmm *hmm = mm->hmm; 210 + struct hmm *hmm = range->mm->hmm; 216 211 217 212 VM_BUG_ON(!hmm); 218 213 219 - update.start = start; 220 - update.end = end; 214 + update.start = range->start; 215 + update.end = range->end; 221 216 update.event = HMM_UPDATE_INVALIDATE; 222 217 update.blockable = true; 223 218 hmm_invalidate_range(hmm, false, &update); ··· 981 986 struct hmm_devmem *devmem; 982 987 983 988 devmem = container_of(ref, struct hmm_devmem, ref); 989 + wait_for_completion(&devmem->completion); 984 990 percpu_ref_exit(ref); 985 - devm_remove_action(devmem->device, &hmm_devmem_ref_exit, data); 986 991 } 987 992 988 - static void hmm_devmem_ref_kill(void *data) 993 + static void hmm_devmem_ref_kill(struct percpu_ref *ref) 989 994 { 990 - struct percpu_ref *ref = data; 991 - struct hmm_devmem *devmem; 992 - 993 - devmem = container_of(ref, struct hmm_devmem, ref); 994 995 percpu_ref_kill(ref); 995 - wait_for_completion(&devmem->completion); 996 - devm_remove_action(devmem->device, &hmm_devmem_ref_kill, data); 997 
996 } 998 997 999 998 static int hmm_devmem_fault(struct vm_area_struct *vma, ··· 1008 1019 page->mapping = NULL; 1009 1020 1010 1021 devmem->ops->free(devmem, page); 1011 - } 1012 - 1013 - static DEFINE_MUTEX(hmm_devmem_lock); 1014 - static RADIX_TREE(hmm_devmem_radix, GFP_KERNEL); 1015 - 1016 - static void hmm_devmem_radix_release(struct resource *resource) 1017 - { 1018 - resource_size_t key; 1019 - 1020 - mutex_lock(&hmm_devmem_lock); 1021 - for (key = resource->start; 1022 - key <= resource->end; 1023 - key += PA_SECTION_SIZE) 1024 - radix_tree_delete(&hmm_devmem_radix, key >> PA_SECTION_SHIFT); 1025 - mutex_unlock(&hmm_devmem_lock); 1026 - } 1027 - 1028 - static void hmm_devmem_release(struct device *dev, void *data) 1029 - { 1030 - struct hmm_devmem *devmem = data; 1031 - struct resource *resource = devmem->resource; 1032 - unsigned long start_pfn, npages; 1033 - struct zone *zone; 1034 - struct page *page; 1035 - 1036 - if (percpu_ref_tryget_live(&devmem->ref)) { 1037 - dev_WARN(dev, "%s: page mapping is still live!\n", __func__); 1038 - percpu_ref_put(&devmem->ref); 1039 - } 1040 - 1041 - /* pages are dead and unused, undo the arch mapping */ 1042 - start_pfn = (resource->start & ~(PA_SECTION_SIZE - 1)) >> PAGE_SHIFT; 1043 - npages = ALIGN(resource_size(resource), PA_SECTION_SIZE) >> PAGE_SHIFT; 1044 - 1045 - page = pfn_to_page(start_pfn); 1046 - zone = page_zone(page); 1047 - 1048 - mem_hotplug_begin(); 1049 - if (resource->desc == IORES_DESC_DEVICE_PRIVATE_MEMORY) 1050 - __remove_pages(zone, start_pfn, npages, NULL); 1051 - else 1052 - arch_remove_memory(start_pfn << PAGE_SHIFT, 1053 - npages << PAGE_SHIFT, NULL); 1054 - mem_hotplug_done(); 1055 - 1056 - hmm_devmem_radix_release(resource); 1057 - } 1058 - 1059 - static int hmm_devmem_pages_create(struct hmm_devmem *devmem) 1060 - { 1061 - resource_size_t key, align_start, align_size, align_end; 1062 - struct device *device = devmem->device; 1063 - int ret, nid, is_ram; 1064 - 1065 - align_start = 
devmem->resource->start & ~(PA_SECTION_SIZE - 1); 1066 - align_size = ALIGN(devmem->resource->start + 1067 - resource_size(devmem->resource), 1068 - PA_SECTION_SIZE) - align_start; 1069 - 1070 - is_ram = region_intersects(align_start, align_size, 1071 - IORESOURCE_SYSTEM_RAM, 1072 - IORES_DESC_NONE); 1073 - if (is_ram == REGION_MIXED) { 1074 - WARN_ONCE(1, "%s attempted on mixed region %pr\n", 1075 - __func__, devmem->resource); 1076 - return -ENXIO; 1077 - } 1078 - if (is_ram == REGION_INTERSECTS) 1079 - return -ENXIO; 1080 - 1081 - if (devmem->resource->desc == IORES_DESC_DEVICE_PUBLIC_MEMORY) 1082 - devmem->pagemap.type = MEMORY_DEVICE_PUBLIC; 1083 - else 1084 - devmem->pagemap.type = MEMORY_DEVICE_PRIVATE; 1085 - 1086 - devmem->pagemap.res = *devmem->resource; 1087 - devmem->pagemap.page_fault = hmm_devmem_fault; 1088 - devmem->pagemap.page_free = hmm_devmem_free; 1089 - devmem->pagemap.dev = devmem->device; 1090 - devmem->pagemap.ref = &devmem->ref; 1091 - devmem->pagemap.data = devmem; 1092 - 1093 - mutex_lock(&hmm_devmem_lock); 1094 - align_end = align_start + align_size - 1; 1095 - for (key = align_start; key <= align_end; key += PA_SECTION_SIZE) { 1096 - struct hmm_devmem *dup; 1097 - 1098 - dup = radix_tree_lookup(&hmm_devmem_radix, 1099 - key >> PA_SECTION_SHIFT); 1100 - if (dup) { 1101 - dev_err(device, "%s: collides with mapping for %s\n", 1102 - __func__, dev_name(dup->device)); 1103 - mutex_unlock(&hmm_devmem_lock); 1104 - ret = -EBUSY; 1105 - goto error; 1106 - } 1107 - ret = radix_tree_insert(&hmm_devmem_radix, 1108 - key >> PA_SECTION_SHIFT, 1109 - devmem); 1110 - if (ret) { 1111 - dev_err(device, "%s: failed: %d\n", __func__, ret); 1112 - mutex_unlock(&hmm_devmem_lock); 1113 - goto error_radix; 1114 - } 1115 - } 1116 - mutex_unlock(&hmm_devmem_lock); 1117 - 1118 - nid = dev_to_node(device); 1119 - if (nid < 0) 1120 - nid = numa_mem_id(); 1121 - 1122 - mem_hotplug_begin(); 1123 - /* 1124 - * For device private memory we call add_pages() as we only 
need to 1125 - * allocate and initialize struct page for the device memory. More- 1126 - * over the device memory is un-accessible thus we do not want to 1127 - * create a linear mapping for the memory like arch_add_memory() 1128 - * would do. 1129 - * 1130 - * For device public memory, which is accesible by the CPU, we do 1131 - * want the linear mapping and thus use arch_add_memory(). 1132 - */ 1133 - if (devmem->pagemap.type == MEMORY_DEVICE_PUBLIC) 1134 - ret = arch_add_memory(nid, align_start, align_size, NULL, 1135 - false); 1136 - else 1137 - ret = add_pages(nid, align_start >> PAGE_SHIFT, 1138 - align_size >> PAGE_SHIFT, NULL, false); 1139 - if (ret) { 1140 - mem_hotplug_done(); 1141 - goto error_add_memory; 1142 - } 1143 - move_pfn_range_to_zone(&NODE_DATA(nid)->node_zones[ZONE_DEVICE], 1144 - align_start >> PAGE_SHIFT, 1145 - align_size >> PAGE_SHIFT, NULL); 1146 - mem_hotplug_done(); 1147 - 1148 - /* 1149 - * Initialization of the pages has been deferred until now in order 1150 - * to allow us to do the work while not holding the hotplug lock. 
1151 - */ 1152 - memmap_init_zone_device(&NODE_DATA(nid)->node_zones[ZONE_DEVICE], 1153 - align_start >> PAGE_SHIFT, 1154 - align_size >> PAGE_SHIFT, &devmem->pagemap); 1155 - 1156 - return 0; 1157 - 1158 - error_add_memory: 1159 - untrack_pfn(NULL, PHYS_PFN(align_start), align_size); 1160 - error_radix: 1161 - hmm_devmem_radix_release(devmem->resource); 1162 - error: 1163 - return ret; 1164 - } 1165 - 1166 - static int hmm_devmem_match(struct device *dev, void *data, void *match_data) 1167 - { 1168 - struct hmm_devmem *devmem = data; 1169 - 1170 - return devmem->resource == match_data; 1171 - } 1172 - 1173 - static void hmm_devmem_pages_remove(struct hmm_devmem *devmem) 1174 - { 1175 - devres_release(devmem->device, &hmm_devmem_release, 1176 - &hmm_devmem_match, devmem->resource); 1177 1022 } 1178 1023 1179 1024 /* ··· 1033 1210 { 1034 1211 struct hmm_devmem *devmem; 1035 1212 resource_size_t addr; 1213 + void *result; 1036 1214 int ret; 1037 1215 1038 1216 dev_pagemap_get_ops(); 1039 1217 1040 - devmem = devres_alloc_node(&hmm_devmem_release, sizeof(*devmem), 1041 - GFP_KERNEL, dev_to_node(device)); 1218 + devmem = devm_kzalloc(device, sizeof(*devmem), GFP_KERNEL); 1042 1219 if (!devmem) 1043 1220 return ERR_PTR(-ENOMEM); 1044 1221 ··· 1052 1229 ret = percpu_ref_init(&devmem->ref, &hmm_devmem_ref_release, 1053 1230 0, GFP_KERNEL); 1054 1231 if (ret) 1055 - goto error_percpu_ref; 1232 + return ERR_PTR(ret); 1056 1233 1057 - ret = devm_add_action(device, hmm_devmem_ref_exit, &devmem->ref); 1234 + ret = devm_add_action_or_reset(device, hmm_devmem_ref_exit, &devmem->ref); 1058 1235 if (ret) 1059 - goto error_devm_add_action; 1236 + return ERR_PTR(ret); 1060 1237 1061 1238 size = ALIGN(size, PA_SECTION_SIZE); 1062 1239 addr = min((unsigned long)iomem_resource.end, ··· 1076 1253 1077 1254 devmem->resource = devm_request_mem_region(device, addr, size, 1078 1255 dev_name(device)); 1079 - if (!devmem->resource) { 1080 - ret = -ENOMEM; 1081 - goto error_no_resource; 1082 - 
} 1256 + if (!devmem->resource) 1257 + return ERR_PTR(-ENOMEM); 1083 1258 break; 1084 1259 } 1085 - if (!devmem->resource) { 1086 - ret = -ERANGE; 1087 - goto error_no_resource; 1088 - } 1260 + if (!devmem->resource) 1261 + return ERR_PTR(-ERANGE); 1089 1262 1090 1263 devmem->resource->desc = IORES_DESC_DEVICE_PRIVATE_MEMORY; 1091 1264 devmem->pfn_first = devmem->resource->start >> PAGE_SHIFT; 1092 1265 devmem->pfn_last = devmem->pfn_first + 1093 1266 (resource_size(devmem->resource) >> PAGE_SHIFT); 1267 + devmem->page_fault = hmm_devmem_fault; 1094 1268 1095 - ret = hmm_devmem_pages_create(devmem); 1096 - if (ret) 1097 - goto error_pages; 1269 + devmem->pagemap.type = MEMORY_DEVICE_PRIVATE; 1270 + devmem->pagemap.res = *devmem->resource; 1271 + devmem->pagemap.page_free = hmm_devmem_free; 1272 + devmem->pagemap.altmap_valid = false; 1273 + devmem->pagemap.ref = &devmem->ref; 1274 + devmem->pagemap.data = devmem; 1275 + devmem->pagemap.kill = hmm_devmem_ref_kill; 1098 1276 1099 - devres_add(device, devmem); 1100 - 1101 - ret = devm_add_action(device, hmm_devmem_ref_kill, &devmem->ref); 1102 - if (ret) { 1103 - hmm_devmem_remove(devmem); 1104 - return ERR_PTR(ret); 1105 - } 1106 - 1277 + result = devm_memremap_pages(devmem->device, &devmem->pagemap); 1278 + if (IS_ERR(result)) 1279 + return result; 1107 1280 return devmem; 1108 - 1109 - error_pages: 1110 - devm_release_mem_region(device, devmem->resource->start, 1111 - resource_size(devmem->resource)); 1112 - error_no_resource: 1113 - error_devm_add_action: 1114 - hmm_devmem_ref_kill(&devmem->ref); 1115 - hmm_devmem_ref_exit(&devmem->ref); 1116 - error_percpu_ref: 1117 - devres_free(devmem); 1118 - return ERR_PTR(ret); 1119 1281 } 1120 - EXPORT_SYMBOL(hmm_devmem_add); 1282 + EXPORT_SYMBOL_GPL(hmm_devmem_add); 1121 1283 1122 1284 struct hmm_devmem *hmm_devmem_add_resource(const struct hmm_devmem_ops *ops, 1123 1285 struct device *device, 1124 1286 struct resource *res) 1125 1287 { 1126 1288 struct hmm_devmem *devmem; 
1289 + void *result; 1127 1290 int ret; 1128 1291 1129 1292 if (res->desc != IORES_DESC_DEVICE_PUBLIC_MEMORY) ··· 1117 1308 1118 1309 dev_pagemap_get_ops(); 1119 1310 1120 - devmem = devres_alloc_node(&hmm_devmem_release, sizeof(*devmem), 1121 - GFP_KERNEL, dev_to_node(device)); 1311 + devmem = devm_kzalloc(device, sizeof(*devmem), GFP_KERNEL); 1122 1312 if (!devmem) 1123 1313 return ERR_PTR(-ENOMEM); 1124 1314 ··· 1131 1323 ret = percpu_ref_init(&devmem->ref, &hmm_devmem_ref_release, 1132 1324 0, GFP_KERNEL); 1133 1325 if (ret) 1134 - goto error_percpu_ref; 1326 + return ERR_PTR(ret); 1135 1327 1136 - ret = devm_add_action(device, hmm_devmem_ref_exit, &devmem->ref); 1328 + ret = devm_add_action_or_reset(device, hmm_devmem_ref_exit, 1329 + &devmem->ref); 1137 1330 if (ret) 1138 - goto error_devm_add_action; 1139 - 1331 + return ERR_PTR(ret); 1140 1332 1141 1333 devmem->pfn_first = devmem->resource->start >> PAGE_SHIFT; 1142 1334 devmem->pfn_last = devmem->pfn_first + 1143 1335 (resource_size(devmem->resource) >> PAGE_SHIFT); 1336 + devmem->page_fault = hmm_devmem_fault; 1144 1337 1145 - ret = hmm_devmem_pages_create(devmem); 1146 - if (ret) 1147 - goto error_devm_add_action; 1338 + devmem->pagemap.type = MEMORY_DEVICE_PUBLIC; 1339 + devmem->pagemap.res = *devmem->resource; 1340 + devmem->pagemap.page_free = hmm_devmem_free; 1341 + devmem->pagemap.altmap_valid = false; 1342 + devmem->pagemap.ref = &devmem->ref; 1343 + devmem->pagemap.data = devmem; 1344 + devmem->pagemap.kill = hmm_devmem_ref_kill; 1148 1345 1149 - devres_add(device, devmem); 1150 - 1151 - ret = devm_add_action(device, hmm_devmem_ref_kill, &devmem->ref); 1152 - if (ret) { 1153 - hmm_devmem_remove(devmem); 1154 - return ERR_PTR(ret); 1155 - } 1156 - 1346 + result = devm_memremap_pages(devmem->device, &devmem->pagemap); 1347 + if (IS_ERR(result)) 1348 + return result; 1157 1349 return devmem; 1158 - 1159 - error_devm_add_action: 1160 - hmm_devmem_ref_kill(&devmem->ref); 1161 - 
hmm_devmem_ref_exit(&devmem->ref); 1162 - error_percpu_ref: 1163 - devres_free(devmem); 1164 - return ERR_PTR(ret); 1165 1350 } 1166 - EXPORT_SYMBOL(hmm_devmem_add_resource); 1167 - 1168 - /* 1169 - * hmm_devmem_remove() - remove device memory (kill and free ZONE_DEVICE) 1170 - * 1171 - * @devmem: hmm_devmem struct use to track and manage the ZONE_DEVICE memory 1172 - * 1173 - * This will hot-unplug memory that was hotplugged by hmm_devmem_add on behalf 1174 - * of the device driver. It will free struct page and remove the resource that 1175 - * reserved the physical address range for this device memory. 1176 - */ 1177 - void hmm_devmem_remove(struct hmm_devmem *devmem) 1178 - { 1179 - resource_size_t start, size; 1180 - struct device *device; 1181 - bool cdm = false; 1182 - 1183 - if (!devmem) 1184 - return; 1185 - 1186 - device = devmem->device; 1187 - start = devmem->resource->start; 1188 - size = resource_size(devmem->resource); 1189 - 1190 - cdm = devmem->resource->desc == IORES_DESC_DEVICE_PUBLIC_MEMORY; 1191 - hmm_devmem_ref_kill(&devmem->ref); 1192 - hmm_devmem_ref_exit(&devmem->ref); 1193 - hmm_devmem_pages_remove(devmem); 1194 - 1195 - if (!cdm) 1196 - devm_release_mem_region(device, start, size); 1197 - } 1198 - EXPORT_SYMBOL(hmm_devmem_remove); 1351 + EXPORT_SYMBOL_GPL(hmm_devmem_add_resource); 1199 1352 1200 1353 /* 1201 1354 * A device driver that wants to handle multiple devices memory through a
+39 -35
mm/huge_memory.c
··· 62 62 static atomic_t huge_zero_refcount; 63 63 struct page *huge_zero_page __read_mostly; 64 64 65 + bool transparent_hugepage_enabled(struct vm_area_struct *vma) 66 + { 67 + if (vma_is_anonymous(vma)) 68 + return __transparent_hugepage_enabled(vma); 69 + if (vma_is_shmem(vma) && shmem_huge_enabled(vma)) 70 + return __transparent_hugepage_enabled(vma); 71 + 72 + return false; 73 + } 74 + 65 75 static struct page *get_huge_zero_page(void) 66 76 { 67 77 struct page *zero_page; ··· 430 420 * where the extra memory used could hurt more than TLB overhead 431 421 * is likely to save. The admin can still enable it through /sys. 432 422 */ 433 - if (totalram_pages < (512 << (20 - PAGE_SHIFT))) { 423 + if (totalram_pages() < (512 << (20 - PAGE_SHIFT))) { 434 424 transparent_hugepage_flags = 0; 435 425 return 0; 436 426 } ··· 1144 1134 int i; 1145 1135 vm_fault_t ret = 0; 1146 1136 struct page **pages; 1147 - unsigned long mmun_start; /* For mmu_notifiers */ 1148 - unsigned long mmun_end; /* For mmu_notifiers */ 1137 + struct mmu_notifier_range range; 1149 1138 1150 1139 pages = kmalloc_array(HPAGE_PMD_NR, sizeof(struct page *), 1151 1140 GFP_KERNEL); ··· 1182 1173 cond_resched(); 1183 1174 } 1184 1175 1185 - mmun_start = haddr; 1186 - mmun_end = haddr + HPAGE_PMD_SIZE; 1187 - mmu_notifier_invalidate_range_start(vma->vm_mm, mmun_start, mmun_end); 1176 + mmu_notifier_range_init(&range, vma->vm_mm, haddr, 1177 + haddr + HPAGE_PMD_SIZE); 1178 + mmu_notifier_invalidate_range_start(&range); 1188 1179 1189 1180 vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd); 1190 1181 if (unlikely(!pmd_same(*vmf->pmd, orig_pmd))) ··· 1229 1220 * No need to double call mmu_notifier->invalidate_range() callback as 1230 1221 * the above pmdp_huge_clear_flush_notify() did already call it. 
1231 1222 */ 1232 - mmu_notifier_invalidate_range_only_end(vma->vm_mm, mmun_start, 1233 - mmun_end); 1223 + mmu_notifier_invalidate_range_only_end(&range); 1234 1224 1235 1225 ret |= VM_FAULT_WRITE; 1236 1226 put_page(page); ··· 1239 1231 1240 1232 out_free_pages: 1241 1233 spin_unlock(vmf->ptl); 1242 - mmu_notifier_invalidate_range_end(vma->vm_mm, mmun_start, mmun_end); 1234 + mmu_notifier_invalidate_range_end(&range); 1243 1235 for (i = 0; i < HPAGE_PMD_NR; i++) { 1244 1236 memcg = (void *)page_private(pages[i]); 1245 1237 set_page_private(pages[i], 0); ··· 1256 1248 struct page *page = NULL, *new_page; 1257 1249 struct mem_cgroup *memcg; 1258 1250 unsigned long haddr = vmf->address & HPAGE_PMD_MASK; 1259 - unsigned long mmun_start; /* For mmu_notifiers */ 1260 - unsigned long mmun_end; /* For mmu_notifiers */ 1251 + struct mmu_notifier_range range; 1261 1252 gfp_t huge_gfp; /* for allocation and charge */ 1262 1253 vm_fault_t ret = 0; 1263 1254 ··· 1300 1293 get_page(page); 1301 1294 spin_unlock(vmf->ptl); 1302 1295 alloc: 1303 - if (transparent_hugepage_enabled(vma) && 1296 + if (__transparent_hugepage_enabled(vma) && 1304 1297 !transparent_hugepage_debug_cow()) { 1305 1298 huge_gfp = alloc_hugepage_direct_gfpmask(vma); 1306 1299 new_page = alloc_hugepage_vma(huge_gfp, vma, haddr, HPAGE_PMD_ORDER); ··· 1345 1338 vma, HPAGE_PMD_NR); 1346 1339 __SetPageUptodate(new_page); 1347 1340 1348 - mmun_start = haddr; 1349 - mmun_end = haddr + HPAGE_PMD_SIZE; 1350 - mmu_notifier_invalidate_range_start(vma->vm_mm, mmun_start, mmun_end); 1341 + mmu_notifier_range_init(&range, vma->vm_mm, haddr, 1342 + haddr + HPAGE_PMD_SIZE); 1343 + mmu_notifier_invalidate_range_start(&range); 1351 1344 1352 1345 spin_lock(vmf->ptl); 1353 1346 if (page) ··· 1382 1375 * No need to double call mmu_notifier->invalidate_range() callback as 1383 1376 * the above pmdp_huge_clear_flush_notify() did already call it. 
1384 1377 */ 1385 - mmu_notifier_invalidate_range_only_end(vma->vm_mm, mmun_start, 1386 - mmun_end); 1378 + mmu_notifier_invalidate_range_only_end(&range); 1387 1379 out: 1388 1380 return ret; 1389 1381 out_unlock: ··· 1496 1490 if (!get_page_unless_zero(page)) 1497 1491 goto out_unlock; 1498 1492 spin_unlock(vmf->ptl); 1499 - wait_on_page_locked(page); 1500 - put_page(page); 1493 + put_and_wait_on_page_locked(page); 1501 1494 goto out; 1502 1495 } 1503 1496 ··· 1532 1527 if (!get_page_unless_zero(page)) 1533 1528 goto out_unlock; 1534 1529 spin_unlock(vmf->ptl); 1535 - wait_on_page_locked(page); 1536 - put_page(page); 1530 + put_and_wait_on_page_locked(page); 1537 1531 goto out; 1538 1532 } 1539 1533 ··· 2021 2017 unsigned long address) 2022 2018 { 2023 2019 spinlock_t *ptl; 2024 - struct mm_struct *mm = vma->vm_mm; 2025 - unsigned long haddr = address & HPAGE_PUD_MASK; 2020 + struct mmu_notifier_range range; 2026 2021 2027 - mmu_notifier_invalidate_range_start(mm, haddr, haddr + HPAGE_PUD_SIZE); 2028 - ptl = pud_lock(mm, pud); 2022 + mmu_notifier_range_init(&range, vma->vm_mm, address & HPAGE_PUD_MASK, 2023 + (address & HPAGE_PUD_MASK) + HPAGE_PUD_SIZE); 2024 + mmu_notifier_invalidate_range_start(&range); 2025 + ptl = pud_lock(vma->vm_mm, pud); 2029 2026 if (unlikely(!pud_trans_huge(*pud) && !pud_devmap(*pud))) 2030 2027 goto out; 2031 - __split_huge_pud_locked(vma, pud, haddr); 2028 + __split_huge_pud_locked(vma, pud, range.start); 2032 2029 2033 2030 out: 2034 2031 spin_unlock(ptl); ··· 2037 2032 * No need to double call mmu_notifier->invalidate_range() callback as 2038 2033 * the above pudp_huge_clear_flush_notify() did already call it. 
2039 2034 */ 2040 - mmu_notifier_invalidate_range_only_end(mm, haddr, haddr + 2041 - HPAGE_PUD_SIZE); 2035 + mmu_notifier_invalidate_range_only_end(&range); 2042 2036 } 2043 2037 #endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */ 2044 2038 ··· 2239 2235 unsigned long address, bool freeze, struct page *page) 2240 2236 { 2241 2237 spinlock_t *ptl; 2242 - struct mm_struct *mm = vma->vm_mm; 2243 - unsigned long haddr = address & HPAGE_PMD_MASK; 2238 + struct mmu_notifier_range range; 2244 2239 2245 - mmu_notifier_invalidate_range_start(mm, haddr, haddr + HPAGE_PMD_SIZE); 2246 - ptl = pmd_lock(mm, pmd); 2240 + mmu_notifier_range_init(&range, vma->vm_mm, address & HPAGE_PMD_MASK, 2241 + (address & HPAGE_PMD_MASK) + HPAGE_PMD_SIZE); 2242 + mmu_notifier_invalidate_range_start(&range); 2243 + ptl = pmd_lock(vma->vm_mm, pmd); 2247 2244 2248 2245 /* 2249 2246 * If caller asks to setup a migration entries, we need a page to check ··· 2260 2255 clear_page_mlock(page); 2261 2256 } else if (!(pmd_devmap(*pmd) || is_pmd_migration_entry(*pmd))) 2262 2257 goto out; 2263 - __split_huge_pmd_locked(vma, pmd, haddr, freeze); 2258 + __split_huge_pmd_locked(vma, pmd, range.start, freeze); 2264 2259 out: 2265 2260 spin_unlock(ptl); 2266 2261 /* ··· 2276 2271 * any further changes to individual pte will notify. So no need 2277 2272 * to call mmu_notifier->invalidate_range() 2278 2273 */ 2279 - mmu_notifier_invalidate_range_only_end(mm, haddr, haddr + 2280 - HPAGE_PMD_SIZE); 2274 + mmu_notifier_invalidate_range_only_end(&range); 2281 2275 } 2282 2276 2283 2277 void split_huge_pmd_address(struct vm_area_struct *vma, unsigned long address,
+82 -51
mm/hugetlb.c
··· 3238 3238 struct page *ptepage; 3239 3239 unsigned long addr; 3240 3240 int cow; 3241 + struct address_space *mapping = vma->vm_file->f_mapping; 3241 3242 struct hstate *h = hstate_vma(vma); 3242 3243 unsigned long sz = huge_page_size(h); 3243 - unsigned long mmun_start; /* For mmu_notifiers */ 3244 - unsigned long mmun_end; /* For mmu_notifiers */ 3244 + struct mmu_notifier_range range; 3245 3245 int ret = 0; 3246 3246 3247 3247 cow = (vma->vm_flags & (VM_SHARED | VM_MAYWRITE)) == VM_MAYWRITE; 3248 3248 3249 - mmun_start = vma->vm_start; 3250 - mmun_end = vma->vm_end; 3251 - if (cow) 3252 - mmu_notifier_invalidate_range_start(src, mmun_start, mmun_end); 3249 + if (cow) { 3250 + mmu_notifier_range_init(&range, src, vma->vm_start, 3251 + vma->vm_end); 3252 + mmu_notifier_invalidate_range_start(&range); 3253 + } else { 3254 + /* 3255 + * For shared mappings i_mmap_rwsem must be held to call 3256 + * huge_pte_alloc, otherwise the returned ptep could go 3257 + * away if part of a shared pmd and another thread calls 3258 + * huge_pmd_unshare. 
3259 + */ 3260 + i_mmap_lock_read(mapping); 3261 + } 3253 3262 3254 3263 for (addr = vma->vm_start; addr < vma->vm_end; addr += sz) { 3255 3264 spinlock_t *src_ptl, *dst_ptl; 3265 + 3256 3266 src_pte = huge_pte_offset(src, addr, sz); 3257 3267 if (!src_pte) 3258 3268 continue; 3269 + 3259 3270 dst_pte = huge_pte_alloc(dst, addr, sz); 3260 3271 if (!dst_pte) { 3261 3272 ret = -ENOMEM; ··· 3336 3325 } 3337 3326 3338 3327 if (cow) 3339 - mmu_notifier_invalidate_range_end(src, mmun_start, mmun_end); 3328 + mmu_notifier_invalidate_range_end(&range); 3329 + else 3330 + i_mmap_unlock_read(mapping); 3340 3331 3341 3332 return ret; 3342 3333 } ··· 3355 3342 struct page *page; 3356 3343 struct hstate *h = hstate_vma(vma); 3357 3344 unsigned long sz = huge_page_size(h); 3358 - unsigned long mmun_start = start; /* For mmu_notifiers */ 3359 - unsigned long mmun_end = end; /* For mmu_notifiers */ 3345 + struct mmu_notifier_range range; 3360 3346 3361 3347 WARN_ON(!is_vm_hugetlb_page(vma)); 3362 3348 BUG_ON(start & ~huge_page_mask(h)); ··· 3371 3359 /* 3372 3360 * If sharing possible, alert mmu notifiers of worst case. 
3373 3361 */ 3374 - adjust_range_if_pmd_sharing_possible(vma, &mmun_start, &mmun_end); 3375 - mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end); 3362 + mmu_notifier_range_init(&range, mm, start, end); 3363 + adjust_range_if_pmd_sharing_possible(vma, &range.start, &range.end); 3364 + mmu_notifier_invalidate_range_start(&range); 3376 3365 address = start; 3377 3366 for (; address < end; address += sz) { 3378 3367 ptep = huge_pte_offset(mm, address, sz); ··· 3441 3428 if (ref_page) 3442 3429 break; 3443 3430 } 3444 - mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end); 3431 + mmu_notifier_invalidate_range_end(&range); 3445 3432 tlb_end_vma(tlb, vma); 3446 3433 } 3447 3434 ··· 3559 3546 struct page *old_page, *new_page; 3560 3547 int outside_reserve = 0; 3561 3548 vm_fault_t ret = 0; 3562 - unsigned long mmun_start; /* For mmu_notifiers */ 3563 - unsigned long mmun_end; /* For mmu_notifiers */ 3564 3549 unsigned long haddr = address & huge_page_mask(h); 3550 + struct mmu_notifier_range range; 3565 3551 3566 3552 pte = huge_ptep_get(ptep); 3567 3553 old_page = pte_page(pte); ··· 3639 3627 __SetPageUptodate(new_page); 3640 3628 set_page_huge_active(new_page); 3641 3629 3642 - mmun_start = haddr; 3643 - mmun_end = mmun_start + huge_page_size(h); 3644 - mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end); 3630 + mmu_notifier_range_init(&range, mm, haddr, haddr + huge_page_size(h)); 3631 + mmu_notifier_invalidate_range_start(&range); 3645 3632 3646 3633 /* 3647 3634 * Retake the page table lock to check for racing updates ··· 3653 3642 3654 3643 /* Break COW */ 3655 3644 huge_ptep_clear_flush(vma, haddr, ptep); 3656 - mmu_notifier_invalidate_range(mm, mmun_start, mmun_end); 3645 + mmu_notifier_invalidate_range(mm, range.start, range.end); 3657 3646 set_huge_pte_at(mm, haddr, ptep, 3658 3647 make_huge_pte(vma, new_page, 1)); 3659 3648 page_remove_rmap(old_page, true); ··· 3662 3651 new_page = old_page; 3663 3652 } 3664 3653 spin_unlock(ptl); 
3665 - mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end); 3654 + mmu_notifier_invalidate_range_end(&range); 3666 3655 out_release_all: 3667 3656 restore_reserve_on_error(h, vma, haddr, new_page); 3668 3657 put_page(new_page); ··· 3755 3744 } 3756 3745 3757 3746 /* 3758 - * Use page lock to guard against racing truncation 3759 - * before we get page_table_lock. 3747 + * We can not race with truncation due to holding i_mmap_rwsem. 3748 + * Check once here for faults beyond end of file. 3760 3749 */ 3750 + size = i_size_read(mapping->host) >> huge_page_shift(h); 3751 + if (idx >= size) 3752 + goto out; 3753 + 3761 3754 retry: 3762 3755 page = find_lock_page(mapping, idx); 3763 3756 if (!page) { 3764 - size = i_size_read(mapping->host) >> huge_page_shift(h); 3765 - if (idx >= size) 3766 - goto out; 3767 - 3768 3757 /* 3769 3758 * Check for page in userfault range 3770 3759 */ ··· 3784 3773 }; 3785 3774 3786 3775 /* 3787 - * hugetlb_fault_mutex must be dropped before 3788 - * handling userfault. Reacquire after handling 3789 - * fault to make calling code simpler. 3776 + * hugetlb_fault_mutex and i_mmap_rwsem must be 3777 + * dropped before handling userfault. Reacquire 3778 + * after handling fault to make calling code simpler. 
3790 3779 */ 3791 3780 hash = hugetlb_fault_mutex_hash(h, mm, vma, mapping, 3792 3781 idx, haddr); 3793 3782 mutex_unlock(&hugetlb_fault_mutex_table[hash]); 3783 + i_mmap_unlock_read(mapping); 3784 + 3794 3785 ret = handle_userfault(&vmf, VM_UFFD_MISSING); 3786 + 3787 + i_mmap_lock_read(mapping); 3795 3788 mutex_lock(&hugetlb_fault_mutex_table[hash]); 3796 3789 goto out; 3797 3790 } ··· 3854 3839 } 3855 3840 3856 3841 ptl = huge_pte_lock(h, mm, ptep); 3857 - size = i_size_read(mapping->host) >> huge_page_shift(h); 3858 - if (idx >= size) 3859 - goto backout; 3860 3842 3861 3843 ret = 0; 3862 3844 if (!huge_pte_none(huge_ptep_get(ptep))) ··· 3940 3928 3941 3929 ptep = huge_pte_offset(mm, haddr, huge_page_size(h)); 3942 3930 if (ptep) { 3931 + /* 3932 + * Since we hold no locks, ptep could be stale. That is 3933 + * OK as we are only making decisions based on content and 3934 + * not actually modifying content here. 3935 + */ 3943 3936 entry = huge_ptep_get(ptep); 3944 3937 if (unlikely(is_hugetlb_entry_migration(entry))) { 3945 3938 migration_entry_wait_huge(vma, mm, ptep); ··· 3952 3935 } else if (unlikely(is_hugetlb_entry_hwpoisoned(entry))) 3953 3936 return VM_FAULT_HWPOISON_LARGE | 3954 3937 VM_FAULT_SET_HINDEX(hstate_index(h)); 3955 - } else { 3956 - ptep = huge_pte_alloc(mm, haddr, huge_page_size(h)); 3957 - if (!ptep) 3958 - return VM_FAULT_OOM; 3959 3938 } 3960 3939 3940 + /* 3941 + * Acquire i_mmap_rwsem before calling huge_pte_alloc and hold 3942 + * until finished with ptep. This serves two purposes: 3943 + * 1) It prevents huge_pmd_unshare from being called elsewhere 3944 + * and making the ptep no longer valid. 3945 + * 2) It synchronizes us with file truncation. 3946 + * 3947 + * ptep could have already been assigned via huge_pte_offset. That 3948 + * is OK, as huge_pte_alloc will return the same value unless 3949 + * something changed. 
3950 + */ 3961 3951 mapping = vma->vm_file->f_mapping; 3962 - idx = vma_hugecache_offset(h, vma, haddr); 3952 + i_mmap_lock_read(mapping); 3953 + ptep = huge_pte_alloc(mm, haddr, huge_page_size(h)); 3954 + if (!ptep) { 3955 + i_mmap_unlock_read(mapping); 3956 + return VM_FAULT_OOM; 3957 + } 3963 3958 3964 3959 /* 3965 3960 * Serialize hugepage allocation and instantiation, so that we don't 3966 3961 * get spurious allocation failures if two CPUs race to instantiate 3967 3962 * the same page in the page cache. 3968 3963 */ 3964 + idx = vma_hugecache_offset(h, vma, haddr); 3969 3965 hash = hugetlb_fault_mutex_hash(h, mm, vma, mapping, idx, haddr); 3970 3966 mutex_lock(&hugetlb_fault_mutex_table[hash]); 3971 3967 ··· 4066 4036 } 4067 4037 out_mutex: 4068 4038 mutex_unlock(&hugetlb_fault_mutex_table[hash]); 4039 + i_mmap_unlock_read(mapping); 4069 4040 /* 4070 4041 * Generally it's safe to hold refcount during waiting page lock. But 4071 4042 * here we just wait to defer the next page fault to avoid busy loop and ··· 4371 4340 pte_t pte; 4372 4341 struct hstate *h = hstate_vma(vma); 4373 4342 unsigned long pages = 0; 4374 - unsigned long f_start = start; 4375 - unsigned long f_end = end; 4376 4343 bool shared_pmd = false; 4344 + struct mmu_notifier_range range; 4377 4345 4378 4346 /* 4379 4347 * In the case of shared PMDs, the area to flush could be beyond 4380 - * start/end. Set f_start/f_end to cover the maximum possible 4348 + * start/end. Set range.start/range.end to cover the maximum possible 4381 4349 * range if PMD sharing is possible. 
4382 4350 */ 4383 - adjust_range_if_pmd_sharing_possible(vma, &f_start, &f_end); 4351 + mmu_notifier_range_init(&range, mm, start, end); 4352 + adjust_range_if_pmd_sharing_possible(vma, &range.start, &range.end); 4384 4353 4385 4354 BUG_ON(address >= end); 4386 - flush_cache_range(vma, f_start, f_end); 4355 + flush_cache_range(vma, range.start, range.end); 4387 4356 4388 - mmu_notifier_invalidate_range_start(mm, f_start, f_end); 4357 + mmu_notifier_invalidate_range_start(&range); 4389 4358 i_mmap_lock_write(vma->vm_file->f_mapping); 4390 4359 for (; address < end; address += huge_page_size(h)) { 4391 4360 spinlock_t *ptl; ··· 4436 4405 * did unshare a page of pmds, flush the range corresponding to the pud. 4437 4406 */ 4438 4407 if (shared_pmd) 4439 - flush_hugetlb_tlb_range(vma, f_start, f_end); 4408 + flush_hugetlb_tlb_range(vma, range.start, range.end); 4440 4409 else 4441 4410 flush_hugetlb_tlb_range(vma, start, end); 4442 4411 /* ··· 4446 4415 * See Documentation/vm/mmu_notifier.rst 4447 4416 */ 4448 4417 i_mmap_unlock_write(vma->vm_file->f_mapping); 4449 - mmu_notifier_invalidate_range_end(mm, f_start, f_end); 4418 + mmu_notifier_invalidate_range_end(&range); 4450 4419 4451 4420 return pages << h->order; 4452 4421 } ··· 4671 4640 * Search for a shareable pmd page for hugetlb. In any case calls pmd_alloc() 4672 4641 * and returns the corresponding pte. While this is not necessary for the 4673 4642 * !shared pmd case because we can allocate the pmd later as well, it makes the 4674 - * code much cleaner. pmd allocation is essential for the shared case because 4675 - * pud has to be populated inside the same i_mmap_rwsem section - otherwise 4676 - * racing tasks could either miss the sharing (see huge_pte_offset) or select a 4677 - * bad pmd for sharing. 4643 + * code much cleaner. 4644 + * 4645 + * This routine must be called with i_mmap_rwsem held in at least read mode. 
4646 + * For hugetlbfs, this prevents removal of any page table entries associated 4647 + * with the address space. This is important as we are setting up sharing 4648 + * based on existing page table entries (mappings). 4678 4649 */ 4679 4650 pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud) 4680 4651 { ··· 4693 4660 if (!vma_shareable(vma, addr)) 4694 4661 return (pte_t *)pmd_alloc(mm, pud, addr); 4695 4662 4696 - i_mmap_lock_write(mapping); 4697 4663 vma_interval_tree_foreach(svma, &mapping->i_mmap, idx, idx) { 4698 4664 if (svma == vma) 4699 4665 continue; ··· 4722 4690 spin_unlock(ptl); 4723 4691 out: 4724 4692 pte = (pte_t *)pmd_alloc(mm, pud, addr); 4725 - i_mmap_unlock_write(mapping); 4726 4693 return pte; 4727 4694 } 4728 4695 ··· 4732 4701 * indicated by page_count > 1, unmap is achieved by clearing pud and 4733 4702 * decrementing the ref count. If count == 1, the pte page is not shared. 4734 4703 * 4735 - * called with page table lock held. 4704 + * Called with page table lock held and i_mmap_rwsem held in write mode. 4736 4705 * 4737 4706 * returns: 1 successfully unmapped a shared pte page 4738 4707 * 0 the underlying pte page is not shared, or it is the last user
+20 -4
mm/internal.h
··· 444 444 #define NODE_RECLAIM_SOME 0 445 445 #define NODE_RECLAIM_SUCCESS 1 446 446 447 + #ifdef CONFIG_NUMA 448 + extern int node_reclaim(struct pglist_data *, gfp_t, unsigned int); 449 + #else 450 + static inline int node_reclaim(struct pglist_data *pgdat, gfp_t mask, 451 + unsigned int order) 452 + { 453 + return NODE_RECLAIM_NOSCAN; 454 + } 455 + #endif 456 + 447 457 extern int hwpoison_filter(struct page *p); 448 458 449 459 extern u32 hwpoison_filter_dev_major; ··· 490 480 #define ALLOC_OOM ALLOC_NO_WATERMARKS 491 481 #endif 492 482 493 - #define ALLOC_HARDER 0x10 /* try to alloc harder */ 494 - #define ALLOC_HIGH 0x20 /* __GFP_HIGH set */ 495 - #define ALLOC_CPUSET 0x40 /* check for correct cpuset */ 496 - #define ALLOC_CMA 0x80 /* allow allocations from CMA areas */ 483 + #define ALLOC_HARDER 0x10 /* try to alloc harder */ 484 + #define ALLOC_HIGH 0x20 /* __GFP_HIGH set */ 485 + #define ALLOC_CPUSET 0x40 /* check for correct cpuset */ 486 + #define ALLOC_CMA 0x80 /* allow allocations from CMA areas */ 487 + #ifdef CONFIG_ZONE_DMA32 488 + #define ALLOC_NOFRAGMENT 0x100 /* avoid mixing pageblock types */ 489 + #else 490 + #define ALLOC_NOFRAGMENT 0x0 491 + #endif 492 + #define ALLOC_KSWAPD 0x200 /* allow waking of kswapd */ 497 493 498 494 enum ttu_flags; 499 495 struct tlbflush_unmap_batch;
+11 -4
mm/kasan/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 2 KASAN_SANITIZE := n 3 - UBSAN_SANITIZE_kasan.o := n 3 + UBSAN_SANITIZE_common.o := n 4 + UBSAN_SANITIZE_generic.o := n 5 + UBSAN_SANITIZE_tags.o := n 4 6 KCOV_INSTRUMENT := n 5 7 6 - CFLAGS_REMOVE_kasan.o = -pg 8 + CFLAGS_REMOVE_generic.o = -pg 7 9 # Function splitter causes unnecessary splits in __asan_load1/__asan_store1 8 10 # see: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63533 9 - CFLAGS_kasan.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector) 10 11 11 - obj-y := kasan.o report.o kasan_init.o quarantine.o 12 + CFLAGS_common.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector) 13 + CFLAGS_generic.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector) 14 + CFLAGS_tags.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector) 15 + 16 + obj-$(CONFIG_KASAN) := common.o init.o report.o 17 + obj-$(CONFIG_KASAN_GENERIC) += generic.o generic_report.o quarantine.o 18 + obj-$(CONFIG_KASAN_SW_TAGS) += tags.o tags_report.o
+697
mm/kasan/common.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * This file contains common generic and tag-based KASAN code. 4 + * 5 + * Copyright (c) 2014 Samsung Electronics Co., Ltd. 6 + * Author: Andrey Ryabinin <ryabinin.a.a@gmail.com> 7 + * 8 + * Some code borrowed from https://github.com/xairy/kasan-prototype by 9 + * Andrey Konovalov <andreyknvl@gmail.com> 10 + * 11 + * This program is free software; you can redistribute it and/or modify 12 + * it under the terms of the GNU General Public License version 2 as 13 + * published by the Free Software Foundation. 14 + * 15 + */ 16 + 17 + #include <linux/export.h> 18 + #include <linux/interrupt.h> 19 + #include <linux/init.h> 20 + #include <linux/kasan.h> 21 + #include <linux/kernel.h> 22 + #include <linux/kmemleak.h> 23 + #include <linux/linkage.h> 24 + #include <linux/memblock.h> 25 + #include <linux/memory.h> 26 + #include <linux/mm.h> 27 + #include <linux/module.h> 28 + #include <linux/printk.h> 29 + #include <linux/sched.h> 30 + #include <linux/sched/task_stack.h> 31 + #include <linux/slab.h> 32 + #include <linux/stacktrace.h> 33 + #include <linux/string.h> 34 + #include <linux/types.h> 35 + #include <linux/vmalloc.h> 36 + #include <linux/bug.h> 37 + 38 + #include "kasan.h" 39 + #include "../slab.h" 40 + 41 + static inline int in_irqentry_text(unsigned long ptr) 42 + { 43 + return (ptr >= (unsigned long)&__irqentry_text_start && 44 + ptr < (unsigned long)&__irqentry_text_end) || 45 + (ptr >= (unsigned long)&__softirqentry_text_start && 46 + ptr < (unsigned long)&__softirqentry_text_end); 47 + } 48 + 49 + static inline void filter_irq_stacks(struct stack_trace *trace) 50 + { 51 + int i; 52 + 53 + if (!trace->nr_entries) 54 + return; 55 + for (i = 0; i < trace->nr_entries; i++) 56 + if (in_irqentry_text(trace->entries[i])) { 57 + /* Include the irqentry function into the stack. 
*/ 58 + trace->nr_entries = i + 1; 59 + break; 60 + } 61 + } 62 + 63 + static inline depot_stack_handle_t save_stack(gfp_t flags) 64 + { 65 + unsigned long entries[KASAN_STACK_DEPTH]; 66 + struct stack_trace trace = { 67 + .nr_entries = 0, 68 + .entries = entries, 69 + .max_entries = KASAN_STACK_DEPTH, 70 + .skip = 0 71 + }; 72 + 73 + save_stack_trace(&trace); 74 + filter_irq_stacks(&trace); 75 + if (trace.nr_entries != 0 && 76 + trace.entries[trace.nr_entries-1] == ULONG_MAX) 77 + trace.nr_entries--; 78 + 79 + return depot_save_stack(&trace, flags); 80 + } 81 + 82 + static inline void set_track(struct kasan_track *track, gfp_t flags) 83 + { 84 + track->pid = current->pid; 85 + track->stack = save_stack(flags); 86 + } 87 + 88 + void kasan_enable_current(void) 89 + { 90 + current->kasan_depth++; 91 + } 92 + 93 + void kasan_disable_current(void) 94 + { 95 + current->kasan_depth--; 96 + } 97 + 98 + void kasan_check_read(const volatile void *p, unsigned int size) 99 + { 100 + check_memory_region((unsigned long)p, size, false, _RET_IP_); 101 + } 102 + EXPORT_SYMBOL(kasan_check_read); 103 + 104 + void kasan_check_write(const volatile void *p, unsigned int size) 105 + { 106 + check_memory_region((unsigned long)p, size, true, _RET_IP_); 107 + } 108 + EXPORT_SYMBOL(kasan_check_write); 109 + 110 + #undef memset 111 + void *memset(void *addr, int c, size_t len) 112 + { 113 + check_memory_region((unsigned long)addr, len, true, _RET_IP_); 114 + 115 + return __memset(addr, c, len); 116 + } 117 + 118 + #undef memmove 119 + void *memmove(void *dest, const void *src, size_t len) 120 + { 121 + check_memory_region((unsigned long)src, len, false, _RET_IP_); 122 + check_memory_region((unsigned long)dest, len, true, _RET_IP_); 123 + 124 + return __memmove(dest, src, len); 125 + } 126 + 127 + #undef memcpy 128 + void *memcpy(void *dest, const void *src, size_t len) 129 + { 130 + check_memory_region((unsigned long)src, len, false, _RET_IP_); 131 + check_memory_region((unsigned long)dest, 
len, true, _RET_IP_); 132 + 133 + return __memcpy(dest, src, len); 134 + } 135 + 136 + /* 137 + * Poisons the shadow memory for 'size' bytes starting from 'addr'. 138 + * Memory addresses should be aligned to KASAN_SHADOW_SCALE_SIZE. 139 + */ 140 + void kasan_poison_shadow(const void *address, size_t size, u8 value) 141 + { 142 + void *shadow_start, *shadow_end; 143 + 144 + /* 145 + * Perform shadow offset calculation based on untagged address, as 146 + * some of the callers (e.g. kasan_poison_object_data) pass tagged 147 + * addresses to this function. 148 + */ 149 + address = reset_tag(address); 150 + 151 + shadow_start = kasan_mem_to_shadow(address); 152 + shadow_end = kasan_mem_to_shadow(address + size); 153 + 154 + __memset(shadow_start, value, shadow_end - shadow_start); 155 + } 156 + 157 + void kasan_unpoison_shadow(const void *address, size_t size) 158 + { 159 + u8 tag = get_tag(address); 160 + 161 + /* 162 + * Perform shadow offset calculation based on untagged address, as 163 + * some of the callers (e.g. kasan_unpoison_object_data) pass tagged 164 + * addresses to this function. 165 + */ 166 + address = reset_tag(address); 167 + 168 + kasan_poison_shadow(address, size, tag); 169 + 170 + if (size & KASAN_SHADOW_MASK) { 171 + u8 *shadow = (u8 *)kasan_mem_to_shadow(address + size); 172 + 173 + if (IS_ENABLED(CONFIG_KASAN_SW_TAGS)) 174 + *shadow = tag; 175 + else 176 + *shadow = size & KASAN_SHADOW_MASK; 177 + } 178 + } 179 + 180 + static void __kasan_unpoison_stack(struct task_struct *task, const void *sp) 181 + { 182 + void *base = task_stack_page(task); 183 + size_t size = sp - base; 184 + 185 + kasan_unpoison_shadow(base, size); 186 + } 187 + 188 + /* Unpoison the entire stack for a task. */ 189 + void kasan_unpoison_task_stack(struct task_struct *task) 190 + { 191 + __kasan_unpoison_stack(task, task_stack_page(task) + THREAD_SIZE); 192 + } 193 + 194 + /* Unpoison the stack for the current task beyond a watermark sp value. 
*/ 195 + asmlinkage void kasan_unpoison_task_stack_below(const void *watermark) 196 + { 197 + /* 198 + * Calculate the task stack base address. Avoid using 'current' 199 + * because this function is called by early resume code which hasn't 200 + * yet set up the percpu register (%gs). 201 + */ 202 + void *base = (void *)((unsigned long)watermark & ~(THREAD_SIZE - 1)); 203 + 204 + kasan_unpoison_shadow(base, watermark - base); 205 + } 206 + 207 + /* 208 + * Clear all poison for the region between the current SP and a provided 209 + * watermark value, as is sometimes required prior to hand-crafted asm function 210 + * returns in the middle of functions. 211 + */ 212 + void kasan_unpoison_stack_above_sp_to(const void *watermark) 213 + { 214 + const void *sp = __builtin_frame_address(0); 215 + size_t size = watermark - sp; 216 + 217 + if (WARN_ON(sp > watermark)) 218 + return; 219 + kasan_unpoison_shadow(sp, size); 220 + } 221 + 222 + void kasan_alloc_pages(struct page *page, unsigned int order) 223 + { 224 + u8 tag; 225 + unsigned long i; 226 + 227 + if (unlikely(PageHighMem(page))) 228 + return; 229 + 230 + tag = random_tag(); 231 + for (i = 0; i < (1 << order); i++) 232 + page_kasan_tag_set(page + i, tag); 233 + kasan_unpoison_shadow(page_address(page), PAGE_SIZE << order); 234 + } 235 + 236 + void kasan_free_pages(struct page *page, unsigned int order) 237 + { 238 + if (likely(!PageHighMem(page))) 239 + kasan_poison_shadow(page_address(page), 240 + PAGE_SIZE << order, 241 + KASAN_FREE_PAGE); 242 + } 243 + 244 + /* 245 + * Adaptive redzone policy taken from the userspace AddressSanitizer runtime. 246 + * For larger allocations larger redzones are used. 247 + */ 248 + static inline unsigned int optimal_redzone(unsigned int object_size) 249 + { 250 + if (IS_ENABLED(CONFIG_KASAN_SW_TAGS)) 251 + return 0; 252 + 253 + return 254 + object_size <= 64 - 16 ? 16 : 255 + object_size <= 128 - 32 ? 32 : 256 + object_size <= 512 - 64 ? 64 : 257 + object_size <= 4096 - 128 ? 
128 : 258 + object_size <= (1 << 14) - 256 ? 256 : 259 + object_size <= (1 << 15) - 512 ? 512 : 260 + object_size <= (1 << 16) - 1024 ? 1024 : 2048; 261 + } 262 + 263 + void kasan_cache_create(struct kmem_cache *cache, unsigned int *size, 264 + slab_flags_t *flags) 265 + { 266 + unsigned int orig_size = *size; 267 + unsigned int redzone_size; 268 + int redzone_adjust; 269 + 270 + /* Add alloc meta. */ 271 + cache->kasan_info.alloc_meta_offset = *size; 272 + *size += sizeof(struct kasan_alloc_meta); 273 + 274 + /* Add free meta. */ 275 + if (IS_ENABLED(CONFIG_KASAN_GENERIC) && 276 + (cache->flags & SLAB_TYPESAFE_BY_RCU || cache->ctor || 277 + cache->object_size < sizeof(struct kasan_free_meta))) { 278 + cache->kasan_info.free_meta_offset = *size; 279 + *size += sizeof(struct kasan_free_meta); 280 + } 281 + 282 + redzone_size = optimal_redzone(cache->object_size); 283 + redzone_adjust = redzone_size - (*size - cache->object_size); 284 + if (redzone_adjust > 0) 285 + *size += redzone_adjust; 286 + 287 + *size = min_t(unsigned int, KMALLOC_MAX_SIZE, 288 + max(*size, cache->object_size + redzone_size)); 289 + 290 + /* 291 + * If the metadata doesn't fit, don't enable KASAN at all. 292 + */ 293 + if (*size <= cache->kasan_info.alloc_meta_offset || 294 + *size <= cache->kasan_info.free_meta_offset) { 295 + cache->kasan_info.alloc_meta_offset = 0; 296 + cache->kasan_info.free_meta_offset = 0; 297 + *size = orig_size; 298 + return; 299 + } 300 + 301 + cache->align = round_up(cache->align, KASAN_SHADOW_SCALE_SIZE); 302 + 303 + *flags |= SLAB_KASAN; 304 + } 305 + 306 + size_t kasan_metadata_size(struct kmem_cache *cache) 307 + { 308 + return (cache->kasan_info.alloc_meta_offset ? 309 + sizeof(struct kasan_alloc_meta) : 0) + 310 + (cache->kasan_info.free_meta_offset ? 
311 + sizeof(struct kasan_free_meta) : 0); 312 + } 313 + 314 + struct kasan_alloc_meta *get_alloc_info(struct kmem_cache *cache, 315 + const void *object) 316 + { 317 + BUILD_BUG_ON(sizeof(struct kasan_alloc_meta) > 32); 318 + return (void *)object + cache->kasan_info.alloc_meta_offset; 319 + } 320 + 321 + struct kasan_free_meta *get_free_info(struct kmem_cache *cache, 322 + const void *object) 323 + { 324 + BUILD_BUG_ON(sizeof(struct kasan_free_meta) > 32); 325 + return (void *)object + cache->kasan_info.free_meta_offset; 326 + } 327 + 328 + void kasan_poison_slab(struct page *page) 329 + { 330 + unsigned long i; 331 + 332 + for (i = 0; i < (1 << compound_order(page)); i++) 333 + page_kasan_tag_reset(page + i); 334 + kasan_poison_shadow(page_address(page), 335 + PAGE_SIZE << compound_order(page), 336 + KASAN_KMALLOC_REDZONE); 337 + } 338 + 339 + void kasan_unpoison_object_data(struct kmem_cache *cache, void *object) 340 + { 341 + kasan_unpoison_shadow(object, cache->object_size); 342 + } 343 + 344 + void kasan_poison_object_data(struct kmem_cache *cache, void *object) 345 + { 346 + kasan_poison_shadow(object, 347 + round_up(cache->object_size, KASAN_SHADOW_SCALE_SIZE), 348 + KASAN_KMALLOC_REDZONE); 349 + } 350 + 351 + /* 352 + * Since it's desirable to only call object constructors once during slab 353 + * allocation, we preassign tags to all such objects. Also preassign tags for 354 + * SLAB_TYPESAFE_BY_RCU slabs to avoid use-after-free reports. 355 + * For SLAB allocator we can't preassign tags randomly since the freelist is 356 + * stored as an array of indexes instead of a linked list. Assign tags based 357 + * on object indexes, so that objects that are next to each other get 358 + * different tags. 359 + * After a tag is assigned, the object always gets allocated with the same tag. 
360 + * The reason is that we can't change tags for objects with constructors on 361 + * reallocation (even for non-SLAB_TYPESAFE_BY_RCU), because the constructor 362 + * code can save the pointer to the object somewhere (e.g. in the object 363 + * itself). Then if we retag it, the old saved pointer will become invalid. 364 + */ 365 + static u8 assign_tag(struct kmem_cache *cache, const void *object, bool new) 366 + { 367 + if (!cache->ctor && !(cache->flags & SLAB_TYPESAFE_BY_RCU)) 368 + return new ? KASAN_TAG_KERNEL : random_tag(); 369 + 370 + #ifdef CONFIG_SLAB 371 + return (u8)obj_to_index(cache, virt_to_page(object), (void *)object); 372 + #else 373 + return new ? random_tag() : get_tag(object); 374 + #endif 375 + } 376 + 377 + void * __must_check kasan_init_slab_obj(struct kmem_cache *cache, 378 + const void *object) 379 + { 380 + struct kasan_alloc_meta *alloc_info; 381 + 382 + if (!(cache->flags & SLAB_KASAN)) 383 + return (void *)object; 384 + 385 + alloc_info = get_alloc_info(cache, object); 386 + __memset(alloc_info, 0, sizeof(*alloc_info)); 387 + 388 + if (IS_ENABLED(CONFIG_KASAN_SW_TAGS)) 389 + object = set_tag(object, assign_tag(cache, object, true)); 390 + 391 + return (void *)object; 392 + } 393 + 394 + void * __must_check kasan_slab_alloc(struct kmem_cache *cache, void *object, 395 + gfp_t flags) 396 + { 397 + return kasan_kmalloc(cache, object, cache->object_size, flags); 398 + } 399 + 400 + static inline bool shadow_invalid(u8 tag, s8 shadow_byte) 401 + { 402 + if (IS_ENABLED(CONFIG_KASAN_GENERIC)) 403 + return shadow_byte < 0 || 404 + shadow_byte >= KASAN_SHADOW_SCALE_SIZE; 405 + else 406 + return tag != (u8)shadow_byte; 407 + } 408 + 409 + static bool __kasan_slab_free(struct kmem_cache *cache, void *object, 410 + unsigned long ip, bool quarantine) 411 + { 412 + s8 shadow_byte; 413 + u8 tag; 414 + void *tagged_object; 415 + unsigned long rounded_up_size; 416 + 417 + tag = get_tag(object); 418 + tagged_object = object; 419 + object = 
reset_tag(object); 420 + 421 + if (unlikely(nearest_obj(cache, virt_to_head_page(object), object) != 422 + object)) { 423 + kasan_report_invalid_free(tagged_object, ip); 424 + return true; 425 + } 426 + 427 + /* RCU slabs could be legally used after free within the RCU period */ 428 + if (unlikely(cache->flags & SLAB_TYPESAFE_BY_RCU)) 429 + return false; 430 + 431 + shadow_byte = READ_ONCE(*(s8 *)kasan_mem_to_shadow(object)); 432 + if (shadow_invalid(tag, shadow_byte)) { 433 + kasan_report_invalid_free(tagged_object, ip); 434 + return true; 435 + } 436 + 437 + rounded_up_size = round_up(cache->object_size, KASAN_SHADOW_SCALE_SIZE); 438 + kasan_poison_shadow(object, rounded_up_size, KASAN_KMALLOC_FREE); 439 + 440 + if ((IS_ENABLED(CONFIG_KASAN_GENERIC) && !quarantine) || 441 + unlikely(!(cache->flags & SLAB_KASAN))) 442 + return false; 443 + 444 + set_track(&get_alloc_info(cache, object)->free_track, GFP_NOWAIT); 445 + quarantine_put(get_free_info(cache, object), cache); 446 + 447 + return IS_ENABLED(CONFIG_KASAN_GENERIC); 448 + } 449 + 450 + bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip) 451 + { 452 + return __kasan_slab_free(cache, object, ip, true); 453 + } 454 + 455 + void * __must_check kasan_kmalloc(struct kmem_cache *cache, const void *object, 456 + size_t size, gfp_t flags) 457 + { 458 + unsigned long redzone_start; 459 + unsigned long redzone_end; 460 + u8 tag; 461 + 462 + if (gfpflags_allow_blocking(flags)) 463 + quarantine_reduce(); 464 + 465 + if (unlikely(object == NULL)) 466 + return NULL; 467 + 468 + redzone_start = round_up((unsigned long)(object + size), 469 + KASAN_SHADOW_SCALE_SIZE); 470 + redzone_end = round_up((unsigned long)object + cache->object_size, 471 + KASAN_SHADOW_SCALE_SIZE); 472 + 473 + if (IS_ENABLED(CONFIG_KASAN_SW_TAGS)) 474 + tag = assign_tag(cache, object, false); 475 + 476 + /* Tag is ignored in set_tag without CONFIG_KASAN_SW_TAGS */ 477 + kasan_unpoison_shadow(set_tag(object, tag), size); 478 + 
kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start, 479 + KASAN_KMALLOC_REDZONE); 480 + 481 + if (cache->flags & SLAB_KASAN) 482 + set_track(&get_alloc_info(cache, object)->alloc_track, flags); 483 + 484 + return set_tag(object, tag); 485 + } 486 + EXPORT_SYMBOL(kasan_kmalloc); 487 + 488 + void * __must_check kasan_kmalloc_large(const void *ptr, size_t size, 489 + gfp_t flags) 490 + { 491 + struct page *page; 492 + unsigned long redzone_start; 493 + unsigned long redzone_end; 494 + 495 + if (gfpflags_allow_blocking(flags)) 496 + quarantine_reduce(); 497 + 498 + if (unlikely(ptr == NULL)) 499 + return NULL; 500 + 501 + page = virt_to_page(ptr); 502 + redzone_start = round_up((unsigned long)(ptr + size), 503 + KASAN_SHADOW_SCALE_SIZE); 504 + redzone_end = (unsigned long)ptr + (PAGE_SIZE << compound_order(page)); 505 + 506 + kasan_unpoison_shadow(ptr, size); 507 + kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start, 508 + KASAN_PAGE_REDZONE); 509 + 510 + return (void *)ptr; 511 + } 512 + 513 + void * __must_check kasan_krealloc(const void *object, size_t size, gfp_t flags) 514 + { 515 + struct page *page; 516 + 517 + if (unlikely(object == ZERO_SIZE_PTR)) 518 + return (void *)object; 519 + 520 + page = virt_to_head_page(object); 521 + 522 + if (unlikely(!PageSlab(page))) 523 + return kasan_kmalloc_large(object, size, flags); 524 + else 525 + return kasan_kmalloc(page->slab_cache, object, size, flags); 526 + } 527 + 528 + void kasan_poison_kfree(void *ptr, unsigned long ip) 529 + { 530 + struct page *page; 531 + 532 + page = virt_to_head_page(ptr); 533 + 534 + if (unlikely(!PageSlab(page))) { 535 + if (ptr != page_address(page)) { 536 + kasan_report_invalid_free(ptr, ip); 537 + return; 538 + } 539 + kasan_poison_shadow(ptr, PAGE_SIZE << compound_order(page), 540 + KASAN_FREE_PAGE); 541 + } else { 542 + __kasan_slab_free(page->slab_cache, ptr, ip, false); 543 + } 544 + } 545 + 546 + void kasan_kfree_large(void *ptr, unsigned long 
ip) 547 + { 548 + if (ptr != page_address(virt_to_head_page(ptr))) 549 + kasan_report_invalid_free(ptr, ip); 550 + /* The object will be poisoned by page_alloc. */ 551 + } 552 + 553 + int kasan_module_alloc(void *addr, size_t size) 554 + { 555 + void *ret; 556 + size_t scaled_size; 557 + size_t shadow_size; 558 + unsigned long shadow_start; 559 + 560 + shadow_start = (unsigned long)kasan_mem_to_shadow(addr); 561 + scaled_size = (size + KASAN_SHADOW_MASK) >> KASAN_SHADOW_SCALE_SHIFT; 562 + shadow_size = round_up(scaled_size, PAGE_SIZE); 563 + 564 + if (WARN_ON(!PAGE_ALIGNED(shadow_start))) 565 + return -EINVAL; 566 + 567 + ret = __vmalloc_node_range(shadow_size, 1, shadow_start, 568 + shadow_start + shadow_size, 569 + GFP_KERNEL, 570 + PAGE_KERNEL, VM_NO_GUARD, NUMA_NO_NODE, 571 + __builtin_return_address(0)); 572 + 573 + if (ret) { 574 + __memset(ret, KASAN_SHADOW_INIT, shadow_size); 575 + find_vm_area(addr)->flags |= VM_KASAN; 576 + kmemleak_ignore(ret); 577 + return 0; 578 + } 579 + 580 + return -ENOMEM; 581 + } 582 + 583 + void kasan_free_shadow(const struct vm_struct *vm) 584 + { 585 + if (vm->flags & VM_KASAN) 586 + vfree(kasan_mem_to_shadow(vm->addr)); 587 + } 588 + 589 + #ifdef CONFIG_MEMORY_HOTPLUG 590 + static bool shadow_mapped(unsigned long addr) 591 + { 592 + pgd_t *pgd = pgd_offset_k(addr); 593 + p4d_t *p4d; 594 + pud_t *pud; 595 + pmd_t *pmd; 596 + pte_t *pte; 597 + 598 + if (pgd_none(*pgd)) 599 + return false; 600 + p4d = p4d_offset(pgd, addr); 601 + if (p4d_none(*p4d)) 602 + return false; 603 + pud = pud_offset(p4d, addr); 604 + if (pud_none(*pud)) 605 + return false; 606 + 607 + /* 608 + * We can't use pud_large() or pud_huge(), the first one is 609 + * arch-specific, the last one depends on HUGETLB_PAGE. So let's abuse 610 + * pud_bad(), if pud is bad then it's bad because it's huge. 
611 + */ 612 + if (pud_bad(*pud)) 613 + return true; 614 + pmd = pmd_offset(pud, addr); 615 + if (pmd_none(*pmd)) 616 + return false; 617 + 618 + if (pmd_bad(*pmd)) 619 + return true; 620 + pte = pte_offset_kernel(pmd, addr); 621 + return !pte_none(*pte); 622 + } 623 + 624 + static int __meminit kasan_mem_notifier(struct notifier_block *nb, 625 + unsigned long action, void *data) 626 + { 627 + struct memory_notify *mem_data = data; 628 + unsigned long nr_shadow_pages, start_kaddr, shadow_start; 629 + unsigned long shadow_end, shadow_size; 630 + 631 + nr_shadow_pages = mem_data->nr_pages >> KASAN_SHADOW_SCALE_SHIFT; 632 + start_kaddr = (unsigned long)pfn_to_kaddr(mem_data->start_pfn); 633 + shadow_start = (unsigned long)kasan_mem_to_shadow((void *)start_kaddr); 634 + shadow_size = nr_shadow_pages << PAGE_SHIFT; 635 + shadow_end = shadow_start + shadow_size; 636 + 637 + if (WARN_ON(mem_data->nr_pages % KASAN_SHADOW_SCALE_SIZE) || 638 + WARN_ON(start_kaddr % (KASAN_SHADOW_SCALE_SIZE << PAGE_SHIFT))) 639 + return NOTIFY_BAD; 640 + 641 + switch (action) { 642 + case MEM_GOING_ONLINE: { 643 + void *ret; 644 + 645 + /* 646 + * If the shadow is already mapped then it must have been mapped 647 + * during boot. This can happen when we are onlining previously 648 + * offlined memory. 649 + */ 650 + if (shadow_mapped(shadow_start)) 651 + return NOTIFY_OK; 652 + 653 + ret = __vmalloc_node_range(shadow_size, PAGE_SIZE, shadow_start, 654 + shadow_end, GFP_KERNEL, 655 + PAGE_KERNEL, VM_NO_GUARD, 656 + pfn_to_nid(mem_data->start_pfn), 657 + __builtin_return_address(0)); 658 + if (!ret) 659 + return NOTIFY_BAD; 660 + 661 + kmemleak_ignore(ret); 662 + return NOTIFY_OK; 663 + } 664 + case MEM_CANCEL_ONLINE: 665 + case MEM_OFFLINE: { 666 + struct vm_struct *vm; 667 + 668 + /* 669 + * shadow_start was either mapped during boot by kasan_init() 670 + * or during memory online by __vmalloc_node_range(). 671 + * In the latter case we can use vfree() to free shadow.
672 + * Non-NULL result of the find_vm_area() will tell us if 673 + * that was the second case. 674 + * 675 + * Currently it's not possible to free shadow mapped 676 + * during boot by kasan_init(). It's because the code 677 + * to do that hasn't been written yet. So we'll just 678 + * leak the memory. 679 + */ 680 + vm = find_vm_area((void *)shadow_start); 681 + if (vm) 682 + vfree((void *)shadow_start); 683 + } 684 + } 685 + 686 + return NOTIFY_OK; 687 + } 688 + 689 + static int __init kasan_memhotplug_init(void) 690 + { 691 + hotplug_memory_notifier(kasan_mem_notifier, 0); 692 + 693 + return 0; 694 + } 695 + 696 + core_initcall(kasan_memhotplug_init); 697 + #endif
+344
mm/kasan/generic.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * This file contains core generic KASAN code. 4 + * 5 + * Copyright (c) 2014 Samsung Electronics Co., Ltd. 6 + * Author: Andrey Ryabinin <ryabinin.a.a@gmail.com> 7 + * 8 + * Some code borrowed from https://github.com/xairy/kasan-prototype by 9 + * Andrey Konovalov <andreyknvl@gmail.com> 10 + * 11 + * This program is free software; you can redistribute it and/or modify 12 + * it under the terms of the GNU General Public License version 2 as 13 + * published by the Free Software Foundation. 14 + * 15 + */ 16 + 17 + #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 18 + #define DISABLE_BRANCH_PROFILING 19 + 20 + #include <linux/export.h> 21 + #include <linux/interrupt.h> 22 + #include <linux/init.h> 23 + #include <linux/kasan.h> 24 + #include <linux/kernel.h> 25 + #include <linux/kmemleak.h> 26 + #include <linux/linkage.h> 27 + #include <linux/memblock.h> 28 + #include <linux/memory.h> 29 + #include <linux/mm.h> 30 + #include <linux/module.h> 31 + #include <linux/printk.h> 32 + #include <linux/sched.h> 33 + #include <linux/sched/task_stack.h> 34 + #include <linux/slab.h> 35 + #include <linux/stacktrace.h> 36 + #include <linux/string.h> 37 + #include <linux/types.h> 38 + #include <linux/vmalloc.h> 39 + #include <linux/bug.h> 40 + 41 + #include "kasan.h" 42 + #include "../slab.h" 43 + 44 + /* 45 + * All functions below are always inlined so the compiler can 46 + * perform better optimizations in each of __asan_loadX/__asan_storeX 47 + * depending on memory access size X.
48 + */ 49 + 50 + static __always_inline bool memory_is_poisoned_1(unsigned long addr) 51 + { 52 + s8 shadow_value = *(s8 *)kasan_mem_to_shadow((void *)addr); 53 + 54 + if (unlikely(shadow_value)) { 55 + s8 last_accessible_byte = addr & KASAN_SHADOW_MASK; 56 + return unlikely(last_accessible_byte >= shadow_value); 57 + } 58 + 59 + return false; 60 + } 61 + 62 + static __always_inline bool memory_is_poisoned_2_4_8(unsigned long addr, 63 + unsigned long size) 64 + { 65 + u8 *shadow_addr = (u8 *)kasan_mem_to_shadow((void *)addr); 66 + 67 + /* 68 + * Access crosses 8(shadow size)-byte boundary. Such access maps 69 + * into 2 shadow bytes, so we need to check them both. 70 + */ 71 + if (unlikely(((addr + size - 1) & KASAN_SHADOW_MASK) < size - 1)) 72 + return *shadow_addr || memory_is_poisoned_1(addr + size - 1); 73 + 74 + return memory_is_poisoned_1(addr + size - 1); 75 + } 76 + 77 + static __always_inline bool memory_is_poisoned_16(unsigned long addr) 78 + { 79 + u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr); 80 + 81 + /* Unaligned 16-bytes access maps into 3 shadow bytes. 
*/ 82 + if (unlikely(!IS_ALIGNED(addr, KASAN_SHADOW_SCALE_SIZE))) 83 + return *shadow_addr || memory_is_poisoned_1(addr + 15); 84 + 85 + return *shadow_addr; 86 + } 87 + 88 + static __always_inline unsigned long bytes_is_nonzero(const u8 *start, 89 + size_t size) 90 + { 91 + while (size) { 92 + if (unlikely(*start)) 93 + return (unsigned long)start; 94 + start++; 95 + size--; 96 + } 97 + 98 + return 0; 99 + } 100 + 101 + static __always_inline unsigned long memory_is_nonzero(const void *start, 102 + const void *end) 103 + { 104 + unsigned int words; 105 + unsigned long ret; 106 + unsigned int prefix = (unsigned long)start % 8; 107 + 108 + if (end - start <= 16) 109 + return bytes_is_nonzero(start, end - start); 110 + 111 + if (prefix) { 112 + prefix = 8 - prefix; 113 + ret = bytes_is_nonzero(start, prefix); 114 + if (unlikely(ret)) 115 + return ret; 116 + start += prefix; 117 + } 118 + 119 + words = (end - start) / 8; 120 + while (words) { 121 + if (unlikely(*(u64 *)start)) 122 + return bytes_is_nonzero(start, 8); 123 + start += 8; 124 + words--; 125 + } 126 + 127 + return bytes_is_nonzero(start, (end - start) % 8); 128 + } 129 + 130 + static __always_inline bool memory_is_poisoned_n(unsigned long addr, 131 + size_t size) 132 + { 133 + unsigned long ret; 134 + 135 + ret = memory_is_nonzero(kasan_mem_to_shadow((void *)addr), 136 + kasan_mem_to_shadow((void *)addr + size - 1) + 1); 137 + 138 + if (unlikely(ret)) { 139 + unsigned long last_byte = addr + size - 1; 140 + s8 *last_shadow = (s8 *)kasan_mem_to_shadow((void *)last_byte); 141 + 142 + if (unlikely(ret != (unsigned long)last_shadow || 143 + ((long)(last_byte & KASAN_SHADOW_MASK) >= *last_shadow))) 144 + return true; 145 + } 146 + return false; 147 + } 148 + 149 + static __always_inline bool memory_is_poisoned(unsigned long addr, size_t size) 150 + { 151 + if (__builtin_constant_p(size)) { 152 + switch (size) { 153 + case 1: 154 + return memory_is_poisoned_1(addr); 155 + case 2: 156 + case 4: 157 + case 8: 158 
+ return memory_is_poisoned_2_4_8(addr, size); 159 + case 16: 160 + return memory_is_poisoned_16(addr); 161 + default: 162 + BUILD_BUG(); 163 + } 164 + } 165 + 166 + return memory_is_poisoned_n(addr, size); 167 + } 168 + 169 + static __always_inline void check_memory_region_inline(unsigned long addr, 170 + size_t size, bool write, 171 + unsigned long ret_ip) 172 + { 173 + if (unlikely(size == 0)) 174 + return; 175 + 176 + if (unlikely((void *)addr < 177 + kasan_shadow_to_mem((void *)KASAN_SHADOW_START))) { 178 + kasan_report(addr, size, write, ret_ip); 179 + return; 180 + } 181 + 182 + if (likely(!memory_is_poisoned(addr, size))) 183 + return; 184 + 185 + kasan_report(addr, size, write, ret_ip); 186 + } 187 + 188 + void check_memory_region(unsigned long addr, size_t size, bool write, 189 + unsigned long ret_ip) 190 + { 191 + check_memory_region_inline(addr, size, write, ret_ip); 192 + } 193 + 194 + void kasan_cache_shrink(struct kmem_cache *cache) 195 + { 196 + quarantine_remove_cache(cache); 197 + } 198 + 199 + void kasan_cache_shutdown(struct kmem_cache *cache) 200 + { 201 + if (!__kmem_cache_empty(cache)) 202 + quarantine_remove_cache(cache); 203 + } 204 + 205 + static void register_global(struct kasan_global *global) 206 + { 207 + size_t aligned_size = round_up(global->size, KASAN_SHADOW_SCALE_SIZE); 208 + 209 + kasan_unpoison_shadow(global->beg, global->size); 210 + 211 + kasan_poison_shadow(global->beg + aligned_size, 212 + global->size_with_redzone - aligned_size, 213 + KASAN_GLOBAL_REDZONE); 214 + } 215 + 216 + void __asan_register_globals(struct kasan_global *globals, size_t size) 217 + { 218 + int i; 219 + 220 + for (i = 0; i < size; i++) 221 + register_global(&globals[i]); 222 + } 223 + EXPORT_SYMBOL(__asan_register_globals); 224 + 225 + void __asan_unregister_globals(struct kasan_global *globals, size_t size) 226 + { 227 + } 228 + EXPORT_SYMBOL(__asan_unregister_globals); 229 + 230 + #define DEFINE_ASAN_LOAD_STORE(size) \ 231 + void 
__asan_load##size(unsigned long addr) \ 232 + { \ 233 + check_memory_region_inline(addr, size, false, _RET_IP_);\ 234 + } \ 235 + EXPORT_SYMBOL(__asan_load##size); \ 236 + __alias(__asan_load##size) \ 237 + void __asan_load##size##_noabort(unsigned long); \ 238 + EXPORT_SYMBOL(__asan_load##size##_noabort); \ 239 + void __asan_store##size(unsigned long addr) \ 240 + { \ 241 + check_memory_region_inline(addr, size, true, _RET_IP_); \ 242 + } \ 243 + EXPORT_SYMBOL(__asan_store##size); \ 244 + __alias(__asan_store##size) \ 245 + void __asan_store##size##_noabort(unsigned long); \ 246 + EXPORT_SYMBOL(__asan_store##size##_noabort) 247 + 248 + DEFINE_ASAN_LOAD_STORE(1); 249 + DEFINE_ASAN_LOAD_STORE(2); 250 + DEFINE_ASAN_LOAD_STORE(4); 251 + DEFINE_ASAN_LOAD_STORE(8); 252 + DEFINE_ASAN_LOAD_STORE(16); 253 + 254 + void __asan_loadN(unsigned long addr, size_t size) 255 + { 256 + check_memory_region(addr, size, false, _RET_IP_); 257 + } 258 + EXPORT_SYMBOL(__asan_loadN); 259 + 260 + __alias(__asan_loadN) 261 + void __asan_loadN_noabort(unsigned long, size_t); 262 + EXPORT_SYMBOL(__asan_loadN_noabort); 263 + 264 + void __asan_storeN(unsigned long addr, size_t size) 265 + { 266 + check_memory_region(addr, size, true, _RET_IP_); 267 + } 268 + EXPORT_SYMBOL(__asan_storeN); 269 + 270 + __alias(__asan_storeN) 271 + void __asan_storeN_noabort(unsigned long, size_t); 272 + EXPORT_SYMBOL(__asan_storeN_noabort); 273 + 274 + /* to shut up compiler complaints */ 275 + void __asan_handle_no_return(void) {} 276 + EXPORT_SYMBOL(__asan_handle_no_return); 277 + 278 + /* Emitted by compiler to poison large objects when they go out of scope. */ 279 + void __asan_poison_stack_memory(const void *addr, size_t size) 280 + { 281 + /* 282 + * Addr is KASAN_SHADOW_SCALE_SIZE-aligned and the object is surrounded 283 + * by redzones, so we simply round up size to simplify logic. 
284 + */ 285 + kasan_poison_shadow(addr, round_up(size, KASAN_SHADOW_SCALE_SIZE), 286 + KASAN_USE_AFTER_SCOPE); 287 + } 288 + EXPORT_SYMBOL(__asan_poison_stack_memory); 289 + 290 + /* Emitted by compiler to unpoison large objects when they go into scope. */ 291 + void __asan_unpoison_stack_memory(const void *addr, size_t size) 292 + { 293 + kasan_unpoison_shadow(addr, size); 294 + } 295 + EXPORT_SYMBOL(__asan_unpoison_stack_memory); 296 + 297 + /* Emitted by compiler to poison alloca()ed objects. */ 298 + void __asan_alloca_poison(unsigned long addr, size_t size) 299 + { 300 + size_t rounded_up_size = round_up(size, KASAN_SHADOW_SCALE_SIZE); 301 + size_t padding_size = round_up(size, KASAN_ALLOCA_REDZONE_SIZE) - 302 + rounded_up_size; 303 + size_t rounded_down_size = round_down(size, KASAN_SHADOW_SCALE_SIZE); 304 + 305 + const void *left_redzone = (const void *)(addr - 306 + KASAN_ALLOCA_REDZONE_SIZE); 307 + const void *right_redzone = (const void *)(addr + rounded_up_size); 308 + 309 + WARN_ON(!IS_ALIGNED(addr, KASAN_ALLOCA_REDZONE_SIZE)); 310 + 311 + kasan_unpoison_shadow((const void *)(addr + rounded_down_size), 312 + size - rounded_down_size); 313 + kasan_poison_shadow(left_redzone, KASAN_ALLOCA_REDZONE_SIZE, 314 + KASAN_ALLOCA_LEFT); 315 + kasan_poison_shadow(right_redzone, 316 + padding_size + KASAN_ALLOCA_REDZONE_SIZE, 317 + KASAN_ALLOCA_RIGHT); 318 + } 319 + EXPORT_SYMBOL(__asan_alloca_poison); 320 + 321 + /* Emitted by compiler to unpoison alloca()ed areas when the stack unwinds. */ 322 + void __asan_allocas_unpoison(const void *stack_top, const void *stack_bottom) 323 + { 324 + if (unlikely(!stack_top || stack_top > stack_bottom)) 325 + return; 326 + 327 + kasan_unpoison_shadow(stack_top, stack_bottom - stack_top); 328 + } 329 + EXPORT_SYMBOL(__asan_allocas_unpoison); 330 + 331 + /* Emitted by the compiler to [un]poison local variables. 
*/ 332 + #define DEFINE_ASAN_SET_SHADOW(byte) \ 333 + void __asan_set_shadow_##byte(const void *addr, size_t size) \ 334 + { \ 335 + __memset((void *)addr, 0x##byte, size); \ 336 + } \ 337 + EXPORT_SYMBOL(__asan_set_shadow_##byte) 338 + 339 + DEFINE_ASAN_SET_SHADOW(00); 340 + DEFINE_ASAN_SET_SHADOW(f1); 341 + DEFINE_ASAN_SET_SHADOW(f2); 342 + DEFINE_ASAN_SET_SHADOW(f3); 343 + DEFINE_ASAN_SET_SHADOW(f5); 344 + DEFINE_ASAN_SET_SHADOW(f8);
+153
mm/kasan/generic_report.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * This file contains generic KASAN specific error reporting code. 4 + * 5 + * Copyright (c) 2014 Samsung Electronics Co., Ltd. 6 + * Author: Andrey Ryabinin <ryabinin.a.a@gmail.com> 7 + * 8 + * Some code borrowed from https://github.com/xairy/kasan-prototype by 9 + * Andrey Konovalov <andreyknvl@gmail.com> 10 + * 11 + * This program is free software; you can redistribute it and/or modify 12 + * it under the terms of the GNU General Public License version 2 as 13 + * published by the Free Software Foundation. 14 + * 15 + */ 16 + 17 + #include <linux/bitops.h> 18 + #include <linux/ftrace.h> 19 + #include <linux/init.h> 20 + #include <linux/kernel.h> 21 + #include <linux/mm.h> 22 + #include <linux/printk.h> 23 + #include <linux/sched.h> 24 + #include <linux/slab.h> 25 + #include <linux/stackdepot.h> 26 + #include <linux/stacktrace.h> 27 + #include <linux/string.h> 28 + #include <linux/types.h> 29 + #include <linux/kasan.h> 30 + #include <linux/module.h> 31 + 32 + #include <asm/sections.h> 33 + 34 + #include "kasan.h" 35 + #include "../slab.h" 36 + 37 + void *find_first_bad_addr(void *addr, size_t size) 38 + { 39 + void *p = addr; 40 + 41 + while (p < addr + size && !(*(u8 *)kasan_mem_to_shadow(p))) 42 + p += KASAN_SHADOW_SCALE_SIZE; 43 + return p; 44 + } 45 + 46 + static const char *get_shadow_bug_type(struct kasan_access_info *info) 47 + { 48 + const char *bug_type = "unknown-crash"; 49 + u8 *shadow_addr; 50 + 51 + shadow_addr = (u8 *)kasan_mem_to_shadow(info->first_bad_addr); 52 + 53 + /* 54 + * If shadow byte value is in [0, KASAN_SHADOW_SCALE_SIZE) we can look 55 + * at the next shadow byte to determine the type of the bad access. 56 + */ 57 + if (*shadow_addr > 0 && *shadow_addr <= KASAN_SHADOW_SCALE_SIZE - 1) 58 + shadow_addr++; 59 + 60 + switch (*shadow_addr) { 61 + case 0 ... 
KASAN_SHADOW_SCALE_SIZE - 1: 62 + /* 63 + * In theory it's still possible to see these shadow values 64 + * due to a data race in the kernel code. 65 + */ 66 + bug_type = "out-of-bounds"; 67 + break; 68 + case KASAN_PAGE_REDZONE: 69 + case KASAN_KMALLOC_REDZONE: 70 + bug_type = "slab-out-of-bounds"; 71 + break; 72 + case KASAN_GLOBAL_REDZONE: 73 + bug_type = "global-out-of-bounds"; 74 + break; 75 + case KASAN_STACK_LEFT: 76 + case KASAN_STACK_MID: 77 + case KASAN_STACK_RIGHT: 78 + case KASAN_STACK_PARTIAL: 79 + bug_type = "stack-out-of-bounds"; 80 + break; 81 + case KASAN_FREE_PAGE: 82 + case KASAN_KMALLOC_FREE: 83 + bug_type = "use-after-free"; 84 + break; 85 + case KASAN_USE_AFTER_SCOPE: 86 + bug_type = "use-after-scope"; 87 + break; 88 + case KASAN_ALLOCA_LEFT: 89 + case KASAN_ALLOCA_RIGHT: 90 + bug_type = "alloca-out-of-bounds"; 91 + break; 92 + } 93 + 94 + return bug_type; 95 + } 96 + 97 + static const char *get_wild_bug_type(struct kasan_access_info *info) 98 + { 99 + const char *bug_type = "unknown-crash"; 100 + 101 + if ((unsigned long)info->access_addr < PAGE_SIZE) 102 + bug_type = "null-ptr-deref"; 103 + else if ((unsigned long)info->access_addr < TASK_SIZE) 104 + bug_type = "user-memory-access"; 105 + else 106 + bug_type = "wild-memory-access"; 107 + 108 + return bug_type; 109 + } 110 + 111 + const char *get_bug_type(struct kasan_access_info *info) 112 + { 113 + if (addr_has_shadow(info->access_addr)) 114 + return get_shadow_bug_type(info); 115 + return get_wild_bug_type(info); 116 + } 117 + 118 + #define DEFINE_ASAN_REPORT_LOAD(size) \ 119 + void __asan_report_load##size##_noabort(unsigned long addr) \ 120 + { \ 121 + kasan_report(addr, size, false, _RET_IP_); \ 122 + } \ 123 + EXPORT_SYMBOL(__asan_report_load##size##_noabort) 124 + 125 + #define DEFINE_ASAN_REPORT_STORE(size) \ 126 + void __asan_report_store##size##_noabort(unsigned long addr) \ 127 + { \ 128 + kasan_report(addr, size, true, _RET_IP_); \ 129 + } \ 130 + 
EXPORT_SYMBOL(__asan_report_store##size##_noabort) 131 + 132 + DEFINE_ASAN_REPORT_LOAD(1); 133 + DEFINE_ASAN_REPORT_LOAD(2); 134 + DEFINE_ASAN_REPORT_LOAD(4); 135 + DEFINE_ASAN_REPORT_LOAD(8); 136 + DEFINE_ASAN_REPORT_LOAD(16); 137 + DEFINE_ASAN_REPORT_STORE(1); 138 + DEFINE_ASAN_REPORT_STORE(2); 139 + DEFINE_ASAN_REPORT_STORE(4); 140 + DEFINE_ASAN_REPORT_STORE(8); 141 + DEFINE_ASAN_REPORT_STORE(16); 142 + 143 + void __asan_report_load_n_noabort(unsigned long addr, size_t size) 144 + { 145 + kasan_report(addr, size, false, _RET_IP_); 146 + } 147 + EXPORT_SYMBOL(__asan_report_load_n_noabort); 148 + 149 + void __asan_report_store_n_noabort(unsigned long addr, size_t size) 150 + { 151 + kasan_report(addr, size, true, _RET_IP_); 152 + } 153 + EXPORT_SYMBOL(__asan_report_store_n_noabort);
-903
mm/kasan/kasan.c
··· 1 - /* 2 - * This file contains shadow memory manipulation code. 3 - * 4 - * Copyright (c) 2014 Samsung Electronics Co., Ltd. 5 - * Author: Andrey Ryabinin <ryabinin.a.a@gmail.com> 6 - * 7 - * Some code borrowed from https://github.com/xairy/kasan-prototype by 8 - * Andrey Konovalov <andreyknvl@gmail.com> 9 - * 10 - * This program is free software; you can redistribute it and/or modify 11 - * it under the terms of the GNU General Public License version 2 as 12 - * published by the Free Software Foundation. 13 - * 14 - */ 15 - 16 - #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 17 - #define DISABLE_BRANCH_PROFILING 18 - 19 - #include <linux/export.h> 20 - #include <linux/interrupt.h> 21 - #include <linux/init.h> 22 - #include <linux/kasan.h> 23 - #include <linux/kernel.h> 24 - #include <linux/kmemleak.h> 25 - #include <linux/linkage.h> 26 - #include <linux/memblock.h> 27 - #include <linux/memory.h> 28 - #include <linux/mm.h> 29 - #include <linux/module.h> 30 - #include <linux/printk.h> 31 - #include <linux/sched.h> 32 - #include <linux/sched/task_stack.h> 33 - #include <linux/slab.h> 34 - #include <linux/stacktrace.h> 35 - #include <linux/string.h> 36 - #include <linux/types.h> 37 - #include <linux/vmalloc.h> 38 - #include <linux/bug.h> 39 - 40 - #include "kasan.h" 41 - #include "../slab.h" 42 - 43 - void kasan_enable_current(void) 44 - { 45 - current->kasan_depth++; 46 - } 47 - 48 - void kasan_disable_current(void) 49 - { 50 - current->kasan_depth--; 51 - } 52 - 53 - /* 54 - * Poisons the shadow memory for 'size' bytes starting from 'addr'. 55 - * Memory addresses should be aligned to KASAN_SHADOW_SCALE_SIZE. 
56 - */ 57 - static void kasan_poison_shadow(const void *address, size_t size, u8 value) 58 - { 59 - void *shadow_start, *shadow_end; 60 - 61 - shadow_start = kasan_mem_to_shadow(address); 62 - shadow_end = kasan_mem_to_shadow(address + size); 63 - 64 - memset(shadow_start, value, shadow_end - shadow_start); 65 - } 66 - 67 - void kasan_unpoison_shadow(const void *address, size_t size) 68 - { 69 - kasan_poison_shadow(address, size, 0); 70 - 71 - if (size & KASAN_SHADOW_MASK) { 72 - u8 *shadow = (u8 *)kasan_mem_to_shadow(address + size); 73 - *shadow = size & KASAN_SHADOW_MASK; 74 - } 75 - } 76 - 77 - static void __kasan_unpoison_stack(struct task_struct *task, const void *sp) 78 - { 79 - void *base = task_stack_page(task); 80 - size_t size = sp - base; 81 - 82 - kasan_unpoison_shadow(base, size); 83 - } 84 - 85 - /* Unpoison the entire stack for a task. */ 86 - void kasan_unpoison_task_stack(struct task_struct *task) 87 - { 88 - __kasan_unpoison_stack(task, task_stack_page(task) + THREAD_SIZE); 89 - } 90 - 91 - /* Unpoison the stack for the current task beyond a watermark sp value. */ 92 - asmlinkage void kasan_unpoison_task_stack_below(const void *watermark) 93 - { 94 - /* 95 - * Calculate the task stack base address. Avoid using 'current' 96 - * because this function is called by early resume code which hasn't 97 - * yet set up the percpu register (%gs). 98 - */ 99 - void *base = (void *)((unsigned long)watermark & ~(THREAD_SIZE - 1)); 100 - 101 - kasan_unpoison_shadow(base, watermark - base); 102 - } 103 - 104 - /* 105 - * Clear all poison for the region between the current SP and a provided 106 - * watermark value, as is sometimes required prior to hand-crafted asm function 107 - * returns in the middle of functions. 
108 - */ 109 - void kasan_unpoison_stack_above_sp_to(const void *watermark) 110 - { 111 - const void *sp = __builtin_frame_address(0); 112 - size_t size = watermark - sp; 113 - 114 - if (WARN_ON(sp > watermark)) 115 - return; 116 - kasan_unpoison_shadow(sp, size); 117 - } 118 - 119 - /* 120 - * All functions below always inlined so compiler could 121 - * perform better optimizations in each of __asan_loadX/__assn_storeX 122 - * depending on memory access size X. 123 - */ 124 - 125 - static __always_inline bool memory_is_poisoned_1(unsigned long addr) 126 - { 127 - s8 shadow_value = *(s8 *)kasan_mem_to_shadow((void *)addr); 128 - 129 - if (unlikely(shadow_value)) { 130 - s8 last_accessible_byte = addr & KASAN_SHADOW_MASK; 131 - return unlikely(last_accessible_byte >= shadow_value); 132 - } 133 - 134 - return false; 135 - } 136 - 137 - static __always_inline bool memory_is_poisoned_2_4_8(unsigned long addr, 138 - unsigned long size) 139 - { 140 - u8 *shadow_addr = (u8 *)kasan_mem_to_shadow((void *)addr); 141 - 142 - /* 143 - * Access crosses 8(shadow size)-byte boundary. Such access maps 144 - * into 2 shadow bytes, so we need to check them both. 145 - */ 146 - if (unlikely(((addr + size - 1) & KASAN_SHADOW_MASK) < size - 1)) 147 - return *shadow_addr || memory_is_poisoned_1(addr + size - 1); 148 - 149 - return memory_is_poisoned_1(addr + size - 1); 150 - } 151 - 152 - static __always_inline bool memory_is_poisoned_16(unsigned long addr) 153 - { 154 - u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr); 155 - 156 - /* Unaligned 16-bytes access maps into 3 shadow bytes. 
*/ 157 - if (unlikely(!IS_ALIGNED(addr, KASAN_SHADOW_SCALE_SIZE))) 158 - return *shadow_addr || memory_is_poisoned_1(addr + 15); 159 - 160 - return *shadow_addr; 161 - } 162 - 163 - static __always_inline unsigned long bytes_is_nonzero(const u8 *start, 164 - size_t size) 165 - { 166 - while (size) { 167 - if (unlikely(*start)) 168 - return (unsigned long)start; 169 - start++; 170 - size--; 171 - } 172 - 173 - return 0; 174 - } 175 - 176 - static __always_inline unsigned long memory_is_nonzero(const void *start, 177 - const void *end) 178 - { 179 - unsigned int words; 180 - unsigned long ret; 181 - unsigned int prefix = (unsigned long)start % 8; 182 - 183 - if (end - start <= 16) 184 - return bytes_is_nonzero(start, end - start); 185 - 186 - if (prefix) { 187 - prefix = 8 - prefix; 188 - ret = bytes_is_nonzero(start, prefix); 189 - if (unlikely(ret)) 190 - return ret; 191 - start += prefix; 192 - } 193 - 194 - words = (end - start) / 8; 195 - while (words) { 196 - if (unlikely(*(u64 *)start)) 197 - return bytes_is_nonzero(start, 8); 198 - start += 8; 199 - words--; 200 - } 201 - 202 - return bytes_is_nonzero(start, (end - start) % 8); 203 - } 204 - 205 - static __always_inline bool memory_is_poisoned_n(unsigned long addr, 206 - size_t size) 207 - { 208 - unsigned long ret; 209 - 210 - ret = memory_is_nonzero(kasan_mem_to_shadow((void *)addr), 211 - kasan_mem_to_shadow((void *)addr + size - 1) + 1); 212 - 213 - if (unlikely(ret)) { 214 - unsigned long last_byte = addr + size - 1; 215 - s8 *last_shadow = (s8 *)kasan_mem_to_shadow((void *)last_byte); 216 - 217 - if (unlikely(ret != (unsigned long)last_shadow || 218 - ((long)(last_byte & KASAN_SHADOW_MASK) >= *last_shadow))) 219 - return true; 220 - } 221 - return false; 222 - } 223 - 224 - static __always_inline bool memory_is_poisoned(unsigned long addr, size_t size) 225 - { 226 - if (__builtin_constant_p(size)) { 227 - switch (size) { 228 - case 1: 229 - return memory_is_poisoned_1(addr); 230 - case 2: 231 - case 4: 
232 - case 8: 233 - return memory_is_poisoned_2_4_8(addr, size); 234 - case 16: 235 - return memory_is_poisoned_16(addr); 236 - default: 237 - BUILD_BUG(); 238 - } 239 - } 240 - 241 - return memory_is_poisoned_n(addr, size); 242 - } 243 - 244 - static __always_inline void check_memory_region_inline(unsigned long addr, 245 - size_t size, bool write, 246 - unsigned long ret_ip) 247 - { 248 - if (unlikely(size == 0)) 249 - return; 250 - 251 - if (unlikely((void *)addr < 252 - kasan_shadow_to_mem((void *)KASAN_SHADOW_START))) { 253 - kasan_report(addr, size, write, ret_ip); 254 - return; 255 - } 256 - 257 - if (likely(!memory_is_poisoned(addr, size))) 258 - return; 259 - 260 - kasan_report(addr, size, write, ret_ip); 261 - } 262 - 263 - static void check_memory_region(unsigned long addr, 264 - size_t size, bool write, 265 - unsigned long ret_ip) 266 - { 267 - check_memory_region_inline(addr, size, write, ret_ip); 268 - } 269 - 270 - void kasan_check_read(const volatile void *p, unsigned int size) 271 - { 272 - check_memory_region((unsigned long)p, size, false, _RET_IP_); 273 - } 274 - EXPORT_SYMBOL(kasan_check_read); 275 - 276 - void kasan_check_write(const volatile void *p, unsigned int size) 277 - { 278 - check_memory_region((unsigned long)p, size, true, _RET_IP_); 279 - } 280 - EXPORT_SYMBOL(kasan_check_write); 281 - 282 - #undef memset 283 - void *memset(void *addr, int c, size_t len) 284 - { 285 - check_memory_region((unsigned long)addr, len, true, _RET_IP_); 286 - 287 - return __memset(addr, c, len); 288 - } 289 - 290 - #undef memmove 291 - void *memmove(void *dest, const void *src, size_t len) 292 - { 293 - check_memory_region((unsigned long)src, len, false, _RET_IP_); 294 - check_memory_region((unsigned long)dest, len, true, _RET_IP_); 295 - 296 - return __memmove(dest, src, len); 297 - } 298 - 299 - #undef memcpy 300 - void *memcpy(void *dest, const void *src, size_t len) 301 - { 302 - check_memory_region((unsigned long)src, len, false, _RET_IP_); 303 - 
check_memory_region((unsigned long)dest, len, true, _RET_IP_); 304 - 305 - return __memcpy(dest, src, len); 306 - } 307 - 308 - void kasan_alloc_pages(struct page *page, unsigned int order) 309 - { 310 - if (likely(!PageHighMem(page))) 311 - kasan_unpoison_shadow(page_address(page), PAGE_SIZE << order); 312 - } 313 - 314 - void kasan_free_pages(struct page *page, unsigned int order) 315 - { 316 - if (likely(!PageHighMem(page))) 317 - kasan_poison_shadow(page_address(page), 318 - PAGE_SIZE << order, 319 - KASAN_FREE_PAGE); 320 - } 321 - 322 - /* 323 - * Adaptive redzone policy taken from the userspace AddressSanitizer runtime. 324 - * For larger allocations larger redzones are used. 325 - */ 326 - static unsigned int optimal_redzone(unsigned int object_size) 327 - { 328 - return 329 - object_size <= 64 - 16 ? 16 : 330 - object_size <= 128 - 32 ? 32 : 331 - object_size <= 512 - 64 ? 64 : 332 - object_size <= 4096 - 128 ? 128 : 333 - object_size <= (1 << 14) - 256 ? 256 : 334 - object_size <= (1 << 15) - 512 ? 512 : 335 - object_size <= (1 << 16) - 1024 ? 1024 : 2048; 336 - } 337 - 338 - void kasan_cache_create(struct kmem_cache *cache, unsigned int *size, 339 - slab_flags_t *flags) 340 - { 341 - unsigned int orig_size = *size; 342 - int redzone_adjust; 343 - 344 - /* Add alloc meta. */ 345 - cache->kasan_info.alloc_meta_offset = *size; 346 - *size += sizeof(struct kasan_alloc_meta); 347 - 348 - /* Add free meta. 
*/ 349 - if (cache->flags & SLAB_TYPESAFE_BY_RCU || cache->ctor || 350 - cache->object_size < sizeof(struct kasan_free_meta)) { 351 - cache->kasan_info.free_meta_offset = *size; 352 - *size += sizeof(struct kasan_free_meta); 353 - } 354 - redzone_adjust = optimal_redzone(cache->object_size) - 355 - (*size - cache->object_size); 356 - 357 - if (redzone_adjust > 0) 358 - *size += redzone_adjust; 359 - 360 - *size = min_t(unsigned int, KMALLOC_MAX_SIZE, 361 - max(*size, cache->object_size + 362 - optimal_redzone(cache->object_size))); 363 - 364 - /* 365 - * If the metadata doesn't fit, don't enable KASAN at all. 366 - */ 367 - if (*size <= cache->kasan_info.alloc_meta_offset || 368 - *size <= cache->kasan_info.free_meta_offset) { 369 - cache->kasan_info.alloc_meta_offset = 0; 370 - cache->kasan_info.free_meta_offset = 0; 371 - *size = orig_size; 372 - return; 373 - } 374 - 375 - *flags |= SLAB_KASAN; 376 - } 377 - 378 - void kasan_cache_shrink(struct kmem_cache *cache) 379 - { 380 - quarantine_remove_cache(cache); 381 - } 382 - 383 - void kasan_cache_shutdown(struct kmem_cache *cache) 384 - { 385 - if (!__kmem_cache_empty(cache)) 386 - quarantine_remove_cache(cache); 387 - } 388 - 389 - size_t kasan_metadata_size(struct kmem_cache *cache) 390 - { 391 - return (cache->kasan_info.alloc_meta_offset ? 392 - sizeof(struct kasan_alloc_meta) : 0) + 393 - (cache->kasan_info.free_meta_offset ? 
394 - sizeof(struct kasan_free_meta) : 0); 395 - } 396 - 397 - void kasan_poison_slab(struct page *page) 398 - { 399 - kasan_poison_shadow(page_address(page), 400 - PAGE_SIZE << compound_order(page), 401 - KASAN_KMALLOC_REDZONE); 402 - } 403 - 404 - void kasan_unpoison_object_data(struct kmem_cache *cache, void *object) 405 - { 406 - kasan_unpoison_shadow(object, cache->object_size); 407 - } 408 - 409 - void kasan_poison_object_data(struct kmem_cache *cache, void *object) 410 - { 411 - kasan_poison_shadow(object, 412 - round_up(cache->object_size, KASAN_SHADOW_SCALE_SIZE), 413 - KASAN_KMALLOC_REDZONE); 414 - } 415 - 416 - static inline int in_irqentry_text(unsigned long ptr) 417 - { 418 - return (ptr >= (unsigned long)&__irqentry_text_start && 419 - ptr < (unsigned long)&__irqentry_text_end) || 420 - (ptr >= (unsigned long)&__softirqentry_text_start && 421 - ptr < (unsigned long)&__softirqentry_text_end); 422 - } 423 - 424 - static inline void filter_irq_stacks(struct stack_trace *trace) 425 - { 426 - int i; 427 - 428 - if (!trace->nr_entries) 429 - return; 430 - for (i = 0; i < trace->nr_entries; i++) 431 - if (in_irqentry_text(trace->entries[i])) { 432 - /* Include the irqentry function into the stack. 
*/ 433 - trace->nr_entries = i + 1; 434 - break; 435 - } 436 - } 437 - 438 - static inline depot_stack_handle_t save_stack(gfp_t flags) 439 - { 440 - unsigned long entries[KASAN_STACK_DEPTH]; 441 - struct stack_trace trace = { 442 - .nr_entries = 0, 443 - .entries = entries, 444 - .max_entries = KASAN_STACK_DEPTH, 445 - .skip = 0 446 - }; 447 - 448 - save_stack_trace(&trace); 449 - filter_irq_stacks(&trace); 450 - if (trace.nr_entries != 0 && 451 - trace.entries[trace.nr_entries-1] == ULONG_MAX) 452 - trace.nr_entries--; 453 - 454 - return depot_save_stack(&trace, flags); 455 - } 456 - 457 - static inline void set_track(struct kasan_track *track, gfp_t flags) 458 - { 459 - track->pid = current->pid; 460 - track->stack = save_stack(flags); 461 - } 462 - 463 - struct kasan_alloc_meta *get_alloc_info(struct kmem_cache *cache, 464 - const void *object) 465 - { 466 - BUILD_BUG_ON(sizeof(struct kasan_alloc_meta) > 32); 467 - return (void *)object + cache->kasan_info.alloc_meta_offset; 468 - } 469 - 470 - struct kasan_free_meta *get_free_info(struct kmem_cache *cache, 471 - const void *object) 472 - { 473 - BUILD_BUG_ON(sizeof(struct kasan_free_meta) > 32); 474 - return (void *)object + cache->kasan_info.free_meta_offset; 475 - } 476 - 477 - void kasan_init_slab_obj(struct kmem_cache *cache, const void *object) 478 - { 479 - struct kasan_alloc_meta *alloc_info; 480 - 481 - if (!(cache->flags & SLAB_KASAN)) 482 - return; 483 - 484 - alloc_info = get_alloc_info(cache, object); 485 - __memset(alloc_info, 0, sizeof(*alloc_info)); 486 - } 487 - 488 - void kasan_slab_alloc(struct kmem_cache *cache, void *object, gfp_t flags) 489 - { 490 - kasan_kmalloc(cache, object, cache->object_size, flags); 491 - } 492 - 493 - static bool __kasan_slab_free(struct kmem_cache *cache, void *object, 494 - unsigned long ip, bool quarantine) 495 - { 496 - s8 shadow_byte; 497 - unsigned long rounded_up_size; 498 - 499 - if (unlikely(nearest_obj(cache, virt_to_head_page(object), object) != 500 - 
object)) { 501 - kasan_report_invalid_free(object, ip); 502 - return true; 503 - } 504 - 505 - /* RCU slabs could be legally used after free within the RCU period */ 506 - if (unlikely(cache->flags & SLAB_TYPESAFE_BY_RCU)) 507 - return false; 508 - 509 - shadow_byte = READ_ONCE(*(s8 *)kasan_mem_to_shadow(object)); 510 - if (shadow_byte < 0 || shadow_byte >= KASAN_SHADOW_SCALE_SIZE) { 511 - kasan_report_invalid_free(object, ip); 512 - return true; 513 - } 514 - 515 - rounded_up_size = round_up(cache->object_size, KASAN_SHADOW_SCALE_SIZE); 516 - kasan_poison_shadow(object, rounded_up_size, KASAN_KMALLOC_FREE); 517 - 518 - if (!quarantine || unlikely(!(cache->flags & SLAB_KASAN))) 519 - return false; 520 - 521 - set_track(&get_alloc_info(cache, object)->free_track, GFP_NOWAIT); 522 - quarantine_put(get_free_info(cache, object), cache); 523 - return true; 524 - } 525 - 526 - bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip) 527 - { 528 - return __kasan_slab_free(cache, object, ip, true); 529 - } 530 - 531 - void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size, 532 - gfp_t flags) 533 - { 534 - unsigned long redzone_start; 535 - unsigned long redzone_end; 536 - 537 - if (gfpflags_allow_blocking(flags)) 538 - quarantine_reduce(); 539 - 540 - if (unlikely(object == NULL)) 541 - return; 542 - 543 - redzone_start = round_up((unsigned long)(object + size), 544 - KASAN_SHADOW_SCALE_SIZE); 545 - redzone_end = round_up((unsigned long)object + cache->object_size, 546 - KASAN_SHADOW_SCALE_SIZE); 547 - 548 - kasan_unpoison_shadow(object, size); 549 - kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start, 550 - KASAN_KMALLOC_REDZONE); 551 - 552 - if (cache->flags & SLAB_KASAN) 553 - set_track(&get_alloc_info(cache, object)->alloc_track, flags); 554 - } 555 - EXPORT_SYMBOL(kasan_kmalloc); 556 - 557 - void kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags) 558 - { 559 - struct page *page; 560 - unsigned 
long redzone_start; 561 - unsigned long redzone_end; 562 - 563 - if (gfpflags_allow_blocking(flags)) 564 - quarantine_reduce(); 565 - 566 - if (unlikely(ptr == NULL)) 567 - return; 568 - 569 - page = virt_to_page(ptr); 570 - redzone_start = round_up((unsigned long)(ptr + size), 571 - KASAN_SHADOW_SCALE_SIZE); 572 - redzone_end = (unsigned long)ptr + (PAGE_SIZE << compound_order(page)); 573 - 574 - kasan_unpoison_shadow(ptr, size); 575 - kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start, 576 - KASAN_PAGE_REDZONE); 577 - } 578 - 579 - void kasan_krealloc(const void *object, size_t size, gfp_t flags) 580 - { 581 - struct page *page; 582 - 583 - if (unlikely(object == ZERO_SIZE_PTR)) 584 - return; 585 - 586 - page = virt_to_head_page(object); 587 - 588 - if (unlikely(!PageSlab(page))) 589 - kasan_kmalloc_large(object, size, flags); 590 - else 591 - kasan_kmalloc(page->slab_cache, object, size, flags); 592 - } 593 - 594 - void kasan_poison_kfree(void *ptr, unsigned long ip) 595 - { 596 - struct page *page; 597 - 598 - page = virt_to_head_page(ptr); 599 - 600 - if (unlikely(!PageSlab(page))) { 601 - if (ptr != page_address(page)) { 602 - kasan_report_invalid_free(ptr, ip); 603 - return; 604 - } 605 - kasan_poison_shadow(ptr, PAGE_SIZE << compound_order(page), 606 - KASAN_FREE_PAGE); 607 - } else { 608 - __kasan_slab_free(page->slab_cache, ptr, ip, false); 609 - } 610 - } 611 - 612 - void kasan_kfree_large(void *ptr, unsigned long ip) 613 - { 614 - if (ptr != page_address(virt_to_head_page(ptr))) 615 - kasan_report_invalid_free(ptr, ip); 616 - /* The object will be poisoned by page_alloc. 
*/ 617 - } 618 - 619 - int kasan_module_alloc(void *addr, size_t size) 620 - { 621 - void *ret; 622 - size_t scaled_size; 623 - size_t shadow_size; 624 - unsigned long shadow_start; 625 - 626 - shadow_start = (unsigned long)kasan_mem_to_shadow(addr); 627 - scaled_size = (size + KASAN_SHADOW_MASK) >> KASAN_SHADOW_SCALE_SHIFT; 628 - shadow_size = round_up(scaled_size, PAGE_SIZE); 629 - 630 - if (WARN_ON(!PAGE_ALIGNED(shadow_start))) 631 - return -EINVAL; 632 - 633 - ret = __vmalloc_node_range(shadow_size, 1, shadow_start, 634 - shadow_start + shadow_size, 635 - GFP_KERNEL | __GFP_ZERO, 636 - PAGE_KERNEL, VM_NO_GUARD, NUMA_NO_NODE, 637 - __builtin_return_address(0)); 638 - 639 - if (ret) { 640 - find_vm_area(addr)->flags |= VM_KASAN; 641 - kmemleak_ignore(ret); 642 - return 0; 643 - } 644 - 645 - return -ENOMEM; 646 - } 647 - 648 - void kasan_free_shadow(const struct vm_struct *vm) 649 - { 650 - if (vm->flags & VM_KASAN) 651 - vfree(kasan_mem_to_shadow(vm->addr)); 652 - } 653 - 654 - static void register_global(struct kasan_global *global) 655 - { 656 - size_t aligned_size = round_up(global->size, KASAN_SHADOW_SCALE_SIZE); 657 - 658 - kasan_unpoison_shadow(global->beg, global->size); 659 - 660 - kasan_poison_shadow(global->beg + aligned_size, 661 - global->size_with_redzone - aligned_size, 662 - KASAN_GLOBAL_REDZONE); 663 - } 664 - 665 - void __asan_register_globals(struct kasan_global *globals, size_t size) 666 - { 667 - int i; 668 - 669 - for (i = 0; i < size; i++) 670 - register_global(&globals[i]); 671 - } 672 - EXPORT_SYMBOL(__asan_register_globals); 673 - 674 - void __asan_unregister_globals(struct kasan_global *globals, size_t size) 675 - { 676 - } 677 - EXPORT_SYMBOL(__asan_unregister_globals); 678 - 679 - #define DEFINE_ASAN_LOAD_STORE(size) \ 680 - void __asan_load##size(unsigned long addr) \ 681 - { \ 682 - check_memory_region_inline(addr, size, false, _RET_IP_);\ 683 - } \ 684 - EXPORT_SYMBOL(__asan_load##size); \ 685 - __alias(__asan_load##size) \ 686 - 
void __asan_load##size##_noabort(unsigned long); \ 687 - EXPORT_SYMBOL(__asan_load##size##_noabort); \ 688 - void __asan_store##size(unsigned long addr) \ 689 - { \ 690 - check_memory_region_inline(addr, size, true, _RET_IP_); \ 691 - } \ 692 - EXPORT_SYMBOL(__asan_store##size); \ 693 - __alias(__asan_store##size) \ 694 - void __asan_store##size##_noabort(unsigned long); \ 695 - EXPORT_SYMBOL(__asan_store##size##_noabort) 696 - 697 - DEFINE_ASAN_LOAD_STORE(1); 698 - DEFINE_ASAN_LOAD_STORE(2); 699 - DEFINE_ASAN_LOAD_STORE(4); 700 - DEFINE_ASAN_LOAD_STORE(8); 701 - DEFINE_ASAN_LOAD_STORE(16); 702 - 703 - void __asan_loadN(unsigned long addr, size_t size) 704 - { 705 - check_memory_region(addr, size, false, _RET_IP_); 706 - } 707 - EXPORT_SYMBOL(__asan_loadN); 708 - 709 - __alias(__asan_loadN) 710 - void __asan_loadN_noabort(unsigned long, size_t); 711 - EXPORT_SYMBOL(__asan_loadN_noabort); 712 - 713 - void __asan_storeN(unsigned long addr, size_t size) 714 - { 715 - check_memory_region(addr, size, true, _RET_IP_); 716 - } 717 - EXPORT_SYMBOL(__asan_storeN); 718 - 719 - __alias(__asan_storeN) 720 - void __asan_storeN_noabort(unsigned long, size_t); 721 - EXPORT_SYMBOL(__asan_storeN_noabort); 722 - 723 - /* to shut up compiler complaints */ 724 - void __asan_handle_no_return(void) {} 725 - EXPORT_SYMBOL(__asan_handle_no_return); 726 - 727 - /* Emitted by compiler to poison large objects when they go out of scope. */ 728 - void __asan_poison_stack_memory(const void *addr, size_t size) 729 - { 730 - /* 731 - * Addr is KASAN_SHADOW_SCALE_SIZE-aligned and the object is surrounded 732 - * by redzones, so we simply round up size to simplify logic. 733 - */ 734 - kasan_poison_shadow(addr, round_up(size, KASAN_SHADOW_SCALE_SIZE), 735 - KASAN_USE_AFTER_SCOPE); 736 - } 737 - EXPORT_SYMBOL(__asan_poison_stack_memory); 738 - 739 - /* Emitted by compiler to unpoison large objects when they go into scope. 
*/ 740 - void __asan_unpoison_stack_memory(const void *addr, size_t size) 741 - { 742 - kasan_unpoison_shadow(addr, size); 743 - } 744 - EXPORT_SYMBOL(__asan_unpoison_stack_memory); 745 - 746 - /* Emitted by compiler to poison alloca()ed objects. */ 747 - void __asan_alloca_poison(unsigned long addr, size_t size) 748 - { 749 - size_t rounded_up_size = round_up(size, KASAN_SHADOW_SCALE_SIZE); 750 - size_t padding_size = round_up(size, KASAN_ALLOCA_REDZONE_SIZE) - 751 - rounded_up_size; 752 - size_t rounded_down_size = round_down(size, KASAN_SHADOW_SCALE_SIZE); 753 - 754 - const void *left_redzone = (const void *)(addr - 755 - KASAN_ALLOCA_REDZONE_SIZE); 756 - const void *right_redzone = (const void *)(addr + rounded_up_size); 757 - 758 - WARN_ON(!IS_ALIGNED(addr, KASAN_ALLOCA_REDZONE_SIZE)); 759 - 760 - kasan_unpoison_shadow((const void *)(addr + rounded_down_size), 761 - size - rounded_down_size); 762 - kasan_poison_shadow(left_redzone, KASAN_ALLOCA_REDZONE_SIZE, 763 - KASAN_ALLOCA_LEFT); 764 - kasan_poison_shadow(right_redzone, 765 - padding_size + KASAN_ALLOCA_REDZONE_SIZE, 766 - KASAN_ALLOCA_RIGHT); 767 - } 768 - EXPORT_SYMBOL(__asan_alloca_poison); 769 - 770 - /* Emitted by compiler to unpoison alloca()ed areas when the stack unwinds. */ 771 - void __asan_allocas_unpoison(const void *stack_top, const void *stack_bottom) 772 - { 773 - if (unlikely(!stack_top || stack_top > stack_bottom)) 774 - return; 775 - 776 - kasan_unpoison_shadow(stack_top, stack_bottom - stack_top); 777 - } 778 - EXPORT_SYMBOL(__asan_allocas_unpoison); 779 - 780 - /* Emitted by the compiler to [un]poison local variables. 
*/ 781 - #define DEFINE_ASAN_SET_SHADOW(byte) \ 782 - void __asan_set_shadow_##byte(const void *addr, size_t size) \ 783 - { \ 784 - __memset((void *)addr, 0x##byte, size); \ 785 - } \ 786 - EXPORT_SYMBOL(__asan_set_shadow_##byte) 787 - 788 - DEFINE_ASAN_SET_SHADOW(00); 789 - DEFINE_ASAN_SET_SHADOW(f1); 790 - DEFINE_ASAN_SET_SHADOW(f2); 791 - DEFINE_ASAN_SET_SHADOW(f3); 792 - DEFINE_ASAN_SET_SHADOW(f5); 793 - DEFINE_ASAN_SET_SHADOW(f8); 794 - 795 - #ifdef CONFIG_MEMORY_HOTPLUG 796 - static bool shadow_mapped(unsigned long addr) 797 - { 798 - pgd_t *pgd = pgd_offset_k(addr); 799 - p4d_t *p4d; 800 - pud_t *pud; 801 - pmd_t *pmd; 802 - pte_t *pte; 803 - 804 - if (pgd_none(*pgd)) 805 - return false; 806 - p4d = p4d_offset(pgd, addr); 807 - if (p4d_none(*p4d)) 808 - return false; 809 - pud = pud_offset(p4d, addr); 810 - if (pud_none(*pud)) 811 - return false; 812 - 813 - /* 814 - * We can't use pud_large() or pud_huge(), the first one is 815 - * arch-specific, the last one depends on HUGETLB_PAGE. So let's abuse 816 - * pud_bad(), if pud is bad then it's bad because it's huge. 
817 - */ 818 - if (pud_bad(*pud)) 819 - return true; 820 - pmd = pmd_offset(pud, addr); 821 - if (pmd_none(*pmd)) 822 - return false; 823 - 824 - if (pmd_bad(*pmd)) 825 - return true; 826 - pte = pte_offset_kernel(pmd, addr); 827 - return !pte_none(*pte); 828 - } 829 - 830 - static int __meminit kasan_mem_notifier(struct notifier_block *nb, 831 - unsigned long action, void *data) 832 - { 833 - struct memory_notify *mem_data = data; 834 - unsigned long nr_shadow_pages, start_kaddr, shadow_start; 835 - unsigned long shadow_end, shadow_size; 836 - 837 - nr_shadow_pages = mem_data->nr_pages >> KASAN_SHADOW_SCALE_SHIFT; 838 - start_kaddr = (unsigned long)pfn_to_kaddr(mem_data->start_pfn); 839 - shadow_start = (unsigned long)kasan_mem_to_shadow((void *)start_kaddr); 840 - shadow_size = nr_shadow_pages << PAGE_SHIFT; 841 - shadow_end = shadow_start + shadow_size; 842 - 843 - if (WARN_ON(mem_data->nr_pages % KASAN_SHADOW_SCALE_SIZE) || 844 - WARN_ON(start_kaddr % (KASAN_SHADOW_SCALE_SIZE << PAGE_SHIFT))) 845 - return NOTIFY_BAD; 846 - 847 - switch (action) { 848 - case MEM_GOING_ONLINE: { 849 - void *ret; 850 - 851 - /* 852 - * If shadow is mapped already than it must have been mapped 853 - * during the boot. This could happen if we onlining previously 854 - * offlined memory. 855 - */ 856 - if (shadow_mapped(shadow_start)) 857 - return NOTIFY_OK; 858 - 859 - ret = __vmalloc_node_range(shadow_size, PAGE_SIZE, shadow_start, 860 - shadow_end, GFP_KERNEL, 861 - PAGE_KERNEL, VM_NO_GUARD, 862 - pfn_to_nid(mem_data->start_pfn), 863 - __builtin_return_address(0)); 864 - if (!ret) 865 - return NOTIFY_BAD; 866 - 867 - kmemleak_ignore(ret); 868 - return NOTIFY_OK; 869 - } 870 - case MEM_CANCEL_ONLINE: 871 - case MEM_OFFLINE: { 872 - struct vm_struct *vm; 873 - 874 - /* 875 - * shadow_start was either mapped during boot by kasan_init() 876 - * or during memory online by __vmalloc_node_range(). 877 - * In the latter case we can use vfree() to free shadow. 
878 - * Non-NULL result of the find_vm_area() will tell us if 879 - * that was the second case. 880 - * 881 - * Currently it's not possible to free shadow mapped 882 - * during boot by kasan_init(). It's because the code 883 - * to do that hasn't been written yet. So we'll just 884 - * leak the memory. 885 - */ 886 - vm = find_vm_area((void *)shadow_start); 887 - if (vm) 888 - vfree((void *)shadow_start); 889 - } 890 - } 891 - 892 - return NOTIFY_OK; 893 - } 894 - 895 - static int __init kasan_memhotplug_init(void) 896 - { 897 - hotplug_memory_notifier(kasan_mem_notifier, 0); 898 - 899 - return 0; 900 - } 901 - 902 - core_initcall(kasan_memhotplug_init); 903 - #endif
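The kasan_kmalloc() hunk above rounds the requested size up to the shadow granule and poisons the tail of the slab object as a redzone. A minimal userspace sketch of that arithmetic (KASAN_SHADOW_SCALE_SIZE is 8, i.e. one shadow byte covers 8 bytes; the addresses and sizes used below are purely illustrative):

```c
#include <assert.h>
#include <stddef.h>

#define KASAN_SHADOW_SCALE_SIZE 8UL	/* one shadow byte covers 8 bytes */

static unsigned long round_up_ul(unsigned long x, unsigned long align)
{
	return (x + align - 1) & ~(align - 1);
}

/*
 * Number of redzone bytes that kasan_kmalloc() would poison for a
 * `size`-byte request carved out of an `object_size`-byte slab object
 * starting at `object`. Mirrors the redzone_start/redzone_end math in
 * the hunk above.
 */
static unsigned long redzone_len(unsigned long object, size_t size,
				 size_t object_size)
{
	unsigned long redzone_start = round_up_ul(object + size,
						  KASAN_SHADOW_SCALE_SIZE);
	unsigned long redzone_end = round_up_ul(object + object_size,
						KASAN_SHADOW_SCALE_SIZE);

	return redzone_end - redzone_start;
}
```

Note that a request filling the whole object leaves no redzone at all, which is why KASAN also relies on the slab allocator reserving extra space per object.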
+58 -1
mm/kasan/kasan.h
··· 8 8 #define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT) 9 9 #define KASAN_SHADOW_MASK (KASAN_SHADOW_SCALE_SIZE - 1) 10 10 11 + #define KASAN_TAG_KERNEL 0xFF /* native kernel pointers tag */ 12 + #define KASAN_TAG_INVALID 0xFE /* inaccessible memory tag */ 13 + #define KASAN_TAG_MAX 0xFD /* maximum value for random tags */ 14 + 15 + #ifdef CONFIG_KASAN_GENERIC 11 16 #define KASAN_FREE_PAGE 0xFF /* page was freed */ 12 17 #define KASAN_PAGE_REDZONE 0xFE /* redzone for kmalloc_large allocations */ 13 18 #define KASAN_KMALLOC_REDZONE 0xFC /* redzone inside slub object */ 14 19 #define KASAN_KMALLOC_FREE 0xFB /* object was freed (kmem_cache_free/kfree) */ 20 + #else 21 + #define KASAN_FREE_PAGE KASAN_TAG_INVALID 22 + #define KASAN_PAGE_REDZONE KASAN_TAG_INVALID 23 + #define KASAN_KMALLOC_REDZONE KASAN_TAG_INVALID 24 + #define KASAN_KMALLOC_FREE KASAN_TAG_INVALID 25 + #endif 26 + 15 27 #define KASAN_GLOBAL_REDZONE 0xFA /* redzone for global variable */ 16 28 17 29 /* ··· 117 105 << KASAN_SHADOW_SCALE_SHIFT); 118 106 } 119 107 108 + static inline bool addr_has_shadow(const void *addr) 109 + { 110 + return (addr >= kasan_shadow_to_mem((void *)KASAN_SHADOW_START)); 111 + } 112 + 113 + void kasan_poison_shadow(const void *address, size_t size, u8 value); 114 + 115 + void check_memory_region(unsigned long addr, size_t size, bool write, 116 + unsigned long ret_ip); 117 + 118 + void *find_first_bad_addr(void *addr, size_t size); 119 + const char *get_bug_type(struct kasan_access_info *info); 120 + 120 121 void kasan_report(unsigned long addr, size_t size, 121 122 bool is_write, unsigned long ip); 122 123 void kasan_report_invalid_free(void *object, unsigned long ip); 123 124 124 - #if defined(CONFIG_SLAB) || defined(CONFIG_SLUB) 125 + #if defined(CONFIG_KASAN_GENERIC) && \ 126 + (defined(CONFIG_SLAB) || defined(CONFIG_SLUB)) 125 127 void quarantine_put(struct kasan_free_meta *info, struct kmem_cache *cache); 126 128 void quarantine_reduce(void); 127 129 void 
quarantine_remove_cache(struct kmem_cache *cache); ··· 145 119 static inline void quarantine_reduce(void) { } 146 120 static inline void quarantine_remove_cache(struct kmem_cache *cache) { } 147 121 #endif 122 + 123 + #ifdef CONFIG_KASAN_SW_TAGS 124 + 125 + void print_tags(u8 addr_tag, const void *addr); 126 + 127 + u8 random_tag(void); 128 + 129 + #else 130 + 131 + static inline void print_tags(u8 addr_tag, const void *addr) { } 132 + 133 + static inline u8 random_tag(void) 134 + { 135 + return 0; 136 + } 137 + 138 + #endif 139 + 140 + #ifndef arch_kasan_set_tag 141 + #define arch_kasan_set_tag(addr, tag) ((void *)(addr)) 142 + #endif 143 + #ifndef arch_kasan_reset_tag 144 + #define arch_kasan_reset_tag(addr) ((void *)(addr)) 145 + #endif 146 + #ifndef arch_kasan_get_tag 147 + #define arch_kasan_get_tag(addr) 0 148 + #endif 149 + 150 + #define set_tag(addr, tag) ((void *)arch_kasan_set_tag((addr), (tag))) 151 + #define reset_tag(addr) ((void *)arch_kasan_reset_tag(addr)) 152 + #define get_tag(addr) arch_kasan_get_tag(addr) 148 153 149 154 /* 150 155 * Exported functions for interfaces called from assembly or from generated
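The kasan.h hunk above splits the shadow encodings: generic mode keeps the classic 0xFF/0xFE/0xFC/0xFB markers, while the software tag-based mode collapses them onto KASAN_TAG_INVALID. Both modes share the linear memory-to-shadow mapping that kasan_shadow_to_mem() inverts; a sketch of that translation (KASAN_SHADOW_OFFSET is arch-specific, so the constant used here is hypothetical):

```c
#include <assert.h>

#define KASAN_SHADOW_SCALE_SHIFT 3
#define KASAN_SHADOW_OFFSET 0x100000000UL	/* hypothetical, arch-specific */

/* One shadow byte describes 2^KASAN_SHADOW_SCALE_SHIFT = 8 bytes. */
static unsigned long mem_to_shadow(unsigned long addr)
{
	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
}

/* Inverse mapping, as in kasan_shadow_to_mem() above. */
static unsigned long shadow_to_mem(unsigned long shadow)
{
	return (shadow - KASAN_SHADOW_OFFSET) << KASAN_SHADOW_SCALE_SHIFT;
}
```

The addr_has_shadow() helper introduced in this hunk is just a range check: an address has shadow iff it is at or above shadow_to_mem(KASAN_SHADOW_START).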
+41 -30
mm/kasan/kasan_init.c mm/kasan/init.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * This file contains some kasan initialization code. 3 4 * ··· 31 30 * - Latter it reused it as zero shadow to cover large ranges of memory 32 31 * that allowed to access, but not handled by kasan (vmalloc/vmemmap ...). 33 32 */ 34 - unsigned char kasan_zero_page[PAGE_SIZE] __page_aligned_bss; 33 + unsigned char kasan_early_shadow_page[PAGE_SIZE] __page_aligned_bss; 35 34 36 35 #if CONFIG_PGTABLE_LEVELS > 4 37 - p4d_t kasan_zero_p4d[MAX_PTRS_PER_P4D] __page_aligned_bss; 36 + p4d_t kasan_early_shadow_p4d[MAX_PTRS_PER_P4D] __page_aligned_bss; 38 37 static inline bool kasan_p4d_table(pgd_t pgd) 39 38 { 40 - return pgd_page(pgd) == virt_to_page(lm_alias(kasan_zero_p4d)); 39 + return pgd_page(pgd) == virt_to_page(lm_alias(kasan_early_shadow_p4d)); 41 40 } 42 41 #else 43 42 static inline bool kasan_p4d_table(pgd_t pgd) ··· 46 45 } 47 46 #endif 48 47 #if CONFIG_PGTABLE_LEVELS > 3 49 - pud_t kasan_zero_pud[PTRS_PER_PUD] __page_aligned_bss; 48 + pud_t kasan_early_shadow_pud[PTRS_PER_PUD] __page_aligned_bss; 50 49 static inline bool kasan_pud_table(p4d_t p4d) 51 50 { 52 - return p4d_page(p4d) == virt_to_page(lm_alias(kasan_zero_pud)); 51 + return p4d_page(p4d) == virt_to_page(lm_alias(kasan_early_shadow_pud)); 53 52 } 54 53 #else 55 54 static inline bool kasan_pud_table(p4d_t p4d) ··· 58 57 } 59 58 #endif 60 59 #if CONFIG_PGTABLE_LEVELS > 2 61 - pmd_t kasan_zero_pmd[PTRS_PER_PMD] __page_aligned_bss; 60 + pmd_t kasan_early_shadow_pmd[PTRS_PER_PMD] __page_aligned_bss; 62 61 static inline bool kasan_pmd_table(pud_t pud) 63 62 { 64 - return pud_page(pud) == virt_to_page(lm_alias(kasan_zero_pmd)); 63 + return pud_page(pud) == virt_to_page(lm_alias(kasan_early_shadow_pmd)); 65 64 } 66 65 #else 67 66 static inline bool kasan_pmd_table(pud_t pud) ··· 69 68 return 0; 70 69 } 71 70 #endif 72 - pte_t kasan_zero_pte[PTRS_PER_PTE] __page_aligned_bss; 71 + pte_t kasan_early_shadow_pte[PTRS_PER_PTE] __page_aligned_bss; 73 72 74 73 
static inline bool kasan_pte_table(pmd_t pmd) 75 74 { 76 - return pmd_page(pmd) == virt_to_page(lm_alias(kasan_zero_pte)); 75 + return pmd_page(pmd) == virt_to_page(lm_alias(kasan_early_shadow_pte)); 77 76 } 78 77 79 - static inline bool kasan_zero_page_entry(pte_t pte) 78 + static inline bool kasan_early_shadow_page_entry(pte_t pte) 80 79 { 81 - return pte_page(pte) == virt_to_page(lm_alias(kasan_zero_page)); 80 + return pte_page(pte) == virt_to_page(lm_alias(kasan_early_shadow_page)); 82 81 } 83 82 84 83 static __init void *early_alloc(size_t size, int node) ··· 93 92 pte_t *pte = pte_offset_kernel(pmd, addr); 94 93 pte_t zero_pte; 95 94 96 - zero_pte = pfn_pte(PFN_DOWN(__pa_symbol(kasan_zero_page)), PAGE_KERNEL); 95 + zero_pte = pfn_pte(PFN_DOWN(__pa_symbol(kasan_early_shadow_page)), 96 + PAGE_KERNEL); 97 97 zero_pte = pte_wrprotect(zero_pte); 98 98 99 99 while (addr + PAGE_SIZE <= end) { ··· 114 112 next = pmd_addr_end(addr, end); 115 113 116 114 if (IS_ALIGNED(addr, PMD_SIZE) && end - addr >= PMD_SIZE) { 117 - pmd_populate_kernel(&init_mm, pmd, lm_alias(kasan_zero_pte)); 115 + pmd_populate_kernel(&init_mm, pmd, 116 + lm_alias(kasan_early_shadow_pte)); 118 117 continue; 119 118 } 120 119 ··· 148 145 if (IS_ALIGNED(addr, PUD_SIZE) && end - addr >= PUD_SIZE) { 149 146 pmd_t *pmd; 150 147 151 - pud_populate(&init_mm, pud, lm_alias(kasan_zero_pmd)); 148 + pud_populate(&init_mm, pud, 149 + lm_alias(kasan_early_shadow_pmd)); 152 150 pmd = pmd_offset(pud, addr); 153 - pmd_populate_kernel(&init_mm, pmd, lm_alias(kasan_zero_pte)); 151 + pmd_populate_kernel(&init_mm, pmd, 152 + lm_alias(kasan_early_shadow_pte)); 154 153 continue; 155 154 } 156 155 ··· 186 181 pud_t *pud; 187 182 pmd_t *pmd; 188 183 189 - p4d_populate(&init_mm, p4d, lm_alias(kasan_zero_pud)); 184 + p4d_populate(&init_mm, p4d, 185 + lm_alias(kasan_early_shadow_pud)); 190 186 pud = pud_offset(p4d, addr); 191 - pud_populate(&init_mm, pud, lm_alias(kasan_zero_pmd)); 187 + pud_populate(&init_mm, pud, 188 + 
lm_alias(kasan_early_shadow_pmd)); 192 189 pmd = pmd_offset(pud, addr); 193 190 pmd_populate_kernel(&init_mm, pmd, 194 - lm_alias(kasan_zero_pte)); 191 + lm_alias(kasan_early_shadow_pte)); 195 192 continue; 196 193 } 197 194 ··· 216 209 } 217 210 218 211 /** 219 - * kasan_populate_zero_shadow - populate shadow memory region with 220 - * kasan_zero_page 212 + * kasan_populate_early_shadow - populate shadow memory region with 213 + * kasan_early_shadow_page 221 214 * @shadow_start - start of the memory range to populate 222 215 * @shadow_end - end of the memory range to populate 223 216 */ 224 - int __ref kasan_populate_zero_shadow(const void *shadow_start, 225 - const void *shadow_end) 217 + int __ref kasan_populate_early_shadow(const void *shadow_start, 218 + const void *shadow_end) 226 219 { 227 220 unsigned long addr = (unsigned long)shadow_start; 228 221 unsigned long end = (unsigned long)shadow_end; ··· 238 231 pmd_t *pmd; 239 232 240 233 /* 241 - * kasan_zero_pud should be populated with pmds 234 + * kasan_early_shadow_pud should be populated with pmds 242 235 * at this moment. 243 236 * [pud,pmd]_populate*() below needed only for 244 237 * 3,2 - level page tables where we don't have ··· 248 241 * The ifndef is required to avoid build breakage. 249 242 * 250 243 * With 5level-fixup.h, pgd_populate() is not nop and 251 - * we reference kasan_zero_p4d. It's not defined 244 + * we reference kasan_early_shadow_p4d. It's not defined 252 245 * unless 5-level paging enabled. 253 246 * 254 247 * The ifndef can be dropped once all KASAN-enabled 255 248 * architectures will switch to pgtable-nop4d.h. 
256 249 */ 257 250 #ifndef __ARCH_HAS_5LEVEL_HACK 258 - pgd_populate(&init_mm, pgd, lm_alias(kasan_zero_p4d)); 251 + pgd_populate(&init_mm, pgd, 252 + lm_alias(kasan_early_shadow_p4d)); 259 253 #endif 260 254 p4d = p4d_offset(pgd, addr); 261 - p4d_populate(&init_mm, p4d, lm_alias(kasan_zero_pud)); 255 + p4d_populate(&init_mm, p4d, 256 + lm_alias(kasan_early_shadow_pud)); 262 257 pud = pud_offset(p4d, addr); 263 - pud_populate(&init_mm, pud, lm_alias(kasan_zero_pmd)); 258 + pud_populate(&init_mm, pud, 259 + lm_alias(kasan_early_shadow_pmd)); 264 260 pmd = pmd_offset(pud, addr); 265 - pmd_populate_kernel(&init_mm, pmd, lm_alias(kasan_zero_pte)); 261 + pmd_populate_kernel(&init_mm, pmd, 262 + lm_alias(kasan_early_shadow_pte)); 266 263 continue; 267 264 } 268 265 ··· 361 350 if (!pte_present(*pte)) 362 351 continue; 363 352 364 - if (WARN_ON(!kasan_zero_page_entry(*pte))) 353 + if (WARN_ON(!kasan_early_shadow_page_entry(*pte))) 365 354 continue; 366 355 pte_clear(&init_mm, addr, pte); 367 356 } ··· 491 480 WARN_ON(size % (KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE))) 492 481 return -EINVAL; 493 482 494 - ret = kasan_populate_zero_shadow(shadow_start, shadow_end); 483 + ret = kasan_populate_early_shadow(shadow_start, shadow_end); 495 484 if (ret) 496 485 kasan_remove_zero_shadow(shadow_start, 497 486 size >> KASAN_SHADOW_SCALE_SHIFT);
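The point of the renamed kasan_populate_early_shadow() above is that a single read-only kasan_early_shadow_page, shared through write-protected entries at successively higher page-table levels, can back arbitrarily large shadow ranges. A rough coverage calculation (the PAGE_SIZE and PTRS_PER_PTE values are the common x86-64 ones and are used here only for illustration):

```c
#include <assert.h>

#define PAGE_SIZE_BYTES 4096UL		/* illustrative */
#define PTRS_PER_PTE 512UL		/* illustrative; same fan-out assumed
					 * at every page-table level */
#define KASAN_SHADOW_SCALE_SIZE 8UL

/*
 * Bytes of real memory whose shadow is backed by sharing the early
 * shadow page at page-table `level`: 0 = one PTE mapping the page,
 * 1 = a whole shared PTE table referenced from one PMD entry, etc.
 */
static unsigned long early_shadow_coverage(int level)
{
	unsigned long bytes = PAGE_SIZE_BYTES * KASAN_SHADOW_SCALE_SIZE;

	while (level-- > 0)
		bytes *= PTRS_PER_PTE;
	return bytes;
}
```

So one shared PTE table already covers tens of megabytes, and sharing at the PUD/P4D levels scales that by another factor of 512 each, which is what makes booting with shadow for the whole address space cheap.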
+2 -1
mm/kasan/quarantine.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * KASAN quarantine. 3 4 * ··· 237 236 * Update quarantine size in case of hotplug. Allocate a fraction of 238 237 * the installed memory to quarantine minus per-cpu queue limits. 239 238 */ 240 - total_size = (READ_ONCE(totalram_pages) << PAGE_SHIFT) / 239 + total_size = (totalram_pages() << PAGE_SHIFT) / 241 240 QUARANTINE_FRACTION; 242 241 percpu_quarantines = QUARANTINE_PERCPU_SIZE * num_online_cpus(); 243 242 new_quarantine_size = (total_size < percpu_quarantines) ?
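The quarantine.c hunk switches the sizing code to the new totalram_pages() accessor. The policy itself, a fixed fraction of installed RAM minus the per-CPU queue budget, can be sketched as below; QUARANTINE_FRACTION and QUARANTINE_PERCPU_SIZE are defined elsewhere in quarantine.c, and the values used here are assumptions:

```c
#include <assert.h>

#define QUARANTINE_FRACTION 32			/* assumed */
#define QUARANTINE_PERCPU_SIZE (1UL << 20)	/* assumed: 1 MiB per CPU */

/*
 * Global quarantine budget for `nr_pages` pages of installed RAM and
 * `nr_cpus` online CPUs, mirroring the resize logic in the hunk above:
 * total RAM over QUARANTINE_FRACTION, less the per-CPU queues.
 */
static unsigned long quarantine_budget(unsigned long nr_pages,
				       unsigned long page_size,
				       unsigned long nr_cpus)
{
	unsigned long total = nr_pages * page_size / QUARANTINE_FRACTION;
	unsigned long percpu = QUARANTINE_PERCPU_SIZE * nr_cpus;

	return total < percpu ? 0 : total - percpu;
}
```

On a small machine the per-CPU budget can exceed the fraction of RAM, in which case the global quarantine collapses to zero, matching the `total_size < percpu_quarantines` branch above.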
+81 -207
mm/kasan/report.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 - * This file contains error reporting code. 3 + * This file contains common generic and tag-based KASAN error reporting code. 3 4 * 4 5 * Copyright (c) 2014 Samsung Electronics Co., Ltd. 5 6 * Author: Andrey Ryabinin <ryabinin.a.a@gmail.com> ··· 40 39 #define SHADOW_BYTES_PER_ROW (SHADOW_BLOCKS_PER_ROW * SHADOW_BYTES_PER_BLOCK) 41 40 #define SHADOW_ROWS_AROUND_ADDR 2 42 41 43 - static const void *find_first_bad_addr(const void *addr, size_t size) 42 + static unsigned long kasan_flags; 43 + 44 + #define KASAN_BIT_REPORTED 0 45 + #define KASAN_BIT_MULTI_SHOT 1 46 + 47 + bool kasan_save_enable_multi_shot(void) 44 48 { 45 - u8 shadow_val = *(u8 *)kasan_mem_to_shadow(addr); 46 - const void *first_bad_addr = addr; 47 - 48 - while (!shadow_val && first_bad_addr < addr + size) { 49 - first_bad_addr += KASAN_SHADOW_SCALE_SIZE; 50 - shadow_val = *(u8 *)kasan_mem_to_shadow(first_bad_addr); 51 - } 52 - return first_bad_addr; 49 + return test_and_set_bit(KASAN_BIT_MULTI_SHOT, &kasan_flags); 53 50 } 51 + EXPORT_SYMBOL_GPL(kasan_save_enable_multi_shot); 54 52 55 - static bool addr_has_shadow(struct kasan_access_info *info) 53 + void kasan_restore_multi_shot(bool enabled) 56 54 { 57 - return (info->access_addr >= 58 - kasan_shadow_to_mem((void *)KASAN_SHADOW_START)); 55 + if (!enabled) 56 + clear_bit(KASAN_BIT_MULTI_SHOT, &kasan_flags); 59 57 } 58 + EXPORT_SYMBOL_GPL(kasan_restore_multi_shot); 60 59 61 - static const char *get_shadow_bug_type(struct kasan_access_info *info) 60 + static int __init kasan_set_multi_shot(char *str) 62 61 { 63 - const char *bug_type = "unknown-crash"; 64 - u8 *shadow_addr; 65 - 66 - info->first_bad_addr = find_first_bad_addr(info->access_addr, 67 - info->access_size); 68 - 69 - shadow_addr = (u8 *)kasan_mem_to_shadow(info->first_bad_addr); 70 - 71 - /* 72 - * If shadow byte value is in [0, KASAN_SHADOW_SCALE_SIZE) we can look 73 - * at the next shadow byte to determine the type of the bad access. 
74 - */ 75 - if (*shadow_addr > 0 && *shadow_addr <= KASAN_SHADOW_SCALE_SIZE - 1) 76 - shadow_addr++; 77 - 78 - switch (*shadow_addr) { 79 - case 0 ... KASAN_SHADOW_SCALE_SIZE - 1: 80 - /* 81 - * In theory it's still possible to see these shadow values 82 - * due to a data race in the kernel code. 83 - */ 84 - bug_type = "out-of-bounds"; 85 - break; 86 - case KASAN_PAGE_REDZONE: 87 - case KASAN_KMALLOC_REDZONE: 88 - bug_type = "slab-out-of-bounds"; 89 - break; 90 - case KASAN_GLOBAL_REDZONE: 91 - bug_type = "global-out-of-bounds"; 92 - break; 93 - case KASAN_STACK_LEFT: 94 - case KASAN_STACK_MID: 95 - case KASAN_STACK_RIGHT: 96 - case KASAN_STACK_PARTIAL: 97 - bug_type = "stack-out-of-bounds"; 98 - break; 99 - case KASAN_FREE_PAGE: 100 - case KASAN_KMALLOC_FREE: 101 - bug_type = "use-after-free"; 102 - break; 103 - case KASAN_USE_AFTER_SCOPE: 104 - bug_type = "use-after-scope"; 105 - break; 106 - case KASAN_ALLOCA_LEFT: 107 - case KASAN_ALLOCA_RIGHT: 108 - bug_type = "alloca-out-of-bounds"; 109 - break; 110 - } 111 - 112 - return bug_type; 62 + set_bit(KASAN_BIT_MULTI_SHOT, &kasan_flags); 63 + return 1; 113 64 } 114 - 115 - static const char *get_wild_bug_type(struct kasan_access_info *info) 116 - { 117 - const char *bug_type = "unknown-crash"; 118 - 119 - if ((unsigned long)info->access_addr < PAGE_SIZE) 120 - bug_type = "null-ptr-deref"; 121 - else if ((unsigned long)info->access_addr < TASK_SIZE) 122 - bug_type = "user-memory-access"; 123 - else 124 - bug_type = "wild-memory-access"; 125 - 126 - return bug_type; 127 - } 128 - 129 - static const char *get_bug_type(struct kasan_access_info *info) 130 - { 131 - if (addr_has_shadow(info)) 132 - return get_shadow_bug_type(info); 133 - return get_wild_bug_type(info); 134 - } 65 + __setup("kasan_multi_shot", kasan_set_multi_shot); 135 66 136 67 static void print_error_description(struct kasan_access_info *info) 137 68 { 138 - const char *bug_type = get_bug_type(info); 139 - 140 69 pr_err("BUG: KASAN: %s in %pS\n", 141 
- bug_type, (void *)info->ip); 70 + get_bug_type(info), (void *)info->ip); 142 71 pr_err("%s of size %zu at addr %px by task %s/%d\n", 143 72 info->is_write ? "Write" : "Read", info->access_size, 144 73 info->access_addr, current->comm, task_pid_nr(current)); 145 74 } 146 75 147 - static inline bool kernel_or_module_addr(const void *addr) 148 - { 149 - if (addr >= (void *)_stext && addr < (void *)_end) 150 - return true; 151 - if (is_module_address((unsigned long)addr)) 152 - return true; 153 - return false; 154 - } 155 - 156 - static inline bool init_task_stack_addr(const void *addr) 157 - { 158 - return addr >= (void *)&init_thread_union.stack && 159 - (addr <= (void *)&init_thread_union.stack + 160 - sizeof(init_thread_union.stack)); 161 - } 162 - 163 76 static DEFINE_SPINLOCK(report_lock); 164 77 165 - static void kasan_start_report(unsigned long *flags) 78 + static void start_report(unsigned long *flags) 166 79 { 167 80 /* 168 81 * Make sure we don't end up in loop. ··· 86 171 pr_err("==================================================================\n"); 87 172 } 88 173 89 - static void kasan_end_report(unsigned long *flags) 174 + static void end_report(unsigned long *flags) 90 175 { 91 176 pr_err("==================================================================\n"); 92 177 add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE); ··· 162 247 } 163 248 164 249 describe_object_addr(cache, object, addr); 250 + } 251 + 252 + static inline bool kernel_or_module_addr(const void *addr) 253 + { 254 + if (addr >= (void *)_stext && addr < (void *)_end) 255 + return true; 256 + if (is_module_address((unsigned long)addr)) 257 + return true; 258 + return false; 259 + } 260 + 261 + static inline bool init_task_stack_addr(const void *addr) 262 + { 263 + return addr >= (void *)&init_thread_union.stack && 264 + (addr <= (void *)&init_thread_union.stack + 265 + sizeof(init_thread_union.stack)); 165 266 } 166 267 167 268 static void print_address_description(void *addr) ··· 257 326 
} 258 327 } 259 328 260 - void kasan_report_invalid_free(void *object, unsigned long ip) 261 - { 262 - unsigned long flags; 263 - 264 - kasan_start_report(&flags); 265 - pr_err("BUG: KASAN: double-free or invalid-free in %pS\n", (void *)ip); 266 - pr_err("\n"); 267 - print_address_description(object); 268 - pr_err("\n"); 269 - print_shadow_for_address(object); 270 - kasan_end_report(&flags); 271 - } 272 - 273 - static void kasan_report_error(struct kasan_access_info *info) 274 - { 275 - unsigned long flags; 276 - 277 - kasan_start_report(&flags); 278 - 279 - print_error_description(info); 280 - pr_err("\n"); 281 - 282 - if (!addr_has_shadow(info)) { 283 - dump_stack(); 284 - } else { 285 - print_address_description((void *)info->access_addr); 286 - pr_err("\n"); 287 - print_shadow_for_address(info->first_bad_addr); 288 - } 289 - 290 - kasan_end_report(&flags); 291 - } 292 - 293 - static unsigned long kasan_flags; 294 - 295 - #define KASAN_BIT_REPORTED 0 296 - #define KASAN_BIT_MULTI_SHOT 1 297 - 298 - bool kasan_save_enable_multi_shot(void) 299 - { 300 - return test_and_set_bit(KASAN_BIT_MULTI_SHOT, &kasan_flags); 301 - } 302 - EXPORT_SYMBOL_GPL(kasan_save_enable_multi_shot); 303 - 304 - void kasan_restore_multi_shot(bool enabled) 305 - { 306 - if (!enabled) 307 - clear_bit(KASAN_BIT_MULTI_SHOT, &kasan_flags); 308 - } 309 - EXPORT_SYMBOL_GPL(kasan_restore_multi_shot); 310 - 311 - static int __init kasan_set_multi_shot(char *str) 312 - { 313 - set_bit(KASAN_BIT_MULTI_SHOT, &kasan_flags); 314 - return 1; 315 - } 316 - __setup("kasan_multi_shot", kasan_set_multi_shot); 317 - 318 - static inline bool kasan_report_enabled(void) 329 + static bool report_enabled(void) 319 330 { 320 331 if (current->kasan_depth) 321 332 return false; ··· 266 393 return !test_and_set_bit(KASAN_BIT_REPORTED, &kasan_flags); 267 394 } 268 395 396 + void kasan_report_invalid_free(void *object, unsigned long ip) 397 + { 398 + unsigned long flags; 399 + 400 + start_report(&flags); 401 + 
pr_err("BUG: KASAN: double-free or invalid-free in %pS\n", (void *)ip); 402 + print_tags(get_tag(object), reset_tag(object)); 403 + object = reset_tag(object); 404 + pr_err("\n"); 405 + print_address_description(object); 406 + pr_err("\n"); 407 + print_shadow_for_address(object); 408 + end_report(&flags); 409 + } 410 + 269 411 void kasan_report(unsigned long addr, size_t size, 270 412 bool is_write, unsigned long ip) 271 413 { 272 414 struct kasan_access_info info; 415 + void *tagged_addr; 416 + void *untagged_addr; 417 + unsigned long flags; 273 418 274 - if (likely(!kasan_report_enabled())) 419 + if (likely(!report_enabled())) 275 420 return; 276 421 277 422 disable_trace_on_warning(); 278 423 279 - info.access_addr = (void *)addr; 280 - info.first_bad_addr = (void *)addr; 424 + tagged_addr = (void *)addr; 425 + untagged_addr = reset_tag(tagged_addr); 426 + 427 + info.access_addr = tagged_addr; 428 + if (addr_has_shadow(untagged_addr)) 429 + info.first_bad_addr = find_first_bad_addr(tagged_addr, size); 430 + else 431 + info.first_bad_addr = untagged_addr; 281 432 info.access_size = size; 282 433 info.is_write = is_write; 283 434 info.ip = ip; 284 435 285 - kasan_report_error(&info); 436 + start_report(&flags); 437 + 438 + print_error_description(&info); 439 + if (addr_has_shadow(untagged_addr)) 440 + print_tags(get_tag(tagged_addr), info.first_bad_addr); 441 + pr_err("\n"); 442 + 443 + if (addr_has_shadow(untagged_addr)) { 444 + print_address_description(untagged_addr); 445 + pr_err("\n"); 446 + print_shadow_for_address(info.first_bad_addr); 447 + } else { 448 + dump_stack(); 449 + } 450 + 451 + end_report(&flags); 286 452 } 287 - 288 - 289 - #define DEFINE_ASAN_REPORT_LOAD(size) \ 290 - void __asan_report_load##size##_noabort(unsigned long addr) \ 291 - { \ 292 - kasan_report(addr, size, false, _RET_IP_); \ 293 - } \ 294 - EXPORT_SYMBOL(__asan_report_load##size##_noabort) 295 - 296 - #define DEFINE_ASAN_REPORT_STORE(size) \ 297 - void 
__asan_report_store##size##_noabort(unsigned long addr) \ 298 - { \ 299 - kasan_report(addr, size, true, _RET_IP_); \ 300 - } \ 301 - EXPORT_SYMBOL(__asan_report_store##size##_noabort) 302 - 303 - DEFINE_ASAN_REPORT_LOAD(1); 304 - DEFINE_ASAN_REPORT_LOAD(2); 305 - DEFINE_ASAN_REPORT_LOAD(4); 306 - DEFINE_ASAN_REPORT_LOAD(8); 307 - DEFINE_ASAN_REPORT_LOAD(16); 308 - DEFINE_ASAN_REPORT_STORE(1); 309 - DEFINE_ASAN_REPORT_STORE(2); 310 - DEFINE_ASAN_REPORT_STORE(4); 311 - DEFINE_ASAN_REPORT_STORE(8); 312 - DEFINE_ASAN_REPORT_STORE(16); 313 - 314 - void __asan_report_load_n_noabort(unsigned long addr, size_t size) 315 - { 316 - kasan_report(addr, size, false, _RET_IP_); 317 - } 318 - EXPORT_SYMBOL(__asan_report_load_n_noabort); 319 - 320 - void __asan_report_store_n_noabort(unsigned long addr, size_t size) 321 - { 322 - kasan_report(addr, size, true, _RET_IP_); 323 - } 324 - EXPORT_SYMBOL(__asan_report_store_n_noabort);
+161
mm/kasan/tags.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * This file contains core tag-based KASAN code. 4 + * 5 + * Copyright (c) 2018 Google, Inc. 6 + * Author: Andrey Konovalov <andreyknvl@google.com> 7 + * 8 + * This program is free software; you can redistribute it and/or modify 9 + * it under the terms of the GNU General Public License version 2 as 10 + * published by the Free Software Foundation. 11 + * 12 + */ 13 + 14 + #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 15 + #define DISABLE_BRANCH_PROFILING 16 + 17 + #include <linux/export.h> 18 + #include <linux/interrupt.h> 19 + #include <linux/init.h> 20 + #include <linux/kasan.h> 21 + #include <linux/kernel.h> 22 + #include <linux/kmemleak.h> 23 + #include <linux/linkage.h> 24 + #include <linux/memblock.h> 25 + #include <linux/memory.h> 26 + #include <linux/mm.h> 27 + #include <linux/module.h> 28 + #include <linux/printk.h> 29 + #include <linux/random.h> 30 + #include <linux/sched.h> 31 + #include <linux/sched/task_stack.h> 32 + #include <linux/slab.h> 33 + #include <linux/stacktrace.h> 34 + #include <linux/string.h> 35 + #include <linux/types.h> 36 + #include <linux/vmalloc.h> 37 + #include <linux/bug.h> 38 + 39 + #include "kasan.h" 40 + #include "../slab.h" 41 + 42 + static DEFINE_PER_CPU(u32, prng_state); 43 + 44 + void kasan_init_tags(void) 45 + { 46 + int cpu; 47 + 48 + for_each_possible_cpu(cpu) 49 + per_cpu(prng_state, cpu) = get_random_u32(); 50 + } 51 + 52 + /* 53 + * If a preemption happens between this_cpu_read and this_cpu_write, the only 54 + * side effect is that we'll give a few allocated in different contexts objects 55 + * the same tag. Since tag-based KASAN is meant to be used a probabilistic 56 + * bug-detection debug feature, this doesn't have significant negative impact. 57 + * 58 + * Ideally the tags use strong randomness to prevent any attempts to predict 59 + * them during explicit exploit attempts. 
But strong randomness is expensive, 60 + * and we did an intentional trade-off to use a PRNG. This non-atomic RMW 61 + * sequence has in fact positive effect, since interrupts that randomly skew 62 + * PRNG at unpredictable points do only good. 63 + */ 64 + u8 random_tag(void) 65 + { 66 + u32 state = this_cpu_read(prng_state); 67 + 68 + state = 1664525 * state + 1013904223; 69 + this_cpu_write(prng_state, state); 70 + 71 + return (u8)(state % (KASAN_TAG_MAX + 1)); 72 + } 73 + 74 + void *kasan_reset_tag(const void *addr) 75 + { 76 + return reset_tag(addr); 77 + } 78 + 79 + void check_memory_region(unsigned long addr, size_t size, bool write, 80 + unsigned long ret_ip) 81 + { 82 + u8 tag; 83 + u8 *shadow_first, *shadow_last, *shadow; 84 + void *untagged_addr; 85 + 86 + if (unlikely(size == 0)) 87 + return; 88 + 89 + tag = get_tag((const void *)addr); 90 + 91 + /* 92 + * Ignore accesses for pointers tagged with 0xff (native kernel 93 + * pointer tag) to suppress false positives caused by kmap. 94 + * 95 + * Some kernel code was written to account for archs that don't keep 96 + * high memory mapped all the time, but rather map and unmap particular 97 + * pages when needed. Instead of storing a pointer to the kernel memory, 98 + * this code saves the address of the page structure and offset within 99 + * that page for later use. Those pages are then mapped and unmapped 100 + * with kmap/kunmap when necessary and virt_to_page is used to get the 101 + * virtual address of the page. For arm64 (that keeps the high memory 102 + * mapped all the time), kmap is turned into a page_address call. 103 + 104 + * The issue is that with use of the page_address + virt_to_page 105 + * sequence the top byte value of the original pointer gets lost (gets 106 + * set to KASAN_TAG_KERNEL (0xFF)). 
107 + */ 108 + if (tag == KASAN_TAG_KERNEL) 109 + return; 110 + 111 + untagged_addr = reset_tag((const void *)addr); 112 + if (unlikely(untagged_addr < 113 + kasan_shadow_to_mem((void *)KASAN_SHADOW_START))) { 114 + kasan_report(addr, size, write, ret_ip); 115 + return; 116 + } 117 + shadow_first = kasan_mem_to_shadow(untagged_addr); 118 + shadow_last = kasan_mem_to_shadow(untagged_addr + size - 1); 119 + for (shadow = shadow_first; shadow <= shadow_last; shadow++) { 120 + if (*shadow != tag) { 121 + kasan_report(addr, size, write, ret_ip); 122 + return; 123 + } 124 + } 125 + } 126 + 127 + #define DEFINE_HWASAN_LOAD_STORE(size) \ 128 + void __hwasan_load##size##_noabort(unsigned long addr) \ 129 + { \ 130 + check_memory_region(addr, size, false, _RET_IP_); \ 131 + } \ 132 + EXPORT_SYMBOL(__hwasan_load##size##_noabort); \ 133 + void __hwasan_store##size##_noabort(unsigned long addr) \ 134 + { \ 135 + check_memory_region(addr, size, true, _RET_IP_); \ 136 + } \ 137 + EXPORT_SYMBOL(__hwasan_store##size##_noabort) 138 + 139 + DEFINE_HWASAN_LOAD_STORE(1); 140 + DEFINE_HWASAN_LOAD_STORE(2); 141 + DEFINE_HWASAN_LOAD_STORE(4); 142 + DEFINE_HWASAN_LOAD_STORE(8); 143 + DEFINE_HWASAN_LOAD_STORE(16); 144 + 145 + void __hwasan_loadN_noabort(unsigned long addr, unsigned long size) 146 + { 147 + check_memory_region(addr, size, false, _RET_IP_); 148 + } 149 + EXPORT_SYMBOL(__hwasan_loadN_noabort); 150 + 151 + void __hwasan_storeN_noabort(unsigned long addr, unsigned long size) 152 + { 153 + check_memory_region(addr, size, true, _RET_IP_); 154 + } 155 + EXPORT_SYMBOL(__hwasan_storeN_noabort); 156 + 157 + void __hwasan_tag_memory(unsigned long addr, u8 tag, unsigned long size) 158 + { 159 + kasan_poison_shadow((void *)addr, size, tag); 160 + } 161 + EXPORT_SYMBOL(__hwasan_tag_memory);
+58
mm/kasan/tags_report.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * This file contains tag-based KASAN specific error reporting code. 4 + * 5 + * Copyright (c) 2014 Samsung Electronics Co., Ltd. 6 + * Author: Andrey Ryabinin <ryabinin.a.a@gmail.com> 7 + * 8 + * Some code borrowed from https://github.com/xairy/kasan-prototype by 9 + * Andrey Konovalov <andreyknvl@gmail.com> 10 + * 11 + * This program is free software; you can redistribute it and/or modify 12 + * it under the terms of the GNU General Public License version 2 as 13 + * published by the Free Software Foundation. 14 + * 15 + */ 16 + 17 + #include <linux/bitops.h> 18 + #include <linux/ftrace.h> 19 + #include <linux/init.h> 20 + #include <linux/kernel.h> 21 + #include <linux/mm.h> 22 + #include <linux/printk.h> 23 + #include <linux/sched.h> 24 + #include <linux/slab.h> 25 + #include <linux/stackdepot.h> 26 + #include <linux/stacktrace.h> 27 + #include <linux/string.h> 28 + #include <linux/types.h> 29 + #include <linux/kasan.h> 30 + #include <linux/module.h> 31 + 32 + #include <asm/sections.h> 33 + 34 + #include "kasan.h" 35 + #include "../slab.h" 36 + 37 + const char *get_bug_type(struct kasan_access_info *info) 38 + { 39 + return "invalid-access"; 40 + } 41 + 42 + void *find_first_bad_addr(void *addr, size_t size) 43 + { 44 + u8 tag = get_tag(addr); 45 + void *p = reset_tag(addr); 46 + void *end = p + size; 47 + 48 + while (p < end && tag == *(u8 *)kasan_mem_to_shadow(p)) 49 + p += KASAN_SHADOW_SCALE_SIZE; 50 + return p; 51 + } 52 + 53 + void print_tags(u8 addr_tag, const void *addr) 54 + { 55 + u8 *shadow = (u8 *)kasan_mem_to_shadow(addr); 56 + 57 + pr_err("Pointer tag: [%02x], memory tag: [%02x]\n", addr_tag, *shadow); 58 + }
+4 -6
mm/khugepaged.c
··· 944 944 int isolated = 0, result = 0; 945 945 struct mem_cgroup *memcg; 946 946 struct vm_area_struct *vma; 947 - unsigned long mmun_start; /* For mmu_notifiers */ 948 - unsigned long mmun_end; /* For mmu_notifiers */ 947 + struct mmu_notifier_range range; 949 948 gfp_t gfp; 950 949 951 950 VM_BUG_ON(address & ~HPAGE_PMD_MASK); ··· 1016 1017 pte = pte_offset_map(pmd, address); 1017 1018 pte_ptl = pte_lockptr(mm, pmd); 1018 1019 1019 - mmun_start = address; 1020 - mmun_end = address + HPAGE_PMD_SIZE; 1021 - mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end); 1020 + mmu_notifier_range_init(&range, mm, address, address + HPAGE_PMD_SIZE); 1021 + mmu_notifier_invalidate_range_start(&range); 1022 1022 pmd_ptl = pmd_lock(mm, pmd); /* probably unnecessary */ 1023 1023 /* 1024 1024 * After this gup_fast can't run anymore. This also removes ··· 1027 1029 */ 1028 1030 _pmd = pmdp_collapse_flush(vma, address, pmd); 1029 1031 spin_unlock(pmd_ptl); 1030 - mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end); 1032 + mmu_notifier_invalidate_range_end(&range); 1031 1033 1032 1034 spin_lock(pte_ptl); 1033 1035 isolated = __collapse_huge_page_isolate(vma, address, pte);
+12 -7
mm/kmemleak.c
··· 1547 1547 unsigned long pfn; 1548 1548 1549 1549 for (pfn = start_pfn; pfn < end_pfn; pfn++) { 1550 - struct page *page; 1550 + struct page *page = pfn_to_online_page(pfn); 1551 1551 1552 - if (!pfn_valid(pfn)) 1552 + if (!page) 1553 1553 continue; 1554 - page = pfn_to_page(pfn); 1554 + 1555 + /* only scan pages belonging to this node */ 1556 + if (page_to_nid(page) != i) 1557 + continue; 1555 1558 /* only scan if page is in use */ 1556 1559 if (page_count(page) == 0) 1557 1560 continue; ··· 1650 1647 */ 1651 1648 static int kmemleak_scan_thread(void *arg) 1652 1649 { 1653 - static int first_run = 1; 1650 + static int first_run = IS_ENABLED(CONFIG_DEBUG_KMEMLEAK_AUTO_SCAN); 1654 1651 1655 1652 pr_info("Automatic memory scanning thread started\n"); 1656 1653 set_user_nice(current, 10); ··· 2144 2141 return -ENOMEM; 2145 2142 } 2146 2143 2147 - mutex_lock(&scan_mutex); 2148 - start_scan_thread(); 2149 - mutex_unlock(&scan_mutex); 2144 + if (IS_ENABLED(CONFIG_DEBUG_KMEMLEAK_AUTO_SCAN)) { 2145 + mutex_lock(&scan_mutex); 2146 + start_scan_thread(); 2147 + mutex_unlock(&scan_mutex); 2148 + } 2150 2149 2151 2150 pr_info("Kernel memory leak detector initialized\n"); 2152 2151
+19 -16
mm/ksm.c
··· 25 25 #include <linux/pagemap.h> 26 26 #include <linux/rmap.h> 27 27 #include <linux/spinlock.h> 28 - #include <linux/jhash.h> 28 + #include <linux/xxhash.h> 29 29 #include <linux/delay.h> 30 30 #include <linux/kthread.h> 31 31 #include <linux/wait.h> ··· 296 296 static void wait_while_offlining(void); 297 297 298 298 static DECLARE_WAIT_QUEUE_HEAD(ksm_thread_wait); 299 + static DECLARE_WAIT_QUEUE_HEAD(ksm_iter_wait); 299 300 static DEFINE_MUTEX(ksm_thread_mutex); 300 301 static DEFINE_SPINLOCK(ksm_mmlist_lock); 301 302 ··· 1010 1009 { 1011 1010 u32 checksum; 1012 1011 void *addr = kmap_atomic(page); 1013 - checksum = jhash2(addr, PAGE_SIZE / 4, 17); 1012 + checksum = xxhash(addr, PAGE_SIZE, 0); 1014 1013 kunmap_atomic(addr); 1015 1014 return checksum; 1016 1015 } ··· 1043 1042 }; 1044 1043 int swapped; 1045 1044 int err = -EFAULT; 1046 - unsigned long mmun_start; /* For mmu_notifiers */ 1047 - unsigned long mmun_end; /* For mmu_notifiers */ 1045 + struct mmu_notifier_range range; 1048 1046 1049 1047 pvmw.address = page_address_in_vma(page, vma); 1050 1048 if (pvmw.address == -EFAULT) ··· 1051 1051 1052 1052 BUG_ON(PageTransCompound(page)); 1053 1053 1054 - mmun_start = pvmw.address; 1055 - mmun_end = pvmw.address + PAGE_SIZE; 1056 - mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end); 1054 + mmu_notifier_range_init(&range, mm, pvmw.address, 1055 + pvmw.address + PAGE_SIZE); 1056 + mmu_notifier_invalidate_range_start(&range); 1057 1057 1058 1058 if (!page_vma_mapped_walk(&pvmw)) 1059 1059 goto out_mn; ··· 1105 1105 out_unlock: 1106 1106 page_vma_mapped_walk_done(&pvmw); 1107 1107 out_mn: 1108 - mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end); 1108 + mmu_notifier_invalidate_range_end(&range); 1109 1109 out: 1110 1110 return err; 1111 1111 } ··· 1129 1129 spinlock_t *ptl; 1130 1130 unsigned long addr; 1131 1131 int err = -EFAULT; 1132 - unsigned long mmun_start; /* For mmu_notifiers */ 1133 - unsigned long mmun_end; /* For mmu_notifiers */ 
1132 + struct mmu_notifier_range range; 1134 1133 1135 1134 addr = page_address_in_vma(page, vma); 1136 1135 if (addr == -EFAULT) ··· 1139 1140 if (!pmd) 1140 1141 goto out; 1141 1142 1142 - mmun_start = addr; 1143 - mmun_end = addr + PAGE_SIZE; 1144 - mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end); 1143 + mmu_notifier_range_init(&range, mm, addr, addr + PAGE_SIZE); 1144 + mmu_notifier_invalidate_range_start(&range); 1145 1145 1146 1146 ptep = pte_offset_map_lock(mm, pmd, addr, &ptl); 1147 1147 if (!pte_same(*ptep, orig_pte)) { ··· 1186 1188 pte_unmap_unlock(ptep, ptl); 1187 1189 err = 0; 1188 1190 out_mn: 1189 - mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end); 1191 + mmu_notifier_invalidate_range_end(&range); 1190 1192 out: 1191 1193 return err; 1192 1194 } ··· 2389 2391 2390 2392 static int ksm_scan_thread(void *nothing) 2391 2393 { 2394 + unsigned int sleep_ms; 2395 + 2392 2396 set_freezable(); 2393 2397 set_user_nice(current, 5); 2394 2398 ··· 2404 2404 try_to_freeze(); 2405 2405 2406 2406 if (ksmd_should_run()) { 2407 - schedule_timeout_interruptible( 2408 - msecs_to_jiffies(ksm_thread_sleep_millisecs)); 2407 + sleep_ms = READ_ONCE(ksm_thread_sleep_millisecs); 2408 + wait_event_interruptible_timeout(ksm_iter_wait, 2409 + sleep_ms != READ_ONCE(ksm_thread_sleep_millisecs), 2410 + msecs_to_jiffies(sleep_ms)); 2409 2411 } else { 2410 2412 wait_event_freezable(ksm_thread_wait, 2411 2413 ksmd_should_run() || kthread_should_stop()); ··· 2826 2824 return -EINVAL; 2827 2825 2828 2826 ksm_thread_sleep_millisecs = msecs; 2827 + wake_up_interruptible(&ksm_iter_wait); 2829 2828 2830 2829 return count; 2831 2830 }
+11 -10
mm/madvise.c
··· 458 458 static int madvise_free_single_vma(struct vm_area_struct *vma, 459 459 unsigned long start_addr, unsigned long end_addr) 460 460 { 461 - unsigned long start, end; 462 461 struct mm_struct *mm = vma->vm_mm; 462 + struct mmu_notifier_range range; 463 463 struct mmu_gather tlb; 464 464 465 465 /* MADV_FREE works for only anon vma at the moment */ 466 466 if (!vma_is_anonymous(vma)) 467 467 return -EINVAL; 468 468 469 - start = max(vma->vm_start, start_addr); 470 - if (start >= vma->vm_end) 469 + range.start = max(vma->vm_start, start_addr); 470 + if (range.start >= vma->vm_end) 471 471 return -EINVAL; 472 - end = min(vma->vm_end, end_addr); 473 - if (end <= vma->vm_start) 472 + range.end = min(vma->vm_end, end_addr); 473 + if (range.end <= vma->vm_start) 474 474 return -EINVAL; 475 + mmu_notifier_range_init(&range, mm, range.start, range.end); 475 476 476 477 lru_add_drain(); 477 - tlb_gather_mmu(&tlb, mm, start, end); 478 + tlb_gather_mmu(&tlb, mm, range.start, range.end); 478 479 update_hiwater_rss(mm); 479 480 480 - mmu_notifier_invalidate_range_start(mm, start, end); 481 - madvise_free_page_range(&tlb, vma, start, end); 482 - mmu_notifier_invalidate_range_end(mm, start, end); 483 - tlb_finish_mmu(&tlb, start, end); 481 + mmu_notifier_invalidate_range_start(&range); 482 + madvise_free_page_range(&tlb, vma, range.start, range.end); 483 + mmu_notifier_invalidate_range_end(&range); 484 + tlb_finish_mmu(&tlb, range.start, range.end); 484 485 485 486 return 0; 486 487 }
+22 -30
mm/memblock.c
··· 262 262 phys_addr_t kernel_end, ret; 263 263 264 264 /* pump up @end */ 265 - if (end == MEMBLOCK_ALLOC_ACCESSIBLE) 265 + if (end == MEMBLOCK_ALLOC_ACCESSIBLE || 266 + end == MEMBLOCK_ALLOC_KASAN) 266 267 end = memblock.current_limit; 267 268 268 269 /* avoid allocating the first page */ ··· 801 800 return memblock_remove_range(&memblock.memory, base, size); 802 801 } 803 802 804 - 803 + /** 804 + * memblock_free - free boot memory block 805 + * @base: phys starting address of the boot memory block 806 + * @size: size of the boot memory block in bytes 807 + * 808 + * Free boot memory block previously allocated by memblock_alloc_xx() API. 809 + * The freeing memory will not be released to the buddy allocator. 810 + */ 805 811 int __init_memblock memblock_free(phys_addr_t base, phys_addr_t size) 806 812 { 807 813 phys_addr_t end = base + size - 1; ··· 1420 1412 done: 1421 1413 ptr = phys_to_virt(alloc); 1422 1414 1423 - /* 1424 - * The min_count is set to 0 so that bootmem allocated blocks 1425 - * are never reported as leaks. This is because many of these blocks 1426 - * are only referred via the physical address which is not 1427 - * looked up by kmemleak. 1428 - */ 1429 - kmemleak_alloc(ptr, size, 0, 0); 1415 + /* Skip kmemleak for kasan_init() due to high volume. */ 1416 + if (max_addr != MEMBLOCK_ALLOC_KASAN) 1417 + /* 1418 + * The min_count is set to 0 so that bootmem allocated 1419 + * blocks are never reported as leaks. This is because many 1420 + * of these blocks are only referred via the physical 1421 + * address which is not looked up by kmemleak. 1422 + */ 1423 + kmemleak_alloc(ptr, size, 0, 0); 1430 1424 1431 1425 return ptr; 1432 1426 } ··· 1547 1537 } 1548 1538 1549 1539 /** 1550 - * __memblock_free_early - free boot memory block 1551 - * @base: phys starting address of the boot memory block 1552 - * @size: size of the boot memory block in bytes 1553 - * 1554 - * Free boot memory block previously allocated by memblock_alloc_xx() API. 
1555 - * The freeing memory will not be released to the buddy allocator. 1556 - */ 1557 - void __init __memblock_free_early(phys_addr_t base, phys_addr_t size) 1558 - { 1559 - phys_addr_t end = base + size - 1; 1560 - 1561 - memblock_dbg("%s: [%pa-%pa] %pF\n", 1562 - __func__, &base, &end, (void *)_RET_IP_); 1563 - kmemleak_free_part_phys(base, size); 1564 - memblock_remove_range(&memblock.reserved, base, size); 1565 - } 1566 - 1567 - /** 1568 1540 * __memblock_free_late - free bootmem block pages directly to buddy allocator 1569 1541 * @base: phys starting address of the boot memory block 1570 1542 * @size: size of the boot memory block in bytes ··· 1568 1576 1569 1577 for (; cursor < end; cursor++) { 1570 1578 memblock_free_pages(pfn_to_page(cursor), cursor, 0); 1571 - totalram_pages++; 1579 + totalram_pages_inc(); 1572 1580 } 1573 1581 } 1574 1582 ··· 1942 1950 struct zone *z; 1943 1951 1944 1952 for (z = pgdat->node_zones; z < pgdat->node_zones + MAX_NR_ZONES; z++) 1945 - z->managed_pages = 0; 1953 + atomic_long_set(&z->managed_pages, 0); 1946 1954 } 1947 1955 1948 1956 void __init reset_all_zones_managed_pages(void) ··· 1970 1978 reset_all_zones_managed_pages(); 1971 1979 1972 1980 pages = free_low_memory_core_early(); 1973 - totalram_pages += pages; 1981 + totalram_pages_add(pages); 1974 1982 1975 1983 return pages; 1976 1984 }
+43 -20
mm/memcontrol.c
··· 1293 1293 1294 1294 #define K(x) ((x) << (PAGE_SHIFT-10)) 1295 1295 /** 1296 - * mem_cgroup_print_oom_info: Print OOM information relevant to memory controller. 1296 + * mem_cgroup_print_oom_context: Print OOM information relevant to 1297 + * memory controller. 1297 1298 * @memcg: The memory cgroup that went over limit 1298 1299 * @p: Task that is going to be killed 1299 1300 * 1300 1301 * NOTE: @memcg and @p's mem_cgroup can be different when hierarchy is 1301 1302 * enabled 1302 1303 */ 1303 - void mem_cgroup_print_oom_info(struct mem_cgroup *memcg, struct task_struct *p) 1304 + void mem_cgroup_print_oom_context(struct mem_cgroup *memcg, struct task_struct *p) 1305 + { 1306 + rcu_read_lock(); 1307 + 1308 + if (memcg) { 1309 + pr_cont(",oom_memcg="); 1310 + pr_cont_cgroup_path(memcg->css.cgroup); 1311 + } else 1312 + pr_cont(",global_oom"); 1313 + if (p) { 1314 + pr_cont(",task_memcg="); 1315 + pr_cont_cgroup_path(task_cgroup(p, memory_cgrp_id)); 1316 + } 1317 + rcu_read_unlock(); 1318 + } 1319 + 1320 + /** 1321 + * mem_cgroup_print_oom_meminfo: Print OOM memory information relevant to 1322 + * memory controller. 
1323 + * @memcg: The memory cgroup that went over limit 1324 + */ 1325 + void mem_cgroup_print_oom_meminfo(struct mem_cgroup *memcg) 1304 1326 { 1305 1327 struct mem_cgroup *iter; 1306 1328 unsigned int i; 1307 - 1308 - rcu_read_lock(); 1309 - 1310 - if (p) { 1311 - pr_info("Task in "); 1312 - pr_cont_cgroup_path(task_cgroup(p, memory_cgrp_id)); 1313 - pr_cont(" killed as a result of limit of "); 1314 - } else { 1315 - pr_info("Memory limit reached of cgroup "); 1316 - } 1317 - 1318 - pr_cont_cgroup_path(memcg->css.cgroup); 1319 - pr_cont("\n"); 1320 - 1321 - rcu_read_unlock(); 1322 1329 1323 1330 pr_info("memory: usage %llukB, limit %llukB, failcnt %lu\n", 1324 1331 K((u64)page_counter_read(&memcg->memory)), ··· 1673 1666 1674 1667 static enum oom_status mem_cgroup_oom(struct mem_cgroup *memcg, gfp_t mask, int order) 1675 1668 { 1669 + enum oom_status ret; 1670 + bool locked; 1671 + 1676 1672 if (order > PAGE_ALLOC_COSTLY_ORDER) 1677 1673 return OOM_SKIPPED; 1678 1674 ··· 1710 1700 return OOM_ASYNC; 1711 1701 } 1712 1702 1713 - if (mem_cgroup_out_of_memory(memcg, mask, order)) 1714 - return OOM_SUCCESS; 1703 + mem_cgroup_mark_under_oom(memcg); 1715 1704 1716 - return OOM_FAILED; 1705 + locked = mem_cgroup_oom_trylock(memcg); 1706 + 1707 + if (locked) 1708 + mem_cgroup_oom_notify(memcg); 1709 + 1710 + mem_cgroup_unmark_under_oom(memcg); 1711 + if (mem_cgroup_out_of_memory(memcg, mask, order)) 1712 + ret = OOM_SUCCESS; 1713 + else 1714 + ret = OOM_FAILED; 1715 + 1716 + if (locked) 1717 + mem_cgroup_oom_unlock(memcg); 1718 + 1719 + return ret; 1717 1720 } 1718 1721 1719 1722 /**
+14 -2
mm/memory-failure.c
··· 966 966 enum ttu_flags ttu = TTU_IGNORE_MLOCK | TTU_IGNORE_ACCESS; 967 967 struct address_space *mapping; 968 968 LIST_HEAD(tokill); 969 - bool unmap_success; 969 + bool unmap_success = true; 970 970 int kill = 1, forcekill; 971 971 struct page *hpage = *hpagep; 972 972 bool mlocked = PageMlocked(hpage); ··· 1028 1028 if (kill) 1029 1029 collect_procs(hpage, &tokill, flags & MF_ACTION_REQUIRED); 1030 1030 1031 - unmap_success = try_to_unmap(hpage, ttu); 1031 + if (!PageHuge(hpage)) { 1032 + unmap_success = try_to_unmap(hpage, ttu); 1033 + } else if (mapping) { 1034 + /* 1035 + * For hugetlb pages, try_to_unmap could potentially call 1036 + * huge_pmd_unshare. Because of this, take semaphore in 1037 + * write mode here and set TTU_RMAP_LOCKED to indicate we 1038 + * have taken the lock at this higer level. 1039 + */ 1040 + i_mmap_lock_write(mapping); 1041 + unmap_success = try_to_unmap(hpage, ttu|TTU_RMAP_LOCKED); 1042 + i_mmap_unlock_write(mapping); 1043 + } 1032 1044 if (!unmap_success) 1033 1045 pr_err("Memory failure: %#lx: failed to unmap page (mapcount=%d)\n", 1034 1046 pfn, page_mapcount(hpage));
+52 -51
mm/memory.c
··· 973 973 unsigned long next; 974 974 unsigned long addr = vma->vm_start; 975 975 unsigned long end = vma->vm_end; 976 - unsigned long mmun_start; /* For mmu_notifiers */ 977 - unsigned long mmun_end; /* For mmu_notifiers */ 976 + struct mmu_notifier_range range; 978 977 bool is_cow; 979 978 int ret; 980 979 ··· 1007 1008 * is_cow_mapping() returns true. 1008 1009 */ 1009 1010 is_cow = is_cow_mapping(vma->vm_flags); 1010 - mmun_start = addr; 1011 - mmun_end = end; 1012 - if (is_cow) 1013 - mmu_notifier_invalidate_range_start(src_mm, mmun_start, 1014 - mmun_end); 1011 + 1012 + if (is_cow) { 1013 + mmu_notifier_range_init(&range, src_mm, addr, end); 1014 + mmu_notifier_invalidate_range_start(&range); 1015 + } 1015 1016 1016 1017 ret = 0; 1017 1018 dst_pgd = pgd_offset(dst_mm, addr); ··· 1028 1029 } while (dst_pgd++, src_pgd++, addr = next, addr != end); 1029 1030 1030 1031 if (is_cow) 1031 - mmu_notifier_invalidate_range_end(src_mm, mmun_start, mmun_end); 1032 + mmu_notifier_invalidate_range_end(&range); 1032 1033 return ret; 1033 1034 } 1034 1035 ··· 1331 1332 struct vm_area_struct *vma, unsigned long start_addr, 1332 1333 unsigned long end_addr) 1333 1334 { 1334 - struct mm_struct *mm = vma->vm_mm; 1335 + struct mmu_notifier_range range; 1335 1336 1336 - mmu_notifier_invalidate_range_start(mm, start_addr, end_addr); 1337 + mmu_notifier_range_init(&range, vma->vm_mm, start_addr, end_addr); 1338 + mmu_notifier_invalidate_range_start(&range); 1337 1339 for ( ; vma && vma->vm_start < end_addr; vma = vma->vm_next) 1338 1340 unmap_single_vma(tlb, vma, start_addr, end_addr, NULL); 1339 - mmu_notifier_invalidate_range_end(mm, start_addr, end_addr); 1341 + mmu_notifier_invalidate_range_end(&range); 1340 1342 } 1341 1343 1342 1344 /** ··· 1351 1351 void zap_page_range(struct vm_area_struct *vma, unsigned long start, 1352 1352 unsigned long size) 1353 1353 { 1354 - struct mm_struct *mm = vma->vm_mm; 1354 + struct mmu_notifier_range range; 1355 1355 struct mmu_gather tlb; 
1356 - unsigned long end = start + size; 1357 1356 1358 1357 lru_add_drain(); 1359 - tlb_gather_mmu(&tlb, mm, start, end); 1360 - update_hiwater_rss(mm); 1361 - mmu_notifier_invalidate_range_start(mm, start, end); 1362 - for ( ; vma && vma->vm_start < end; vma = vma->vm_next) 1363 - unmap_single_vma(&tlb, vma, start, end, NULL); 1364 - mmu_notifier_invalidate_range_end(mm, start, end); 1365 - tlb_finish_mmu(&tlb, start, end); 1358 + mmu_notifier_range_init(&range, vma->vm_mm, start, start + size); 1359 + tlb_gather_mmu(&tlb, vma->vm_mm, start, range.end); 1360 + update_hiwater_rss(vma->vm_mm); 1361 + mmu_notifier_invalidate_range_start(&range); 1362 + for ( ; vma && vma->vm_start < range.end; vma = vma->vm_next) 1363 + unmap_single_vma(&tlb, vma, start, range.end, NULL); 1364 + mmu_notifier_invalidate_range_end(&range); 1365 + tlb_finish_mmu(&tlb, start, range.end); 1366 1366 } 1367 1367 1368 1368 /** ··· 1377 1377 static void zap_page_range_single(struct vm_area_struct *vma, unsigned long address, 1378 1378 unsigned long size, struct zap_details *details) 1379 1379 { 1380 - struct mm_struct *mm = vma->vm_mm; 1380 + struct mmu_notifier_range range; 1381 1381 struct mmu_gather tlb; 1382 - unsigned long end = address + size; 1383 1382 1384 1383 lru_add_drain(); 1385 - tlb_gather_mmu(&tlb, mm, address, end); 1386 - update_hiwater_rss(mm); 1387 - mmu_notifier_invalidate_range_start(mm, address, end); 1388 - unmap_single_vma(&tlb, vma, address, end, details); 1389 - mmu_notifier_invalidate_range_end(mm, address, end); 1390 - tlb_finish_mmu(&tlb, address, end); 1384 + mmu_notifier_range_init(&range, vma->vm_mm, address, address + size); 1385 + tlb_gather_mmu(&tlb, vma->vm_mm, address, range.end); 1386 + update_hiwater_rss(vma->vm_mm); 1387 + mmu_notifier_invalidate_range_start(&range); 1388 + unmap_single_vma(&tlb, vma, address, range.end, details); 1389 + mmu_notifier_invalidate_range_end(&range); 1390 + tlb_finish_mmu(&tlb, address, range.end); 1391 1391 } 1392 1392 
1393 1393 /** ··· 2247 2247 struct page *new_page = NULL; 2248 2248 pte_t entry; 2249 2249 int page_copied = 0; 2250 - const unsigned long mmun_start = vmf->address & PAGE_MASK; 2251 - const unsigned long mmun_end = mmun_start + PAGE_SIZE; 2252 2250 struct mem_cgroup *memcg; 2251 + struct mmu_notifier_range range; 2253 2252 2254 2253 if (unlikely(anon_vma_prepare(vma))) 2255 2254 goto oom; ··· 2271 2272 2272 2273 __SetPageUptodate(new_page); 2273 2274 2274 - mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end); 2275 + mmu_notifier_range_init(&range, mm, vmf->address & PAGE_MASK, 2276 + (vmf->address & PAGE_MASK) + PAGE_SIZE); 2277 + mmu_notifier_invalidate_range_start(&range); 2275 2278 2276 2279 /* 2277 2280 * Re-check the pte - we dropped the lock ··· 2350 2349 * No need to double call mmu_notifier->invalidate_range() callback as 2351 2350 * the above ptep_clear_flush_notify() did already call it. 2352 2351 */ 2353 - mmu_notifier_invalidate_range_only_end(mm, mmun_start, mmun_end); 2352 + mmu_notifier_invalidate_range_only_end(&range); 2354 2353 if (old_page) { 2355 2354 /* 2356 2355 * Don't let another task, with possibly unlocked vma, ··· 3831 3830 vmf.pud = pud_alloc(mm, p4d, address); 3832 3831 if (!vmf.pud) 3833 3832 return VM_FAULT_OOM; 3834 - if (pud_none(*vmf.pud) && transparent_hugepage_enabled(vma)) { 3833 + if (pud_none(*vmf.pud) && __transparent_hugepage_enabled(vma)) { 3835 3834 ret = create_huge_pud(&vmf); 3836 3835 if (!(ret & VM_FAULT_FALLBACK)) 3837 3836 return ret; ··· 3857 3856 vmf.pmd = pmd_alloc(mm, vmf.pud, address); 3858 3857 if (!vmf.pmd) 3859 3858 return VM_FAULT_OOM; 3860 - if (pmd_none(*vmf.pmd) && transparent_hugepage_enabled(vma)) { 3859 + if (pmd_none(*vmf.pmd) && __transparent_hugepage_enabled(vma)) { 3861 3860 ret = create_huge_pmd(&vmf); 3862 3861 if (!(ret & VM_FAULT_FALLBACK)) 3863 3862 return ret; ··· 4031 4030 #endif /* __PAGETABLE_PMD_FOLDED */ 4032 4031 4033 4032 static int __follow_pte_pmd(struct mm_struct *mm, 
unsigned long address, 4034 - unsigned long *start, unsigned long *end, 4033 + struct mmu_notifier_range *range, 4035 4034 pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp) 4036 4035 { 4037 4036 pgd_t *pgd; ··· 4059 4058 if (!pmdpp) 4060 4059 goto out; 4061 4060 4062 - if (start && end) { 4063 - *start = address & PMD_MASK; 4064 - *end = *start + PMD_SIZE; 4065 - mmu_notifier_invalidate_range_start(mm, *start, *end); 4061 + if (range) { 4062 + mmu_notifier_range_init(range, mm, address & PMD_MASK, 4063 + (address & PMD_MASK) + PMD_SIZE); 4064 + mmu_notifier_invalidate_range_start(range); 4066 4065 } 4067 4066 *ptlp = pmd_lock(mm, pmd); 4068 4067 if (pmd_huge(*pmd)) { ··· 4070 4069 return 0; 4071 4070 } 4072 4071 spin_unlock(*ptlp); 4073 - if (start && end) 4074 - mmu_notifier_invalidate_range_end(mm, *start, *end); 4072 + if (range) 4073 + mmu_notifier_invalidate_range_end(range); 4075 4074 } 4076 4075 4077 4076 if (pmd_none(*pmd) || unlikely(pmd_bad(*pmd))) 4078 4077 goto out; 4079 4078 4080 - if (start && end) { 4081 - *start = address & PAGE_MASK; 4082 - *end = *start + PAGE_SIZE; 4083 - mmu_notifier_invalidate_range_start(mm, *start, *end); 4079 + if (range) { 4080 + range->start = address & PAGE_MASK; 4081 + range->end = range->start + PAGE_SIZE; 4082 + mmu_notifier_invalidate_range_start(range); 4084 4083 } 4085 4084 ptep = pte_offset_map_lock(mm, pmd, address, ptlp); 4086 4085 if (!pte_present(*ptep)) ··· 4089 4088 return 0; 4090 4089 unlock: 4091 4090 pte_unmap_unlock(ptep, *ptlp); 4092 - if (start && end) 4093 - mmu_notifier_invalidate_range_end(mm, *start, *end); 4091 + if (range) 4092 + mmu_notifier_invalidate_range_end(range); 4094 4093 out: 4095 4094 return -EINVAL; 4096 4095 } ··· 4102 4101 4103 4102 /* (void) is needed to make gcc happy */ 4104 4103 (void) __cond_lock(*ptlp, 4105 - !(res = __follow_pte_pmd(mm, address, NULL, NULL, 4104 + !(res = __follow_pte_pmd(mm, address, NULL, 4106 4105 ptepp, NULL, ptlp))); 4107 4106 return res; 4108 4107 } 4109 
4108 4110 4109 int follow_pte_pmd(struct mm_struct *mm, unsigned long address, 4111 - unsigned long *start, unsigned long *end, 4112 - pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp) 4110 + struct mmu_notifier_range *range, 4111 + pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp) 4113 4112 { 4114 4113 int res; 4115 4114 4116 4115 /* (void) is needed to make gcc happy */ 4117 4116 (void) __cond_lock(*ptlp, 4118 - !(res = __follow_pte_pmd(mm, address, start, end, 4117 + !(res = __follow_pte_pmd(mm, address, range, 4119 4118 ptepp, pmdpp, ptlp))); 4120 4119 return res; 4121 4120 }
+87 -89
mm/memory_hotplug.c
··· 34 34 #include <linux/hugetlb.h> 35 35 #include <linux/memblock.h> 36 36 #include <linux/compaction.h> 37 + #include <linux/rmap.h> 37 38 38 39 #include <asm/tlbflush.h> 39 40 ··· 254 253 if (pfn_valid(phys_start_pfn)) 255 254 return -EEXIST; 256 255 257 - ret = sparse_add_one_section(NODE_DATA(nid), phys_start_pfn, altmap); 256 + ret = sparse_add_one_section(nid, phys_start_pfn, altmap); 258 257 if (ret < 0) 259 258 return ret; 260 259 ··· 744 743 int nid = pgdat->node_id; 745 744 unsigned long flags; 746 745 747 - if (zone_is_empty(zone)) 748 - init_currently_empty_zone(zone, start_pfn, nr_pages); 749 - 750 746 clear_zone_contiguous(zone); 751 747 752 748 /* TODO Huh pgdat is irqsave while zone is not. It used to be like that before */ 753 749 pgdat_resize_lock(pgdat, &flags); 754 750 zone_span_writelock(zone); 751 + if (zone_is_empty(zone)) 752 + init_currently_empty_zone(zone, start_pfn, nr_pages); 755 753 resize_zone_range(zone, start_pfn, nr_pages); 756 754 zone_span_writeunlock(zone); 757 755 resize_pgdat_range(pgdat, start_pfn, nr_pages); ··· 1078 1078 * 1079 1079 * we are OK calling __meminit stuff here - we have CONFIG_MEMORY_HOTPLUG 1080 1080 */ 1081 - int __ref add_memory_resource(int nid, struct resource *res, bool online) 1081 + int __ref add_memory_resource(int nid, struct resource *res) 1082 1082 { 1083 1083 u64 start, size; 1084 1084 bool new_node = false; ··· 1133 1133 mem_hotplug_done(); 1134 1134 1135 1135 /* online pages if requested */ 1136 - if (online) 1136 + if (memhp_auto_online) 1137 1137 walk_memory_range(PFN_DOWN(start), PFN_UP(start + size - 1), 1138 1138 NULL, online_memory_block); 1139 1139 ··· 1157 1157 if (IS_ERR(res)) 1158 1158 return PTR_ERR(res); 1159 1159 1160 - ret = add_memory_resource(nid, res, memhp_auto_online); 1160 + ret = add_memory_resource(nid, res); 1161 1161 if (ret < 0) 1162 1162 release_memory_resource(res); 1163 1163 return ret; ··· 1226 1226 if (!zone_spans_pfn(zone, pfn)) 1227 1227 return false; 1228 1228 
1229 - return !has_unmovable_pages(zone, page, 0, MIGRATE_MOVABLE, true); 1229 + return !has_unmovable_pages(zone, page, 0, MIGRATE_MOVABLE, SKIP_HWPOISON); 1230 1230 } 1231 1231 1232 1232 /* Checks if this range of memory is likely to be hot-removable. */ ··· 1339 1339 return new_page_nodemask(page, nid, &nmask); 1340 1340 } 1341 1341 1342 - #define NR_OFFLINE_AT_ONCE_PAGES (256) 1343 1342 static int 1344 1343 do_migrate_range(unsigned long start_pfn, unsigned long end_pfn) 1345 1344 { 1346 1345 unsigned long pfn; 1347 1346 struct page *page; 1348 - int move_pages = NR_OFFLINE_AT_ONCE_PAGES; 1349 1347 int not_managed = 0; 1350 1348 int ret = 0; 1351 1349 LIST_HEAD(source); 1352 1350 1353 - for (pfn = start_pfn; pfn < end_pfn && move_pages > 0; pfn++) { 1351 + for (pfn = start_pfn; pfn < end_pfn; pfn++) { 1354 1352 if (!pfn_valid(pfn)) 1355 1353 continue; 1356 1354 page = pfn_to_page(pfn); ··· 1360 1362 ret = -EBUSY; 1361 1363 break; 1362 1364 } 1363 - if (isolate_huge_page(page, &source)) 1364 - move_pages -= 1 << compound_order(head); 1365 + isolate_huge_page(page, &source); 1365 1366 continue; 1366 1367 } else if (PageTransHuge(page)) 1367 1368 pfn = page_to_pfn(compound_head(page)) 1368 1369 + hpage_nr_pages(page) - 1; 1370 + 1371 + /* 1372 + * HWPoison pages have elevated reference counts so the migration would 1373 + * fail on them. It also doesn't make any sense to migrate them in the 1374 + * first place. Still try to unmap such a page in case it is still mapped 1375 + * (e.g. current hwpoison implementation doesn't unmap KSM pages but keep 1376 + * the unmap as the catch all safety net). 
1377 + */ 1378 + if (PageHWPoison(page)) { 1379 + if (WARN_ON(PageLRU(page))) 1380 + isolate_lru_page(page); 1381 + if (page_mapped(page)) 1382 + try_to_unmap(page, TTU_IGNORE_MLOCK | TTU_IGNORE_ACCESS); 1383 + continue; 1384 + } 1369 1385 1370 1386 if (!get_page_unless_zero(page)) 1371 1387 continue; ··· 1394 1382 if (!ret) { /* Success */ 1395 1383 put_page(page); 1396 1384 list_add_tail(&page->lru, &source); 1397 - move_pages--; 1398 1385 if (!__PageMovable(page)) 1399 1386 inc_node_page_state(page, NR_ISOLATED_ANON + 1400 1387 page_is_file_cache(page)); 1401 1388 1402 1389 } else { 1403 - #ifdef CONFIG_DEBUG_VM 1404 - pr_alert("failed to isolate pfn %lx\n", pfn); 1390 + pr_warn("failed to isolate pfn %lx\n", pfn); 1405 1391 dump_page(page, "isolation failed"); 1406 - #endif 1407 1392 put_page(page); 1408 1393 /* Because we don't have big zone->lock. we should 1409 1394 check this again here. */ ··· 1420 1411 /* Allocate a new page from the nearest neighbor node */ 1421 1412 ret = migrate_pages(&source, new_node_page, NULL, 0, 1422 1413 MIGRATE_SYNC, MR_MEMORY_HOTPLUG); 1423 - if (ret) 1414 + if (ret) { 1415 + list_for_each_entry(page, &source, lru) { 1416 + pr_warn("migrating pfn %lx failed ret:%d ", 1417 + page_to_pfn(page), ret); 1418 + dump_page(page, "migration failure"); 1419 + } 1424 1420 putback_movable_pages(&source); 1421 + } 1425 1422 } 1426 1423 out: 1427 1424 return ret; ··· 1568 1553 unsigned long valid_start, valid_end; 1569 1554 struct zone *zone; 1570 1555 struct memory_notify arg; 1571 - 1572 - /* at least, alignment against pageblock is necessary */ 1573 - if (!IS_ALIGNED(start_pfn, pageblock_nr_pages)) 1574 - return -EINVAL; 1575 - if (!IS_ALIGNED(end_pfn, pageblock_nr_pages)) 1576 - return -EINVAL; 1556 + char *reason; 1577 1557 1578 1558 mem_hotplug_begin(); 1579 1559 ··· 1577 1567 if (!test_pages_in_a_zone(start_pfn, end_pfn, &valid_start, 1578 1568 &valid_end)) { 1579 1569 mem_hotplug_done(); 1580 - return -EINVAL; 1570 + ret = -EINVAL; 
1571 + reason = "multizone range"; 1572 + goto failed_removal; 1581 1573 } 1582 1574 1583 1575 zone = page_zone(pfn_to_page(valid_start)); ··· 1588 1576 1589 1577 /* set above range as isolated */ 1590 1578 ret = start_isolate_page_range(start_pfn, end_pfn, 1591 - MIGRATE_MOVABLE, true); 1579 + MIGRATE_MOVABLE, 1580 + SKIP_HWPOISON | REPORT_FAILURE); 1592 1581 if (ret) { 1593 1582 mem_hotplug_done(); 1594 - return ret; 1583 + reason = "failure to isolate range"; 1584 + goto failed_removal; 1595 1585 } 1596 1586 1597 1587 arg.start_pfn = start_pfn; ··· 1602 1588 1603 1589 ret = memory_notify(MEM_GOING_OFFLINE, &arg); 1604 1590 ret = notifier_to_errno(ret); 1605 - if (ret) 1606 - goto failed_removal; 1607 - 1608 - pfn = start_pfn; 1609 - repeat: 1610 - /* start memory hot removal */ 1611 - ret = -EINTR; 1612 - if (signal_pending(current)) 1613 - goto failed_removal; 1614 - 1615 - cond_resched(); 1616 - lru_add_drain_all(); 1617 - drain_all_pages(zone); 1618 - 1619 - pfn = scan_movable_pages(start_pfn, end_pfn); 1620 - if (pfn) { /* We have movable pages */ 1621 - ret = do_migrate_range(pfn, end_pfn); 1622 - goto repeat; 1591 + if (ret) { 1592 + reason = "notifier failure"; 1593 + goto failed_removal_isolated; 1623 1594 } 1624 1595 1625 - /* 1626 - * dissolve free hugepages in the memory block before doing offlining 1627 - * actually in order to make hugetlbfs's object counting consistent. 
1628 - */ 1629 - ret = dissolve_free_huge_pages(start_pfn, end_pfn); 1630 - if (ret) 1631 - goto failed_removal; 1632 - /* check again */ 1633 - offlined_pages = check_pages_isolated(start_pfn, end_pfn); 1634 - if (offlined_pages < 0) 1635 - goto repeat; 1596 + do { 1597 + for (pfn = start_pfn; pfn;) { 1598 + if (signal_pending(current)) { 1599 + ret = -EINTR; 1600 + reason = "signal backoff"; 1601 + goto failed_removal_isolated; 1602 + } 1603 + 1604 + cond_resched(); 1605 + lru_add_drain_all(); 1606 + drain_all_pages(zone); 1607 + 1608 + pfn = scan_movable_pages(pfn, end_pfn); 1609 + if (pfn) { 1610 + /* 1611 + * TODO: fatal migration failures should bail 1612 + * out 1613 + */ 1614 + do_migrate_range(pfn, end_pfn); 1615 + } 1616 + } 1617 + 1618 + /* 1619 + * Dissolve free hugepages in the memory block before doing 1620 + * offlining actually in order to make hugetlbfs's object 1621 + * counting consistent. 1622 + */ 1623 + ret = dissolve_free_huge_pages(start_pfn, end_pfn); 1624 + if (ret) { 1625 + reason = "failure to dissolve huge pages"; 1626 + goto failed_removal_isolated; 1627 + } 1628 + /* check again */ 1629 + offlined_pages = check_pages_isolated(start_pfn, end_pfn); 1630 + } while (offlined_pages < 0); 1631 + 1636 1632 pr_info("Offlined Pages %ld\n", offlined_pages); 1637 1633 /* Ok, all of our target is isolated. 1638 1634 We cannot do rollback at this point. 
*/ ··· 1678 1654 mem_hotplug_done(); 1679 1655 return 0; 1680 1656 1657 + failed_removal_isolated: 1658 + undo_isolate_page_range(start_pfn, end_pfn, MIGRATE_MOVABLE); 1681 1659 failed_removal: 1682 - pr_debug("memory offlining [mem %#010llx-%#010llx] failed\n", 1660 + pr_debug("memory offlining [mem %#010llx-%#010llx] failed due to %s\n", 1683 1661 (unsigned long long) start_pfn << PAGE_SHIFT, 1684 - ((unsigned long long) end_pfn << PAGE_SHIFT) - 1); 1662 + ((unsigned long long) end_pfn << PAGE_SHIFT) - 1, 1663 + reason); 1685 1664 memory_notify(MEM_CANCEL_OFFLINE, &arg); 1686 1665 /* pushback to free area */ 1687 - undo_isolate_page_range(start_pfn, end_pfn, MIGRATE_MOVABLE); 1688 1666 mem_hotplug_done(); 1689 1667 return ret; 1690 1668 } ··· 1779 1753 return 0; 1780 1754 } 1781 1755 1782 - static void unmap_cpu_on_node(pg_data_t *pgdat) 1783 - { 1784 - #ifdef CONFIG_ACPI_NUMA 1785 - int cpu; 1786 - 1787 - for_each_possible_cpu(cpu) 1788 - if (cpu_to_node(cpu) == pgdat->node_id) 1789 - numa_clear_node(cpu); 1790 - #endif 1791 - } 1792 - 1793 - static int check_and_unmap_cpu_on_node(pg_data_t *pgdat) 1794 - { 1795 - int ret; 1796 - 1797 - ret = check_cpu_on_node(pgdat); 1798 - if (ret) 1799 - return ret; 1800 - 1801 - /* 1802 - * the node will be offlined when we come here, so we can clear 1803 - * the cpu_to_node() now. 1804 - */ 1805 - 1806 - unmap_cpu_on_node(pgdat); 1807 - return 0; 1808 - } 1809 - 1810 1756 /** 1811 1757 * try_offline_node 1812 1758 * @nid: the node ID ··· 1811 1813 return; 1812 1814 } 1813 1815 1814 - if (check_and_unmap_cpu_on_node(pgdat)) 1816 + if (check_cpu_on_node(pgdat)) 1815 1817 return; 1816 1818 1817 1819 /* ··· 1856 1858 memblock_free(start, size); 1857 1859 memblock_remove(start, size); 1858 1860 1859 - arch_remove_memory(start, size, NULL); 1861 + arch_remove_memory(nid, start, size, NULL); 1860 1862 1861 1863 try_offline_node(nid); 1862 1864
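The memory_hotplug.c hunks above rework `__offline_pages()` so every failure path records a human-readable `reason` and unwinds only the state that was actually set up (isolation is undone via the new `failed_removal_isolated` label only once `start_isolate_page_range()` has succeeded). A minimal standalone sketch of that staged goto-unwind idiom, with illustrative stage numbers and struct fields that are not the kernel's actual code:

```c
#include <stddef.h>

/* Model of the reworked error handling: a reason string per failure,
 * and a goto ladder that undoes exactly the completed setup steps. */
struct offline_state {
	int isolated;          /* start_isolate_page_range() succeeded */
	int isolation_undone;  /* undo_isolate_page_range() was called */
	const char *reason;    /* failure reason, as in the patch */
};

static int model_offline(struct offline_state *st, int fail_stage)
{
	int ret = 0;

	st->isolated = 0;
	st->isolation_undone = 0;
	st->reason = NULL;

	if (fail_stage == 0) {
		ret = -22; /* -EINVAL: e.g. "multizone range" */
		st->reason = "multizone range";
		goto failed_removal;          /* nothing to undo yet */
	}

	st->isolated = 1;                     /* range is isolated from here on */

	if (fail_stage == 1) {
		ret = -16; /* -EBUSY: e.g. "notifier failure" */
		st->reason = "notifier failure";
		goto failed_removal_isolated; /* must undo isolation */
	}

	return 0;                             /* offlining succeeded */

failed_removal_isolated:
	st->isolation_undone = 1;             /* undo_isolate_page_range() */
failed_removal:
	return ret;
}

/* Small wrappers so the behavior is easy to probe. */
static int model_offline_ret(int stage)
{
	struct offline_state st;
	return model_offline(&st, stage);
}

static int model_offline_undone(int stage)
{
	struct offline_state st;
	model_offline(&st, stage);
	return st.isolation_undone;
}
```

The point of the ladder is that a pre-isolation failure never calls the undo path, while a post-isolation failure always does, which is exactly what the patch's two labels encode.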
+150 -114
mm/migrate.c
··· 327 327 328 328 /* 329 329 * Once page cache replacement of page migration started, page_count 330 - * *must* be zero. And, we don't want to call wait_on_page_locked() 331 - * against a page without get_page(). 332 - * So, we use get_page_unless_zero(), here. Even failed, page fault 333 - * will occur again. 330 + * is zero; but we must not call put_and_wait_on_page_locked() without 331 + * a ref. Use get_page_unless_zero(), and just fault again if it fails. 334 332 */ 335 333 if (!get_page_unless_zero(page)) 336 334 goto out; 337 335 pte_unmap_unlock(ptep, ptl); 338 - wait_on_page_locked(page); 339 - put_page(page); 336 + put_and_wait_on_page_locked(page); 340 337 return; 341 338 out: 342 339 pte_unmap_unlock(ptep, ptl); ··· 367 370 if (!get_page_unless_zero(page)) 368 371 goto unlock; 369 372 spin_unlock(ptl); 370 - wait_on_page_locked(page); 371 - put_page(page); 373 + put_and_wait_on_page_locked(page); 372 374 return; 373 375 unlock: 374 376 spin_unlock(ptl); 375 377 } 376 378 #endif 377 379 378 - #ifdef CONFIG_BLOCK 379 - /* Returns true if all buffers are successfully locked */ 380 - static bool buffer_migrate_lock_buffers(struct buffer_head *head, 381 - enum migrate_mode mode) 380 + static int expected_page_refs(struct page *page) 382 381 { 383 - struct buffer_head *bh = head; 382 + int expected_count = 1; 384 383 385 - /* Simple case, sync compaction */ 386 - if (mode != MIGRATE_ASYNC) { 387 - do { 388 - get_bh(bh); 389 - lock_buffer(bh); 390 - bh = bh->b_this_page; 384 + /* 385 + * Device public or private pages have an extra refcount as they are 386 + * ZONE_DEVICE pages. 
387 + */ 388 + expected_count += is_device_private_page(page); 389 + expected_count += is_device_public_page(page); 390 + if (page_mapping(page)) 391 + expected_count += hpage_nr_pages(page) + page_has_private(page); 391 392 392 - } while (bh != head); 393 - 394 - return true; 395 - } 396 - 397 - /* async case, we cannot block on lock_buffer so use trylock_buffer */ 398 - do { 399 - get_bh(bh); 400 - if (!trylock_buffer(bh)) { 401 - /* 402 - * We failed to lock the buffer and cannot stall in 403 - * async migration. Release the taken locks 404 - */ 405 - struct buffer_head *failed_bh = bh; 406 - put_bh(failed_bh); 407 - bh = head; 408 - while (bh != failed_bh) { 409 - unlock_buffer(bh); 410 - put_bh(bh); 411 - bh = bh->b_this_page; 412 - } 413 - return false; 414 - } 415 - 416 - bh = bh->b_this_page; 417 - } while (bh != head); 418 - return true; 393 + return expected_count; 419 394 } 420 - #else 421 - static inline bool buffer_migrate_lock_buffers(struct buffer_head *head, 422 - enum migrate_mode mode) 423 - { 424 - return true; 425 - } 426 - #endif /* CONFIG_BLOCK */ 427 395 428 396 /* 429 397 * Replace the page in the mapping. ··· 399 437 * 3 for pages with a mapping and PagePrivate/PagePrivate2 set. 400 438 */ 401 439 int migrate_page_move_mapping(struct address_space *mapping, 402 - struct page *newpage, struct page *page, 403 - struct buffer_head *head, enum migrate_mode mode, 440 + struct page *newpage, struct page *page, enum migrate_mode mode, 404 441 int extra_count) 405 442 { 406 443 XA_STATE(xas, &mapping->i_pages, page_index(page)); 407 444 struct zone *oldzone, *newzone; 408 445 int dirty; 409 - int expected_count = 1 + extra_count; 410 - 411 - /* 412 - * Device public or private pages have an extra refcount as they are 413 - * ZONE_DEVICE pages. 
414 - */ 415 - expected_count += is_device_private_page(page); 416 - expected_count += is_device_public_page(page); 446 + int expected_count = expected_page_refs(page) + extra_count; 417 447 418 448 if (!mapping) { 419 449 /* Anonymous page without mapping */ ··· 425 471 newzone = page_zone(newpage); 426 472 427 473 xas_lock_irq(&xas); 428 - 429 - expected_count += hpage_nr_pages(page) + page_has_private(page); 430 474 if (page_count(page) != expected_count || xas_load(&xas) != page) { 431 475 xas_unlock_irq(&xas); 432 476 return -EAGAIN; 433 477 } 434 478 435 479 if (!page_ref_freeze(page, expected_count)) { 436 - xas_unlock_irq(&xas); 437 - return -EAGAIN; 438 - } 439 - 440 - /* 441 - * In the async migration case of moving a page with buffers, lock the 442 - * buffers using trylock before the mapping is moved. If the mapping 443 - * was moved, we later failed to lock the buffers and could not move 444 - * the mapping back due to an elevated page count, we would have to 445 - * block waiting on other references to be dropped. 446 - */ 447 - if (mode == MIGRATE_ASYNC && head && 448 - !buffer_migrate_lock_buffers(head, mode)) { 449 - page_ref_unfreeze(page, expected_count); 450 480 xas_unlock_irq(&xas); 451 481 return -EAGAIN; 452 482 } ··· 686 748 687 749 BUG_ON(PageWriteback(page)); /* Writeback must be complete */ 688 750 689 - rc = migrate_page_move_mapping(mapping, newpage, page, NULL, mode, 0); 751 + rc = migrate_page_move_mapping(mapping, newpage, page, mode, 0); 690 752 691 753 if (rc != MIGRATEPAGE_SUCCESS) 692 754 return rc; ··· 700 762 EXPORT_SYMBOL(migrate_page); 701 763 702 764 #ifdef CONFIG_BLOCK 703 - /* 704 - * Migration function for pages with buffers. This function can only be used 705 - * if the underlying filesystem guarantees that no other references to "page" 706 - * exist. 
707 - */ 708 - int buffer_migrate_page(struct address_space *mapping, 709 - struct page *newpage, struct page *page, enum migrate_mode mode) 765 + /* Returns true if all buffers are successfully locked */ 766 + static bool buffer_migrate_lock_buffers(struct buffer_head *head, 767 + enum migrate_mode mode) 768 + { 769 + struct buffer_head *bh = head; 770 + 771 + /* Simple case, sync compaction */ 772 + if (mode != MIGRATE_ASYNC) { 773 + do { 774 + get_bh(bh); 775 + lock_buffer(bh); 776 + bh = bh->b_this_page; 777 + 778 + } while (bh != head); 779 + 780 + return true; 781 + } 782 + 783 + /* async case, we cannot block on lock_buffer so use trylock_buffer */ 784 + do { 785 + get_bh(bh); 786 + if (!trylock_buffer(bh)) { 787 + /* 788 + * We failed to lock the buffer and cannot stall in 789 + * async migration. Release the taken locks 790 + */ 791 + struct buffer_head *failed_bh = bh; 792 + put_bh(failed_bh); 793 + bh = head; 794 + while (bh != failed_bh) { 795 + unlock_buffer(bh); 796 + put_bh(bh); 797 + bh = bh->b_this_page; 798 + } 799 + return false; 800 + } 801 + 802 + bh = bh->b_this_page; 803 + } while (bh != head); 804 + return true; 805 + } 806 + 807 + static int __buffer_migrate_page(struct address_space *mapping, 808 + struct page *newpage, struct page *page, enum migrate_mode mode, 809 + bool check_refs) 710 810 { 711 811 struct buffer_head *bh, *head; 712 812 int rc; 813 + int expected_count; 713 814 714 815 if (!page_has_buffers(page)) 715 816 return migrate_page(mapping, newpage, page, mode); 716 817 818 + /* Check whether page does not have extra refs before we do more work */ 819 + expected_count = expected_page_refs(page); 820 + if (page_count(page) != expected_count) 821 + return -EAGAIN; 822 + 717 823 head = page_buffers(page); 824 + if (!buffer_migrate_lock_buffers(head, mode)) 825 + return -EAGAIN; 718 826 719 - rc = migrate_page_move_mapping(mapping, newpage, page, head, mode, 0); 827 + if (check_refs) { 828 + bool busy; 829 + bool invalidated = 
false; 720 830 831 + recheck_buffers: 832 + busy = false; 833 + spin_lock(&mapping->private_lock); 834 + bh = head; 835 + do { 836 + if (atomic_read(&bh->b_count)) { 837 + busy = true; 838 + break; 839 + } 840 + bh = bh->b_this_page; 841 + } while (bh != head); 842 + spin_unlock(&mapping->private_lock); 843 + if (busy) { 844 + if (invalidated) { 845 + rc = -EAGAIN; 846 + goto unlock_buffers; 847 + } 848 + invalidate_bh_lrus(); 849 + invalidated = true; 850 + goto recheck_buffers; 851 + } 852 + } 853 + 854 + rc = migrate_page_move_mapping(mapping, newpage, page, mode, 0); 721 855 if (rc != MIGRATEPAGE_SUCCESS) 722 - return rc; 723 - 724 - /* 725 - * In the async case, migrate_page_move_mapping locked the buffers 726 - * with an IRQ-safe spinlock held. In the sync case, the buffers 727 - * need to be locked now 728 - */ 729 - if (mode != MIGRATE_ASYNC) 730 - BUG_ON(!buffer_migrate_lock_buffers(head, mode)); 856 + goto unlock_buffers; 731 857 732 858 ClearPagePrivate(page); 733 859 set_page_private(newpage, page_private(page)); ··· 813 811 else 814 812 migrate_page_states(newpage, page); 815 813 814 + rc = MIGRATEPAGE_SUCCESS; 815 + unlock_buffers: 816 816 bh = head; 817 817 do { 818 818 unlock_buffer(bh); ··· 823 819 824 820 } while (bh != head); 825 821 826 - return MIGRATEPAGE_SUCCESS; 822 + return rc; 823 + } 824 + 825 + /* 826 + * Migration function for pages with buffers. This function can only be used 827 + * if the underlying filesystem guarantees that no other references to "page" 828 + * exist. For example attached buffer heads are accessed only under page lock. 
829 + */ 830 + int buffer_migrate_page(struct address_space *mapping, 831 + struct page *newpage, struct page *page, enum migrate_mode mode) 832 + { 833 + return __buffer_migrate_page(mapping, newpage, page, mode, false); 827 834 } 828 835 EXPORT_SYMBOL(buffer_migrate_page); 836 + 837 + /* 838 + * Same as above except that this variant is more careful and checks that there 839 + * are also no buffer head references. This function is the right one for 840 + * mappings where buffer heads are directly looked up and referenced (such as 841 + * block device mappings). 842 + */ 843 + int buffer_migrate_page_norefs(struct address_space *mapping, 844 + struct page *newpage, struct page *page, enum migrate_mode mode) 845 + { 846 + return __buffer_migrate_page(mapping, newpage, page, mode, true); 847 + } 829 848 #endif 830 849 831 850 /* ··· 1324 1297 goto put_anon; 1325 1298 1326 1299 if (page_mapped(hpage)) { 1300 + struct address_space *mapping = page_mapping(hpage); 1301 + 1302 + /* 1303 + * try_to_unmap could potentially call huge_pmd_unshare. 1304 + * Because of this, take semaphore in write mode here and 1305 + * set TTU_RMAP_LOCKED to let lower levels know we have 1306 + * taken the lock. 
1307 + */ 1308 + i_mmap_lock_write(mapping); 1327 1309 try_to_unmap(hpage, 1328 - TTU_MIGRATION|TTU_IGNORE_MLOCK|TTU_IGNORE_ACCESS); 1310 + TTU_MIGRATION|TTU_IGNORE_MLOCK|TTU_IGNORE_ACCESS| 1311 + TTU_RMAP_LOCKED); 1312 + i_mmap_unlock_write(mapping); 1329 1313 page_was_mapped = 1; 1330 1314 } 1331 1315 ··· 2341 2303 */ 2342 2304 static void migrate_vma_collect(struct migrate_vma *migrate) 2343 2305 { 2306 + struct mmu_notifier_range range; 2344 2307 struct mm_walk mm_walk; 2345 2308 2346 2309 mm_walk.pmd_entry = migrate_vma_collect_pmd; ··· 2353 2314 mm_walk.mm = migrate->vma->vm_mm; 2354 2315 mm_walk.private = migrate; 2355 2316 2356 - mmu_notifier_invalidate_range_start(mm_walk.mm, 2357 - migrate->start, 2358 - migrate->end); 2317 + mmu_notifier_range_init(&range, mm_walk.mm, migrate->start, 2318 + migrate->end); 2319 + mmu_notifier_invalidate_range_start(&range); 2359 2320 walk_page_range(migrate->start, migrate->end, &mm_walk); 2360 - mmu_notifier_invalidate_range_end(mm_walk.mm, 2361 - migrate->start, 2362 - migrate->end); 2321 + mmu_notifier_invalidate_range_end(&range); 2363 2322 2364 2323 migrate->end = migrate->start + (migrate->npages << PAGE_SHIFT); 2365 2324 } ··· 2738 2701 { 2739 2702 const unsigned long npages = migrate->npages; 2740 2703 const unsigned long start = migrate->start; 2741 - struct vm_area_struct *vma = migrate->vma; 2742 - struct mm_struct *mm = vma->vm_mm; 2743 - unsigned long addr, i, mmu_start; 2704 + struct mmu_notifier_range range; 2705 + unsigned long addr, i; 2744 2706 bool notified = false; 2745 2707 2746 2708 for (i = 0, addr = start; i < npages; addr += PAGE_SIZE, i++) { ··· 2758 2722 continue; 2759 2723 } 2760 2724 if (!notified) { 2761 - mmu_start = addr; 2762 2725 notified = true; 2763 - mmu_notifier_invalidate_range_start(mm, 2764 - mmu_start, 2765 - migrate->end); 2726 + 2727 + mmu_notifier_range_init(&range, 2728 + migrate->vma->vm_mm, 2729 + addr, migrate->end); 2730 + mmu_notifier_invalidate_range_start(&range); 2766 
2731 } 2767 2732 migrate_vma_insert_page(migrate, addr, newpage, 2768 2733 &migrate->src[i], ··· 2804 2767 * did already call it. 2805 2768 */ 2806 2769 if (notified) 2807 - mmu_notifier_invalidate_range_only_end(mm, mmu_start, 2808 - migrate->end); 2770 + mmu_notifier_invalidate_range_only_end(&range); 2809 2771 } 2810 2772 2811 2773 /*
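Several migrate.c hunks lean on the `get_page_unless_zero()` idiom: take a reference only if the count is not already zero (a zero count means the page is being freed), and the new `put_and_wait_on_page_locked()` then combines the wait with dropping that reference. A userspace model of the conditional-ref part using C11 atomics; this is an illustration of the pattern, not the kernel helper:

```c
#include <stdatomic.h>

/* Take a reference only if the object is still live (count != 0). */
static int get_unless_zero(atomic_int *refs)
{
	int old = atomic_load(refs);

	while (old != 0) {
		/* On failure the CAS reloads 'old' and we retry. */
		if (atomic_compare_exchange_weak(refs, &old, old + 1))
			return 1;  /* reference taken */
	}
	return 0;                  /* count was zero: object is going away */
}

/* Drop a reference; returns the new count. */
static int put_ref(atomic_int *refs)
{
	return atomic_fetch_sub(refs, 1) - 1;
}

static int demo_live(void)
{
	atomic_int r = 2;

	if (!get_unless_zero(&r))
		return -1;
	return atomic_load(&r);    /* 2 -> 3 */
}

static int demo_dead(void)
{
	atomic_int r = 0;

	return get_unless_zero(&r); /* must refuse: stays 0 */
}
```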
+1 -1
mm/mm_init.c
··· 146 146 s32 batch = max_t(s32, nr*2, 32); 147 147 148 148 /* batch size set to 0.4% of (total memory/#cpus), or max int32 */ 149 - memsized_batch = min_t(u64, (totalram_pages/nr)/256, 0x7fffffff); 149 + memsized_batch = min_t(u64, (totalram_pages()/nr)/256, 0x7fffffff); 150 150 151 151 vm_committed_as_batch = max_t(s32, memsized_batch, batch); 152 152 }
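The one-line mm_init.c change reflects a tree-wide conversion: `totalram_pages` is no longer a plain global but an accessor reading an atomic counter (`_totalram_pages` in the page_alloc.c hunk further down), which removes the need for `managed_page_count_lock`. A standalone model of that accessor pattern; the helper names follow the patch but the rest is illustrative:

```c
#include <stdatomic.h>

/* Backing counter; callers go through the accessors below. */
static atomic_long _totalram_pages;

static inline unsigned long totalram_pages(void)
{
	return (unsigned long)atomic_load(&_totalram_pages);
}

static inline void totalram_pages_add(long count)
{
	atomic_fetch_add(&_totalram_pages, count);
}

/* Exercise the pair: add then partially subtract. */
static unsigned long demo_account(void)
{
	totalram_pages_add(1024);
	totalram_pages_add(-24);
	return totalram_pages();
}
```

Readers like the `(totalram_pages()/nr)/256` batch computation above only pay an atomic load, and writers no longer need a spinlock.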
-16
mm/mmap.c
··· 2973 2973 return ret; 2974 2974 } 2975 2975 2976 - static inline void verify_mm_writelocked(struct mm_struct *mm) 2977 - { 2978 - #ifdef CONFIG_DEBUG_VM 2979 - if (unlikely(down_read_trylock(&mm->mmap_sem))) { 2980 - WARN_ON(1); 2981 - up_read(&mm->mmap_sem); 2982 - } 2983 - #endif 2984 - } 2985 - 2986 2976 /* 2987 2977 * this is really a simplified "do_mmap". it only handles 2988 2978 * anonymous maps. eventually we may be able to do some ··· 2998 3008 error = mlock_future_check(mm, mm->def_flags, len); 2999 3009 if (error) 3000 3010 return error; 3001 - 3002 - /* 3003 - * mm->mmap_sem is required to protect against another thread 3004 - * changing the mappings in case we sleep. 3005 - */ 3006 - verify_mm_writelocked(mm); 3007 3011 3008 3012 /* 3009 3013 * Clear old maps. this also does some error checking for us
+11 -20
mm/mmu_notifier.c
··· 35 35 } 36 36 EXPORT_SYMBOL_GPL(mmu_notifier_call_srcu); 37 37 38 - void mmu_notifier_synchronize(void) 39 - { 40 - /* Wait for any running method to finish. */ 41 - srcu_barrier(&srcu); 42 - } 43 - EXPORT_SYMBOL_GPL(mmu_notifier_synchronize); 44 - 45 38 /* 46 39 * This function can't run concurrently against mmu_notifier_register 47 40 * because mm->mm_users > 0 during mmu_notifier_register and exit_mmap ··· 167 174 srcu_read_unlock(&srcu, id); 168 175 } 169 176 170 - int __mmu_notifier_invalidate_range_start(struct mm_struct *mm, 171 - unsigned long start, unsigned long end, 172 - bool blockable) 177 + int __mmu_notifier_invalidate_range_start(struct mmu_notifier_range *range) 173 178 { 174 179 struct mmu_notifier *mn; 175 180 int ret = 0; 176 181 int id; 177 182 178 183 id = srcu_read_lock(&srcu); 179 - hlist_for_each_entry_rcu(mn, &mm->mmu_notifier_mm->list, hlist) { 184 + hlist_for_each_entry_rcu(mn, &range->mm->mmu_notifier_mm->list, hlist) { 180 185 if (mn->ops->invalidate_range_start) { 181 - int _ret = mn->ops->invalidate_range_start(mn, mm, start, end, blockable); 186 + int _ret = mn->ops->invalidate_range_start(mn, range); 182 187 if (_ret) { 183 188 pr_info("%pS callback failed with %d in %sblockable context.\n", 184 - mn->ops->invalidate_range_start, _ret, 185 - !blockable ? "non-" : ""); 189 + mn->ops->invalidate_range_start, _ret, 190 + !range->blockable ? 
"non-" : ""); 186 191 ret = _ret; 187 192 } 188 193 } ··· 191 200 } 192 201 EXPORT_SYMBOL_GPL(__mmu_notifier_invalidate_range_start); 193 202 194 - void __mmu_notifier_invalidate_range_end(struct mm_struct *mm, 195 - unsigned long start, 196 - unsigned long end, 203 + void __mmu_notifier_invalidate_range_end(struct mmu_notifier_range *range, 197 204 bool only_end) 198 205 { 199 206 struct mmu_notifier *mn; 200 207 int id; 201 208 202 209 id = srcu_read_lock(&srcu); 203 - hlist_for_each_entry_rcu(mn, &mm->mmu_notifier_mm->list, hlist) { 210 + hlist_for_each_entry_rcu(mn, &range->mm->mmu_notifier_mm->list, hlist) { 204 211 /* 205 212 * Call invalidate_range here too to avoid the need for the 206 213 * subsystem of having to register an invalidate_range_end ··· 213 224 * already happen under page table lock. 214 225 */ 215 226 if (!only_end && mn->ops->invalidate_range) 216 - mn->ops->invalidate_range(mn, mm, start, end); 227 + mn->ops->invalidate_range(mn, range->mm, 228 + range->start, 229 + range->end); 217 230 if (mn->ops->invalidate_range_end) 218 - mn->ops->invalidate_range_end(mn, mm, start, end); 231 + mn->ops->invalidate_range_end(mn, range); 219 232 } 220 233 srcu_read_unlock(&srcu, id); 221 234 }
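The mmu_notifier.c, migrate.c, mprotect.c and mremap.c hunks all make the same refactor: the `(mm, start, end, blockable)` argument lists become a single `struct mmu_notifier_range` that is initialized once and threaded through start/end. A minimal model of that shape; field names follow the patch, while the wrapper bodies are an assumption about where `blockable` gets set (the real callbacks walk the registered notifier list):

```c
#include <stdbool.h>
#include <stddef.h>

struct mm_struct;                /* opaque here */

struct mmu_notifier_range {
	struct mm_struct *mm;
	unsigned long start;
	unsigned long end;
	bool blockable;
};

static void mmu_notifier_range_init(struct mmu_notifier_range *range,
				    struct mm_struct *mm,
				    unsigned long start, unsigned long end)
{
	range->mm = mm;
	range->start = start;
	range->end = end;
}

/* Blocking variant: callbacks may sleep. */
static void invalidate_range_start(struct mmu_notifier_range *range)
{
	range->blockable = true;
	/* ...invoke notifier callbacks with the whole range struct... */
}

static unsigned long demo_len(void)
{
	struct mmu_notifier_range r;

	mmu_notifier_range_init(&r, NULL, 0x1000, 0x3000);
	return r.end - r.start;
}

static int demo_blockable(void)
{
	struct mmu_notifier_range r;

	mmu_notifier_range_init(&r, NULL, 0, 0x1000);
	invalidate_range_start(&r);
	return r.blockable;
}
```

Bundling the parameters means future fields (as with `blockable` here) can be added without touching every call site, which is visible in how small the per-caller diffs above are.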
+8 -7
mm/mprotect.c
··· 167 167 pgprot_t newprot, int dirty_accountable, int prot_numa) 168 168 { 169 169 pmd_t *pmd; 170 - struct mm_struct *mm = vma->vm_mm; 171 170 unsigned long next; 172 171 unsigned long pages = 0; 173 172 unsigned long nr_huge_updates = 0; 174 - unsigned long mni_start = 0; 173 + struct mmu_notifier_range range; 174 + 175 + range.start = 0; 175 176 176 177 pmd = pmd_offset(pud, addr); 177 178 do { ··· 184 183 goto next; 185 184 186 185 /* invoke the mmu notifier if the pmd is populated */ 187 - if (!mni_start) { 188 - mni_start = addr; 189 - mmu_notifier_invalidate_range_start(mm, mni_start, end); 186 + if (!range.start) { 187 + mmu_notifier_range_init(&range, vma->vm_mm, addr, end); 188 + mmu_notifier_invalidate_range_start(&range); 190 189 } 191 190 192 191 if (is_swap_pmd(*pmd) || pmd_trans_huge(*pmd) || pmd_devmap(*pmd)) { ··· 215 214 cond_resched(); 216 215 } while (pmd++, addr = next, addr != end); 217 216 218 - if (mni_start) 219 - mmu_notifier_invalidate_range_end(mm, mni_start, end); 217 + if (range.start) 218 + mmu_notifier_invalidate_range_end(&range); 220 219 221 220 if (nr_huge_updates) 222 221 count_vm_numa_events(NUMA_HUGE_PTE_UPDATES, nr_huge_updates);
+4 -6
mm/mremap.c
··· 197 197 bool need_rmap_locks) 198 198 { 199 199 unsigned long extent, next, old_end; 200 + struct mmu_notifier_range range; 200 201 pmd_t *old_pmd, *new_pmd; 201 - unsigned long mmun_start; /* For mmu_notifiers */ 202 - unsigned long mmun_end; /* For mmu_notifiers */ 203 202 204 203 old_end = old_addr + len; 205 204 flush_cache_range(vma, old_addr, old_end); 206 205 207 - mmun_start = old_addr; 208 - mmun_end = old_end; 209 - mmu_notifier_invalidate_range_start(vma->vm_mm, mmun_start, mmun_end); 206 + mmu_notifier_range_init(&range, vma->vm_mm, old_addr, old_end); 207 + mmu_notifier_invalidate_range_start(&range); 210 208 211 209 for (; old_addr < old_end; old_addr += extent, new_addr += extent) { 212 210 cond_resched(); ··· 245 247 new_pmd, new_addr, need_rmap_locks); 246 248 } 247 249 248 - mmu_notifier_invalidate_range_end(vma->vm_mm, mmun_start, mmun_end); 250 + mmu_notifier_invalidate_range_end(&range); 249 251 250 252 return len + old_addr - old_end; /* how much done */ 251 253 }
+32 -19
mm/oom_kill.c
··· 245 245 return points > 0 ? points : 1; 246 246 } 247 247 248 - enum oom_constraint { 249 - CONSTRAINT_NONE, 250 - CONSTRAINT_CPUSET, 251 - CONSTRAINT_MEMORY_POLICY, 252 - CONSTRAINT_MEMCG, 248 + static const char * const oom_constraint_text[] = { 249 + [CONSTRAINT_NONE] = "CONSTRAINT_NONE", 250 + [CONSTRAINT_CPUSET] = "CONSTRAINT_CPUSET", 251 + [CONSTRAINT_MEMORY_POLICY] = "CONSTRAINT_MEMORY_POLICY", 252 + [CONSTRAINT_MEMCG] = "CONSTRAINT_MEMCG", 253 253 }; 254 254 255 255 /* ··· 269 269 } 270 270 271 271 /* Default to all available memory */ 272 - oc->totalpages = totalram_pages + total_swap_pages; 272 + oc->totalpages = totalram_pages() + total_swap_pages; 273 273 274 274 if (!IS_ENABLED(CONFIG_NUMA)) 275 275 return CONSTRAINT_NONE; ··· 428 428 rcu_read_unlock(); 429 429 } 430 430 431 + static void dump_oom_summary(struct oom_control *oc, struct task_struct *victim) 432 + { 433 + /* one line summary of the oom killer context. */ 434 + pr_info("oom-kill:constraint=%s,nodemask=%*pbl", 435 + oom_constraint_text[oc->constraint], 436 + nodemask_pr_args(oc->nodemask)); 437 + cpuset_print_current_mems_allowed(); 438 + mem_cgroup_print_oom_context(oc->memcg, victim); 439 + pr_cont(",task=%s,pid=%d,uid=%d\n", victim->comm, victim->pid, 440 + from_kuid(&init_user_ns, task_uid(victim))); 441 + } 442 + 431 443 static void dump_header(struct oom_control *oc, struct task_struct *p) 432 444 { 433 - pr_warn("%s invoked oom-killer: gfp_mask=%#x(%pGg), nodemask=%*pbl, order=%d, oom_score_adj=%hd\n", 434 - current->comm, oc->gfp_mask, &oc->gfp_mask, 435 - nodemask_pr_args(oc->nodemask), oc->order, 445 + pr_warn("%s invoked oom-killer: gfp_mask=%#x(%pGg), order=%d, oom_score_adj=%hd\n", 446 + current->comm, oc->gfp_mask, &oc->gfp_mask, oc->order, 436 447 current->signal->oom_score_adj); 437 448 if (!IS_ENABLED(CONFIG_COMPACTION) && oc->order) 438 449 pr_warn("COMPACTION is disabled!!!\n"); 439 450 440 - cpuset_print_current_mems_allowed(); 441 451 dump_stack(); 442 452 if 
(is_memcg_oom(oc)) 443 - mem_cgroup_print_oom_info(oc->memcg, p); 453 + mem_cgroup_print_oom_meminfo(oc->memcg); 444 454 else { 445 455 show_mem(SHOW_MEM_FILTER_NODES, oc->nodemask); 446 456 if (is_dump_unreclaim_slabs()) ··· 458 448 } 459 449 if (sysctl_oom_dump_tasks) 460 450 dump_tasks(oc->memcg, oc->nodemask); 451 + if (p) 452 + dump_oom_summary(oc, p); 461 453 } 462 454 463 455 /* ··· 528 516 * count elevated without a good reason. 529 517 */ 530 518 if (vma_is_anonymous(vma) || !(vma->vm_flags & VM_SHARED)) { 531 - const unsigned long start = vma->vm_start; 532 - const unsigned long end = vma->vm_end; 519 + struct mmu_notifier_range range; 533 520 struct mmu_gather tlb; 534 521 535 - tlb_gather_mmu(&tlb, mm, start, end); 536 - if (mmu_notifier_invalidate_range_start_nonblock(mm, start, end)) { 537 - tlb_finish_mmu(&tlb, start, end); 522 + mmu_notifier_range_init(&range, mm, vma->vm_start, 523 + vma->vm_end); 524 + tlb_gather_mmu(&tlb, mm, range.start, range.end); 525 + if (mmu_notifier_invalidate_range_start_nonblock(&range)) { 526 + tlb_finish_mmu(&tlb, range.start, range.end); 538 527 ret = false; 539 528 continue; 540 529 } 541 - unmap_page_range(&tlb, vma, start, end, NULL); 542 - mmu_notifier_invalidate_range_end(mm, start, end); 543 - tlb_finish_mmu(&tlb, start, end); 530 + unmap_page_range(&tlb, vma, range.start, range.end, NULL); 531 + mmu_notifier_invalidate_range_end(&range); 532 + tlb_finish_mmu(&tlb, range.start, range.end); 544 533 } 545 534 } 546 535
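The oom_kill.c hunk replaces the local `enum oom_constraint` with a designated-initializer name table, consumed by the new one-line `dump_oom_summary()`. The pattern stands alone and is easy to reproduce; the `name_matches()` helper is added here purely for demonstration:

```c
#include <string.h>

enum oom_constraint {
	CONSTRAINT_NONE,
	CONSTRAINT_CPUSET,
	CONSTRAINT_MEMORY_POLICY,
	CONSTRAINT_MEMCG,
};

/* Designated initializers keep the table correct even if the enum
 * values are ever reordered. */
static const char * const oom_constraint_text[] = {
	[CONSTRAINT_NONE] = "CONSTRAINT_NONE",
	[CONSTRAINT_CPUSET] = "CONSTRAINT_CPUSET",
	[CONSTRAINT_MEMORY_POLICY] = "CONSTRAINT_MEMORY_POLICY",
	[CONSTRAINT_MEMCG] = "CONSTRAINT_MEMCG",
};

static const char *constraint_name(enum oom_constraint c)
{
	return oom_constraint_text[c];
}

/* Demo-only helper so the mapping can be checked as an int. */
static int name_matches(enum oom_constraint c, const char *s)
{
	return strcmp(constraint_name(c), s) == 0;
}
```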
+21 -14
mm/page-writeback.c
··· 2154 2154 { 2155 2155 int ret = 0; 2156 2156 int done = 0; 2157 + int error; 2157 2158 struct pagevec pvec; 2158 2159 int nr_pages; 2159 2160 pgoff_t uninitialized_var(writeback_index); ··· 2228 2227 goto continue_unlock; 2229 2228 2230 2229 trace_wbc_writepage(wbc, inode_to_bdi(mapping->host)); 2231 - ret = (*writepage)(page, wbc, data); 2232 - if (unlikely(ret)) { 2233 - if (ret == AOP_WRITEPAGE_ACTIVATE) { 2230 + error = (*writepage)(page, wbc, data); 2231 + if (unlikely(error)) { 2232 + /* 2233 + * Handle errors according to the type of 2234 + * writeback. There's no need to continue for 2235 + * background writeback. Just push done_index 2236 + * past this page so media errors won't choke 2237 + * writeout for the entire file. For integrity 2238 + * writeback, we must process the entire dirty 2239 + * set regardless of errors because the fs may 2240 + * still have state to clear for each page. In 2241 + * that case we continue processing and return 2242 + * the first error. 2243 + */ 2244 + if (error == AOP_WRITEPAGE_ACTIVATE) { 2234 2245 unlock_page(page); 2235 - ret = 0; 2236 - } else { 2237 - /* 2238 - * done_index is set past this page, 2239 - * so media errors will not choke 2240 - * background writeout for the entire 2241 - * file. This has consequences for 2242 - * range_cyclic semantics (ie. it may 2243 - * not be suitable for data integrity 2244 - * writeout). 2245 - */ 2246 + error = 0; 2247 + } else if (wbc->sync_mode != WB_SYNC_ALL) { 2248 + ret = error; 2246 2249 done_index = page->index + 1; 2247 2250 done = 1; 2248 2251 break; 2249 2252 } 2253 + if (!ret) 2254 + ret = error; 2250 2255 } 2251 2256 2252 2257 /*
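The page-writeback.c hunk separates the per-page `error` from the returned `ret` so the two writeback modes behave differently: integrity writeback (`WB_SYNC_ALL`) keeps processing the whole dirty set and reports the first error, while background writeback bails out at the first failure. A simplified standalone model of that policy; the function and parameter names here are illustrative, not the kernel's:

```c
#define WB_SYNC_NONE	0
#define WB_SYNC_ALL	1

/* page_errors[i] is 0 for a clean write, or a negative errno. */
static int writeback_pages(const int *page_errors, int nr_pages,
			   int sync_mode, int *pages_written)
{
	int ret = 0;
	int i;

	*pages_written = 0;
	for (i = 0; i < nr_pages; i++) {
		int error = page_errors[i];

		if (error) {
			if (sync_mode != WB_SYNC_ALL)
				return error;	/* background: stop early */
			if (!ret)
				ret = error;	/* integrity: keep first error */
			continue;		/* ...but process everything */
		}
		(*pages_written)++;
	}
	return ret;
}

/* Probe both modes against the same error pattern. */
static int demo_ret(int sync_mode)
{
	static const int errs[] = { 0, -5, 0, 0 };
	int n;

	return writeback_pages(errs, 4, sync_mode, &n);
}

static int demo_written(int sync_mode)
{
	static const int errs[] = { 0, -5, 0, 0 };
	int n;

	writeback_pages(errs, 4, sync_mode, &n);
	return n;
}
```

Both modes report the same first error, but only integrity writeback drains the remaining pages, which is the behavior the long comment in the hunk spells out.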
+281 -123
mm/page_alloc.c
··· 16 16 17 17 #include <linux/stddef.h> 18 18 #include <linux/mm.h> 19 + #include <linux/highmem.h> 19 20 #include <linux/swap.h> 20 21 #include <linux/interrupt.h> 21 22 #include <linux/pagemap.h> ··· 97 96 #endif 98 97 99 98 /* work_structs for global per-cpu drains */ 99 + struct pcpu_drain { 100 + struct zone *zone; 101 + struct work_struct work; 102 + }; 100 103 DEFINE_MUTEX(pcpu_drain_mutex); 101 - DEFINE_PER_CPU(struct work_struct, pcpu_drain); 104 + DEFINE_PER_CPU(struct pcpu_drain, pcpu_drain); 102 105 103 106 #ifdef CONFIG_GCC_PLUGIN_LATENT_ENTROPY 104 107 volatile unsigned long latent_entropy __latent_entropy; ··· 126 121 }; 127 122 EXPORT_SYMBOL(node_states); 128 123 129 - /* Protect totalram_pages and zone->managed_pages */ 130 - static DEFINE_SPINLOCK(managed_page_count_lock); 131 - 132 - unsigned long totalram_pages __read_mostly; 124 + atomic_long_t _totalram_pages __read_mostly; 125 + EXPORT_SYMBOL(_totalram_pages); 133 126 unsigned long totalreserve_pages __read_mostly; 134 127 unsigned long totalcma_pages __read_mostly; 135 128 ··· 240 237 #endif 241 238 }; 242 239 243 - char * const migratetype_names[MIGRATE_TYPES] = { 240 + const char * const migratetype_names[MIGRATE_TYPES] = { 244 241 "Unmovable", 245 242 "Movable", 246 243 "Reclaimable", ··· 266 263 267 264 int min_free_kbytes = 1024; 268 265 int user_min_free_kbytes = -1; 266 + int watermark_boost_factor __read_mostly = 15000; 269 267 int watermark_scale_factor = 10; 270 268 271 - static unsigned long nr_kernel_pages __meminitdata; 272 - static unsigned long nr_all_pages __meminitdata; 273 - static unsigned long dma_reserve __meminitdata; 269 + static unsigned long nr_kernel_pages __initdata; 270 + static unsigned long nr_all_pages __initdata; 271 + static unsigned long dma_reserve __initdata; 274 272 275 273 #ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP 276 - static unsigned long arch_zone_lowest_possible_pfn[MAX_NR_ZONES] __meminitdata; 277 - static unsigned long 
arch_zone_highest_possible_pfn[MAX_NR_ZONES] __meminitdata; 274 + static unsigned long arch_zone_lowest_possible_pfn[MAX_NR_ZONES] __initdata; 275 + static unsigned long arch_zone_highest_possible_pfn[MAX_NR_ZONES] __initdata; 278 276 static unsigned long required_kernelcore __initdata; 279 277 static unsigned long required_kernelcore_percent __initdata; 280 278 static unsigned long required_movablecore __initdata; 281 279 static unsigned long required_movablecore_percent __initdata; 282 - static unsigned long zone_movable_pfn[MAX_NUMNODES] __meminitdata; 280 + static unsigned long zone_movable_pfn[MAX_NUMNODES] __initdata; 283 281 static bool mirrored_kernelcore __meminitdata; 284 282 285 283 /* movable_zone is the "real" zone pages in ZONE_MOVABLE are taken from */ ··· 298 294 int page_group_by_mobility_disabled __read_mostly; 299 295 300 296 #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT 297 + /* 298 + * During boot we initialize deferred pages on-demand, as needed, but once 299 + * page_alloc_init_late() has finished, the deferred pages are all initialized, 300 + * and we can permanently disable that path. 301 + */ 302 + static DEFINE_STATIC_KEY_TRUE(deferred_pages); 303 + 304 + /* 305 + * Calling kasan_free_pages() only after deferred memory initialization 306 + * has completed. Poisoning pages during deferred memory init will greatly 307 + * lengthen the process and cause problem in large memory systems as the 308 + * deferred pages initialization is done with interrupt disabled. 309 + * 310 + * Assuming that there will be no reference to those newly initialized 311 + * pages before they are ever allocated, this should have no effect on 312 + * KASAN memory tracking as the poison will be properly inserted at page 313 + * allocation time. The only corner case is when pages are allocated by 314 + * on-demand allocation and then freed again before the deferred pages 315 + * initialization is done, but this is not likely to happen. 
316 + */ 317 + static inline void kasan_free_nondeferred_pages(struct page *page, int order) 318 + { 319 + if (!static_branch_unlikely(&deferred_pages)) 320 + kasan_free_pages(page, order); 321 + } 322 + 301 323 /* Returns true if the struct page for the pfn is uninitialised */ 302 324 static inline bool __meminit early_page_uninitialised(unsigned long pfn) 303 325 { ··· 356 326 /* Always populate low zones for address-constrained allocations */ 357 327 if (end_pfn < pgdat_end_pfn(NODE_DATA(nid))) 358 328 return false; 329 + 330 + /* 331 + * We start only with one section of pages, more pages are added as 332 + * needed until the rest of deferred pages are initialized. 333 + */ 359 334 nr_initialised++; 360 - if ((nr_initialised > NODE_DATA(nid)->static_init_pgcnt) && 335 + if ((nr_initialised > PAGES_PER_SECTION) && 361 336 (pfn & (PAGES_PER_SECTION - 1)) == 0) { 362 337 NODE_DATA(nid)->first_deferred_pfn = pfn; 363 338 return true; ··· 370 335 return false; 371 336 } 372 337 #else 338 + #define kasan_free_nondeferred_pages(p, o) kasan_free_pages(p, o) 339 + 373 340 static inline bool early_page_uninitialised(unsigned long pfn) 374 341 { 375 342 return false; ··· 463 426 unsigned long old_word, word; 464 427 465 428 BUILD_BUG_ON(NR_PAGEBLOCK_BITS != 4); 429 + BUILD_BUG_ON(MIGRATE_TYPES > (1 << PB_migratetype_bits)); 466 430 467 431 bitmap = get_pageblock_bitmap(page, pfn); 468 432 bitidx = pfn_to_bitidx(page, pfn); ··· 1075 1037 arch_free_page(page, order); 1076 1038 kernel_poison_pages(page, 1 << order, 0); 1077 1039 kernel_map_pages(page, 1 << order, 0); 1078 - kasan_free_pages(page, order); 1040 + kasan_free_nondeferred_pages(page, order); 1079 1041 1080 1042 return true; 1081 1043 } ··· 1221 1183 init_page_count(page); 1222 1184 page_mapcount_reset(page); 1223 1185 page_cpupid_reset_last(page); 1186 + page_kasan_tag_reset(page); 1224 1187 1225 1188 INIT_LIST_HEAD(&page->lru); 1226 1189 #ifdef WANT_PAGE_VIRTUAL ··· 1318 1279 __ClearPageReserved(p); 1319 1280 
set_page_count(p, 0); 1320 1281 1321 - page_zone(page)->managed_pages += nr_pages; 1282 + atomic_long_add(nr_pages, &page_zone(page)->managed_pages); 1322 1283 set_page_refcounted(page); 1323 1284 __free_pages(page, order); 1324 1285 } ··· 1643 1604 pgdat_init_report_one_done(); 1644 1605 return 0; 1645 1606 } 1646 - 1647 - /* 1648 - * During boot we initialize deferred pages on-demand, as needed, but once 1649 - * page_alloc_init_late() has finished, the deferred pages are all initialized, 1650 - * and we can permanently disable that path. 1651 - */ 1652 - static DEFINE_STATIC_KEY_TRUE(deferred_pages); 1653 1607 1654 1608 /* 1655 1609 * If this zone has deferred pages, try to grow it by initializing enough ··· 2013 1981 */ 2014 1982 static int fallbacks[MIGRATE_TYPES][4] = { 2015 1983 [MIGRATE_UNMOVABLE] = { MIGRATE_RECLAIMABLE, MIGRATE_MOVABLE, MIGRATE_TYPES }, 2016 - [MIGRATE_RECLAIMABLE] = { MIGRATE_UNMOVABLE, MIGRATE_MOVABLE, MIGRATE_TYPES }, 2017 1984 [MIGRATE_MOVABLE] = { MIGRATE_RECLAIMABLE, MIGRATE_UNMOVABLE, MIGRATE_TYPES }, 1985 + [MIGRATE_RECLAIMABLE] = { MIGRATE_UNMOVABLE, MIGRATE_MOVABLE, MIGRATE_TYPES }, 2018 1986 #ifdef CONFIG_CMA 2019 1987 [MIGRATE_CMA] = { MIGRATE_TYPES }, /* Never used */ 2020 1988 #endif ··· 2161 2129 return false; 2162 2130 } 2163 2131 2132 + static inline void boost_watermark(struct zone *zone) 2133 + { 2134 + unsigned long max_boost; 2135 + 2136 + if (!watermark_boost_factor) 2137 + return; 2138 + 2139 + max_boost = mult_frac(zone->_watermark[WMARK_HIGH], 2140 + watermark_boost_factor, 10000); 2141 + max_boost = max(pageblock_nr_pages, max_boost); 2142 + 2143 + zone->watermark_boost = min(zone->watermark_boost + pageblock_nr_pages, 2144 + max_boost); 2145 + } 2146 + 2164 2147 /* 2165 2148 * This function implements actual steal behaviour. If order is large enough, 2166 2149 * we can steal whole pageblock. 
If not, we first move freepages in this ··· 2185 2138 * itself, so pages freed in the future will be put on the correct free list. 2186 2139 */ 2187 2140 static void steal_suitable_fallback(struct zone *zone, struct page *page, 2188 - int start_type, bool whole_block) 2141 + unsigned int alloc_flags, int start_type, bool whole_block) 2189 2142 { 2190 2143 unsigned int current_order = page_order(page); 2191 2144 struct free_area *area; ··· 2206 2159 change_pageblock_range(page, current_order, start_type); 2207 2160 goto single_page; 2208 2161 } 2162 + 2163 + /* 2164 + * Boost watermarks to increase reclaim pressure to reduce the 2165 + * likelihood of future fallbacks. Wake kswapd now as the node 2166 + * may be balanced overall and kswapd will not wake naturally. 2167 + */ 2168 + boost_watermark(zone); 2169 + if (alloc_flags & ALLOC_KSWAPD) 2170 + wakeup_kswapd(zone, 0, 0, zone_idx(zone)); 2209 2171 2210 2172 /* We are not allowed to try stealing from the whole block */ 2211 2173 if (!whole_block) ··· 2314 2258 * Limit the number reserved to 1 pageblock or roughly 1% of a zone. 2315 2259 * Check is race-prone but harmless. 2316 2260 */ 2317 - max_managed = (zone->managed_pages / 100) + pageblock_nr_pages; 2261 + max_managed = (zone_managed_pages(zone) / 100) + pageblock_nr_pages; 2318 2262 if (zone->nr_reserved_highatomic >= max_managed) 2319 2263 return; 2320 2264 ··· 2431 2375 * condition simpler. 2432 2376 */ 2433 2377 static __always_inline bool 2434 - __rmqueue_fallback(struct zone *zone, int order, int start_migratetype) 2378 + __rmqueue_fallback(struct zone *zone, int order, int start_migratetype, 2379 + unsigned int alloc_flags) 2435 2380 { 2436 2381 struct free_area *area; 2437 2382 int current_order; 2383 + int min_order = order; 2438 2384 struct page *page; 2439 2385 int fallback_mt; 2440 2386 bool can_steal; 2387 + 2388 + /* 2389 + * Do not steal pages from freelists belonging to other pageblocks 2390 + * i.e. orders < pageblock_order. 
If there are no local zones free, 2391 + * the zonelists will be reiterated without ALLOC_NOFRAGMENT. 2392 + */ 2393 + if (alloc_flags & ALLOC_NOFRAGMENT) 2394 + min_order = pageblock_order; 2441 2395 2442 2396 /* 2443 2397 * Find the largest available free page in the other list. This roughly 2444 2398 * approximates finding the pageblock with the most free pages, which 2445 2399 * would be too costly to do exactly. 2446 2400 */ 2447 - for (current_order = MAX_ORDER - 1; current_order >= order; 2401 + for (current_order = MAX_ORDER - 1; current_order >= min_order; 2448 2402 --current_order) { 2449 2403 area = &(zone->free_area[current_order]); 2450 2404 fallback_mt = find_suitable_fallback(area, current_order, ··· 2499 2433 page = list_first_entry(&area->free_list[fallback_mt], 2500 2434 struct page, lru); 2501 2435 2502 - steal_suitable_fallback(zone, page, start_migratetype, can_steal); 2436 + steal_suitable_fallback(zone, page, alloc_flags, start_migratetype, 2437 + can_steal); 2503 2438 2504 2439 trace_mm_page_alloc_extfrag(page, order, current_order, 2505 2440 start_migratetype, fallback_mt); ··· 2514 2447 * Call me with the zone->lock already held. 
2515 2448 */ 2516 2449 static __always_inline struct page * 2517 - __rmqueue(struct zone *zone, unsigned int order, int migratetype) 2450 + __rmqueue(struct zone *zone, unsigned int order, int migratetype, 2451 + unsigned int alloc_flags) 2518 2452 { 2519 2453 struct page *page; 2520 2454 ··· 2525 2457 if (migratetype == MIGRATE_MOVABLE) 2526 2458 page = __rmqueue_cma_fallback(zone, order); 2527 2459 2528 - if (!page && __rmqueue_fallback(zone, order, migratetype)) 2460 + if (!page && __rmqueue_fallback(zone, order, migratetype, 2461 + alloc_flags)) 2529 2462 goto retry; 2530 2463 } 2531 2464 ··· 2541 2472 */ 2542 2473 static int rmqueue_bulk(struct zone *zone, unsigned int order, 2543 2474 unsigned long count, struct list_head *list, 2544 - int migratetype) 2475 + int migratetype, unsigned int alloc_flags) 2545 2476 { 2546 2477 int i, alloced = 0; 2547 2478 2548 2479 spin_lock(&zone->lock); 2549 2480 for (i = 0; i < count; ++i) { 2550 - struct page *page = __rmqueue(zone, order, migratetype); 2481 + struct page *page = __rmqueue(zone, order, migratetype, 2482 + alloc_flags); 2551 2483 if (unlikely(page == NULL)) 2552 2484 break; 2553 2485 ··· 2662 2592 2663 2593 static void drain_local_pages_wq(struct work_struct *work) 2664 2594 { 2595 + struct pcpu_drain *drain; 2596 + 2597 + drain = container_of(work, struct pcpu_drain, work); 2598 + 2665 2599 /* 2666 2600 * drain_all_pages doesn't use proper cpu hotplug protection so 2667 2601 * we can race with cpu offline when the WQ can move this from ··· 2674 2600 * a different one. 
2675 2601 */ 2676 2602 preempt_disable(); 2677 - drain_local_pages(NULL); 2603 + drain_local_pages(drain->zone); 2678 2604 preempt_enable(); 2679 2605 } 2680 2606 ··· 2745 2671 } 2746 2672 2747 2673 for_each_cpu(cpu, &cpus_with_pcps) { 2748 - struct work_struct *work = per_cpu_ptr(&pcpu_drain, cpu); 2749 - INIT_WORK(work, drain_local_pages_wq); 2750 - queue_work_on(cpu, mm_percpu_wq, work); 2674 + struct pcpu_drain *drain = per_cpu_ptr(&pcpu_drain, cpu); 2675 + 2676 + drain->zone = zone; 2677 + INIT_WORK(&drain->work, drain_local_pages_wq); 2678 + queue_work_on(cpu, mm_percpu_wq, &drain->work); 2751 2679 } 2752 2680 for_each_cpu(cpu, &cpus_with_pcps) 2753 - flush_work(per_cpu_ptr(&pcpu_drain, cpu)); 2681 + flush_work(&per_cpu_ptr(&pcpu_drain, cpu)->work); 2754 2682 2755 2683 mutex_unlock(&pcpu_drain_mutex); 2756 2684 } ··· 3010 2934 3011 2935 /* Remove page from the per-cpu list, caller must protect the list */ 3012 2936 static struct page *__rmqueue_pcplist(struct zone *zone, int migratetype, 2937 + unsigned int alloc_flags, 3013 2938 struct per_cpu_pages *pcp, 3014 2939 struct list_head *list) 3015 2940 { ··· 3020 2943 if (list_empty(list)) { 3021 2944 pcp->count += rmqueue_bulk(zone, 0, 3022 2945 pcp->batch, list, 3023 - migratetype); 2946 + migratetype, alloc_flags); 3024 2947 if (unlikely(list_empty(list))) 3025 2948 return NULL; 3026 2949 } ··· 3036 2959 /* Lock and remove page from the per-cpu list */ 3037 2960 static struct page *rmqueue_pcplist(struct zone *preferred_zone, 3038 2961 struct zone *zone, unsigned int order, 3039 - gfp_t gfp_flags, int migratetype) 2962 + gfp_t gfp_flags, int migratetype, 2963 + unsigned int alloc_flags) 3040 2964 { 3041 2965 struct per_cpu_pages *pcp; 3042 2966 struct list_head *list; ··· 3047 2969 local_irq_save(flags); 3048 2970 pcp = &this_cpu_ptr(zone->pageset)->pcp; 3049 2971 list = &pcp->lists[migratetype]; 3050 - page = __rmqueue_pcplist(zone, migratetype, pcp, list); 2972 + page = __rmqueue_pcplist(zone, migratetype, 
alloc_flags, pcp, list); 3051 2973 if (page) { 3052 2974 __count_zid_vm_events(PGALLOC, page_zonenum(page), 1 << order); 3053 2975 zone_statistics(preferred_zone, zone); ··· 3070 2992 3071 2993 if (likely(order == 0)) { 3072 2994 page = rmqueue_pcplist(preferred_zone, zone, order, 3073 - gfp_flags, migratetype); 2995 + gfp_flags, migratetype, alloc_flags); 3074 2996 goto out; 3075 2997 } 3076 2998 ··· 3089 3011 trace_mm_page_alloc_zone_locked(page, order, migratetype); 3090 3012 } 3091 3013 if (!page) 3092 - page = __rmqueue(zone, order, migratetype); 3014 + page = __rmqueue(zone, order, migratetype, alloc_flags); 3093 3015 } while (page && check_new_pages(page, order)); 3094 3016 spin_unlock(&zone->lock); 3095 3017 if (!page) ··· 3131 3053 } 3132 3054 __setup("fail_page_alloc=", setup_fail_page_alloc); 3133 3055 3134 - static bool should_fail_alloc_page(gfp_t gfp_mask, unsigned int order) 3056 + static bool __should_fail_alloc_page(gfp_t gfp_mask, unsigned int order) 3135 3057 { 3136 3058 if (order < fail_page_alloc.min_order) 3137 3059 return false; ··· 3181 3103 3182 3104 #else /* CONFIG_FAIL_PAGE_ALLOC */ 3183 3105 3184 - static inline bool should_fail_alloc_page(gfp_t gfp_mask, unsigned int order) 3106 + static inline bool __should_fail_alloc_page(gfp_t gfp_mask, unsigned int order) 3185 3107 { 3186 3108 return false; 3187 3109 } 3188 3110 3189 3111 #endif /* CONFIG_FAIL_PAGE_ALLOC */ 3112 + 3113 + static noinline bool should_fail_alloc_page(gfp_t gfp_mask, unsigned int order) 3114 + { 3115 + return __should_fail_alloc_page(gfp_mask, order); 3116 + } 3117 + ALLOW_ERROR_INJECTION(should_fail_alloc_page, TRUE); 3190 3118 3191 3119 /* 3192 3120 * Return true if free base pages are above 'mark'. For high-order checks it ··· 3338 3254 #endif /* CONFIG_NUMA */ 3339 3255 3340 3256 /* 3257 + * The restriction on ZONE_DMA32 as being a suitable zone to use to avoid 3258 + * fragmentation is subtle. 
If the preferred zone was HIGHMEM then 3259 + * premature use of a lower zone may cause lowmem pressure problems that 3260 + * are worse than fragmentation. If the next zone is ZONE_DMA then it is 3261 + * probably too small. It only makes sense to spread allocations to avoid 3262 + * fragmentation between the Normal and DMA32 zones. 3263 + */ 3264 + static inline unsigned int 3265 + alloc_flags_nofragment(struct zone *zone, gfp_t gfp_mask) 3266 + { 3267 + unsigned int alloc_flags = 0; 3268 + 3269 + if (gfp_mask & __GFP_KSWAPD_RECLAIM) 3270 + alloc_flags |= ALLOC_KSWAPD; 3271 + 3272 + #ifdef CONFIG_ZONE_DMA32 3273 + if (zone_idx(zone) != ZONE_NORMAL) 3274 + goto out; 3275 + 3276 + /* 3277 + * If ZONE_DMA32 exists, assume it is the one after ZONE_NORMAL and 3278 + * the pointer is within zone->zone_pgdat->node_zones[]. Also assume 3279 + * on UMA that if Normal is populated then so is DMA32. 3280 + */ 3281 + BUILD_BUG_ON(ZONE_NORMAL - ZONE_DMA32 != 1); 3282 + if (nr_online_nodes > 1 && !populated_zone(--zone)) 3283 + goto out; 3284 + 3285 + out: 3286 + #endif /* CONFIG_ZONE_DMA32 */ 3287 + return alloc_flags; 3288 + } 3289 + 3290 + /* 3341 3291 * get_page_from_freelist goes through the zonelist trying to allocate 3342 3292 * a page. 3343 3293 */ ··· 3379 3261 get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags, 3380 3262 const struct alloc_context *ac) 3381 3263 { 3382 - struct zoneref *z = ac->preferred_zoneref; 3264 + struct zoneref *z; 3383 3265 struct zone *zone; 3384 3266 struct pglist_data *last_pgdat_dirty_limit = NULL; 3267 + bool no_fallback; 3385 3268 3269 + retry: 3386 3270 /* 3387 3271 * Scan zonelist, looking for a zone with enough free. 3388 3272 * See also __cpuset_node_allowed() comment in kernel/cpuset.c. 
3389 3273 */ 3274 + no_fallback = alloc_flags & ALLOC_NOFRAGMENT; 3275 + z = ac->preferred_zoneref; 3390 3276 for_next_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx, 3391 3277 ac->nodemask) { 3392 3278 struct page *page; ··· 3429 3307 } 3430 3308 } 3431 3309 3432 - mark = zone->watermark[alloc_flags & ALLOC_WMARK_MASK]; 3310 + if (no_fallback && nr_online_nodes > 1 && 3311 + zone != ac->preferred_zoneref->zone) { 3312 + int local_nid; 3313 + 3314 + /* 3315 + * If moving to a remote node, retry but allow 3316 + * fragmenting fallbacks. Locality is more important 3317 + * than fragmentation avoidance. 3318 + */ 3319 + local_nid = zone_to_nid(ac->preferred_zoneref->zone); 3320 + if (zone_to_nid(zone) != local_nid) { 3321 + alloc_flags &= ~ALLOC_NOFRAGMENT; 3322 + goto retry; 3323 + } 3324 + } 3325 + 3326 + mark = wmark_pages(zone, alloc_flags & ALLOC_WMARK_MASK); 3433 3327 if (!zone_watermark_fast(zone, order, mark, 3434 3328 ac_classzone_idx(ac), alloc_flags)) { 3435 3329 int ret; ··· 3512 3374 } 3513 3375 } 3514 3376 3377 + /* 3378 + * It's possible on a UMA machine to get through all zones that are 3379 + * fragmented. If avoiding fragmentation, reset and try again. 
3380 + */ 3381 + if (no_fallback) { 3382 + alloc_flags &= ~ALLOC_NOFRAGMENT; 3383 + goto retry; 3384 + } 3385 + 3515 3386 return NULL; 3516 3387 } 3517 3388 ··· 3560 3413 va_start(args, fmt); 3561 3414 vaf.fmt = fmt; 3562 3415 vaf.va = &args; 3563 - pr_warn("%s: %pV, mode:%#x(%pGg), nodemask=%*pbl\n", 3416 + pr_warn("%s: %pV, mode:%#x(%pGg), nodemask=%*pbl", 3564 3417 current->comm, &vaf, gfp_mask, &gfp_mask, 3565 3418 nodemask_pr_args(nodemask)); 3566 3419 va_end(args); 3567 3420 3568 3421 cpuset_print_current_mems_allowed(); 3569 - 3422 + pr_cont("\n"); 3570 3423 dump_stack(); 3571 3424 warn_alloc_show_mem(gfp_mask, nodemask); 3572 3425 } ··· 4008 3861 } else if (unlikely(rt_task(current)) && !in_interrupt()) 4009 3862 alloc_flags |= ALLOC_HARDER; 4010 3863 3864 + if (gfp_mask & __GFP_KSWAPD_RECLAIM) 3865 + alloc_flags |= ALLOC_KSWAPD; 3866 + 4011 3867 #ifdef CONFIG_CMA 4012 3868 if (gfpflags_to_migratetype(gfp_mask) == MIGRATE_MOVABLE) 4013 3869 alloc_flags |= ALLOC_CMA; ··· 4242 4092 if (!ac->preferred_zoneref->zone) 4243 4093 goto nopage; 4244 4094 4245 - if (gfp_mask & __GFP_KSWAPD_RECLAIM) 4095 + if (alloc_flags & ALLOC_KSWAPD) 4246 4096 wake_all_kswapds(order, gfp_mask, ac); 4247 4097 4248 4098 /* ··· 4300 4150 4301 4151 retry: 4302 4152 /* Ensure kswapd doesn't accidentally go to sleep as long as we loop */ 4303 - if (gfp_mask & __GFP_KSWAPD_RECLAIM) 4153 + if (alloc_flags & ALLOC_KSWAPD) 4304 4154 wake_all_kswapds(order, gfp_mask, ac); 4305 4155 4306 4156 reserve_flags = __gfp_pfmemalloc_flags(gfp_mask); ··· 4519 4369 4520 4370 finalise_ac(gfp_mask, &ac); 4521 4371 4372 + /* 4373 + * Forbid the first pass from falling back to types that fragment 4374 + * memory until all local zones are considered. 
4375 + */ 4376 + alloc_flags |= alloc_flags_nofragment(ac.preferred_zoneref->zone, gfp_mask); 4377 + 4522 4378 /* First allocation attempt */ 4523 4379 page = get_page_from_freelist(alloc_mask, order, alloc_flags, &ac); 4524 4380 if (likely(page)) ··· 4583 4427 } 4584 4428 EXPORT_SYMBOL(get_zeroed_page); 4585 4429 4586 - void __free_pages(struct page *page, unsigned int order) 4430 + static inline void free_the_page(struct page *page, unsigned int order) 4587 4431 { 4588 - if (put_page_testzero(page)) { 4589 - if (order == 0) 4590 - free_unref_page(page); 4591 - else 4592 - __free_pages_ok(page, order); 4593 - } 4432 + if (order == 0) /* Via pcp? */ 4433 + free_unref_page(page); 4434 + else 4435 + __free_pages_ok(page, order); 4594 4436 } 4595 4437 4438 + void __free_pages(struct page *page, unsigned int order) 4439 + { 4440 + if (put_page_testzero(page)) 4441 + free_the_page(page, order); 4442 + } 4596 4443 EXPORT_SYMBOL(__free_pages); 4597 4444 4598 4445 void free_pages(unsigned long addr, unsigned int order) ··· 4644 4485 { 4645 4486 VM_BUG_ON_PAGE(page_ref_count(page) == 0, page); 4646 4487 4647 - if (page_ref_sub_and_test(page, count)) { 4648 - unsigned int order = compound_order(page); 4649 - 4650 - if (order == 0) 4651 - free_unref_page(page); 4652 - else 4653 - __free_pages_ok(page, order); 4654 - } 4488 + if (page_ref_sub_and_test(page, count)) 4489 + free_the_page(page, compound_order(page)); 4655 4490 } 4656 4491 EXPORT_SYMBOL(__page_frag_cache_drain); 4657 4492 ··· 4711 4558 struct page *page = virt_to_head_page(addr); 4712 4559 4713 4560 if (unlikely(put_page_testzero(page))) 4714 - __free_pages_ok(page, compound_order(page)); 4561 + free_the_page(page, compound_order(page)); 4715 4562 } 4716 4563 EXPORT_SYMBOL(page_frag_free); 4717 4564 ··· 4813 4660 struct zonelist *zonelist = node_zonelist(numa_node_id(), GFP_KERNEL); 4814 4661 4815 4662 for_each_zone_zonelist(zone, z, zonelist, offset) { 4816 - unsigned long size = zone->managed_pages; 4663 + 
unsigned long size = zone_managed_pages(zone); 4817 4664 unsigned long high = high_wmark_pages(zone); 4818 4665 if (size > high) 4819 4666 sum += size - high; ··· 4865 4712 pages[lru] = global_node_page_state(NR_LRU_BASE + lru); 4866 4713 4867 4714 for_each_zone(zone) 4868 - wmark_low += zone->watermark[WMARK_LOW]; 4715 + wmark_low += low_wmark_pages(zone); 4869 4716 4870 4717 /* 4871 4718 * Estimate the amount of memory available for userspace allocations, ··· 4899 4746 4900 4747 void si_meminfo(struct sysinfo *val) 4901 4748 { 4902 - val->totalram = totalram_pages; 4749 + val->totalram = totalram_pages(); 4903 4750 val->sharedram = global_node_page_state(NR_SHMEM); 4904 4751 val->freeram = global_zone_page_state(NR_FREE_PAGES); 4905 4752 val->bufferram = nr_blockdev_pages(); 4906 - val->totalhigh = totalhigh_pages; 4753 + val->totalhigh = totalhigh_pages(); 4907 4754 val->freehigh = nr_free_highpages(); 4908 4755 val->mem_unit = PAGE_SIZE; 4909 4756 } ··· 4920 4767 pg_data_t *pgdat = NODE_DATA(nid); 4921 4768 4922 4769 for (zone_type = 0; zone_type < MAX_NR_ZONES; zone_type++) 4923 - managed_pages += pgdat->node_zones[zone_type].managed_pages; 4770 + managed_pages += zone_managed_pages(&pgdat->node_zones[zone_type]); 4924 4771 val->totalram = managed_pages; 4925 4772 val->sharedram = node_page_state(pgdat, NR_SHMEM); 4926 4773 val->freeram = sum_zone_node_page_state(nid, NR_FREE_PAGES); ··· 4929 4776 struct zone *zone = &pgdat->node_zones[zone_type]; 4930 4777 4931 4778 if (is_highmem(zone)) { 4932 - managed_highpages += zone->managed_pages; 4779 + managed_highpages += zone_managed_pages(zone); 4933 4780 free_highpages += zone_page_state(zone, NR_FREE_PAGES); 4934 4781 } 4935 4782 } ··· 5136 4983 K(zone_page_state(zone, NR_ZONE_UNEVICTABLE)), 5137 4984 K(zone_page_state(zone, NR_ZONE_WRITE_PENDING)), 5138 4985 K(zone->present_pages), 5139 - K(zone->managed_pages), 4986 + K(zone_managed_pages(zone)), 5140 4987 K(zone_page_state(zone, NR_MLOCK)), 5141 4988 
zone_page_state(zone, NR_KERNEL_STACK_KB), 5142 4989 K(zone_page_state(zone, NR_PAGETABLE)), ··· 5808 5655 * The per-cpu-pages pools are set to around 1000th of the 5809 5656 * size of the zone. 5810 5657 */ 5811 - batch = zone->managed_pages / 1024; 5658 + batch = zone_managed_pages(zone) / 1024; 5812 5659 /* But no more than a meg. */ 5813 5660 if (batch * PAGE_SIZE > 1024 * 1024) 5814 5661 batch = (1024 * 1024) / PAGE_SIZE; ··· 5889 5736 memset(p, 0, sizeof(*p)); 5890 5737 5891 5738 pcp = &p->pcp; 5892 - pcp->count = 0; 5893 5739 for (migratetype = 0; migratetype < MIGRATE_PCPTYPES; migratetype++) 5894 5740 INIT_LIST_HEAD(&pcp->lists[migratetype]); 5895 5741 } ··· 5918 5766 { 5919 5767 if (percpu_pagelist_fraction) 5920 5768 pageset_set_high(pcp, 5921 - (zone->managed_pages / 5769 + (zone_managed_pages(zone) / 5922 5770 percpu_pagelist_fraction)); 5923 5771 else 5924 5772 pageset_set_batch(pcp, zone_batchsize(zone)); ··· 6072 5920 * with no available memory, a warning is printed and the start and end 6073 5921 * PFNs will be 0. 6074 5922 */ 6075 - void __meminit get_pfn_range_for_nid(unsigned int nid, 5923 + void __init get_pfn_range_for_nid(unsigned int nid, 6076 5924 unsigned long *start_pfn, unsigned long *end_pfn) 6077 5925 { 6078 5926 unsigned long this_start_pfn, this_end_pfn; ··· 6121 5969 * highest usable zone for ZONE_MOVABLE. 
This preserves the assumption that 6122 5970 * zones within a node are in order of monotonic increases memory addresses 6123 5971 */ 6124 - static void __meminit adjust_zone_range_for_zone_movable(int nid, 5972 + static void __init adjust_zone_range_for_zone_movable(int nid, 6125 5973 unsigned long zone_type, 6126 5974 unsigned long node_start_pfn, 6127 5975 unsigned long node_end_pfn, ··· 6152 6000 * Return the number of pages a zone spans in a node, including holes 6153 6001 * present_pages = zone_spanned_pages_in_node() - zone_absent_pages_in_node() 6154 6002 */ 6155 - static unsigned long __meminit zone_spanned_pages_in_node(int nid, 6003 + static unsigned long __init zone_spanned_pages_in_node(int nid, 6156 6004 unsigned long zone_type, 6157 6005 unsigned long node_start_pfn, 6158 6006 unsigned long node_end_pfn, ··· 6187 6035 * Return the number of holes in a range on a node. If nid is MAX_NUMNODES, 6188 6036 * then all holes in the requested range will be accounted for. 6189 6037 */ 6190 - unsigned long __meminit __absent_pages_in_range(int nid, 6038 + unsigned long __init __absent_pages_in_range(int nid, 6191 6039 unsigned long range_start_pfn, 6192 6040 unsigned long range_end_pfn) 6193 6041 { ··· 6217 6065 } 6218 6066 6219 6067 /* Return the number of page frames in holes in a zone on a node */ 6220 - static unsigned long __meminit zone_absent_pages_in_node(int nid, 6068 + static unsigned long __init zone_absent_pages_in_node(int nid, 6221 6069 unsigned long zone_type, 6222 6070 unsigned long node_start_pfn, 6223 6071 unsigned long node_end_pfn, ··· 6269 6117 } 6270 6118 6271 6119 #else /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */ 6272 - static inline unsigned long __meminit zone_spanned_pages_in_node(int nid, 6120 + static inline unsigned long __init zone_spanned_pages_in_node(int nid, 6273 6121 unsigned long zone_type, 6274 6122 unsigned long node_start_pfn, 6275 6123 unsigned long node_end_pfn, ··· 6288 6136 return zones_size[zone_type]; 6289 6137 } 6290 6138 
6291 - static inline unsigned long __meminit zone_absent_pages_in_node(int nid, 6139 + static inline unsigned long __init zone_absent_pages_in_node(int nid, 6292 6140 unsigned long zone_type, 6293 6141 unsigned long node_start_pfn, 6294 6142 unsigned long node_end_pfn, ··· 6302 6150 6303 6151 #endif /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */ 6304 6152 6305 - static void __meminit calculate_node_totalpages(struct pglist_data *pgdat, 6153 + static void __init calculate_node_totalpages(struct pglist_data *pgdat, 6306 6154 unsigned long node_start_pfn, 6307 6155 unsigned long node_end_pfn, 6308 6156 unsigned long *zones_size, ··· 6475 6323 static void __meminit zone_init_internals(struct zone *zone, enum zone_type idx, int nid, 6476 6324 unsigned long remaining_pages) 6477 6325 { 6478 - zone->managed_pages = remaining_pages; 6326 + atomic_long_set(&zone->managed_pages, remaining_pages); 6479 6327 zone_set_nid(zone, nid); 6480 6328 zone->name = zone_names[idx]; 6481 6329 zone->zone_pgdat = NODE_DATA(nid); ··· 6628 6476 #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT 6629 6477 static inline void pgdat_set_deferred_range(pg_data_t *pgdat) 6630 6478 { 6631 - /* 6632 - * We start only with one section of pages, more pages are added as 6633 - * needed until the rest of deferred pages are initialized. 
6634 - */ 6635 - pgdat->static_init_pgcnt = min_t(unsigned long, PAGES_PER_SECTION, 6636 - pgdat->node_spanned_pages); 6637 6479 pgdat->first_deferred_pfn = ULONG_MAX; 6638 6480 } 6639 6481 #else ··· 7221 7075 7222 7076 void adjust_managed_page_count(struct page *page, long count) 7223 7077 { 7224 - spin_lock(&managed_page_count_lock); 7225 - page_zone(page)->managed_pages += count; 7226 - totalram_pages += count; 7078 + atomic_long_add(count, &page_zone(page)->managed_pages); 7079 + totalram_pages_add(count); 7227 7080 #ifdef CONFIG_HIGHMEM 7228 7081 if (PageHighMem(page)) 7229 - totalhigh_pages += count; 7082 + totalhigh_pages_add(count); 7230 7083 #endif 7231 - spin_unlock(&managed_page_count_lock); 7232 7084 } 7233 7085 EXPORT_SYMBOL(adjust_managed_page_count); 7234 7086 7235 - unsigned long free_reserved_area(void *start, void *end, int poison, char *s) 7087 + unsigned long free_reserved_area(void *start, void *end, int poison, const char *s) 7236 7088 { 7237 7089 void *pos; 7238 7090 unsigned long pages = 0; ··· 7267 7123 void free_highmem_page(struct page *page) 7268 7124 { 7269 7125 __free_reserved_page(page); 7270 - totalram_pages++; 7271 - page_zone(page)->managed_pages++; 7272 - totalhigh_pages++; 7126 + totalram_pages_inc(); 7127 + atomic_long_inc(&page_zone(page)->managed_pages); 7128 + totalhigh_pages_inc(); 7273 7129 } 7274 7130 #endif 7275 7131 ··· 7318 7174 physpages << (PAGE_SHIFT - 10), 7319 7175 codesize >> 10, datasize >> 10, rosize >> 10, 7320 7176 (init_data_size + init_code_size) >> 10, bss_size >> 10, 7321 - (physpages - totalram_pages - totalcma_pages) << (PAGE_SHIFT - 10), 7177 + (physpages - totalram_pages() - totalcma_pages) << (PAGE_SHIFT - 10), 7322 7178 totalcma_pages << (PAGE_SHIFT - 10), 7323 7179 #ifdef CONFIG_HIGHMEM 7324 - totalhigh_pages << (PAGE_SHIFT - 10), 7180 + totalhigh_pages() << (PAGE_SHIFT - 10), 7325 7181 #endif 7326 7182 str ? ", " : "", str ? 
str : ""); 7327 7183 } ··· 7401 7257 for (i = 0; i < MAX_NR_ZONES; i++) { 7402 7258 struct zone *zone = pgdat->node_zones + i; 7403 7259 long max = 0; 7260 + unsigned long managed_pages = zone_managed_pages(zone); 7404 7261 7405 7262 /* Find valid and maximum lowmem_reserve in the zone */ 7406 7263 for (j = i; j < MAX_NR_ZONES; j++) { ··· 7412 7267 /* we treat the high watermark as reserved pages. */ 7413 7268 max += high_wmark_pages(zone); 7414 7269 7415 - if (max > zone->managed_pages) 7416 - max = zone->managed_pages; 7270 + if (max > managed_pages) 7271 + max = managed_pages; 7417 7272 7418 7273 pgdat->totalreserve_pages += max; 7419 7274 ··· 7437 7292 for_each_online_pgdat(pgdat) { 7438 7293 for (j = 0; j < MAX_NR_ZONES; j++) { 7439 7294 struct zone *zone = pgdat->node_zones + j; 7440 - unsigned long managed_pages = zone->managed_pages; 7295 + unsigned long managed_pages = zone_managed_pages(zone); 7441 7296 7442 7297 zone->lowmem_reserve[j] = 0; 7443 7298 ··· 7455 7310 lower_zone->lowmem_reserve[j] = 7456 7311 managed_pages / sysctl_lowmem_reserve_ratio[idx]; 7457 7312 } 7458 - managed_pages += lower_zone->managed_pages; 7313 + managed_pages += zone_managed_pages(lower_zone); 7459 7314 } 7460 7315 } 7461 7316 } ··· 7474 7329 /* Calculate total number of !ZONE_HIGHMEM pages */ 7475 7330 for_each_zone(zone) { 7476 7331 if (!is_highmem(zone)) 7477 - lowmem_pages += zone->managed_pages; 7332 + lowmem_pages += zone_managed_pages(zone); 7478 7333 } 7479 7334 7480 7335 for_each_zone(zone) { 7481 7336 u64 tmp; 7482 7337 7483 7338 spin_lock_irqsave(&zone->lock, flags); 7484 - tmp = (u64)pages_min * zone->managed_pages; 7339 + tmp = (u64)pages_min * zone_managed_pages(zone); 7485 7340 do_div(tmp, lowmem_pages); 7486 7341 if (is_highmem(zone)) { 7487 7342 /* ··· 7495 7350 */ 7496 7351 unsigned long min_pages; 7497 7352 7498 - min_pages = zone->managed_pages / 1024; 7353 + min_pages = zone_managed_pages(zone) / 1024; 7499 7354 min_pages = clamp(min_pages, 
SWAP_CLUSTER_MAX, 128UL); 7500 - zone->watermark[WMARK_MIN] = min_pages; 7355 + zone->_watermark[WMARK_MIN] = min_pages; 7501 7356 } else { 7502 7357 /* 7503 7358 * If it's a lowmem zone, reserve a number of pages 7504 7359 * proportionate to the zone's size. 7505 7360 */ 7506 - zone->watermark[WMARK_MIN] = tmp; 7361 + zone->_watermark[WMARK_MIN] = tmp; 7507 7362 } 7508 7363 7509 7364 /* ··· 7512 7367 * ensure a minimum size on small systems. 7513 7368 */ 7514 7369 tmp = max_t(u64, tmp >> 2, 7515 - mult_frac(zone->managed_pages, 7370 + mult_frac(zone_managed_pages(zone), 7516 7371 watermark_scale_factor, 10000)); 7517 7372 7518 - zone->watermark[WMARK_LOW] = min_wmark_pages(zone) + tmp; 7519 - zone->watermark[WMARK_HIGH] = min_wmark_pages(zone) + tmp * 2; 7373 + zone->_watermark[WMARK_LOW] = min_wmark_pages(zone) + tmp; 7374 + zone->_watermark[WMARK_HIGH] = min_wmark_pages(zone) + tmp * 2; 7375 + zone->watermark_boost = 0; 7520 7376 7521 7377 spin_unlock_irqrestore(&zone->lock, flags); 7522 7378 } ··· 7618 7472 return 0; 7619 7473 } 7620 7474 7475 + int watermark_boost_factor_sysctl_handler(struct ctl_table *table, int write, 7476 + void __user *buffer, size_t *length, loff_t *ppos) 7477 + { 7478 + int rc; 7479 + 7480 + rc = proc_dointvec_minmax(table, write, buffer, length, ppos); 7481 + if (rc) 7482 + return rc; 7483 + 7484 + return 0; 7485 + } 7486 + 7621 7487 int watermark_scale_factor_sysctl_handler(struct ctl_table *table, int write, 7622 7488 void __user *buffer, size_t *length, loff_t *ppos) 7623 7489 { ··· 7655 7497 pgdat->min_unmapped_pages = 0; 7656 7498 7657 7499 for_each_zone(zone) 7658 - zone->zone_pgdat->min_unmapped_pages += (zone->managed_pages * 7659 - sysctl_min_unmapped_ratio) / 100; 7500 + zone->zone_pgdat->min_unmapped_pages += (zone_managed_pages(zone) * 7501 + sysctl_min_unmapped_ratio) / 100; 7660 7502 } 7661 7503 7662 7504 ··· 7683 7525 pgdat->min_slab_pages = 0; 7684 7526 7685 7527 for_each_zone(zone) 7686 - 
zone->zone_pgdat->min_slab_pages += (zone->managed_pages * 7687 - sysctl_min_slab_ratio) / 100; 7528 + zone->zone_pgdat->min_slab_pages += (zone_managed_pages(zone) * 7529 + sysctl_min_slab_ratio) / 100; 7688 7530 } 7689 7531 7690 7532 int sysctl_min_slab_ratio_sysctl_handler(struct ctl_table *table, int write, ··· 7924 7766 * race condition. So you can't expect this function should be exact. 7925 7767 */ 7926 7768 bool has_unmovable_pages(struct zone *zone, struct page *page, int count, 7927 - int migratetype, 7928 - bool skip_hwpoisoned_pages) 7769 + int migratetype, int flags) 7929 7770 { 7930 7771 unsigned long pfn, iter, found; 7931 7772 ··· 7998 7841 * The HWPoisoned page may be not in buddy system, and 7999 7842 * page_count() is not 0. 8000 7843 */ 8001 - if (skip_hwpoisoned_pages && PageHWPoison(page)) 7844 + if ((flags & SKIP_HWPOISON) && PageHWPoison(page)) 8002 7845 continue; 8003 7846 8004 7847 if (__PageMovable(page)) ··· 8025 7868 return false; 8026 7869 unmovable: 8027 7870 WARN_ON_ONCE(zone_idx(zone) == ZONE_MOVABLE); 7871 + if (flags & REPORT_FAILURE) 7872 + dump_page(pfn_to_page(pfn+iter), "unmovable page"); 8028 7873 return true; 8029 7874 } 8030 7875 ··· 8153 7994 */ 8154 7995 8155 7996 ret = start_isolate_page_range(pfn_max_align_down(start), 8156 - pfn_max_align_up(end), migratetype, 8157 - false); 7997 + pfn_max_align_up(end), migratetype, 0); 8158 7998 if (ret) 8159 7999 return ret; 8160 8000
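The page_alloc.c hunks above drop `managed_page_count_lock` and move `managed_pages`, `totalram_pages`, and `totalhigh_pages` to atomic types, so `adjust_managed_page_count()` reduces to lock-free adds. A minimal userspace sketch of that pattern using C11 atomics — the `struct zone` here is a stand-in, not the kernel's definition:

```c
#include <stdatomic.h>

/*
 * Userspace sketch of the locking change above: with the counters kept
 * in atomic types, adjusting them is a pair of lock-free adds and the
 * global spinlock goes away entirely. Names mirror the kernel but this
 * is illustrative code, not the kernel's implementation.
 */
struct zone {
	atomic_long managed_pages;
};

static atomic_long totalram_pages_ctr;

static void adjust_managed_page_count(struct zone *zone, long count)
{
	atomic_fetch_add(&zone->managed_pages, count);	/* atomic_long_add() */
	atomic_fetch_add(&totalram_pages_ctr, count);	/* totalram_pages_add() */
}
```

Readers then become a plain atomic load, which is why the rest of the diff replaces direct `zone->managed_pages` accesses with the `zone_managed_pages()` accessor.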
+4 -6
mm/page_isolation.c
··· 15 15 #define CREATE_TRACE_POINTS 16 16 #include <trace/events/page_isolation.h> 17 17 18 - static int set_migratetype_isolate(struct page *page, int migratetype, 19 - bool skip_hwpoisoned_pages) 18 + static int set_migratetype_isolate(struct page *page, int migratetype, int isol_flags) 20 19 { 21 20 struct zone *zone; 22 21 unsigned long flags, pfn; ··· 59 60 * FIXME: Now, memory hotplug doesn't call shrink_slab() by itself. 60 61 * We just check MOVABLE pages. 61 62 */ 62 - if (!has_unmovable_pages(zone, page, arg.pages_found, migratetype, 63 - skip_hwpoisoned_pages)) 63 + if (!has_unmovable_pages(zone, page, arg.pages_found, migratetype, flags)) 64 64 ret = 0; 65 65 66 66 /* ··· 183 185 * prevents two threads from simultaneously working on overlapping ranges. 184 186 */ 185 187 int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn, 186 - unsigned migratetype, bool skip_hwpoisoned_pages) 188 + unsigned migratetype, int flags) 187 189 { 188 190 unsigned long pfn; 189 191 unsigned long undo_pfn; ··· 197 199 pfn += pageblock_nr_pages) { 198 200 page = __first_valid_page(pfn, pageblock_nr_pages); 199 201 if (page && 200 - set_migratetype_isolate(page, migratetype, skip_hwpoisoned_pages)) { 202 + set_migratetype_isolate(page, migratetype, flags)) { 201 203 undo_pfn = pfn; 202 204 goto undo; 203 205 }
+1
mm/page_owner.c
··· 351 351 .skip = 0 352 352 }; 353 353 354 + count = min_t(size_t, count, PAGE_SIZE); 354 355 kbuf = kmalloc(count, GFP_KERNEL); 355 356 if (!kbuf) 356 357 return -ENOMEM;
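The one-line page_owner fix above bounds a user-controlled read size before the `kmalloc()`. A hedged userspace sketch of the same clamp pattern — `PAGE_SIZE` and the helper names are assumptions for illustration:

```c
#include <stdlib.h>

#define PAGE_SIZE 4096UL	/* assumed here; arch-dependent in the kernel */

/*
 * Sketch of the clamp above: a user-supplied count is limited to one
 * page before allocating, so a pathological read size cannot force an
 * oversized buffer. clamp_count()/alloc_kbuf() are illustrative names,
 * not kernel functions.
 */
static size_t clamp_count(size_t count)
{
	return count < PAGE_SIZE ? count : PAGE_SIZE;	/* min_t(size_t, count, PAGE_SIZE) */
}

static void *alloc_kbuf(size_t count)
{
	return malloc(clamp_count(count));
}
```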
+5 -7
mm/readahead.c
··· 270 270 * return it as the new window size. 271 271 */ 272 272 static unsigned long get_next_ra_size(struct file_ra_state *ra, 273 - unsigned long max) 273 + unsigned long max) 274 274 { 275 275 unsigned long cur = ra->size; 276 - unsigned long newsize; 277 276 278 277 if (cur < max / 16) 279 - newsize = 4 * cur; 280 - else 281 - newsize = 2 * cur; 282 - 283 - return min(newsize, max); 278 + return 4 * cur; 279 + if (cur <= max / 2) 280 + return 2 * cur; 281 + return max; 284 282 } 285 283 286 284 /*
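The rewritten `get_next_ra_size()` above replaces the `newsize`/`min()` dance with three early returns. The growth policy can be checked in isolation with a standalone copy — the `file_ra_state` argument is flattened to a plain `cur` for the sketch:

```c
/*
 * Standalone copy of the readahead window growth policy from the hunk
 * above: quadruple small windows, double mid-sized ones, and saturate
 * at max once the window exceeds half of it.
 */
static unsigned long get_next_ra_size(unsigned long cur, unsigned long max)
{
	if (cur < max / 16)
		return 4 * cur;
	if (cur <= max / 2)
		return 2 * cur;
	return max;
}
```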
+26 -33
mm/rmap.c
··· 25 25 * page->flags PG_locked (lock_page) 26 26 * hugetlbfs_i_mmap_rwsem_key (in huge_pmd_share) 27 27 * mapping->i_mmap_rwsem 28 + * hugetlb_fault_mutex (hugetlbfs specific page fault mutex) 28 29 * anon_vma->rwsem 29 30 * mm->page_table_lock or pte_lock 30 31 * zone_lru_lock (in mark_page_accessed, isolate_lru_page) ··· 890 889 .address = address, 891 890 .flags = PVMW_SYNC, 892 891 }; 893 - unsigned long start = address, end; 892 + struct mmu_notifier_range range; 894 893 int *cleaned = arg; 895 894 896 895 /* 897 896 * We have to assume the worse case ie pmd for invalidation. Note that 898 897 * the page can not be free from this function. 899 898 */ 900 - end = min(vma->vm_end, start + (PAGE_SIZE << compound_order(page))); 901 - mmu_notifier_invalidate_range_start(vma->vm_mm, start, end); 899 + mmu_notifier_range_init(&range, vma->vm_mm, address, 900 + min(vma->vm_end, address + 901 + (PAGE_SIZE << compound_order(page)))); 902 + mmu_notifier_invalidate_range_start(&range); 902 903 903 904 while (page_vma_mapped_walk(&pvmw)) { 904 905 unsigned long cstart; ··· 952 949 (*cleaned)++; 953 950 } 954 951 955 - mmu_notifier_invalidate_range_end(vma->vm_mm, start, end); 952 + mmu_notifier_invalidate_range_end(&range); 956 953 957 954 return true; 958 955 } ··· 1020 1017 1021 1018 /** 1022 1019 * __page_set_anon_rmap - set up new anonymous rmap 1023 - * @page: Page to add to rmap 1020 + * @page: Page or Hugepage to add to rmap 1024 1021 * @vma: VM area to add page to. 
1025 1022 * @address: User virtual address of the mapping 1026 1023 * @exclusive: the page is exclusively owned by the current process ··· 1348 1345 pte_t pteval; 1349 1346 struct page *subpage; 1350 1347 bool ret = true; 1351 - unsigned long start = address, end; 1348 + struct mmu_notifier_range range; 1352 1349 enum ttu_flags flags = (enum ttu_flags)arg; 1353 1350 1354 1351 /* munlock has nothing to gain from examining un-locked vmas */ ··· 1372 1369 * Note that the page can not be free in this function as call of 1373 1370 * try_to_unmap() must hold a reference on the page. 1374 1371 */ 1375 - end = min(vma->vm_end, start + (PAGE_SIZE << compound_order(page))); 1372 + mmu_notifier_range_init(&range, vma->vm_mm, vma->vm_start, 1373 + min(vma->vm_end, vma->vm_start + 1374 + (PAGE_SIZE << compound_order(page)))); 1376 1375 if (PageHuge(page)) { 1377 1376 /* 1378 1377 * If sharing is possible, start and end will be adjusted 1379 1378 * accordingly. 1379 + * 1380 + * If called for a huge page, caller must hold i_mmap_rwsem 1381 + * in write mode as it is possible to call huge_pmd_unshare. 1380 1382 */ 1381 - adjust_range_if_pmd_sharing_possible(vma, &start, &end); 1383 + adjust_range_if_pmd_sharing_possible(vma, &range.start, 1384 + &range.end); 1382 1385 } 1383 - mmu_notifier_invalidate_range_start(vma->vm_mm, start, end); 1386 + mmu_notifier_invalidate_range_start(&range); 1384 1387 1385 1388 while (page_vma_mapped_walk(&pvmw)) { 1386 1389 #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION ··· 1437 1428 * we must flush them all. start/end were 1438 1429 * already adjusted above to cover this range. 
1439 1430 */ 1440 - flush_cache_range(vma, start, end); 1441 - flush_tlb_range(vma, start, end); 1442 - mmu_notifier_invalidate_range(mm, start, end); 1431 + flush_cache_range(vma, range.start, range.end); 1432 + flush_tlb_range(vma, range.start, range.end); 1433 + mmu_notifier_invalidate_range(mm, range.start, 1434 + range.end); 1443 1435 1444 1436 /* 1445 1437 * The ref count of the PMD page was dropped ··· 1660 1650 put_page(page); 1661 1651 } 1662 1652 1663 - mmu_notifier_invalidate_range_end(vma->vm_mm, start, end); 1653 + mmu_notifier_invalidate_range_end(&range); 1664 1654 1665 1655 return ret; 1666 1656 } ··· 1920 1910 1921 1911 #ifdef CONFIG_HUGETLB_PAGE 1922 1912 /* 1923 - * The following three functions are for anonymous (private mapped) hugepages. 1913 + * The following two functions are for anonymous (private mapped) hugepages. 1924 1914 * Unlike common anonymous pages, anonymous hugepages have no accounting code 1925 1915 * and no lru code, because we handle hugepages differently from common pages. 
1926 1916 */ 1927 - static void __hugepage_set_anon_rmap(struct page *page, 1928 - struct vm_area_struct *vma, unsigned long address, int exclusive) 1929 - { 1930 - struct anon_vma *anon_vma = vma->anon_vma; 1931 - 1932 - BUG_ON(!anon_vma); 1933 - 1934 - if (PageAnon(page)) 1935 - return; 1936 - if (!exclusive) 1937 - anon_vma = anon_vma->root; 1938 - 1939 - anon_vma = (void *) anon_vma + PAGE_MAPPING_ANON; 1940 - page->mapping = (struct address_space *) anon_vma; 1941 - page->index = linear_page_index(vma, address); 1942 - } 1943 - 1944 1917 void hugepage_add_anon_rmap(struct page *page, 1945 1918 struct vm_area_struct *vma, unsigned long address) 1946 1919 { ··· 1935 1942 /* address might be in next vma when migration races vma_adjust */ 1936 1943 first = atomic_inc_and_test(compound_mapcount_ptr(page)); 1937 1944 if (first) 1938 - __hugepage_set_anon_rmap(page, vma, address, 0); 1945 + __page_set_anon_rmap(page, vma, address, 0); 1939 1946 } 1940 1947 1941 1948 void hugepage_add_new_anon_rmap(struct page *page, ··· 1943 1950 { 1944 1951 BUG_ON(address < vma->vm_start || address >= vma->vm_end); 1945 1952 atomic_set(compound_mapcount_ptr(page), 0); 1946 - __hugepage_set_anon_rmap(page, vma, address, 1); 1953 + __page_set_anon_rmap(page, vma, address, 1); 1947 1954 } 1948 1955 #endif /* CONFIG_HUGETLB_PAGE */
+5 -3
mm/shmem.c
··· 109 109 #ifdef CONFIG_TMPFS 110 110 static unsigned long shmem_default_max_blocks(void) 111 111 { 112 - return totalram_pages / 2; 112 + return totalram_pages() / 2; 113 113 } 114 114 115 115 static unsigned long shmem_default_max_inodes(void) 116 116 { 117 - return min(totalram_pages - totalhigh_pages, totalram_pages / 2); 117 + unsigned long nr_pages = totalram_pages(); 118 + 119 + return min(nr_pages - totalhigh_pages(), nr_pages / 2); 118 120 } 119 121 #endif 120 122 ··· 3303 3301 size = memparse(value,&rest); 3304 3302 if (*rest == '%') { 3305 3303 size <<= PAGE_SHIFT; 3306 - size *= totalram_pages; 3304 + size *= totalram_pages(); 3307 3305 do_div(size, 100); 3308 3306 rest++; 3309 3307 }
+9 -22
mm/slab.c
··· 406 406 return page->s_mem + cache->size * idx; 407 407 } 408 408 409 - /* 410 - * We want to avoid an expensive divide : (offset / cache->size) 411 - * Using the fact that size is a constant for a particular cache, 412 - * we can replace (offset / cache->size) by 413 - * reciprocal_divide(offset, cache->reciprocal_buffer_size) 414 - */ 415 - static inline unsigned int obj_to_index(const struct kmem_cache *cache, 416 - const struct page *page, void *obj) 417 - { 418 - u32 offset = (obj - page->s_mem); 419 - return reciprocal_divide(offset, cache->reciprocal_buffer_size); 420 - } 421 - 422 409 #define BOOT_CPUCACHE_ENTRIES 1 423 410 /* internal cache of cache description objs */ 424 411 static struct kmem_cache kmem_cache_boot = { ··· 1235 1248 * page orders on machines with more than 32MB of memory if 1236 1249 * not overridden on the command line. 1237 1250 */ 1238 - if (!slab_max_order_set && totalram_pages > (32 << 20) >> PAGE_SHIFT) 1251 + if (!slab_max_order_set && totalram_pages() > (32 << 20) >> PAGE_SHIFT) 1239 1252 slab_max_order = SLAB_MAX_ORDER_HI; 1240 1253 1241 1254 /* Bootstrap is tricky, because several objects are allocated ··· 2357 2370 void *freelist; 2358 2371 void *addr = page_address(page); 2359 2372 2360 - page->s_mem = addr + colour_off; 2373 + page->s_mem = kasan_reset_tag(addr) + colour_off; 2361 2374 page->active = 0; 2362 2375 2363 2376 if (OBJFREELIST_SLAB(cachep)) ··· 2561 2574 2562 2575 for (i = 0; i < cachep->num; i++) { 2563 2576 objp = index_to_obj(cachep, page, i); 2564 - kasan_init_slab_obj(cachep, objp); 2577 + objp = kasan_init_slab_obj(cachep, objp); 2565 2578 2566 2579 /* constructor could break poison info */ 2567 2580 if (DEBUG == 0 && cachep->ctor) { ··· 3538 3551 { 3539 3552 void *ret = slab_alloc(cachep, flags, _RET_IP_); 3540 3553 3541 - kasan_slab_alloc(cachep, ret, flags); 3554 + ret = kasan_slab_alloc(cachep, ret, flags); 3542 3555 trace_kmem_cache_alloc(_RET_IP_, ret, 3543 3556 cachep->object_size, cachep->size, 
flags); 3544 3557 ··· 3604 3617 3605 3618 ret = slab_alloc(cachep, flags, _RET_IP_); 3606 3619 3607 - kasan_kmalloc(cachep, ret, size, flags); 3620 + ret = kasan_kmalloc(cachep, ret, size, flags); 3608 3621 trace_kmalloc(_RET_IP_, ret, 3609 3622 size, cachep->size, flags); 3610 3623 return ret; ··· 3628 3641 { 3629 3642 void *ret = slab_alloc_node(cachep, flags, nodeid, _RET_IP_); 3630 3643 3631 - kasan_slab_alloc(cachep, ret, flags); 3644 + ret = kasan_slab_alloc(cachep, ret, flags); 3632 3645 trace_kmem_cache_alloc_node(_RET_IP_, ret, 3633 3646 cachep->object_size, cachep->size, 3634 3647 flags, nodeid); ··· 3647 3660 3648 3661 ret = slab_alloc_node(cachep, flags, nodeid, _RET_IP_); 3649 3662 3650 - kasan_kmalloc(cachep, ret, size, flags); 3663 + ret = kasan_kmalloc(cachep, ret, size, flags); 3651 3664 trace_kmalloc_node(_RET_IP_, ret, 3652 3665 size, cachep->size, 3653 3666 flags, nodeid); ··· 3668 3681 if (unlikely(ZERO_OR_NULL_PTR(cachep))) 3669 3682 return cachep; 3670 3683 ret = kmem_cache_alloc_node_trace(cachep, flags, node, size); 3671 - kasan_kmalloc(cachep, ret, size, flags); 3684 + ret = kasan_kmalloc(cachep, ret, size, flags); 3672 3685 3673 3686 return ret; 3674 3687 } ··· 3706 3719 return cachep; 3707 3720 ret = slab_alloc(cachep, flags, caller); 3708 3721 3709 - kasan_kmalloc(cachep, ret, size, flags); 3722 + ret = kasan_kmalloc(cachep, ret, size, flags); 3710 3723 trace_kmalloc(caller, ret, 3711 3724 size, cachep->size, flags); 3712 3725
+1 -1
mm/slab.h
··· 441 441 442 442 kmemleak_alloc_recursive(object, s->object_size, 1, 443 443 s->flags, flags); 444 - kasan_slab_alloc(s, object, flags); 444 + p[i] = kasan_slab_alloc(s, object, flags); 445 445 } 446 446 447 447 if (memcg_kmem_enabled())
+4 -6
mm/slab_common.c
··· 1029 1029 1030 1030 index = size_index[size_index_elem(size)]; 1031 1031 } else { 1032 - if (unlikely(size > KMALLOC_MAX_CACHE_SIZE)) { 1033 - WARN_ON(1); 1032 + if (WARN_ON_ONCE(size > KMALLOC_MAX_CACHE_SIZE)) 1034 1033 return NULL; 1035 - } 1036 1034 index = fls(size - 1); 1037 1035 } 1038 1036 ··· 1202 1204 page = alloc_pages(flags, order); 1203 1205 ret = page ? page_address(page) : NULL; 1204 1206 kmemleak_alloc(ret, size, 1, flags); 1205 - kasan_kmalloc_large(ret, size, flags); 1207 + ret = kasan_kmalloc_large(ret, size, flags); 1206 1208 return ret; 1207 1209 } 1208 1210 EXPORT_SYMBOL(kmalloc_order); ··· 1480 1482 ks = ksize(p); 1481 1483 1482 1484 if (ks >= new_size) { 1483 - kasan_krealloc((void *)p, new_size, flags); 1485 + p = kasan_krealloc((void *)p, new_size, flags); 1484 1486 return (void *)p; 1485 1487 } 1486 1488 ··· 1532 1534 } 1533 1535 1534 1536 ret = __do_krealloc(p, new_size, flags); 1535 - if (ret && p != ret) 1537 + if (ret && kasan_reset_tag(p) != kasan_reset_tag(ret)) 1536 1538 kfree(p); 1537 1539 1538 1540 return ret;
+38 -44
mm/slub.c
··· 1372 1372 * Hooks for other subsystems that check memory allocations. In a typical 1373 1373 * production configuration these hooks all should produce no code at all. 1374 1374 */ 1375 - static inline void kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags) 1375 + static inline void *kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags) 1376 1376 { 1377 1377 kmemleak_alloc(ptr, size, 1, flags); 1378 - kasan_kmalloc_large(ptr, size, flags); 1378 + return kasan_kmalloc_large(ptr, size, flags); 1379 1379 } 1380 1380 1381 1381 static __always_inline void kfree_hook(void *x) ··· 1451 1451 #endif 1452 1452 } 1453 1453 1454 - static void setup_object(struct kmem_cache *s, struct page *page, 1454 + static void *setup_object(struct kmem_cache *s, struct page *page, 1455 1455 void *object) 1456 1456 { 1457 1457 setup_object_debug(s, page, object); 1458 - kasan_init_slab_obj(s, object); 1458 + object = kasan_init_slab_obj(s, object); 1459 1459 if (unlikely(s->ctor)) { 1460 1460 kasan_unpoison_object_data(s, object); 1461 1461 s->ctor(object); 1462 1462 kasan_poison_object_data(s, object); 1463 1463 } 1464 + return object; 1464 1465 } 1465 1466 1466 1467 /* ··· 1569 1568 /* First entry is used as the base of the freelist */ 1570 1569 cur = next_freelist_entry(s, page, &pos, start, page_limit, 1571 1570 freelist_count); 1571 + cur = setup_object(s, page, cur); 1572 1572 page->freelist = cur; 1573 1573 1574 1574 for (idx = 1; idx < page->objects; idx++) { 1575 - setup_object(s, page, cur); 1576 1575 next = next_freelist_entry(s, page, &pos, start, page_limit, 1577 1576 freelist_count); 1577 + next = setup_object(s, page, next); 1578 1578 set_freepointer(s, cur, next); 1579 1579 cur = next; 1580 1580 } 1581 - setup_object(s, page, cur); 1582 1581 set_freepointer(s, cur, NULL); 1583 1582 1584 1583 return true; ··· 1600 1599 struct page *page; 1601 1600 struct kmem_cache_order_objects oo = s->oo; 1602 1601 gfp_t alloc_gfp; 1603 - void *start, *p; 1602 + void 
*start, *p, *next; 1604 1603 int idx, order; 1605 1604 bool shuffle; 1606 1605 ··· 1652 1651 1653 1652 if (!shuffle) { 1654 1653 for_each_object_idx(p, idx, s, start, page->objects) { 1655 - setup_object(s, page, p); 1656 - if (likely(idx < page->objects)) 1657 - set_freepointer(s, p, p + s->size); 1658 - else 1654 + if (likely(idx < page->objects)) { 1655 + next = p + s->size; 1656 + next = setup_object(s, page, next); 1657 + set_freepointer(s, p, next); 1658 + } else 1659 1659 set_freepointer(s, p, NULL); 1660 1660 } 1661 - page->freelist = fixup_red_left(s, start); 1661 + start = fixup_red_left(s, start); 1662 + start = setup_object(s, page, start); 1663 + page->freelist = start; 1662 1664 } 1663 1665 1664 1666 page->inuse = page->objects; ··· 2131 2127 } 2132 2128 2133 2129 if (l != m) { 2134 - 2135 2130 if (l == M_PARTIAL) 2136 - 2137 2131 remove_partial(n, page); 2138 - 2139 2132 else if (l == M_FULL) 2140 - 2141 2133 remove_full(s, n, page); 2142 2134 2143 - if (m == M_PARTIAL) { 2144 - 2135 + if (m == M_PARTIAL) 2145 2136 add_partial(n, page, tail); 2146 - stat(s, tail); 2147 - 2148 - } else if (m == M_FULL) { 2149 - 2150 - stat(s, DEACTIVATE_FULL); 2137 + else if (m == M_FULL) 2151 2138 add_full(s, n, page); 2152 - 2153 - } 2154 2139 } 2155 2140 2156 2141 l = m; ··· 2152 2159 if (lock) 2153 2160 spin_unlock(&n->list_lock); 2154 2161 2155 - if (m == M_FREE) { 2162 + if (m == M_PARTIAL) 2163 + stat(s, tail); 2164 + else if (m == M_FULL) 2165 + stat(s, DEACTIVATE_FULL); 2166 + else if (m == M_FREE) { 2156 2167 stat(s, DEACTIVATE_EMPTY); 2157 2168 discard_slab(s, page); 2158 2169 stat(s, FREE_SLAB); ··· 2310 2313 { 2311 2314 struct kmem_cache_cpu *c = per_cpu_ptr(s->cpu_slab, cpu); 2312 2315 2313 - if (likely(c)) { 2314 - if (c->page) 2315 - flush_slab(s, c); 2316 + if (c->page) 2317 + flush_slab(s, c); 2316 2318 2317 - unfreeze_partials(s, c); 2318 - } 2319 + unfreeze_partials(s, c); 2319 2320 } 2320 2321 2321 2322 static void flush_cpu_slab(void *d) ··· 2362 
2367 static inline int node_match(struct page *page, int node) 2363 2368 { 2364 2369 #ifdef CONFIG_NUMA 2365 - if (!page || (node != NUMA_NO_NODE && page_to_nid(page) != node)) 2370 + if (node != NUMA_NO_NODE && page_to_nid(page) != node) 2366 2371 return 0; 2367 2372 #endif 2368 2373 return 1; ··· 2763 2768 { 2764 2769 void *ret = slab_alloc(s, gfpflags, _RET_IP_); 2765 2770 trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags); 2766 - kasan_kmalloc(s, ret, size, gfpflags); 2771 + ret = kasan_kmalloc(s, ret, size, gfpflags); 2767 2772 return ret; 2768 2773 } 2769 2774 EXPORT_SYMBOL(kmem_cache_alloc_trace); ··· 2791 2796 trace_kmalloc_node(_RET_IP_, ret, 2792 2797 size, s->size, gfpflags, node); 2793 2798 2794 - kasan_kmalloc(s, ret, size, gfpflags); 2799 + ret = kasan_kmalloc(s, ret, size, gfpflags); 2795 2800 return ret; 2796 2801 } 2797 2802 EXPORT_SYMBOL(kmem_cache_alloc_node_trace); ··· 2987 2992 do_slab_free(s, page, head, tail, cnt, addr); 2988 2993 } 2989 2994 2990 - #ifdef CONFIG_KASAN 2995 + #ifdef CONFIG_KASAN_GENERIC 2991 2996 void ___cache_free(struct kmem_cache *cache, void *x, unsigned long addr) 2992 2997 { 2993 2998 do_slab_free(cache, virt_to_head_page(x), x, NULL, 1, addr); ··· 3359 3364 3360 3365 n = page->freelist; 3361 3366 BUG_ON(!n); 3362 - page->freelist = get_freepointer(kmem_cache_node, n); 3363 - page->inuse = 1; 3364 - page->frozen = 0; 3365 - kmem_cache_node->node[node] = n; 3366 3367 #ifdef CONFIG_SLUB_DEBUG 3367 3368 init_object(kmem_cache_node, n, SLUB_RED_ACTIVE); 3368 3369 init_tracking(kmem_cache_node, n); 3369 3370 #endif 3370 - kasan_kmalloc(kmem_cache_node, n, sizeof(struct kmem_cache_node), 3371 + n = kasan_kmalloc(kmem_cache_node, n, sizeof(struct kmem_cache_node), 3371 3372 GFP_KERNEL); 3373 + page->freelist = get_freepointer(kmem_cache_node, n); 3374 + page->inuse = 1; 3375 + page->frozen = 0; 3376 + kmem_cache_node->node[node] = n; 3372 3377 init_kmem_cache_node(n); 3373 3378 inc_slabs_node(kmem_cache_node, node, 
page->objects); 3374 3379 ··· 3779 3784 3780 3785 trace_kmalloc(_RET_IP_, ret, size, s->size, flags); 3781 3786 3782 - kasan_kmalloc(s, ret, size, flags); 3787 + ret = kasan_kmalloc(s, ret, size, flags); 3783 3788 3784 3789 return ret; 3785 3790 } ··· 3796 3801 if (page) 3797 3802 ptr = page_address(page); 3798 3803 3799 - kmalloc_large_node_hook(ptr, size, flags); 3800 - return ptr; 3804 + return kmalloc_large_node_hook(ptr, size, flags); 3801 3805 } 3802 3806 3803 3807 void *__kmalloc_node(size_t size, gfp_t flags, int node) ··· 3823 3829 3824 3830 trace_kmalloc_node(_RET_IP_, ret, size, s->size, flags, node); 3825 3831 3826 - kasan_kmalloc(s, ret, size, flags); 3832 + ret = kasan_kmalloc(s, ret, size, flags); 3827 3833 3828 3834 return ret; 3829 3835 }
+14 -12
mm/sparse.c
··· 678 678 * set. If this is <=0, then that means that the passed-in 679 679 * map was not consumed and must be freed. 680 680 */ 681 - int __meminit sparse_add_one_section(struct pglist_data *pgdat, 682 - unsigned long start_pfn, struct vmem_altmap *altmap) 681 + int __meminit sparse_add_one_section(int nid, unsigned long start_pfn, 682 + struct vmem_altmap *altmap) 683 683 { 684 684 unsigned long section_nr = pfn_to_section_nr(start_pfn); 685 685 struct mem_section *ms; 686 686 struct page *memmap; 687 687 unsigned long *usemap; 688 - unsigned long flags; 689 688 int ret; 690 689 691 690 /* 692 691 * no locking for this, because it does its own 693 692 * plus, it does a kmalloc 694 693 */ 695 - ret = sparse_index_init(section_nr, pgdat->node_id); 694 + ret = sparse_index_init(section_nr, nid); 696 695 if (ret < 0 && ret != -EEXIST) 697 696 return ret; 698 697 ret = 0; 699 - memmap = kmalloc_section_memmap(section_nr, pgdat->node_id, altmap); 698 + memmap = kmalloc_section_memmap(section_nr, nid, altmap); 700 699 if (!memmap) 701 700 return -ENOMEM; 702 701 usemap = __kmalloc_section_usemap(); ··· 703 704 __kfree_section_memmap(memmap, altmap); 704 705 return -ENOMEM; 705 706 } 706 - 707 - pgdat_resize_lock(pgdat, &flags); 708 707 709 708 ms = __pfn_to_section(start_pfn); 710 709 if (ms->section_mem_map & SECTION_MARKED_PRESENT) { ··· 720 723 sparse_init_one_section(ms, section_nr, memmap, usemap); 721 724 722 725 out: 723 - pgdat_resize_unlock(pgdat, &flags); 724 726 if (ret < 0) { 725 727 kfree(usemap); 726 728 __kfree_section_memmap(memmap, altmap); ··· 734 738 int i; 735 739 736 740 if (!memmap) 741 + return; 742 + 743 + /* 744 + * A further optimization is to have per section refcounted 745 + * num_poisoned_pages. But that would need more space per memmap, so 746 + * for now just do a quick global check to speed up this routine in the 747 + * absence of bad pages. 
748 + */ 749 + if (atomic_long_read(&num_poisoned_pages) == 0) 737 750 return; 738 751 739 752 for (i = 0; i < nr_pages; i++) { ··· 790 785 unsigned long map_offset, struct vmem_altmap *altmap) 791 786 { 792 787 struct page *memmap = NULL; 793 - unsigned long *usemap = NULL, flags; 794 - struct pglist_data *pgdat = zone->zone_pgdat; 788 + unsigned long *usemap = NULL; 795 789 796 - pgdat_resize_lock(pgdat, &flags); 797 790 if (ms->section_mem_map) { 798 791 usemap = ms->pageblock_flags; 799 792 memmap = sparse_decode_mem_map(ms->section_mem_map, ··· 799 796 ms->section_mem_map = 0; 800 797 ms->pageblock_flags = NULL; 801 798 } 802 - pgdat_resize_unlock(pgdat, &flags); 803 799 804 800 clear_hwpoisoned_pages(memmap + map_offset, 805 801 PAGES_PER_SECTION - map_offset);
+1 -1
mm/swap.c
··· 1022 1022 */ 1023 1023 void __init swap_setup(void) 1024 1024 { 1025 - unsigned long megs = totalram_pages >> (20 - PAGE_SHIFT); 1025 + unsigned long megs = totalram_pages() >> (20 - PAGE_SHIFT); 1026 1026 1027 1027 /* Use a smaller cluster for small-memory machines */ 1028 1028 if (megs < 16)
+4 -2
mm/swapfile.c
··· 2197 2197 */ 2198 2198 if (PageSwapCache(page) && 2199 2199 likely(page_private(page) == entry.val) && 2200 - !page_swapped(page)) 2200 + (!PageTransCompound(page) || 2201 + !swap_page_trans_huge_swapped(si, entry))) 2201 2202 delete_from_swap_cache(compound_head(page)); 2202 2203 2203 2204 /* ··· 2813 2812 struct swap_info_struct *p; 2814 2813 unsigned int type; 2815 2814 int i; 2815 + int size = sizeof(*p) + nr_node_ids * sizeof(struct plist_node); 2816 2816 2817 - p = kvzalloc(sizeof(*p), GFP_KERNEL); 2817 + p = kvzalloc(size, GFP_KERNEL); 2818 2818 if (!p) 2819 2819 return ERR_PTR(-ENOMEM); 2820 2820
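The swapfile hunk above sizes the allocation as `sizeof(*p)` plus one `plist_node` per node, i.e. a trailing per-node array. That is the standard C flexible array member pattern; a self-contained sketch follows, where the struct layouts are stand-ins rather than the kernel's `swap_info_struct`:

```c
#include <stdlib.h>

/* Stand-in structs: the real swap_info_struct and plist_node differ. */
struct plist_node {
	int prio;
};

struct swap_info_struct {
	int type;
	struct plist_node avail_lists[];	/* flexible array member */
};

/*
 * Mirrors the sizing in the diff: the struct header plus nr_node_ids
 * trailing elements, zero-initialized as kvzalloc() would do.
 */
static struct swap_info_struct *alloc_swap_info(int nr_node_ids)
{
	return calloc(1, sizeof(struct swap_info_struct) +
			 nr_node_ids * sizeof(struct plist_node));
}
```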
+9 -2
mm/userfaultfd.c
··· 267 267 VM_BUG_ON(dst_addr & ~huge_page_mask(h)); 268 268 269 269 /* 270 - * Serialize via hugetlb_fault_mutex 270 + * Serialize via i_mmap_rwsem and hugetlb_fault_mutex. 271 + * i_mmap_rwsem ensures the dst_pte remains valid even 272 + * in the case of shared pmds. fault mutex prevents 273 + * races with other faulting threads. 271 274 */ 272 - idx = linear_page_index(dst_vma, dst_addr); 273 275 mapping = dst_vma->vm_file->f_mapping; 276 + i_mmap_lock_read(mapping); 277 + idx = linear_page_index(dst_vma, dst_addr); 274 278 hash = hugetlb_fault_mutex_hash(h, dst_mm, dst_vma, mapping, 275 279 idx, dst_addr); 276 280 mutex_lock(&hugetlb_fault_mutex_table[hash]); ··· 283 279 dst_pte = huge_pte_alloc(dst_mm, dst_addr, huge_page_size(h)); 284 280 if (!dst_pte) { 285 281 mutex_unlock(&hugetlb_fault_mutex_table[hash]); 282 + i_mmap_unlock_read(mapping); 286 283 goto out_unlock; 287 284 } 288 285 ··· 291 286 dst_pteval = huge_ptep_get(dst_pte); 292 287 if (!huge_pte_none(dst_pteval)) { 293 288 mutex_unlock(&hugetlb_fault_mutex_table[hash]); 289 + i_mmap_unlock_read(mapping); 294 290 goto out_unlock; 295 291 } 296 292 ··· 299 293 dst_addr, src_addr, &page); 300 294 301 295 mutex_unlock(&hugetlb_fault_mutex_table[hash]); 296 + i_mmap_unlock_read(mapping); 302 297 vm_alloc_shared = vm_shared; 303 298 304 299 cond_resched();
+1 -1
mm/util.c
··· 593 593 if (sysctl_overcommit_kbytes) 594 594 allowed = sysctl_overcommit_kbytes >> (PAGE_SHIFT - 10); 595 595 else 596 - allowed = ((totalram_pages - hugetlb_total_pages()) 596 + allowed = ((totalram_pages() - hugetlb_total_pages()) 597 597 * sysctl_overcommit_ratio / 100); 598 598 allowed += total_swap_pages; 599 599
+2 -2
mm/vmalloc.c
··· 1634 1634 1635 1635 might_sleep(); 1636 1636 1637 - if (count > totalram_pages) 1637 + if (count > totalram_pages()) 1638 1638 return NULL; 1639 1639 1640 1640 size = (unsigned long)count << PAGE_SHIFT; ··· 1739 1739 unsigned long real_size = size; 1740 1740 1741 1741 size = PAGE_ALIGN(size); 1742 - if (!size || (size >> PAGE_SHIFT) > totalram_pages) 1742 + if (!size || (size >> PAGE_SHIFT) > totalram_pages()) 1743 1743 goto fail; 1744 1744 1745 1745 area = __get_vm_area_node(size, align, VM_ALLOC | VM_UNINITIALIZED |
+126 -17
mm/vmscan.c
··· 88 88 /* Can pages be swapped as part of reclaim? */ 89 89 unsigned int may_swap:1; 90 90 91 + /* e.g. boosted watermark reclaim leaves slabs alone */ 92 + unsigned int may_shrinkslab:1; 93 + 91 94 /* 92 95 * Cgroups are not reclaimed below their configured memory.low, 93 96 * unless we threaten to OOM. If any cgroups are skipped due to ··· 1460 1457 count_memcg_page_event(page, PGLAZYFREED); 1461 1458 } else if (!mapping || !__remove_mapping(mapping, page, true)) 1462 1459 goto keep_locked; 1463 - /* 1464 - * At this point, we have no other references and there is 1465 - * no way to pick any more up (removed from LRU, removed 1466 - * from pagecache). Can use non-atomic bitops now (and 1467 - * we obviously don't have to worry about waking up a process 1468 - * waiting on the page lock, because there are no references. 1469 - */ 1470 - __ClearPageLocked(page); 1460 + 1461 + unlock_page(page); 1471 1462 free_it: 1472 1463 nr_reclaimed++; 1473 1464 ··· 2753 2756 shrink_node_memcg(pgdat, memcg, sc, &lru_pages); 2754 2757 node_lru_pages += lru_pages; 2755 2758 2756 - shrink_slab(sc->gfp_mask, pgdat->node_id, 2759 + if (sc->may_shrinkslab) { 2760 + shrink_slab(sc->gfp_mask, pgdat->node_id, 2757 2761 memcg, sc->priority); 2762 + } 2758 2763 2759 2764 /* Record the group's reclaim efficiency */ 2760 2765 vmpressure(sc->gfp_mask, memcg, false, ··· 3238 3239 .may_writepage = !laptop_mode, 3239 3240 .may_unmap = 1, 3240 3241 .may_swap = 1, 3242 + .may_shrinkslab = 1, 3241 3243 }; 3242 3244 3243 3245 /* ··· 3283 3283 .may_unmap = 1, 3284 3284 .reclaim_idx = MAX_NR_ZONES - 1, 3285 3285 .may_swap = !noswap, 3286 + .may_shrinkslab = 1, 3286 3287 }; 3287 3288 unsigned long lru_pages; 3288 3289 ··· 3330 3329 .may_writepage = !laptop_mode, 3331 3330 .may_unmap = 1, 3332 3331 .may_swap = may_swap, 3332 + .may_shrinkslab = 1, 3333 3333 }; 3334 3334 3335 3335 /* ··· 3381 3379 } while (memcg); 3382 3380 } 3383 3381 3382 + static bool pgdat_watermark_boosted(pg_data_t *pgdat, int 
classzone_idx) 3383 + { 3384 + int i; 3385 + struct zone *zone; 3386 + 3387 + /* 3388 + * Check for watermark boosts top-down as the higher zones 3389 + * are more likely to be boosted. Both watermarks and boosts 3390 + * should not be checked at the time time as reclaim would 3391 + * start prematurely when there is no boosting and a lower 3392 + * zone is balanced. 3393 + */ 3394 + for (i = classzone_idx; i >= 0; i--) { 3395 + zone = pgdat->node_zones + i; 3396 + if (!managed_zone(zone)) 3397 + continue; 3398 + 3399 + if (zone->watermark_boost) 3400 + return true; 3401 + } 3402 + 3403 + return false; 3404 + } 3405 + 3384 3406 /* 3385 3407 * Returns true if there is an eligible zone balanced for the request order 3386 3408 * and classzone_idx ··· 3415 3389 unsigned long mark = -1; 3416 3390 struct zone *zone; 3417 3391 3392 + /* 3393 + * Check watermarks bottom-up as lower zones are more likely to 3394 + * meet watermarks. 3395 + */ 3418 3396 for (i = 0; i <= classzone_idx; i++) { 3419 3397 zone = pgdat->node_zones + i; 3420 3398 ··· 3547 3517 unsigned long nr_soft_reclaimed; 3548 3518 unsigned long nr_soft_scanned; 3549 3519 unsigned long pflags; 3520 + unsigned long nr_boost_reclaim; 3521 + unsigned long zone_boosts[MAX_NR_ZONES] = { 0, }; 3522 + bool boosted; 3550 3523 struct zone *zone; 3551 3524 struct scan_control sc = { 3552 3525 .gfp_mask = GFP_KERNEL, 3553 3526 .order = order, 3554 - .priority = DEF_PRIORITY, 3555 - .may_writepage = !laptop_mode, 3556 3527 .may_unmap = 1, 3557 - .may_swap = 1, 3558 3528 }; 3559 3529 3560 3530 psi_memstall_enter(&pflags); ··· 3562 3532 3563 3533 count_vm_event(PAGEOUTRUN); 3564 3534 3535 + /* 3536 + * Account for the reclaim boost. Note that the zone boost is left in 3537 + * place so that parallel allocations that are near the watermark will 3538 + * stall or direct reclaim until kswapd is finished. 
3539 + */ 3540 + nr_boost_reclaim = 0; 3541 + for (i = 0; i <= classzone_idx; i++) { 3542 + zone = pgdat->node_zones + i; 3543 + if (!managed_zone(zone)) 3544 + continue; 3545 + 3546 + nr_boost_reclaim += zone->watermark_boost; 3547 + zone_boosts[i] = zone->watermark_boost; 3548 + } 3549 + boosted = nr_boost_reclaim; 3550 + 3551 + restart: 3552 + sc.priority = DEF_PRIORITY; 3565 3553 do { 3566 3554 unsigned long nr_reclaimed = sc.nr_reclaimed; 3567 3555 bool raise_priority = true; 3556 + bool balanced; 3568 3557 bool ret; 3569 3558 3570 3559 sc.reclaim_idx = classzone_idx; ··· 3610 3561 } 3611 3562 3612 3563 /* 3613 - * Only reclaim if there are no eligible zones. Note that 3614 - * sc.reclaim_idx is not used as buffer_heads_over_limit may 3615 - * have adjusted it. 3564 + * If the pgdat is imbalanced then ignore boosting and preserve 3565 + * the watermarks for a later time and restart. Note that the 3566 + * zone watermarks will be still reset at the end of balancing 3567 + * on the grounds that the normal reclaim should be enough to 3568 + * re-evaluate if boosting is required when kswapd next wakes. 3616 3569 */ 3617 - if (pgdat_balanced(pgdat, sc.order, classzone_idx)) 3570 + balanced = pgdat_balanced(pgdat, sc.order, classzone_idx); 3571 + if (!balanced && nr_boost_reclaim) { 3572 + nr_boost_reclaim = 0; 3573 + goto restart; 3574 + } 3575 + 3576 + /* 3577 + * If boosting is not active then only reclaim if there are no 3578 + * eligible zones. Note that sc.reclaim_idx is not used as 3579 + * buffer_heads_over_limit may have adjusted it. 3580 + */ 3581 + if (!nr_boost_reclaim && balanced) 3618 3582 goto out; 3583 + 3584 + /* Limit the priority of boosting to avoid reclaim writeback */ 3585 + if (nr_boost_reclaim && sc.priority == DEF_PRIORITY - 2) 3586 + raise_priority = false; 3587 + 3588 + /* 3589 + * Do not writeback or swap pages for boosted reclaim. The 3590 + * intent is to relieve pressure not issue sub-optimal IO 3591 + * from reclaim context. 
If no pages are reclaimed, the 3592 + * reclaim will be aborted. 3593 + */ 3594 + sc.may_writepage = !laptop_mode && !nr_boost_reclaim; 3595 + sc.may_swap = !nr_boost_reclaim; 3596 + sc.may_shrinkslab = !nr_boost_reclaim; 3619 3597 3620 3598 /* 3621 3599 * Do some background aging of the anon list, to give ··· 3695 3619 * progress in reclaiming pages 3696 3620 */ 3697 3621 nr_reclaimed = sc.nr_reclaimed - nr_reclaimed; 3622 + nr_boost_reclaim -= min(nr_boost_reclaim, nr_reclaimed); 3623 + 3624 + /* 3625 + * If reclaim made no progress for a boost, stop reclaim as 3626 + * IO cannot be queued and it could be an infinite loop in 3627 + * extreme circumstances. 3628 + */ 3629 + if (nr_boost_reclaim && !nr_reclaimed) 3630 + break; 3631 + 3698 3632 if (raise_priority || !nr_reclaimed) 3699 3633 sc.priority--; 3700 3634 } while (sc.priority >= 1); ··· 3713 3627 pgdat->kswapd_failures++; 3714 3628 3715 3629 out: 3630 + /* If reclaim was boosted, account for the reclaim done in this pass */ 3631 + if (boosted) { 3632 + unsigned long flags; 3633 + 3634 + for (i = 0; i <= classzone_idx; i++) { 3635 + if (!zone_boosts[i]) 3636 + continue; 3637 + 3638 + /* Increments are under the zone lock */ 3639 + zone = pgdat->node_zones + i; 3640 + spin_lock_irqsave(&zone->lock, flags); 3641 + zone->watermark_boost -= min(zone->watermark_boost, zone_boosts[i]); 3642 + spin_unlock_irqrestore(&zone->lock, flags); 3643 + } 3644 + 3645 + /* 3646 + * As there is now likely space, wakeup kcompact to defragment 3647 + * pageblocks. 
3648 + */ 3649 + wakeup_kcompactd(pgdat, pageblock_order, classzone_idx); 3650 + } 3651 + 3716 3652 snapshot_refaults(NULL, pgdat); 3717 3653 __fs_reclaim_release(); 3718 3654 psi_memstall_leave(&pflags); ··· 3963 3855 3964 3856 /* Hopeless node, leave it to direct reclaim if possible */ 3965 3857 if (pgdat->kswapd_failures >= MAX_RECLAIM_RETRIES || 3966 - pgdat_balanced(pgdat, order, classzone_idx)) { 3858 + (pgdat_balanced(pgdat, order, classzone_idx) && 3859 + !pgdat_watermark_boosted(pgdat, classzone_idx))) { 3967 3860 /* 3968 3861 * There may be plenty of free memory available, but it's too 3969 3862 * fragmented for high-order allocations. Wake up kcompactd
+2 -2
mm/vmstat.c
··· 227 227 * 125 1024 10 16-32 GB 9 228 228 */ 229 229 230 - mem = zone->managed_pages >> (27 - PAGE_SHIFT); 230 + mem = zone_managed_pages(zone) >> (27 - PAGE_SHIFT); 231 231 232 232 threshold = 2 * fls(num_online_cpus()) * (1 + fls(mem)); 233 233 ··· 1569 1569 high_wmark_pages(zone), 1570 1570 zone->spanned_pages, 1571 1571 zone->present_pages, 1572 - zone->managed_pages); 1572 + zone_managed_pages(zone)); 1573 1573 1574 1574 seq_printf(m, 1575 1575 "\n protection: (%ld",
+1 -1
mm/workingset.c
··· 549 549 * double the initial memory by using totalram_pages as-is. 550 550 */ 551 551 timestamp_bits = BITS_PER_LONG - EVICTION_SHIFT; 552 - max_order = fls_long(totalram_pages - 1); 552 + max_order = fls_long(totalram_pages() - 1); 553 553 if (max_order > timestamp_bits) 554 554 bucket_order = max_order - timestamp_bits; 555 555 pr_info("workingset: timestamp_bits=%d max_order=%d bucket_order=%u\n",
+2 -2
mm/zswap.c
··· 219 219 220 220 static bool zswap_is_full(void) 221 221 { 222 - return totalram_pages * zswap_max_pool_percent / 100 < 223 - DIV_ROUND_UP(zswap_pool_total_size, PAGE_SIZE); 222 + return totalram_pages() * zswap_max_pool_percent / 100 < 223 + DIV_ROUND_UP(zswap_pool_total_size, PAGE_SIZE); 224 224 } 225 225 226 226 static void zswap_update_total_size(void)
+4 -3
net/dccp/proto.c
··· 1131 1131 static int __init dccp_init(void) 1132 1132 { 1133 1133 unsigned long goal; 1134 + unsigned long nr_pages = totalram_pages(); 1134 1135 int ehash_order, bhash_order, i; 1135 1136 int rc; 1136 1137 ··· 1158 1157 * 1159 1158 * The methodology is similar to that of the buffer cache. 1160 1159 */ 1161 - if (totalram_pages >= (128 * 1024)) 1162 - goal = totalram_pages >> (21 - PAGE_SHIFT); 1160 + if (nr_pages >= (128 * 1024)) 1161 + goal = nr_pages >> (21 - PAGE_SHIFT); 1163 1162 else 1164 - goal = totalram_pages >> (23 - PAGE_SHIFT); 1163 + goal = nr_pages >> (23 - PAGE_SHIFT); 1165 1164 1166 1165 if (thash_entries) 1167 1166 goal = (thash_entries *
+1 -1
net/decnet/dn_route.c
··· 1866 1866 dn_route_timer.expires = jiffies + decnet_dst_gc_interval * HZ; 1867 1867 add_timer(&dn_route_timer); 1868 1868 1869 - goal = totalram_pages >> (26 - PAGE_SHIFT); 1869 + goal = totalram_pages() >> (26 - PAGE_SHIFT); 1870 1870 1871 1871 for(order = 0; (1UL << order) < goal; order++) 1872 1872 /* NOTHING */;
+1 -1
net/ipv4/tcp_metrics.c
··· 1000 1000 1001 1001 slots = tcpmhash_entries; 1002 1002 if (!slots) { 1003 - if (totalram_pages >= 128 * 1024) 1003 + if (totalram_pages() >= 128 * 1024) 1004 1004 slots = 16 * 1024; 1005 1005 else 1006 1006 slots = 8 * 1024;
+4 -3
net/netfilter/nf_conntrack_core.c
··· 2248 2248 2249 2249 int nf_conntrack_init_start(void) 2250 2250 { 2251 + unsigned long nr_pages = totalram_pages(); 2251 2252 int max_factor = 8; 2252 2253 int ret = -ENOMEM; 2253 2254 int i; ··· 2268 2267 * >= 4GB machines have 65536 buckets. 2269 2268 */ 2270 2269 nf_conntrack_htable_size 2271 - = (((totalram_pages << PAGE_SHIFT) / 16384) 2270 + = (((nr_pages << PAGE_SHIFT) / 16384) 2272 2271 / sizeof(struct hlist_head)); 2273 - if (totalram_pages > (4 * (1024 * 1024 * 1024 / PAGE_SIZE))) 2272 + if (nr_pages > (4 * (1024 * 1024 * 1024 / PAGE_SIZE))) 2274 2273 nf_conntrack_htable_size = 65536; 2275 - else if (totalram_pages > (1024 * 1024 * 1024 / PAGE_SIZE)) 2274 + else if (nr_pages > (1024 * 1024 * 1024 / PAGE_SIZE)) 2276 2275 nf_conntrack_htable_size = 16384; 2277 2276 if (nf_conntrack_htable_size < 32) 2278 2277 nf_conntrack_htable_size = 32;
+3 -2
net/netfilter/xt_hashlimit.c
··· 274 274 struct xt_hashlimit_htable *hinfo; 275 275 const struct seq_operations *ops; 276 276 unsigned int size, i; 277 + unsigned long nr_pages = totalram_pages(); 277 278 int ret; 278 279 279 280 if (cfg->size) { 280 281 size = cfg->size; 281 282 } else { 282 - size = (totalram_pages << PAGE_SHIFT) / 16384 / 283 + size = (nr_pages << PAGE_SHIFT) / 16384 / 283 284 sizeof(struct hlist_head); 284 - if (totalram_pages > 1024 * 1024 * 1024 / PAGE_SIZE) 285 + if (nr_pages > 1024 * 1024 * 1024 / PAGE_SIZE) 285 286 size = 8192; 286 287 if (size < 16) 287 288 size = 16;
+4 -3
net/sctp/protocol.c
··· 1368 1368 int status = -EINVAL; 1369 1369 unsigned long goal; 1370 1370 unsigned long limit; 1371 + unsigned long nr_pages = totalram_pages(); 1371 1372 int max_share; 1372 1373 int order; 1373 1374 int num_entries; ··· 1427 1426 * The methodology is similar to that of the tcp hash tables. 1428 1427 * Though not identical. Start by getting a goal size 1429 1428 */ 1430 - if (totalram_pages >= (128 * 1024)) 1431 - goal = totalram_pages >> (22 - PAGE_SHIFT); 1429 + if (nr_pages >= (128 * 1024)) 1430 + goal = nr_pages >> (22 - PAGE_SHIFT); 1432 1431 else 1433 - goal = totalram_pages >> (24 - PAGE_SHIFT); 1432 + goal = nr_pages >> (24 - PAGE_SHIFT); 1434 1433 1435 1434 /* Then compute the page order for said goal */ 1436 1435 order = get_order(goal);
+31 -22
scripts/Makefile.kasan
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 - ifdef CONFIG_KASAN 2 + ifdef CONFIG_KASAN_GENERIC 3 + 3 4 ifdef CONFIG_KASAN_INLINE 4 5 call_threshold := 10000 5 6 else ··· 13 12 14 13 cc-param = $(call cc-option, -mllvm -$(1), $(call cc-option, --param $(1))) 15 14 16 - ifeq ($(call cc-option, $(CFLAGS_KASAN_MINIMAL) -Werror),) 17 - ifneq ($(CONFIG_COMPILE_TEST),y) 18 - $(warning Cannot use CONFIG_KASAN: \ 19 - -fsanitize=kernel-address is not supported by compiler) 20 - endif 21 - else 22 - # -fasan-shadow-offset fails without -fsanitize 23 - CFLAGS_KASAN_SHADOW := $(call cc-option, -fsanitize=kernel-address \ 15 + # -fasan-shadow-offset fails without -fsanitize 16 + CFLAGS_KASAN_SHADOW := $(call cc-option, -fsanitize=kernel-address \ 24 17 -fasan-shadow-offset=$(KASAN_SHADOW_OFFSET), \ 25 18 $(call cc-option, -fsanitize=kernel-address \ 26 19 -mllvm -asan-mapping-offset=$(KASAN_SHADOW_OFFSET))) 27 20 28 - ifeq ($(strip $(CFLAGS_KASAN_SHADOW)),) 29 - CFLAGS_KASAN := $(CFLAGS_KASAN_MINIMAL) 30 - else 31 - # Now add all the compiler specific options that are valid standalone 32 - CFLAGS_KASAN := $(CFLAGS_KASAN_SHADOW) \ 33 - $(call cc-param,asan-globals=1) \ 34 - $(call cc-param,asan-instrumentation-with-call-threshold=$(call_threshold)) \ 35 - $(call cc-param,asan-stack=1) \ 36 - $(call cc-param,asan-use-after-scope=1) \ 37 - $(call cc-param,asan-instrument-allocas=1) 38 - endif 39 - 21 + ifeq ($(strip $(CFLAGS_KASAN_SHADOW)),) 22 + CFLAGS_KASAN := $(CFLAGS_KASAN_MINIMAL) 23 + else 24 + # Now add all the compiler specific options that are valid standalone 25 + CFLAGS_KASAN := $(CFLAGS_KASAN_SHADOW) \ 26 + $(call cc-param,asan-globals=1) \ 27 + $(call cc-param,asan-instrumentation-with-call-threshold=$(call_threshold)) \ 28 + $(call cc-param,asan-stack=1) \ 29 + $(call cc-param,asan-use-after-scope=1) \ 30 + $(call cc-param,asan-instrument-allocas=1) 40 31 endif 41 32 42 33 ifdef CONFIG_KASAN_EXTRA 43 34 CFLAGS_KASAN += $(call cc-option, 
-fsanitize-address-use-after-scope) 44 35 endif 45 36 46 - CFLAGS_KASAN_NOSANITIZE := -fno-builtin 37 + endif # CONFIG_KASAN_GENERIC 47 38 39 + ifdef CONFIG_KASAN_SW_TAGS 40 + 41 + ifdef CONFIG_KASAN_INLINE 42 + instrumentation_flags := -mllvm -hwasan-mapping-offset=$(KASAN_SHADOW_OFFSET) 43 + else 44 + instrumentation_flags := -mllvm -hwasan-instrument-with-calls=1 45 + endif 46 + 47 + CFLAGS_KASAN := -fsanitize=kernel-hwaddress \ 48 + -mllvm -hwasan-instrument-stack=0 \ 49 + $(instrumentation_flags) 50 + 51 + endif # CONFIG_KASAN_SW_TAGS 52 + 53 + ifdef CONFIG_KASAN 54 + CFLAGS_KASAN_NOSANITIZE := -fno-builtin 48 55 endif
+1
scripts/bloat-o-meter
··· 32 32 if name.startswith("__mod_"): continue 33 33 if name.startswith("__se_sys"): continue 34 34 if name.startswith("__se_compat_sys"): continue 35 + if name.startswith("__addressable_"): continue 35 36 if name == "linux_banner": continue 36 37 # statics and some other optimizations adds random .NUMBER 37 38 name = re_NUMBER.sub('', name)
+2
scripts/checkstack.pl
··· 48 48 $funcre = qr/^$x* <(.*)>:$/; 49 49 if ($arch eq 'aarch64') { 50 50 #ffffffc0006325cc: a9bb7bfd stp x29, x30, [sp, #-80]! 51 + #a110: d11643ff sub sp, sp, #0x590 51 52 $re = qr/^.*stp.*sp, \#-([0-9]{1,8})\]\!/o; 53 + $dre = qr/^.*sub.*sp, sp, #(0x$x{1,8})/o; 52 54 } elsif ($arch eq 'arm') { 53 55 #c0008ffc: e24dd064 sub sp, sp, #100 ; 0x64 54 56 $re = qr/.*sub.*sp, sp, #(([0-9]{2}|[3-9])[0-9]{2})/o;
+1 -1
scripts/decode_stacktrace.sh
··· 78 78 fi 79 79 80 80 # Strip out the base of the path 81 - code=${code//$basepath/""} 81 + code=${code//^$basepath/""} 82 82 83 83 # In the case of inlines, move everything to same line 84 84 code=${code//$'\n'/' '}
+7
scripts/decodecode
··· 60 60 4) type=4byte ;; 61 61 esac 62 62 63 + if [ -z "$ARCH" ]; then 64 + case `uname -m` in 65 + aarch64*) ARCH=arm64 ;; 66 + arm*) ARCH=arm ;; 67 + esac 68 + fi 69 + 63 70 disas() { 64 71 ${CROSS_COMPILE}as $AFLAGS -o $1.o $1.s > /dev/null 2>&1 65 72
+12
scripts/spdxcheck-test.sh
··· 1 + #!/bin/sh 2 + 3 + for PYTHON in python2 python3; do 4 + # run check on a text and a binary file 5 + for FILE in Makefile Documentation/logo.gif; do 6 + $PYTHON scripts/spdxcheck.py $FILE 7 + $PYTHON scripts/spdxcheck.py - < $FILE 8 + done 9 + 10 + # run check on complete tree to catch any other issues 11 + $PYTHON scripts/spdxcheck.py > /dev/null 12 + done
+11 -2
scripts/tags.sh
··· 191 191 '/^DEF_PCI_AC_\(\|NO\)RET(\([[:alnum:]_]*\).*/\2/' 192 192 '/^PCI_OP_READ(\(\w*\).*[1-4])/pci_bus_read_config_\1/' 193 193 '/^PCI_OP_WRITE(\(\w*\).*[1-4])/pci_bus_write_config_\1/' 194 - '/\<DEFINE_\(MUTEX\|SEMAPHORE\|SPINLOCK\)(\([[:alnum:]_]*\)/\2/v/' 194 + '/\<DEFINE_\(RT_MUTEX\|MUTEX\|SEMAPHORE\|SPINLOCK\)(\([[:alnum:]_]*\)/\2/v/' 195 195 '/\<DEFINE_\(RAW_SPINLOCK\|RWLOCK\|SEQLOCK\)(\([[:alnum:]_]*\)/\2/v/' 196 196 '/\<DECLARE_\(RWSEM\|COMPLETION\)(\([[:alnum:]_]\+\)/\2/v/' 197 197 '/\<DECLARE_BITMAP(\([[:alnum:]_]*\)/\1/v/' ··· 204 204 '/\(^\s\)OFFSET(\([[:alnum:]_]*\)/\2/v/' 205 205 '/\(^\s\)DEFINE(\([[:alnum:]_]*\)/\2/v/' 206 206 '/\<\(DEFINE\|DECLARE\)_HASHTABLE(\([[:alnum:]_]*\)/\2/v/' 207 + '/\<DEFINE_ID\(R\|A\)(\([[:alnum:]_]\+\)/\2/' 208 + '/\<DEFINE_WD_CLASS(\([[:alnum:]_]\+\)/\1/' 209 + '/\<ATOMIC_NOTIFIER_HEAD(\([[:alnum:]_]\+\)/\1/' 210 + '/\<RAW_NOTIFIER_HEAD(\([[:alnum:]_]\+\)/\1/' 211 + '/\<DECLARE_FAULT_ATTR(\([[:alnum:]_]\+\)/\1/' 212 + '/\<BLOCKING_NOTIFIER_HEAD(\([[:alnum:]_]\+\)/\1/' 213 + '/\<DEVICE_ATTR_\(RW\|RO\|WO\)(\([[:alnum:]_]\+\)/dev_attr_\2/' 214 + '/\<DRIVER_ATTR_\(RW\|RO\|WO\)(\([[:alnum:]_]\+\)/driver_attr_\2/' 215 + '/\<\(DEFINE\|DECLARE\)_STATIC_KEY_\(TRUE\|FALSE\)\(\|_RO\)(\([[:alnum:]_]\+\)/\4/' 207 216 ) 208 217 regex_kconfig=( 209 218 '/^[[:blank:]]*\(menu\|\)config[[:blank:]]\+\([[:alnum:]_]\+\)/\2/' ··· 258 249 -I __initdata,__exitdata,__initconst,__ro_after_init \ 259 250 -I __initdata_memblock \ 260 251 -I __refdata,__attribute,__maybe_unused,__always_unused \ 261 - -I __acquires,__releases,__deprecated \ 252 + -I __acquires,__releases,__deprecated,__always_inline \ 262 253 -I __read_mostly,__aligned,____cacheline_aligned \ 263 254 -I ____cacheline_aligned_in_smp \ 264 255 -I __cacheline_aligned,__cacheline_aligned_in_smp \
+1 -1
security/integrity/ima/ima_kexec.c
··· 106 106 kexec_segment_size = ALIGN(ima_get_binary_runtime_size() + 107 107 PAGE_SIZE / 2, PAGE_SIZE); 108 108 if ((kexec_segment_size == ULONG_MAX) || 109 - ((kexec_segment_size >> PAGE_SHIFT) > totalram_pages / 2)) { 109 + ((kexec_segment_size >> PAGE_SHIFT) > totalram_pages() / 2)) { 110 110 pr_err("Binary measurement list too large.\n"); 111 111 return; 112 112 }
+15 -2
tools/testing/nvdimm/test/iomap.c
··· 104 104 } 105 105 EXPORT_SYMBOL(__wrap_devm_memremap); 106 106 107 + static void nfit_test_kill(void *_pgmap) 108 + { 109 + struct dev_pagemap *pgmap = _pgmap; 110 + 111 + pgmap->kill(pgmap->ref); 112 + } 113 + 107 114 void *__wrap_devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap) 108 115 { 109 116 resource_size_t offset = pgmap->res.start; 110 117 struct nfit_test_resource *nfit_res = get_nfit_res(offset); 111 118 112 - if (nfit_res) 119 + if (nfit_res) { 120 + int rc; 121 + 122 + rc = devm_add_action_or_reset(dev, nfit_test_kill, pgmap); 123 + if (rc) 124 + return ERR_PTR(rc); 113 125 return nfit_res->buf + offset - nfit_res->res.start; 126 + } 114 127 return devm_memremap_pages(dev, pgmap); 115 128 } 116 - EXPORT_SYMBOL(__wrap_devm_memremap_pages); 129 + EXPORT_SYMBOL_GPL(__wrap_devm_memremap_pages); 117 130 118 131 pfn_t __wrap_phys_to_pfn_t(phys_addr_t addr, unsigned long flags) 119 132 {
+1 -1
tools/vm/page-types.c
··· 701 701 if (kpagecgroup_read(cgi, index, pages) != pages) 702 702 fatal("kpagecgroup returned fewer pages than expected"); 703 703 704 - if (kpagecount_read(cnt, index, batch) != pages) 704 + if (kpagecount_read(cnt, index, pages) != pages) 705 705 fatal("kpagecount returned fewer pages than expected"); 706 706 707 707 for (i = 0; i < pages; i++)
+5 -9
virt/kvm/kvm_main.c
··· 363 363 } 364 364 365 365 static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn, 366 - struct mm_struct *mm, 367 - unsigned long start, 368 - unsigned long end, 369 - bool blockable) 366 + const struct mmu_notifier_range *range) 370 367 { 371 368 struct kvm *kvm = mmu_notifier_to_kvm(mn); 372 369 int need_tlb_flush = 0, idx; ··· 377 380 * count is also read inside the mmu_lock critical section. 378 381 */ 379 382 kvm->mmu_notifier_count++; 380 - need_tlb_flush = kvm_unmap_hva_range(kvm, start, end); 383 + need_tlb_flush = kvm_unmap_hva_range(kvm, range->start, range->end); 381 384 need_tlb_flush |= kvm->tlbs_dirty; 382 385 /* we've to flush the tlb before the pages can be freed */ 383 386 if (need_tlb_flush) ··· 385 388 386 389 spin_unlock(&kvm->mmu_lock); 387 390 388 - ret = kvm_arch_mmu_notifier_invalidate_range(kvm, start, end, blockable); 391 + ret = kvm_arch_mmu_notifier_invalidate_range(kvm, range->start, 392 + range->end, range->blockable); 389 393 390 394 srcu_read_unlock(&kvm->srcu, idx); 391 395 ··· 394 396 } 395 397 396 398 static void kvm_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn, 397 - struct mm_struct *mm, 398 - unsigned long start, 399 - unsigned long end) 399 + const struct mmu_notifier_range *range) 400 400 { 401 401 struct kvm *kvm = mmu_notifier_to_kvm(mn); 402 402