Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge 5.0-rc2 into tty-next

We need the tty core fixes in here as well.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

+4108 -2638
+9
Documentation/ABI/testing/sysfs-block
··· 279 279 size in 512B sectors of the zones of the device, with 280 280 the eventual exception of the last zone of the device 281 281 which may be smaller. 282 + 283 + What: /sys/block/<disk>/queue/io_timeout 284 + Date: November 2018 285 + Contact: Weiping Zhang <zhangweiping@didiglobal.com> 286 + Description: 287 + io_timeout is the request timeout in milliseconds. If a request 288 + does not complete in this time then the block driver timeout 289 + handler is invoked. That timeout handler can decide to retry 290 + the request, to fail it or to start a device recovery strategy.
+9 -2
Documentation/ABI/testing/sysfs-block-zram
··· 122 122 statistics (bd_count, bd_reads, bd_writes) in a format 123 123 similar to block layer statistics file format. 124 124 125 + What: /sys/block/zram<id>/writeback_limit_enable 126 + Date: November 2018 127 + Contact: Minchan Kim <minchan@kernel.org> 128 + Description: 129 + The writeback_limit_enable file is read-write and specifies 130 + whether the writeback_limit feature is enabled. "1" means enable the feature. 131 + "0", the initial state, means no limit. 132 + 125 133 What: /sys/block/zram<id>/writeback_limit 126 134 Date: November 2018 127 135 Contact: Minchan Kim <minchan@kernel.org> 128 136 Description: 129 137 The writeback_limit file is read-write and specifies the maximum 130 138 amount of writeback ZRAM can do. The limit could be changed 131 - in run time and "0" means disable the limit. 132 - No limit is the initial state. 139 + at run time.
+7
Documentation/block/bfq-iosched.txt
··· 357 357 than maximum throughput. In these cases, consider setting the 358 358 strict_guarantees parameter. 359 359 360 + slice_idle_us 361 + ------------- 362 + 363 + Controls the same tuning parameter as slice_idle, but in microseconds. 364 + Either tunable can be used to set idling behavior. Afterwards, the 365 + other tunable will reflect the newly set value in sysfs. 366 + 360 367 strict_guarantees 361 368 ----------------- 362 369
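The slice_idle_us addition above exposes the same bfq parameter as slice_idle, only in microseconds. A minimal sketch of the relationship, assuming a hypothetical device name "sda" with bfq as the active scheduler (the sysfs writes need root and a bfq-managed queue, so they are shown commented out):

```shell
# slice_idle and slice_idle_us are two views of one tuning parameter.
# Device name "sda" and an active bfq scheduler are assumptions.
IDLE_MS=8                      # bfq's default slice_idle, in milliseconds
IDLE_US=$((IDLE_MS * 1000))    # the same value as slice_idle_us would show
echo "$IDLE_US"
# Writing either tunable makes the other reflect the new value:
# echo "$IDLE_US" > /sys/block/sda/queue/iosched/slice_idle_us
# cat /sys/block/sda/queue/iosched/slice_idle
```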
+2 -1
Documentation/block/null_blk.txt
··· 88 88 89 89 zoned=[0/1]: Default: 0 90 90 0: Block device is exposed as a random-access block device. 91 - 1: Block device is exposed as a host-managed zoned block device. 91 + 1: Block device is exposed as a host-managed zoned block device. Requires 92 + CONFIG_BLK_DEV_ZONED. 92 93 93 94 zone_size=[MB]: Default: 256 94 95 Per zone size when exposed as a zoned block device. Must be a power of two.
+7
Documentation/block/queue-sysfs.txt
··· 67 67 IO to sleep for this amount of microseconds before entering classic 68 68 polling. 69 69 70 + io_timeout (RW) 71 + --------------- 72 + io_timeout is the request timeout in milliseconds. If a request does not 73 + complete in this time then the block driver timeout handler is invoked. 74 + That timeout handler can decide to retry the request, to fail it or to start 75 + a device recovery strategy. 76 + 70 77 iostats (RW) 71 78 ------------- 72 79 This file is used to control (on/off) the iostats accounting of the
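The io_timeout attribute documented above is expressed in milliseconds. A small sketch of configuring a 30-second request timeout, with "sda" as a hypothetical device name (the actual sysfs write requires root, so it is commented out):

```shell
# Convert a human-friendly timeout in seconds to the milliseconds
# expected by /sys/block/<disk>/queue/io_timeout.
TIMEOUT_S=30
TIMEOUT_MS=$((TIMEOUT_S * 1000))
echo "$TIMEOUT_MS"
# echo "$TIMEOUT_MS" > /sys/block/sda/queue/io_timeout   # hypothetical device
# cat /sys/block/sda/queue/io_timeout
```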
+47 -27
Documentation/blockdev/zram.txt
··· 156 156 A brief description of exported device attributes. For more details please 157 157 read Documentation/ABI/testing/sysfs-block-zram. 158 158 159 - Name access description 160 - ---- ------ ----------- 161 - disksize RW show and set the device's disk size 162 - initstate RO shows the initialization state of the device 163 - reset WO trigger device reset 164 - mem_used_max WO reset the `mem_used_max' counter (see later) 165 - mem_limit WO specifies the maximum amount of memory ZRAM can use 166 - to store the compressed data 167 - writeback_limit WO specifies the maximum amount of write IO zram can 168 - write out to backing device as 4KB unit 169 - max_comp_streams RW the number of possible concurrent compress operations 170 - comp_algorithm RW show and change the compression algorithm 171 - compact WO trigger memory compaction 172 - debug_stat RO this file is used for zram debugging purposes 173 - backing_dev RW set up backend storage for zram to write out 174 - idle WO mark allocated slot as idle 159 + Name access description 160 + ---- ------ ----------- 161 + disksize RW show and set the device's disk size 162 + initstate RO shows the initialization state of the device 163 + reset WO trigger device reset 164 + mem_used_max WO reset the `mem_used_max' counter (see later) 165 + mem_limit WO specifies the maximum amount of memory ZRAM can use 166 + to store the compressed data 167 + writeback_limit WO specifies the maximum amount of write IO zram can 168 + write out to backing device as 4KB unit 169 + writeback_limit_enable RW show and set writeback_limit feature 170 + max_comp_streams RW the number of possible concurrent compress operations 171 + comp_algorithm RW show and change the compression algorithm 172 + compact WO trigger memory compaction 173 + debug_stat RO this file is used for zram debugging purposes 174 + backing_dev RW set up backend storage for zram to write out 175 + idle WO mark allocated slot as idle 175 176 176 177 177 178 User space 
is advised to use the following files to read the device statistics. ··· 281 280 If there are lots of write IO with flash device, potentially, it has 282 281 flash wearout problem so that admin needs to design write limitation 283 282 to guarantee storage health for entire product life. 284 - To overcome the concern, zram supports "writeback_limit". 285 - The "writeback_limit"'s default value is 0 so that it doesn't limit 286 - any writeback. If admin want to measure writeback count in a certain 287 - period, he could know it via /sys/block/zram0/bd_stat's 3rd column. 283 + 284 + To overcome the concern, zram supports the "writeback_limit" feature. 285 + The default value of "writeback_limit_enable" is 0 so that it doesn't limit 286 + any writeback. IOW, if the admin wants to apply a writeback budget, they 287 + should enable writeback_limit_enable via 288 + 289 + $ echo 1 > /sys/block/zramX/writeback_limit_enable 290 + 291 + Once writeback_limit_enable is set, zram doesn't allow any writeback 292 + until the admin sets the budget via /sys/block/zramX/writeback_limit. 293 + 294 + (If the admin doesn't enable writeback_limit_enable, any writeback_limit 295 + value assigned via /sys/block/zramX/writeback_limit is meaningless.) 288 296 289 297 If admin want to limit writeback as per-day 400M, he could do it 290 298 like below. 291 299 292 - MB_SHIFT=20 293 - 4K_SHIFT=12 294 - echo $((400<<MB_SHIFT>>4K_SHIFT)) > \ 295 - /sys/block/zram0/writeback_limit. 300 + $ MB_SHIFT=20 301 + $ 4K_SHIFT=12 302 + $ echo $((400<<MB_SHIFT>>4K_SHIFT)) > \ 303 + /sys/block/zram0/writeback_limit. 
304 + $ echo 1 > /sys/block/zram0/writeback_limit_enable 296 305 297 - If admin want to allow further write again, he could do it like below 306 + If the admin wants to allow further writeback again once the budget is 307 + exhausted, they could do it like below 298 308 299 - echo 0 > /sys/block/zram0/writeback_limit 309 + $ echo $((400<<MB_SHIFT>>4K_SHIFT)) > \ 310 + /sys/block/zram0/writeback_limit 300 311 301 312 If admin want to see remaining writeback budget since he set, 302 313 303 - cat /sys/block/zram0/writeback_limit 314 + $ cat /sys/block/zramX/writeback_limit 315 + 316 + If the admin wants to disable the writeback limit, they could do 317 + 318 + $ echo 0 > /sys/block/zramX/writeback_limit_enable 304 319 305 320 The writeback_limit count will reset whenever you reset zram(e.g., 306 321 system reboot, echo 1 > /sys/block/zramX/reset) so keeping how many of 307 322 writeback happened until you reset the zram to allocate extra writeback 308 323 budget in next setting is user's job. 324 + 325 + If the admin wants to measure the writeback count over a certain period, 326 + they can read it via /sys/block/zram0/bd_stat's 3rd column.
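The budget arithmetic in the snippet above converts megabytes into writeback_limit's 4KB units. One detail worth noting: `4K_SHIFT` is not a valid shell variable name (names cannot start with a digit), so a literally runnable version needs a rename. A sketch, with zram0 as the assumed device (the sysfs writes need root and are commented out):

```shell
MB_SHIFT=20
FOURK_SHIFT=12          # renamed: shell variables cannot start with a digit
BUDGET_MB=400
# 400 MB expressed in 4KB units: shift up to bytes, then down to 4K pages.
LIMIT=$(( (BUDGET_MB << MB_SHIFT) >> FOURK_SHIFT ))
echo "$LIMIT"
# echo 1 > /sys/block/zram0/writeback_limit_enable   # enable budgeting
# echo "$LIMIT" > /sys/block/zram0/writeback_limit   # set the daily budget
```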
+2 -1
Documentation/devicetree/bindings/reset/socfpga-reset.txt
··· 1 1 Altera SOCFPGA Reset Manager 2 2 3 3 Required properties: 4 - - compatible : "altr,rst-mgr" 4 + - compatible : "altr,rst-mgr" for Cyclone5/Arria5/Arria10 5 + "altr,stratix10-rst-mgr", "altr,rst-mgr" for Stratix10 ARM64 SoC 5 6 - reg : Should contain 1 register ranges(address and length) 6 7 - altr,modrst-offset : Should contain the offset of the first modrst register. 7 8 - #reset-cells: 1
+14 -11
Documentation/devicetree/bindings/reset/uniphier-reset.txt
··· 120 120 }; 121 121 122 122 123 - USB3 core reset 124 - --------------- 123 + Peripheral core reset in glue layer 124 + ----------------------------------- 125 125 126 - USB3 core reset belongs to USB3 glue layer. Before using the core reset, 127 - it is necessary to control the clocks and resets to enable this layer. 128 - These clocks and resets should be described in each property. 126 + Some peripheral core reset belongs to its own glue layer. Before using 127 + this core reset, it is necessary to control the clocks and resets to enable 128 + this layer. These clocks and resets should be described in each property. 129 129 130 130 Required properties: 131 131 - compatible: Should be 132 - "socionext,uniphier-pro4-usb3-reset" - for Pro4 SoC 133 - "socionext,uniphier-pxs2-usb3-reset" - for PXs2 SoC 134 - "socionext,uniphier-ld20-usb3-reset" - for LD20 SoC 135 - "socionext,uniphier-pxs3-usb3-reset" - for PXs3 SoC 132 + "socionext,uniphier-pro4-usb3-reset" - for Pro4 SoC USB3 133 + "socionext,uniphier-pxs2-usb3-reset" - for PXs2 SoC USB3 134 + "socionext,uniphier-ld20-usb3-reset" - for LD20 SoC USB3 135 + "socionext,uniphier-pxs3-usb3-reset" - for PXs3 SoC USB3 136 + "socionext,uniphier-pro4-ahci-reset" - for Pro4 SoC AHCI 137 + "socionext,uniphier-pxs2-ahci-reset" - for PXs2 SoC AHCI 138 + "socionext,uniphier-pxs3-ahci-reset" - for PXs3 SoC AHCI 136 139 - #reset-cells: Should be 1. 137 140 - reg: Specifies offset and length of the register set for the device. 138 - - clocks: A list of phandles to the clock gate for USB3 glue layer. 141 + - clocks: A list of phandles to the clock gate for the glue layer. 139 142 According to the clock-names, appropriate clocks are required. 140 143 - clock-names: Should contain 141 144 "gio", "link" - for Pro4 SoC 142 145 "link" - for others 143 - - resets: A list of phandles to the reset control for USB3 glue layer. 146 + - resets: A list of phandles to the reset control for the glue layer. 
144 147 According to the reset-names, appropriate resets are required. 145 148 - reset-names: Should contain 146 149 "gio", "link" - for Pro4 SoC
+4 -4
Documentation/driver-model/bus.txt
··· 124 124 ssize_t (*store)(struct bus_type *, const char * buf, size_t count); 125 125 }; 126 126 127 - Bus drivers can export attributes using the BUS_ATTR macro that works 128 - similarly to the DEVICE_ATTR macro for devices. For example, a definition 129 - like this: 127 + Bus drivers can export attributes using the BUS_ATTR_RW macro that works 128 + similarly to the DEVICE_ATTR_RW macro for devices. For example, a 129 + definition like this: 130 130 131 - static BUS_ATTR(debug,0644,show_debug,store_debug); 131 + static BUS_ATTR_RW(debug); 132 132 133 133 is equivalent to declaring: 134 134
+1
Documentation/features/core/cBPF-JIT/arch-support.txt
··· 11 11 | arm: | TODO | 12 12 | arm64: | TODO | 13 13 | c6x: | TODO | 14 + | csky: | TODO | 14 15 | h8300: | TODO | 15 16 | hexagon: | TODO | 16 17 | ia64: | TODO |
+1
Documentation/features/core/eBPF-JIT/arch-support.txt
··· 11 11 | arm: | ok | 12 12 | arm64: | ok | 13 13 | c6x: | TODO | 14 + | csky: | TODO | 14 15 | h8300: | TODO | 15 16 | hexagon: | TODO | 16 17 | ia64: | TODO |
+1
Documentation/features/core/generic-idle-thread/arch-support.txt
··· 11 11 | arm: | ok | 12 12 | arm64: | ok | 13 13 | c6x: | TODO | 14 + | csky: | ok | 14 15 | h8300: | TODO | 15 16 | hexagon: | ok | 16 17 | ia64: | ok |
+1
Documentation/features/core/jump-labels/arch-support.txt
··· 11 11 | arm: | ok | 12 12 | arm64: | ok | 13 13 | c6x: | TODO | 14 + | csky: | TODO | 14 15 | h8300: | TODO | 15 16 | hexagon: | TODO | 16 17 | ia64: | TODO |
+1
Documentation/features/core/tracehook/arch-support.txt
··· 11 11 | arm: | ok | 12 12 | arm64: | ok | 13 13 | c6x: | ok | 14 + | csky: | ok | 14 15 | h8300: | TODO | 15 16 | hexagon: | ok | 16 17 | ia64: | ok |
+1
Documentation/features/debug/KASAN/arch-support.txt
··· 11 11 | arm: | TODO | 12 12 | arm64: | ok | 13 13 | c6x: | TODO | 14 + | csky: | TODO | 14 15 | h8300: | TODO | 15 16 | hexagon: | TODO | 16 17 | ia64: | TODO |
+1
Documentation/features/debug/gcov-profile-all/arch-support.txt
··· 11 11 | arm: | ok | 12 12 | arm64: | ok | 13 13 | c6x: | TODO | 14 + | csky: | TODO | 14 15 | h8300: | TODO | 15 16 | hexagon: | TODO | 16 17 | ia64: | TODO |
+1
Documentation/features/debug/kgdb/arch-support.txt
··· 11 11 | arm: | ok | 12 12 | arm64: | ok | 13 13 | c6x: | TODO | 14 + | csky: | TODO | 14 15 | h8300: | ok | 15 16 | hexagon: | ok | 16 17 | ia64: | TODO |
+1
Documentation/features/debug/kprobes-on-ftrace/arch-support.txt
··· 11 11 | arm: | TODO | 12 12 | arm64: | TODO | 13 13 | c6x: | TODO | 14 + | csky: | TODO | 14 15 | h8300: | TODO | 15 16 | hexagon: | TODO | 16 17 | ia64: | TODO |
+1
Documentation/features/debug/kprobes/arch-support.txt
··· 11 11 | arm: | ok | 12 12 | arm64: | ok | 13 13 | c6x: | TODO | 14 + | csky: | TODO | 14 15 | h8300: | TODO | 15 16 | hexagon: | TODO | 16 17 | ia64: | ok |
+1
Documentation/features/debug/kretprobes/arch-support.txt
··· 11 11 | arm: | ok | 12 12 | arm64: | ok | 13 13 | c6x: | TODO | 14 + | csky: | TODO | 14 15 | h8300: | TODO | 15 16 | hexagon: | TODO | 16 17 | ia64: | ok |
+1
Documentation/features/debug/optprobes/arch-support.txt
··· 11 11 | arm: | ok | 12 12 | arm64: | TODO | 13 13 | c6x: | TODO | 14 + | csky: | TODO | 14 15 | h8300: | TODO | 15 16 | hexagon: | TODO | 16 17 | ia64: | TODO |
+1
Documentation/features/debug/stackprotector/arch-support.txt
··· 11 11 | arm: | ok | 12 12 | arm64: | ok | 13 13 | c6x: | TODO | 14 + | csky: | TODO | 14 15 | h8300: | TODO | 15 16 | hexagon: | TODO | 16 17 | ia64: | TODO |
+1
Documentation/features/debug/uprobes/arch-support.txt
··· 11 11 | arm: | ok | 12 12 | arm64: | ok | 13 13 | c6x: | TODO | 14 + | csky: | TODO | 14 15 | h8300: | TODO | 15 16 | hexagon: | TODO | 16 17 | ia64: | TODO |
+1
Documentation/features/debug/user-ret-profiler/arch-support.txt
··· 11 11 | arm: | TODO | 12 12 | arm64: | TODO | 13 13 | c6x: | TODO | 14 + | csky: | TODO | 14 15 | h8300: | TODO | 15 16 | hexagon: | TODO | 16 17 | ia64: | TODO |
+1
Documentation/features/io/dma-contiguous/arch-support.txt
··· 11 11 | arm: | ok | 12 12 | arm64: | ok | 13 13 | c6x: | TODO | 14 + | csky: | ok | 14 15 | h8300: | TODO | 15 16 | hexagon: | TODO | 16 17 | ia64: | TODO |
+1
Documentation/features/locking/cmpxchg-local/arch-support.txt
··· 11 11 | arm: | TODO | 12 12 | arm64: | ok | 13 13 | c6x: | TODO | 14 + | csky: | TODO | 14 15 | h8300: | TODO | 15 16 | hexagon: | TODO | 16 17 | ia64: | TODO |
+1
Documentation/features/locking/lockdep/arch-support.txt
··· 11 11 | arm: | ok | 12 12 | arm64: | ok | 13 13 | c6x: | TODO | 14 + | csky: | TODO | 14 15 | h8300: | TODO | 15 16 | hexagon: | ok | 16 17 | ia64: | TODO |
+1
Documentation/features/locking/queued-rwlocks/arch-support.txt
··· 11 11 | arm: | TODO | 12 12 | arm64: | ok | 13 13 | c6x: | TODO | 14 + | csky: | ok | 14 15 | h8300: | TODO | 15 16 | hexagon: | TODO | 16 17 | ia64: | TODO |
+1
Documentation/features/locking/queued-spinlocks/arch-support.txt
··· 11 11 | arm: | TODO | 12 12 | arm64: | TODO | 13 13 | c6x: | TODO | 14 + | csky: | TODO | 14 15 | h8300: | TODO | 15 16 | hexagon: | TODO | 16 17 | ia64: | TODO |
+1
Documentation/features/locking/rwsem-optimized/arch-support.txt
··· 11 11 | arm: | ok | 12 12 | arm64: | ok | 13 13 | c6x: | TODO | 14 + | csky: | TODO | 14 15 | h8300: | TODO | 15 16 | hexagon: | TODO | 16 17 | ia64: | ok |
+1
Documentation/features/perf/kprobes-event/arch-support.txt
··· 11 11 | arm: | ok | 12 12 | arm64: | ok | 13 13 | c6x: | TODO | 14 + | csky: | TODO | 14 15 | h8300: | TODO | 15 16 | hexagon: | ok | 16 17 | ia64: | TODO |
+1
Documentation/features/perf/perf-regs/arch-support.txt
··· 11 11 | arm: | ok | 12 12 | arm64: | ok | 13 13 | c6x: | TODO | 14 + | csky: | TODO | 14 15 | h8300: | TODO | 15 16 | hexagon: | TODO | 16 17 | ia64: | TODO |
+1
Documentation/features/perf/perf-stackdump/arch-support.txt
··· 11 11 | arm: | ok | 12 12 | arm64: | ok | 13 13 | c6x: | TODO | 14 + | csky: | TODO | 14 15 | h8300: | TODO | 15 16 | hexagon: | TODO | 16 17 | ia64: | TODO |
+1
Documentation/features/sched/membarrier-sync-core/arch-support.txt
··· 34 34 | arm: | ok | 35 35 | arm64: | ok | 36 36 | c6x: | TODO | 37 + | csky: | TODO | 37 38 | h8300: | TODO | 38 39 | hexagon: | TODO | 39 40 | ia64: | TODO |
+1
Documentation/features/sched/numa-balancing/arch-support.txt
··· 11 11 | arm: | .. | 12 12 | arm64: | ok | 13 13 | c6x: | .. | 14 + | csky: | .. | 14 15 | h8300: | .. | 15 16 | hexagon: | .. | 16 17 | ia64: | TODO |
+1
Documentation/features/seccomp/seccomp-filter/arch-support.txt
··· 11 11 | arm: | ok | 12 12 | arm64: | ok | 13 13 | c6x: | TODO | 14 + | csky: | TODO | 14 15 | h8300: | TODO | 15 16 | hexagon: | TODO | 16 17 | ia64: | TODO |
+1
Documentation/features/time/arch-tick-broadcast/arch-support.txt
··· 11 11 | arm: | ok | 12 12 | arm64: | ok | 13 13 | c6x: | TODO | 14 + | csky: | TODO | 14 15 | h8300: | TODO | 15 16 | hexagon: | TODO | 16 17 | ia64: | TODO |
+1
Documentation/features/time/clockevents/arch-support.txt
··· 11 11 | arm: | ok | 12 12 | arm64: | ok | 13 13 | c6x: | ok | 14 + | csky: | ok | 14 15 | h8300: | ok | 15 16 | hexagon: | ok | 16 17 | ia64: | TODO |
+1
Documentation/features/time/context-tracking/arch-support.txt
··· 11 11 | arm: | ok | 12 12 | arm64: | ok | 13 13 | c6x: | TODO | 14 + | csky: | TODO | 14 15 | h8300: | TODO | 15 16 | hexagon: | TODO | 16 17 | ia64: | TODO |
+1
Documentation/features/time/irq-time-acct/arch-support.txt
··· 11 11 | arm: | ok | 12 12 | arm64: | ok | 13 13 | c6x: | TODO | 14 + | csky: | TODO | 14 15 | h8300: | TODO | 15 16 | hexagon: | TODO | 16 17 | ia64: | .. |
+1
Documentation/features/time/modern-timekeeping/arch-support.txt
··· 11 11 | arm: | TODO | 12 12 | arm64: | ok | 13 13 | c6x: | ok | 14 + | csky: | ok | 14 15 | h8300: | ok | 15 16 | hexagon: | ok | 16 17 | ia64: | ok |
+1
Documentation/features/time/virt-cpuacct/arch-support.txt
··· 11 11 | arm: | ok | 12 12 | arm64: | ok | 13 13 | c6x: | TODO | 14 + | csky: | TODO | 14 15 | h8300: | TODO | 15 16 | hexagon: | TODO | 16 17 | ia64: | ok |
+1
Documentation/features/vm/ELF-ASLR/arch-support.txt
··· 11 11 | arm: | ok | 12 12 | arm64: | ok | 13 13 | c6x: | TODO | 14 + | csky: | TODO | 14 15 | h8300: | TODO | 15 16 | hexagon: | TODO | 16 17 | ia64: | TODO |
+1
Documentation/features/vm/PG_uncached/arch-support.txt
··· 11 11 | arm: | TODO | 12 12 | arm64: | TODO | 13 13 | c6x: | TODO | 14 + | csky: | TODO | 14 15 | h8300: | TODO | 15 16 | hexagon: | TODO | 16 17 | ia64: | ok |
+1
Documentation/features/vm/THP/arch-support.txt
··· 11 11 | arm: | ok | 12 12 | arm64: | ok | 13 13 | c6x: | .. | 14 + | csky: | .. | 14 15 | h8300: | .. | 15 16 | hexagon: | .. | 16 17 | ia64: | TODO |
+1
Documentation/features/vm/TLB/arch-support.txt
··· 11 11 | arm: | TODO | 12 12 | arm64: | TODO | 13 13 | c6x: | .. | 14 + | csky: | TODO | 14 15 | h8300: | .. | 15 16 | hexagon: | TODO | 16 17 | ia64: | TODO |
+1
Documentation/features/vm/huge-vmap/arch-support.txt
··· 11 11 | arm: | TODO | 12 12 | arm64: | ok | 13 13 | c6x: | TODO | 14 + | csky: | TODO | 14 15 | h8300: | TODO | 15 16 | hexagon: | TODO | 16 17 | ia64: | TODO |
+1
Documentation/features/vm/ioremap_prot/arch-support.txt
··· 11 11 | arm: | TODO | 12 12 | arm64: | TODO | 13 13 | c6x: | TODO | 14 + | csky: | TODO | 14 15 | h8300: | TODO | 15 16 | hexagon: | TODO | 16 17 | ia64: | TODO |
+1
Documentation/features/vm/numa-memblock/arch-support.txt
··· 11 11 | arm: | .. | 12 12 | arm64: | ok | 13 13 | c6x: | .. | 14 + | csky: | .. | 14 15 | h8300: | .. | 15 16 | hexagon: | .. | 16 17 | ia64: | ok |
+1
Documentation/features/vm/pte_special/arch-support.txt
··· 11 11 | arm: | ok | 12 12 | arm64: | ok | 13 13 | c6x: | TODO | 14 + | csky: | TODO | 14 15 | h8300: | TODO | 15 16 | hexagon: | TODO | 16 17 | ia64: | TODO |
+3 -1
Documentation/filesystems/sysfs.txt
··· 344 344 345 345 Declaring: 346 346 347 - BUS_ATTR(_name, _mode, _show, _store) 347 + static BUS_ATTR_RW(name); 348 + static BUS_ATTR_RO(name); 349 + static BUS_ATTR_WO(name); 348 350 349 351 Creation/Removal: 350 352
+1 -1
Documentation/trace/coresight-cpu-debug.txt
··· 165 165 The same can also be done from an application program. 166 166 167 167 Disable specific CPU's specific idle state from cpuidle sysfs (see 168 - Documentation/cpuidle/sysfs.txt): 168 + Documentation/admin-guide/pm/cpuidle.rst): 169 169 # echo 1 > /sys/devices/system/cpu/cpu$cpu/cpuidle/state$state/disable 170 170 171 171
+1 -1
Documentation/virtual/kvm/amd-memory-encryption.rst
··· 242 242 ========== 243 243 244 244 .. [white-paper] http://amd-dev.wpengine.netdna-cdn.com/wordpress/media/2013/12/AMD_Memory_Encryption_Whitepaper_v7-Public.pdf 245 - .. [api-spec] http://support.amd.com/TechDocs/55766_SEV-KM%20API_Specification.pdf 245 + .. [api-spec] http://support.amd.com/TechDocs/55766_SEV-KM_API_Specification.pdf 246 246 .. [amd-apm] http://support.amd.com/TechDocs/24593.pdf (section 15.34) 247 247 .. [kvm-forum] http://www.linux-kvm.org/images/7/74/02x08A-Thomas_Lendacky-AMDs_Virtualizatoin_Memory_Encryption_Technology.pdf
+1 -1
Documentation/x86/resctrl_ui.txt
··· 9 9 Tony Luck <tony.luck@intel.com> 10 10 Vikas Shivappa <vikas.shivappa@intel.com> 11 11 12 - This feature is enabled by the CONFIG_RESCTRL and the X86 /proc/cpuinfo 12 + This feature is enabled by the CONFIG_X86_RESCTRL and the x86 /proc/cpuinfo 13 13 flag bits: 14 14 RDT (Resource Director Technology) Allocation - "rdt_a" 15 15 CAT (Cache Allocation Technology) - "cat_l3", "cat_l2"
+12 -4
MAINTAINERS
··· 3951 3951 S: Maintained 3952 3952 F: drivers/net/ethernet/ti/cpmac.c 3953 3953 3954 - CPU FREQUENCY DRIVERS 3954 + CPU FREQUENCY SCALING FRAMEWORK 3955 3955 M: "Rafael J. Wysocki" <rjw@rjwysocki.net> 3956 3956 M: Viresh Kumar <viresh.kumar@linaro.org> 3957 3957 L: linux-pm@vger.kernel.org ··· 3959 3959 T: git git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm.git 3960 3960 T: git git://git.linaro.org/people/vireshk/linux.git (For ARM Updates) 3961 3961 B: https://bugzilla.kernel.org 3962 + F: Documentation/admin-guide/pm/cpufreq.rst 3963 + F: Documentation/admin-guide/pm/intel_pstate.rst 3962 3964 F: Documentation/cpu-freq/ 3963 3965 F: Documentation/devicetree/bindings/cpufreq/ 3964 3966 F: drivers/cpufreq/ ··· 4008 4006 F: drivers/cpuidle/cpuidle-exynos.c 4009 4007 F: arch/arm/mach-exynos/pm.c 4010 4008 4011 - CPUIDLE DRIVERS 4009 + CPU IDLE TIME MANAGEMENT FRAMEWORK 4012 4010 M: "Rafael J. Wysocki" <rjw@rjwysocki.net> 4013 4011 M: Daniel Lezcano <daniel.lezcano@linaro.org> 4014 4012 L: linux-pm@vger.kernel.org 4015 4013 S: Maintained 4016 4014 T: git git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm.git 4017 4015 B: https://bugzilla.kernel.org 4016 + F: Documentation/admin-guide/pm/cpuidle.rst 4018 4017 F: drivers/cpuidle/* 4019 4018 F: include/linux/cpuidle.h 4020 4019 ··· 13823 13820 13824 13821 SIFIVE DRIVERS 13825 13822 M: Palmer Dabbelt <palmer@sifive.com> 13823 + M: Paul Walmsley <paul.walmsley@sifive.com> 13826 13824 L: linux-riscv@lists.infradead.org 13827 - T: git git://git.kernel.org/pub/scm/linux/kernel/git/palmer/riscv-linux.git 13825 + T: git git://github.com/sifive/riscv-linux.git 13828 13826 S: Supported 13829 13827 K: sifive 13830 13828 N: sifive ··· 14435 14431 M: Florian Schilhabel <florian.c.schilhabel@googlemail.com>. 
14436 14432 S: Odd Fixes 14437 14433 F: drivers/staging/rtl8712/ 14434 + 14435 + STAGING - REALTEK RTL8188EU DRIVERS 14436 + M: Larry Finger <Larry.Finger@lwfinger.net> 14437 + S: Odd Fixes 14438 + F: drivers/staging/rtl8188eu/ 14438 14439 14439 14440 STAGING - SILICON MOTION SM750 FRAME BUFFER DRIVER 14440 14441 M: Sudip Mukherjee <sudipm.mukherjee@gmail.com> ··· 15811 15802 L: linux-usb@vger.kernel.org 15812 15803 L: usb-storage@lists.one-eyed-alien.net 15813 15804 S: Maintained 15814 - W: http://www.one-eyed-alien.net/~mdharm/linux-usb/ 15815 15805 F: drivers/usb/storage/ 15816 15806 15817 15807 USB MIDI DRIVER
+1 -1
Makefile
··· 2 2 VERSION = 5 3 3 PATCHLEVEL = 0 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc1 5 + EXTRAVERSION = -rc2 6 6 NAME = Shy Crocodile 7 7 8 8 # *DOCUMENTATION*
+26 -5
arch/arm/boot/dts/da850-evm.dts
··· 94 94 regulator-boot-on; 95 95 }; 96 96 97 + baseboard_3v3: fixedregulator-3v3 { 98 + /* TPS73701DCQ */ 99 + compatible = "regulator-fixed"; 100 + regulator-name = "baseboard_3v3"; 101 + regulator-min-microvolt = <3300000>; 102 + regulator-max-microvolt = <3300000>; 103 + vin-supply = <&vbat>; 104 + regulator-always-on; 105 + regulator-boot-on; 106 + }; 107 + 108 + baseboard_1v8: fixedregulator-1v8 { 109 + /* TPS73701DCQ */ 110 + compatible = "regulator-fixed"; 111 + regulator-name = "baseboard_1v8"; 112 + regulator-min-microvolt = <1800000>; 113 + regulator-max-microvolt = <1800000>; 114 + vin-supply = <&vbat>; 115 + regulator-always-on; 116 + regulator-boot-on; 117 + }; 118 + 97 119 backlight_lcd: backlight-regulator { 98 120 compatible = "regulator-fixed"; 99 121 regulator-name = "lcd_backlight_pwr"; ··· 127 105 128 106 sound { 129 107 compatible = "simple-audio-card"; 130 - simple-audio-card,name = "DA850/OMAP-L138 EVM"; 108 + simple-audio-card,name = "DA850-OMAPL138 EVM"; 131 109 simple-audio-card,widgets = 132 110 "Line", "Line In", 133 111 "Line", "Line Out"; ··· 232 210 233 211 /* Regulators */ 234 212 IOVDD-supply = <&vdcdc2_reg>; 235 - /* Derived from VBAT: Baseboard 3.3V / 1.8V */ 236 - AVDD-supply = <&vbat>; 237 - DRVDD-supply = <&vbat>; 238 - DVDD-supply = <&vbat>; 213 + AVDD-supply = <&baseboard_3v3>; 214 + DRVDD-supply = <&baseboard_3v3>; 215 + DVDD-supply = <&baseboard_1v8>; 239 216 }; 240 217 tca6416: gpio@20 { 241 218 compatible = "ti,tca6416";
+37 -1
arch/arm/boot/dts/da850-lcdk.dts
··· 39 39 }; 40 40 }; 41 41 42 + vcc_5vd: fixedregulator-vcc_5vd { 43 + compatible = "regulator-fixed"; 44 + regulator-name = "vcc_5vd"; 45 + regulator-min-microvolt = <5000000>; 46 + regulator-max-microvolt = <5000000>; 47 + regulator-boot-on; 48 + }; 49 + 50 + vcc_3v3d: fixedregulator-vcc_3v3d { 51 + /* TPS650250 - VDCDC1 */ 52 + compatible = "regulator-fixed"; 53 + regulator-name = "vcc_3v3d"; 54 + regulator-min-microvolt = <3300000>; 55 + regulator-max-microvolt = <3300000>; 56 + vin-supply = <&vcc_5vd>; 57 + regulator-always-on; 58 + regulator-boot-on; 59 + }; 60 + 61 + vcc_1v8d: fixedregulator-vcc_1v8d { 62 + /* TPS650250 - VDCDC2 */ 63 + compatible = "regulator-fixed"; 64 + regulator-name = "vcc_1v8d"; 65 + regulator-min-microvolt = <1800000>; 66 + regulator-max-microvolt = <1800000>; 67 + vin-supply = <&vcc_5vd>; 68 + regulator-always-on; 69 + regulator-boot-on; 70 + }; 71 + 42 72 sound { 43 73 compatible = "simple-audio-card"; 44 - simple-audio-card,name = "DA850/OMAP-L138 LCDK"; 74 + simple-audio-card,name = "DA850-OMAPL138 LCDK"; 45 75 simple-audio-card,widgets = 46 76 "Line", "Line In", 47 77 "Line", "Line Out"; ··· 251 221 compatible = "ti,tlv320aic3106"; 252 222 reg = <0x18>; 253 223 status = "okay"; 224 + 225 + /* Regulators */ 226 + IOVDD-supply = <&vcc_3v3d>; 227 + AVDD-supply = <&vcc_3v3d>; 228 + DRVDD-supply = <&vcc_3v3d>; 229 + DVDD-supply = <&vcc_1v8d>; 254 230 }; 255 231 }; 256 232
+2 -2
arch/arm/boot/dts/kirkwood-dnskw.dtsi
··· 36 36 compatible = "gpio-fan"; 37 37 pinctrl-0 = <&pmx_fan_high_speed &pmx_fan_low_speed>; 38 38 pinctrl-names = "default"; 39 - gpios = <&gpio1 14 GPIO_ACTIVE_LOW 40 - &gpio1 13 GPIO_ACTIVE_LOW>; 39 + gpios = <&gpio1 14 GPIO_ACTIVE_HIGH 40 + &gpio1 13 GPIO_ACTIVE_HIGH>; 41 41 gpio-fan,speed-map = <0 0 42 42 3000 1 43 43 6000 2>;
+2 -2
arch/arm/mach-davinci/board-da830-evm.c
··· 208 208 .dev_id = "da830-mmc.0", 209 209 .table = { 210 210 /* gpio chip 1 contains gpio range 32-63 */ 211 - GPIO_LOOKUP("davinci_gpio.0", DA830_MMCSD_CD_PIN, "cd", 211 + GPIO_LOOKUP("davinci_gpio", DA830_MMCSD_CD_PIN, "cd", 212 212 GPIO_ACTIVE_LOW), 213 - GPIO_LOOKUP("davinci_gpio.0", DA830_MMCSD_WP_PIN, "wp", 213 + GPIO_LOOKUP("davinci_gpio", DA830_MMCSD_WP_PIN, "wp", 214 214 GPIO_ACTIVE_LOW), 215 215 }, 216 216 };
+2 -2
arch/arm/mach-davinci/board-da850-evm.c
··· 805 805 .dev_id = "da830-mmc.0", 806 806 .table = { 807 807 /* gpio chip 2 contains gpio range 64-95 */ 808 - GPIO_LOOKUP("davinci_gpio.0", DA850_MMCSD_CD_PIN, "cd", 808 + GPIO_LOOKUP("davinci_gpio", DA850_MMCSD_CD_PIN, "cd", 809 809 GPIO_ACTIVE_LOW), 810 - GPIO_LOOKUP("davinci_gpio.0", DA850_MMCSD_WP_PIN, "wp", 810 + GPIO_LOOKUP("davinci_gpio", DA850_MMCSD_WP_PIN, "wp", 811 811 GPIO_ACTIVE_HIGH), 812 812 }, 813 813 };
+2 -2
arch/arm/mach-davinci/board-dm355-evm.c
··· 117 117 static struct gpiod_lookup_table i2c_recovery_gpiod_table = { 118 118 .dev_id = "i2c_davinci.1", 119 119 .table = { 120 - GPIO_LOOKUP("davinci_gpio.0", DM355_I2C_SDA_PIN, "sda", 120 + GPIO_LOOKUP("davinci_gpio", DM355_I2C_SDA_PIN, "sda", 121 121 GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN), 122 - GPIO_LOOKUP("davinci_gpio.0", DM355_I2C_SCL_PIN, "scl", 122 + GPIO_LOOKUP("davinci_gpio", DM355_I2C_SCL_PIN, "scl", 123 123 GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN), 124 124 }, 125 125 };
+2 -2
arch/arm/mach-davinci/board-dm644x-evm.c
··· 660 660 static struct gpiod_lookup_table i2c_recovery_gpiod_table = { 661 661 .dev_id = "i2c_davinci.1", 662 662 .table = { 663 - GPIO_LOOKUP("davinci_gpio.0", DM644X_I2C_SDA_PIN, "sda", 663 + GPIO_LOOKUP("davinci_gpio", DM644X_I2C_SDA_PIN, "sda", 664 664 GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN), 665 - GPIO_LOOKUP("davinci_gpio.0", DM644X_I2C_SCL_PIN, "scl", 665 + GPIO_LOOKUP("davinci_gpio", DM644X_I2C_SCL_PIN, "scl", 666 666 GPIO_ACTIVE_HIGH | GPIO_OPEN_DRAIN), 667 667 }, 668 668 };
+2 -2
arch/arm/mach-davinci/board-omapl138-hawk.c
··· 134 134 static struct gpiod_lookup_table mmc_gpios_table = { 135 135 .dev_id = "da830-mmc.0", 136 136 .table = { 137 - GPIO_LOOKUP("davinci_gpio.0", DA850_HAWK_MMCSD_CD_PIN, "cd", 137 + GPIO_LOOKUP("davinci_gpio", DA850_HAWK_MMCSD_CD_PIN, "cd", 138 138 GPIO_ACTIVE_LOW), 139 - GPIO_LOOKUP("davinci_gpio.0", DA850_HAWK_MMCSD_WP_PIN, "wp", 139 + GPIO_LOOKUP("davinci_gpio", DA850_HAWK_MMCSD_WP_PIN, "wp", 140 140 GPIO_ACTIVE_LOW), 141 141 }, 142 142 };
+6 -2
arch/arm/mach-integrator/impd1.c
··· 390 390 char *mmciname; 391 391 392 392 lookup = devm_kzalloc(&dev->dev, 393 - sizeof(*lookup) + 3 * sizeof(struct gpiod_lookup), 393 + struct_size(lookup, table, 3), 394 394 GFP_KERNEL); 395 395 chipname = devm_kstrdup(&dev->dev, devname, GFP_KERNEL); 396 - mmciname = kasprintf(GFP_KERNEL, "lm%x:00700", dev->id); 396 + mmciname = devm_kasprintf(&dev->dev, GFP_KERNEL, 397 + "lm%x:00700", dev->id); 398 + if (!lookup || !chipname || !mmciname) 399 + return -ENOMEM; 400 + 397 401 lookup->dev_id = mmciname; 398 402 /* 399 403 * Offsets on GPIO block 1:
+4
arch/arm/mach-socfpga/socfpga.c
··· 32 32 void __iomem *sdr_ctl_base_addr; 33 33 unsigned long socfpga_cpu1start_addr; 34 34 35 + extern void __init socfpga_reset_init(void); 36 + 35 37 static void __init socfpga_sysmgr_init(void) 36 38 { 37 39 struct device_node *np; ··· 66 64 67 65 if (IS_ENABLED(CONFIG_EDAC_ALTERA_OCRAM)) 68 66 socfpga_init_ocram_ecc(); 67 + socfpga_reset_init(); 69 68 } 70 69 71 70 static void __init socfpga_arria10_init_irq(void) ··· 77 74 socfpga_init_arria10_l2_ecc(); 78 75 if (IS_ENABLED(CONFIG_EDAC_ALTERA_OCRAM)) 79 76 socfpga_init_arria10_ocram_ecc(); 77 + socfpga_reset_init(); 80 78 } 81 79 82 80 static void socfpga_cyclone5_restart(enum reboot_mode mode, const char *cmd)
+1 -1
arch/arm64/boot/dts/marvell/armada-8040-mcbin.dtsi
··· 183 183 pinctrl-0 = <&cp0_pcie_pins>; 184 184 num-lanes = <4>; 185 185 num-viewport = <8>; 186 - reset-gpio = <&cp0_gpio1 20 GPIO_ACTIVE_LOW>; 186 + reset-gpios = <&cp0_gpio2 20 GPIO_ACTIVE_LOW>; 187 187 status = "okay"; 188 188 }; 189 189
+17
arch/arm64/boot/dts/marvell/armada-ap806.dtsi
··· 28 28 method = "smc"; 29 29 }; 30 30 31 + reserved-memory { 32 + #address-cells = <2>; 33 + #size-cells = <2>; 34 + ranges; 35 + 36 + /* 37 + * This area matches the mapping done with a 38 + * mainline U-Boot, and should be updated by the 39 + * bootloader. 40 + */ 41 + 42 + psci-area@4000000 { 43 + reg = <0x0 0x4000000 0x0 0x200000>; 44 + no-map; 45 + }; 46 + }; 47 + 31 48 ap806 { 32 49 #address-cells = <2>; 33 50 #size-cells = <2>;
+4
arch/arm64/configs/defconfig
··· 506 506 CONFIG_SND_SOC_ROCKCHIP_SPDIF=m 507 507 CONFIG_SND_SOC_ROCKCHIP_RT5645=m 508 508 CONFIG_SND_SOC_RK3399_GRU_SOUND=m 509 + CONFIG_SND_MESON_AXG_SOUND_CARD=m 509 510 CONFIG_SND_SOC_SAMSUNG=y 510 511 CONFIG_SND_SOC_RCAR=m 511 512 CONFIG_SND_SOC_AK4613=m 512 513 CONFIG_SND_SIMPLE_CARD=m 513 514 CONFIG_SND_AUDIO_GRAPH_CARD=m 515 + CONFIG_SND_SOC_ES7134=m 516 + CONFIG_SND_SOC_ES7241=m 517 + CONFIG_SND_SOC_TAS571X=m 514 518 CONFIG_I2C_HID=m 515 519 CONFIG_USB=y 516 520 CONFIG_USB_OTG=y
+1 -1
arch/arm64/include/asm/asm-prototypes.h
··· 2 2 #ifndef __ASM_PROTOTYPES_H 3 3 #define __ASM_PROTOTYPES_H 4 4 /* 5 - * CONFIG_MODEVERIONS requires a C declaration to generate the appropriate CRC 5 + * CONFIG_MODVERSIONS requires a C declaration to generate the appropriate CRC 6 6 * for each symbol. Since commit: 7 7 * 8 8 * 4efca4ed05cbdfd1 ("kbuild: modversions for EXPORT_SYMBOL() for asm")
+6
arch/arm64/include/asm/cache.h
··· 58 58 */ 59 59 #define ARCH_DMA_MINALIGN (128) 60 60 61 + #ifdef CONFIG_KASAN_SW_TAGS 62 + #define ARCH_SLAB_MINALIGN (1ULL << KASAN_SHADOW_SCALE_SHIFT) 63 + #else 64 + #define ARCH_SLAB_MINALIGN __alignof__(unsigned long long) 65 + #endif 66 + 61 67 #ifndef __ASSEMBLY__ 62 68 63 69 #include <linux/bitops.h>
+41
arch/arm64/include/asm/mmu.h
··· 16 16 #ifndef __ASM_MMU_H 17 17 #define __ASM_MMU_H 18 18 19 + #include <asm/cputype.h> 20 + 19 21 #define MMCF_AARCH32 0x1 /* mm context flag for AArch32 executables */ 20 22 #define USER_ASID_BIT 48 21 23 #define USER_ASID_FLAG (UL(1) << USER_ASID_BIT) ··· 44 42 { 45 43 return IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0) && 46 44 cpus_have_const_cap(ARM64_UNMAP_KERNEL_AT_EL0); 45 + } 46 + 47 + static inline bool arm64_kernel_use_ng_mappings(void) 48 + { 49 + bool tx1_bug; 50 + 51 + /* What's a kpti? Use global mappings if we don't know. */ 52 + if (!IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0)) 53 + return false; 54 + 55 + /* 56 + * Note: this function is called before the CPU capabilities have 57 + * been configured, so our early mappings will be global. If we 58 + * later determine that kpti is required, then 59 + * kpti_install_ng_mappings() will make them non-global. 60 + */ 61 + if (!IS_ENABLED(CONFIG_RANDOMIZE_BASE)) 62 + return arm64_kernel_unmapped_at_el0(); 63 + 64 + /* 65 + * KASLR is enabled so we're going to be enabling kpti on non-broken 66 + * CPUs regardless of their susceptibility to Meltdown. Rather 67 + * than force everybody to go through the G -> nG dance later on, 68 + * just put down non-global mappings from the beginning. 69 + */ 70 + if (!IS_ENABLED(CONFIG_CAVIUM_ERRATUM_27456)) { 71 + tx1_bug = false; 72 + #ifndef MODULE 73 + } else if (!static_branch_likely(&arm64_const_caps_ready)) { 74 + extern const struct midr_range cavium_erratum_27456_cpus[]; 75 + 76 + tx1_bug = is_midr_in_range_list(read_cpuid_id(), 77 + cavium_erratum_27456_cpus); 78 + #endif 79 + } else { 80 + tx1_bug = __cpus_have_const_cap(ARM64_WORKAROUND_CAVIUM_27456); 81 + } 82 + 83 + return !tx1_bug && kaslr_offset() > 0; 47 84 } 48 85 49 86 typedef void (*bp_hardening_cb_t)(void);
+2 -2
arch/arm64/include/asm/pgtable-prot.h
··· 37 37 #define _PROT_DEFAULT (PTE_TYPE_PAGE | PTE_AF | PTE_SHARED) 38 38 #define _PROT_SECT_DEFAULT (PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S) 39 39 40 - #define PTE_MAYBE_NG (arm64_kernel_unmapped_at_el0() ? PTE_NG : 0) 41 - #define PMD_MAYBE_NG (arm64_kernel_unmapped_at_el0() ? PMD_SECT_NG : 0) 40 + #define PTE_MAYBE_NG (arm64_kernel_use_ng_mappings() ? PTE_NG : 0) 41 + #define PMD_MAYBE_NG (arm64_kernel_use_ng_mappings() ? PMD_SECT_NG : 0) 42 42 43 43 #define PROT_DEFAULT (_PROT_DEFAULT | PTE_MAYBE_NG) 44 44 #define PROT_SECT_DEFAULT (_PROT_SECT_DEFAULT | PMD_MAYBE_NG)
+1 -1
arch/arm64/kernel/cpu_errata.c
··· 553 553 #endif 554 554 555 555 #ifdef CONFIG_CAVIUM_ERRATUM_27456 556 - static const struct midr_range cavium_erratum_27456_cpus[] = { 556 + const struct midr_range cavium_erratum_27456_cpus[] = { 557 557 /* Cavium ThunderX, T88 pass 1.x - 2.1 */ 558 558 MIDR_RANGE(MIDR_THUNDERX, 0, 0, 1, 1), 559 559 /* Cavium ThunderX, T81 pass 1.0 */
+7 -2
arch/arm64/kernel/cpufeature.c
··· 983 983 984 984 /* Useful for KASLR robustness */ 985 985 if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) 986 - return true; 986 + return kaslr_offset() > 0; 987 987 988 988 /* Don't force KPTI for CPUs that are not vulnerable */ 989 989 if (is_midr_in_range_list(read_cpuid_id(), kpti_safe_list)) ··· 1003 1003 static bool kpti_applied = false; 1004 1004 int cpu = smp_processor_id(); 1005 1005 1006 - if (kpti_applied) 1006 + /* 1007 + * We don't need to rewrite the page-tables if either we've done 1008 + * it already or we have KASLR enabled and therefore have not 1009 + * created any global mappings at all. 1010 + */ 1011 + if (kpti_applied || kaslr_offset() > 0) 1007 1012 return; 1008 1013 1009 1014 remap_fn = (void *)__pa_symbol(idmap_kpti_install_ng_mappings);
+1
arch/arm64/kernel/head.S
··· 475 475 476 476 ENTRY(kimage_vaddr) 477 477 .quad _text - TEXT_OFFSET 478 + EXPORT_SYMBOL(kimage_vaddr) 478 479 479 480 /* 480 481 * If we're fortunate enough to boot at EL2, ensure that the world is
+3 -1
arch/arm64/kernel/machine_kexec_file.c
··· 87 87 88 88 /* add kaslr-seed */ 89 89 ret = fdt_delprop(dtb, off, FDT_PROP_KASLR_SEED); 90 - if (ret && (ret != -FDT_ERR_NOTFOUND)) 90 + if (ret == -FDT_ERR_NOTFOUND) 91 + ret = 0; 92 + else if (ret) 91 93 goto out; 92 94 93 95 if (rng_is_initialized()) {
+1
arch/c6x/include/uapi/asm/Kbuild
··· 1 1 include include/uapi/asm-generic/Kbuild.asm 2 2 3 3 generic-y += kvm_para.h 4 + generic-y += shmparam.h 4 5 generic-y += ucontext.h
+25
arch/csky/include/asm/io.h
··· 15 15 extern int remap_area_pages(unsigned long address, phys_addr_t phys_addr, 16 16 size_t size, unsigned long flags); 17 17 18 + /* 19 + * I/O memory access primitives. Reads are ordered relative to any 20 + * following Normal memory access. Writes are ordered relative to any prior 21 + * Normal memory access. 22 + * 23 + * For CACHEV1 (807, 810), store instruction could fast retire, so we need 24 + * another mb() to prevent st fast retire. 25 + * 26 + * For CACHEV2 (860), store instruction with PAGE_ATTR_NO_BUFFERABLE won't 27 + * fast retire. 28 + */ 29 + #define readb(c) ({ u8 __v = readb_relaxed(c); rmb(); __v; }) 30 + #define readw(c) ({ u16 __v = readw_relaxed(c); rmb(); __v; }) 31 + #define readl(c) ({ u32 __v = readl_relaxed(c); rmb(); __v; }) 32 + 33 + #ifdef CONFIG_CPU_HAS_CACHEV2 34 + #define writeb(v,c) ({ wmb(); writeb_relaxed((v),(c)); }) 35 + #define writew(v,c) ({ wmb(); writew_relaxed((v),(c)); }) 36 + #define writel(v,c) ({ wmb(); writel_relaxed((v),(c)); }) 37 + #else 38 + #define writeb(v,c) ({ wmb(); writeb_relaxed((v),(c)); mb(); }) 39 + #define writew(v,c) ({ wmb(); writew_relaxed((v),(c)); mb(); }) 40 + #define writel(v,c) ({ wmb(); writel_relaxed((v),(c)); mb(); }) 41 + #endif 42 + 18 43 #define ioremap_nocache(phy, sz) ioremap(phy, sz) 19 44 #define ioremap_wc ioremap_nocache 20 45 #define ioremap_wt ioremap_nocache
+17 -24
arch/csky/include/asm/pgalloc.h
··· 24 24 25 25 extern void pgd_init(unsigned long *p); 26 26 27 - static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm, 28 - unsigned long address) 27 + static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm) 29 28 { 30 29 pte_t *pte; 31 - unsigned long *kaddr, i; 30 + unsigned long i; 32 31 33 - pte = (pte_t *) __get_free_pages(GFP_KERNEL | __GFP_RETRY_MAYFAIL, 34 - PTE_ORDER); 35 - kaddr = (unsigned long *)pte; 36 - if (address & 0x80000000) 37 - for (i = 0; i < (PAGE_SIZE/4); i++) 38 - *(kaddr + i) = 0x1; 39 - else 40 - clear_page(kaddr); 32 + pte = (pte_t *) __get_free_page(GFP_KERNEL); 33 + if (!pte) 34 + return NULL; 35 + 36 + for (i = 0; i < PAGE_SIZE/sizeof(pte_t); i++) 37 + (pte + i)->pte_low = _PAGE_GLOBAL; 41 38 42 39 return pte; 43 40 } 44 41 45 - static inline struct page *pte_alloc_one(struct mm_struct *mm, 46 - unsigned long address) 42 + static inline struct page *pte_alloc_one(struct mm_struct *mm) 47 43 { 48 44 struct page *pte; 49 - unsigned long *kaddr, i; 50 45 51 - pte = alloc_pages(GFP_KERNEL | __GFP_RETRY_MAYFAIL, PTE_ORDER); 52 - if (pte) { 53 - kaddr = kmap_atomic(pte); 54 - if (address & 0x80000000) { 55 - for (i = 0; i < (PAGE_SIZE/4); i++) 56 - *(kaddr + i) = 0x1; 57 - } else 58 - clear_page(kaddr); 59 - kunmap_atomic(kaddr); 60 - pgtable_page_ctor(pte); 46 + pte = alloc_pages(GFP_KERNEL | __GFP_ZERO, 0); 47 + if (!pte) 48 + return NULL; 49 + 50 + if (!pgtable_page_ctor(pte)) { 51 + __free_page(pte); 52 + return NULL; 61 53 } 54 + 62 55 return pte; 63 56 } 64 57
+22 -16
arch/csky/kernel/module.c
··· 12 12 #include <linux/spinlock.h> 13 13 #include <asm/pgtable.h> 14 14 15 - #if defined(__CSKYABIV2__) 15 + #ifdef CONFIG_CPU_CK810 16 16 #define IS_BSR32(hi16, lo16) (((hi16) & 0xFC00) == 0xE000) 17 17 #define IS_JSRI32(hi16, lo16) ((hi16) == 0xEAE0) 18 18 ··· 25 25 *(uint16_t *)(addr) = 0xE8Fa; \ 26 26 *((uint16_t *)(addr) + 1) = 0x0000; \ 27 27 } while (0) 28 + 29 + static void jsri_2_lrw_jsr(uint32_t *location) 30 + { 31 + uint16_t *location_tmp = (uint16_t *)location; 32 + 33 + if (IS_BSR32(*location_tmp, *(location_tmp + 1))) 34 + return; 35 + 36 + if (IS_JSRI32(*location_tmp, *(location_tmp + 1))) { 37 + /* jsri 0x... --> lrw r26, 0x... */ 38 + CHANGE_JSRI_TO_LRW(location); 39 + /* lsli r0, r0 --> jsr r26 */ 40 + SET_JSR32_R26(location + 1); 41 + } 42 + } 43 + #else 44 + static void inline jsri_2_lrw_jsr(uint32_t *location) 45 + { 46 + return; 47 + } 28 48 #endif 29 49 30 50 int apply_relocate_add(Elf32_Shdr *sechdrs, const char *strtab, ··· 55 35 Elf32_Sym *sym; 56 36 uint32_t *location; 57 37 short *temp; 58 - #if defined(__CSKYABIV2__) 59 - uint16_t *location_tmp; 60 - #endif 61 38 62 39 for (i = 0; i < sechdrs[relsec].sh_size / sizeof(*rel); i++) { 63 40 /* This is where to make the change */ ··· 76 59 case R_CSKY_PCRELJSR_IMM11BY2: 77 60 break; 78 61 case R_CSKY_PCRELJSR_IMM26BY2: 79 - #if defined(__CSKYABIV2__) 80 - location_tmp = (uint16_t *)location; 81 - if (IS_BSR32(*location_tmp, *(location_tmp + 1))) 82 - break; 83 - 84 - if (IS_JSRI32(*location_tmp, *(location_tmp + 1))) { 85 - /* jsri 0x... --> lrw r26, 0x... */ 86 - CHANGE_JSRI_TO_LRW(location); 87 - /* lsli r0, r0 --> jsr r26 */ 88 - SET_JSR32_R26(location + 1); 89 - } 90 - #endif 62 + jsri_2_lrw_jsr(location); 91 63 break; 92 64 case R_CSKY_ADDR_HI16: 93 65 temp = ((short *)location) + 1;
+1
arch/h8300/include/uapi/asm/Kbuild
··· 1 1 include include/uapi/asm-generic/Kbuild.asm 2 2 3 3 generic-y += kvm_para.h 4 + generic-y += shmparam.h 4 5 generic-y += ucontext.h
+1
arch/hexagon/include/uapi/asm/Kbuild
··· 1 1 include include/uapi/asm-generic/Kbuild.asm 2 2 3 + generic-y += shmparam.h 3 4 generic-y += ucontext.h
+1
arch/m68k/include/uapi/asm/Kbuild
··· 2 2 3 3 generated-y += unistd_32.h 4 4 generic-y += kvm_para.h 5 + generic-y += shmparam.h
+1
arch/microblaze/include/uapi/asm/Kbuild
··· 2 2 3 3 generated-y += unistd_32.h 4 4 generic-y += kvm_para.h 5 + generic-y += shmparam.h 5 6 generic-y += ucontext.h
+3 -3
arch/mips/lantiq/xway/dma.c
··· 129 129 unsigned long flags; 130 130 131 131 ch->desc = 0; 132 - ch->desc_base = dma_zalloc_coherent(ch->dev, 133 - LTQ_DESC_NUM * LTQ_DESC_SIZE, 134 - &ch->phys, GFP_ATOMIC); 132 + ch->desc_base = dma_alloc_coherent(ch->dev, 133 + LTQ_DESC_NUM * LTQ_DESC_SIZE, 134 + &ch->phys, GFP_ATOMIC); 135 135 136 136 spin_lock_irqsave(&ltq_dma_lock, flags); 137 137 ltq_dma_w32(ch->nr, LTQ_DMA_CS);
+6 -2
arch/openrisc/include/asm/uaccess.h
··· 58 58 /* Ensure that addr is below task's addr_limit */ 59 59 #define __addr_ok(addr) ((unsigned long) addr < get_fs()) 60 60 61 - #define access_ok(addr, size) \ 62 - __range_ok((unsigned long)addr, (unsigned long)size) 61 + #define access_ok(addr, size) \ 62 + ({ \ 63 + unsigned long __ao_addr = (unsigned long)(addr); \ 64 + unsigned long __ao_size = (unsigned long)(size); \ 65 + __range_ok(__ao_addr, __ao_size); \ 66 + }) 63 67 64 68 /* 65 69 * These are the main single-value transfer routines. They automatically
+1
arch/openrisc/include/uapi/asm/Kbuild
··· 1 1 include include/uapi/asm-generic/Kbuild.asm 2 2 3 3 generic-y += kvm_para.h 4 + generic-y += shmparam.h 4 5 generic-y += ucontext.h
+1 -1
arch/powerpc/platforms/pasemi/dma_lib.c
··· 255 255 256 256 chan->ring_size = ring_size; 257 257 258 - chan->ring_virt = dma_zalloc_coherent(&dma_pdev->dev, 258 + chan->ring_virt = dma_alloc_coherent(&dma_pdev->dev, 259 259 ring_size * sizeof(u64), 260 260 &chan->ring_dma, GFP_KERNEL); 261 261
+4 -3
arch/powerpc/sysdev/fsl_rmu.c
··· 756 756 } 757 757 758 758 /* Initialize outbound message descriptor ring */ 759 - rmu->msg_tx_ring.virt = dma_zalloc_coherent(priv->dev, 760 - rmu->msg_tx_ring.size * RIO_MSG_DESC_SIZE, 761 - &rmu->msg_tx_ring.phys, GFP_KERNEL); 759 + rmu->msg_tx_ring.virt = dma_alloc_coherent(priv->dev, 760 + rmu->msg_tx_ring.size * RIO_MSG_DESC_SIZE, 761 + &rmu->msg_tx_ring.phys, 762 + GFP_KERNEL); 762 763 if (!rmu->msg_tx_ring.virt) { 763 764 rc = -ENOMEM; 764 765 goto out_dma;
+3 -1
arch/riscv/Kconfig
··· 28 28 select GENERIC_STRNLEN_USER 29 29 select GENERIC_SMP_IDLE_THREAD 30 30 select GENERIC_ATOMIC64 if !64BIT || !RISCV_ISA_A 31 + select HAVE_ARCH_AUDITSYSCALL 31 32 select HAVE_MEMBLOCK_NODE_MAP 32 33 select HAVE_DMA_CONTIGUOUS 33 34 select HAVE_FUTEX_CMPXCHG if FUTEX 34 35 select HAVE_GENERIC_DMA_COHERENT 35 36 select HAVE_PERF_EVENTS 37 + select HAVE_SYSCALL_TRACEPOINTS 36 38 select IRQ_DOMAIN 37 39 select RISCV_ISA_A if SMP 38 40 select SPARSE_IRQ ··· 42 40 select HAVE_ARCH_TRACEHOOK 43 41 select HAVE_PCI 44 42 select MODULES_USE_ELF_RELA if MODULES 43 + select MODULE_SECTIONS if MODULES 45 44 select THREAD_INFO_IN_TASK 46 45 select PCI_DOMAINS_GENERIC if PCI 47 46 select PCI_MSI if PCI ··· 155 152 bool "2GiB" 156 153 config MAXPHYSMEM_128GB 157 154 depends on 64BIT && CMODEL_MEDANY 158 - select MODULE_SECTIONS if MODULES 159 155 bool "128GiB" 160 156 endchoice 161 157
+15 -13
arch/riscv/include/asm/module.h
··· 9 9 #define MODULE_ARCH_VERMAGIC "riscv" 10 10 11 11 struct module; 12 - u64 module_emit_got_entry(struct module *mod, u64 val); 13 - u64 module_emit_plt_entry(struct module *mod, u64 val); 12 + unsigned long module_emit_got_entry(struct module *mod, unsigned long val); 13 + unsigned long module_emit_plt_entry(struct module *mod, unsigned long val); 14 14 15 15 #ifdef CONFIG_MODULE_SECTIONS 16 16 struct mod_section { 17 - struct elf64_shdr *shdr; 17 + Elf_Shdr *shdr; 18 18 int num_entries; 19 19 int max_entries; 20 20 }; ··· 26 26 }; 27 27 28 28 struct got_entry { 29 - u64 symbol_addr; /* the real variable address */ 29 + unsigned long symbol_addr; /* the real variable address */ 30 30 }; 31 31 32 - static inline struct got_entry emit_got_entry(u64 val) 32 + static inline struct got_entry emit_got_entry(unsigned long val) 33 33 { 34 34 return (struct got_entry) {val}; 35 35 } 36 36 37 - static inline struct got_entry *get_got_entry(u64 val, 37 + static inline struct got_entry *get_got_entry(unsigned long val, 38 38 const struct mod_section *sec) 39 39 { 40 - struct got_entry *got = (struct got_entry *)sec->shdr->sh_addr; 40 + struct got_entry *got = (struct got_entry *)(sec->shdr->sh_addr); 41 41 int i; 42 42 for (i = 0; i < sec->num_entries; i++) { 43 43 if (got[i].symbol_addr == val) ··· 62 62 #define REG_T0 0x5 63 63 #define REG_T1 0x6 64 64 65 - static inline struct plt_entry emit_plt_entry(u64 val, u64 plt, u64 got_plt) 65 + static inline struct plt_entry emit_plt_entry(unsigned long val, 66 + unsigned long plt, 67 + unsigned long got_plt) 66 68 { 67 69 /* 68 70 * U-Type encoding: ··· 78 76 * +------------+------------+--------+----------+----------+ 79 77 * 80 78 */ 81 - u64 offset = got_plt - plt; 79 + unsigned long offset = got_plt - plt; 82 80 u32 hi20 = (offset + 0x800) & 0xfffff000; 83 81 u32 lo12 = (offset - hi20); 84 82 return (struct plt_entry) { ··· 88 86 }; 89 87 90 88 - static inline int get_got_plt_idx(u64 val, const struct mod_section *sec) 89 + static inline int get_got_plt_idx(unsigned long val, const struct mod_section *sec) 92 90 { 93 91 struct got_entry *got_plt = (struct got_entry *)sec->shdr->sh_addr; 94 92 int i; ··· 99 97 return -1; 100 98 } 101 99 102 - static inline struct plt_entry *get_plt_entry(u64 val, 103 - const struct mod_section *sec_plt, 104 - const struct mod_section *sec_got_plt) 100 + static inline struct plt_entry *get_plt_entry(unsigned long val, 101 + const struct mod_section *sec_plt, 102 + const struct mod_section *sec_got_plt) 105 103 { 106 104 struct plt_entry *plt = (struct plt_entry *)sec_plt->shdr->sh_addr; 107 105 int got_plt_idx = get_got_plt_idx(val, sec_got_plt);
+5
arch/riscv/include/asm/ptrace.h
··· 113 113 SET_FP(regs, val); 114 114 } 115 115 116 + static inline unsigned long regs_return_value(struct pt_regs *regs) 117 + { 118 + return regs->a0; 119 + } 120 + 116 121 #endif /* __ASSEMBLY__ */ 117 122 118 123 #endif /* _ASM_RISCV_PTRACE_H */
+10
arch/riscv/include/asm/syscall.h
··· 18 18 #ifndef _ASM_RISCV_SYSCALL_H 19 19 #define _ASM_RISCV_SYSCALL_H 20 20 21 + #include <uapi/linux/audit.h> 21 22 #include <linux/sched.h> 22 23 #include <linux/err.h> 23 24 ··· 98 97 n--; 99 98 } 100 99 memcpy(&regs->a1 + i * sizeof(regs->a1), args, n * sizeof(regs->a0)); 100 + } 101 + 102 + static inline int syscall_get_arch(void) 103 + { 104 + #ifdef CONFIG_64BIT 105 + return AUDIT_ARCH_RISCV64; 106 + #else 107 + return AUDIT_ARCH_RISCV32; 108 + #endif 101 109 } 102 110 103 111 #endif /* _ASM_RISCV_SYSCALL_H */
+6
arch/riscv/include/asm/thread_info.h
··· 80 80 #define TIF_RESTORE_SIGMASK 4 /* restore signal mask in do_signal() */ 81 81 #define TIF_MEMDIE 5 /* is terminating due to OOM killer */ 82 82 #define TIF_SYSCALL_TRACEPOINT 6 /* syscall tracepoint instrumentation */ 83 + #define TIF_SYSCALL_AUDIT 7 /* syscall auditing */ 83 84 84 85 #define _TIF_SYSCALL_TRACE (1 << TIF_SYSCALL_TRACE) 85 86 #define _TIF_NOTIFY_RESUME (1 << TIF_NOTIFY_RESUME) 86 87 #define _TIF_SIGPENDING (1 << TIF_SIGPENDING) 87 88 #define _TIF_NEED_RESCHED (1 << TIF_NEED_RESCHED) 89 + #define _TIF_SYSCALL_TRACEPOINT (1 << TIF_SYSCALL_TRACEPOINT) 90 + #define _TIF_SYSCALL_AUDIT (1 << TIF_SYSCALL_AUDIT) 88 91 89 92 #define _TIF_WORK_MASK \ 90 93 (_TIF_NOTIFY_RESUME | _TIF_SIGPENDING | _TIF_NEED_RESCHED) 94 + 95 + #define _TIF_SYSCALL_WORK \ 96 + (_TIF_SYSCALL_TRACE | _TIF_SYSCALL_TRACEPOINT | _TIF_SYSCALL_AUDIT) 91 97 92 98 #endif /* _ASM_RISCV_THREAD_INFO_H */
+2
arch/riscv/include/asm/unistd.h
··· 19 19 #define __ARCH_WANT_SYS_CLONE 20 20 21 21 #include <uapi/asm/unistd.h> 22 + 23 + #define NR_syscalls (__NR_syscalls)
+2 -2
arch/riscv/kernel/entry.S
··· 201 201 REG_S s2, PT_SEPC(sp) 202 202 /* Trace syscalls, but only if requested by the user. */ 203 203 REG_L t0, TASK_TI_FLAGS(tp) 204 - andi t0, t0, _TIF_SYSCALL_TRACE 204 + andi t0, t0, _TIF_SYSCALL_WORK 205 205 bnez t0, handle_syscall_trace_enter 206 206 check_syscall_nr: 207 207 /* Check to make sure we don't jump to a bogus syscall number. */ ··· 221 221 REG_S a0, PT_A0(sp) 222 222 /* Trace syscalls, but only if requested by the user. */ 223 223 REG_L t0, TASK_TI_FLAGS(tp) 224 - andi t0, t0, _TIF_SYSCALL_TRACE 224 + andi t0, t0, _TIF_SYSCALL_WORK 225 225 bnez t0, handle_syscall_trace_exit 226 226 227 227 ret_from_exception:
+16 -14
arch/riscv/kernel/module-sections.c
··· 9 9 #include <linux/kernel.h> 10 10 #include <linux/module.h> 11 11 12 - u64 module_emit_got_entry(struct module *mod, u64 val) 12 + unsigned long module_emit_got_entry(struct module *mod, unsigned long val) 13 13 { 14 14 struct mod_section *got_sec = &mod->arch.got; 15 15 int i = got_sec->num_entries; 16 16 struct got_entry *got = get_got_entry(val, got_sec); 17 17 18 18 if (got) 19 - return (u64)got; 19 + return (unsigned long)got; 20 20 21 21 /* There is no duplicate entry, create a new one */ 22 22 got = (struct got_entry *)got_sec->shdr->sh_addr; ··· 25 25 got_sec->num_entries++; 26 26 BUG_ON(got_sec->num_entries > got_sec->max_entries); 27 27 28 - return (u64)&got[i]; 28 + return (unsigned long)&got[i]; 29 29 } 30 30 31 - u64 module_emit_plt_entry(struct module *mod, u64 val) 31 + unsigned long module_emit_plt_entry(struct module *mod, unsigned long val) 32 32 { 33 33 struct mod_section *got_plt_sec = &mod->arch.got_plt; 34 34 struct got_entry *got_plt; ··· 37 37 int i = plt_sec->num_entries; 38 38 39 39 if (plt) 40 - return (u64)plt; 40 + return (unsigned long)plt; 41 41 42 42 /* There is no duplicate entry, create a new one */ 43 43 got_plt = (struct got_entry *)got_plt_sec->shdr->sh_addr; 44 44 got_plt[i] = emit_got_entry(val); 45 45 plt = (struct plt_entry *)plt_sec->shdr->sh_addr; 46 - plt[i] = emit_plt_entry(val, (u64)&plt[i], (u64)&got_plt[i]); 46 + plt[i] = emit_plt_entry(val, 47 + (unsigned long)&plt[i], 48 + (unsigned long)&got_plt[i]); 47 49 48 50 plt_sec->num_entries++; 49 51 got_plt_sec->num_entries++; 50 52 BUG_ON(plt_sec->num_entries > plt_sec->max_entries); 51 53 52 54 return (unsigned long)&plt[i]; 53 55 } 54 56 55 - static int is_rela_equal(const Elf64_Rela *x, const Elf64_Rela *y) 57 + static int is_rela_equal(const Elf_Rela *x, const Elf_Rela *y) 56 58 { 57 59 return x->r_info == y->r_info && x->r_addend == y->r_addend; 58 60 } 59 61 60 - static bool duplicate_rela(const Elf64_Rela *rela, int idx) 62 + static bool duplicate_rela(const Elf_Rela *rela, int idx) 61 63 { 62 64 int i; 63 65 for (i = 0; i < idx; i++) { ··· 69 67 return false; 70 68 } 71 69 72 - static void count_max_entries(Elf64_Rela *relas, int num, 70 + static void count_max_entries(Elf_Rela *relas, int num, 73 71 unsigned int *plts, unsigned int *gots) 74 72 { 75 73 unsigned int type, i; 76 74 77 75 for (i = 0; i < num; i++) { 78 - type = ELF64_R_TYPE(relas[i].r_info); 76 + type = ELF_RISCV_R_TYPE(relas[i].r_info); 79 77 if (type == R_RISCV_CALL_PLT) { 80 78 if (!duplicate_rela(relas, i)) 81 79 (*plts)++; ··· 120 118 121 119 /* Calculate the maxinum number of entries */ 122 120 for (i = 0; i < ehdr->e_shnum; i++) { 123 - Elf64_Rela *relas = (void *)ehdr + sechdrs[i].sh_offset; 124 - int num_rela = sechdrs[i].sh_size / sizeof(Elf64_Rela); 125 - Elf64_Shdr *dst_sec = sechdrs + sechdrs[i].sh_info; 121 + Elf_Rela *relas = (void *)ehdr + sechdrs[i].sh_offset; 122 + int num_rela = sechdrs[i].sh_size / sizeof(Elf_Rela); 123 + Elf_Shdr *dst_sec = sechdrs + sechdrs[i].sh_info; 126 124 127 125 if (sechdrs[i].sh_type != SHT_RELA) 128 126 continue;
+8 -1
arch/riscv/kernel/ptrace.c
··· 18 18 #include <asm/ptrace.h> 19 19 #include <asm/syscall.h> 20 20 #include <asm/thread_info.h> 21 + #include <linux/audit.h> 21 22 #include <linux/ptrace.h> 22 23 #include <linux/elf.h> 23 24 #include <linux/regset.h> 24 25 #include <linux/sched.h> 25 26 #include <linux/sched/task_stack.h> 26 27 #include <linux/tracehook.h> 28 + 29 + #define CREATE_TRACE_POINTS 27 30 #include <trace/events/syscalls.h> 28 31 29 32 enum riscv_regset { ··· 166 163 if (test_thread_flag(TIF_SYSCALL_TRACEPOINT)) 167 164 trace_sys_enter(regs, syscall_get_nr(current, regs)); 168 165 #endif 166 + 167 + audit_syscall_entry(regs->a7, regs->a0, regs->a1, regs->a2, regs->a3); 169 168 } 170 169 171 170 void do_syscall_trace_exit(struct pt_regs *regs) 172 171 { 172 + audit_syscall_exit(regs); 173 + 173 174 if (test_thread_flag(TIF_SYSCALL_TRACE)) 174 175 tracehook_report_syscall_exit(regs, 0); 175 176 176 177 #ifdef CONFIG_HAVE_SYSCALL_TRACEPOINTS 177 178 if (test_thread_flag(TIF_SYSCALL_TRACEPOINT)) 178 - trace_sys_exit(regs, regs->regs[0]); 179 + trace_sys_exit(regs, regs_return_value(regs)); 179 180 #endif 180 181 }
+8 -1
arch/riscv/kernel/setup.c
··· 149 149 150 150 void __init parse_dtb(unsigned int hartid, void *dtb) 151 151 { 152 - early_init_dt_scan(__va(dtb)); 152 + if (!early_init_dt_scan(__va(dtb))) 153 + return; 154 + 155 + pr_err("No DTB passed to the kernel\n"); 156 + #ifdef CONFIG_CMDLINE_FORCE 157 + strlcpy(boot_command_line, CONFIG_CMDLINE, COMMAND_LINE_SIZE); 158 + pr_info("Forcing kernel command line to: %s\n", boot_command_line); 159 + #endif 153 160 } 154 161 155 162 static void __init setup_bootmem(void)
+36 -7
arch/riscv/kernel/smp.c
··· 23 23 #include <linux/smp.h> 24 24 #include <linux/sched.h> 25 25 #include <linux/seq_file.h> 26 + #include <linux/delay.h> 26 27 27 28 #include <asm/sbi.h> 28 29 #include <asm/tlbflush.h> ··· 32 31 enum ipi_message_type { 33 32 IPI_RESCHEDULE, 34 33 IPI_CALL_FUNC, 34 + IPI_CPU_STOP, 35 35 IPI_MAX 36 36 }; 37 37 ··· 68 66 return -EINVAL; 69 67 } 70 68 69 + static void ipi_stop(void) 70 + { 71 + set_cpu_online(smp_processor_id(), false); 72 + while (1) 73 + wait_for_interrupt(); 74 + } 75 + 71 76 void riscv_software_interrupt(void) 72 77 { 73 78 unsigned long *pending_ipis = &ipi_data[smp_processor_id()].bits; ··· 101 92 if (ops & (1 << IPI_CALL_FUNC)) { 102 93 stats[IPI_CALL_FUNC]++; 103 94 generic_smp_call_function_interrupt(); 95 + } 96 + 97 + if (ops & (1 << IPI_CPU_STOP)) { 98 + stats[IPI_CPU_STOP]++; 99 + ipi_stop(); 104 100 } 105 101 106 102 BUG_ON((ops >> IPI_MAX) != 0); ··· 135 121 static const char * const ipi_names[] = { 136 122 [IPI_RESCHEDULE] = "Rescheduling interrupts", 137 123 [IPI_CALL_FUNC] = "Function call interrupts", 124 + [IPI_CPU_STOP] = "CPU stop interrupts", 138 125 }; 139 126 140 127 void show_ipi_stats(struct seq_file *p, int prec) ··· 161 146 send_ipi_message(cpumask_of(cpu), IPI_CALL_FUNC); 162 147 } 163 148 164 - static void ipi_stop(void *unused) 165 - { 166 - while (1) 167 - wait_for_interrupt(); 168 - } 169 - 170 149 void smp_send_stop(void) 171 150 { 172 - on_each_cpu(ipi_stop, NULL, 1); 151 + unsigned long timeout; 152 + 153 + if (num_online_cpus() > 1) { 154 + cpumask_t mask; 155 + 156 + cpumask_copy(&mask, cpu_online_mask); 157 + cpumask_clear_cpu(smp_processor_id(), &mask); 158 + 159 + if (system_state <= SYSTEM_RUNNING) 160 + pr_crit("SMP: stopping secondary CPUs\n"); 161 + send_ipi_message(&mask, IPI_CPU_STOP); 162 + } 163 + 164 + /* Wait up to one second for other CPUs to stop */ 165 + timeout = USEC_PER_SEC; 166 + while (num_online_cpus() > 1 && timeout--) 167 + udelay(1); 168 + 169 + if (num_online_cpus() > 1) 170 + pr_warn("SMP: failed to stop secondary CPUs %*pbl\n", 171 + cpumask_pr_args(cpu_online_mask)); 173 172 } 174 173 175 174 void smp_send_reschedule(int cpu)
+6 -2
arch/riscv/kernel/vmlinux.lds.S
··· 18 18 #include <asm/cache.h> 19 19 #include <asm/thread_info.h> 20 20 21 + #define MAX_BYTES_PER_LONG 0x10 22 + 21 23 OUTPUT_ARCH(riscv) 22 24 ENTRY(_start) 23 25 ··· 76 74 *(.sbss*) 77 75 } 78 76 79 - BSS_SECTION(PAGE_SIZE, PAGE_SIZE, 0) 80 - 81 77 EXCEPTION_TABLE(0x10) 82 78 NOTES 83 79 84 80 .rel.dyn : { 85 81 *(.rel.dyn*) 86 82 } 83 + 84 + BSS_SECTION(MAX_BYTES_PER_LONG, 85 + MAX_BYTES_PER_LONG, 86 + MAX_BYTES_PER_LONG) 87 87 88 88 _end = .; 89 89
+1
arch/unicore32/include/uapi/asm/Kbuild
··· 1 1 include include/uapi/asm-generic/Kbuild.asm 2 2 3 3 generic-y += kvm_para.h 4 + generic-y += shmparam.h 4 5 generic-y += ucontext.h
+1 -1
arch/x86/Kconfig
··· 446 446 branches. Requires a compiler with -mindirect-branch=thunk-extern 447 447 support for full protection. The kernel may run slower. 448 448 449 - config RESCTRL 449 + config X86_RESCTRL 450 450 bool "Resource Control support" 451 451 depends on X86 && (CPU_SUP_INTEL || CPU_SUP_AMD) 452 452 select KERNFS
+2 -2
arch/x86/include/asm/resctrl_sched.h
··· 2 2 #ifndef _ASM_X86_RESCTRL_SCHED_H 3 3 #define _ASM_X86_RESCTRL_SCHED_H 4 4 5 - #ifdef CONFIG_RESCTRL 5 + #ifdef CONFIG_X86_RESCTRL 6 6 7 7 #include <linux/sched.h> 8 8 #include <linux/jump_label.h> ··· 88 88 89 89 static inline void resctrl_sched_in(void) {} 90 90 91 - #endif /* CONFIG_RESCTRL */ 91 + #endif /* CONFIG_X86_RESCTRL */ 92 92 93 93 #endif /* _ASM_X86_RESCTRL_SCHED_H */
+1 -1
arch/x86/kernel/cpu/Makefile
··· 39 39 obj-$(CONFIG_X86_MCE) += mce/ 40 40 obj-$(CONFIG_MTRR) += mtrr/ 41 41 obj-$(CONFIG_MICROCODE) += microcode/ 42 - obj-$(CONFIG_RESCTRL) += resctrl/ 42 + obj-$(CONFIG_X86_RESCTRL) += resctrl/ 43 43 44 44 obj-$(CONFIG_X86_LOCAL_APIC) += perfctr-watchdog.o 45 45
+1 -1
arch/x86/kernel/cpu/bugs.c
··· 215 215 static enum spectre_v2_user_mitigation spectre_v2_user __ro_after_init = 216 216 SPECTRE_V2_USER_NONE; 217 217 218 - #ifdef RETPOLINE 218 + #ifdef CONFIG_RETPOLINE 219 219 static bool spectre_v2_bad_module; 220 220 221 221 bool retpoline_module_ok(bool has_retpoline)
+2 -2
arch/x86/kernel/cpu/resctrl/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 - obj-$(CONFIG_RESCTRL) += core.o rdtgroup.o monitor.o 3 - obj-$(CONFIG_RESCTRL) += ctrlmondata.o pseudo_lock.o 2 + obj-$(CONFIG_X86_RESCTRL) += core.o rdtgroup.o monitor.o 3 + obj-$(CONFIG_X86_RESCTRL) += ctrlmondata.o pseudo_lock.o 4 4 CFLAGS_pseudo_lock.o = -I$(src)
+3
arch/x86/kvm/svm.c
··· 6278 6278 int asid, ret; 6279 6279 6280 6280 ret = -EBUSY; 6281 + if (unlikely(sev->active)) 6282 + return ret; 6283 + 6281 6284 asid = sev_asid_new(); 6282 6285 if (asid < 0) 6283 6286 return ret;
+1 -2
arch/x86/kvm/vmx/nested.c
··· 4540 4540 * given physical address won't match the required 4541 4541 * VMCS12_REVISION identifier. 4542 4542 */ 4543 - nested_vmx_failValid(vcpu, 4543 + return nested_vmx_failValid(vcpu, 4544 4544 VMXERR_VMPTRLD_INCORRECT_VMCS_REVISION_ID); 4545 - return kvm_skip_emulated_instruction(vcpu); 4546 4545 } 4547 4546 new_vmcs12 = kmap(page); 4548 4547 if (new_vmcs12->hdr.revision_id != VMCS12_REVISION ||
+2 -2
arch/x86/kvm/vmx/vmx.c
··· 453 453 struct kvm_tlb_range *range) 454 454 { 455 455 struct kvm_vcpu *vcpu; 456 - int ret = -ENOTSUPP, i; 456 + int ret = 0, i; 457 457 458 458 spin_lock(&to_kvm_vmx(kvm)->ept_pointer_lock); 459 459 ··· 7044 7044 7045 7045 /* unmask address range configure area */ 7046 7046 for (i = 0; i < vmx->pt_desc.addr_range; i++) 7047 - vmx->pt_desc.ctl_bitmask &= ~(0xf << (32 + i * 4)); 7047 + vmx->pt_desc.ctl_bitmask &= ~(0xfULL << (32 + i * 4)); 7048 7048 } 7049 7049 7050 7050 static void vmx_cpuid_update(struct kvm_vcpu *vcpu)
+19 -1
block/blk-core.c
··· 661 661 * blk_attempt_plug_merge - try to merge with %current's plugged list 662 662 * @q: request_queue new bio is being queued at 663 663 * @bio: new bio being queued 664 - * @request_count: out parameter for number of traversed plugged requests 665 664 * @same_queue_rq: pointer to &struct request that gets filled in when 666 665 * another request associated with @q is found on the plug list 667 666 * (optional, may be %NULL) ··· 1682 1683 * @plug: The &struct blk_plug that needs to be initialized 1683 1684 * 1684 1685 * Description: 1686 + * blk_start_plug() indicates to the block layer an intent by the caller 1687 + * to submit multiple I/O requests in a batch. The block layer may use 1688 + * this hint to defer submitting I/Os from the caller until blk_finish_plug() 1689 + * is called. However, the block layer may choose to submit requests 1690 + * before a call to blk_finish_plug() if the number of queued I/Os 1691 + * exceeds %BLK_MAX_REQUEST_COUNT, or if the size of the I/O is larger than 1692 + * %BLK_PLUG_FLUSH_SIZE. The queued I/Os may also be submitted early if 1693 + * the task schedules (see below). 1694 + * 1685 1695 * Tracking blk_plug inside the task_struct will help with auto-flushing the 1686 1696 * pending I/O should the task end up blocking between blk_start_plug() and 1687 1697 * blk_finish_plug(). This is important from a performance perspective, but ··· 1773 1765 blk_mq_flush_plug_list(plug, from_schedule); 1774 1766 } 1775 1767 1768 + /** 1769 + * blk_finish_plug - mark the end of a batch of submitted I/O 1770 + * @plug: The &struct blk_plug passed to blk_start_plug() 1771 + * 1772 + * Description: 1773 + * Indicate that a batch of I/O submissions is complete. This function 1774 + * must be paired with an initial call to blk_start_plug(). The intent 1775 + * is to allow the block layer to optimize I/O submission. See the 1776 + * documentation for blk_start_plug() for more information. 1777 + */ 1776 1778 void blk_finish_plug(struct blk_plug *plug) 1777 1779 { 1778 1780 if (plug != current->plug)
+1
drivers/acpi/Kconfig
··· 10 10 bool "ACPI (Advanced Configuration and Power Interface) Support" 11 11 depends on ARCH_SUPPORTS_ACPI 12 12 select PNP 13 + select NLS 13 14 default y if X86 14 15 help 15 16 Advanced Configuration and Power Interface (ACPI) support for
+3 -2
drivers/acpi/arm64/iort.c
··· 876 876 return (resv == its->its_count) ? resv : -ENODEV; 877 877 } 878 878 #else 879 - static inline const struct iommu_ops *iort_fwspec_iommu_ops(struct device *dev); 879 + static inline const struct iommu_ops *iort_fwspec_iommu_ops(struct device *dev) 880 880 { return NULL; } 881 881 static inline int iort_add_device_replay(const struct iommu_ops *ops, 882 882 struct device *dev) ··· 952 952 { 953 953 struct acpi_iort_node *node; 954 954 struct acpi_iort_root_complex *rc; 955 + struct pci_bus *pbus = to_pci_dev(dev)->bus; 955 956 956 957 node = iort_scan_node(ACPI_IORT_NODE_PCI_ROOT_COMPLEX, 957 - iort_match_node_callback, dev); 958 + iort_match_node_callback, &pbus->dev); 958 959 if (!node || node->revision < 1) 959 960 return -ENODEV; 960 961
+3 -3
drivers/acpi/numa.c
··· 146 146 { 147 147 struct acpi_srat_mem_affinity *p = 148 148 (struct acpi_srat_mem_affinity *)header; 149 - pr_debug("SRAT Memory (0x%lx length 0x%lx) in proximity domain %d %s%s%s\n", 150 - (unsigned long)p->base_address, 151 - (unsigned long)p->length, 149 + pr_debug("SRAT Memory (0x%llx length 0x%llx) in proximity domain %d %s%s%s\n", 150 + (unsigned long long)p->base_address, 151 + (unsigned long long)p->length, 152 152 p->proximity_domain, 153 153 (p->flags & ACPI_SRAT_MEM_ENABLED) ? 154 154 "enabled" : "disabled",
+33 -8
drivers/acpi/pmic/intel_pmic_xpower.c
··· 20 20 #define GPI1_LDO_ON (3 << 0) 21 21 #define GPI1_LDO_OFF (4 << 0) 22 22 23 - #define AXP288_ADC_TS_PIN_GPADC 0xf2 24 - #define AXP288_ADC_TS_PIN_ON 0xf3 23 + #define AXP288_ADC_TS_CURRENT_ON_OFF_MASK GENMASK(1, 0) 24 + #define AXP288_ADC_TS_CURRENT_OFF (0 << 0) 25 + #define AXP288_ADC_TS_CURRENT_ON_WHEN_CHARGING (1 << 0) 26 + #define AXP288_ADC_TS_CURRENT_ON_ONDEMAND (2 << 0) 27 + #define AXP288_ADC_TS_CURRENT_ON (3 << 0) 25 28 26 29 static struct pmic_table power_table[] = { 27 30 { ··· 215 212 */ 216 213 static int intel_xpower_pmic_get_raw_temp(struct regmap *regmap, int reg) 217 214 { 215 + int ret, adc_ts_pin_ctrl; 218 216 u8 buf[2]; 219 - int ret; 220 217 221 - ret = regmap_write(regmap, AXP288_ADC_TS_PIN_CTRL, 222 - AXP288_ADC_TS_PIN_GPADC); 218 + /* 219 + * The current-source used for the battery temp-sensor (TS) is shared 220 + * with the GPADC. For proper fuel-gauge and charger operation the TS 221 + * current-source needs to be permanently on. But to read the GPADC we 222 + * need to temporary switch the TS current-source to ondemand, so that 223 + * the GPADC can use it, otherwise we will always read an all 0 value. 224 + * 225 + * Note that the switching from on to on-ondemand is not necessary 226 + * when the TS current-source is off (this happens on devices which 227 + * do not use the TS-pin). 
228 + */ 229 + ret = regmap_read(regmap, AXP288_ADC_TS_PIN_CTRL, &adc_ts_pin_ctrl); 223 230 if (ret) 224 231 return ret; 225 232 226 - /* After switching to the GPADC pin give things some time to settle */ 227 - usleep_range(6000, 10000); 233 + if (adc_ts_pin_ctrl & AXP288_ADC_TS_CURRENT_ON_OFF_MASK) { 234 + ret = regmap_update_bits(regmap, AXP288_ADC_TS_PIN_CTRL, 235 + AXP288_ADC_TS_CURRENT_ON_OFF_MASK, 236 + AXP288_ADC_TS_CURRENT_ON_ONDEMAND); 237 + if (ret) 238 + return ret; 239 + 240 + /* Wait a bit after switching the current-source */ 241 + usleep_range(6000, 10000); 242 + } 228 243 229 244 ret = regmap_bulk_read(regmap, AXP288_GP_ADC_H, buf, 2); 230 245 if (ret == 0) 231 246 ret = (buf[0] << 4) + ((buf[1] >> 4) & 0x0f); 232 247 233 - regmap_write(regmap, AXP288_ADC_TS_PIN_CTRL, AXP288_ADC_TS_PIN_ON); 248 + if (adc_ts_pin_ctrl & AXP288_ADC_TS_CURRENT_ON_OFF_MASK) { 249 + regmap_update_bits(regmap, AXP288_ADC_TS_PIN_CTRL, 250 + AXP288_ADC_TS_CURRENT_ON_OFF_MASK, 251 + AXP288_ADC_TS_CURRENT_ON); 252 + } 234 253 235 254 return ret; 236 255 }
+22
drivers/acpi/power.c
··· 131 131 } 132 132 } 133 133 134 + static bool acpi_power_resource_is_dup(union acpi_object *package, 135 + unsigned int start, unsigned int i) 136 + { 137 + acpi_handle rhandle, dup; 138 + unsigned int j; 139 + 140 + /* The caller is expected to check the package element types */ 141 + rhandle = package->package.elements[i].reference.handle; 142 + for (j = start; j < i; j++) { 143 + dup = package->package.elements[j].reference.handle; 144 + if (dup == rhandle) 145 + return true; 146 + } 147 + 148 + return false; 149 + } 150 + 134 151 int acpi_extract_power_resources(union acpi_object *package, unsigned int start, 135 152 struct list_head *list) 136 153 { ··· 167 150 err = -ENODEV; 168 151 break; 169 152 } 153 + 154 + /* Some ACPI tables contain duplicate power resource references */ 155 + if (acpi_power_resource_is_dup(package, start, i)) 156 + continue; 157 + 170 158 err = acpi_add_power_resource(rhandle); 171 159 if (err) 172 160 break;
+2
drivers/ata/ahci.h
··· 254 254 AHCI_HFLAG_IS_MOBILE = (1 << 25), /* mobile chipset, use 255 255 SATA_MOBILE_LPM_POLICY 256 256 as default lpm_policy */ 257 + AHCI_HFLAG_SUSPEND_PHYS = (1 << 26), /* handle PHYs during 258 + suspend/resume */ 257 259 258 260 /* ap->flags bits */ 259 261
+64 -23
drivers/ata/ahci_mvebu.c
··· 28 28 #define AHCI_WINDOW_BASE(win) (0x64 + ((win) << 4)) 29 29 #define AHCI_WINDOW_SIZE(win) (0x68 + ((win) << 4)) 30 30 31 + struct ahci_mvebu_plat_data { 32 + int (*plat_config)(struct ahci_host_priv *hpriv); 33 + unsigned int flags; 34 + }; 35 + 31 36 static void ahci_mvebu_mbus_config(struct ahci_host_priv *hpriv, 32 37 const struct mbus_dram_target_info *dram) 33 38 { ··· 65 60 */ 66 61 writel(0x4, hpriv->mmio + AHCI_VENDOR_SPECIFIC_0_ADDR); 67 62 writel(0x80, hpriv->mmio + AHCI_VENDOR_SPECIFIC_0_DATA); 63 + } 64 + 65 + static int ahci_mvebu_armada_380_config(struct ahci_host_priv *hpriv) 66 + { 67 + const struct mbus_dram_target_info *dram; 68 + int rc = 0; 69 + 70 + dram = mv_mbus_dram_info(); 71 + if (dram) 72 + ahci_mvebu_mbus_config(hpriv, dram); 73 + else 74 + rc = -ENODEV; 75 + 76 + ahci_mvebu_regret_option(hpriv); 77 + 78 + return rc; 79 + } 80 + 81 + static int ahci_mvebu_armada_3700_config(struct ahci_host_priv *hpriv) 82 + { 83 + u32 reg; 84 + 85 + writel(0, hpriv->mmio + AHCI_VENDOR_SPECIFIC_0_ADDR); 86 + 87 + reg = readl(hpriv->mmio + AHCI_VENDOR_SPECIFIC_0_DATA); 88 + reg |= BIT(6); 89 + writel(reg, hpriv->mmio + AHCI_VENDOR_SPECIFIC_0_DATA); 90 + 91 + return 0; 68 92 } 69 93 70 94 /** ··· 160 126 { 161 127 struct ata_host *host = platform_get_drvdata(pdev); 162 128 struct ahci_host_priv *hpriv = host->private_data; 163 - const struct mbus_dram_target_info *dram; 129 + const struct ahci_mvebu_plat_data *pdata = hpriv->plat_data; 164 130 165 - dram = mv_mbus_dram_info(); 166 - if (dram) 167 - ahci_mvebu_mbus_config(hpriv, dram); 168 - 169 - ahci_mvebu_regret_option(hpriv); 131 + pdata->plat_config(hpriv); 170 132 171 133 return ahci_platform_resume_host(&pdev->dev); 172 134 } ··· 184 154 185 155 static int ahci_mvebu_probe(struct platform_device *pdev) 186 156 { 157 + const struct ahci_mvebu_plat_data *pdata; 187 158 struct ahci_host_priv *hpriv; 188 - const struct mbus_dram_target_info *dram; 189 159 int rc; 160 + 161 + pdata = 
of_device_get_match_data(&pdev->dev); 162 + if (!pdata) 163 + return -EINVAL; 190 164 191 165 hpriv = ahci_platform_get_resources(pdev, 0); 192 166 if (IS_ERR(hpriv)) 193 167 return PTR_ERR(hpriv); 168 + 169 + hpriv->flags |= pdata->flags; 170 + hpriv->plat_data = (void *)pdata; 194 171 195 172 rc = ahci_platform_enable_resources(hpriv); 196 173 if (rc) ··· 205 168 206 169 hpriv->stop_engine = ahci_mvebu_stop_engine; 207 170 208 - if (of_device_is_compatible(pdev->dev.of_node, 209 - "marvell,armada-380-ahci")) { 210 - dram = mv_mbus_dram_info(); 211 - if (!dram) 212 - return -ENODEV; 213 - 214 - ahci_mvebu_mbus_config(hpriv, dram); 215 - ahci_mvebu_regret_option(hpriv); 216 - } 171 + rc = pdata->plat_config(hpriv); 172 + if (rc) 173 + goto disable_resources; 217 174 218 175 rc = ahci_platform_init_host(pdev, hpriv, &ahci_mvebu_port_info, 219 176 &ahci_platform_sht); ··· 221 190 return rc; 222 191 } 223 192 193 + static const struct ahci_mvebu_plat_data ahci_mvebu_armada_380_plat_data = { 194 + .plat_config = ahci_mvebu_armada_380_config, 195 + }; 196 + 197 + static const struct ahci_mvebu_plat_data ahci_mvebu_armada_3700_plat_data = { 198 + .plat_config = ahci_mvebu_armada_3700_config, 199 + .flags = AHCI_HFLAG_SUSPEND_PHYS, 200 + }; 201 + 224 202 static const struct of_device_id ahci_mvebu_of_match[] = { 225 - { .compatible = "marvell,armada-380-ahci", }, 226 - { .compatible = "marvell,armada-3700-ahci", }, 203 + { 204 + .compatible = "marvell,armada-380-ahci", 205 + .data = &ahci_mvebu_armada_380_plat_data, 206 + }, 207 + { 208 + .compatible = "marvell,armada-3700-ahci", 209 + .data = &ahci_mvebu_armada_3700_plat_data, 210 + }, 227 211 { }, 228 212 }; 229 213 MODULE_DEVICE_TABLE(of, ahci_mvebu_of_match); 230 214 231 - /* 232 - * We currently don't provide power management related operations, 233 - * since there is no suspend/resume support at the platform level for 234 - * Armada 38x for the moment. 
235 - */ 236 215 static struct platform_driver ahci_mvebu_driver = { 237 216 .probe = ahci_mvebu_probe, 238 217 .remove = ata_platform_remove_one,
+13
drivers/ata/libahci_platform.c
··· 56 56 if (rc) 57 57 goto disable_phys; 58 58 59 + rc = phy_set_mode(hpriv->phys[i], PHY_MODE_SATA); 60 + if (rc) { 61 + phy_exit(hpriv->phys[i]); 62 + goto disable_phys; 63 + } 64 + 59 65 rc = phy_power_on(hpriv->phys[i]); 60 66 if (rc) { 61 67 phy_exit(hpriv->phys[i]); ··· 744 738 writel(ctl, mmio + HOST_CTL); 745 739 readl(mmio + HOST_CTL); /* flush */ 746 740 741 + if (hpriv->flags & AHCI_HFLAG_SUSPEND_PHYS) 742 + ahci_platform_disable_phys(hpriv); 743 + 747 744 return ata_host_suspend(host, PMSG_SUSPEND); 748 745 } 749 746 EXPORT_SYMBOL_GPL(ahci_platform_suspend_host); ··· 765 756 int ahci_platform_resume_host(struct device *dev) 766 757 { 767 758 struct ata_host *host = dev_get_drvdata(dev); 759 + struct ahci_host_priv *hpriv = host->private_data; 768 760 int rc; 769 761 770 762 if (dev->power.power_state.event == PM_EVENT_SUSPEND) { ··· 775 765 776 766 ahci_init_controller(host); 777 767 } 768 + 769 + if (hpriv->flags & AHCI_HFLAG_SUSPEND_PHYS) 770 + ahci_platform_enable_phys(hpriv); 778 771 779 772 ata_host_resume(host); 780 773
+2 -2
drivers/ata/sata_fsl.c
··· 729 729 if (!pp) 730 730 return -ENOMEM; 731 731 732 - mem = dma_zalloc_coherent(dev, SATA_FSL_PORT_PRIV_DMA_SZ, &mem_dma, 733 - GFP_KERNEL); 732 + mem = dma_alloc_coherent(dev, SATA_FSL_PORT_PRIV_DMA_SZ, &mem_dma, 733 + GFP_KERNEL); 734 734 if (!mem) { 735 735 kfree(pp); 736 736 return -ENOMEM;
+19 -20
drivers/atm/he.c
··· 533 533 534 534 static int he_init_tpdrq(struct he_dev *he_dev) 535 535 { 536 - he_dev->tpdrq_base = dma_zalloc_coherent(&he_dev->pci_dev->dev, 537 - CONFIG_TPDRQ_SIZE * sizeof(struct he_tpdrq), 538 - &he_dev->tpdrq_phys, GFP_KERNEL); 536 + he_dev->tpdrq_base = dma_alloc_coherent(&he_dev->pci_dev->dev, 537 + CONFIG_TPDRQ_SIZE * sizeof(struct he_tpdrq), 538 + &he_dev->tpdrq_phys, 539 + GFP_KERNEL); 539 540 if (he_dev->tpdrq_base == NULL) { 540 541 hprintk("failed to alloc tpdrq\n"); 541 542 return -ENOMEM; ··· 806 805 goto out_free_rbpl_virt; 807 806 } 808 807 809 - he_dev->rbpl_base = dma_zalloc_coherent(&he_dev->pci_dev->dev, 810 - CONFIG_RBPL_SIZE * sizeof(struct he_rbp), 811 - &he_dev->rbpl_phys, GFP_KERNEL); 808 + he_dev->rbpl_base = dma_alloc_coherent(&he_dev->pci_dev->dev, 809 + CONFIG_RBPL_SIZE * sizeof(struct he_rbp), 810 + &he_dev->rbpl_phys, GFP_KERNEL); 812 811 if (he_dev->rbpl_base == NULL) { 813 812 hprintk("failed to alloc rbpl_base\n"); 814 813 goto out_destroy_rbpl_pool; ··· 845 844 846 845 /* rx buffer ready queue */ 847 846 848 - he_dev->rbrq_base = dma_zalloc_coherent(&he_dev->pci_dev->dev, 849 - CONFIG_RBRQ_SIZE * sizeof(struct he_rbrq), 850 - &he_dev->rbrq_phys, GFP_KERNEL); 847 + he_dev->rbrq_base = dma_alloc_coherent(&he_dev->pci_dev->dev, 848 + CONFIG_RBRQ_SIZE * sizeof(struct he_rbrq), 849 + &he_dev->rbrq_phys, GFP_KERNEL); 851 850 if (he_dev->rbrq_base == NULL) { 852 851 hprintk("failed to allocate rbrq\n"); 853 852 goto out_free_rbpl; ··· 869 868 870 869 /* tx buffer ready queue */ 871 870 872 - he_dev->tbrq_base = dma_zalloc_coherent(&he_dev->pci_dev->dev, 873 - CONFIG_TBRQ_SIZE * sizeof(struct he_tbrq), 874 - &he_dev->tbrq_phys, GFP_KERNEL); 871 + he_dev->tbrq_base = dma_alloc_coherent(&he_dev->pci_dev->dev, 872 + CONFIG_TBRQ_SIZE * sizeof(struct he_tbrq), 873 + &he_dev->tbrq_phys, GFP_KERNEL); 875 874 if (he_dev->tbrq_base == NULL) { 876 875 hprintk("failed to allocate tbrq\n"); 877 876 goto out_free_rbpq_base; ··· 914 913 /* 
2.9.3.5 tail offset for each interrupt queue is located after the 915 914 end of the interrupt queue */ 916 915 917 - he_dev->irq_base = dma_zalloc_coherent(&he_dev->pci_dev->dev, 918 - (CONFIG_IRQ_SIZE + 1) 919 - * sizeof(struct he_irq), 920 - &he_dev->irq_phys, 921 - GFP_KERNEL); 916 + he_dev->irq_base = dma_alloc_coherent(&he_dev->pci_dev->dev, 917 + (CONFIG_IRQ_SIZE + 1) * sizeof(struct he_irq), 918 + &he_dev->irq_phys, GFP_KERNEL); 922 919 if (he_dev->irq_base == NULL) { 923 920 hprintk("failed to allocate irq\n"); 924 921 return -ENOMEM; ··· 1463 1464 1464 1465 /* host status page */ 1465 1466 1466 - he_dev->hsp = dma_zalloc_coherent(&he_dev->pci_dev->dev, 1467 - sizeof(struct he_hsp), 1468 - &he_dev->hsp_phys, GFP_KERNEL); 1467 + he_dev->hsp = dma_alloc_coherent(&he_dev->pci_dev->dev, 1468 + sizeof(struct he_hsp), 1469 + &he_dev->hsp_phys, GFP_KERNEL); 1469 1470 if (he_dev->hsp == NULL) { 1470 1471 hprintk("failed to allocate host status page\n"); 1471 1472 return -ENOMEM;
+8 -8
drivers/atm/idt77252.c
··· 641 641 scq = kzalloc(sizeof(struct scq_info), GFP_KERNEL); 642 642 if (!scq) 643 643 return NULL; 644 - scq->base = dma_zalloc_coherent(&card->pcidev->dev, SCQ_SIZE, 645 - &scq->paddr, GFP_KERNEL); 644 + scq->base = dma_alloc_coherent(&card->pcidev->dev, SCQ_SIZE, 645 + &scq->paddr, GFP_KERNEL); 646 646 if (scq->base == NULL) { 647 647 kfree(scq); 648 648 return NULL; ··· 971 971 { 972 972 struct rsq_entry *rsqe; 973 973 974 - card->rsq.base = dma_zalloc_coherent(&card->pcidev->dev, RSQSIZE, 975 - &card->rsq.paddr, GFP_KERNEL); 974 + card->rsq.base = dma_alloc_coherent(&card->pcidev->dev, RSQSIZE, 975 + &card->rsq.paddr, GFP_KERNEL); 976 976 if (card->rsq.base == NULL) { 977 977 printk("%s: can't allocate RSQ.\n", card->name); 978 978 return -1; ··· 3390 3390 writel(0, SAR_REG_GP); 3391 3391 3392 3392 /* Initialize RAW Cell Handle Register */ 3393 - card->raw_cell_hnd = dma_zalloc_coherent(&card->pcidev->dev, 3394 - 2 * sizeof(u32), 3395 - &card->raw_cell_paddr, 3396 - GFP_KERNEL); 3393 + card->raw_cell_hnd = dma_alloc_coherent(&card->pcidev->dev, 3394 + 2 * sizeof(u32), 3395 + &card->raw_cell_paddr, 3396 + GFP_KERNEL); 3397 3397 if (!card->raw_cell_hnd) { 3398 3398 printk("%s: memory allocation failure.\n", card->name); 3399 3399 deinit_card(card);
+3
drivers/base/power/main.c
··· 32 32 #include <trace/events/power.h> 33 33 #include <linux/cpufreq.h> 34 34 #include <linux/cpuidle.h> 35 + #include <linux/devfreq.h> 35 36 #include <linux/timer.h> 36 37 37 38 #include "../base.h" ··· 1079 1078 dpm_show_time(starttime, state, 0, NULL); 1080 1079 1081 1080 cpufreq_resume(); 1081 + devfreq_resume(); 1082 1082 trace_suspend_resume(TPS("dpm_resume"), state.event, false); 1083 1083 } 1084 1084 ··· 1854 1852 trace_suspend_resume(TPS("dpm_suspend"), state.event, true); 1855 1853 might_sleep(); 1856 1854 1855 + devfreq_suspend(); 1857 1856 cpufreq_suspend(); 1858 1857 1859 1858 mutex_lock(&dpm_list_mtx);
+7 -4
drivers/base/power/runtime.c
··· 121 121 * Compute the autosuspend-delay expiration time based on the device's 122 122 * power.last_busy time. If the delay has already expired or is disabled 123 123 * (negative) or the power.use_autosuspend flag isn't set, return 0. 124 - * Otherwise return the expiration time in jiffies (adjusted to be nonzero). 124 + * Otherwise return the expiration time in nanoseconds (adjusted to be nonzero). 125 125 * 126 126 * This function may be called either with or without dev->power.lock held. 127 127 * Either way it can be racy, since power.last_busy may be updated at any time. ··· 141 141 142 142 last_busy = READ_ONCE(dev->power.last_busy); 143 143 144 - expires = last_busy + autosuspend_delay * NSEC_PER_MSEC; 144 + expires = last_busy + (u64)autosuspend_delay * NSEC_PER_MSEC; 145 145 if (expires <= now) 146 146 expires = 0; /* Already expired. */ 147 147 ··· 525 525 * We add a slack of 25% to gather wakeups 526 526 * without sacrificing the granularity. 527 527 */ 528 - u64 slack = READ_ONCE(dev->power.autosuspend_delay) * 528 + u64 slack = (u64)READ_ONCE(dev->power.autosuspend_delay) * 529 529 (NSEC_PER_MSEC >> 2); 530 530 531 531 dev->power.timer_expires = expires; ··· 905 905 spin_lock_irqsave(&dev->power.lock, flags); 906 906 907 907 expires = dev->power.timer_expires; 908 - /* If 'expire' is after 'jiffies' we've been called too early. */ 908 + /* 909 + * If 'expires' is after the current time, we've been called 910 + * too early. 911 + */ 909 912 if (expires > 0 && expires < ktime_to_ns(ktime_get())) { 910 913 dev->power.timer_expires = 0; 911 914 rpm_suspend(dev, dev->power.timer_autosuspends ?
+33 -2
drivers/block/loop.c
··· 1190 1190 goto out_unlock; 1191 1191 } 1192 1192 1193 + if (lo->lo_offset != info->lo_offset || 1194 + lo->lo_sizelimit != info->lo_sizelimit) { 1195 + sync_blockdev(lo->lo_device); 1196 + kill_bdev(lo->lo_device); 1197 + } 1198 + 1193 1199 /* I/O need to be drained during transfer transition */ 1194 1200 blk_mq_freeze_queue(lo->lo_queue); 1195 1201 ··· 1224 1218 1225 1219 if (lo->lo_offset != info->lo_offset || 1226 1220 lo->lo_sizelimit != info->lo_sizelimit) { 1221 + /* kill_bdev should have truncated all the pages */ 1222 + if (lo->lo_device->bd_inode->i_mapping->nrpages) { 1223 + err = -EAGAIN; 1224 + pr_warn("%s: loop%d (%s) has still dirty pages (nrpages=%lu)\n", 1225 + __func__, lo->lo_number, lo->lo_file_name, 1226 + lo->lo_device->bd_inode->i_mapping->nrpages); 1227 + goto out_unfreeze; 1228 + } 1227 1229 if (figure_loop_size(lo, info->lo_offset, info->lo_sizelimit)) { 1228 1230 err = -EFBIG; 1229 1231 goto out_unfreeze; ··· 1457 1443 1458 1444 static int loop_set_block_size(struct loop_device *lo, unsigned long arg) 1459 1445 { 1446 + int err = 0; 1447 + 1460 1448 if (lo->lo_state != Lo_bound) 1461 1449 return -ENXIO; 1462 1450 1463 1451 if (arg < 512 || arg > PAGE_SIZE || !is_power_of_2(arg)) 1464 1452 return -EINVAL; 1465 1453 1454 + if (lo->lo_queue->limits.logical_block_size != arg) { 1455 + sync_blockdev(lo->lo_device); 1456 + kill_bdev(lo->lo_device); 1457 + } 1458 + 1466 1459 blk_mq_freeze_queue(lo->lo_queue); 1460 + 1461 + /* kill_bdev should have truncated all the pages */ 1462 + if (lo->lo_queue->limits.logical_block_size != arg && 1463 + lo->lo_device->bd_inode->i_mapping->nrpages) { 1464 + err = -EAGAIN; 1465 + pr_warn("%s: loop%d (%s) has still dirty pages (nrpages=%lu)\n", 1466 + __func__, lo->lo_number, lo->lo_file_name, 1467 + lo->lo_device->bd_inode->i_mapping->nrpages); 1468 + goto out_unfreeze; 1469 + } 1467 1470 1468 1471 blk_queue_logical_block_size(lo->lo_queue, arg); 1469 1472 blk_queue_physical_block_size(lo->lo_queue, arg); 
1470 1473 blk_queue_io_min(lo->lo_queue, arg); 1471 1474 loop_update_dio(lo); 1472 - 1475 + out_unfreeze: 1473 1476 blk_mq_unfreeze_queue(lo->lo_queue); 1474 1477 1475 - return 0; 1478 + return err; 1476 1479 } 1477 1480 1478 1481 static int lo_simple_ioctl(struct loop_device *lo, unsigned int cmd,
+1
drivers/block/null_blk.h
··· 97 97 #else 98 98 static inline int null_zone_init(struct nullb_device *dev) 99 99 { 100 + pr_err("null_blk: CONFIG_BLK_DEV_ZONED not enabled\n"); 100 101 return -EINVAL; 101 102 } 102 103 static inline void null_zone_exit(struct nullb_device *dev) {}
+4 -5
drivers/block/rbd.c
··· 5986 5986 struct list_head *tmp; 5987 5987 int dev_id; 5988 5988 char opt_buf[6]; 5989 - bool already = false; 5990 5989 bool force = false; 5991 5990 int ret; 5992 5991 ··· 6018 6019 spin_lock_irq(&rbd_dev->lock); 6019 6020 if (rbd_dev->open_count && !force) 6020 6021 ret = -EBUSY; 6021 - else 6022 - already = test_and_set_bit(RBD_DEV_FLAG_REMOVING, 6023 - &rbd_dev->flags); 6022 + else if (test_and_set_bit(RBD_DEV_FLAG_REMOVING, 6023 + &rbd_dev->flags)) 6024 + ret = -EINPROGRESS; 6024 6025 spin_unlock_irq(&rbd_dev->lock); 6025 6026 } 6026 6027 spin_unlock(&rbd_dev_list_lock); 6027 - if (ret < 0 || already) 6028 + if (ret) 6028 6029 return ret; 6029 6030 6030 6031 if (force) {
+2 -2
drivers/block/skd_main.c
··· 2641 2641 "comp pci_alloc, total bytes %zd entries %d\n", 2642 2642 SKD_SKCOMP_SIZE, SKD_N_COMPLETION_ENTRY); 2643 2643 2644 - skcomp = dma_zalloc_coherent(&skdev->pdev->dev, SKD_SKCOMP_SIZE, 2645 - &skdev->cq_dma_address, GFP_KERNEL); 2644 + skcomp = dma_alloc_coherent(&skdev->pdev->dev, SKD_SKCOMP_SIZE, 2645 + &skdev->cq_dma_address, GFP_KERNEL); 2646 2646 2647 2647 if (skcomp == NULL) { 2648 2648 rc = -ENOMEM;
+66 -24
drivers/block/zram/zram_drv.c
··· 316 316 * See the comment in writeback_store. 317 317 */ 318 318 zram_slot_lock(zram, index); 319 - if (!zram_allocated(zram, index) || 320 - zram_test_flag(zram, index, ZRAM_UNDER_WB)) 321 - goto next; 322 - zram_set_flag(zram, index, ZRAM_IDLE); 323 - next: 319 + if (zram_allocated(zram, index) && 320 + !zram_test_flag(zram, index, ZRAM_UNDER_WB)) 321 + zram_set_flag(zram, index, ZRAM_IDLE); 324 322 zram_slot_unlock(zram, index); 325 323 } 326 324 ··· 328 330 } 329 331 330 332 #ifdef CONFIG_ZRAM_WRITEBACK 333 + static ssize_t writeback_limit_enable_store(struct device *dev, 334 + struct device_attribute *attr, const char *buf, size_t len) 335 + { 336 + struct zram *zram = dev_to_zram(dev); 337 + u64 val; 338 + ssize_t ret = -EINVAL; 339 + 340 + if (kstrtoull(buf, 10, &val)) 341 + return ret; 342 + 343 + down_read(&zram->init_lock); 344 + spin_lock(&zram->wb_limit_lock); 345 + zram->wb_limit_enable = val; 346 + spin_unlock(&zram->wb_limit_lock); 347 + up_read(&zram->init_lock); 348 + ret = len; 349 + 350 + return ret; 351 + } 352 + 353 + static ssize_t writeback_limit_enable_show(struct device *dev, 354 + struct device_attribute *attr, char *buf) 355 + { 356 + bool val; 357 + struct zram *zram = dev_to_zram(dev); 358 + 359 + down_read(&zram->init_lock); 360 + spin_lock(&zram->wb_limit_lock); 361 + val = zram->wb_limit_enable; 362 + spin_unlock(&zram->wb_limit_lock); 363 + up_read(&zram->init_lock); 364 + 365 + return scnprintf(buf, PAGE_SIZE, "%d\n", val); 366 + } 367 + 331 368 static ssize_t writeback_limit_store(struct device *dev, 332 369 struct device_attribute *attr, const char *buf, size_t len) 333 370 { ··· 374 341 return ret; 375 342 376 343 down_read(&zram->init_lock); 377 - atomic64_set(&zram->stats.bd_wb_limit, val); 378 - if (val == 0) 379 - zram->stop_writeback = false; 344 + spin_lock(&zram->wb_limit_lock); 345 + zram->bd_wb_limit = val; 346 + spin_unlock(&zram->wb_limit_lock); 380 347 up_read(&zram->init_lock); 381 348 ret = len; 382 349 ··· 390 
357 struct zram *zram = dev_to_zram(dev); 391 358 392 359 down_read(&zram->init_lock); 393 - val = atomic64_read(&zram->stats.bd_wb_limit); 360 + spin_lock(&zram->wb_limit_lock); 361 + val = zram->bd_wb_limit; 362 + spin_unlock(&zram->wb_limit_lock); 394 363 up_read(&zram->init_lock); 395 364 396 365 return scnprintf(buf, PAGE_SIZE, "%llu\n", val); ··· 623 588 return 1; 624 589 } 625 590 626 - #define HUGE_WRITEBACK 0x1 627 - #define IDLE_WRITEBACK 0x2 591 + #define HUGE_WRITEBACK 1 592 + #define IDLE_WRITEBACK 2 628 593 629 594 static ssize_t writeback_store(struct device *dev, 630 595 struct device_attribute *attr, const char *buf, size_t len) ··· 637 602 struct page *page; 638 603 ssize_t ret, sz; 639 604 char mode_buf[8]; 640 - unsigned long mode = -1UL; 605 + int mode = -1; 641 606 unsigned long blk_idx = 0; 642 607 643 608 sz = strscpy(mode_buf, buf, sizeof(mode_buf)); ··· 653 618 else if (!strcmp(mode_buf, "huge")) 654 619 mode = HUGE_WRITEBACK; 655 620 656 - if (mode == -1UL) 621 + if (mode == -1) 657 622 return -EINVAL; 658 623 659 624 down_read(&zram->init_lock); ··· 680 645 bvec.bv_len = PAGE_SIZE; 681 646 bvec.bv_offset = 0; 682 647 683 - if (zram->stop_writeback) { 648 + spin_lock(&zram->wb_limit_lock); 649 + if (zram->wb_limit_enable && !zram->bd_wb_limit) { 650 + spin_unlock(&zram->wb_limit_lock); 684 651 ret = -EIO; 685 652 break; 686 653 } 654 + spin_unlock(&zram->wb_limit_lock); 687 655 688 656 if (!blk_idx) { 689 657 blk_idx = alloc_block_bdev(zram); ··· 705 667 zram_test_flag(zram, index, ZRAM_UNDER_WB)) 706 668 goto next; 707 669 708 - if ((mode & IDLE_WRITEBACK && 709 - !zram_test_flag(zram, index, ZRAM_IDLE)) && 710 - (mode & HUGE_WRITEBACK && 711 - !zram_test_flag(zram, index, ZRAM_HUGE))) 670 + if (mode == IDLE_WRITEBACK && 671 + !zram_test_flag(zram, index, ZRAM_IDLE)) 672 + goto next; 673 + if (mode == HUGE_WRITEBACK && 674 + !zram_test_flag(zram, index, ZRAM_HUGE)) 712 675 goto next; 713 676 /* 714 677 * Clearing ZRAM_UNDER_WB is duty of 
caller. ··· 771 732 zram_set_element(zram, index, blk_idx); 772 733 blk_idx = 0; 773 734 atomic64_inc(&zram->stats.pages_stored); 774 - if (atomic64_add_unless(&zram->stats.bd_wb_limit, 775 - -1 << (PAGE_SHIFT - 12), 0)) { 776 - if (atomic64_read(&zram->stats.bd_wb_limit) == 0) 777 - zram->stop_writeback = true; 778 - } 735 + spin_lock(&zram->wb_limit_lock); 736 + if (zram->wb_limit_enable && zram->bd_wb_limit > 0) 737 + zram->bd_wb_limit -= 1UL << (PAGE_SHIFT - 12); 738 + spin_unlock(&zram->wb_limit_lock); 779 739 next: 780 740 zram_slot_unlock(zram, index); 781 741 } ··· 1850 1812 static DEVICE_ATTR_RW(backing_dev); 1851 1813 static DEVICE_ATTR_WO(writeback); 1852 1814 static DEVICE_ATTR_RW(writeback_limit); 1815 + static DEVICE_ATTR_RW(writeback_limit_enable); 1853 1816 #endif 1854 1817 1855 1818 static struct attribute *zram_disk_attrs[] = { ··· 1867 1828 &dev_attr_backing_dev.attr, 1868 1829 &dev_attr_writeback.attr, 1869 1830 &dev_attr_writeback_limit.attr, 1831 + &dev_attr_writeback_limit_enable.attr, 1870 1832 #endif 1871 1833 &dev_attr_io_stat.attr, 1872 1834 &dev_attr_mm_stat.attr, ··· 1907 1867 device_id = ret; 1908 1868 1909 1869 init_rwsem(&zram->init_lock); 1910 - 1870 + #ifdef CONFIG_ZRAM_WRITEBACK 1871 + spin_lock_init(&zram->wb_limit_lock); 1872 + #endif 1911 1873 queue = blk_alloc_queue(GFP_KERNEL); 1912 1874 if (!queue) { 1913 1875 pr_err("Error allocating disk queue for device %d\n",
+3 -2
drivers/block/zram/zram_drv.h
··· 86 86 atomic64_t bd_count; /* no. of pages in backing device */ 87 87 atomic64_t bd_reads; /* no. of reads from backing device */ 88 88 atomic64_t bd_writes; /* no. of writes from backing device */ 89 - atomic64_t bd_wb_limit; /* writeback limit of backing device */ 90 89 #endif 91 90 }; 92 91 ··· 113 114 */ 114 115 bool claim; /* Protected by bdev->bd_mutex */ 115 116 struct file *backing_dev; 116 - bool stop_writeback; 117 117 #ifdef CONFIG_ZRAM_WRITEBACK 118 + spinlock_t wb_limit_lock; 119 + bool wb_limit_enable; 120 + u64 bd_wb_limit; 118 121 struct block_device *bdev; 119 122 unsigned int old_block_size; 120 123 unsigned long *bitmap;
+4 -8
drivers/cpufreq/cpufreq.c
··· 1530 1530 { 1531 1531 unsigned int ret_freq = 0; 1532 1532 1533 - if (!cpufreq_driver->get) 1533 + if (unlikely(policy_is_inactive(policy)) || !cpufreq_driver->get) 1534 1534 return ret_freq; 1535 1535 1536 1536 ret_freq = cpufreq_driver->get(policy->cpu); 1537 1537 1538 1538 /* 1539 - * Updating inactive policies is invalid, so avoid doing that. Also 1540 - * if fast frequency switching is used with the given policy, the check 1539 + * If fast frequency switching is used with the given policy, the check 1541 1540 * against policy->cur is pointless, so skip it in that case too. 1542 1541 */ 1543 - if (unlikely(policy_is_inactive(policy)) || policy->fast_switch_enabled) 1542 + if (policy->fast_switch_enabled) 1544 1543 return ret_freq; 1545 1544 1546 1545 if (ret_freq && policy->cur && ··· 1568 1569 1569 1570 if (policy) { 1570 1571 down_read(&policy->rwsem); 1571 - 1572 - if (!policy_is_inactive(policy)) 1573 - ret_freq = __cpufreq_get(policy); 1574 - 1572 + ret_freq = __cpufreq_get(policy); 1575 1573 up_read(&policy->rwsem); 1576 1574 1577 1575 cpufreq_cpu_put(policy);
+4 -4
drivers/cpufreq/scmi-cpufreq.c
··· 52 52 int ret; 53 53 struct scmi_data *priv = policy->driver_data; 54 54 struct scmi_perf_ops *perf_ops = handle->perf_ops; 55 - u64 freq = policy->freq_table[index].frequency * 1000; 55 + u64 freq = policy->freq_table[index].frequency; 56 56 57 - ret = perf_ops->freq_set(handle, priv->domain_id, freq, false); 57 + ret = perf_ops->freq_set(handle, priv->domain_id, freq * 1000, false); 58 58 if (!ret) 59 59 arch_set_freq_scale(policy->related_cpus, freq, 60 60 policy->cpuinfo.max_freq); ··· 176 176 out_free_priv: 177 177 kfree(priv); 178 178 out_free_opp: 179 - dev_pm_opp_cpumask_remove_table(policy->cpus); 179 + dev_pm_opp_remove_all_dynamic(cpu_dev); 180 180 181 181 return ret; 182 182 } ··· 188 188 cpufreq_cooling_unregister(priv->cdev); 189 189 dev_pm_opp_free_cpufreq_table(priv->cpu_dev, &policy->freq_table); 190 190 kfree(priv); 191 - dev_pm_opp_cpumask_remove_table(policy->related_cpus); 191 + dev_pm_opp_remove_all_dynamic(priv->cpu_dev); 192 192 193 193 return 0; 194 194 }
+2 -2
drivers/cpufreq/scpi-cpufreq.c
··· 177 177 out_free_priv: 178 178 kfree(priv); 179 179 out_free_opp: 180 - dev_pm_opp_cpumask_remove_table(policy->cpus); 180 + dev_pm_opp_remove_all_dynamic(cpu_dev); 181 181 182 182 return ret; 183 183 } ··· 190 190 clk_put(priv->clk); 191 191 dev_pm_opp_free_cpufreq_table(priv->cpu_dev, &policy->freq_table); 192 192 kfree(priv); 193 - dev_pm_opp_cpumask_remove_table(policy->related_cpus); 193 + dev_pm_opp_remove_all_dynamic(priv->cpu_dev); 194 194 195 195 return 0; 196 196 }
+3 -3
drivers/crypto/amcc/crypto4xx_core.c
··· 283 283 */ 284 284 static u32 crypto4xx_build_gdr(struct crypto4xx_device *dev) 285 285 { 286 - dev->gdr = dma_zalloc_coherent(dev->core_dev->device, 287 - sizeof(struct ce_gd) * PPC4XX_NUM_GD, 288 - &dev->gdr_pa, GFP_ATOMIC); 286 + dev->gdr = dma_alloc_coherent(dev->core_dev->device, 287 + sizeof(struct ce_gd) * PPC4XX_NUM_GD, 288 + &dev->gdr_pa, GFP_ATOMIC); 289 289 if (!dev->gdr) 290 290 return -ENOMEM; 291 291
+2 -2
drivers/crypto/cavium/cpt/cptpf_main.c
··· 278 278 mcode->num_cores = is_ae ? 6 : 10; 279 279 280 280 /* Allocate DMAable space */ 281 - mcode->code = dma_zalloc_coherent(&cpt->pdev->dev, mcode->code_size, 282 - &mcode->phys_base, GFP_KERNEL); 281 + mcode->code = dma_alloc_coherent(&cpt->pdev->dev, mcode->code_size, 282 + &mcode->phys_base, GFP_KERNEL); 283 283 if (!mcode->code) { 284 284 dev_err(dev, "Unable to allocate space for microcode"); 285 285 ret = -ENOMEM;
+4 -3
drivers/crypto/cavium/cpt/cptvf_main.c
··· 236 236 237 237 c_size = (rem_q_size > qcsize_bytes) ? qcsize_bytes : 238 238 rem_q_size; 239 - curr->head = (u8 *)dma_zalloc_coherent(&pdev->dev, 240 - c_size + CPT_NEXT_CHUNK_PTR_SIZE, 241 - &curr->dma_addr, GFP_KERNEL); 239 + curr->head = (u8 *)dma_alloc_coherent(&pdev->dev, 240 + c_size + CPT_NEXT_CHUNK_PTR_SIZE, 241 + &curr->dma_addr, 242 + GFP_KERNEL); 242 243 if (!curr->head) { 243 244 dev_err(&pdev->dev, "Command Q (%d) chunk (%d) allocation failed\n", 244 245 i, queue->nchunks);
+3 -3
drivers/crypto/cavium/nitrox/nitrox_lib.c
··· 25 25 struct nitrox_device *ndev = cmdq->ndev; 26 26 27 27 cmdq->qsize = (ndev->qlen * cmdq->instr_size) + align_bytes; 28 - cmdq->unalign_base = dma_zalloc_coherent(DEV(ndev), cmdq->qsize, 29 - &cmdq->unalign_dma, 30 - GFP_KERNEL); 28 + cmdq->unalign_base = dma_alloc_coherent(DEV(ndev), cmdq->qsize, 29 + &cmdq->unalign_dma, 30 + GFP_KERNEL); 31 31 if (!cmdq->unalign_base) 32 32 return -ENOMEM; 33 33
+3 -3
drivers/crypto/ccp/ccp-dev-v5.c
··· 822 822 /* Page alignment satisfies our needs for N <= 128 */ 823 823 BUILD_BUG_ON(COMMANDS_PER_QUEUE > 128); 824 824 cmd_q->qsize = Q_SIZE(Q_DESC_SIZE); 825 - cmd_q->qbase = dma_zalloc_coherent(dev, cmd_q->qsize, 826 - &cmd_q->qbase_dma, 827 - GFP_KERNEL); 825 + cmd_q->qbase = dma_alloc_coherent(dev, cmd_q->qsize, 826 + &cmd_q->qbase_dma, 827 + GFP_KERNEL); 828 828 if (!cmd_q->qbase) { 829 829 dev_err(dev, "unable to allocate command queue\n"); 830 830 ret = -ENOMEM;
+2 -2
drivers/crypto/hisilicon/sec/sec_algs.c
··· 241 241 memset(ctx->key, 0, SEC_MAX_CIPHER_KEY); 242 242 } else { 243 243 /* new key */ 244 - ctx->key = dma_zalloc_coherent(dev, SEC_MAX_CIPHER_KEY, 245 - &ctx->pkey, GFP_KERNEL); 244 + ctx->key = dma_alloc_coherent(dev, SEC_MAX_CIPHER_KEY, 245 + &ctx->pkey, GFP_KERNEL); 246 246 if (!ctx->key) { 247 247 mutex_unlock(&ctx->lock); 248 248 return -ENOMEM;
+6 -9
drivers/crypto/hisilicon/sec/sec_drv.c
··· 1082 1082 struct sec_queue_ring_db *ring_db = &queue->ring_db; 1083 1083 int ret; 1084 1084 1085 - ring_cmd->vaddr = dma_zalloc_coherent(dev, SEC_Q_CMD_SIZE, 1086 - &ring_cmd->paddr, 1087 - GFP_KERNEL); 1085 + ring_cmd->vaddr = dma_alloc_coherent(dev, SEC_Q_CMD_SIZE, 1086 + &ring_cmd->paddr, GFP_KERNEL); 1088 1087 if (!ring_cmd->vaddr) 1089 1088 return -ENOMEM; 1090 1089 ··· 1091 1092 mutex_init(&ring_cmd->lock); 1092 1093 ring_cmd->callback = sec_alg_callback; 1093 1094 1094 - ring_cq->vaddr = dma_zalloc_coherent(dev, SEC_Q_CQ_SIZE, 1095 - &ring_cq->paddr, 1096 - GFP_KERNEL); 1095 + ring_cq->vaddr = dma_alloc_coherent(dev, SEC_Q_CQ_SIZE, 1096 + &ring_cq->paddr, GFP_KERNEL); 1097 1097 if (!ring_cq->vaddr) { 1098 1098 ret = -ENOMEM; 1099 1099 goto err_free_ring_cmd; 1100 1100 } 1101 1101 1102 - ring_db->vaddr = dma_zalloc_coherent(dev, SEC_Q_DB_SIZE, 1103 - &ring_db->paddr, 1104 - GFP_KERNEL); 1102 + ring_db->vaddr = dma_alloc_coherent(dev, SEC_Q_DB_SIZE, 1103 + &ring_db->paddr, GFP_KERNEL); 1105 1104 if (!ring_db->vaddr) { 1106 1105 ret = -ENOMEM; 1107 1106 goto err_free_ring_cq;
+3 -3
drivers/crypto/ixp4xx_crypto.c
··· 260 260 { 261 261 struct device *dev = &pdev->dev; 262 262 BUILD_BUG_ON(sizeof(struct crypt_ctl) != 64); 263 - crypt_virt = dma_zalloc_coherent(dev, 264 - NPE_QLEN * sizeof(struct crypt_ctl), 265 - &crypt_phys, GFP_ATOMIC); 263 + crypt_virt = dma_alloc_coherent(dev, 264 + NPE_QLEN * sizeof(struct crypt_ctl), 265 + &crypt_phys, GFP_ATOMIC); 266 266 if (!crypt_virt) 267 267 return -ENOMEM; 268 268 return 0;
+8 -8
drivers/crypto/mediatek/mtk-platform.c
··· 453 453 if (!ring[i]) 454 454 goto err_cleanup; 455 455 456 - ring[i]->cmd_base = dma_zalloc_coherent(cryp->dev, 457 - MTK_DESC_RING_SZ, 458 - &ring[i]->cmd_dma, 459 - GFP_KERNEL); 456 + ring[i]->cmd_base = dma_alloc_coherent(cryp->dev, 457 + MTK_DESC_RING_SZ, 458 + &ring[i]->cmd_dma, 459 + GFP_KERNEL); 460 460 if (!ring[i]->cmd_base) 461 461 goto err_cleanup; 462 462 463 - ring[i]->res_base = dma_zalloc_coherent(cryp->dev, 464 - MTK_DESC_RING_SZ, 465 - &ring[i]->res_dma, 466 - GFP_KERNEL); 463 + ring[i]->res_base = dma_alloc_coherent(cryp->dev, 464 + MTK_DESC_RING_SZ, 465 + &ring[i]->res_dma, 466 + GFP_KERNEL); 467 467 if (!ring[i]->res_base) 468 468 goto err_cleanup; 469 469
+6 -6
drivers/crypto/qat/qat_common/adf_admin.c
··· 244 244 dev_to_node(&GET_DEV(accel_dev))); 245 245 if (!admin) 246 246 return -ENOMEM; 247 - admin->virt_addr = dma_zalloc_coherent(&GET_DEV(accel_dev), PAGE_SIZE, 248 - &admin->phy_addr, GFP_KERNEL); 247 + admin->virt_addr = dma_alloc_coherent(&GET_DEV(accel_dev), PAGE_SIZE, 248 + &admin->phy_addr, GFP_KERNEL); 249 249 if (!admin->virt_addr) { 250 250 dev_err(&GET_DEV(accel_dev), "Failed to allocate dma buff\n"); 251 251 kfree(admin); 252 252 return -ENOMEM; 253 253 } 254 254 255 - admin->virt_tbl_addr = dma_zalloc_coherent(&GET_DEV(accel_dev), 256 - PAGE_SIZE, 257 - &admin->const_tbl_addr, 258 - GFP_KERNEL); 255 + admin->virt_tbl_addr = dma_alloc_coherent(&GET_DEV(accel_dev), 256 + PAGE_SIZE, 257 + &admin->const_tbl_addr, 258 + GFP_KERNEL); 259 259 if (!admin->virt_tbl_addr) { 260 260 dev_err(&GET_DEV(accel_dev), "Failed to allocate const_tbl\n"); 261 261 dma_free_coherent(&GET_DEV(accel_dev), PAGE_SIZE,
+12 -12
drivers/crypto/qat/qat_common/qat_algs.c
··· 601 601 602 602 dev = &GET_DEV(inst->accel_dev); 603 603 ctx->inst = inst; 604 - ctx->enc_cd = dma_zalloc_coherent(dev, sizeof(*ctx->enc_cd), 605 - &ctx->enc_cd_paddr, 606 - GFP_ATOMIC); 604 + ctx->enc_cd = dma_alloc_coherent(dev, sizeof(*ctx->enc_cd), 605 + &ctx->enc_cd_paddr, 606 + GFP_ATOMIC); 607 607 if (!ctx->enc_cd) { 608 608 return -ENOMEM; 609 609 } 610 - ctx->dec_cd = dma_zalloc_coherent(dev, sizeof(*ctx->dec_cd), 611 - &ctx->dec_cd_paddr, 612 - GFP_ATOMIC); 610 + ctx->dec_cd = dma_alloc_coherent(dev, sizeof(*ctx->dec_cd), 611 + &ctx->dec_cd_paddr, 612 + GFP_ATOMIC); 613 613 if (!ctx->dec_cd) { 614 614 goto out_free_enc; 615 615 } ··· 933 933 934 934 dev = &GET_DEV(inst->accel_dev); 935 935 ctx->inst = inst; 936 - ctx->enc_cd = dma_zalloc_coherent(dev, sizeof(*ctx->enc_cd), 937 - &ctx->enc_cd_paddr, 938 - GFP_ATOMIC); 936 + ctx->enc_cd = dma_alloc_coherent(dev, sizeof(*ctx->enc_cd), 937 + &ctx->enc_cd_paddr, 938 + GFP_ATOMIC); 939 939 if (!ctx->enc_cd) { 940 940 spin_unlock(&ctx->lock); 941 941 return -ENOMEM; 942 942 } 943 - ctx->dec_cd = dma_zalloc_coherent(dev, sizeof(*ctx->dec_cd), 944 - &ctx->dec_cd_paddr, 945 - GFP_ATOMIC); 943 + ctx->dec_cd = dma_alloc_coherent(dev, sizeof(*ctx->dec_cd), 944 + &ctx->dec_cd_paddr, 945 + GFP_ATOMIC); 946 946 if (!ctx->dec_cd) { 947 947 spin_unlock(&ctx->lock); 948 948 goto out_free_enc;
+34 -34
drivers/crypto/qat/qat_common/qat_asym_algs.c
··· 332 332 } else { 333 333 int shift = ctx->p_size - req->src_len; 334 334 335 - qat_req->src_align = dma_zalloc_coherent(dev, 336 - ctx->p_size, 337 - &qat_req->in.dh.in.b, 338 - GFP_KERNEL); 335 + qat_req->src_align = dma_alloc_coherent(dev, 336 + ctx->p_size, 337 + &qat_req->in.dh.in.b, 338 + GFP_KERNEL); 339 339 if (unlikely(!qat_req->src_align)) 340 340 return ret; 341 341 ··· 360 360 goto unmap_src; 361 361 362 362 } else { 363 - qat_req->dst_align = dma_zalloc_coherent(dev, ctx->p_size, 364 - &qat_req->out.dh.r, 365 - GFP_KERNEL); 363 + qat_req->dst_align = dma_alloc_coherent(dev, ctx->p_size, 364 + &qat_req->out.dh.r, 365 + GFP_KERNEL); 366 366 if (unlikely(!qat_req->dst_align)) 367 367 goto unmap_src; 368 368 } ··· 447 447 return -EINVAL; 448 448 449 449 ctx->p_size = params->p_size; 450 - ctx->p = dma_zalloc_coherent(dev, ctx->p_size, &ctx->dma_p, GFP_KERNEL); 450 + ctx->p = dma_alloc_coherent(dev, ctx->p_size, &ctx->dma_p, GFP_KERNEL); 451 451 if (!ctx->p) 452 452 return -ENOMEM; 453 453 memcpy(ctx->p, params->p, ctx->p_size); ··· 458 458 return 0; 459 459 } 460 460 461 - ctx->g = dma_zalloc_coherent(dev, ctx->p_size, &ctx->dma_g, GFP_KERNEL); 461 + ctx->g = dma_alloc_coherent(dev, ctx->p_size, &ctx->dma_g, GFP_KERNEL); 462 462 if (!ctx->g) 463 463 return -ENOMEM; 464 464 memcpy(ctx->g + (ctx->p_size - params->g_size), params->g, ··· 503 503 if (ret < 0) 504 504 goto err_clear_ctx; 505 505 506 - ctx->xa = dma_zalloc_coherent(dev, ctx->p_size, &ctx->dma_xa, 507 - GFP_KERNEL); 506 + ctx->xa = dma_alloc_coherent(dev, ctx->p_size, &ctx->dma_xa, 507 + GFP_KERNEL); 508 508 if (!ctx->xa) { 509 509 ret = -ENOMEM; 510 510 goto err_clear_ctx; ··· 737 737 } else { 738 738 int shift = ctx->key_sz - req->src_len; 739 739 740 - qat_req->src_align = dma_zalloc_coherent(dev, ctx->key_sz, 741 - &qat_req->in.rsa.enc.m, 742 - GFP_KERNEL); 740 + qat_req->src_align = dma_alloc_coherent(dev, ctx->key_sz, 741 + &qat_req->in.rsa.enc.m, 742 + GFP_KERNEL); 743 743 if (unlikely(!qat_req->src_align)) 744 744 return ret; 745 745 ··· 756 756 goto unmap_src; 757 757 758 758 } else { 759 - qat_req->dst_align = dma_zalloc_coherent(dev, ctx->key_sz, 760 - &qat_req->out.rsa.enc.c, 761 - GFP_KERNEL); 759 + qat_req->dst_align = dma_alloc_coherent(dev, ctx->key_sz, 760 + &qat_req->out.rsa.enc.c, 761 + GFP_KERNEL); 762 762 if (unlikely(!qat_req->dst_align)) 763 763 goto unmap_src; 764 764 ··· 881 881 } else { 882 882 int shift = ctx->key_sz - req->src_len; 883 883 884 - qat_req->src_align = dma_zalloc_coherent(dev, ctx->key_sz, 885 - &qat_req->in.rsa.dec.c, 886 - GFP_KERNEL); 884 + qat_req->src_align = dma_alloc_coherent(dev, ctx->key_sz, 885 + &qat_req->in.rsa.dec.c, 886 + GFP_KERNEL); 887 887 if (unlikely(!qat_req->src_align)) 888 888 return ret; 889 889 ··· 900 900 goto unmap_src; 901 901 902 902 } else { 903 - qat_req->dst_align = dma_zalloc_coherent(dev, ctx->key_sz, 904 - &qat_req->out.rsa.dec.m, 905 - GFP_KERNEL); 903 + qat_req->dst_align = dma_alloc_coherent(dev, ctx->key_sz, 904 + &qat_req->out.rsa.dec.m, 905 + GFP_KERNEL); 906 906 if (unlikely(!qat_req->dst_align)) 907 907 goto unmap_src; 908 908 ··· 989 989 goto err; 990 990 991 991 ret = -ENOMEM; 992 - ctx->n = dma_zalloc_coherent(dev, ctx->key_sz, &ctx->dma_n, GFP_KERNEL); 992 + ctx->n = dma_alloc_coherent(dev, ctx->key_sz, &ctx->dma_n, GFP_KERNEL); 993 993 if (!ctx->n) 994 994 goto err; 995 995 ··· 1018 1018 return -EINVAL; 1019 1019 } 1020 1020 1021 - ctx->e = dma_zalloc_coherent(dev, ctx->key_sz, &ctx->dma_e, GFP_KERNEL); 1021 + ctx->e = dma_alloc_coherent(dev, ctx->key_sz, &ctx->dma_e, GFP_KERNEL); 1022 1022 if (!ctx->e) 1023 1023 return -ENOMEM; 1024 1024 ··· 1044 1044 goto err; 1045 1045 1046 1046 ret = -ENOMEM; 1047 - ctx->d = dma_zalloc_coherent(dev, ctx->key_sz, &ctx->dma_d, GFP_KERNEL); 1047 + ctx->d = dma_alloc_coherent(dev, ctx->key_sz, &ctx->dma_d, GFP_KERNEL); 1048 1048 if (!ctx->d) 1049 1049 goto err; 1050 1050 ··· 1077 1077 qat_rsa_drop_leading_zeros(&ptr, &len); 1078 1078 if (!len) 1079 1079 goto err; 1080 - ctx->p = dma_zalloc_coherent(dev, half_key_sz, &ctx->dma_p, GFP_KERNEL); 1080 + ctx->p = dma_alloc_coherent(dev, half_key_sz, &ctx->dma_p, GFP_KERNEL); 1081 1081 if (!ctx->p) 1082 1082 goto err; 1083 1083 memcpy(ctx->p + (half_key_sz - len), ptr, len); ··· 1088 1088 qat_rsa_drop_leading_zeros(&ptr, &len); 1089 1089 if (!len) 1090 1090 goto free_p; 1091 - ctx->q = dma_zalloc_coherent(dev, half_key_sz, &ctx->dma_q, GFP_KERNEL); 1091 + ctx->q = dma_alloc_coherent(dev, half_key_sz, &ctx->dma_q, GFP_KERNEL); 1092 1092 if (!ctx->q) 1093 1093 goto free_p; 1094 1094 memcpy(ctx->q + (half_key_sz - len), ptr, len); ··· 1099 1099 qat_rsa_drop_leading_zeros(&ptr, &len); 1100 1100 if (!len) 1101 1101 goto free_q; 1102 - ctx->dp = dma_zalloc_coherent(dev, half_key_sz, &ctx->dma_dp, 1103 - GFP_KERNEL); 1102 + ctx->dp = dma_alloc_coherent(dev, half_key_sz, &ctx->dma_dp, 1103 + GFP_KERNEL); 1104 1104 if (!ctx->dp) 1105 1105 goto free_q; 1106 1106 memcpy(ctx->dp + (half_key_sz - len), ptr, len); ··· 1111 1111 qat_rsa_drop_leading_zeros(&ptr, &len); 1112 1112 if (!len) 1113 1113 goto free_dp; 1114 - ctx->dq = dma_zalloc_coherent(dev, half_key_sz, &ctx->dma_dq, 1115 - GFP_KERNEL); 1114 + ctx->dq = dma_alloc_coherent(dev, half_key_sz, &ctx->dma_dq, 1115 + GFP_KERNEL); 1116 1116 if (!ctx->dq) 1117 1117 goto free_dp; 1118 1118 memcpy(ctx->dq + (half_key_sz - len), ptr, len); ··· 1123 1123 qat_rsa_drop_leading_zeros(&ptr, &len); 1124 1124 if (!len) 1125 1125 goto free_dq; 1126 - ctx->qinv = dma_zalloc_coherent(dev, half_key_sz, &ctx->dma_qinv, 1127 - GFP_KERNEL); 1126 + ctx->qinv = dma_alloc_coherent(dev, half_key_sz, &ctx->dma_qinv, 1127 + GFP_KERNEL); 1128 1128 if (!ctx->qinv) 1129 1129 goto free_dq; 1130 1130 memcpy(ctx->qinv + (half_key_sz - len), ptr, len);
+4 -4
drivers/dma/imx-sdma.c
··· 1182 1182 { 1183 1183 int ret = -EBUSY; 1184 1184 1185 - sdma->bd0 = dma_zalloc_coherent(NULL, PAGE_SIZE, &sdma->bd0_phys, 1186 - GFP_NOWAIT); 1185 + sdma->bd0 = dma_alloc_coherent(NULL, PAGE_SIZE, &sdma->bd0_phys, 1186 + GFP_NOWAIT); 1187 1187 if (!sdma->bd0) { 1188 1188 ret = -ENOMEM; 1189 1189 goto out; ··· 1205 1205 u32 bd_size = desc->num_bd * sizeof(struct sdma_buffer_descriptor); 1206 1206 int ret = 0; 1207 1207 1208 - desc->bd = dma_zalloc_coherent(NULL, bd_size, &desc->bd_phys, 1209 - GFP_NOWAIT); 1208 + desc->bd = dma_alloc_coherent(NULL, bd_size, &desc->bd_phys, 1209 + GFP_NOWAIT); 1210 1210 if (!desc->bd) { 1211 1211 ret = -ENOMEM; 1212 1212 goto out;
+2 -2
drivers/dma/mediatek/mtk-hsdma.c
··· 325 325 * and [MTK_DMA_SIZE ... 2 * MTK_DMA_SIZE - 1] is for RX ring. 326 326 */ 327 327 pc->sz_ring = 2 * MTK_DMA_SIZE * sizeof(*ring->txd); 328 - ring->txd = dma_zalloc_coherent(hsdma2dev(hsdma), pc->sz_ring, 329 - &ring->tphys, GFP_NOWAIT); 328 + ring->txd = dma_alloc_coherent(hsdma2dev(hsdma), pc->sz_ring, 329 + &ring->tphys, GFP_NOWAIT); 330 330 if (!ring->txd) 331 331 return -ENOMEM; 332 332
+3 -3
drivers/dma/mxs-dma.c
··· 416 416 struct mxs_dma_engine *mxs_dma = mxs_chan->mxs_dma; 417 417 int ret; 418 418 419 - mxs_chan->ccw = dma_zalloc_coherent(mxs_dma->dma_device.dev, 420 - CCW_BLOCK_SIZE, 421 - &mxs_chan->ccw_phys, GFP_KERNEL); 419 + mxs_chan->ccw = dma_alloc_coherent(mxs_dma->dma_device.dev, 420 + CCW_BLOCK_SIZE, 421 + &mxs_chan->ccw_phys, GFP_KERNEL); 422 422 if (!mxs_chan->ccw) { 423 423 ret = -ENOMEM; 424 424 goto err_alloc;
+2 -2
drivers/dma/xgene-dma.c
··· 1208 1208 ring->size = ret; 1209 1209 1210 1210 /* Allocate memory for DMA ring descriptor */ 1211 - ring->desc_vaddr = dma_zalloc_coherent(chan->dev, ring->size, 1212 - &ring->desc_paddr, GFP_KERNEL); 1211 + ring->desc_vaddr = dma_alloc_coherent(chan->dev, ring->size, 1212 + &ring->desc_paddr, GFP_KERNEL); 1213 1213 if (!ring->desc_vaddr) { 1214 1214 chan_err(chan, "Failed to allocate ring desc\n"); 1215 1215 return -ENOMEM;
+7 -7
drivers/dma/xilinx/xilinx_dma.c
··· 879 879 */ 880 880 if (chan->xdev->dma_config->dmatype == XDMA_TYPE_AXIDMA) { 881 881 /* Allocate the buffer descriptors. */ 882 - chan->seg_v = dma_zalloc_coherent(chan->dev, 883 - sizeof(*chan->seg_v) * 884 - XILINX_DMA_NUM_DESCS, 885 - &chan->seg_p, GFP_KERNEL); 882 + chan->seg_v = dma_alloc_coherent(chan->dev, 883 + sizeof(*chan->seg_v) * XILINX_DMA_NUM_DESCS, 884 + &chan->seg_p, GFP_KERNEL); 886 885 if (!chan->seg_v) { 887 886 dev_err(chan->dev, 888 887 "unable to allocate channel %d descriptors\n", ··· 894 895 * so allocating a desc segment during channel allocation for 895 896 * programming tail descriptor. 896 897 */ 897 - chan->cyclic_seg_v = dma_zalloc_coherent(chan->dev, 898 - sizeof(*chan->cyclic_seg_v), 899 - &chan->cyclic_seg_p, GFP_KERNEL); 898 + chan->cyclic_seg_v = dma_alloc_coherent(chan->dev, 899 + sizeof(*chan->cyclic_seg_v), 900 + &chan->cyclic_seg_p, 901 + GFP_KERNEL); 900 902 if (!chan->cyclic_seg_v) { 901 903 dev_err(chan->dev, 902 904 "unable to allocate desc segment for cyclic DMA\n");
+3 -3
drivers/dma/xilinx/zynqmp_dma.c
··· 490 490 list_add_tail(&desc->node, &chan->free_list); 491 491 } 492 492 493 - chan->desc_pool_v = dma_zalloc_coherent(chan->dev, 494 - (2 * chan->desc_size * ZYNQMP_DMA_NUM_DESCS), 495 - &chan->desc_pool_p, GFP_KERNEL); 493 + chan->desc_pool_v = dma_alloc_coherent(chan->dev, 494 + (2 * chan->desc_size * ZYNQMP_DMA_NUM_DESCS), 495 + &chan->desc_pool_p, GFP_KERNEL); 496 496 if (!chan->desc_pool_v) 497 497 return -ENOMEM; 498 498
+6 -6
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
··· 1701 1701 amdgpu_xgmi_add_device(adev); 1702 1702 amdgpu_amdkfd_device_init(adev); 1703 1703 1704 - if (amdgpu_sriov_vf(adev)) 1704 + if (amdgpu_sriov_vf(adev)) { 1705 + amdgpu_virt_init_data_exchange(adev); 1705 1706 amdgpu_virt_release_full_gpu(adev, true); 1707 + } 1706 1708 1707 1709 return 0; 1708 1710 } ··· 2634 2632 goto failed; 2635 2633 } 2636 2634 2637 - if (amdgpu_sriov_vf(adev)) 2638 - amdgpu_virt_init_data_exchange(adev); 2639 - 2640 2635 amdgpu_fbdev_init(adev); 2641 2636 2642 2637 r = amdgpu_pm_sysfs_init(adev); ··· 2797 2798 struct drm_framebuffer *fb = crtc->primary->fb; 2798 2799 struct amdgpu_bo *robj; 2799 2800 2800 - if (amdgpu_crtc->cursor_bo) { 2801 + if (amdgpu_crtc->cursor_bo && !adev->enable_virtual_display) { 2801 2802 struct amdgpu_bo *aobj = gem_to_amdgpu_bo(amdgpu_crtc->cursor_bo); 2802 2803 r = amdgpu_bo_reserve(aobj, true); 2803 2804 if (r == 0) { ··· 2905 2906 list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) { 2906 2907 struct amdgpu_crtc *amdgpu_crtc = to_amdgpu_crtc(crtc); 2907 2908 2908 - if (amdgpu_crtc->cursor_bo) { 2909 + if (amdgpu_crtc->cursor_bo && !adev->enable_virtual_display) { 2909 2910 struct amdgpu_bo *aobj = gem_to_amdgpu_bo(amdgpu_crtc->cursor_bo); 2910 2911 r = amdgpu_bo_reserve(aobj, true); 2911 2912 if (r == 0) { ··· 3225 3226 r = amdgpu_ib_ring_tests(adev); 3226 3227 3227 3228 error: 3229 + amdgpu_virt_init_data_exchange(adev); 3228 3230 amdgpu_virt_release_full_gpu(adev, true); 3229 3231 if (!r && adev->virt.gim_feature & AMDGIM_FEATURE_GIM_FLR_VRAMLOST) { 3230 3232 atomic_inc(&adev->vram_lost_counter);
+30 -8
drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
··· 188 188 goto cleanup; 189 189 } 190 190 191 - r = amdgpu_bo_pin(new_abo, amdgpu_display_supported_domains(adev)); 192 - if (unlikely(r != 0)) { 193 - DRM_ERROR("failed to pin new abo buffer before flip\n"); 194 - goto unreserve; 191 + if (!adev->enable_virtual_display) { 192 + r = amdgpu_bo_pin(new_abo, amdgpu_display_supported_domains(adev)); 193 + if (unlikely(r != 0)) { 194 + DRM_ERROR("failed to pin new abo buffer before flip\n"); 195 + goto unreserve; 196 + } 195 197 } 196 198 197 199 r = amdgpu_ttm_alloc_gart(&new_abo->tbo); ··· 213 211 amdgpu_bo_get_tiling_flags(new_abo, &tiling_flags); 214 212 amdgpu_bo_unreserve(new_abo); 215 213 216 - work->base = amdgpu_bo_gpu_offset(new_abo); 214 + if (!adev->enable_virtual_display) 215 + work->base = amdgpu_bo_gpu_offset(new_abo); 217 216 work->target_vblank = target - (uint32_t)drm_crtc_vblank_count(crtc) + 218 217 amdgpu_get_vblank_counter_kms(dev, work->crtc_id); 219 218 ··· 245 242 goto cleanup; 246 243 } 247 244 unpin: 248 - if (unlikely(amdgpu_bo_unpin(new_abo) != 0)) { 249 - DRM_ERROR("failed to unpin new abo in error path\n"); 250 - } 245 + if (!adev->enable_virtual_display) 246 + if (unlikely(amdgpu_bo_unpin(new_abo) != 0)) 247 + DRM_ERROR("failed to unpin new abo in error path\n"); 248 + 251 249 unreserve: 252 250 amdgpu_bo_unreserve(new_abo); 253 251 ··· 531 527 struct drm_gem_object *obj; 532 528 struct amdgpu_framebuffer *amdgpu_fb; 533 529 int ret; 530 + int height; 531 + struct amdgpu_device *adev = dev->dev_private; 532 + int cpp = drm_format_plane_cpp(mode_cmd->pixel_format, 0); 533 + int pitch = mode_cmd->pitches[0] / cpp; 534 + 535 + pitch = amdgpu_align_pitch(adev, pitch, cpp, false); 536 + if (mode_cmd->pitches[0] != pitch) { 537 + DRM_DEBUG_KMS("Invalid pitch: expecting %d but got %d\n", 538 + pitch, mode_cmd->pitches[0]); 539 + return ERR_PTR(-EINVAL); 540 + } 534 541 535 542 obj = drm_gem_object_lookup(file_priv, mode_cmd->handles[0]); 536 543 if (obj == NULL) { ··· 553 538 /* Handle is imported dma-buf, so cannot be migrated to VRAM for scanout */ 554 539 if (obj->import_attach) { 555 540 DRM_DEBUG_KMS("Cannot create framebuffer from imported dma_buf\n"); 541 + return ERR_PTR(-EINVAL); 542 + } 543 + 544 + height = ALIGN(mode_cmd->height, 8); 545 + if (obj->size < pitch * height) { 546 + DRM_DEBUG_KMS("Invalid GEM size: expecting >= %d but got %zu\n", 547 + pitch * height, obj->size); 556 548 return ERR_PTR(-EINVAL); 557 549 } 558 550
+14 -8
drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
··· 2008 2008 2009 2009 int amdgpu_pm_sysfs_init(struct amdgpu_device *adev) 2010 2010 { 2011 + struct pp_hwmgr *hwmgr = adev->powerplay.pp_handle; 2011 2012 int ret; 2012 2013 2013 2014 if (adev->pm.sysfs_initialized) ··· 2092 2091 "pp_power_profile_mode\n"); 2093 2092 return ret; 2094 2093 } 2095 - ret = device_create_file(adev->dev, 2096 - &dev_attr_pp_od_clk_voltage); 2097 - if (ret) { 2098 - DRM_ERROR("failed to create device file " 2099 - "pp_od_clk_voltage\n"); 2100 - return ret; 2094 + if (hwmgr->od_enabled) { 2095 + ret = device_create_file(adev->dev, 2096 + &dev_attr_pp_od_clk_voltage); 2097 + if (ret) { 2098 + DRM_ERROR("failed to create device file " 2099 + "pp_od_clk_voltage\n"); 2100 + return ret; 2101 + } 2101 2102 } 2102 2103 ret = device_create_file(adev->dev, 2103 2104 &dev_attr_gpu_busy_percent); ··· 2121 2118 2122 2119 void amdgpu_pm_sysfs_fini(struct amdgpu_device *adev) 2123 2120 { 2121 + struct pp_hwmgr *hwmgr = adev->powerplay.pp_handle; 2122 + 2124 2123 if (adev->pm.dpm_enabled == 0) 2125 2124 return; 2126 2125 ··· 2143 2138 device_remove_file(adev->dev, &dev_attr_pp_mclk_od); 2144 2139 device_remove_file(adev->dev, 2145 2140 &dev_attr_pp_power_profile_mode); 2146 - device_remove_file(adev->dev, 2147 - &dev_attr_pp_od_clk_voltage); 2141 + if (hwmgr->od_enabled) 2142 + device_remove_file(adev->dev, 2143 + &dev_attr_pp_od_clk_voltage); 2148 2144 device_remove_file(adev->dev, &dev_attr_gpu_busy_percent); 2149 2145 } 2150 2146
-3
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
··· 847 847 bp->size = amdgpu_vm_bo_size(adev, level); 848 848 bp->byte_align = AMDGPU_GPU_PAGE_SIZE; 849 849 bp->domain = AMDGPU_GEM_DOMAIN_VRAM; 850 - if (bp->size <= PAGE_SIZE && adev->asic_type >= CHIP_VEGA10 && 851 - adev->flags & AMD_IS_APU) 852 - bp->domain |= AMDGPU_GEM_DOMAIN_GTT; 853 850 bp->domain = amdgpu_bo_get_preferred_pin_domain(adev, bp->domain); 854 851 bp->flags = AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS | 855 852 AMDGPU_GEM_CREATE_CPU_GTT_USWC;
+3 -14
drivers/gpu/drm/amd/amdgpu/dce_virtual.c
··· 167 167 struct amdgpu_crtc *amdgpu_crtc = to_amdgpu_crtc(crtc); 168 168 169 169 dce_virtual_crtc_dpms(crtc, DRM_MODE_DPMS_OFF); 170 - if (crtc->primary->fb) { 171 - int r; 172 - struct amdgpu_bo *abo; 173 - 174 - abo = gem_to_amdgpu_bo(crtc->primary->fb->obj[0]); 175 - r = amdgpu_bo_reserve(abo, true); 176 - if (unlikely(r)) 177 - DRM_ERROR("failed to reserve abo before unpin\n"); 178 - else { 179 - amdgpu_bo_unpin(abo); 180 - amdgpu_bo_unreserve(abo); 181 - } 182 - } 183 170 184 171 amdgpu_crtc->pll_id = ATOM_PPLL_INVALID; 185 172 amdgpu_crtc->encoder = NULL; ··· 679 692 spin_unlock_irqrestore(&adev->ddev->event_lock, flags); 680 693 681 694 drm_crtc_vblank_put(&amdgpu_crtc->base); 682 - schedule_work(&works->unpin_work); 695 + amdgpu_bo_unref(&works->old_abo); 696 + kfree(works->shared); 697 + kfree(works); 683 698 684 699 return 0; 685 700 }
+35 -15
drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
··· 4233 4233 u32 tmp; 4234 4234 u32 rb_bufsz; 4235 4235 u64 rb_addr, rptr_addr, wptr_gpu_addr; 4236 - int r; 4237 4236 4238 4237 /* Set the write pointer delay */ 4239 4238 WREG32(mmCP_RB_WPTR_DELAY, 0); ··· 4277 4278 amdgpu_ring_clear_ring(ring); 4278 4279 gfx_v8_0_cp_gfx_start(adev); 4279 4280 ring->sched.ready = true; 4280 - r = amdgpu_ring_test_helper(ring); 4281 4281 4282 - return r; 4282 + return 0; 4283 4283 } 4284 4284 4285 4285 static void gfx_v8_0_cp_compute_enable(struct amdgpu_device *adev, bool enable) ··· 4367 4369 amdgpu_ring_write(kiq_ring, upper_32_bits(wptr_addr)); 4368 4370 } 4369 4371 4370 - r = amdgpu_ring_test_helper(kiq_ring); 4371 - if (r) 4372 - DRM_ERROR("KCQ enable failed\n"); 4373 - return r; 4372 + amdgpu_ring_commit(kiq_ring); 4373 + 4374 + return 0; 4374 4375 } 4375 4376 4376 4377 static int gfx_v8_0_deactivate_hqd(struct amdgpu_device *adev, u32 req) ··· 4706 4709 if (r) 4707 4710 goto done; 4708 4711 4709 - /* Test KCQs - reversing the order of rings seems to fix ring test failure 4710 - * after GPU reset 4711 - */ 4712 - for (i = adev->gfx.num_compute_rings - 1; i >= 0; i--) { 4713 - ring = &adev->gfx.compute_ring[i]; 4714 - r = amdgpu_ring_test_helper(ring); 4715 - } 4716 - 4717 4712 done: 4718 4713 return r; 4714 + } 4715 + 4716 + static int gfx_v8_0_cp_test_all_rings(struct amdgpu_device *adev) 4717 + { 4718 + int r, i; 4719 + struct amdgpu_ring *ring; 4720 + 4721 + /* collect all the ring_tests here, gfx, kiq, compute */ 4722 + ring = &adev->gfx.gfx_ring[0]; 4723 + r = amdgpu_ring_test_helper(ring); 4724 + if (r) 4725 + return r; 4726 + 4727 + ring = &adev->gfx.kiq.ring; 4728 + r = amdgpu_ring_test_helper(ring); 4729 + if (r) 4730 + return r; 4731 + 4732 + for (i = 0; i < adev->gfx.num_compute_rings; i++) { 4733 + ring = &adev->gfx.compute_ring[i]; 4734 + amdgpu_ring_test_helper(ring); 4735 + } 4736 + 4737 + return 0; 4719 4738 } 4720 4739 4721 4740 static int gfx_v8_0_cp_resume(struct amdgpu_device *adev) ··· 4752 4739 r = gfx_v8_0_kcq_resume(adev); 4753 4740 if (r) 4754 4741 return r; 4742 + 4743 + r = gfx_v8_0_cp_test_all_rings(adev); 4744 + if (r) 4745 + return r; 4746 + 4755 4747 gfx_v8_0_enable_gui_idle_interrupt(adev, true); 4756 4748 4757 4749 return 0; ··· 5103 5085 if (REG_GET_FIELD(grbm_soft_reset, GRBM_SOFT_RESET, SOFT_RESET_CP) || 5104 5086 REG_GET_FIELD(grbm_soft_reset, GRBM_SOFT_RESET, SOFT_RESET_GFX)) 5105 5087 gfx_v8_0_cp_gfx_resume(adev); 5088 + 5089 + gfx_v8_0_cp_test_all_rings(adev); 5106 5090 5107 5091 adev->gfx.rlc.funcs->start(adev); 5108 5092
+9 -5
drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
··· 113 113 SOC15_REG_GOLDEN_VALUE(GC, 0, mmTCP_CHAN_STEER_HI, 0xffffffff, 0x4a2c0e68), 114 114 SOC15_REG_GOLDEN_VALUE(GC, 0, mmTCP_CHAN_STEER_LO, 0xffffffff, 0xb5d3f197), 115 115 SOC15_REG_GOLDEN_VALUE(GC, 0, mmVGT_CACHE_INVALIDATION, 0x3fff3af3, 0x19200000), 116 - SOC15_REG_GOLDEN_VALUE(GC, 0, mmVGT_GS_MAX_WAVE_ID, 0x00000fff, 0x000003ff) 116 + SOC15_REG_GOLDEN_VALUE(GC, 0, mmVGT_GS_MAX_WAVE_ID, 0x00000fff, 0x000003ff), 117 + SOC15_REG_GOLDEN_VALUE(GC, 0, mmCP_MEC1_F32_INT_DIS, 0x00000000, 0x00000800), 118 + SOC15_REG_GOLDEN_VALUE(GC, 0, mmCP_MEC2_F32_INT_DIS, 0x00000000, 0x00000800), 119 + SOC15_REG_GOLDEN_VALUE(GC, 0, mmCP_DEBUG, 0x00000000, 0x00008000) 117 120 }; 118 121 119 122 static const struct soc15_reg_golden golden_settings_gc_9_0_vg10[] = ··· 138 135 SOC15_REG_GOLDEN_VALUE(GC, 0, mmRMI_UTCL1_CNTL2, 0x00030000, 0x00020000), 139 136 SOC15_REG_GOLDEN_VALUE(GC, 0, mmSPI_CONFIG_CNTL_1, 0x0000000f, 0x01000107), 140 137 SOC15_REG_GOLDEN_VALUE(GC, 0, mmTD_CNTL, 0x00001800, 0x00000800), 141 - SOC15_REG_GOLDEN_VALUE(GC, 0, mmWD_UTCL1_CNTL, 0x08000000, 0x08000080), 142 - SOC15_REG_GOLDEN_VALUE(GC, 0, mmCP_MEC1_F32_INT_DIS, 0x00000000, 0x00000800), 143 - SOC15_REG_GOLDEN_VALUE(GC, 0, mmCP_MEC2_F32_INT_DIS, 0x00000000, 0x00000800), 144 - SOC15_REG_GOLDEN_VALUE(GC, 0, mmCP_DEBUG, 0x00000000, 0x00008000) 138 + SOC15_REG_GOLDEN_VALUE(GC, 0, mmWD_UTCL1_CNTL, 0x08000000, 0x08000080) 145 139 }; 146 140 147 141 static const struct soc15_reg_golden golden_settings_gc_9_0_vg20[] = ··· 3587 3587 { 3588 3588 uint32_t data, def; 3589 3589 3590 + amdgpu_gfx_rlc_enter_safe_mode(adev); 3591 + 3590 3592 /* It is disabled by HW by default */ 3591 3593 if (enable && (adev->cg_flags & AMD_CG_SUPPORT_GFX_MGCG)) { 3592 3594 /* 1 - RLC_CGTT_MGCG_OVERRIDE */ ··· 3653 3651 WREG32_SOC15(GC, 0, mmCP_MEM_SLP_CNTL, data); 3654 3652 } 3655 3653 } 3654 + 3655 + amdgpu_gfx_rlc_exit_safe_mode(adev); 3656 3656 } 3657 3657 3658 3658 static void gfx_v9_0_update_3d_clock_gating(struct amdgpu_device *adev,
+1 -1
drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c
··· 174 174 return r; 175 175 } 176 176 /* Retrieve checksum from mailbox2 */ 177 - if (req == IDH_REQ_GPU_INIT_ACCESS) { 177 + if (req == IDH_REQ_GPU_INIT_ACCESS || req == IDH_REQ_GPU_RESET_ACCESS) { 178 178 adev->virt.fw_reserve.checksum_key = 179 179 RREG32_NO_KIQ(SOC15_REG_OFFSET(NBIO, 0, 180 180 mmBIF_BX_PF0_MAILBOX_MSGBUF_RCV_DW2));
+2 -1
drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
··· 78 78 SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_RLC1_RB_WPTR_POLL_CNTL, 0x0000fff0, 0x00403000), 79 79 SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_UTCL1_PAGE, 0x000003ff, 0x000003c0), 80 80 SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_UTCL1_WATERMK, 0xfc000000, 0x00000000), 81 - SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_CHICKEN_BITS, 0xfe931f07, 0x02831f07), 82 81 SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_CLK_CTRL, 0xffffffff, 0x3f000100), 83 82 SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_GFX_IB_CNTL, 0x800f0100, 0x00000100), 84 83 SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_GFX_RB_WPTR_POLL_CNTL, 0x0000fff0, 0x00403000), ··· 95 96 static const struct soc15_reg_golden golden_settings_sdma_vg10[] = { 96 97 SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_GB_ADDR_CONFIG, 0x0018773f, 0x00104002), 97 98 SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_GB_ADDR_CONFIG_READ, 0x0018773f, 0x00104002), 99 + SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_CHICKEN_BITS, 0xfe931f07, 0x02831d07), 98 100 SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_GB_ADDR_CONFIG, 0x0018773f, 0x00104002), 99 101 SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_GB_ADDR_CONFIG_READ, 0x0018773f, 0x00104002) 100 102 }; ··· 103 103 static const struct soc15_reg_golden golden_settings_sdma_vg12[] = { 104 104 SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_GB_ADDR_CONFIG, 0x0018773f, 0x00104001), 105 105 SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_GB_ADDR_CONFIG_READ, 0x0018773f, 0x00104001), 106 + SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_CHICKEN_BITS, 0xfe931f07, 0x02831d07), 106 107 SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_GB_ADDR_CONFIG, 0x0018773f, 0x00104001), 107 108 SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_GB_ADDR_CONFIG_READ, 0x0018773f, 0x00104001) 108 109 };
+25 -12
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 699 699 { 700 700 struct amdgpu_dm_connector *aconnector; 701 701 struct drm_connector *connector; 702 + struct drm_dp_mst_topology_mgr *mgr; 703 + int ret; 704 + bool need_hotplug = false; 702 705 703 706 drm_modeset_lock(&dev->mode_config.connection_mutex, NULL); 704 707 705 - list_for_each_entry(connector, &dev->mode_config.connector_list, head) { 706 - aconnector = to_amdgpu_dm_connector(connector); 707 - if (aconnector->dc_link->type == dc_connection_mst_branch && 708 - !aconnector->mst_port) { 708 + list_for_each_entry(connector, &dev->mode_config.connector_list, 709 + head) { 710 + aconnector = to_amdgpu_dm_connector(connector); 711 + if (aconnector->dc_link->type != dc_connection_mst_branch || 712 + aconnector->mst_port) 713 + continue; 709 714 710 - if (suspend) 711 - drm_dp_mst_topology_mgr_suspend(&aconnector->mst_mgr); 712 - else 713 - drm_dp_mst_topology_mgr_resume(&aconnector->mst_mgr); 714 - } 715 + mgr = &aconnector->mst_mgr; 716 + 717 + if (suspend) { 718 + drm_dp_mst_topology_mgr_suspend(mgr); 719 + } else { 720 + ret = drm_dp_mst_topology_mgr_resume(mgr); 721 + if (ret < 0) { 722 + drm_dp_mst_topology_mgr_set_mst(mgr, false); 723 + need_hotplug = true; 724 + } 725 + } 715 726 } 716 727 717 728 drm_modeset_unlock(&dev->mode_config.connection_mutex); 729 + 730 + if (need_hotplug) 731 + drm_kms_helper_hotplug_event(dev); 718 732 } 719 733 720 734 /** ··· 912 898 struct drm_plane_state *new_plane_state; 913 899 struct dm_plane_state *dm_new_plane_state; 914 900 enum dc_connection_type new_connection_type = dc_connection_none; 915 - int ret; 916 901 int i; 917 902 918 903 /* power on hardware */ ··· 984 971 } 985 972 } 986 973 987 - ret = drm_atomic_helper_resume(ddev, dm->cached_state); 974 + drm_atomic_helper_resume(ddev, dm->cached_state); 988 975 989 976 dm->cached_state = NULL; 990 977 991 978 amdgpu_dm_irq_resume_late(adev); 992 979 993 - return ret; 980 + return 0; 994 981 } 995 982 996 983 /**
+7 -6
drivers/gpu/drm/amd/include/kgd_pp_interface.h
··· 127 127 }; 128 128 129 129 enum PP_SMC_POWER_PROFILE { 130 - PP_SMC_POWER_PROFILE_FULLSCREEN3D = 0x0, 131 - PP_SMC_POWER_PROFILE_POWERSAVING = 0x1, 132 - PP_SMC_POWER_PROFILE_VIDEO = 0x2, 133 - PP_SMC_POWER_PROFILE_VR = 0x3, 134 - PP_SMC_POWER_PROFILE_COMPUTE = 0x4, 135 - PP_SMC_POWER_PROFILE_CUSTOM = 0x5, 130 + PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT = 0x0, 131 + PP_SMC_POWER_PROFILE_FULLSCREEN3D = 0x1, 132 + PP_SMC_POWER_PROFILE_POWERSAVING = 0x2, 133 + PP_SMC_POWER_PROFILE_VIDEO = 0x3, 134 + PP_SMC_POWER_PROFILE_VR = 0x4, 135 + PP_SMC_POWER_PROFILE_COMPUTE = 0x5, 136 + PP_SMC_POWER_PROFILE_CUSTOM = 0x6, 136 137 }; 137 138 138 139 enum {
+12 -10
drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c
··· 64 64 65 65 static void hwmgr_init_workload_prority(struct pp_hwmgr *hwmgr) 66 66 { 67 - hwmgr->workload_prority[PP_SMC_POWER_PROFILE_FULLSCREEN3D] = 2; 68 - hwmgr->workload_prority[PP_SMC_POWER_PROFILE_POWERSAVING] = 0; 69 - hwmgr->workload_prority[PP_SMC_POWER_PROFILE_VIDEO] = 1; 70 - hwmgr->workload_prority[PP_SMC_POWER_PROFILE_VR] = 3; 71 - hwmgr->workload_prority[PP_SMC_POWER_PROFILE_COMPUTE] = 4; 67 + hwmgr->workload_prority[PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT] = 0; 68 + hwmgr->workload_prority[PP_SMC_POWER_PROFILE_FULLSCREEN3D] = 1; 69 + hwmgr->workload_prority[PP_SMC_POWER_PROFILE_POWERSAVING] = 2; 70 + hwmgr->workload_prority[PP_SMC_POWER_PROFILE_VIDEO] = 3; 71 + hwmgr->workload_prority[PP_SMC_POWER_PROFILE_VR] = 4; 72 + hwmgr->workload_prority[PP_SMC_POWER_PROFILE_COMPUTE] = 5; 72 73 73 - hwmgr->workload_setting[0] = PP_SMC_POWER_PROFILE_POWERSAVING; 74 - hwmgr->workload_setting[1] = PP_SMC_POWER_PROFILE_VIDEO; 75 - hwmgr->workload_setting[2] = PP_SMC_POWER_PROFILE_FULLSCREEN3D; 76 - hwmgr->workload_setting[3] = PP_SMC_POWER_PROFILE_VR; 77 - hwmgr->workload_setting[4] = PP_SMC_POWER_PROFILE_COMPUTE; 74 + hwmgr->workload_setting[0] = PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT; 75 + hwmgr->workload_setting[1] = PP_SMC_POWER_PROFILE_FULLSCREEN3D; 76 + hwmgr->workload_setting[2] = PP_SMC_POWER_PROFILE_POWERSAVING; 77 + hwmgr->workload_setting[3] = PP_SMC_POWER_PROFILE_VIDEO; 78 + hwmgr->workload_setting[4] = PP_SMC_POWER_PROFILE_VR; 79 + hwmgr->workload_setting[5] = PP_SMC_POWER_PROFILE_COMPUTE; 78 80 } 79 81 80 82 int hwmgr_early_init(struct pp_hwmgr *hwmgr)
+5 -3
drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
··· 77 77 #define PCIE_BUS_CLK 10000 78 78 #define TCLK (PCIE_BUS_CLK / 10) 79 79 80 - static const struct profile_mode_setting smu7_profiling[6] = 81 - {{1, 0, 100, 30, 1, 0, 100, 10}, 80 + static const struct profile_mode_setting smu7_profiling[7] = 81 + {{0, 0, 0, 0, 0, 0, 0, 0}, 82 + {1, 0, 100, 30, 1, 0, 100, 10}, 82 83 {1, 10, 0, 30, 0, 0, 0, 0}, 83 84 {0, 0, 0, 0, 1, 10, 16, 31}, 84 85 {1, 0, 11, 50, 1, 0, 100, 10}, ··· 4890 4889 uint32_t i, size = 0; 4891 4890 uint32_t len; 4892 4891 4893 - static const char *profile_name[6] = {"3D_FULL_SCREEN", 4892 + static const char *profile_name[7] = {"BOOTUP_DEFAULT", 4893 + "3D_FULL_SCREEN", 4894 4894 "POWER_SAVING", 4895 4895 "VIDEO", 4896 4896 "VR",
+7 -5
drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
··· 804 804 805 805 hwmgr->backend = data; 806 806 807 - hwmgr->workload_mask = 1 << hwmgr->workload_prority[PP_SMC_POWER_PROFILE_VIDEO]; 808 - hwmgr->power_profile_mode = PP_SMC_POWER_PROFILE_VIDEO; 809 - hwmgr->default_power_profile_mode = PP_SMC_POWER_PROFILE_VIDEO; 807 + hwmgr->workload_mask = 1 << hwmgr->workload_prority[PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT]; 808 + hwmgr->power_profile_mode = PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT; 809 + hwmgr->default_power_profile_mode = PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT; 810 810 811 811 vega10_set_default_registry_data(hwmgr); 812 812 data->disable_dpm_mask = 0xff; ··· 4668 4668 { 4669 4669 struct vega10_hwmgr *data = hwmgr->backend; 4670 4670 uint32_t i, size = 0; 4671 - static const uint8_t profile_mode_setting[5][4] = {{70, 60, 1, 3,}, 4671 + static const uint8_t profile_mode_setting[6][4] = {{70, 60, 0, 0,}, 4672 + {70, 60, 1, 3,}, 4672 4673 {90, 60, 0, 0,}, 4673 4674 {70, 60, 0, 0,}, 4674 4675 {70, 90, 0, 0,}, 4675 4676 {30, 60, 0, 6,}, 4676 4677 }; 4677 - static const char *profile_name[6] = {"3D_FULL_SCREEN", 4678 + static const char *profile_name[7] = {"BOOTUP_DEFAULT", 4679 + "3D_FULL_SCREEN", 4678 4680 "POWER_SAVING", 4679 4681 "VIDEO", 4680 4682 "VR",
+24 -10
drivers/gpu/drm/amd/powerplay/hwmgr/vega20_hwmgr.c
··· 390 390 391 391 hwmgr->backend = data; 392 392 393 - hwmgr->workload_mask = 1 << hwmgr->workload_prority[PP_SMC_POWER_PROFILE_VIDEO]; 394 - hwmgr->power_profile_mode = PP_SMC_POWER_PROFILE_VIDEO; 395 - hwmgr->default_power_profile_mode = PP_SMC_POWER_PROFILE_VIDEO; 393 + hwmgr->workload_mask = 1 << hwmgr->workload_prority[PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT]; 394 + hwmgr->power_profile_mode = PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT; 395 + hwmgr->default_power_profile_mode = PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT; 396 396 397 397 vega20_set_default_registry_data(hwmgr); 398 398 ··· 979 979 if (pptable_information->od_feature_capabilities[ATOM_VEGA20_ODFEATURE_FAN_ZERO_RPM_CONTROL] && 980 980 pp_table->FanZeroRpmEnable) 981 981 od_settings->overdrive8_capabilities |= OD8_FAN_ZERO_RPM_CONTROL; 982 + 983 + if (!od_settings->overdrive8_capabilities) 984 + hwmgr->od_enabled = false; 982 985 983 986 return 0; 984 987 } ··· 1692 1689 (PPCLK_UCLK << 16) | (min_freq & 0xffff))), 1693 1690 "Failed to set soft min memclk !", 1694 1691 return ret); 1695 - 1696 - min_freq = data->dpm_table.mem_table.dpm_state.hard_min_level; 1697 - PP_ASSERT_WITH_CODE(!(ret = smum_send_msg_to_smc_with_parameter( 1698 - hwmgr, PPSMC_MSG_SetHardMinByFreq, 1699 - (PPCLK_UCLK << 16) | (min_freq & 0xffff))), 1700 - "Failed to set hard min memclk !", 1701 - return ret); 1702 1692 } 1703 1693 1704 1694 if (data->smu_features[GNLD_DPM_UVD].enabled && ··· 2244 2248 soft_min_level = mask ? (ffs(mask) - 1) : 0; 2245 2249 soft_max_level = mask ? 
(fls(mask) - 1) : 0; 2246 2250 2251 + if (soft_max_level >= data->dpm_table.gfx_table.count) { 2252 + pr_err("Clock level specified %d is over max allowed %d\n", 2253 + soft_max_level, 2254 + data->dpm_table.gfx_table.count - 1); 2255 + return -EINVAL; 2256 + } 2257 + 2247 2258 data->dpm_table.gfx_table.dpm_state.soft_min_level = 2248 2259 data->dpm_table.gfx_table.dpm_levels[soft_min_level].value; 2249 2260 data->dpm_table.gfx_table.dpm_state.soft_max_level = ··· 2270 2267 case PP_MCLK: 2271 2268 soft_min_level = mask ? (ffs(mask) - 1) : 0; 2272 2269 soft_max_level = mask ? (fls(mask) - 1) : 0; 2270 + 2271 + if (soft_max_level >= data->dpm_table.mem_table.count) { 2272 + pr_err("Clock level specified %d is over max allowed %d\n", 2273 + soft_max_level, 2274 + data->dpm_table.mem_table.count - 1); 2275 + return -EINVAL; 2276 + } 2273 2277 2274 2278 data->dpm_table.mem_table.dpm_state.soft_min_level = 2275 2279 data->dpm_table.mem_table.dpm_levels[soft_min_level].value; ··· 3271 3261 int pplib_workload = 0; 3272 3262 3273 3263 switch (power_profile) { 3264 + case PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT: 3265 + pplib_workload = WORKLOAD_DEFAULT_BIT; 3266 + break; 3274 3267 case PP_SMC_POWER_PROFILE_FULLSCREEN3D: 3275 3268 pplib_workload = WORKLOAD_PPLIB_FULL_SCREEN_3D_BIT; 3276 3269 break; ··· 3303 3290 uint32_t i, size = 0; 3304 3291 uint16_t workload_type = 0; 3305 3292 static const char *profile_name[] = { 3293 + "BOOTUP_DEFAULT", 3306 3294 "3D_FULL_SCREEN", 3307 3295 "POWER_SAVING", 3308 3296 "VIDEO",
+1 -1
drivers/gpu/drm/amd/powerplay/inc/hwmgr.h
··· 705 705 /** 706 706 * The main hardware manager structure. 707 707 */ 708 - #define Workload_Policy_Max 5 708 + #define Workload_Policy_Max 6 709 709 710 710 struct pp_hwmgr { 711 711 void *adev;
+38 -10
drivers/gpu/drm/bridge/tc358767.c
··· 98 98 #define DP0_STARTVAL 0x064c 99 99 #define DP0_ACTIVEVAL 0x0650 100 100 #define DP0_SYNCVAL 0x0654 101 + #define SYNCVAL_HS_POL_ACTIVE_LOW (1 << 15) 102 + #define SYNCVAL_VS_POL_ACTIVE_LOW (1 << 31) 101 103 #define DP0_MISC 0x0658 102 104 #define TU_SIZE_RECOMMENDED (63) /* LSCLK cycles per TU */ 103 105 #define BPC_6 (0 << 5) ··· 144 142 #define DP0_LTLOOPCTRL 0x06d8 145 143 #define DP0_SNKLTCTRL 0x06e4 146 144 145 + #define DP1_SRCCTRL 0x07a0 146 + 147 147 /* PHY */ 148 148 #define DP_PHY_CTRL 0x0800 149 149 #define DP_PHY_RST BIT(28) /* DP PHY Global Soft Reset */ ··· 154 150 #define PHY_M1_RST BIT(12) /* Reset PHY1 Main Channel */ 155 151 #define PHY_RDY BIT(16) /* PHY Main Channels Ready */ 156 152 #define PHY_M0_RST BIT(8) /* Reset PHY0 Main Channel */ 153 + #define PHY_2LANE BIT(2) /* PHY Enable 2 lanes */ 157 154 #define PHY_A0_EN BIT(1) /* PHY Aux Channel0 Enable */ 158 155 #define PHY_M0_EN BIT(0) /* PHY Main Channel0 Enable */ 159 156 ··· 545 540 unsigned long rate; 546 541 u32 value; 547 542 int ret; 543 + u32 dp_phy_ctrl; 548 544 549 545 rate = clk_get_rate(tc->refclk); 550 546 switch (rate) { ··· 570 564 value |= SYSCLK_SEL_LSCLK | LSCLK_DIV_2; 571 565 tc_write(SYS_PLLPARAM, value); 572 566 573 - tc_write(DP_PHY_CTRL, BGREN | PWR_SW_EN | BIT(2) | PHY_A0_EN); 567 + dp_phy_ctrl = BGREN | PWR_SW_EN | PHY_A0_EN; 568 + if (tc->link.base.num_lanes == 2) 569 + dp_phy_ctrl |= PHY_2LANE; 570 + tc_write(DP_PHY_CTRL, dp_phy_ctrl); 574 571 575 572 /* 576 573 * Initially PLLs are in bypass. Force PLL parameter update, ··· 728 719 729 720 tc_write(DP0_ACTIVEVAL, (mode->vdisplay << 16) | (mode->hdisplay)); 730 721 731 - tc_write(DP0_SYNCVAL, (vsync_len << 16) | (hsync_len << 0)); 722 + tc_write(DP0_SYNCVAL, (vsync_len << 16) | (hsync_len << 0) | 723 + ((mode->flags & DRM_MODE_FLAG_NHSYNC) ? SYNCVAL_HS_POL_ACTIVE_LOW : 0) | 724 + ((mode->flags & DRM_MODE_FLAG_NVSYNC) ? 
SYNCVAL_VS_POL_ACTIVE_LOW : 0)); 732 725 733 726 tc_write(DPIPXLFMT, VS_POL_ACTIVE_LOW | HS_POL_ACTIVE_LOW | 734 727 DE_POL_ACTIVE_HIGH | SUB_CFG_TYPE_CONFIG1 | DPI_BPP_RGB888); ··· 840 829 if (!tc->mode) 841 830 return -EINVAL; 842 831 843 - /* from excel file - DP0_SrcCtrl */ 844 - tc_write(DP0_SRCCTRL, DP0_SRCCTRL_SCRMBLDIS | DP0_SRCCTRL_EN810B | 845 - DP0_SRCCTRL_LANESKEW | DP0_SRCCTRL_LANES_2 | 846 - DP0_SRCCTRL_BW27 | DP0_SRCCTRL_AUTOCORRECT); 847 - /* from excel file - DP1_SrcCtrl */ 848 - tc_write(0x07a0, 0x00003083); 832 + tc_write(DP0_SRCCTRL, tc_srcctrl(tc)); 833 + /* SSCG and BW27 on DP1 must be set to the same as on DP0 */ 834 + tc_write(DP1_SRCCTRL, 835 + (tc->link.spread ? DP0_SRCCTRL_SSCG : 0) | 836 + ((tc->link.base.rate != 162000) ? DP0_SRCCTRL_BW27 : 0)); 849 837 850 838 rate = clk_get_rate(tc->refclk); 851 839 switch (rate) { ··· 865 855 } 866 856 value |= SYSCLK_SEL_LSCLK | LSCLK_DIV_2; 867 857 tc_write(SYS_PLLPARAM, value); 858 + 868 859 /* Setup Main Link */ 869 - dp_phy_ctrl = BGREN | PWR_SW_EN | BIT(2) | PHY_A0_EN | PHY_M0_EN; 860 + dp_phy_ctrl = BGREN | PWR_SW_EN | PHY_A0_EN | PHY_M0_EN; 861 + if (tc->link.base.num_lanes == 2) 862 + dp_phy_ctrl |= PHY_2LANE; 870 863 tc_write(DP_PHY_CTRL, dp_phy_ctrl); 871 864 msleep(100); 872 865 ··· 1118 1105 static enum drm_mode_status tc_connector_mode_valid(struct drm_connector *connector, 1119 1106 struct drm_display_mode *mode) 1120 1107 { 1108 + struct tc_data *tc = connector_to_tc(connector); 1109 + u32 req, avail; 1110 + u32 bits_per_pixel = 24; 1111 + 1121 1112 /* DPI interface clock limitation: upto 154 MHz */ 1122 1113 if (mode->clock > 154000) 1123 1114 return MODE_CLOCK_HIGH; 1115 + 1116 + req = mode->clock * bits_per_pixel / 8; 1117 + avail = tc->link.base.num_lanes * tc->link.base.rate; 1118 + 1119 + if (req > avail) 1120 + return MODE_BAD; 1124 1121 1125 1122 return MODE_OK; 1126 1123 } ··· 1209 1186 /* Create eDP connector */ 1210 1187 drm_connector_helper_add(&tc->connector, 
&tc_connector_helper_funcs); 1211 1188 ret = drm_connector_init(drm, &tc->connector, &tc_connector_funcs, 1212 - DRM_MODE_CONNECTOR_eDP); 1189 + tc->panel ? DRM_MODE_CONNECTOR_eDP : 1190 + DRM_MODE_CONNECTOR_DisplayPort); 1213 1191 if (ret) 1214 1192 return ret; 1215 1193 ··· 1219 1195 1220 1196 drm_display_info_set_bus_formats(&tc->connector.display_info, 1221 1197 &bus_format, 1); 1198 + tc->connector.display_info.bus_flags = 1199 + DRM_BUS_FLAG_DE_HIGH | 1200 + DRM_BUS_FLAG_PIXDATA_NEGEDGE | 1201 + DRM_BUS_FLAG_SYNC_NEGEDGE; 1222 1202 drm_connector_attach_encoder(&tc->connector, tc->bridge.encoder); 1223 1203 1224 1204 return 0;
+1 -2
drivers/gpu/drm/drm_atomic_uapi.c
··· 1296 1296 (arg->flags & DRM_MODE_PAGE_FLIP_EVENT)) 1297 1297 return -EINVAL; 1298 1298 1299 - drm_modeset_acquire_init(&ctx, DRM_MODESET_ACQUIRE_INTERRUPTIBLE); 1300 - 1301 1299 state = drm_atomic_state_alloc(dev); 1302 1300 if (!state) 1303 1301 return -ENOMEM; 1304 1302 1303 + drm_modeset_acquire_init(&ctx, DRM_MODESET_ACQUIRE_INTERRUPTIBLE); 1305 1304 state->acquire_ctx = &ctx; 1306 1305 state->allow_modeset = !!(arg->flags & DRM_MODE_ATOMIC_ALLOW_MODESET); 1307 1306
+2
drivers/gpu/drm/drm_dp_helper.c
··· 1273 1273 { OUI(0x00, 0x22, 0xb9), DEVICE_ID_ANY, true, BIT(DP_DPCD_QUIRK_CONSTANT_N) }, 1274 1274 /* LG LP140WF6-SPM1 eDP panel */ 1275 1275 { OUI(0x00, 0x22, 0xb9), DEVICE_ID('s', 'i', 'v', 'a', 'r', 'T'), false, BIT(DP_DPCD_QUIRK_CONSTANT_N) }, 1276 + /* Apple panels need some additional handling to support PSR */ 1277 + { OUI(0x00, 0x10, 0xfa), DEVICE_ID_ANY, false, BIT(DP_DPCD_QUIRK_NO_PSR) } 1276 1278 }; 1277 1279 1278 1280 #undef OUI
+79 -54
drivers/gpu/drm/drm_fb_helper.c
··· 1621 1621 var_1->transp.msb_right == var_2->transp.msb_right; 1622 1622 } 1623 1623 1624 + static void drm_fb_helper_fill_pixel_fmt(struct fb_var_screeninfo *var, 1625 + u8 depth) 1626 + { 1627 + switch (depth) { 1628 + case 8: 1629 + var->red.offset = 0; 1630 + var->green.offset = 0; 1631 + var->blue.offset = 0; 1632 + var->red.length = 8; /* 8bit DAC */ 1633 + var->green.length = 8; 1634 + var->blue.length = 8; 1635 + var->transp.offset = 0; 1636 + var->transp.length = 0; 1637 + break; 1638 + case 15: 1639 + var->red.offset = 10; 1640 + var->green.offset = 5; 1641 + var->blue.offset = 0; 1642 + var->red.length = 5; 1643 + var->green.length = 5; 1644 + var->blue.length = 5; 1645 + var->transp.offset = 15; 1646 + var->transp.length = 1; 1647 + break; 1648 + case 16: 1649 + var->red.offset = 11; 1650 + var->green.offset = 5; 1651 + var->blue.offset = 0; 1652 + var->red.length = 5; 1653 + var->green.length = 6; 1654 + var->blue.length = 5; 1655 + var->transp.offset = 0; 1656 + break; 1657 + case 24: 1658 + var->red.offset = 16; 1659 + var->green.offset = 8; 1660 + var->blue.offset = 0; 1661 + var->red.length = 8; 1662 + var->green.length = 8; 1663 + var->blue.length = 8; 1664 + var->transp.offset = 0; 1665 + var->transp.length = 0; 1666 + break; 1667 + case 32: 1668 + var->red.offset = 16; 1669 + var->green.offset = 8; 1670 + var->blue.offset = 0; 1671 + var->red.length = 8; 1672 + var->green.length = 8; 1673 + var->blue.length = 8; 1674 + var->transp.offset = 24; 1675 + var->transp.length = 8; 1676 + break; 1677 + default: 1678 + break; 1679 + } 1680 + } 1681 + 1624 1682 /** 1625 1683 * drm_fb_helper_check_var - implementation for &fb_ops.fb_check_var 1626 1684 * @var: screeninfo to check ··· 1690 1632 struct drm_fb_helper *fb_helper = info->par; 1691 1633 struct drm_framebuffer *fb = fb_helper->fb; 1692 1634 1693 - if (var->pixclock != 0 || in_dbg_master()) 1635 + if (in_dbg_master()) 1694 1636 return -EINVAL; 1637 + 1638 + if (var->pixclock != 0) { 1639 + 
DRM_DEBUG("fbdev emulation doesn't support changing the pixel clock, value of pixclock is ignored\n"); 1640 + var->pixclock = 0; 1641 + } 1695 1642 1696 1643 if ((drm_format_info_block_width(fb->format, 0) > 1) || 1697 1644 (drm_format_info_block_height(fb->format, 0) > 1)) ··· 1715 1652 var->xres_virtual, var->yres_virtual, 1716 1653 fb->width, fb->height, fb->format->cpp[0] * 8); 1717 1654 return -EINVAL; 1655 + } 1656 + 1657 + /* 1658 + * Workaround for SDL 1.2, which is known to be setting all pixel format 1659 + * fields values to zero in some cases. We treat this situation as a 1660 + * kind of "use some reasonable autodetected values". 1661 + */ 1662 + if (!var->red.offset && !var->green.offset && 1663 + !var->blue.offset && !var->transp.offset && 1664 + !var->red.length && !var->green.length && 1665 + !var->blue.length && !var->transp.length && 1666 + !var->red.msb_right && !var->green.msb_right && 1667 + !var->blue.msb_right && !var->transp.msb_right) { 1668 + drm_fb_helper_fill_pixel_fmt(var, fb->format->depth); 1718 1669 } 1719 1670 1720 1671 /* ··· 2044 1967 info->var.yoffset = 0; 2045 1968 info->var.activate = FB_ACTIVATE_NOW; 2046 1969 2047 - switch (fb->format->depth) { 2048 - case 8: 2049 - info->var.red.offset = 0; 2050 - info->var.green.offset = 0; 2051 - info->var.blue.offset = 0; 2052 - info->var.red.length = 8; /* 8bit DAC */ 2053 - info->var.green.length = 8; 2054 - info->var.blue.length = 8; 2055 - info->var.transp.offset = 0; 2056 - info->var.transp.length = 0; 2057 - break; 2058 - case 15: 2059 - info->var.red.offset = 10; 2060 - info->var.green.offset = 5; 2061 - info->var.blue.offset = 0; 2062 - info->var.red.length = 5; 2063 - info->var.green.length = 5; 2064 - info->var.blue.length = 5; 2065 - info->var.transp.offset = 15; 2066 - info->var.transp.length = 1; 2067 - break; 2068 - case 16: 2069 - info->var.red.offset = 11; 2070 - info->var.green.offset = 5; 2071 - info->var.blue.offset = 0; 2072 - info->var.red.length = 5; 2073 - 
info->var.green.length = 6; 2074 - info->var.blue.length = 5; 2075 - info->var.transp.offset = 0; 2076 - break; 2077 - case 24: 2078 - info->var.red.offset = 16; 2079 - info->var.green.offset = 8; 2080 - info->var.blue.offset = 0; 2081 - info->var.red.length = 8; 2082 - info->var.green.length = 8; 2083 - info->var.blue.length = 8; 2084 - info->var.transp.offset = 0; 2085 - info->var.transp.length = 0; 2086 - break; 2087 - case 32: 2088 - info->var.red.offset = 16; 2089 - info->var.green.offset = 8; 2090 - info->var.blue.offset = 0; 2091 - info->var.red.length = 8; 2092 - info->var.green.length = 8; 2093 - info->var.blue.length = 8; 2094 - info->var.transp.offset = 24; 2095 - info->var.transp.length = 8; 2096 - break; 2097 - default: 2098 - break; 2099 - } 1970 + drm_fb_helper_fill_pixel_fmt(&info->var, fb->format->depth); 2100 1971 2101 1972 info->var.xres = fb_width; 2102 1973 info->var.yres = fb_height;
+2 -2
drivers/gpu/drm/drm_mode_object.c
··· 459 459 struct drm_modeset_acquire_ctx ctx; 460 460 int ret; 461 461 462 - drm_modeset_acquire_init(&ctx, 0); 463 - 464 462 state = drm_atomic_state_alloc(dev); 465 463 if (!state) 466 464 return -ENOMEM; 465 + 466 + drm_modeset_acquire_init(&ctx, 0); 467 467 state->acquire_ctx = &ctx; 468 468 retry: 469 469 if (prop == state->dev->mode_config.dpms_property) {
+3 -2
drivers/gpu/drm/drm_pci.c
··· 61 61 return NULL; 62 62 63 63 dmah->size = size; 64 - dmah->vaddr = dma_zalloc_coherent(&dev->pdev->dev, size, &dmah->busaddr, 65 - GFP_KERNEL | __GFP_COMP); 64 + dmah->vaddr = dma_alloc_coherent(&dev->pdev->dev, size, 65 + &dmah->busaddr, 66 + GFP_KERNEL | __GFP_COMP); 66 67 67 68 if (dmah->vaddr == NULL) { 68 69 kfree(dmah);
+42 -22
drivers/gpu/drm/i915/gvt/scheduler.c
··· 356 356 return 0; 357 357 } 358 358 359 + static int 360 + intel_gvt_workload_req_alloc(struct intel_vgpu_workload *workload) 361 + { 362 + struct intel_vgpu *vgpu = workload->vgpu; 363 + struct intel_vgpu_submission *s = &vgpu->submission; 364 + struct i915_gem_context *shadow_ctx = s->shadow_ctx; 365 + struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv; 366 + struct intel_engine_cs *engine = dev_priv->engine[workload->ring_id]; 367 + struct i915_request *rq; 368 + int ret = 0; 369 + 370 + lockdep_assert_held(&dev_priv->drm.struct_mutex); 371 + 372 + if (workload->req) 373 + goto out; 374 + 375 + rq = i915_request_alloc(engine, shadow_ctx); 376 + if (IS_ERR(rq)) { 377 + gvt_vgpu_err("fail to allocate gem request\n"); 378 + ret = PTR_ERR(rq); 379 + goto out; 380 + } 381 + workload->req = i915_request_get(rq); 382 + out: 383 + return ret; 384 + } 385 + 359 386 /** 360 387 * intel_gvt_scan_and_shadow_workload - audit the workload by scanning and 361 388 * shadow it as well, include ringbuffer,wa_ctx and ctx. 
··· 399 372 struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv; 400 373 struct intel_engine_cs *engine = dev_priv->engine[workload->ring_id]; 401 374 struct intel_context *ce; 402 - struct i915_request *rq; 403 375 int ret; 404 376 405 377 lockdep_assert_held(&dev_priv->drm.struct_mutex); 406 378 407 - if (workload->req) 379 + if (workload->shadow) 408 380 return 0; 409 381 410 382 ret = set_context_ppgtt_from_shadow(workload, shadow_ctx); ··· 443 417 goto err_shadow; 444 418 } 445 419 446 - rq = i915_request_alloc(engine, shadow_ctx); 447 - if (IS_ERR(rq)) { 448 - gvt_vgpu_err("fail to allocate gem request\n"); 449 - ret = PTR_ERR(rq); 450 - goto err_shadow; 451 - } 452 - workload->req = i915_request_get(rq); 453 - 454 - ret = populate_shadow_context(workload); 455 - if (ret) 456 - goto err_req; 457 - 420 + workload->shadow = true; 458 421 return 0; 459 - err_req: 460 - rq = fetch_and_zero(&workload->req); 461 - i915_request_put(rq); 462 422 err_shadow: 463 423 release_shadow_wa_ctx(&workload->wa_ctx); 464 424 err_unpin: ··· 683 671 mutex_lock(&vgpu->vgpu_lock); 684 672 mutex_lock(&dev_priv->drm.struct_mutex); 685 673 674 + ret = intel_gvt_workload_req_alloc(workload); 675 + if (ret) 676 + goto err_req; 677 + 686 678 ret = intel_gvt_scan_and_shadow_workload(workload); 687 679 if (ret) 688 680 goto out; 689 681 682 + ret = populate_shadow_context(workload); 683 + if (ret) { 684 + release_shadow_wa_ctx(&workload->wa_ctx); 685 + goto out; 686 + } 687 + 690 688 ret = prepare_workload(workload); 691 - 692 689 out: 693 - if (ret) 694 - workload->status = ret; 695 - 696 690 if (!IS_ERR_OR_NULL(workload->req)) { 697 691 gvt_dbg_sched("ring id %d submit workload to i915 %p\n", 698 692 ring_id, workload->req); 699 693 i915_request_add(workload->req); 700 694 workload->dispatched = true; 701 695 } 702 - 696 + err_req: 697 + if (ret) 698 + workload->status = ret; 703 699 mutex_unlock(&dev_priv->drm.struct_mutex); 704 700 mutex_unlock(&vgpu->vgpu_lock); 705 701 return ret;
+1
drivers/gpu/drm/i915/gvt/scheduler.h
··· 83 83 struct i915_request *req; 84 84 /* if this workload has been dispatched to i915? */ 85 85 bool dispatched; 86 + bool shadow; /* if workload has done shadow of guest request */ 86 87 int status; 87 88 88 89 struct intel_vgpu_mm *shadow_mm;
+9 -3
drivers/gpu/drm/i915/i915_debugfs.c
··· 984 984 intel_runtime_pm_get(i915); 985 985 gpu = i915_capture_gpu_state(i915); 986 986 intel_runtime_pm_put(i915); 987 - if (!gpu) 988 - return -ENOMEM; 987 + if (IS_ERR(gpu)) 988 + return PTR_ERR(gpu); 989 989 990 990 file->private_data = gpu; 991 991 return 0; ··· 1018 1018 1019 1019 static int i915_error_state_open(struct inode *inode, struct file *file) 1020 1020 { 1021 - file->private_data = i915_first_error_state(inode->i_private); 1021 + struct i915_gpu_state *error; 1022 + 1023 + error = i915_first_error_state(inode->i_private); 1024 + if (IS_ERR(error)) 1025 + return PTR_ERR(error); 1026 + 1027 + file->private_data = error; 1022 1028 return 0; 1023 1029 } 1024 1030
+12 -3
drivers/gpu/drm/i915/i915_gem_gtt.c
··· 2075 2075 int gen6_ppgtt_pin(struct i915_hw_ppgtt *base) 2076 2076 { 2077 2077 struct gen6_hw_ppgtt *ppgtt = to_gen6_ppgtt(base); 2078 + int err; 2078 2079 2079 2080 /* 2080 2081 * Workaround the limited maximum vma->pin_count and the aliasing_ppgtt ··· 2091 2090 * allocator works in address space sizes, so it's multiplied by page 2092 2091 * size. We allocate at the top of the GTT to avoid fragmentation. 2093 2092 */ 2094 - return i915_vma_pin(ppgtt->vma, 2095 - 0, GEN6_PD_ALIGN, 2096 - PIN_GLOBAL | PIN_HIGH); 2093 + err = i915_vma_pin(ppgtt->vma, 2094 + 0, GEN6_PD_ALIGN, 2095 + PIN_GLOBAL | PIN_HIGH); 2096 + if (err) 2097 + goto unpin; 2098 + 2099 + return 0; 2100 + 2101 + unpin: 2102 + ppgtt->pin_count = 0; 2103 + return err; 2097 2104 } 2098 2105 2099 2106 void gen6_ppgtt_unpin(struct i915_hw_ppgtt *base)
+14 -9
drivers/gpu/drm/i915/i915_gpu_error.c
··· 1907 1907 { 1908 1908 struct i915_gpu_state *error; 1909 1909 1910 + /* Check if GPU capture has been disabled */ 1911 + error = READ_ONCE(i915->gpu_error.first_error); 1912 + if (IS_ERR(error)) 1913 + return error; 1914 + 1910 1915 error = kzalloc(sizeof(*error), GFP_ATOMIC); 1911 - if (!error) 1912 - return NULL; 1916 + if (!error) { 1917 + i915_disable_error_state(i915, -ENOMEM); 1918 + return ERR_PTR(-ENOMEM); 1919 + } 1913 1920 1914 1921 kref_init(&error->ref); 1915 1922 error->i915 = i915; ··· 1952 1945 return; 1953 1946 1954 1947 error = i915_capture_gpu_state(i915); 1955 - if (!error) { 1956 - DRM_DEBUG_DRIVER("out of memory, not capturing error state\n"); 1957 - i915_disable_error_state(i915, -ENOMEM); 1948 + if (IS_ERR(error)) 1958 1949 return; 1959 - } 1960 1950 1961 1951 i915_error_capture_msg(i915, error, engine_mask, error_msg); 1962 1952 DRM_INFO("%s\n", error->error_msg); ··· 1991 1987 1992 1988 spin_lock_irq(&i915->gpu_error.lock); 1993 1989 error = i915->gpu_error.first_error; 1994 - if (error) 1990 + if (!IS_ERR_OR_NULL(error)) 1995 1991 i915_gpu_state_get(error); 1996 1992 spin_unlock_irq(&i915->gpu_error.lock); 1997 1993 ··· 2004 2000 2005 2001 spin_lock_irq(&i915->gpu_error.lock); 2006 2002 error = i915->gpu_error.first_error; 2007 - i915->gpu_error.first_error = NULL; 2003 + if (error != ERR_PTR(-ENODEV)) /* if disabled, always disabled */ 2004 + i915->gpu_error.first_error = NULL; 2008 2005 spin_unlock_irq(&i915->gpu_error.lock); 2009 2006 2010 - if (!IS_ERR(error)) 2007 + if (!IS_ERR_OR_NULL(error)) 2011 2008 i915_gpu_state_put(error); 2012 2009 } 2013 2010
+3 -1
drivers/gpu/drm/i915/i915_sysfs.c
··· 521 521 ssize_t ret; 522 522 523 523 gpu = i915_first_error_state(i915); 524 - if (gpu) { 524 + if (IS_ERR(gpu)) { 525 + ret = PTR_ERR(gpu); 526 + } else if (gpu) { 525 527 ret = i915_gpu_state_copy_to_buffer(gpu, buf, off, count); 526 528 i915_gpu_state_put(gpu); 527 529 } else {
+2 -1
drivers/gpu/drm/i915/intel_lrc.c
··· 2244 2244 if (ret) 2245 2245 return ret; 2246 2246 2247 + intel_engine_init_workarounds(engine); 2248 + 2247 2249 if (HAS_LOGICAL_RING_ELSQ(i915)) { 2248 2250 execlists->submit_reg = i915->regs + 2249 2251 i915_mmio_reg_offset(RING_EXECLIST_SQ_CONTENTS(engine)); ··· 2312 2310 } 2313 2311 2314 2312 intel_engine_init_whitelist(engine); 2315 - intel_engine_init_workarounds(engine); 2316 2313 2317 2314 return 0; 2318 2315 }
+6
drivers/gpu/drm/i915/intel_psr.c
··· 274 274 DRM_DEBUG_KMS("eDP panel supports PSR version %x\n", 275 275 intel_dp->psr_dpcd[0]); 276 276 277 + if (drm_dp_has_quirk(&intel_dp->desc, DP_DPCD_QUIRK_NO_PSR)) { 278 + DRM_DEBUG_KMS("PSR support not currently available for this panel\n"); 279 + return; 280 + } 281 + 277 282 if (!(intel_dp->edp_dpcd[1] & DP_EDP_SET_POWER_CAP)) { 278 283 DRM_DEBUG_KMS("Panel lacks power state control, PSR cannot be enabled\n"); 279 284 return; 280 285 } 286 + 281 287 dev_priv->psr.sink_support = true; 282 288 dev_priv->psr.sink_sync_latency = 283 289 intel_dp_get_sink_sync_latency(intel_dp);
+3
drivers/gpu/drm/nouveau/nouveau_backlight.c
··· 253 253 case NV_DEVICE_INFO_V0_FERMI: 254 254 case NV_DEVICE_INFO_V0_KEPLER: 255 255 case NV_DEVICE_INFO_V0_MAXWELL: 256 + case NV_DEVICE_INFO_V0_PASCAL: 257 + case NV_DEVICE_INFO_V0_VOLTA: 258 + case NV_DEVICE_INFO_V0_TURING: 256 259 ret = nv50_backlight_init(nv_encoder, &props, &ops); 257 260 break; 258 261 default:
+5 -2
drivers/gpu/drm/nouveau/nvkm/engine/falcon.c
··· 22 22 #include <engine/falcon.h> 23 23 24 24 #include <core/gpuobj.h> 25 + #include <subdev/mc.h> 25 26 #include <subdev/timer.h> 26 27 #include <engine/fifo.h> 27 28 ··· 108 107 } 109 108 } 110 109 111 - nvkm_mask(device, base + 0x048, 0x00000003, 0x00000000); 112 - nvkm_wr32(device, base + 0x014, 0xffffffff); 110 + if (nvkm_mc_enabled(device, engine->subdev.index)) { 111 + nvkm_mask(device, base + 0x048, 0x00000003, 0x00000000); 112 + nvkm_wr32(device, base + 0x014, 0xffffffff); 113 + } 113 114 return 0; 114 115 } 115 116
+4 -3
drivers/gpu/drm/nouveau/nvkm/subdev/therm/base.c
··· 132 132 duty = nvkm_therm_update_linear(therm); 133 133 break; 134 134 case NVBIOS_THERM_FAN_OTHER: 135 - if (therm->cstate) 135 + if (therm->cstate) { 136 136 duty = therm->cstate; 137 - else 137 + poll = false; 138 + } else { 138 139 duty = nvkm_therm_update_linear_fallback(therm); 139 - poll = false; 140 + } 140 141 break; 141 142 } 142 143 immd = false;
+14 -1
drivers/i2c/busses/i2c-tegra.c
··· 155 155 * @has_mst_fifo: The I2C controller contains the new MST FIFO interface that 156 156 * provides additional features and allows for longer messages to 157 157 * be transferred in one go. 158 + * @quirks: i2c adapter quirks for limiting write/read transfer size and not 159 + * allowing 0 length transfers. 158 160 */ 159 161 struct tegra_i2c_hw_feature { 160 162 bool has_continue_xfer_support; ··· 169 167 bool has_multi_master_mode; 170 168 bool has_slcg_override_reg; 171 169 bool has_mst_fifo; 170 + const struct i2c_adapter_quirks *quirks; 172 171 }; 173 172 174 173 /** ··· 840 837 .max_write_len = 4096, 841 838 }; 842 839 840 + static const struct i2c_adapter_quirks tegra194_i2c_quirks = { 841 + .flags = I2C_AQ_NO_ZERO_LEN, 842 + }; 843 + 843 844 static const struct tegra_i2c_hw_feature tegra20_i2c_hw = { 844 845 .has_continue_xfer_support = false, 845 846 .has_per_pkt_xfer_complete_irq = false, ··· 855 848 .has_multi_master_mode = false, 856 849 .has_slcg_override_reg = false, 857 850 .has_mst_fifo = false, 851 + .quirks = &tegra_i2c_quirks, 858 852 }; 859 853 860 854 static const struct tegra_i2c_hw_feature tegra30_i2c_hw = { ··· 869 861 .has_multi_master_mode = false, 870 862 .has_slcg_override_reg = false, 871 863 .has_mst_fifo = false, 864 + .quirks = &tegra_i2c_quirks, 872 865 }; 873 866 874 867 static const struct tegra_i2c_hw_feature tegra114_i2c_hw = { ··· 883 874 .has_multi_master_mode = false, 884 875 .has_slcg_override_reg = false, 885 876 .has_mst_fifo = false, 877 + .quirks = &tegra_i2c_quirks, 886 878 }; 887 879 888 880 static const struct tegra_i2c_hw_feature tegra124_i2c_hw = { ··· 897 887 .has_multi_master_mode = false, 898 888 .has_slcg_override_reg = true, 899 889 .has_mst_fifo = false, 890 + .quirks = &tegra_i2c_quirks, 900 891 }; 901 892 902 893 static const struct tegra_i2c_hw_feature tegra210_i2c_hw = { ··· 911 900 .has_multi_master_mode = true, 912 901 .has_slcg_override_reg = true, 913 902 .has_mst_fifo = false, 903 + .quirks = 
&tegra_i2c_quirks, 914 904 }; 915 905 916 906 static const struct tegra_i2c_hw_feature tegra194_i2c_hw = { ··· 925 913 .has_multi_master_mode = true, 926 914 .has_slcg_override_reg = true, 927 915 .has_mst_fifo = true, 916 + .quirks = &tegra194_i2c_quirks, 928 917 }; 929 918 930 919 /* Match table for of_platform binding */ ··· 977 964 i2c_dev->base = base; 978 965 i2c_dev->div_clk = div_clk; 979 966 i2c_dev->adapter.algo = &tegra_i2c_algo; 980 - i2c_dev->adapter.quirks = &tegra_i2c_quirks; 981 967 i2c_dev->irq = irq; 982 968 i2c_dev->cont_id = pdev->id; 983 969 i2c_dev->dev = &pdev->dev; ··· 992 980 i2c_dev->hw = of_device_get_match_data(&pdev->dev); 993 981 i2c_dev->is_dvc = of_device_is_compatible(pdev->dev.of_node, 994 982 "nvidia,tegra20-i2c-dvc"); 983 + i2c_dev->adapter.quirks = i2c_dev->hw->quirks; 995 984 init_completion(&i2c_dev->msg_complete); 996 985 spin_lock_init(&i2c_dev->xfer_lock); 997 986
+6
drivers/i2c/i2c-dev.c
··· 470 470 data_arg.data); 471 471 } 472 472 case I2C_RETRIES: 473 + if (arg > INT_MAX) 474 + return -EINVAL; 475 + 473 476 client->adapter->retries = arg; 474 477 break; 475 478 case I2C_TIMEOUT: 479 + if (arg > INT_MAX) 480 + return -EINVAL; 481 + 476 482 /* For historical reasons, user-space sets the timeout 477 483 * value in units of 10 ms. 478 484 */
+2 -2
drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
··· 766 766 return NULL; 767 767 768 768 sbuf->size = size; 769 - sbuf->sb = dma_zalloc_coherent(&rcfw->pdev->dev, sbuf->size, 770 - &sbuf->dma_addr, GFP_ATOMIC); 769 + sbuf->sb = dma_alloc_coherent(&rcfw->pdev->dev, sbuf->size, 770 + &sbuf->dma_addr, GFP_ATOMIC); 771 771 if (!sbuf->sb) 772 772 goto bail; 773 773
+4 -4
drivers/infiniband/hw/bnxt_re/qplib_res.c
··· 105 105 106 106 if (!sghead) { 107 107 for (i = 0; i < pages; i++) { 108 - pbl->pg_arr[i] = dma_zalloc_coherent(&pdev->dev, 109 - pbl->pg_size, 110 - &pbl->pg_map_arr[i], 111 - GFP_KERNEL); 108 + pbl->pg_arr[i] = dma_alloc_coherent(&pdev->dev, 109 + pbl->pg_size, 110 + &pbl->pg_map_arr[i], 111 + GFP_KERNEL); 112 112 if (!pbl->pg_arr[i]) 113 113 goto fail; 114 114 pbl->pg_count++;
+3 -3
drivers/infiniband/hw/cxgb3/cxio_hal.c
··· 291 291 if (!wq->sq) 292 292 goto err3; 293 293 294 - wq->queue = dma_zalloc_coherent(&(rdev_p->rnic_info.pdev->dev), 295 - depth * sizeof(union t3_wr), 296 - &(wq->dma_addr), GFP_KERNEL); 294 + wq->queue = dma_alloc_coherent(&(rdev_p->rnic_info.pdev->dev), 295 + depth * sizeof(union t3_wr), 296 + &(wq->dma_addr), GFP_KERNEL); 297 297 if (!wq->queue) 298 298 goto err4; 299 299
+2 -3
drivers/infiniband/hw/cxgb4/qp.c
··· 2564 2564 wq->rqt_abs_idx = (wq->rqt_hwaddr - rdev->lldi.vr->rq.start) >> 2565 2565 T4_RQT_ENTRY_SHIFT; 2566 2566 2567 - wq->queue = dma_zalloc_coherent(&rdev->lldi.pdev->dev, 2568 - wq->memsize, &wq->dma_addr, 2569 - GFP_KERNEL); 2567 + wq->queue = dma_alloc_coherent(&rdev->lldi.pdev->dev, wq->memsize, 2568 + &wq->dma_addr, GFP_KERNEL); 2570 2569 if (!wq->queue) 2571 2570 goto err_free_rqtpool; 2572 2571
+15 -14
drivers/infiniband/hw/hfi1/init.c
··· 899 899 goto done; 900 900 901 901 /* allocate dummy tail memory for all receive contexts */ 902 - dd->rcvhdrtail_dummy_kvaddr = dma_zalloc_coherent( 903 - &dd->pcidev->dev, sizeof(u64), 904 - &dd->rcvhdrtail_dummy_dma, 905 - GFP_KERNEL); 902 + dd->rcvhdrtail_dummy_kvaddr = dma_alloc_coherent(&dd->pcidev->dev, 903 + sizeof(u64), 904 + &dd->rcvhdrtail_dummy_dma, 905 + GFP_KERNEL); 906 906 907 907 if (!dd->rcvhdrtail_dummy_kvaddr) { 908 908 dd_dev_err(dd, "cannot allocate dummy tail memory\n"); ··· 1863 1863 gfp_flags = GFP_KERNEL; 1864 1864 else 1865 1865 gfp_flags = GFP_USER; 1866 - rcd->rcvhdrq = dma_zalloc_coherent( 1867 - &dd->pcidev->dev, amt, &rcd->rcvhdrq_dma, 1868 - gfp_flags | __GFP_COMP); 1866 + rcd->rcvhdrq = dma_alloc_coherent(&dd->pcidev->dev, amt, 1867 + &rcd->rcvhdrq_dma, 1868 + gfp_flags | __GFP_COMP); 1869 1869 1870 1870 if (!rcd->rcvhdrq) { 1871 1871 dd_dev_err(dd, ··· 1876 1876 1877 1877 if (HFI1_CAP_KGET_MASK(rcd->flags, DMA_RTAIL) || 1878 1878 HFI1_CAP_UGET_MASK(rcd->flags, DMA_RTAIL)) { 1879 - rcd->rcvhdrtail_kvaddr = dma_zalloc_coherent( 1880 - &dd->pcidev->dev, PAGE_SIZE, 1881 - &rcd->rcvhdrqtailaddr_dma, gfp_flags); 1879 + rcd->rcvhdrtail_kvaddr = dma_alloc_coherent(&dd->pcidev->dev, 1880 + PAGE_SIZE, 1881 + &rcd->rcvhdrqtailaddr_dma, 1882 + gfp_flags); 1882 1883 if (!rcd->rcvhdrtail_kvaddr) 1883 1884 goto bail_free; 1884 1885 } ··· 1975 1974 while (alloced_bytes < rcd->egrbufs.size && 1976 1975 rcd->egrbufs.alloced < rcd->egrbufs.count) { 1977 1976 rcd->egrbufs.buffers[idx].addr = 1978 - dma_zalloc_coherent(&dd->pcidev->dev, 1979 - rcd->egrbufs.rcvtid_size, 1980 - &rcd->egrbufs.buffers[idx].dma, 1981 - gfp_flags); 1977 + dma_alloc_coherent(&dd->pcidev->dev, 1978 + rcd->egrbufs.rcvtid_size, 1979 + &rcd->egrbufs.buffers[idx].dma, 1980 + gfp_flags); 1982 1981 if (rcd->egrbufs.buffers[idx].addr) { 1983 1982 rcd->egrbufs.buffers[idx].len = 1984 1983 rcd->egrbufs.rcvtid_size;
+4 -5
drivers/infiniband/hw/hfi1/pio.c
··· 2098 2098 int bytes = TXE_NUM_CONTEXTS * sizeof(struct credit_return); 2099 2099 2100 2100 set_dev_node(&dd->pcidev->dev, i); 2101 - dd->cr_base[i].va = dma_zalloc_coherent( 2102 - &dd->pcidev->dev, 2103 - bytes, 2104 - &dd->cr_base[i].dma, 2105 - GFP_KERNEL); 2101 + dd->cr_base[i].va = dma_alloc_coherent(&dd->pcidev->dev, 2102 + bytes, 2103 + &dd->cr_base[i].dma, 2104 + GFP_KERNEL); 2106 2105 if (!dd->cr_base[i].va) { 2107 2106 set_dev_node(&dd->pcidev->dev, dd->node); 2108 2107 dd_dev_err(dd,
+9 -18
drivers/infiniband/hw/hfi1/sdma.c
··· 1453 1453 timer_setup(&sde->err_progress_check_timer, 1454 1454 sdma_err_progress_check, 0); 1455 1455 1456 - sde->descq = dma_zalloc_coherent( 1457 - &dd->pcidev->dev, 1458 - descq_cnt * sizeof(u64[2]), 1459 - &sde->descq_phys, 1460 - GFP_KERNEL 1461 - ); 1456 + sde->descq = dma_alloc_coherent(&dd->pcidev->dev, 1457 + descq_cnt * sizeof(u64[2]), 1458 + &sde->descq_phys, GFP_KERNEL); 1462 1459 if (!sde->descq) 1463 1460 goto bail; 1464 1461 sde->tx_ring = ··· 1468 1471 1469 1472 dd->sdma_heads_size = L1_CACHE_BYTES * num_engines; 1470 1473 /* Allocate memory for DMA of head registers to memory */ 1471 - dd->sdma_heads_dma = dma_zalloc_coherent( 1472 - &dd->pcidev->dev, 1473 - dd->sdma_heads_size, 1474 - &dd->sdma_heads_phys, 1475 - GFP_KERNEL 1476 - ); 1474 + dd->sdma_heads_dma = dma_alloc_coherent(&dd->pcidev->dev, 1475 + dd->sdma_heads_size, 1476 + &dd->sdma_heads_phys, 1477 + GFP_KERNEL); 1477 1478 if (!dd->sdma_heads_dma) { 1478 1479 dd_dev_err(dd, "failed to allocate SendDMA head memory\n"); 1479 1480 goto bail; 1480 1481 } 1481 1482 1482 1483 /* Allocate memory for pad */ 1483 - dd->sdma_pad_dma = dma_zalloc_coherent( 1484 - &dd->pcidev->dev, 1485 - sizeof(u32), 1486 - &dd->sdma_pad_phys, 1487 - GFP_KERNEL 1488 - ); 1484 + dd->sdma_pad_dma = dma_alloc_coherent(&dd->pcidev->dev, sizeof(u32), 1485 + &dd->sdma_pad_phys, GFP_KERNEL); 1489 1486 if (!dd->sdma_pad_dma) { 1490 1487 dd_dev_err(dd, "failed to allocate SendDMA pad memory\n"); 1491 1488 goto bail;
+6 -5
drivers/infiniband/hw/hns/hns_roce_alloc.c
··· 197 197 buf->npages = 1 << order; 198 198 buf->page_shift = page_shift; 199 199 /* MTT PA must be recorded in 4k alignment, t is 4k aligned */ 200 - buf->direct.buf = dma_zalloc_coherent(dev, 201 - size, &t, GFP_KERNEL); 200 + buf->direct.buf = dma_alloc_coherent(dev, size, &t, 201 + GFP_KERNEL); 202 202 if (!buf->direct.buf) 203 203 return -ENOMEM; 204 204 ··· 219 219 return -ENOMEM; 220 220 221 221 for (i = 0; i < buf->nbufs; ++i) { 222 - buf->page_list[i].buf = dma_zalloc_coherent(dev, 223 - page_size, &t, 224 - GFP_KERNEL); 222 + buf->page_list[i].buf = dma_alloc_coherent(dev, 223 + page_size, 224 + &t, 225 + GFP_KERNEL); 225 226 226 227 if (!buf->page_list[i].buf) 227 228 goto err_free;
+5 -5
drivers/infiniband/hw/hns/hns_roce_hw_v2.c
··· 5091 5091 eqe_alloc = i * (buf_chk_sz / eq->eqe_size); 5092 5092 size = (eq->entries - eqe_alloc) * eq->eqe_size; 5093 5093 } 5094 - eq->buf[i] = dma_zalloc_coherent(dev, size, 5094 + eq->buf[i] = dma_alloc_coherent(dev, size, 5095 5095 &(eq->buf_dma[i]), 5096 5096 GFP_KERNEL); 5097 5097 if (!eq->buf[i]) ··· 5126 5126 size = (eq->entries - eqe_alloc) 5127 5127 * eq->eqe_size; 5128 5128 } 5129 - eq->buf[idx] = dma_zalloc_coherent(dev, size, 5130 - &(eq->buf_dma[idx]), 5131 - GFP_KERNEL); 5129 + eq->buf[idx] = dma_alloc_coherent(dev, size, 5130 + &(eq->buf_dma[idx]), 5131 + GFP_KERNEL); 5132 5132 if (!eq->buf[idx]) 5133 5133 goto err_dma_alloc_buf; 5134 5134 ··· 5241 5241 goto free_cmd_mbox; 5242 5242 } 5243 5243 5244 - eq->buf_list->buf = dma_zalloc_coherent(dev, buf_chk_sz, 5244 + eq->buf_list->buf = dma_alloc_coherent(dev, buf_chk_sz, 5245 5245 &(eq->buf_list->map), 5246 5246 GFP_KERNEL); 5247 5247 if (!eq->buf_list->buf) {
+2 -2
drivers/infiniband/hw/i40iw/i40iw_utils.c
··· 745 745 if (!mem) 746 746 return I40IW_ERR_PARAM; 747 747 mem->size = ALIGN(size, alignment); 748 - mem->va = dma_zalloc_coherent(&pcidev->dev, mem->size, 749 - (dma_addr_t *)&mem->pa, GFP_KERNEL); 748 + mem->va = dma_alloc_coherent(&pcidev->dev, mem->size, 749 + (dma_addr_t *)&mem->pa, GFP_KERNEL); 750 750 if (!mem->va) 751 751 return I40IW_ERR_NO_MEMORY; 752 752 return 0;
+3 -2
drivers/infiniband/hw/mthca/mthca_memfree.c
··· 623 623 page = dev->db_tab->page + end; 624 624 625 625 alloc: 626 - page->db_rec = dma_zalloc_coherent(&dev->pdev->dev, MTHCA_ICM_PAGE_SIZE, 627 - &page->mapping, GFP_KERNEL); 626 + page->db_rec = dma_alloc_coherent(&dev->pdev->dev, 627 + MTHCA_ICM_PAGE_SIZE, &page->mapping, 628 + GFP_KERNEL); 628 629 if (!page->db_rec) { 629 630 ret = -ENOMEM; 630 631 goto out;
+7 -7
drivers/infiniband/hw/ocrdma/ocrdma_hw.c
··· 380 380 q->len = len; 381 381 q->entry_size = entry_size; 382 382 q->size = len * entry_size; 383 - q->va = dma_zalloc_coherent(&dev->nic_info.pdev->dev, q->size, 384 - &q->dma, GFP_KERNEL); 383 + q->va = dma_alloc_coherent(&dev->nic_info.pdev->dev, q->size, &q->dma, 384 + GFP_KERNEL); 385 385 if (!q->va) 386 386 return -ENOMEM; 387 387 return 0; ··· 1819 1819 return -ENOMEM; 1820 1820 ocrdma_init_mch(&cmd->cmd.req, OCRDMA_CMD_CREATE_CQ, 1821 1821 OCRDMA_SUBSYS_COMMON, sizeof(*cmd)); 1822 - cq->va = dma_zalloc_coherent(&pdev->dev, cq->len, &cq->pa, GFP_KERNEL); 1822 + cq->va = dma_alloc_coherent(&pdev->dev, cq->len, &cq->pa, GFP_KERNEL); 1823 1823 if (!cq->va) { 1824 1824 status = -ENOMEM; 1825 1825 goto mem_err; ··· 2209 2209 qp->sq.max_cnt = max_wqe_allocated; 2210 2210 len = (hw_pages * hw_page_size); 2211 2211 2212 - qp->sq.va = dma_zalloc_coherent(&pdev->dev, len, &pa, GFP_KERNEL); 2212 + qp->sq.va = dma_alloc_coherent(&pdev->dev, len, &pa, GFP_KERNEL); 2213 2213 if (!qp->sq.va) 2214 2214 return -EINVAL; 2215 2215 qp->sq.len = len; ··· 2259 2259 qp->rq.max_cnt = max_rqe_allocated; 2260 2260 len = (hw_pages * hw_page_size); 2261 2261 2262 - qp->rq.va = dma_zalloc_coherent(&pdev->dev, len, &pa, GFP_KERNEL); 2262 + qp->rq.va = dma_alloc_coherent(&pdev->dev, len, &pa, GFP_KERNEL); 2263 2263 if (!qp->rq.va) 2264 2264 return -ENOMEM; 2265 2265 qp->rq.pa = pa; ··· 2315 2315 if (dev->attr.ird == 0) 2316 2316 return 0; 2317 2317 2318 - qp->ird_q_va = dma_zalloc_coherent(&pdev->dev, ird_q_len, &pa, 2319 - GFP_KERNEL); 2318 + qp->ird_q_va = dma_alloc_coherent(&pdev->dev, ird_q_len, &pa, 2319 + GFP_KERNEL); 2320 2320 if (!qp->ird_q_va) 2321 2321 return -ENOMEM; 2322 2322 ocrdma_build_q_pages(&cmd->ird_addr[0], dev->attr.num_ird_pages,
+2 -2
drivers/infiniband/hw/ocrdma/ocrdma_stats.c
··· 73 73 mem->size = max_t(u32, sizeof(struct ocrdma_rdma_stats_req), 74 74 sizeof(struct ocrdma_rdma_stats_resp)); 75 75 76 - mem->va = dma_zalloc_coherent(&dev->nic_info.pdev->dev, mem->size, 77 - &mem->pa, GFP_KERNEL); 76 + mem->va = dma_alloc_coherent(&dev->nic_info.pdev->dev, mem->size, 77 + &mem->pa, GFP_KERNEL); 78 78 if (!mem->va) { 79 79 pr_err("%s: stats mbox allocation failed\n", __func__); 80 80 return false;
+3 -3
drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
··· 504 504 INIT_LIST_HEAD(&ctx->mm_head); 505 505 mutex_init(&ctx->mm_list_lock); 506 506 507 - ctx->ah_tbl.va = dma_zalloc_coherent(&pdev->dev, map_len, 508 - &ctx->ah_tbl.pa, GFP_KERNEL); 507 + ctx->ah_tbl.va = dma_alloc_coherent(&pdev->dev, map_len, 508 + &ctx->ah_tbl.pa, GFP_KERNEL); 509 509 if (!ctx->ah_tbl.va) { 510 510 kfree(ctx); 511 511 return ERR_PTR(-ENOMEM); ··· 838 838 return -ENOMEM; 839 839 840 840 for (i = 0; i < mr->num_pbls; i++) { 841 - va = dma_zalloc_coherent(&pdev->dev, dma_len, &pa, GFP_KERNEL); 841 + va = dma_alloc_coherent(&pdev->dev, dma_len, &pa, GFP_KERNEL); 842 842 if (!va) { 843 843 ocrdma_free_mr_pbl_tbl(dev, mr); 844 844 status = -ENOMEM;
+2 -2
drivers/infiniband/hw/qedr/verbs.c
··· 556 556 return ERR_PTR(-ENOMEM); 557 557 558 558 for (i = 0; i < pbl_info->num_pbls; i++) { 559 - va = dma_zalloc_coherent(&pdev->dev, pbl_info->pbl_size, 560 - &pa, flags); 559 + va = dma_alloc_coherent(&pdev->dev, pbl_info->pbl_size, &pa, 560 + flags); 561 561 if (!va) 562 562 goto err; 563 563
+2 -2
drivers/infiniband/hw/vmw_pvrdma/pvrdma_main.c
··· 890 890 dev_info(&pdev->dev, "device version %d, driver version %d\n", 891 891 dev->dsr_version, PVRDMA_VERSION); 892 892 893 - dev->dsr = dma_zalloc_coherent(&pdev->dev, sizeof(*dev->dsr), 894 - &dev->dsrbase, GFP_KERNEL); 893 + dev->dsr = dma_alloc_coherent(&pdev->dev, sizeof(*dev->dsr), 894 + &dev->dsrbase, GFP_KERNEL); 895 895 if (!dev->dsr) { 896 896 dev_err(&pdev->dev, "failed to allocate shared region\n"); 897 897 ret = -ENOMEM;
+2 -2
drivers/input/touchscreen/raspberrypi-ts.c
··· 147 147 return -ENOMEM; 148 148 ts->pdev = pdev; 149 149 150 - ts->fw_regs_va = dma_zalloc_coherent(dev, PAGE_SIZE, &ts->fw_regs_phys, 151 - GFP_KERNEL); 150 + ts->fw_regs_va = dma_alloc_coherent(dev, PAGE_SIZE, &ts->fw_regs_phys, 151 + GFP_KERNEL); 152 152 if (!ts->fw_regs_va) { 153 153 dev_err(dev, "failed to dma_alloc_coherent\n"); 154 154 return -ENOMEM;
+2 -3
drivers/iommu/mtk_iommu_v1.c
··· 232 232 233 233 spin_lock_init(&dom->pgtlock); 234 234 235 - dom->pgt_va = dma_zalloc_coherent(data->dev, 236 - M2701_IOMMU_PGT_SIZE, 237 - &dom->pgt_pa, GFP_KERNEL); 235 + dom->pgt_va = dma_alloc_coherent(data->dev, M2701_IOMMU_PGT_SIZE, 236 + &dom->pgt_pa, GFP_KERNEL); 238 237 if (!dom->pgt_va) 239 238 return -ENOMEM; 240 239
+42 -35
drivers/irqchip/irq-csky-apb-intc.c
··· 95 95 96 96 /* Setup 64 channel slots */ 97 97 for (i = 0; i < INTC_IRQS; i += 4) 98 - writel_relaxed(build_channel_val(i, magic), reg_addr + i); 98 + writel(build_channel_val(i, magic), reg_addr + i); 99 99 } 100 100 101 101 static int __init ··· 135 135 static inline bool handle_irq_perbit(struct pt_regs *regs, u32 hwirq, 136 136 u32 irq_base) 137 137 { 138 - u32 irq; 139 - 140 138 if (hwirq == 0) 141 139 return 0; 142 140 143 - while (hwirq) { 144 - irq = __ffs(hwirq); 145 - hwirq &= ~BIT(irq); 146 - handle_domain_irq(root_domain, irq_base + irq, regs); 147 - } 141 + handle_domain_irq(root_domain, irq_base + __fls(hwirq), regs); 148 142 149 143 return 1; 150 144 } ··· 148 154 { 149 155 bool ret; 150 156 151 - do { 152 - ret = handle_irq_perbit(regs, 153 - readl_relaxed(reg_base + GX_INTC_PEN31_00), 0); 154 - ret |= handle_irq_perbit(regs, 155 - readl_relaxed(reg_base + GX_INTC_PEN63_32), 32); 156 - } while (ret); 157 + retry: 158 + ret = handle_irq_perbit(regs, 159 + readl(reg_base + GX_INTC_PEN63_32), 32); 160 + if (ret) 161 + goto retry; 162 + 163 + ret = handle_irq_perbit(regs, 164 + readl(reg_base + GX_INTC_PEN31_00), 0); 165 + if (ret) 166 + goto retry; 157 167 } 158 168 159 169 static int __init ··· 172 174 /* 173 175 * Initial enable reg to disable all interrupts 174 176 */ 175 - writel_relaxed(0x0, reg_base + GX_INTC_NEN31_00); 176 - writel_relaxed(0x0, reg_base + GX_INTC_NEN63_32); 177 + writel(0x0, reg_base + GX_INTC_NEN31_00); 178 + writel(0x0, reg_base + GX_INTC_NEN63_32); 177 179 178 180 /* 179 181 * Initial mask reg with all unmasked, because we only use enalbe reg 180 182 */ 181 - writel_relaxed(0x0, reg_base + GX_INTC_NMASK31_00); 182 - writel_relaxed(0x0, reg_base + GX_INTC_NMASK63_32); 183 + writel(0x0, reg_base + GX_INTC_NMASK31_00); 184 + writel(0x0, reg_base + GX_INTC_NMASK63_32); 183 185 184 186 setup_irq_channel(0x03020100, reg_base + GX_INTC_SOURCE); 185 187 ··· 202 204 void __iomem *reg_pen_lo = reg_base + CK_INTC_PEN31_00; 203 205 void __iomem *reg_pen_hi = reg_base + CK_INTC_PEN63_32; 204 206 205 - do { 206 - /* handle 0 - 31 irqs */ 207 - ret = handle_irq_perbit(regs, readl_relaxed(reg_pen_lo), 0); 208 - ret |= handle_irq_perbit(regs, readl_relaxed(reg_pen_hi), 32); 207 + retry: 208 + /* handle 0 - 63 irqs */ 209 + ret = handle_irq_perbit(regs, readl(reg_pen_hi), 32); 210 + if (ret) 211 + goto retry; 209 212 210 - if (nr_irq == INTC_IRQS) 211 - continue; 213 + ret = handle_irq_perbit(regs, readl(reg_pen_lo), 0); 214 + if (ret) 215 + goto retry; 212 216 213 - /* handle 64 - 127 irqs */ 214 - ret |= handle_irq_perbit(regs, 215 - readl_relaxed(reg_pen_lo + CK_INTC_DUAL_BASE), 64); 216 - ret |= handle_irq_perbit(regs, 217 - readl_relaxed(reg_pen_hi + CK_INTC_DUAL_BASE), 96); 218 - } while (ret); 217 + if (nr_irq == INTC_IRQS) 218 + return; 219 + 220 + /* handle 64 - 127 irqs */ 221 + ret = handle_irq_perbit(regs, 222 + readl(reg_pen_hi + CK_INTC_DUAL_BASE), 96); 223 + if (ret) 224 + goto retry; 225 + 226 + ret = handle_irq_perbit(regs, 227 + readl(reg_pen_lo + CK_INTC_DUAL_BASE), 64); 228 + if (ret) 229 + goto retry; 219 230 } 220 231 221 232 static int __init ··· 237 230 return ret; 238 231 239 232 /* Initial enable reg to disable all interrupts */ 240 - writel_relaxed(0, reg_base + CK_INTC_NEN31_00); 241 - writel_relaxed(0, reg_base + CK_INTC_NEN63_32); 233 + writel(0, reg_base + CK_INTC_NEN31_00); 234 + writel(0, reg_base + CK_INTC_NEN63_32); 242 235 243 236 /* Enable irq intc */ 244 - writel_relaxed(BIT(31), reg_base + CK_INTC_ICR); 237 + writel(BIT(31), reg_base + CK_INTC_ICR); 245 238 246 239 ck_set_gc(node, reg_base, CK_INTC_NEN31_00, 0); 247 240 ck_set_gc(node, reg_base, CK_INTC_NEN63_32, 32); ··· 267 260 return ret; 268 261 269 262 /* Initial enable reg to disable all interrupts */ 270 - writel_relaxed(0, reg_base + CK_INTC_NEN31_00 + CK_INTC_DUAL_BASE); 271 - writel_relaxed(0, reg_base + CK_INTC_NEN63_32 + CK_INTC_DUAL_BASE); 263 + writel(0, reg_base + CK_INTC_NEN31_00 + CK_INTC_DUAL_BASE); 264 + writel(0, reg_base + CK_INTC_NEN63_32 + CK_INTC_DUAL_BASE); 272 265 273 266 ck_set_gc(node, reg_base + CK_INTC_DUAL_BASE, CK_INTC_NEN31_00, 64); 274 267 ck_set_gc(node, reg_base + CK_INTC_DUAL_BASE, CK_INTC_NEN63_32, 96);
+2 -2
drivers/media/pci/intel/ipu3/ipu3-cio2.c
··· 218 218 { 219 219 struct device *dev = &cio2->pci_dev->dev; 220 220 221 - q->fbpt = dma_zalloc_coherent(dev, CIO2_FBPT_SIZE, &q->fbpt_bus_addr, 222 - GFP_KERNEL); 221 + q->fbpt = dma_alloc_coherent(dev, CIO2_FBPT_SIZE, &q->fbpt_bus_addr, 222 + GFP_KERNEL); 223 223 if (!q->fbpt) 224 224 return -ENOMEM; 225 225
+1 -1
drivers/media/platform/mtk-vcodec/mtk_vcodec_util.c
··· 49 49 struct mtk_vcodec_ctx *ctx = (struct mtk_vcodec_ctx *)data; 50 50 struct device *dev = &ctx->dev->plat_dev->dev; 51 51 52 - mem->va = dma_zalloc_coherent(dev, size, &mem->dma_addr, GFP_KERNEL); 52 + mem->va = dma_alloc_coherent(dev, size, &mem->dma_addr, GFP_KERNEL); 53 53 if (!mem->va) { 54 54 mtk_v4l2_err("%s dma_alloc size=%ld failed!", dev_name(dev), 55 55 size);
+2 -2
drivers/misc/genwqe/card_utils.c
··· 218 218 if (get_order(size) >= MAX_ORDER) 219 219 return NULL; 220 220 221 - return dma_zalloc_coherent(&cd->pci_dev->dev, size, dma_handle, 222 - GFP_KERNEL); 221 + return dma_alloc_coherent(&cd->pci_dev->dev, size, dma_handle, 222 + GFP_KERNEL); 223 223 } 224 224 225 225 void __genwqe_free_consistent(struct genwqe_dev *cd, size_t size,
+3 -2
drivers/mmc/host/sdhci.c
··· 3763 3763 * Use zalloc to zero the reserved high 32-bits of 128-bit 3764 3764 * descriptors so that they never need to be written. 3765 3765 */ 3766 - buf = dma_zalloc_coherent(mmc_dev(mmc), host->align_buffer_sz + 3767 - host->adma_table_sz, &dma, GFP_KERNEL); 3766 + buf = dma_alloc_coherent(mmc_dev(mmc), 3767 + host->align_buffer_sz + host->adma_table_sz, 3768 + &dma, GFP_KERNEL); 3768 3769 if (!buf) { 3769 3770 pr_warn("%s: Unable to allocate ADMA buffers - falling back to standard DMA\n", 3770 3771 mmc_hostname(mmc));
+1 -1
drivers/mtd/mtdcore.c
··· 522 522 mtd->nvmem = nvmem_register(&config); 523 523 if (IS_ERR(mtd->nvmem)) { 524 524 /* Just ignore if there is no NVMEM support in the kernel */ 525 - if (PTR_ERR(mtd->nvmem) == -ENOSYS) { 525 + if (PTR_ERR(mtd->nvmem) == -EOPNOTSUPP) { 526 526 mtd->nvmem = NULL; 527 527 } else { 528 528 dev_err(&mtd->dev, "Failed to register NVMEM device\n");
+1 -1
drivers/mtd/mtdcore.h
··· 7 7 extern struct mutex mtd_table_mutex; 8 8 9 9 struct mtd_info *__mtd_next_device(int i); 10 - int add_mtd_device(struct mtd_info *mtd); 10 + int __must_check add_mtd_device(struct mtd_info *mtd); 11 11 int del_mtd_device(struct mtd_info *mtd); 12 12 int add_mtd_partitions(struct mtd_info *, const struct mtd_partition *, int); 13 13 int del_mtd_partitions(struct mtd_info *);
+31 -5
drivers/mtd/mtdpart.c
··· 618 618 list_add(&new->list, &mtd_partitions); 619 619 mutex_unlock(&mtd_partitions_mutex); 620 620 621 - add_mtd_device(&new->mtd); 621 + ret = add_mtd_device(&new->mtd); 622 + if (ret) 623 + goto err_remove_part; 622 624 623 625 mtd_add_partition_attrs(new); 626 + 627 + return 0; 628 + 629 + err_remove_part: 630 + mutex_lock(&mtd_partitions_mutex); 631 + list_del(&new->list); 632 + mutex_unlock(&mtd_partitions_mutex); 633 + 634 + free_partition(new); 635 + pr_info("%s:%i\n", __func__, __LINE__); 624 636 625 637 return ret; 626 638 } ··· 724 712 { 725 713 struct mtd_part *slave; 726 714 uint64_t cur_offset = 0; 727 - int i; 715 + int i, ret; 728 716 729 717 printk(KERN_NOTICE "Creating %d MTD partitions on \"%s\":\n", nbparts, master->name); 730 718 731 719 for (i = 0; i < nbparts; i++) { 732 720 slave = allocate_partition(master, parts + i, i, cur_offset); 733 721 if (IS_ERR(slave)) { 734 - del_mtd_partitions(master); 735 - return PTR_ERR(slave); 722 + ret = PTR_ERR(slave); 723 + goto err_del_partitions; 736 724 } 737 725 738 726 mutex_lock(&mtd_partitions_mutex); 739 727 list_add(&slave->list, &mtd_partitions); 740 728 mutex_unlock(&mtd_partitions_mutex); 741 729 742 - add_mtd_device(&slave->mtd); 730 + ret = add_mtd_device(&slave->mtd); 731 + if (ret) { 732 + mutex_lock(&mtd_partitions_mutex); 733 + list_del(&slave->list); 734 + mutex_unlock(&mtd_partitions_mutex); 735 + 736 + free_partition(slave); 737 + goto err_del_partitions; 738 + } 739 + 743 740 mtd_add_partition_attrs(slave); 744 741 /* Look for subpartitions */ 745 742 parse_mtd_partitions(&slave->mtd, parts[i].types, NULL); ··· 757 736 } 758 737 759 738 return 0; 739 + 740 + err_del_partitions: 741 + del_mtd_partitions(master); 742 + 743 + return ret; 760 744 } 761 745 762 746 static DEFINE_SPINLOCK(part_parser_lock);
+10 -10
drivers/mtd/nand/raw/qcom_nandc.c
··· 2833 2833 if (ret) 2834 2834 return ret; 2835 2835 2836 + if (nandc->props->is_bam) { 2837 + free_bam_transaction(nandc); 2838 + nandc->bam_txn = alloc_bam_transaction(nandc); 2839 + if (!nandc->bam_txn) { 2840 + dev_err(nandc->dev, 2841 + "failed to allocate bam transaction\n"); 2842 + return -ENOMEM; 2843 + } 2844 + } 2845 + 2836 2846 ret = mtd_device_register(mtd, NULL, 0); 2837 2847 if (ret) 2838 2848 nand_cleanup(chip); ··· 2856 2846 struct device_node *dn = dev->of_node, *child; 2857 2847 struct qcom_nand_host *host; 2858 2848 int ret; 2859 - 2860 - if (nandc->props->is_bam) { 2861 - free_bam_transaction(nandc); 2862 - nandc->bam_txn = alloc_bam_transaction(nandc); 2863 - if (!nandc->bam_txn) { 2864 - dev_err(nandc->dev, 2865 - "failed to allocate bam transaction\n"); 2866 - return -ENOMEM; 2867 - } 2868 - } 2869 2849 2870 2850 for_each_available_child_of_node(dn, child) { 2871 2851 host = devm_kzalloc(dev, sizeof(*host), GFP_KERNEL);
+6 -6
drivers/net/ethernet/aeroflex/greth.c
··· 1433 1433 } 1434 1434 1435 1435 /* Allocate TX descriptor ring in coherent memory */ 1436 - greth->tx_bd_base = dma_zalloc_coherent(greth->dev, 1024, 1437 - &greth->tx_bd_base_phys, 1438 - GFP_KERNEL); 1436 + greth->tx_bd_base = dma_alloc_coherent(greth->dev, 1024, 1437 + &greth->tx_bd_base_phys, 1438 + GFP_KERNEL); 1439 1439 if (!greth->tx_bd_base) { 1440 1440 err = -ENOMEM; 1441 1441 goto error3; 1442 1442 } 1443 1443 1444 1444 /* Allocate RX descriptor ring in coherent memory */ 1445 - greth->rx_bd_base = dma_zalloc_coherent(greth->dev, 1024, 1446 - &greth->rx_bd_base_phys, 1447 - GFP_KERNEL); 1445 + greth->rx_bd_base = dma_alloc_coherent(greth->dev, 1024, 1446 + &greth->rx_bd_base_phys, 1447 + GFP_KERNEL); 1448 1448 if (!greth->rx_bd_base) { 1449 1449 err = -ENOMEM; 1450 1450 goto error4;
+6 -6
drivers/net/ethernet/alacritech/slicoss.c
··· 795 795 size = stq->len * sizeof(*descs) + DESC_ALIGN_MASK; 796 796 797 797 for (i = 0; i < SLIC_NUM_STAT_DESC_ARRAYS; i++) { 798 - descs = dma_zalloc_coherent(&sdev->pdev->dev, size, &paddr, 799 - GFP_KERNEL); 798 + descs = dma_alloc_coherent(&sdev->pdev->dev, size, &paddr, 799 + GFP_KERNEL); 800 800 if (!descs) { 801 801 netdev_err(sdev->netdev, 802 802 "failed to allocate status descriptors\n"); ··· 1240 1240 struct slic_shmem_data *sm_data; 1241 1241 dma_addr_t paddr; 1242 1242 1243 - sm_data = dma_zalloc_coherent(&sdev->pdev->dev, sizeof(*sm_data), 1244 - &paddr, GFP_KERNEL); 1243 + sm_data = dma_alloc_coherent(&sdev->pdev->dev, sizeof(*sm_data), 1244 + &paddr, GFP_KERNEL); 1245 1245 if (!sm_data) { 1246 1246 dev_err(&sdev->pdev->dev, "failed to allocate shared memory\n"); 1247 1247 return -ENOMEM; ··· 1621 1621 int err = 0; 1622 1622 u8 *mac[2]; 1623 1623 1624 - eeprom = dma_zalloc_coherent(&sdev->pdev->dev, SLIC_EEPROM_SIZE, 1625 - &paddr, GFP_KERNEL); 1624 + eeprom = dma_alloc_coherent(&sdev->pdev->dev, SLIC_EEPROM_SIZE, 1625 + &paddr, GFP_KERNEL); 1626 1626 if (!eeprom) 1627 1627 return -ENOMEM; 1628 1628
+31 -30
drivers/net/ethernet/amazon/ena/ena_com.c
··· 111 111 struct ena_com_admin_sq *sq = &queue->sq; 112 112 u16 size = ADMIN_SQ_SIZE(queue->q_depth); 113 113 114 - sq->entries = dma_zalloc_coherent(queue->q_dmadev, size, &sq->dma_addr, 115 - GFP_KERNEL); 114 + sq->entries = dma_alloc_coherent(queue->q_dmadev, size, &sq->dma_addr, 115 + GFP_KERNEL); 116 116 117 117 if (!sq->entries) { 118 118 pr_err("memory allocation failed"); ··· 133 133 struct ena_com_admin_cq *cq = &queue->cq; 134 134 u16 size = ADMIN_CQ_SIZE(queue->q_depth); 135 135 136 - cq->entries = dma_zalloc_coherent(queue->q_dmadev, size, &cq->dma_addr, 137 - GFP_KERNEL); 136 + cq->entries = dma_alloc_coherent(queue->q_dmadev, size, &cq->dma_addr, 137 + GFP_KERNEL); 138 138 139 139 if (!cq->entries) { 140 140 pr_err("memory allocation failed"); ··· 156 156 157 157 dev->aenq.q_depth = ENA_ASYNC_QUEUE_DEPTH; 158 158 size = ADMIN_AENQ_SIZE(ENA_ASYNC_QUEUE_DEPTH); 159 - aenq->entries = dma_zalloc_coherent(dev->dmadev, size, &aenq->dma_addr, 160 - GFP_KERNEL); 159 + aenq->entries = dma_alloc_coherent(dev->dmadev, size, &aenq->dma_addr, 160 + GFP_KERNEL); 161 161 162 162 if (!aenq->entries) { 163 163 pr_err("memory allocation failed"); ··· 344 344 dev_node = dev_to_node(ena_dev->dmadev); 345 345 set_dev_node(ena_dev->dmadev, ctx->numa_node); 346 346 io_sq->desc_addr.virt_addr = 347 - dma_zalloc_coherent(ena_dev->dmadev, size, 348 - &io_sq->desc_addr.phys_addr, 349 - GFP_KERNEL); 347 + dma_alloc_coherent(ena_dev->dmadev, size, 348 + &io_sq->desc_addr.phys_addr, 349 + GFP_KERNEL); 350 350 set_dev_node(ena_dev->dmadev, dev_node); 351 351 if (!io_sq->desc_addr.virt_addr) { 352 352 io_sq->desc_addr.virt_addr = 353 - dma_zalloc_coherent(ena_dev->dmadev, size, 354 - &io_sq->desc_addr.phys_addr, 355 - GFP_KERNEL); 353 + dma_alloc_coherent(ena_dev->dmadev, size, 354 + &io_sq->desc_addr.phys_addr, 355 + GFP_KERNEL); 356 356 } 357 357 358 358 if (!io_sq->desc_addr.virt_addr) { ··· 425 425 prev_node = dev_to_node(ena_dev->dmadev); 426 426 set_dev_node(ena_dev->dmadev, ctx->numa_node); 427 427 io_cq->cdesc_addr.virt_addr = 428 - dma_zalloc_coherent(ena_dev->dmadev, size, 429 - &io_cq->cdesc_addr.phys_addr, GFP_KERNEL); 428 + dma_alloc_coherent(ena_dev->dmadev, size, 429 + &io_cq->cdesc_addr.phys_addr, GFP_KERNEL); 430 430 set_dev_node(ena_dev->dmadev, prev_node); 431 431 if (!io_cq->cdesc_addr.virt_addr) { 432 432 io_cq->cdesc_addr.virt_addr = 433 - dma_zalloc_coherent(ena_dev->dmadev, size, 434 - &io_cq->cdesc_addr.phys_addr, 435 - GFP_KERNEL); 433 + dma_alloc_coherent(ena_dev->dmadev, size, 434 + &io_cq->cdesc_addr.phys_addr, 435 + GFP_KERNEL); 436 436 } 437 437 438 438 if (!io_cq->cdesc_addr.virt_addr) { ··· 1026 1026 struct ena_rss *rss = &ena_dev->rss; 1027 1027 1028 1028 rss->hash_key = 1029 - dma_zalloc_coherent(ena_dev->dmadev, sizeof(*rss->hash_key), 1030 - &rss->hash_key_dma_addr, GFP_KERNEL); 1029 + dma_alloc_coherent(ena_dev->dmadev, sizeof(*rss->hash_key), 1030 + &rss->hash_key_dma_addr, GFP_KERNEL); 1031 1031 1032 1032 if (unlikely(!rss->hash_key)) 1033 1033 return -ENOMEM; ··· 1050 1050 struct ena_rss *rss = &ena_dev->rss; 1051 1051 1052 1052 rss->hash_ctrl = 1053 - dma_zalloc_coherent(ena_dev->dmadev, sizeof(*rss->hash_ctrl), 1054 - &rss->hash_ctrl_dma_addr, GFP_KERNEL); 1053 + dma_alloc_coherent(ena_dev->dmadev, sizeof(*rss->hash_ctrl), 1054 + &rss->hash_ctrl_dma_addr, GFP_KERNEL); 1055 1055 1056 1056 if (unlikely(!rss->hash_ctrl)) 1057 1057 return -ENOMEM; ··· 1094 1094 sizeof(struct ena_admin_rss_ind_table_entry); 1095 1095 1096 1096 rss->rss_ind_tbl = 1097 - dma_zalloc_coherent(ena_dev->dmadev, tbl_size, 1098 - &rss->rss_ind_tbl_dma_addr, GFP_KERNEL); 1097 + dma_alloc_coherent(ena_dev->dmadev, tbl_size, 1098 + &rss->rss_ind_tbl_dma_addr, GFP_KERNEL); 1099 1099 if (unlikely(!rss->rss_ind_tbl)) 1100 1100 goto mem_err1; 1101 1101 ··· 1649 1649 1650 1650 spin_lock_init(&mmio_read->lock); 1651 1651 mmio_read->read_resp = 1652 - dma_zalloc_coherent(ena_dev->dmadev, 1653 - sizeof(*mmio_read->read_resp), 1654 - &mmio_read->read_resp_dma_addr, GFP_KERNEL); 1652 + dma_alloc_coherent(ena_dev->dmadev, 1653 + sizeof(*mmio_read->read_resp), 1654 + &mmio_read->read_resp_dma_addr, GFP_KERNEL); 1655 1655 if (unlikely(!mmio_read->read_resp)) 1656 1656 goto err; 1657 1657 ··· 2623 2623 struct ena_host_attribute *host_attr = &ena_dev->host_attr; 2624 2624 2625 2625 host_attr->host_info = 2626 - dma_zalloc_coherent(ena_dev->dmadev, SZ_4K, 2627 - &host_attr->host_info_dma_addr, GFP_KERNEL); 2626 + dma_alloc_coherent(ena_dev->dmadev, SZ_4K, 2627 + &host_attr->host_info_dma_addr, GFP_KERNEL); 2628 2628 if (unlikely(!host_attr->host_info)) 2629 2629 return -ENOMEM; 2630 2630 ··· 2641 2641 struct ena_host_attribute *host_attr = &ena_dev->host_attr; 2642 2642 2643 2643 host_attr->debug_area_virt_addr = 2644 - dma_zalloc_coherent(ena_dev->dmadev, debug_area_size, 2645 - &host_attr->debug_area_dma_addr, GFP_KERNEL); 2644 + dma_alloc_coherent(ena_dev->dmadev, debug_area_size, 2645 + &host_attr->debug_area_dma_addr, 2646 + GFP_KERNEL); 2646 2647 if (unlikely(!host_attr->debug_area_virt_addr)) { 2647 2648 host_attr->debug_area_size = 0; 2648 2649 return -ENOMEM;
+4 -4
drivers/net/ethernet/apm/xgene-v2/main.c
··· 206 206 } 207 207 208 208 /* Packet buffers should be 64B aligned */ 209 - pkt_buf = dma_zalloc_coherent(dev, XGENE_ENET_STD_MTU, &dma_addr, 210 - GFP_ATOMIC); 209 + pkt_buf = dma_alloc_coherent(dev, XGENE_ENET_STD_MTU, &dma_addr, 210 + GFP_ATOMIC); 211 211 if (unlikely(!pkt_buf)) { 212 212 dev_kfree_skb_any(skb); 213 213 return NETDEV_TX_OK; ··· 428 428 ring->ndev = ndev; 429 429 430 430 size = XGENE_ENET_DESC_SIZE * XGENE_ENET_NUM_DESC; 431 - ring->desc_addr = dma_zalloc_coherent(dev, size, &ring->dma_addr, 432 - GFP_KERNEL); 431 + ring->desc_addr = dma_alloc_coherent(dev, size, &ring->dma_addr, 432 + GFP_KERNEL); 433 433 if (!ring->desc_addr) 434 434 goto err; 435 435
+3 -4
drivers/net/ethernet/atheros/alx/main.c
··· 660 660 alx->num_txq + 661 661 sizeof(struct alx_rrd) * alx->rx_ringsz + 662 662 sizeof(struct alx_rfd) * alx->rx_ringsz; 663 - alx->descmem.virt = dma_zalloc_coherent(&alx->hw.pdev->dev, 664 - alx->descmem.size, 665 - &alx->descmem.dma, 666 - GFP_KERNEL); 663 + alx->descmem.virt = dma_alloc_coherent(&alx->hw.pdev->dev, 664 + alx->descmem.size, 665 + &alx->descmem.dma, GFP_KERNEL); 667 666 if (!alx->descmem.virt) 668 667 return -ENOMEM; 669 668
+2 -2
drivers/net/ethernet/atheros/atl1c/atl1c_main.c
··· 1019 1019 sizeof(struct atl1c_recv_ret_status) * rx_desc_count + 1020 1020 8 * 4; 1021 1021 1022 - ring_header->desc = dma_zalloc_coherent(&pdev->dev, ring_header->size, 1023 - &ring_header->dma, GFP_KERNEL); 1022 + ring_header->desc = dma_alloc_coherent(&pdev->dev, ring_header->size, 1023 + &ring_header->dma, GFP_KERNEL); 1024 1024 if (unlikely(!ring_header->desc)) { 1025 1025 dev_err(&pdev->dev, "could not get memory for DMA buffer\n"); 1026 1026 goto err_nomem;
+4 -4
drivers/net/ethernet/broadcom/bcm63xx_enet.c
··· 936 936 937 937 /* allocate rx dma ring */ 938 938 size = priv->rx_ring_size * sizeof(struct bcm_enet_desc); 939 - p = dma_zalloc_coherent(kdev, size, &priv->rx_desc_dma, GFP_KERNEL); 939 + p = dma_alloc_coherent(kdev, size, &priv->rx_desc_dma, GFP_KERNEL); 940 940 if (!p) { 941 941 ret = -ENOMEM; 942 942 goto out_freeirq_tx; ··· 947 947 948 948 /* allocate tx dma ring */ 949 949 size = priv->tx_ring_size * sizeof(struct bcm_enet_desc); 950 - p = dma_zalloc_coherent(kdev, size, &priv->tx_desc_dma, GFP_KERNEL); 950 + p = dma_alloc_coherent(kdev, size, &priv->tx_desc_dma, GFP_KERNEL); 951 951 if (!p) { 952 952 ret = -ENOMEM; 953 953 goto out_free_rx_ring; ··· 2120 2120 2121 2121 /* allocate rx dma ring */ 2122 2122 size = priv->rx_ring_size * sizeof(struct bcm_enet_desc); 2123 - p = dma_zalloc_coherent(kdev, size, &priv->rx_desc_dma, GFP_KERNEL); 2123 + p = dma_alloc_coherent(kdev, size, &priv->rx_desc_dma, GFP_KERNEL); 2124 2124 if (!p) { 2125 2125 dev_err(kdev, "cannot allocate rx ring %u\n", size); 2126 2126 ret = -ENOMEM; ··· 2132 2132 2133 2133 /* allocate tx dma ring */ 2134 2134 size = priv->tx_ring_size * sizeof(struct bcm_enet_desc); 2135 - p = dma_zalloc_coherent(kdev, size, &priv->tx_desc_dma, GFP_KERNEL); 2135 + p = dma_alloc_coherent(kdev, size, &priv->tx_desc_dma, GFP_KERNEL); 2136 2136 if (!p) { 2137 2137 dev_err(kdev, "cannot allocate tx ring\n"); 2138 2138 ret = -ENOMEM;
+2 -2
drivers/net/ethernet/broadcom/bcmsysport.c
··· 1506 1506 /* We just need one DMA descriptor which is DMA-able, since writing to 1507 1507 * the port will allocate a new descriptor in its internal linked-list 1508 1508 */ 1509 - p = dma_zalloc_coherent(kdev, sizeof(struct dma_desc), &ring->desc_dma, 1510 - GFP_KERNEL); 1509 + p = dma_alloc_coherent(kdev, sizeof(struct dma_desc), &ring->desc_dma, 1510 + GFP_KERNEL); 1511 1511 if (!p) { 1512 1512 netif_err(priv, hw, priv->netdev, "DMA alloc failed\n"); 1513 1513 return -ENOMEM;
+6 -6
drivers/net/ethernet/broadcom/bgmac.c
··· 634 634 635 635 /* Alloc ring of descriptors */ 636 636 size = BGMAC_TX_RING_SLOTS * sizeof(struct bgmac_dma_desc); 637 - ring->cpu_base = dma_zalloc_coherent(dma_dev, size, 638 - &ring->dma_base, 639 - GFP_KERNEL); 637 + ring->cpu_base = dma_alloc_coherent(dma_dev, size, 638 + &ring->dma_base, 639 + GFP_KERNEL); 640 640 if (!ring->cpu_base) { 641 641 dev_err(bgmac->dev, "Allocation of TX ring 0x%X failed\n", 642 642 ring->mmio_base); ··· 659 659 660 660 /* Alloc ring of descriptors */ 661 661 size = BGMAC_RX_RING_SLOTS * sizeof(struct bgmac_dma_desc); 662 - ring->cpu_base = dma_zalloc_coherent(dma_dev, size, 663 - &ring->dma_base, 664 - GFP_KERNEL); 662 + ring->cpu_base = dma_alloc_coherent(dma_dev, size, 663 + &ring->dma_base, 664 + GFP_KERNEL); 665 665 if (!ring->cpu_base) { 666 666 dev_err(bgmac->dev, "Allocation of RX ring 0x%X failed\n", 667 667 ring->mmio_base);
+2 -2
drivers/net/ethernet/broadcom/bnx2.c
··· 844 844 BNX2_SBLK_MSIX_ALIGN_SIZE); 845 845 bp->status_stats_size = status_blk_size + 846 846 sizeof(struct statistics_block); 847 - status_blk = dma_zalloc_coherent(&bp->pdev->dev, bp->status_stats_size, 848 - &bp->status_blk_mapping, GFP_KERNEL); 847 + status_blk = dma_alloc_coherent(&bp->pdev->dev, bp->status_stats_size, 848 + &bp->status_blk_mapping, GFP_KERNEL); 849 849 if (!status_blk) 850 850 return -ENOMEM; 851 851
+1 -1
drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
··· 2081 2081 bool is_pf); 2082 2082 2083 2083 #define BNX2X_ILT_ZALLOC(x, y, size) \ 2084 - x = dma_zalloc_coherent(&bp->pdev->dev, size, y, GFP_KERNEL) 2084 + x = dma_alloc_coherent(&bp->pdev->dev, size, y, GFP_KERNEL) 2085 2085 2086 2086 #define BNX2X_ILT_FREE(x, y, size) \ 2087 2087 do { \
+1 -1
drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h
··· 52 52 53 53 #define BNX2X_PCI_ALLOC(y, size) \ 54 54 ({ \ 55 - void *x = dma_zalloc_coherent(&bp->pdev->dev, size, y, GFP_KERNEL); \ 55 + void *x = dma_alloc_coherent(&bp->pdev->dev, size, y, GFP_KERNEL); \ 56 56 if (x) \ 57 57 DP(NETIF_MSG_HW, \ 58 58 "BNX2X_PCI_ALLOC: Physical %Lx Virtual %p\n", \
+8 -8
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 3449 3449 goto alloc_tx_ext_stats; 3450 3450 3451 3451 bp->hw_rx_port_stats_ext = 3452 - dma_zalloc_coherent(&pdev->dev, 3453 - sizeof(struct rx_port_stats_ext), 3454 - &bp->hw_rx_port_stats_ext_map, 3455 - GFP_KERNEL); 3452 + dma_alloc_coherent(&pdev->dev, 3453 + sizeof(struct rx_port_stats_ext), 3454 + &bp->hw_rx_port_stats_ext_map, 3455 + GFP_KERNEL); 3456 3456 if (!bp->hw_rx_port_stats_ext) 3457 3457 return 0; 3458 3458 ··· 3462 3462 3463 3463 if (bp->hwrm_spec_code >= 0x10902) { 3464 3464 bp->hw_tx_port_stats_ext = 3465 - dma_zalloc_coherent(&pdev->dev, 3466 - sizeof(struct tx_port_stats_ext), 3467 - &bp->hw_tx_port_stats_ext_map, 3468 - GFP_KERNEL); 3465 + dma_alloc_coherent(&pdev->dev, 3466 + sizeof(struct tx_port_stats_ext), 3467 + &bp->hw_tx_port_stats_ext_map, 3468 + GFP_KERNEL); 3469 3469 } 3470 3470 bp->flags |= BNXT_FLAG_PORT_STATS_EXT; 3471 3471 }
+2 -2
drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.c
··· 316 316 317 317 n = IEEE_8021QAZ_MAX_TCS; 318 318 data_len = sizeof(*data) + sizeof(*fw_app) * n; 319 - data = dma_zalloc_coherent(&bp->pdev->dev, data_len, &mapping, 320 - GFP_KERNEL); 319 + data = dma_alloc_coherent(&bp->pdev->dev, data_len, &mapping, 320 + GFP_KERNEL); 321 321 if (!data) 322 322 return -ENOMEM; 323 323
+2 -2
drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c
··· 85 85 return -EFAULT; 86 86 } 87 87 88 - data_addr = dma_zalloc_coherent(&bp->pdev->dev, bytesize, 89 - &data_dma_addr, GFP_KERNEL); 88 + data_addr = dma_alloc_coherent(&bp->pdev->dev, bytesize, 89 + &data_dma_addr, GFP_KERNEL); 90 90 if (!data_addr) 91 91 return -ENOMEM; 92 92
+11 -11
drivers/net/ethernet/broadcom/tg3.c
··· 8712 8712 if (!i && tg3_flag(tp, ENABLE_RSS)) 8713 8713 continue; 8714 8714 8715 - tnapi->rx_rcb = dma_zalloc_coherent(&tp->pdev->dev, 8716 - TG3_RX_RCB_RING_BYTES(tp), 8717 - &tnapi->rx_rcb_mapping, 8718 - GFP_KERNEL); 8715 + tnapi->rx_rcb = dma_alloc_coherent(&tp->pdev->dev, 8716 + TG3_RX_RCB_RING_BYTES(tp), 8717 + &tnapi->rx_rcb_mapping, 8718 + GFP_KERNEL); 8719 8719 if (!tnapi->rx_rcb) 8720 8720 goto err_out; 8721 8721 } ··· 8768 8768 { 8769 8769 int i; 8770 8770 8771 - tp->hw_stats = dma_zalloc_coherent(&tp->pdev->dev, 8772 - sizeof(struct tg3_hw_stats), 8773 - &tp->stats_mapping, GFP_KERNEL); 8771 + tp->hw_stats = dma_alloc_coherent(&tp->pdev->dev, 8772 + sizeof(struct tg3_hw_stats), 8773 + &tp->stats_mapping, GFP_KERNEL); 8774 8774 if (!tp->hw_stats) 8775 8775 goto err_out; 8776 8776 ··· 8778 8778 struct tg3_napi *tnapi = &tp->napi[i]; 8779 8779 struct tg3_hw_status *sblk; 8780 8780 8781 - tnapi->hw_status = dma_zalloc_coherent(&tp->pdev->dev, 8782 - TG3_HW_STATUS_SIZE, 8783 - &tnapi->status_mapping, 8784 - GFP_KERNEL); 8781 + tnapi->hw_status = dma_alloc_coherent(&tp->pdev->dev, 8782 + TG3_HW_STATUS_SIZE, 8783 + &tnapi->status_mapping, 8784 + GFP_KERNEL); 8785 8785 if (!tnapi->hw_status) 8786 8786 goto err_out; 8787 8787
+1 -1
drivers/net/ethernet/cavium/thunder/nicvf_queues.c
··· 59 59 dmem->q_len = q_len; 60 60 dmem->size = (desc_size * q_len) + align_bytes; 61 61 /* Save address, need it while freeing */ 62 - dmem->unalign_base = dma_zalloc_coherent(&nic->pdev->dev, dmem->size, 62 + dmem->unalign_base = dma_alloc_coherent(&nic->pdev->dev, dmem->size, 63 63 &dmem->dma, GFP_KERNEL); 64 64 if (!dmem->unalign_base) 65 65 return -ENOMEM;
+1 -1
drivers/net/ethernet/chelsio/cxgb3/sge.c
··· 620 620 { 621 621 size_t len = nelem * elem_size; 622 622 void *s = NULL; 623 - void *p = dma_zalloc_coherent(&pdev->dev, len, phys, GFP_KERNEL); 623 + void *p = dma_alloc_coherent(&pdev->dev, len, phys, GFP_KERNEL); 624 624 625 625 if (!p) 626 626 return NULL;
+1 -1
drivers/net/ethernet/chelsio/cxgb4/sge.c
··· 694 694 { 695 695 size_t len = nelem * elem_size + stat_size; 696 696 void *s = NULL; 697 - void *p = dma_zalloc_coherent(dev, len, phys, GFP_KERNEL); 697 + void *p = dma_alloc_coherent(dev, len, phys, GFP_KERNEL); 698 698 699 699 if (!p) 700 700 return NULL;
+1 -1
drivers/net/ethernet/chelsio/cxgb4vf/sge.c
··· 756 756 * Allocate the hardware ring and PCI DMA bus address space for said. 757 757 */ 758 758 size_t hwlen = nelem * hwsize + stat_size; 759 - void *hwring = dma_zalloc_coherent(dev, hwlen, busaddrp, GFP_KERNEL); 759 + void *hwring = dma_alloc_coherent(dev, hwlen, busaddrp, GFP_KERNEL); 760 760 761 761 if (!hwring) 762 762 return NULL;
+34 -34
drivers/net/ethernet/emulex/benet/be_cmds.c
··· 1808 1808 total_size = buf_len; 1809 1809 1810 1810 get_fat_cmd.size = sizeof(struct be_cmd_req_get_fat) + 60*1024; 1811 - get_fat_cmd.va = dma_zalloc_coherent(&adapter->pdev->dev, 1812 - get_fat_cmd.size, 1813 - &get_fat_cmd.dma, GFP_ATOMIC); 1811 + get_fat_cmd.va = dma_alloc_coherent(&adapter->pdev->dev, 1812 + get_fat_cmd.size, 1813 + &get_fat_cmd.dma, GFP_ATOMIC); 1814 1814 if (!get_fat_cmd.va) 1815 1815 return -ENOMEM; 1816 1816 ··· 2302 2302 return -EINVAL; 2303 2303 2304 2304 cmd.size = sizeof(struct be_cmd_resp_port_type); 2305 - cmd.va = dma_zalloc_coherent(&adapter->pdev->dev, cmd.size, &cmd.dma, 2306 - GFP_ATOMIC); 2305 + cmd.va = dma_alloc_coherent(&adapter->pdev->dev, cmd.size, &cmd.dma, 2306 + GFP_ATOMIC); 2307 2307 if (!cmd.va) { 2308 2308 dev_err(&adapter->pdev->dev, "Memory allocation failed\n"); 2309 2309 return -ENOMEM; ··· 3066 3066 3067 3067 flash_cmd.size = sizeof(struct lancer_cmd_req_write_object) 3068 3068 + LANCER_FW_DOWNLOAD_CHUNK; 3069 - flash_cmd.va = dma_zalloc_coherent(dev, flash_cmd.size, 3070 - &flash_cmd.dma, GFP_KERNEL); 3069 + flash_cmd.va = dma_alloc_coherent(dev, flash_cmd.size, &flash_cmd.dma, 3070 + GFP_KERNEL); 3071 3071 if (!flash_cmd.va) 3072 3072 return -ENOMEM; 3073 3073 ··· 3184 3184 } 3185 3185 3186 3186 flash_cmd.size = sizeof(struct be_cmd_write_flashrom); 3187 - flash_cmd.va = dma_zalloc_coherent(dev, flash_cmd.size, &flash_cmd.dma, 3188 - GFP_KERNEL); 3187 + flash_cmd.va = dma_alloc_coherent(dev, flash_cmd.size, &flash_cmd.dma, 3188 + GFP_KERNEL); 3189 3189 if (!flash_cmd.va) 3190 3190 return -ENOMEM; 3191 3191 ··· 3435 3435 goto err; 3436 3436 } 3437 3437 cmd.size = sizeof(struct be_cmd_req_get_phy_info); 3438 - cmd.va = dma_zalloc_coherent(&adapter->pdev->dev, cmd.size, &cmd.dma, 3439 - GFP_ATOMIC); 3438 + cmd.va = dma_alloc_coherent(&adapter->pdev->dev, cmd.size, &cmd.dma, 3439 + GFP_ATOMIC); 3440 3440 if (!cmd.va) { 3441 3441 dev_err(&adapter->pdev->dev, "Memory alloc failure\n"); 3442 3442 status = -ENOMEM;
··· 3522 3522 3523 3523 memset(&attribs_cmd, 0, sizeof(struct be_dma_mem)); 3524 3524 attribs_cmd.size = sizeof(struct be_cmd_resp_cntl_attribs); 3525 - attribs_cmd.va = dma_zalloc_coherent(&adapter->pdev->dev, 3526 - attribs_cmd.size, 3527 - &attribs_cmd.dma, GFP_ATOMIC); 3525 + attribs_cmd.va = dma_alloc_coherent(&adapter->pdev->dev, 3526 + attribs_cmd.size, 3527 + &attribs_cmd.dma, GFP_ATOMIC); 3528 3528 if (!attribs_cmd.va) { 3529 3529 dev_err(&adapter->pdev->dev, "Memory allocation failure\n"); 3530 3530 status = -ENOMEM; ··· 3699 3699 3700 3700 memset(&get_mac_list_cmd, 0, sizeof(struct be_dma_mem)); 3701 3701 get_mac_list_cmd.size = sizeof(struct be_cmd_resp_get_mac_list); 3702 - get_mac_list_cmd.va = dma_zalloc_coherent(&adapter->pdev->dev, 3703 - get_mac_list_cmd.size, 3704 - &get_mac_list_cmd.dma, 3705 - GFP_ATOMIC); 3702 + get_mac_list_cmd.va = dma_alloc_coherent(&adapter->pdev->dev, 3703 + get_mac_list_cmd.size, 3704 + &get_mac_list_cmd.dma, 3705 + GFP_ATOMIC); 3706 3706 3707 3707 if (!get_mac_list_cmd.va) { 3708 3708 dev_err(&adapter->pdev->dev, ··· 3829 3829 3830 3830 memset(&cmd, 0, sizeof(struct be_dma_mem)); 3831 3831 cmd.size = sizeof(struct be_cmd_req_set_mac_list); 3832 - cmd.va = dma_zalloc_coherent(&adapter->pdev->dev, cmd.size, &cmd.dma, 3833 - GFP_KERNEL); 3832 + cmd.va = dma_alloc_coherent(&adapter->pdev->dev, cmd.size, &cmd.dma, 3833 + GFP_KERNEL); 3834 3834 if (!cmd.va) 3835 3835 return -ENOMEM; 3836 3836 ··· 4035 4035 4036 4036 memset(&cmd, 0, sizeof(struct be_dma_mem)); 4037 4037 cmd.size = sizeof(struct be_cmd_resp_acpi_wol_magic_config_v1); 4038 - cmd.va = dma_zalloc_coherent(&adapter->pdev->dev, cmd.size, &cmd.dma, 4039 - GFP_ATOMIC); 4038 + cmd.va = dma_alloc_coherent(&adapter->pdev->dev, cmd.size, &cmd.dma, 4039 + GFP_ATOMIC); 4040 4040 if (!cmd.va) { 4041 4041 dev_err(&adapter->pdev->dev, "Memory allocation failure\n"); 4042 4042 status = -ENOMEM; ··· 4089 4089 4090 4090 memset(&extfat_cmd, 0, sizeof(struct be_dma_mem));
4091 4091 extfat_cmd.size = sizeof(struct be_cmd_resp_get_ext_fat_caps); 4092 - extfat_cmd.va = dma_zalloc_coherent(&adapter->pdev->dev, 4093 - extfat_cmd.size, &extfat_cmd.dma, 4094 - GFP_ATOMIC); 4092 + extfat_cmd.va = dma_alloc_coherent(&adapter->pdev->dev, 4093 + extfat_cmd.size, &extfat_cmd.dma, 4094 + GFP_ATOMIC); 4095 4095 if (!extfat_cmd.va) 4096 4096 return -ENOMEM; 4097 4097 ··· 4127 4127 4128 4128 memset(&extfat_cmd, 0, sizeof(struct be_dma_mem)); 4129 4129 extfat_cmd.size = sizeof(struct be_cmd_resp_get_ext_fat_caps); 4130 - extfat_cmd.va = dma_zalloc_coherent(&adapter->pdev->dev, 4131 - extfat_cmd.size, &extfat_cmd.dma, 4132 - GFP_ATOMIC); 4130 + extfat_cmd.va = dma_alloc_coherent(&adapter->pdev->dev, 4131 + extfat_cmd.size, &extfat_cmd.dma, 4132 + GFP_ATOMIC); 4133 4133 4134 4134 if (!extfat_cmd.va) { 4135 4135 dev_err(&adapter->pdev->dev, "%s: Memory allocation failure\n", ··· 4354 4354 4355 4355 memset(&cmd, 0, sizeof(struct be_dma_mem)); 4356 4356 cmd.size = sizeof(struct be_cmd_resp_get_func_config); 4357 - cmd.va = dma_zalloc_coherent(&adapter->pdev->dev, cmd.size, &cmd.dma, 4358 - GFP_ATOMIC); 4357 + cmd.va = dma_alloc_coherent(&adapter->pdev->dev, cmd.size, &cmd.dma, 4358 + GFP_ATOMIC); 4359 4359 if (!cmd.va) { 4360 4360 dev_err(&adapter->pdev->dev, "Memory alloc failure\n"); 4361 4361 status = -ENOMEM; ··· 4452 4452 4453 4453 memset(&cmd, 0, sizeof(struct be_dma_mem)); 4454 4454 cmd.size = sizeof(struct be_cmd_resp_get_profile_config); 4455 - cmd.va = dma_zalloc_coherent(&adapter->pdev->dev, cmd.size, &cmd.dma, 4456 - GFP_ATOMIC); 4455 + cmd.va = dma_alloc_coherent(&adapter->pdev->dev, cmd.size, &cmd.dma, 4456 + GFP_ATOMIC); 4457 4457 if (!cmd.va) 4458 4458 return -ENOMEM; 4459 4459 ··· 4539 4539 4540 4540 memset(&cmd, 0, sizeof(struct be_dma_mem)); 4541 4541 cmd.size = sizeof(struct be_cmd_req_set_profile_config); 4542 - cmd.va = dma_zalloc_coherent(&adapter->pdev->dev, cmd.size, &cmd.dma, 4543 - GFP_ATOMIC); 4542 + cmd.va = dma_alloc_coherent(&adapter->pdev->dev, cmd.size, &cmd.dma,
4543 + GFP_ATOMIC); 4544 4544 if (!cmd.va) 4545 4545 return -ENOMEM; 4546 4546
+9 -9
drivers/net/ethernet/emulex/benet/be_ethtool.c
··· 274 274 int status = 0; 275 275 276 276 read_cmd.size = LANCER_READ_FILE_CHUNK; 277 - read_cmd.va = dma_zalloc_coherent(&adapter->pdev->dev, read_cmd.size, 278 - &read_cmd.dma, GFP_ATOMIC); 277 + read_cmd.va = dma_alloc_coherent(&adapter->pdev->dev, read_cmd.size, 278 + &read_cmd.dma, GFP_ATOMIC); 279 279 280 280 if (!read_cmd.va) { 281 281 dev_err(&adapter->pdev->dev, ··· 815 815 } 816 816 817 817 cmd.size = sizeof(struct be_cmd_req_acpi_wol_magic_config); 818 - cmd.va = dma_zalloc_coherent(dev, cmd.size, &cmd.dma, GFP_KERNEL); 818 + cmd.va = dma_alloc_coherent(dev, cmd.size, &cmd.dma, GFP_KERNEL); 819 819 if (!cmd.va) 820 820 return -ENOMEM; 821 821 ··· 851 851 }; 852 852 853 853 ddrdma_cmd.size = sizeof(struct be_cmd_req_ddrdma_test); 854 - ddrdma_cmd.va = dma_zalloc_coherent(&adapter->pdev->dev, 855 - ddrdma_cmd.size, &ddrdma_cmd.dma, 856 - GFP_KERNEL); 854 + ddrdma_cmd.va = dma_alloc_coherent(&adapter->pdev->dev, 855 + ddrdma_cmd.size, &ddrdma_cmd.dma, 856 + GFP_KERNEL); 857 857 if (!ddrdma_cmd.va) 858 858 return -ENOMEM; 859 859 ··· 1014 1014 1015 1015 memset(&eeprom_cmd, 0, sizeof(struct be_dma_mem)); 1016 1016 eeprom_cmd.size = sizeof(struct be_cmd_req_seeprom_read); 1017 - eeprom_cmd.va = dma_zalloc_coherent(&adapter->pdev->dev, 1018 - eeprom_cmd.size, &eeprom_cmd.dma, 1019 - GFP_KERNEL); 1017 + eeprom_cmd.va = dma_alloc_coherent(&adapter->pdev->dev, 1018 + eeprom_cmd.size, &eeprom_cmd.dma, 1019 + GFP_KERNEL); 1020 1020 1021 1021 if (!eeprom_cmd.va) 1022 1022 return -ENOMEM;
+9 -9
drivers/net/ethernet/emulex/benet/be_main.c
··· 167 167 q->len = len; 168 168 q->entry_size = entry_size; 169 169 mem->size = len * entry_size; 170 - mem->va = dma_zalloc_coherent(&adapter->pdev->dev, mem->size, &mem->dma, 171 - GFP_KERNEL); 170 + mem->va = dma_alloc_coherent(&adapter->pdev->dev, mem->size, 171 + &mem->dma, GFP_KERNEL); 172 172 if (!mem->va) 173 173 return -ENOMEM; 174 174 return 0; ··· 5766 5766 int status = 0; 5767 5767 5768 5768 mbox_mem_alloc->size = sizeof(struct be_mcc_mailbox) + 16; 5769 - mbox_mem_alloc->va = dma_zalloc_coherent(dev, mbox_mem_alloc->size, 5770 - &mbox_mem_alloc->dma, 5771 - GFP_KERNEL); 5769 + mbox_mem_alloc->va = dma_alloc_coherent(dev, mbox_mem_alloc->size, 5770 + &mbox_mem_alloc->dma, 5771 + GFP_KERNEL); 5772 5772 if (!mbox_mem_alloc->va) 5773 5773 return -ENOMEM; 5774 5774 ··· 5777 5777 mbox_mem_align->dma = PTR_ALIGN(mbox_mem_alloc->dma, 16); 5778 5778 5779 5779 rx_filter->size = sizeof(struct be_cmd_req_rx_filter); 5780 - rx_filter->va = dma_zalloc_coherent(dev, rx_filter->size, 5781 - &rx_filter->dma, GFP_KERNEL); 5780 + rx_filter->va = dma_alloc_coherent(dev, rx_filter->size, 5781 + &rx_filter->dma, GFP_KERNEL); 5782 5782 if (!rx_filter->va) { 5783 5783 status = -ENOMEM; 5784 5784 goto free_mbox; ··· 5792 5792 stats_cmd->size = sizeof(struct be_cmd_req_get_stats_v1); 5793 5793 else 5794 5794 stats_cmd->size = sizeof(struct be_cmd_req_get_stats_v2); 5795 - stats_cmd->va = dma_zalloc_coherent(dev, stats_cmd->size, 5796 - &stats_cmd->dma, GFP_KERNEL); 5795 + stats_cmd->va = dma_alloc_coherent(dev, stats_cmd->size, 5796 + &stats_cmd->dma, GFP_KERNEL); 5797 5797 if (!stats_cmd->va) { 5798 5798 status = -ENOMEM; 5799 5799 goto free_rx_filter;
+6 -8
drivers/net/ethernet/faraday/ftgmac100.c
··· 935 935 return -ENOMEM; 936 936 937 937 /* Allocate descriptors */ 938 - priv->rxdes = dma_zalloc_coherent(priv->dev, 939 - MAX_RX_QUEUE_ENTRIES * 940 - sizeof(struct ftgmac100_rxdes), 941 - &priv->rxdes_dma, GFP_KERNEL); 938 + priv->rxdes = dma_alloc_coherent(priv->dev, 939 + MAX_RX_QUEUE_ENTRIES * sizeof(struct ftgmac100_rxdes), 940 + &priv->rxdes_dma, GFP_KERNEL); 942 941 if (!priv->rxdes) 943 942 return -ENOMEM; 944 - priv->txdes = dma_zalloc_coherent(priv->dev, 945 - MAX_TX_QUEUE_ENTRIES * 946 - sizeof(struct ftgmac100_txdes), 947 - &priv->txdes_dma, GFP_KERNEL); 943 + priv->txdes = dma_alloc_coherent(priv->dev, 944 + MAX_TX_QUEUE_ENTRIES * sizeof(struct ftgmac100_txdes), 945 + &priv->txdes_dma, GFP_KERNEL); 948 946 if (!priv->txdes) 949 947 return -ENOMEM; 950 948
+3 -4
drivers/net/ethernet/faraday/ftmac100.c
··· 734 734 { 735 735 int i; 736 736 737 - priv->descs = dma_zalloc_coherent(priv->dev, 738 - sizeof(struct ftmac100_descs), 739 - &priv->descs_dma_addr, 740 - GFP_KERNEL); 737 + priv->descs = dma_alloc_coherent(priv->dev, 738 + sizeof(struct ftmac100_descs), 739 + &priv->descs_dma_addr, GFP_KERNEL); 741 740 if (!priv->descs) 742 741 return -ENOMEM; 743 742
+2 -2
drivers/net/ethernet/hisilicon/hix5hd2_gmac.c
··· 1006 1006 1007 1007 for (i = 0; i < QUEUE_NUMS; i++) { 1008 1008 size = priv->pool[i].count * sizeof(struct hix5hd2_desc); 1009 - virt_addr = dma_zalloc_coherent(dev, size, &phys_addr, 1010 - GFP_KERNEL); 1009 + virt_addr = dma_alloc_coherent(dev, size, &phys_addr, 1010 + GFP_KERNEL); 1011 1011 if (virt_addr == NULL) 1012 1012 goto error_free_pool; 1013 1013
+2 -3
drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
··· 2041 2041 { 2042 2042 int size = ring->desc_num * sizeof(ring->desc[0]); 2043 2043 2044 - ring->desc = dma_zalloc_coherent(ring_to_dev(ring), size, 2045 - &ring->desc_dma_addr, 2046 - GFP_KERNEL); 2044 + ring->desc = dma_alloc_coherent(ring_to_dev(ring), size, 2045 + &ring->desc_dma_addr, GFP_KERNEL); 2047 2046 if (!ring->desc) 2048 2047 return -ENOMEM; 2049 2048
+2 -3
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c
··· 39 39 { 40 40 int size = ring->desc_num * sizeof(struct hclge_desc); 41 41 42 - ring->desc = dma_zalloc_coherent(cmq_ring_to_dev(ring), 43 - size, &ring->desc_dma_addr, 44 - GFP_KERNEL); 42 + ring->desc = dma_alloc_coherent(cmq_ring_to_dev(ring), size, 43 + &ring->desc_dma_addr, GFP_KERNEL); 45 44 if (!ring->desc) 46 45 return -ENOMEM; 47 46
+2 -3
drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c
··· 115 115 { 116 116 int size = ring->desc_num * sizeof(struct hclgevf_desc); 117 117 118 - ring->desc = dma_zalloc_coherent(cmq_ring_to_dev(ring), 119 - size, &ring->desc_dma_addr, 120 - GFP_KERNEL); 118 + ring->desc = dma_alloc_coherent(cmq_ring_to_dev(ring), size, 119 + &ring->desc_dma_addr, GFP_KERNEL); 121 120 if (!ring->desc) 122 121 return -ENOMEM; 123 122
+8 -8
drivers/net/ethernet/huawei/hinic/hinic_hw_api_cmd.c
··· 613 613 u8 *cmd_vaddr; 614 614 int err = 0; 615 615 616 - cmd_vaddr = dma_zalloc_coherent(&pdev->dev, API_CMD_BUF_SIZE, 617 - &cmd_paddr, GFP_KERNEL); 616 + cmd_vaddr = dma_alloc_coherent(&pdev->dev, API_CMD_BUF_SIZE, 617 + &cmd_paddr, GFP_KERNEL); 618 618 if (!cmd_vaddr) { 619 619 dev_err(&pdev->dev, "Failed to allocate API CMD DMA memory\n"); 620 620 return -ENOMEM; ··· 663 663 dma_addr_t node_paddr; 664 664 int err; 665 665 666 - node = dma_zalloc_coherent(&pdev->dev, chain->cell_size, 667 - &node_paddr, GFP_KERNEL); 666 + node = dma_alloc_coherent(&pdev->dev, chain->cell_size, &node_paddr, 667 + GFP_KERNEL); 668 668 if (!node) { 669 669 dev_err(&pdev->dev, "Failed to allocate dma API CMD cell\n"); 670 670 return -ENOMEM; ··· 821 821 if (!chain->cell_ctxt) 822 822 return -ENOMEM; 823 823 824 - chain->wb_status = dma_zalloc_coherent(&pdev->dev, 825 - sizeof(*chain->wb_status), 826 - &chain->wb_status_paddr, 827 - GFP_KERNEL); 824 + chain->wb_status = dma_alloc_coherent(&pdev->dev, 825 + sizeof(*chain->wb_status), 826 + &chain->wb_status_paddr, 827 + GFP_KERNEL); 828 828 if (!chain->wb_status) { 829 829 dev_err(&pdev->dev, "Failed to allocate DMA wb status\n"); 830 830 return -ENOMEM;
+4 -4
drivers/net/ethernet/huawei/hinic/hinic_hw_eqs.c
··· 593 593 } 594 594 595 595 for (pg = 0; pg < eq->num_pages; pg++) { 596 - eq->virt_addr[pg] = dma_zalloc_coherent(&pdev->dev, 597 - eq->page_size, 598 - &eq->dma_addr[pg], 599 - GFP_KERNEL); 596 + eq->virt_addr[pg] = dma_alloc_coherent(&pdev->dev, 597 + eq->page_size, 598 + &eq->dma_addr[pg], 599 + GFP_KERNEL); 600 600 if (!eq->virt_addr[pg]) { 601 601 err = -ENOMEM; 602 602 goto err_dma_alloc;
+3 -3
drivers/net/ethernet/huawei/hinic/hinic_hw_io.c
··· 355 355 goto err_sq_db; 356 356 } 357 357 358 - ci_addr_base = dma_zalloc_coherent(&pdev->dev, CI_TABLE_SIZE(num_qps), 359 - &func_to_io->ci_dma_base, 360 - GFP_KERNEL); 358 + ci_addr_base = dma_alloc_coherent(&pdev->dev, CI_TABLE_SIZE(num_qps), 359 + &func_to_io->ci_dma_base, 360 + GFP_KERNEL); 361 361 if (!ci_addr_base) { 362 362 dev_err(&pdev->dev, "Failed to allocate CI area\n"); 363 363 err = -ENOMEM;
+5 -5
drivers/net/ethernet/huawei/hinic/hinic_hw_qp.c
··· 336 336 goto err_cqe_dma_arr_alloc; 337 337 338 338 for (i = 0; i < wq->q_depth; i++) { 339 - rq->cqe[i] = dma_zalloc_coherent(&pdev->dev, 340 - sizeof(*rq->cqe[i]), 341 - &rq->cqe_dma[i], GFP_KERNEL); 339 + rq->cqe[i] = dma_alloc_coherent(&pdev->dev, 340 + sizeof(*rq->cqe[i]), 341 + &rq->cqe_dma[i], GFP_KERNEL); 342 342 if (!rq->cqe[i]) 343 343 goto err_cqe_alloc; 344 344 } ··· 415 415 416 416 /* HW requirements: Must be at least 32 bit */ 417 417 pi_size = ALIGN(sizeof(*rq->pi_virt_addr), sizeof(u32)); 418 - rq->pi_virt_addr = dma_zalloc_coherent(&pdev->dev, pi_size, 419 - &rq->pi_dma_addr, GFP_KERNEL); 418 + rq->pi_virt_addr = dma_alloc_coherent(&pdev->dev, pi_size, 419 + &rq->pi_dma_addr, GFP_KERNEL); 420 420 if (!rq->pi_virt_addr) { 421 421 dev_err(&pdev->dev, "Failed to allocate PI address\n"); 422 422 err = -ENOMEM;
+4 -4
drivers/net/ethernet/huawei/hinic/hinic_hw_wq.c
··· 114 114 struct pci_dev *pdev = hwif->pdev; 115 115 dma_addr_t dma_addr; 116 116 117 - *vaddr = dma_zalloc_coherent(&pdev->dev, page_sz, &dma_addr, 118 - GFP_KERNEL); 117 + *vaddr = dma_alloc_coherent(&pdev->dev, page_sz, &dma_addr, 118 + GFP_KERNEL); 119 119 if (!*vaddr) { 120 120 dev_err(&pdev->dev, "Failed to allocate dma for wqs page\n"); 121 121 return -ENOMEM; ··· 482 482 u64 *paddr = &wq->block_vaddr[i]; 483 483 dma_addr_t dma_addr; 484 484 485 - *vaddr = dma_zalloc_coherent(&pdev->dev, wq->wq_page_size, 486 - &dma_addr, GFP_KERNEL); 485 + *vaddr = dma_alloc_coherent(&pdev->dev, wq->wq_page_size, 486 + &dma_addr, GFP_KERNEL); 487 487 if (!*vaddr) { 488 488 dev_err(&pdev->dev, "Failed to allocate wq page\n"); 489 489 goto err_alloc_wq_pages;
+2 -2
drivers/net/ethernet/ibm/emac/mal.c
··· 636 636 bd_size = sizeof(struct mal_descriptor) * 637 637 (NUM_TX_BUFF * mal->num_tx_chans + 638 638 NUM_RX_BUFF * mal->num_rx_chans); 639 - mal->bd_virt = dma_zalloc_coherent(&ofdev->dev, bd_size, &mal->bd_dma, 640 - GFP_KERNEL); 639 + mal->bd_virt = dma_alloc_coherent(&ofdev->dev, bd_size, &mal->bd_dma, 640 + GFP_KERNEL); 641 641 if (mal->bd_virt == NULL) { 642 642 err = -ENOMEM; 643 643 goto fail_unmap;
+4 -4
drivers/net/ethernet/intel/e1000/e1000_ethtool.c
··· 993 993 994 994 txdr->size = txdr->count * sizeof(struct e1000_tx_desc); 995 995 txdr->size = ALIGN(txdr->size, 4096); 996 - txdr->desc = dma_zalloc_coherent(&pdev->dev, txdr->size, &txdr->dma, 997 - GFP_KERNEL); 996 + txdr->desc = dma_alloc_coherent(&pdev->dev, txdr->size, &txdr->dma, 997 + GFP_KERNEL); 998 998 if (!txdr->desc) { 999 999 ret_val = 2; 1000 1000 goto err_nomem; ··· 1051 1051 } 1052 1052 1053 1053 rxdr->size = rxdr->count * sizeof(struct e1000_rx_desc); 1054 - rxdr->desc = dma_zalloc_coherent(&pdev->dev, rxdr->size, &rxdr->dma, 1055 - GFP_KERNEL); 1054 + rxdr->desc = dma_alloc_coherent(&pdev->dev, rxdr->size, &rxdr->dma, 1055 + GFP_KERNEL); 1056 1056 if (!rxdr->desc) { 1057 1057 ret_val = 6; 1058 1058 goto err_nomem;
+2 -2
drivers/net/ethernet/intel/e1000e/netdev.c
··· 2305 2305 { 2306 2306 struct pci_dev *pdev = adapter->pdev; 2307 2307 2308 - ring->desc = dma_zalloc_coherent(&pdev->dev, ring->size, &ring->dma, 2309 - GFP_KERNEL); 2308 + ring->desc = dma_alloc_coherent(&pdev->dev, ring->size, &ring->dma, 2309 + GFP_KERNEL); 2310 2310 if (!ring->desc) 2311 2311 return -ENOMEM; 2312 2312
+2 -2
drivers/net/ethernet/intel/i40e/i40e_main.c
··· 109 109 struct i40e_pf *pf = (struct i40e_pf *)hw->back; 110 110 111 111 mem->size = ALIGN(size, alignment); 112 - mem->va = dma_zalloc_coherent(&pf->pdev->dev, mem->size, 113 - &mem->pa, GFP_KERNEL); 112 + mem->va = dma_alloc_coherent(&pf->pdev->dev, mem->size, &mem->pa, 113 + GFP_KERNEL); 114 114 if (!mem->va) 115 115 return -ENOMEM; 116 116
+4 -4
drivers/net/ethernet/intel/ixgb/ixgb_main.c
··· 680 680 txdr->size = txdr->count * sizeof(struct ixgb_tx_desc); 681 681 txdr->size = ALIGN(txdr->size, 4096); 682 682 683 - txdr->desc = dma_zalloc_coherent(&pdev->dev, txdr->size, &txdr->dma, 684 - GFP_KERNEL); 683 + txdr->desc = dma_alloc_coherent(&pdev->dev, txdr->size, &txdr->dma, 684 + GFP_KERNEL); 685 685 if (!txdr->desc) { 686 686 vfree(txdr->buffer_info); 687 687 return -ENOMEM; ··· 763 763 rxdr->size = rxdr->count * sizeof(struct ixgb_rx_desc); 764 764 rxdr->size = ALIGN(rxdr->size, 4096); 765 765 766 - rxdr->desc = dma_zalloc_coherent(&pdev->dev, rxdr->size, &rxdr->dma, 767 - GFP_KERNEL); 766 + rxdr->desc = dma_alloc_coherent(&pdev->dev, rxdr->size, &rxdr->dma, 767 + GFP_KERNEL); 768 768 769 769 if (!rxdr->desc) { 770 770 vfree(rxdr->buffer_info);
+3 -3
drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
··· 2044 2044 u32 txq_dma; 2045 2045 2046 2046 /* Allocate memory for TX descriptors */ 2047 - aggr_txq->descs = dma_zalloc_coherent(&pdev->dev, 2048 - MVPP2_AGGR_TXQ_SIZE * MVPP2_DESC_ALIGNED_SIZE, 2049 - &aggr_txq->descs_dma, GFP_KERNEL); 2047 + aggr_txq->descs = dma_alloc_coherent(&pdev->dev, 2048 + MVPP2_AGGR_TXQ_SIZE * MVPP2_DESC_ALIGNED_SIZE, 2049 + &aggr_txq->descs_dma, GFP_KERNEL); 2050 2050 if (!aggr_txq->descs) 2051 2051 return -ENOMEM; 2052 2052
+1 -1
drivers/net/ethernet/marvell/octeontx2/af/common.h
··· 64 64 65 65 qmem->entry_sz = entry_sz; 66 66 qmem->alloc_sz = (qsize * entry_sz) + OTX2_ALIGN; 67 - qmem->base = dma_zalloc_coherent(dev, qmem->alloc_sz, 67 + qmem->base = dma_alloc_coherent(dev, qmem->alloc_sz, 68 68 &qmem->iova, GFP_KERNEL); 69 69 if (!qmem->base) 70 70 return -ENOMEM;
+9 -9
drivers/net/ethernet/marvell/pxa168_eth.c
··· 557 557 * table is full. 558 558 */ 559 559 if (!pep->htpr) { 560 - pep->htpr = dma_zalloc_coherent(pep->dev->dev.parent, 561 - HASH_ADDR_TABLE_SIZE, 562 - &pep->htpr_dma, GFP_KERNEL); 560 + pep->htpr = dma_alloc_coherent(pep->dev->dev.parent, 561 + HASH_ADDR_TABLE_SIZE, 562 + &pep->htpr_dma, GFP_KERNEL); 563 563 if (!pep->htpr) 564 564 return -ENOMEM; 565 565 } else { ··· 1044 1044 pep->rx_desc_count = 0; 1045 1045 size = pep->rx_ring_size * sizeof(struct rx_desc); 1046 1046 pep->rx_desc_area_size = size; 1047 - pep->p_rx_desc_area = dma_zalloc_coherent(pep->dev->dev.parent, size, 1048 - &pep->rx_desc_dma, 1049 - GFP_KERNEL); 1047 + pep->p_rx_desc_area = dma_alloc_coherent(pep->dev->dev.parent, size, 1048 + &pep->rx_desc_dma, 1049 + GFP_KERNEL); 1050 1050 if (!pep->p_rx_desc_area) 1051 1051 goto out; 1052 1052 ··· 1103 1103 pep->tx_desc_count = 0; 1104 1104 size = pep->tx_ring_size * sizeof(struct tx_desc); 1105 1105 pep->tx_desc_area_size = size; 1106 - pep->p_tx_desc_area = dma_zalloc_coherent(pep->dev->dev.parent, size, 1107 - &pep->tx_desc_dma, 1108 - GFP_KERNEL); 1106 + pep->p_tx_desc_area = dma_alloc_coherent(pep->dev->dev.parent, size, 1107 + &pep->tx_desc_dma, 1108 + GFP_KERNEL); 1109 1109 if (!pep->p_tx_desc_area) 1110 1110 goto out; 1111 1111 /* Initialize the next_desc_ptr links in the Tx descriptors ring */
+9 -9
drivers/net/ethernet/mediatek/mtk_eth_soc.c
··· 598 598 dma_addr_t dma_addr; 599 599 int i; 600 600 601 - eth->scratch_ring = dma_zalloc_coherent(eth->dev, 602 - cnt * sizeof(struct mtk_tx_dma), 603 - &eth->phy_scratch_ring, 604 - GFP_ATOMIC); 601 + eth->scratch_ring = dma_alloc_coherent(eth->dev, 602 + cnt * sizeof(struct mtk_tx_dma), 603 + &eth->phy_scratch_ring, 604 + GFP_ATOMIC); 605 605 if (unlikely(!eth->scratch_ring)) 606 606 return -ENOMEM; 607 607 ··· 1213 1213 if (!ring->buf) 1214 1214 goto no_tx_mem; 1215 1215 1216 - ring->dma = dma_zalloc_coherent(eth->dev, MTK_DMA_SIZE * sz, 1217 - &ring->phys, GFP_ATOMIC); 1216 + ring->dma = dma_alloc_coherent(eth->dev, MTK_DMA_SIZE * sz, 1217 + &ring->phys, GFP_ATOMIC); 1218 1218 if (!ring->dma) 1219 1219 goto no_tx_mem; 1220 1220 ··· 1310 1310 return -ENOMEM; 1311 1311 } 1312 1312 1313 - ring->dma = dma_zalloc_coherent(eth->dev, 1314 - rx_dma_size * sizeof(*ring->dma), 1315 - &ring->phys, GFP_ATOMIC); 1313 + ring->dma = dma_alloc_coherent(eth->dev, 1314 + rx_dma_size * sizeof(*ring->dma), 1315 + &ring->phys, GFP_ATOMIC); 1316 1316 if (!ring->dma) 1317 1317 return -ENOMEM; 1318 1318
+4 -4
drivers/net/ethernet/mellanox/mlx4/alloc.c
··· 584 584 buf->npages = 1; 585 585 buf->page_shift = get_order(size) + PAGE_SHIFT; 586 586 buf->direct.buf = 587 - dma_zalloc_coherent(&dev->persist->pdev->dev, 588 - size, &t, GFP_KERNEL); 587 + dma_alloc_coherent(&dev->persist->pdev->dev, size, &t, 588 + GFP_KERNEL); 589 589 if (!buf->direct.buf) 590 590 return -ENOMEM; 591 591 ··· 624 624 625 625 for (i = 0; i < buf->nbufs; ++i) { 626 626 buf->page_list[i].buf = 627 - dma_zalloc_coherent(&dev->persist->pdev->dev, 628 - PAGE_SIZE, &t, GFP_KERNEL); 627 + dma_alloc_coherent(&dev->persist->pdev->dev, 628 + PAGE_SIZE, &t, GFP_KERNEL); 629 629 if (!buf->page_list[i].buf) 630 630 goto err_free; 631 631
+2 -2
drivers/net/ethernet/mellanox/mlx5/core/alloc.c
··· 63 63 mutex_lock(&priv->alloc_mutex); 64 64 original_node = dev_to_node(&dev->pdev->dev); 65 65 set_dev_node(&dev->pdev->dev, node); 66 - cpu_handle = dma_zalloc_coherent(&dev->pdev->dev, size, 67 - dma_handle, GFP_KERNEL); 66 + cpu_handle = dma_alloc_coherent(&dev->pdev->dev, size, dma_handle, 67 + GFP_KERNEL); 68 68 set_dev_node(&dev->pdev->dev, original_node); 69 69 mutex_unlock(&priv->alloc_mutex); 70 70 return cpu_handle;
+5 -5
drivers/net/ethernet/mellanox/mlx5/core/cmd.c
··· 1789 1789 { 1790 1790 struct device *ddev = &dev->pdev->dev; 1791 1791 1792 - cmd->cmd_alloc_buf = dma_zalloc_coherent(ddev, MLX5_ADAPTER_PAGE_SIZE, 1793 - &cmd->alloc_dma, GFP_KERNEL); 1792 + cmd->cmd_alloc_buf = dma_alloc_coherent(ddev, MLX5_ADAPTER_PAGE_SIZE, 1793 + &cmd->alloc_dma, GFP_KERNEL); 1794 1794 if (!cmd->cmd_alloc_buf) 1795 1795 return -ENOMEM; 1796 1796 ··· 1804 1804 1805 1805 dma_free_coherent(ddev, MLX5_ADAPTER_PAGE_SIZE, cmd->cmd_alloc_buf, 1806 1806 cmd->alloc_dma); 1807 - cmd->cmd_alloc_buf = dma_zalloc_coherent(ddev, 1808 - 2 * MLX5_ADAPTER_PAGE_SIZE - 1, 1809 - &cmd->alloc_dma, GFP_KERNEL); 1807 + cmd->cmd_alloc_buf = dma_alloc_coherent(ddev, 1808 + 2 * MLX5_ADAPTER_PAGE_SIZE - 1, 1809 + &cmd->alloc_dma, GFP_KERNEL); 1810 1810 if (!cmd->cmd_alloc_buf) 1811 1811 return -ENOMEM; 1812 1812
+3 -3
drivers/net/ethernet/myricom/myri10ge/myri10ge.c
··· 3604 3604 for (i = 0; i < mgp->num_slices; i++) { 3605 3605 ss = &mgp->ss[i]; 3606 3606 bytes = mgp->max_intr_slots * sizeof(*ss->rx_done.entry); 3607 - ss->rx_done.entry = dma_zalloc_coherent(&pdev->dev, bytes, 3608 - &ss->rx_done.bus, 3609 - GFP_KERNEL); 3607 + ss->rx_done.entry = dma_alloc_coherent(&pdev->dev, bytes, 3608 + &ss->rx_done.bus, 3609 + GFP_KERNEL); 3610 3610 if (ss->rx_done.entry == NULL) 3611 3611 goto abort; 3612 3612 bytes = sizeof(*ss->fw_stats);
+6 -6
drivers/net/ethernet/netronome/nfp/nfp_net_common.c
··· 2170 2170 tx_ring->cnt = dp->txd_cnt; 2171 2171 2172 2172 tx_ring->size = array_size(tx_ring->cnt, sizeof(*tx_ring->txds)); 2173 - tx_ring->txds = dma_zalloc_coherent(dp->dev, tx_ring->size, 2174 - &tx_ring->dma, 2175 - GFP_KERNEL | __GFP_NOWARN); 2173 + tx_ring->txds = dma_alloc_coherent(dp->dev, tx_ring->size, 2174 + &tx_ring->dma, 2175 + GFP_KERNEL | __GFP_NOWARN); 2176 2176 if (!tx_ring->txds) { 2177 2177 netdev_warn(dp->netdev, "failed to allocate TX descriptor ring memory, requested descriptor count: %d, consider lowering descriptor count\n", 2178 2178 tx_ring->cnt); ··· 2328 2328 2329 2329 rx_ring->cnt = dp->rxd_cnt; 2330 2330 rx_ring->size = array_size(rx_ring->cnt, sizeof(*rx_ring->rxds)); 2331 - rx_ring->rxds = dma_zalloc_coherent(dp->dev, rx_ring->size, 2332 - &rx_ring->dma, 2333 - GFP_KERNEL | __GFP_NOWARN); 2331 + rx_ring->rxds = dma_alloc_coherent(dp->dev, rx_ring->size, 2332 + &rx_ring->dma, 2333 + GFP_KERNEL | __GFP_NOWARN); 2334 2334 if (!rx_ring->rxds) { 2335 2335 netdev_warn(dp->netdev, "failed to allocate RX descriptor ring memory, requested descriptor count: %d, consider lowering descriptor count\n", 2336 2336 rx_ring->cnt);
+6 -6
drivers/net/ethernet/ni/nixge.c
··· 287 287 priv->rx_bd_ci = 0; 288 288 289 289 /* Allocate the Tx and Rx buffer descriptors. */ 290 - priv->tx_bd_v = dma_zalloc_coherent(ndev->dev.parent, 291 - sizeof(*priv->tx_bd_v) * TX_BD_NUM, 292 - &priv->tx_bd_p, GFP_KERNEL); 290 + priv->tx_bd_v = dma_alloc_coherent(ndev->dev.parent, 291 + sizeof(*priv->tx_bd_v) * TX_BD_NUM, 292 + &priv->tx_bd_p, GFP_KERNEL); 293 293 if (!priv->tx_bd_v) 294 294 goto out; 295 295 ··· 299 299 if (!priv->tx_skb) 300 300 goto out; 301 301 302 - priv->rx_bd_v = dma_zalloc_coherent(ndev->dev.parent, 303 - sizeof(*priv->rx_bd_v) * RX_BD_NUM, 304 - &priv->rx_bd_p, GFP_KERNEL); 302 + priv->rx_bd_v = dma_alloc_coherent(ndev->dev.parent, 303 + sizeof(*priv->rx_bd_v) * RX_BD_NUM, 304 + &priv->rx_bd_p, GFP_KERNEL); 305 305 if (!priv->rx_bd_v) 306 306 goto out; 307 307
+6 -6
drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c
··· 1440 1440 1441 1441 size = rx_ring->count * bufsz + PCH_GBE_RESERVE_MEMORY; 1442 1442 rx_ring->rx_buff_pool = 1443 - dma_zalloc_coherent(&pdev->dev, size, 1444 - &rx_ring->rx_buff_pool_logic, GFP_KERNEL); 1443 + dma_alloc_coherent(&pdev->dev, size, 1444 + &rx_ring->rx_buff_pool_logic, GFP_KERNEL); 1445 1445 if (!rx_ring->rx_buff_pool) 1446 1446 return -ENOMEM; 1447 1447 ··· 1755 1755 1756 1756 tx_ring->size = tx_ring->count * (int)sizeof(struct pch_gbe_tx_desc); 1757 1757 1758 - tx_ring->desc = dma_zalloc_coherent(&pdev->dev, tx_ring->size, 1759 - &tx_ring->dma, GFP_KERNEL); 1758 + tx_ring->desc = dma_alloc_coherent(&pdev->dev, tx_ring->size, 1759 + &tx_ring->dma, GFP_KERNEL); 1760 1760 if (!tx_ring->desc) { 1761 1761 vfree(tx_ring->buffer_info); 1762 1762 return -ENOMEM; ··· 1798 1798 return -ENOMEM; 1799 1799 1800 1800 rx_ring->size = rx_ring->count * (int)sizeof(struct pch_gbe_rx_desc); 1801 - rx_ring->desc = dma_zalloc_coherent(&pdev->dev, rx_ring->size, 1802 - &rx_ring->dma, GFP_KERNEL); 1801 + rx_ring->desc = dma_alloc_coherent(&pdev->dev, rx_ring->size, 1802 + &rx_ring->dma, GFP_KERNEL); 1803 1803 if (!rx_ring->desc) { 1804 1804 vfree(rx_ring->buffer_info); 1805 1805 return -ENOMEM;
+3 -3
drivers/net/ethernet/pasemi/pasemi_mac.c
··· 401 401 if (pasemi_dma_alloc_ring(&ring->chan, RX_RING_SIZE)) 402 402 goto out_ring_desc; 403 403 404 - ring->buffers = dma_zalloc_coherent(&mac->dma_pdev->dev, 405 - RX_RING_SIZE * sizeof(u64), 406 - &ring->buf_dma, GFP_KERNEL); 404 + ring->buffers = dma_alloc_coherent(&mac->dma_pdev->dev, 405 + RX_RING_SIZE * sizeof(u64), 406 + &ring->buf_dma, GFP_KERNEL); 407 407 if (!ring->buffers) 408 408 goto out_ring_desc; 409 409
+8 -8
drivers/net/ethernet/qlogic/qed/qed_cxt.c
··· 936 936 u32 size = min_t(u32, total_size, psz); 937 937 void **p_virt = &p_mngr->t2[i].p_virt; 938 938 939 - *p_virt = dma_zalloc_coherent(&p_hwfn->cdev->pdev->dev, 940 - size, &p_mngr->t2[i].p_phys, 941 - GFP_KERNEL); 939 + *p_virt = dma_alloc_coherent(&p_hwfn->cdev->pdev->dev, size, 940 + &p_mngr->t2[i].p_phys, 941 + GFP_KERNEL); 942 942 if (!p_mngr->t2[i].p_virt) { 943 943 rc = -ENOMEM; 944 944 goto t2_fail; ··· 1054 1054 u32 size; 1055 1055 1056 1056 size = min_t(u32, sz_left, p_blk->real_size_in_page); 1057 - p_virt = dma_zalloc_coherent(&p_hwfn->cdev->pdev->dev, size, 1058 - &p_phys, GFP_KERNEL); 1057 + p_virt = dma_alloc_coherent(&p_hwfn->cdev->pdev->dev, size, 1058 + &p_phys, GFP_KERNEL); 1059 1059 if (!p_virt) 1060 1060 return -ENOMEM; 1061 1061 ··· 2306 2306 goto out0; 2307 2307 } 2308 2308 2309 - p_virt = dma_zalloc_coherent(&p_hwfn->cdev->pdev->dev, 2310 - p_blk->real_size_in_page, &p_phys, 2311 - GFP_KERNEL); 2309 + p_virt = dma_alloc_coherent(&p_hwfn->cdev->pdev->dev, 2310 + p_blk->real_size_in_page, &p_phys, 2311 + GFP_KERNEL); 2312 2312 if (!p_virt) { 2313 2313 rc = -ENOMEM; 2314 2314 goto out1;
+14 -14
drivers/net/ethernet/qlogic/qlcnic/qlcnic_ctx.c
··· 434 434 *(tx_ring->hw_consumer) = 0; 435 435 436 436 rq_size = SIZEOF_HOSTRQ_TX(struct qlcnic_hostrq_tx_ctx); 437 - rq_addr = dma_zalloc_coherent(&adapter->pdev->dev, rq_size, 438 - &rq_phys_addr, GFP_KERNEL); 437 + rq_addr = dma_alloc_coherent(&adapter->pdev->dev, rq_size, 438 + &rq_phys_addr, GFP_KERNEL); 439 439 if (!rq_addr) 440 440 return -ENOMEM; 441 441 442 442 rsp_size = SIZEOF_CARDRSP_TX(struct qlcnic_cardrsp_tx_ctx); 443 - rsp_addr = dma_zalloc_coherent(&adapter->pdev->dev, rsp_size, 444 - &rsp_phys_addr, GFP_KERNEL); 443 + rsp_addr = dma_alloc_coherent(&adapter->pdev->dev, rsp_size, 444 + &rsp_phys_addr, GFP_KERNEL); 445 445 if (!rsp_addr) { 446 446 err = -ENOMEM; 447 447 goto out_free_rq; ··· 855 855 struct qlcnic_cmd_args cmd; 856 856 size_t nic_size = sizeof(struct qlcnic_info_le); 857 857 858 - nic_info_addr = dma_zalloc_coherent(&adapter->pdev->dev, nic_size, 859 - &nic_dma_t, GFP_KERNEL); 858 + nic_info_addr = dma_alloc_coherent(&adapter->pdev->dev, nic_size, 859 + &nic_dma_t, GFP_KERNEL); 860 860 if (!nic_info_addr) 861 861 return -ENOMEM; 862 862 ··· 909 909 if (adapter->ahw->op_mode != QLCNIC_MGMT_FUNC) 910 910 return err; 911 911 912 - nic_info_addr = dma_zalloc_coherent(&adapter->pdev->dev, nic_size, 913 - &nic_dma_t, GFP_KERNEL); 912 + nic_info_addr = dma_alloc_coherent(&adapter->pdev->dev, nic_size, 913 + &nic_dma_t, GFP_KERNEL); 914 914 if (!nic_info_addr) 915 915 return -ENOMEM; 916 916 ··· 964 964 void *pci_info_addr; 965 965 int err = 0, i; 966 966 967 - pci_info_addr = dma_zalloc_coherent(&adapter->pdev->dev, pci_size, 968 - &pci_info_dma_t, GFP_KERNEL); 967 + pci_info_addr = dma_alloc_coherent(&adapter->pdev->dev, pci_size, 968 + &pci_info_dma_t, GFP_KERNEL); 969 969 if (!pci_info_addr) 970 970 return -ENOMEM; 971 971 ··· 1078 1078 return -EIO; 1079 1079 } 1080 1080 1081 - stats_addr = dma_zalloc_coherent(&adapter->pdev->dev, stats_size, 1082 - &stats_dma_t, GFP_KERNEL); 1081 + stats_addr = dma_alloc_coherent(&adapter->pdev->dev, stats_size, 1082 + &stats_dma_t, GFP_KERNEL); 1083 1083 if (!stats_addr) 1084 1084 return -ENOMEM; 1085 1085 ··· 1134 1134 if (mac_stats == NULL) 1135 1135 return -ENOMEM; 1136 1136 1137 - stats_addr = dma_zalloc_coherent(&adapter->pdev->dev, stats_size, 1138 - &stats_dma_t, GFP_KERNEL); 1137 + stats_addr = dma_alloc_coherent(&adapter->pdev->dev, stats_size, 1138 + &stats_dma_t, GFP_KERNEL); 1139 1139 if (!stats_addr) 1140 1140 return -ENOMEM; 1141 1141
+1 -1
drivers/net/ethernet/qualcomm/emac/emac-mac.c
··· 776 776 8 + 2 * 8; /* 8 byte per one Tx and two Rx rings */ 777 777 778 778 ring_header->used = 0; 779 - ring_header->v_addr = dma_zalloc_coherent(dev, ring_header->size, 779 + ring_header->v_addr = dma_alloc_coherent(dev, ring_header->size, 780 780 &ring_header->dma_addr, 781 781 GFP_KERNEL); 782 782 if (!ring_header->v_addr)
+6 -6
drivers/net/ethernet/samsung/sxgbe/sxgbe_main.c
··· 400 400 } 401 401 402 402 /* allocate memory for TX descriptors */ 403 - tx_ring->dma_tx = dma_zalloc_coherent(dev, 404 - tx_rsize * sizeof(struct sxgbe_tx_norm_desc), 405 - &tx_ring->dma_tx_phy, GFP_KERNEL); 403 + tx_ring->dma_tx = dma_alloc_coherent(dev, 404 + tx_rsize * sizeof(struct sxgbe_tx_norm_desc), 405 + &tx_ring->dma_tx_phy, GFP_KERNEL); 406 406 if (!tx_ring->dma_tx) 407 407 return -ENOMEM; 408 408 ··· 479 479 rx_ring->queue_no = queue_no; 480 480 481 481 /* allocate memory for RX descriptors */ 482 - rx_ring->dma_rx = dma_zalloc_coherent(priv->device, 483 - rx_rsize * sizeof(struct sxgbe_rx_norm_desc), 484 - &rx_ring->dma_rx_phy, GFP_KERNEL); 482 + rx_ring->dma_rx = dma_alloc_coherent(priv->device, 483 + rx_rsize * sizeof(struct sxgbe_rx_norm_desc), 484 + &rx_ring->dma_rx_phy, GFP_KERNEL); 485 485 486 486 if (rx_ring->dma_rx == NULL) 487 487 return -ENOMEM;
+2 -2
drivers/net/ethernet/sfc/falcon/nic.c
··· 33 33 int ef4_nic_alloc_buffer(struct ef4_nic *efx, struct ef4_buffer *buffer, 34 34 unsigned int len, gfp_t gfp_flags) 35 35 { 36 - buffer->addr = dma_zalloc_coherent(&efx->pci_dev->dev, len, 37 - &buffer->dma_addr, gfp_flags); 36 + buffer->addr = dma_alloc_coherent(&efx->pci_dev->dev, len, 37 + &buffer->dma_addr, gfp_flags); 38 38 if (!buffer->addr) 39 39 return -ENOMEM; 40 40 buffer->len = len;
+2 -2
drivers/net/ethernet/sfc/nic.c
··· 34 34 int efx_nic_alloc_buffer(struct efx_nic *efx, struct efx_buffer *buffer, 35 35 unsigned int len, gfp_t gfp_flags) 36 36 { 37 - buffer->addr = dma_zalloc_coherent(&efx->pci_dev->dev, len, 38 - &buffer->dma_addr, gfp_flags); 37 + buffer->addr = dma_alloc_coherent(&efx->pci_dev->dev, len, 38 + &buffer->dma_addr, gfp_flags); 39 39 if (!buffer->addr) 40 40 return -ENOMEM; 41 41 buffer->len = len;
+2 -2
drivers/net/ethernet/sgi/meth.c
··· 211 211 static int meth_init_tx_ring(struct meth_private *priv) 212 212 { 213 213 /* Init TX ring */ 214 - priv->tx_ring = dma_zalloc_coherent(NULL, TX_RING_BUFFER_SIZE, 215 - &priv->tx_ring_dma, GFP_ATOMIC); 214 + priv->tx_ring = dma_alloc_coherent(NULL, TX_RING_BUFFER_SIZE, 215 + &priv->tx_ring_dma, GFP_ATOMIC); 216 216 if (!priv->tx_ring) 217 217 return -ENOMEM; 218 218
+2 -2
drivers/net/ethernet/socionext/netsec.c
··· 1029 1029 struct netsec_desc_ring *dring = &priv->desc_ring[id]; 1030 1030 int i; 1031 1031 1032 - dring->vaddr = dma_zalloc_coherent(priv->dev, DESC_SZ * DESC_NUM, 1033 - &dring->desc_dma, GFP_KERNEL); 1032 + dring->vaddr = dma_alloc_coherent(priv->dev, DESC_SZ * DESC_NUM, 1033 + &dring->desc_dma, GFP_KERNEL); 1034 1034 if (!dring->vaddr) 1035 1035 goto err; 1036 1036
+16 -24
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 1549 1549 goto err_dma; 1550 1550 1551 1551 if (priv->extend_desc) { 1552 - rx_q->dma_erx = dma_zalloc_coherent(priv->device, 1553 - DMA_RX_SIZE * 1554 - sizeof(struct 1555 - dma_extended_desc), 1556 - &rx_q->dma_rx_phy, 1557 - GFP_KERNEL); 1552 + rx_q->dma_erx = dma_alloc_coherent(priv->device, 1553 + DMA_RX_SIZE * sizeof(struct dma_extended_desc), 1554 + &rx_q->dma_rx_phy, 1555 + GFP_KERNEL); 1558 1556 if (!rx_q->dma_erx) 1559 1557 goto err_dma; 1560 1558 1561 1559 } else { 1562 - rx_q->dma_rx = dma_zalloc_coherent(priv->device, 1563 - DMA_RX_SIZE * 1564 - sizeof(struct 1565 - dma_desc), 1566 - &rx_q->dma_rx_phy, 1567 - GFP_KERNEL); 1560 + rx_q->dma_rx = dma_alloc_coherent(priv->device, 1561 + DMA_RX_SIZE * sizeof(struct dma_desc), 1562 + &rx_q->dma_rx_phy, 1563 + GFP_KERNEL); 1568 1564 if (!rx_q->dma_rx) 1569 1565 goto err_dma; 1570 1566 } ··· 1608 1612 goto err_dma; 1609 1613 1610 1614 if (priv->extend_desc) { 1611 - tx_q->dma_etx = dma_zalloc_coherent(priv->device, 1612 - DMA_TX_SIZE * 1613 - sizeof(struct 1614 - dma_extended_desc), 1615 - &tx_q->dma_tx_phy, 1616 - GFP_KERNEL); 1615 + tx_q->dma_etx = dma_alloc_coherent(priv->device, 1616 + DMA_TX_SIZE * sizeof(struct dma_extended_desc), 1617 + &tx_q->dma_tx_phy, 1618 + GFP_KERNEL); 1617 1619 if (!tx_q->dma_etx) 1618 1620 goto err_dma; 1619 1621 } else { 1620 - tx_q->dma_tx = dma_zalloc_coherent(priv->device, 1621 - DMA_TX_SIZE * 1622 - sizeof(struct 1623 - dma_desc), 1624 - &tx_q->dma_tx_phy, 1625 - GFP_KERNEL); 1622 + tx_q->dma_tx = dma_alloc_coherent(priv->device, 1623 + DMA_TX_SIZE * sizeof(struct dma_desc), 1624 + &tx_q->dma_tx_phy, 1625 + GFP_KERNEL); 1626 1626 if (!tx_q->dma_tx) 1627 1627 goto err_dma; 1628 1628 }
+4 -4
drivers/net/ethernet/tundra/tsi108_eth.c
··· 1311 1311 data->id, dev->irq, dev->name); 1312 1312 } 1313 1313 1314 - data->rxring = dma_zalloc_coherent(&data->pdev->dev, rxring_size, 1315 - &data->rxdma, GFP_KERNEL); 1314 + data->rxring = dma_alloc_coherent(&data->pdev->dev, rxring_size, 1315 + &data->rxdma, GFP_KERNEL); 1316 1316 if (!data->rxring) 1317 1317 return -ENOMEM; 1318 1318 1319 - data->txring = dma_zalloc_coherent(&data->pdev->dev, txring_size, 1320 - &data->txdma, GFP_KERNEL); 1319 + data->txring = dma_alloc_coherent(&data->pdev->dev, txring_size, 1320 + &data->txdma, GFP_KERNEL); 1321 1321 if (!data->txring) { 1322 1322 dma_free_coherent(&data->pdev->dev, rxring_size, data->rxring, 1323 1323 data->rxdma);
+6 -6
drivers/net/ethernet/xilinx/ll_temac_main.c
··· 243 243 244 244 /* allocate the tx and rx ring buffer descriptors. */ 245 245 /* returns a virtual address and a physical address. */ 246 - lp->tx_bd_v = dma_zalloc_coherent(ndev->dev.parent, 247 - sizeof(*lp->tx_bd_v) * TX_BD_NUM, 248 - &lp->tx_bd_p, GFP_KERNEL); 246 + lp->tx_bd_v = dma_alloc_coherent(ndev->dev.parent, 247 + sizeof(*lp->tx_bd_v) * TX_BD_NUM, 248 + &lp->tx_bd_p, GFP_KERNEL); 249 249 if (!lp->tx_bd_v) 250 250 goto out; 251 251 252 - lp->rx_bd_v = dma_zalloc_coherent(ndev->dev.parent, 253 - sizeof(*lp->rx_bd_v) * RX_BD_NUM, 254 - &lp->rx_bd_p, GFP_KERNEL); 252 + lp->rx_bd_v = dma_alloc_coherent(ndev->dev.parent, 253 + sizeof(*lp->rx_bd_v) * RX_BD_NUM, 254 + &lp->rx_bd_p, GFP_KERNEL); 255 255 if (!lp->rx_bd_v) 256 256 goto out; 257 257
+6 -6
drivers/net/ethernet/xilinx/xilinx_axienet_main.c
··· 199 199 lp->rx_bd_ci = 0; 200 200 201 201 /* Allocate the Tx and Rx buffer descriptors. */ 202 - lp->tx_bd_v = dma_zalloc_coherent(ndev->dev.parent, 203 - sizeof(*lp->tx_bd_v) * TX_BD_NUM, 204 - &lp->tx_bd_p, GFP_KERNEL); 202 + lp->tx_bd_v = dma_alloc_coherent(ndev->dev.parent, 203 + sizeof(*lp->tx_bd_v) * TX_BD_NUM, 204 + &lp->tx_bd_p, GFP_KERNEL); 205 205 if (!lp->tx_bd_v) 206 206 goto out; 207 207 208 - lp->rx_bd_v = dma_zalloc_coherent(ndev->dev.parent, 209 - sizeof(*lp->rx_bd_v) * RX_BD_NUM, 210 - &lp->rx_bd_p, GFP_KERNEL); 208 + lp->rx_bd_v = dma_alloc_coherent(ndev->dev.parent, 209 + sizeof(*lp->rx_bd_v) * RX_BD_NUM, 210 + &lp->rx_bd_p, GFP_KERNEL); 211 211 if (!lp->rx_bd_v) 212 212 goto out; 213 213
+3 -3
drivers/net/fddi/defxx.c
··· 1139 1139 #endif 1140 1140 sizeof(PI_CONSUMER_BLOCK) + 1141 1141 (PI_ALIGN_K_DESC_BLK - 1); 1142 - bp->kmalloced = top_v = dma_zalloc_coherent(bp->bus_dev, alloc_size, 1143 - &bp->kmalloced_dma, 1144 - GFP_ATOMIC); 1142 + bp->kmalloced = top_v = dma_alloc_coherent(bp->bus_dev, alloc_size, 1143 + &bp->kmalloced_dma, 1144 + GFP_ATOMIC); 1145 1145 if (top_v == NULL) 1146 1146 return DFX_K_FAILURE; 1147 1147
+4 -4
drivers/net/fddi/skfp/skfddi.c
··· 409 409 if (bp->SharedMemSize > 0) { 410 410 bp->SharedMemSize += 16; // for descriptor alignment 411 411 412 - bp->SharedMemAddr = dma_zalloc_coherent(&bp->pdev.dev, 413 - bp->SharedMemSize, 414 - &bp->SharedMemDMA, 415 - GFP_ATOMIC); 412 + bp->SharedMemAddr = dma_alloc_coherent(&bp->pdev.dev, 413 + bp->SharedMemSize, 414 + &bp->SharedMemDMA, 415 + GFP_ATOMIC); 416 416 if (!bp->SharedMemAddr) { 417 417 printk("could not allocate mem for "); 418 418 printk("hardware module: %ld byte\n",
+4 -4
drivers/net/vmxnet3/vmxnet3_drv.c
··· 535 535 } 536 536 537 537 sz = tq->tx_ring.size * sizeof(tq->buf_info[0]); 538 - tq->buf_info = dma_zalloc_coherent(&adapter->pdev->dev, sz, 539 - &tq->buf_info_pa, GFP_KERNEL); 538 + tq->buf_info = dma_alloc_coherent(&adapter->pdev->dev, sz, 539 + &tq->buf_info_pa, GFP_KERNEL); 540 540 if (!tq->buf_info) 541 541 goto err; 542 542 ··· 1815 1815 1816 1816 sz = sizeof(struct vmxnet3_rx_buf_info) * (rq->rx_ring[0].size + 1817 1817 rq->rx_ring[1].size); 1818 - bi = dma_zalloc_coherent(&adapter->pdev->dev, sz, &rq->buf_info_pa, 1819 - GFP_KERNEL); 1818 + bi = dma_alloc_coherent(&adapter->pdev->dev, sz, &rq->buf_info_pa, 1819 + GFP_KERNEL); 1820 1820 if (!bi) 1821 1821 goto err; 1822 1822
+3 -4
drivers/net/wan/fsl_ucc_hdlc.c
··· 279 279 iowrite16be(DEFAULT_HDLC_ADDR, &priv->ucc_pram->haddr4); 280 280 281 281 /* Get BD buffer */ 282 - bd_buffer = dma_zalloc_coherent(priv->dev, 283 - (RX_BD_RING_LEN + TX_BD_RING_LEN) * 284 - MAX_RX_BUF_LENGTH, 285 - &bd_dma_addr, GFP_KERNEL); 282 + bd_buffer = dma_alloc_coherent(priv->dev, 283 + (RX_BD_RING_LEN + TX_BD_RING_LEN) * MAX_RX_BUF_LENGTH, 284 + &bd_dma_addr, GFP_KERNEL); 286 285 287 286 if (!bd_buffer) { 288 287 dev_err(priv->dev, "Could not allocate buffer descriptors\n");
+3 -4
drivers/net/wireless/ath/ath10k/ce.c
··· 1553 1553 * coherent DMA are unsupported 1554 1554 */ 1555 1555 dest_ring->base_addr_owner_space_unaligned = 1556 - dma_zalloc_coherent(ar->dev, 1557 - (nentries * sizeof(struct ce_desc) + 1558 - CE_DESC_RING_ALIGN), 1559 - &base_addr, GFP_KERNEL); 1556 + dma_alloc_coherent(ar->dev, 1557 + (nentries * sizeof(struct ce_desc) + CE_DESC_RING_ALIGN), 1558 + &base_addr, GFP_KERNEL); 1560 1559 if (!dest_ring->base_addr_owner_space_unaligned) { 1561 1560 kfree(dest_ring); 1562 1561 return ERR_PTR(-ENOMEM);
+4 -4
drivers/net/wireless/ath/ath10k/mac.c
··· 5169 5169 if (vif->type == NL80211_IFTYPE_ADHOC || 5170 5170 vif->type == NL80211_IFTYPE_MESH_POINT || 5171 5171 vif->type == NL80211_IFTYPE_AP) { 5172 - arvif->beacon_buf = dma_zalloc_coherent(ar->dev, 5173 - IEEE80211_MAX_FRAME_LEN, 5174 - &arvif->beacon_paddr, 5175 - GFP_ATOMIC); 5172 + arvif->beacon_buf = dma_alloc_coherent(ar->dev, 5173 + IEEE80211_MAX_FRAME_LEN, 5174 + &arvif->beacon_paddr, 5175 + GFP_ATOMIC); 5176 5176 if (!arvif->beacon_buf) { 5177 5177 ret = -ENOMEM; 5178 5178 ath10k_warn(ar, "failed to allocate beacon buffer: %d\n",
+1 -2
drivers/net/wireless/ath/ath10k/pci.c
··· 936 936 */ 937 937 alloc_nbytes = min_t(unsigned int, nbytes, DIAG_TRANSFER_LIMIT); 938 938 939 - data_buf = (unsigned char *)dma_zalloc_coherent(ar->dev, 940 - alloc_nbytes, 939 + data_buf = (unsigned char *)dma_alloc_coherent(ar->dev, alloc_nbytes, 941 940 &ce_data_base, 942 941 GFP_ATOMIC); 943 942
+1 -1
drivers/net/wireless/ath/ath10k/wmi.c
··· 5193 5193 void *vaddr; 5194 5194 5195 5195 pool_size = num_units * round_up(unit_len, 4); 5196 - vaddr = dma_zalloc_coherent(ar->dev, pool_size, &paddr, GFP_KERNEL); 5196 + vaddr = dma_alloc_coherent(ar->dev, pool_size, &paddr, GFP_KERNEL); 5197 5197 5198 5198 if (!vaddr) 5199 5199 return -ENOMEM;
+8 -9
drivers/net/wireless/ath/wcn36xx/dxe.c
··· 174 174 int i; 175 175 176 176 size = wcn_ch->desc_num * sizeof(struct wcn36xx_dxe_desc); 177 - wcn_ch->cpu_addr = dma_zalloc_coherent(dev, size, 178 - &wcn_ch->dma_addr, 179 - GFP_KERNEL); 177 + wcn_ch->cpu_addr = dma_alloc_coherent(dev, size, &wcn_ch->dma_addr, 178 + GFP_KERNEL); 180 179 if (!wcn_ch->cpu_addr) 181 180 return -ENOMEM; 182 181 ··· 626 627 16 - (WCN36XX_BD_CHUNK_SIZE % 8); 627 628 628 629 s = wcn->mgmt_mem_pool.chunk_size * WCN36XX_DXE_CH_DESC_NUMB_TX_H; 629 - cpu_addr = dma_zalloc_coherent(wcn->dev, s, 630 - &wcn->mgmt_mem_pool.phy_addr, 631 - GFP_KERNEL); 630 + cpu_addr = dma_alloc_coherent(wcn->dev, s, 631 + &wcn->mgmt_mem_pool.phy_addr, 632 + GFP_KERNEL); 632 633 if (!cpu_addr) 633 634 goto out_err; 634 635 ··· 641 642 16 - (WCN36XX_BD_CHUNK_SIZE % 8); 642 643 643 644 s = wcn->data_mem_pool.chunk_size * WCN36XX_DXE_CH_DESC_NUMB_TX_L; 644 - cpu_addr = dma_zalloc_coherent(wcn->dev, s, 645 - &wcn->data_mem_pool.phy_addr, 646 - GFP_KERNEL); 645 + cpu_addr = dma_alloc_coherent(wcn->dev, s, 646 + &wcn->data_mem_pool.phy_addr, 647 + GFP_KERNEL); 647 648 if (!cpu_addr) 648 649 goto out_err; 649 650
+4 -4
drivers/net/wireless/ath/wil6210/txrx_edma.c
··· 99 99 /* Status messages are allocated and initialized to 0. This is necessary 100 100 * since DR bit should be initialized to 0. 101 101 */ 102 - sring->va = dma_zalloc_coherent(dev, sz, &sring->pa, GFP_KERNEL); 102 + sring->va = dma_alloc_coherent(dev, sz, &sring->pa, GFP_KERNEL); 103 103 if (!sring->va) 104 104 return -ENOMEM; 105 105 ··· 381 381 if (!ring->ctx) 382 382 goto err; 383 383 384 - ring->va = dma_zalloc_coherent(dev, sz, &ring->pa, GFP_KERNEL); 384 + ring->va = dma_alloc_coherent(dev, sz, &ring->pa, GFP_KERNEL); 385 385 if (!ring->va) 386 386 goto err_free_ctx; 387 387 388 388 if (ring->is_rx) { 389 389 sz = sizeof(*ring->edma_rx_swtail.va); 390 390 ring->edma_rx_swtail.va = 391 - dma_zalloc_coherent(dev, sz, &ring->edma_rx_swtail.pa, 392 - GFP_KERNEL); 391 + dma_alloc_coherent(dev, sz, &ring->edma_rx_swtail.pa, 392 + GFP_KERNEL); 393 393 if (!ring->edma_rx_swtail.va) 394 394 goto err_free_va; 395 395 }
+3 -3
drivers/net/wireless/broadcom/b43/dma.c
··· 431 431 u16 ring_mem_size = (ring->type == B43_DMA_64BIT) ? 432 432 B43_DMA64_RINGMEMSIZE : B43_DMA32_RINGMEMSIZE; 433 433 434 - ring->descbase = dma_zalloc_coherent(ring->dev->dev->dma_dev, 435 - ring_mem_size, &(ring->dmabase), 436 - GFP_KERNEL); 434 + ring->descbase = dma_alloc_coherent(ring->dev->dev->dma_dev, 435 + ring_mem_size, &(ring->dmabase), 436 + GFP_KERNEL); 437 437 if (!ring->descbase) 438 438 return -ENOMEM; 439 439
+3 -3
drivers/net/wireless/broadcom/b43legacy/dma.c
··· 331 331 static int alloc_ringmemory(struct b43legacy_dmaring *ring) 332 332 { 333 333 /* GFP flags must match the flags in free_ringmemory()! */ 334 - ring->descbase = dma_zalloc_coherent(ring->dev->dev->dma_dev, 335 - B43legacy_DMA_RINGMEMSIZE, 336 - &(ring->dmabase), GFP_KERNEL); 334 + ring->descbase = dma_alloc_coherent(ring->dev->dev->dma_dev, 335 + B43legacy_DMA_RINGMEMSIZE, 336 + &(ring->dmabase), GFP_KERNEL); 337 337 if (!ring->descbase) 338 338 return -ENOMEM; 339 339
+8 -8
drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
··· 1281 1281 u32 addr; 1282 1282 1283 1283 devinfo->shared.scratch = 1284 - dma_zalloc_coherent(&devinfo->pdev->dev, 1285 - BRCMF_DMA_D2H_SCRATCH_BUF_LEN, 1286 - &devinfo->shared.scratch_dmahandle, 1287 - GFP_KERNEL); 1284 + dma_alloc_coherent(&devinfo->pdev->dev, 1285 + BRCMF_DMA_D2H_SCRATCH_BUF_LEN, 1286 + &devinfo->shared.scratch_dmahandle, 1287 + GFP_KERNEL); 1288 1288 if (!devinfo->shared.scratch) 1289 1289 goto fail; 1290 1290 ··· 1298 1298 brcmf_pcie_write_tcm32(devinfo, addr, BRCMF_DMA_D2H_SCRATCH_BUF_LEN); 1299 1299 1300 1300 devinfo->shared.ringupd = 1301 - dma_zalloc_coherent(&devinfo->pdev->dev, 1302 - BRCMF_DMA_D2H_RINGUPD_BUF_LEN, 1303 - &devinfo->shared.ringupd_dmahandle, 1304 - GFP_KERNEL); 1301 + dma_alloc_coherent(&devinfo->pdev->dev, 1302 + BRCMF_DMA_D2H_RINGUPD_BUF_LEN, 1303 + &devinfo->shared.ringupd_dmahandle, 1304 + GFP_KERNEL); 1305 1305 if (!devinfo->shared.ringupd) 1306 1306 goto fail; 1307 1307
+15 -24
drivers/net/wireless/intel/iwlwifi/pcie/rx.c
··· 711 711 * Allocate the circular buffer of Read Buffer Descriptors 712 712 * (RBDs) 713 713 */ 714 - rxq->bd = dma_zalloc_coherent(dev, 715 - free_size * rxq->queue_size, 716 - &rxq->bd_dma, GFP_KERNEL); 714 + rxq->bd = dma_alloc_coherent(dev, free_size * rxq->queue_size, 715 + &rxq->bd_dma, GFP_KERNEL); 717 716 if (!rxq->bd) 718 717 goto err; 719 718 720 719 if (trans->cfg->mq_rx_supported) { 721 - rxq->used_bd = dma_zalloc_coherent(dev, 722 - (use_rx_td ? 723 - sizeof(*rxq->cd) : 724 - sizeof(__le32)) * 725 - rxq->queue_size, 726 - &rxq->used_bd_dma, 727 - GFP_KERNEL); 720 + rxq->used_bd = dma_alloc_coherent(dev, 721 + (use_rx_td ? sizeof(*rxq->cd) : sizeof(__le32)) * rxq->queue_size, 722 + &rxq->used_bd_dma, 723 + GFP_KERNEL); 728 724 if (!rxq->used_bd) 729 725 goto err; 730 726 } 731 727 732 728 /* Allocate the driver's pointer to receive buffer status */ 733 - rxq->rb_stts = dma_zalloc_coherent(dev, use_rx_td ? 734 - sizeof(__le16) : 735 - sizeof(struct iwl_rb_status), 736 - &rxq->rb_stts_dma, 737 - GFP_KERNEL); 729 + rxq->rb_stts = dma_alloc_coherent(dev, 730 + use_rx_td ? sizeof(__le16) : sizeof(struct iwl_rb_status), 731 + &rxq->rb_stts_dma, GFP_KERNEL); 738 732 if (!rxq->rb_stts) 739 733 goto err; 740 734 ··· 736 742 return 0; 737 743 738 744 /* Allocate the driver's pointer to TR tail */ 739 - rxq->tr_tail = dma_zalloc_coherent(dev, sizeof(__le16), 740 - &rxq->tr_tail_dma, 741 - GFP_KERNEL); 745 + rxq->tr_tail = dma_alloc_coherent(dev, sizeof(__le16), 746 + &rxq->tr_tail_dma, GFP_KERNEL); 742 747 if (!rxq->tr_tail) 743 748 goto err; 744 749 745 750 /* Allocate the driver's pointer to CR tail */ 746 - rxq->cr_tail = dma_zalloc_coherent(dev, sizeof(__le16), 747 - &rxq->cr_tail_dma, 748 - GFP_KERNEL); 751 + rxq->cr_tail = dma_alloc_coherent(dev, sizeof(__le16), 752 + &rxq->cr_tail_dma, GFP_KERNEL); 749 753 if (!rxq->cr_tail) 750 754 goto err; 751 755 /* ··· 1939 1947 struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); 1940 1948 1941 1949 trans_pcie->ict_tbl = 1942 - dma_zalloc_coherent(trans->dev, ICT_SIZE, 1943 - &trans_pcie->ict_tbl_dma, 1944 - GFP_KERNEL); 1950 + dma_alloc_coherent(trans->dev, ICT_SIZE, 1951 + &trans_pcie->ict_tbl_dma, GFP_KERNEL); 1945 1952 if (!trans_pcie->ict_tbl) 1946 1953 return -ENOMEM; 1947 1954
+3 -3
drivers/net/wireless/ralink/rt2x00/rt2x00mmio.c
··· 119 119 /* 120 120 * Allocate DMA memory for descriptor and buffer. 121 121 */ 122 - addr = dma_zalloc_coherent(rt2x00dev->dev, 123 - queue->limit * queue->desc_size, &dma, 124 - GFP_KERNEL); 122 + addr = dma_alloc_coherent(rt2x00dev->dev, 123 + queue->limit * queue->desc_size, &dma, 124 + GFP_KERNEL); 125 125 if (!addr) 126 126 return -ENOMEM; 127 127
+4 -4
drivers/ntb/hw/mscc/ntb_hw_switchtec.c
··· 1339 1339 int rc; 1340 1340 1341 1341 sndev->nr_rsvd_luts++; 1342 - sndev->self_shared = dma_zalloc_coherent(&sndev->stdev->pdev->dev, 1343 - LUT_SIZE, 1344 - &sndev->self_shared_dma, 1345 - GFP_KERNEL); 1342 + sndev->self_shared = dma_alloc_coherent(&sndev->stdev->pdev->dev, 1343 + LUT_SIZE, 1344 + &sndev->self_shared_dma, 1345 + GFP_KERNEL); 1346 1346 if (!sndev->self_shared) { 1347 1347 dev_err(&sndev->stdev->dev, 1348 1348 "unable to allocate memory for shared mw\n");
+10 -9
drivers/nvme/host/core.c
··· 2173 2173 size_t nqnlen; 2174 2174 int off; 2175 2175 2176 - nqnlen = strnlen(id->subnqn, NVMF_NQN_SIZE); 2177 - if (nqnlen > 0 && nqnlen < NVMF_NQN_SIZE) { 2178 - strlcpy(subsys->subnqn, id->subnqn, NVMF_NQN_SIZE); 2179 - return; 2180 - } 2176 + if(!(ctrl->quirks & NVME_QUIRK_IGNORE_DEV_SUBNQN)) { 2177 + nqnlen = strnlen(id->subnqn, NVMF_NQN_SIZE); 2178 + if (nqnlen > 0 && nqnlen < NVMF_NQN_SIZE) { 2179 + strlcpy(subsys->subnqn, id->subnqn, NVMF_NQN_SIZE); 2180 + return; 2181 + } 2181 2182 2182 - if (ctrl->vs >= NVME_VS(1, 2, 1)) 2183 - dev_warn(ctrl->device, "missing or invalid SUBNQN field.\n"); 2183 + if (ctrl->vs >= NVME_VS(1, 2, 1)) 2184 + dev_warn(ctrl->device, "missing or invalid SUBNQN field.\n"); 2185 + } 2184 2186 2185 2187 /* Generate a "fake" NQN per Figure 254 in NVMe 1.3 + ECN 001 */ 2186 2188 off = snprintf(subsys->subnqn, NVMF_NQN_SIZE, 2187 - "nqn.2014.08.org.nvmexpress:%4x%4x", 2189 + "nqn.2014.08.org.nvmexpress:%04x%04x", 2188 2190 le16_to_cpu(id->vid), le16_to_cpu(id->ssvid)); 2189 2191 memcpy(subsys->subnqn + off, id->sn, sizeof(id->sn)); 2190 2192 off += sizeof(id->sn); ··· 2502 2500 ctrl->oaes = le32_to_cpu(id->oaes); 2503 2501 atomic_set(&ctrl->abort_limit, id->acl + 1); 2504 2502 ctrl->vwc = id->vwc; 2505 - ctrl->cntlid = le16_to_cpup(&id->cntlid); 2506 2503 if (id->mdts) 2507 2504 max_hw_sectors = 1 << (id->mdts + page_shift - 9); 2508 2505 else
+2
drivers/nvme/host/fabrics.c
··· 874 874 if (opts->discovery_nqn) { 875 875 opts->kato = 0; 876 876 opts->nr_io_queues = 0; 877 + opts->nr_write_queues = 0; 878 + opts->nr_poll_queues = 0; 877 879 opts->duplicate_connect = true; 878 880 } 879 881 if (ctrl_loss_tmo < 0)
+2
drivers/nvme/host/multipath.c
··· 570 570 return 0; 571 571 out_free_ana_log_buf: 572 572 kfree(ctrl->ana_log_buf); 573 + ctrl->ana_log_buf = NULL; 573 574 out: 574 575 return error; 575 576 } ··· 578 577 void nvme_mpath_uninit(struct nvme_ctrl *ctrl) 579 578 { 580 579 kfree(ctrl->ana_log_buf); 580 + ctrl->ana_log_buf = NULL; 581 581 } 582 582
+5
drivers/nvme/host/nvme.h
··· 90 90 * Set MEDIUM priority on SQ creation 91 91 */ 92 92 NVME_QUIRK_MEDIUM_PRIO_SQ = (1 << 7), 93 + 94 + /* 95 + * Ignore device provided subnqn. 96 + */ 97 + NVME_QUIRK_IGNORE_DEV_SUBNQN = (1 << 8), 93 98 }; 94 99 95 100 /*
+51 -24
drivers/nvme/host/pci.c
··· 95 95 struct nvme_queue; 96 96 97 97 static void nvme_dev_disable(struct nvme_dev *dev, bool shutdown); 98 + static bool __nvme_disable_io_queues(struct nvme_dev *dev, u8 opcode); 98 99 99 100 /* 100 101 * Represents an NVM Express device. Each nvme_dev is a PCI function. ··· 1020 1019 1021 1020 static inline void nvme_update_cq_head(struct nvme_queue *nvmeq) 1022 1021 { 1023 - if (++nvmeq->cq_head == nvmeq->q_depth) { 1022 + if (nvmeq->cq_head == nvmeq->q_depth - 1) { 1024 1023 nvmeq->cq_head = 0; 1025 1024 nvmeq->cq_phase = !nvmeq->cq_phase; 1025 + } else { 1026 + nvmeq->cq_head++; 1026 1027 } 1027 1028 } 1028 1029 ··· 1423 1420 return 0; 1424 1421 } 1425 1422 1423 + static void nvme_suspend_io_queues(struct nvme_dev *dev) 1424 + { 1425 + int i; 1426 + 1427 + for (i = dev->ctrl.queue_count - 1; i > 0; i--) 1428 + nvme_suspend_queue(&dev->queues[i]); 1429 + } 1430 + 1426 1431 static void nvme_disable_admin_queue(struct nvme_dev *dev, bool shutdown) 1427 1432 { 1428 1433 struct nvme_queue *nvmeq = &dev->queues[0]; ··· 1496 1485 if (dev->ctrl.queue_count > qid) 1497 1486 return 0; 1498 1487 1499 - nvmeq->cqes = dma_zalloc_coherent(dev->dev, CQ_SIZE(depth), 1500 - &nvmeq->cq_dma_addr, GFP_KERNEL); 1488 + nvmeq->cqes = dma_alloc_coherent(dev->dev, CQ_SIZE(depth), 1489 + &nvmeq->cq_dma_addr, GFP_KERNEL); 1501 1490 if (!nvmeq->cqes) 1502 1491 goto free_nvmeq; 1503 1492 ··· 1896 1885 struct nvme_host_mem_buf_desc *desc = &dev->host_mem_descs[i]; 1897 1886 size_t size = le32_to_cpu(desc->size) * dev->ctrl.page_size; 1898 1887 1899 - dma_free_coherent(dev->dev, size, dev->host_mem_desc_bufs[i], 1900 - le64_to_cpu(desc->addr)); 1888 + dma_free_attrs(dev->dev, size, dev->host_mem_desc_bufs[i], 1889 + le64_to_cpu(desc->addr), 1890 + DMA_ATTR_NO_KERNEL_MAPPING | DMA_ATTR_NO_WARN); 1901 1891 } 1902 1892 1903 1893 kfree(dev->host_mem_desc_bufs); ··· 1927 1915 if (dev->ctrl.hmmaxd && dev->ctrl.hmmaxd < max_entries) 1928 1916 max_entries = dev->ctrl.hmmaxd; 1929 1917 1930 - descs = dma_zalloc_coherent(dev->dev, max_entries * sizeof(*descs), 1931 - &descs_dma, GFP_KERNEL); 1918 + descs = dma_alloc_coherent(dev->dev, max_entries * sizeof(*descs), 1919 + &descs_dma, GFP_KERNEL); 1932 1920 if (!descs) 1933 1921 goto out; ··· 1964 1952 while (--i >= 0) { 1965 1953 size_t size = le32_to_cpu(descs[i].size) * dev->ctrl.page_size; 1966 1954 1967 - dma_free_coherent(dev->dev, size, bufs[i], 1968 - le64_to_cpu(descs[i].addr)); 1955 + dma_free_attrs(dev->dev, size, bufs[i], 1956 + le64_to_cpu(descs[i].addr), 1957 + DMA_ATTR_NO_KERNEL_MAPPING | DMA_ATTR_NO_WARN); 1969 1958 } 1970 1959 1971 1960 kfree(bufs); ··· 2145 2132 return result; 2146 2133 } 2147 2134 2135 + static void nvme_disable_io_queues(struct nvme_dev *dev) 2136 + { 2137 + if (__nvme_disable_io_queues(dev, nvme_admin_delete_sq)) 2138 + __nvme_disable_io_queues(dev, nvme_admin_delete_cq); 2139 + } 2140 + 2148 2141 static int nvme_setup_io_queues(struct nvme_dev *dev) 2149 2142 { 2150 2143 struct nvme_queue *adminq = &dev->queues[0]; ··· 2187 2168 } while (1); 2188 2169 adminq->q_db = dev->dbs; 2189 2170 2171 + retry: 2190 2172 /* Deregister the admin queue's interrupt */ 2191 2173 pci_free_irq(pdev, 0, adminq); 2192 2174 ··· 2205 2185 result = max(result - 1, 1); 2206 2186 dev->max_qid = result + dev->io_queues[HCTX_TYPE_POLL]; 2207 2187 2208 - dev_info(dev->ctrl.device, "%d/%d/%d default/read/poll queues\n", 2209 - dev->io_queues[HCTX_TYPE_DEFAULT], 2210 - dev->io_queues[HCTX_TYPE_READ], 2211 - dev->io_queues[HCTX_TYPE_POLL]); 2212 - 2213 2188 /* 2214 2189 * Should investigate if there's a performance win from allocating 2215 2190 * more queues than interrupt vectors; it might allow the submission 2216 2191 * path to scale better, even if the receive path is limited by the 2217 2192 * number of interrupts. 2218 2193 */ 2219 - 2220 2194 result = queue_request_irq(adminq); 2221 2195 if (result) { 2222 2196 adminq->cq_vector = -1; 2223 2197 return result; 2224 2198 } 2225 2199 set_bit(NVMEQ_ENABLED, &adminq->flags); 2226 - return nvme_create_io_queues(dev); 2200 + 2201 + result = nvme_create_io_queues(dev); 2202 + if (result || dev->online_queues < 2) 2203 + return result; 2204 + 2205 + if (dev->online_queues - 1 < dev->max_qid) { 2206 + nr_io_queues = dev->online_queues - 1; 2207 + nvme_disable_io_queues(dev); 2208 + nvme_suspend_io_queues(dev); 2209 + goto retry; 2210 + } 2211 + dev_info(dev->ctrl.device, "%d/%d/%d default/read/poll queues\n", 2212 + dev->io_queues[HCTX_TYPE_DEFAULT], 2213 + dev->io_queues[HCTX_TYPE_READ], 2214 + dev->io_queues[HCTX_TYPE_POLL]); 2215 + return 0; 2227 2216 } 2228 2217 2229 2218 static void nvme_del_queue_end(struct request *req, blk_status_t error) ··· 2277 2248 return 0; 2278 2249 } 2279 2250 2280 - static bool nvme_disable_io_queues(struct nvme_dev *dev, u8 opcode) 2251 + static bool __nvme_disable_io_queues(struct nvme_dev *dev, u8 opcode) 2281 2252 { 2282 2253 int nr_queues = dev->online_queues - 1, sent = 0; 2283 2254 unsigned long timeout; ··· 2323 2294 dev->tagset.nr_maps = 2; /* default + read */ 2324 2295 if (dev->io_queues[HCTX_TYPE_POLL]) 2325 2296 dev->tagset.nr_maps++; 2326 - dev->tagset.nr_maps = HCTX_MAX_TYPES; 2327 2297 dev->tagset.timeout = NVME_IO_TIMEOUT; 2328 2298 dev->tagset.numa_node = dev_to_node(dev->dev); 2329 2299 dev->tagset.queue_depth = ··· 2438 2410 2439 2411 static void nvme_dev_disable(struct nvme_dev *dev, bool shutdown) 2440 2412 { 2441 - int i; 2442 2413 bool dead = true; 2443 2414 struct pci_dev *pdev = to_pci_dev(dev->dev); 2444 2415 ··· 2464 2437 nvme_stop_queues(&dev->ctrl); 2465 2438 2466 2439 if (!dead && dev->ctrl.queue_count > 0) { 2467 - if (nvme_disable_io_queues(dev, nvme_admin_delete_sq)) 2468 - nvme_disable_io_queues(dev, nvme_admin_delete_cq); 2440 + nvme_disable_io_queues(dev); 2469 2441 nvme_disable_admin_queue(dev, shutdown); 2470 2442 } 2471 - for (i = dev->ctrl.queue_count - 1; i >= 0; i--) 2472 - nvme_suspend_queue(&dev->queues[i]); 2473 - 2443 + nvme_suspend_io_queues(dev); 2444 + nvme_suspend_queue(&dev->queues[0]); 2474 2445 nvme_pci_disable(dev); 2475 2446 2476 2447 blk_mq_tagset_busy_iter(&dev->tagset, nvme_cancel_request, &dev->ctrl); ··· 2971 2946 { PCI_VDEVICE(INTEL, 0xf1a5), /* Intel 600P/P3100 */ 2972 2947 .driver_data = NVME_QUIRK_NO_DEEPEST_PS | 2973 2948 NVME_QUIRK_MEDIUM_PRIO_SQ }, 2949 + { PCI_VDEVICE(INTEL, 0xf1a6), /* Intel 760p/Pro 7600p */ 2950 + .driver_data = NVME_QUIRK_IGNORE_DEV_SUBNQN, }, 2974 2951 { PCI_VDEVICE(INTEL, 0x5845), /* Qemu emulated controller */ 2975 2952 .driver_data = NVME_QUIRK_IDENTIFY_CNS, }, 2976 2953 { PCI_DEVICE(0x1bb1, 0x0100), /* Seagate Nytro Flash Storage */
+6 -10
drivers/nvme/host/tcp.c
··· 1565 1565 { 1566 1566 nvme_tcp_stop_io_queues(ctrl); 1567 1567 if (remove) { 1568 - if (ctrl->ops->flags & NVME_F_FABRICS) 1569 - blk_cleanup_queue(ctrl->connect_q); 1568 + blk_cleanup_queue(ctrl->connect_q); 1570 1569 blk_mq_free_tag_set(ctrl->tagset); 1571 1570 } 1572 1571 nvme_tcp_free_io_queues(ctrl); ··· 1586 1587 goto out_free_io_queues; 1587 1588 } 1588 1589 1589 - if (ctrl->ops->flags & NVME_F_FABRICS) { 1590 - ctrl->connect_q = blk_mq_init_queue(ctrl->tagset); 1591 - if (IS_ERR(ctrl->connect_q)) { 1592 - ret = PTR_ERR(ctrl->connect_q); 1593 - goto out_free_tag_set; 1594 - } 1590 + ctrl->connect_q = blk_mq_init_queue(ctrl->tagset); 1591 + if (IS_ERR(ctrl->connect_q)) { 1592 + ret = PTR_ERR(ctrl->connect_q); 1593 + goto out_free_tag_set; 1595 1594 } 1596 1595 } else { 1597 1596 blk_mq_update_nr_hw_queues(ctrl->tagset, ··· 1603 1606 return 0; 1604 1607 1605 1608 out_cleanup_connect_q: 1606 - if (new && (ctrl->ops->flags & NVME_F_FABRICS)) 1609 + if (new) 1607 1610 blk_cleanup_queue(ctrl->connect_q); 1608 1611 out_free_tag_set: 1609 1612 if (new) ··· 1617 1620 { 1618 1621 nvme_tcp_stop_queue(ctrl, 0); 1619 1622 if (remove) { 1620 - free_opal_dev(ctrl->opal_dev); 1621 1623 blk_cleanup_queue(ctrl->admin_q); 1622 1624 blk_mq_free_tag_set(ctrl->admin_tagset); 1623 1625 }
+58 -5
drivers/opp/core.c
··· 988 988 kfree(opp); 989 989 } 990 990 991 - static void _opp_kref_release(struct kref *kref) 991 + static void _opp_kref_release(struct dev_pm_opp *opp, 992 + struct opp_table *opp_table) 992 993 { 993 - struct dev_pm_opp *opp = container_of(kref, struct dev_pm_opp, kref); 994 - struct opp_table *opp_table = opp->opp_table; 995 - 996 994 /* 997 995 * Notify the changes in the availability of the operable 998 996 * frequency/voltage list. ··· 1000 1002 opp_debug_remove_one(opp); 1001 1003 list_del(&opp->node); 1002 1004 kfree(opp); 1005 + } 1003 1006 1007 + static void _opp_kref_release_unlocked(struct kref *kref) 1008 + { 1009 + struct dev_pm_opp *opp = container_of(kref, struct dev_pm_opp, kref); 1010 + struct opp_table *opp_table = opp->opp_table; 1011 + 1012 + _opp_kref_release(opp, opp_table); 1013 + } 1014 + 1015 + static void _opp_kref_release_locked(struct kref *kref) 1016 + { 1017 + struct dev_pm_opp *opp = container_of(kref, struct dev_pm_opp, kref); 1018 + struct opp_table *opp_table = opp->opp_table; 1019 + 1020 + _opp_kref_release(opp, opp_table); 1004 1021 mutex_unlock(&opp_table->lock); 1005 1022 } 1006 1023 ··· 1026 1013 1027 1014 void dev_pm_opp_put(struct dev_pm_opp *opp) 1028 1015 { 1029 - kref_put_mutex(&opp->kref, _opp_kref_release, &opp->opp_table->lock); 1016 + kref_put_mutex(&opp->kref, _opp_kref_release_locked, 1017 + &opp->opp_table->lock); 1030 1018 } 1031 1019 EXPORT_SYMBOL_GPL(dev_pm_opp_put); 1020 + 1021 + static void dev_pm_opp_put_unlocked(struct dev_pm_opp *opp) 1022 + { 1023 + kref_put(&opp->kref, _opp_kref_release_unlocked); 1024 + } 1032 1025 1033 1026 /** 1034 1027 * dev_pm_opp_remove() - Remove an OPP from OPP table ··· 1078 1059 dev_pm_opp_put_opp_table(opp_table); 1079 1060 } 1080 1061 EXPORT_SYMBOL_GPL(dev_pm_opp_remove); 1062 + 1063 + /** 1064 + * dev_pm_opp_remove_all_dynamic() - Remove all dynamically created OPPs 1065 + * @dev: device for which we do this operation 1066 + * 1067 + * This function removes all 
dynamically created OPPs from the opp table. 1068 + */ 1069 + void dev_pm_opp_remove_all_dynamic(struct device *dev) 1070 + { 1071 + struct opp_table *opp_table; 1072 + struct dev_pm_opp *opp, *temp; 1073 + int count = 0; 1074 + 1075 + opp_table = _find_opp_table(dev); 1076 + if (IS_ERR(opp_table)) 1077 + return; 1078 + 1079 + mutex_lock(&opp_table->lock); 1080 + list_for_each_entry_safe(opp, temp, &opp_table->opp_list, node) { 1081 + if (opp->dynamic) { 1082 + dev_pm_opp_put_unlocked(opp); 1083 + count++; 1084 + } 1085 + } 1086 + mutex_unlock(&opp_table->lock); 1087 + 1088 + /* Drop the references taken by dev_pm_opp_add() */ 1089 + while (count--) 1090 + dev_pm_opp_put_opp_table(opp_table); 1091 + 1092 + /* Drop the reference taken by _find_opp_table() */ 1093 + dev_pm_opp_put_opp_table(opp_table); 1094 + } 1095 + EXPORT_SYMBOL_GPL(dev_pm_opp_remove_all_dynamic); 1081 1096 1082 1097 struct dev_pm_opp *_opp_allocate(struct opp_table *table) 1083 1098 {
+1
drivers/pci/controller/dwc/pci-meson.c
··· 8 8 9 9 #include <linux/clk.h> 10 10 #include <linux/delay.h> 11 + #include <linux/gpio/consumer.h> 11 12 #include <linux/of_device.h> 12 13 #include <linux/of_gpio.h> 13 14 #include <linux/pci.h>
+3 -3
drivers/pci/controller/pcie-iproc-msi.c
··· 602 602 } 603 603 604 604 /* Reserve memory for event queue and make sure memories are zeroed */ 605 - msi->eq_cpu = dma_zalloc_coherent(pcie->dev, 606 - msi->nr_eq_region * EQ_MEM_REGION_SIZE, 607 - &msi->eq_dma, GFP_KERNEL); 605 + msi->eq_cpu = dma_alloc_coherent(pcie->dev, 606 + msi->nr_eq_region * EQ_MEM_REGION_SIZE, 607 + &msi->eq_dma, GFP_KERNEL); 608 608 if (!msi->eq_cpu) { 609 609 ret = -ENOMEM; 610 610 goto free_irqs;
+4 -4
drivers/pci/switch/switchtec.c
··· 1373 1373 if (ioread32(&stdev->mmio_mrpc->dma_ver) == 0) 1374 1374 return 0; 1375 1375 1376 - stdev->dma_mrpc = dma_zalloc_coherent(&stdev->pdev->dev, 1377 - sizeof(*stdev->dma_mrpc), 1378 - &stdev->dma_mrpc_dma_addr, 1379 - GFP_KERNEL); 1376 + stdev->dma_mrpc = dma_alloc_coherent(&stdev->pdev->dev, 1377 + sizeof(*stdev->dma_mrpc), 1378 + &stdev->dma_mrpc_dma_addr, 1379 + GFP_KERNEL); 1380 1380 if (stdev->dma_mrpc == NULL) 1381 1381 return -ENOMEM; 1382 1382
+3 -2
drivers/phy/marvell/phy-berlin-sata.c
··· 32 32 33 33 /* register 0x01 */ 34 34 #define REF_FREF_SEL_25 BIT(0) 35 - #define PHY_MODE_SATA (0x0 << 5) 35 + #define PHY_BERLIN_MODE_SATA (0x0 << 5) 36 36 37 37 /* register 0x02 */ 38 38 #define USE_MAX_PLL_RATE BIT(12) ··· 102 102 103 103 /* set PHY mode and ref freq to 25 MHz */ 104 104 phy_berlin_sata_reg_setbits(ctrl_reg, priv->phy_base, 0x01, 105 - 0x00ff, REF_FREF_SEL_25 | PHY_MODE_SATA); 105 + 0x00ff, 106 + REF_FREF_SEL_25 | PHY_BERLIN_MODE_SATA); 106 107 107 108 /* set PHY up to 6 Gbps */ 108 109 phy_berlin_sata_reg_setbits(ctrl_reg, priv->phy_base, 0x25,
+11 -11
drivers/rapidio/devices/tsi721.c
··· 1382 1382 INIT_WORK(&priv->idb_work, tsi721_db_dpc); 1383 1383 1384 1384 /* Allocate buffer for inbound doorbells queue */ 1385 - priv->idb_base = dma_zalloc_coherent(&priv->pdev->dev, 1386 - IDB_QSIZE * TSI721_IDB_ENTRY_SIZE, 1387 - &priv->idb_dma, GFP_KERNEL); 1385 + priv->idb_base = dma_alloc_coherent(&priv->pdev->dev, 1386 + IDB_QSIZE * TSI721_IDB_ENTRY_SIZE, 1387 + &priv->idb_dma, GFP_KERNEL); 1388 1388 if (!priv->idb_base) 1389 1389 return -ENOMEM; 1390 1390 ··· 1447 1447 regs = priv->regs + TSI721_DMAC_BASE(TSI721_DMACH_MAINT); 1448 1448 1449 1449 /* Allocate space for DMA descriptors */ 1450 - bd_ptr = dma_zalloc_coherent(&priv->pdev->dev, 1451 - bd_num * sizeof(struct tsi721_dma_desc), 1452 - &bd_phys, GFP_KERNEL); 1450 + bd_ptr = dma_alloc_coherent(&priv->pdev->dev, 1451 + bd_num * sizeof(struct tsi721_dma_desc), 1452 + &bd_phys, GFP_KERNEL); 1453 1453 if (!bd_ptr) 1454 1454 return -ENOMEM; 1455 1455 ··· 1464 1464 sts_size = (bd_num >= TSI721_DMA_MINSTSSZ) ? 1465 1465 bd_num : TSI721_DMA_MINSTSSZ; 1466 1466 sts_size = roundup_pow_of_two(sts_size); 1467 - sts_ptr = dma_zalloc_coherent(&priv->pdev->dev, 1467 + sts_ptr = dma_alloc_coherent(&priv->pdev->dev, 1468 1468 sts_size * sizeof(struct tsi721_dma_sts), 1469 1469 &sts_phys, GFP_KERNEL); 1470 1470 if (!sts_ptr) { ··· 1939 1939 1940 1940 /* Outbound message descriptor status FIFO allocation */ 1941 1941 priv->omsg_ring[mbox].sts_size = roundup_pow_of_two(entries + 1); 1942 - priv->omsg_ring[mbox].sts_base = dma_zalloc_coherent(&priv->pdev->dev, 1943 - priv->omsg_ring[mbox].sts_size * 1944 - sizeof(struct tsi721_dma_sts), 1945 - &priv->omsg_ring[mbox].sts_phys, GFP_KERNEL); 1942 + priv->omsg_ring[mbox].sts_base = dma_alloc_coherent(&priv->pdev->dev, 1943 + priv->omsg_ring[mbox].sts_size * sizeof(struct tsi721_dma_sts), 1944 + &priv->omsg_ring[mbox].sts_phys, 1945 + GFP_KERNEL); 1946 1946 if (priv->omsg_ring[mbox].sts_base == NULL) { 1947 1947 tsi_debug(OMSG, &priv->pdev->dev, 1948 1948 "ENOMEM for 
OB_MSG_%d status FIFO", mbox);
+4 -4
drivers/rapidio/devices/tsi721_dma.c
··· 90 90 * Allocate space for DMA descriptors 91 91 * (add an extra element for link descriptor) 92 92 */ 93 - bd_ptr = dma_zalloc_coherent(dev, 94 - (bd_num + 1) * sizeof(struct tsi721_dma_desc), 95 - &bd_phys, GFP_ATOMIC); 93 + bd_ptr = dma_alloc_coherent(dev, 94 + (bd_num + 1) * sizeof(struct tsi721_dma_desc), 95 + &bd_phys, GFP_ATOMIC); 96 96 if (!bd_ptr) 97 97 return -ENOMEM; 98 98 ··· 108 108 sts_size = ((bd_num + 1) >= TSI721_DMA_MINSTSSZ) ? 109 109 (bd_num + 1) : TSI721_DMA_MINSTSSZ; 110 110 sts_size = roundup_pow_of_two(sts_size); 111 - sts_ptr = dma_zalloc_coherent(dev, 111 + sts_ptr = dma_alloc_coherent(dev, 112 112 sts_size * sizeof(struct tsi721_dma_sts), 113 113 &sts_phys, GFP_ATOMIC); 114 114 if (!sts_ptr) {
+14 -6
drivers/reset/Kconfig
··· 109 109 110 110 config RESET_SIMPLE 111 111 bool "Simple Reset Controller Driver" if COMPILE_TEST 112 - default ARCH_SOCFPGA || ARCH_STM32 || ARCH_STRATIX10 || ARCH_SUNXI || ARCH_ZX || ARCH_ASPEED 112 + default ARCH_STM32 || ARCH_STRATIX10 || ARCH_SUNXI || ARCH_ZX || ARCH_ASPEED 113 113 help 114 114 This enables a simple reset controller driver for reset lines that 115 115 that can be asserted and deasserted by toggling bits in a contiguous, ··· 127 127 default MACH_STM32MP157 128 128 help 129 129 This enables the RCC reset controller driver for STM32 MPUs. 130 + 131 + config RESET_SOCFPGA 132 + bool "SoCFPGA Reset Driver" if COMPILE_TEST && !ARCH_SOCFPGA 133 + default ARCH_SOCFPGA 134 + select RESET_SIMPLE 135 + help 136 + This enables the reset driver for the SoCFPGA ARMv7 platforms. This 137 + driver gets initialized early during platform init calls. 130 138 131 139 config RESET_SUNXI 132 140 bool "Allwinner SoCs Reset Driver" if COMPILE_TEST && !ARCH_SUNXI ··· 171 163 Say Y if you want to control reset signals provided by System Control 172 164 block, Media I/O block, Peripheral Block. 173 165 174 - config RESET_UNIPHIER_USB3 175 - tristate "USB3 reset driver for UniPhier SoCs" 166 + config RESET_UNIPHIER_GLUE 167 + tristate "Reset driver in glue layer for UniPhier SoCs" 176 168 depends on (ARCH_UNIPHIER || COMPILE_TEST) && OF 177 169 default ARCH_UNIPHIER 178 170 select RESET_SIMPLE 179 171 help 180 - Support for the USB3 core reset on UniPhier SoCs. 181 - Say Y if you want to control reset signals provided by 182 - USB3 glue layer. 172 + Support for peripheral core reset included in its own glue layer 173 + on UniPhier SoCs. Say Y if you want to control reset signals 174 + provided by the glue layer. 183 175 184 176 config RESET_ZYNQ 185 177 bool "ZYNQ Reset Driver" if COMPILE_TEST
+2 -1
drivers/reset/Makefile
··· 19 19 obj-$(CONFIG_RESET_QCOM_PDC) += reset-qcom-pdc.o 20 20 obj-$(CONFIG_RESET_SIMPLE) += reset-simple.o 21 21 obj-$(CONFIG_RESET_STM32MP157) += reset-stm32mp1.o 22 + obj-$(CONFIG_RESET_SOCFPGA) += reset-socfpga.o 22 23 obj-$(CONFIG_RESET_SUNXI) += reset-sunxi.o 23 24 obj-$(CONFIG_RESET_TI_SCI) += reset-ti-sci.o 24 25 obj-$(CONFIG_RESET_TI_SYSCON) += reset-ti-syscon.o 25 26 obj-$(CONFIG_RESET_UNIPHIER) += reset-uniphier.o 26 - obj-$(CONFIG_RESET_UNIPHIER_USB3) += reset-uniphier-usb3.o 27 + obj-$(CONFIG_RESET_UNIPHIER_GLUE) += reset-uniphier-glue.o 27 28 obj-$(CONFIG_RESET_ZYNQ) += reset-zynq.o 28 29
+42
drivers/reset/core.c
··· 795 795 return rstc; 796 796 } 797 797 EXPORT_SYMBOL_GPL(devm_reset_control_array_get); 798 + 799 + static int reset_control_get_count_from_lookup(struct device *dev) 800 + { 801 + const struct reset_control_lookup *lookup; 802 + const char *dev_id; 803 + int count = 0; 804 + 805 + if (!dev) 806 + return -EINVAL; 807 + 808 + dev_id = dev_name(dev); 809 + mutex_lock(&reset_lookup_mutex); 810 + 811 + list_for_each_entry(lookup, &reset_lookup_list, list) { 812 + if (!strcmp(lookup->dev_id, dev_id)) 813 + count++; 814 + } 815 + 816 + mutex_unlock(&reset_lookup_mutex); 817 + 818 + if (count == 0) 819 + count = -ENOENT; 820 + 821 + return count; 822 + } 823 + 824 + /** 825 + * reset_control_get_count - Count number of resets available with a device 826 + * 827 + * @dev: device for which to return the number of resets 828 + * 829 + * Returns positive reset count on success, or error number on failure and 830 + * on count being zero. 831 + */ 832 + int reset_control_get_count(struct device *dev) 833 + { 834 + if (dev->of_node) 835 + return of_reset_control_get_count(dev->of_node); 836 + 837 + return reset_control_get_count_from_lookup(dev); 838 + } 839 + EXPORT_SYMBOL_GPL(reset_control_get_count);
+1
drivers/reset/reset-hsdk.c
··· 86 86 87 87 static const struct reset_control_ops hsdk_reset_ops = { 88 88 .reset = hsdk_reset_reset, 89 + .deassert = hsdk_reset_reset, 89 90 }; 90 91 91 92 static int hsdk_reset_probe(struct platform_device *pdev)
+3 -10
drivers/reset/reset-simple.c
··· 109 109 #define SOCFPGA_NR_BANKS 8 110 110 111 111 static const struct reset_simple_devdata reset_simple_socfpga = { 112 - .reg_offset = 0x10, 112 + .reg_offset = 0x20, 113 113 .nr_resets = SOCFPGA_NR_BANKS * 32, 114 114 .status_active_low = true, 115 115 }; ··· 120 120 }; 121 121 122 122 static const struct of_device_id reset_simple_dt_ids[] = { 123 - { .compatible = "altr,rst-mgr", .data = &reset_simple_socfpga }, 123 + { .compatible = "altr,stratix10-rst-mgr", 124 + .data = &reset_simple_socfpga }, 124 125 { .compatible = "st,stm32-rcc", }, 125 126 { .compatible = "allwinner,sun6i-a31-clock-reset", 126 127 .data = &reset_simple_active_low }, ··· 165 164 data->rcdev.nr_resets = devdata->nr_resets; 166 165 data->active_low = devdata->active_low; 167 166 data->status_active_low = devdata->status_active_low; 168 - } 169 - 170 - if (of_device_is_compatible(dev->of_node, "altr,rst-mgr") && 171 - of_property_read_u32(dev->of_node, "altr,modrst-offset", 172 - &reg_offset)) { 173 - dev_warn(dev, 174 - "missing altr,modrst-offset property, assuming 0x%x!\n", 175 - reg_offset); 176 167 } 177 168 178 169 data->membase += reg_offset;
+88
drivers/reset/reset-socfpga.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (C) 2018, Intel Corporation 4 + * Copied from reset-sunxi.c 5 + */ 6 + 7 + #include <linux/err.h> 8 + #include <linux/io.h> 9 + #include <linux/init.h> 10 + #include <linux/of.h> 11 + #include <linux/of_address.h> 12 + #include <linux/platform_device.h> 13 + #include <linux/reset-controller.h> 14 + #include <linux/slab.h> 15 + #include <linux/spinlock.h> 16 + #include <linux/types.h> 17 + 18 + #include "reset-simple.h" 19 + 20 + #define SOCFPGA_NR_BANKS 8 21 + void __init socfpga_reset_init(void); 22 + 23 + static int a10_reset_init(struct device_node *np) 24 + { 25 + struct reset_simple_data *data; 26 + struct resource res; 27 + resource_size_t size; 28 + int ret; 29 + u32 reg_offset = 0x10; 30 + 31 + data = kzalloc(sizeof(*data), GFP_KERNEL); 32 + if (!data) 33 + return -ENOMEM; 34 + 35 + ret = of_address_to_resource(np, 0, &res); 36 + if (ret) 37 + goto err_alloc; 38 + 39 + size = resource_size(&res); 40 + if (!request_mem_region(res.start, size, np->name)) { 41 + ret = -EBUSY; 42 + goto err_alloc; 43 + } 44 + 45 + data->membase = ioremap(res.start, size); 46 + if (!data->membase) { 47 + ret = -ENOMEM; 48 + goto err_alloc; 49 + } 50 + 51 + if (of_property_read_u32(np, "altr,modrst-offset", &reg_offset)) 52 + pr_warn("missing altr,modrst-offset property, assuming 0x10\n"); 53 + data->membase += reg_offset; 54 + 55 + spin_lock_init(&data->lock); 56 + 57 + data->rcdev.owner = THIS_MODULE; 58 + data->rcdev.nr_resets = SOCFPGA_NR_BANKS * 32; 59 + data->rcdev.ops = &reset_simple_ops; 60 + data->rcdev.of_node = np; 61 + data->status_active_low = true; 62 + 63 + return reset_controller_register(&data->rcdev); 64 + 65 + err_alloc: 66 + kfree(data); 67 + return ret; 68 + }; 69 + 70 + /* 71 + * These are the reset controller we need to initialize early on in 72 + * our system, before we can even think of using a regular device 73 + * driver for it. 
74 + * The controllers that we can register through the regular device 75 + * model are handled by the simple reset driver directly. 76 + */ 77 + static const struct of_device_id socfpga_early_reset_dt_ids[] __initconst = { 78 + { .compatible = "altr,rst-mgr", }, 79 + { /* sentinel */ }, 80 + }; 81 + 82 + void __init socfpga_reset_init(void) 83 + { 84 + struct device_node *np; 85 + 86 + for_each_matching_node(np, socfpga_early_reset_dt_ids) 87 + a10_reset_init(np); 88 + }
+31 -19
drivers/reset/reset-uniphier-usb3.c drivers/reset/reset-uniphier-glue.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 // 3 - // reset-uniphier-usb3.c - USB3 reset driver for UniPhier 3 + // reset-uniphier-glue.c - Glue layer reset driver for UniPhier 4 4 // Copyright 2018 Socionext Inc. 5 5 // Author: Kunihiko Hayashi <hayashi.kunihiko@socionext.com> 6 6 ··· 15 15 #define MAX_CLKS 2 16 16 #define MAX_RSTS 2 17 17 18 - struct uniphier_usb3_reset_soc_data { 18 + struct uniphier_glue_reset_soc_data { 19 19 int nclks; 20 20 const char * const *clock_names; 21 21 int nrsts; 22 22 const char * const *reset_names; 23 23 }; 24 24 25 - struct uniphier_usb3_reset_priv { 25 + struct uniphier_glue_reset_priv { 26 26 struct clk_bulk_data clk[MAX_CLKS]; 27 27 struct reset_control *rst[MAX_RSTS]; 28 28 struct reset_simple_data rdata; 29 - const struct uniphier_usb3_reset_soc_data *data; 29 + const struct uniphier_glue_reset_soc_data *data; 30 30 }; 31 31 32 - static int uniphier_usb3_reset_probe(struct platform_device *pdev) 32 + static int uniphier_glue_reset_probe(struct platform_device *pdev) 33 33 { 34 34 struct device *dev = &pdev->dev; 35 - struct uniphier_usb3_reset_priv *priv; 35 + struct uniphier_glue_reset_priv *priv; 36 36 struct resource *res; 37 37 resource_size_t size; 38 38 const char *name; ··· 100 100 return ret; 101 101 } 102 102 103 - static int uniphier_usb3_reset_remove(struct platform_device *pdev) 103 + static int uniphier_glue_reset_remove(struct platform_device *pdev) 104 104 { 105 - struct uniphier_usb3_reset_priv *priv = platform_get_drvdata(pdev); 105 + struct uniphier_glue_reset_priv *priv = platform_get_drvdata(pdev); 106 106 int i; 107 107 108 108 for (i = 0; i < priv->data->nrsts; i++) ··· 117 117 "gio", "link", 118 118 }; 119 119 120 - static const struct uniphier_usb3_reset_soc_data uniphier_pro4_data = { 120 + static const struct uniphier_glue_reset_soc_data uniphier_pro4_data = { 121 121 .nclks = ARRAY_SIZE(uniphier_pro4_clock_reset_names), 122 122 .clock_names = uniphier_pro4_clock_reset_names, 123 123 .nrsts = 
ARRAY_SIZE(uniphier_pro4_clock_reset_names), ··· 128 128 "link", 129 129 }; 130 130 131 - static const struct uniphier_usb3_reset_soc_data uniphier_pxs2_data = { 131 + static const struct uniphier_glue_reset_soc_data uniphier_pxs2_data = { 132 132 .nclks = ARRAY_SIZE(uniphier_pxs2_clock_reset_names), 133 133 .clock_names = uniphier_pxs2_clock_reset_names, 134 134 .nrsts = ARRAY_SIZE(uniphier_pxs2_clock_reset_names), 135 135 .reset_names = uniphier_pxs2_clock_reset_names, 136 136 }; 137 137 138 - static const struct of_device_id uniphier_usb3_reset_match[] = { 138 + static const struct of_device_id uniphier_glue_reset_match[] = { 139 139 { 140 140 .compatible = "socionext,uniphier-pro4-usb3-reset", 141 141 .data = &uniphier_pro4_data, ··· 152 152 .compatible = "socionext,uniphier-pxs3-usb3-reset", 153 153 .data = &uniphier_pxs2_data, 154 154 }, 155 + { 156 + .compatible = "socionext,uniphier-pro4-ahci-reset", 157 + .data = &uniphier_pro4_data, 158 + }, 159 + { 160 + .compatible = "socionext,uniphier-pxs2-ahci-reset", 161 + .data = &uniphier_pxs2_data, 162 + }, 163 + { 164 + .compatible = "socionext,uniphier-pxs3-ahci-reset", 165 + .data = &uniphier_pxs2_data, 166 + }, 155 167 { /* Sentinel */ } 156 168 }; 157 - MODULE_DEVICE_TABLE(of, uniphier_usb3_reset_match); 169 + MODULE_DEVICE_TABLE(of, uniphier_glue_reset_match); 158 170 159 - static struct platform_driver uniphier_usb3_reset_driver = { 160 - .probe = uniphier_usb3_reset_probe, 161 - .remove = uniphier_usb3_reset_remove, 171 + static struct platform_driver uniphier_glue_reset_driver = { 172 + .probe = uniphier_glue_reset_probe, 173 + .remove = uniphier_glue_reset_remove, 162 174 .driver = { 163 - .name = "uniphier-usb3-reset", 164 - .of_match_table = uniphier_usb3_reset_match, 175 + .name = "uniphier-glue-reset", 176 + .of_match_table = uniphier_glue_reset_match, 165 177 }, 166 178 }; 167 - module_platform_driver(uniphier_usb3_reset_driver); 179 + module_platform_driver(uniphier_glue_reset_driver); 168 180 169 
181 MODULE_AUTHOR("Kunihiko Hayashi <hayashi.kunihiko@socionext.com>"); 170 - MODULE_DESCRIPTION("UniPhier USB3 Reset Driver"); 182 + MODULE_DESCRIPTION("UniPhier Glue layer reset driver"); 171 183 MODULE_LICENSE("GPL");
+7 -8
drivers/s390/net/ism_drv.c
··· 89 89 dma_addr_t dma_handle; 90 90 struct ism_sba *sba; 91 91 92 - sba = dma_zalloc_coherent(&ism->pdev->dev, PAGE_SIZE, 93 - &dma_handle, GFP_KERNEL); 92 + sba = dma_alloc_coherent(&ism->pdev->dev, PAGE_SIZE, &dma_handle, 93 + GFP_KERNEL); 94 94 if (!sba) 95 95 return -ENOMEM; 96 96 ··· 116 116 dma_addr_t dma_handle; 117 117 struct ism_eq *ieq; 118 118 119 - ieq = dma_zalloc_coherent(&ism->pdev->dev, PAGE_SIZE, 120 - &dma_handle, GFP_KERNEL); 119 + ieq = dma_alloc_coherent(&ism->pdev->dev, PAGE_SIZE, &dma_handle, 120 + GFP_KERNEL); 121 121 if (!ieq) 122 122 return -ENOMEM; 123 123 ··· 234 234 test_and_set_bit(dmb->sba_idx, ism->sba_bitmap)) 235 235 return -EINVAL; 236 236 237 - dmb->cpu_addr = dma_zalloc_coherent(&ism->pdev->dev, dmb->dmb_len, 238 - &dmb->dma_addr, GFP_KERNEL | 239 - __GFP_NOWARN | __GFP_NOMEMALLOC | 240 - __GFP_COMP | __GFP_NORETRY); 237 + dmb->cpu_addr = dma_alloc_coherent(&ism->pdev->dev, dmb->dmb_len, 238 + &dmb->dma_addr, 239 + GFP_KERNEL | __GFP_NOWARN | __GFP_NOMEMALLOC | __GFP_COMP | __GFP_NORETRY); 241 240 if (!dmb->cpu_addr) 242 241 clear_bit(dmb->sba_idx, ism->sba_bitmap); 243 242
+3 -2
drivers/scsi/3w-sas.c
··· 646 646 unsigned long *cpu_addr; 647 647 int retval = 1; 648 648 649 - cpu_addr = dma_zalloc_coherent(&tw_dev->tw_pci_dev->dev, 650 - size * TW_Q_LENGTH, &dma_handle, GFP_KERNEL); 649 + cpu_addr = dma_alloc_coherent(&tw_dev->tw_pci_dev->dev, 650 + size * TW_Q_LENGTH, &dma_handle, 651 + GFP_KERNEL); 651 652 if (!cpu_addr) { 652 653 TW_PRINTK(tw_dev->host, TW_DRIVER, 0x5, "Memory allocation failed"); 653 654 goto out;
+4 -4
drivers/scsi/a100u2w.c
··· 1123 1123 1124 1124 /* Get total memory needed for SCB */ 1125 1125 sz = ORC_MAXQUEUE * sizeof(struct orc_scb); 1126 - host->scb_virt = dma_zalloc_coherent(&pdev->dev, sz, &host->scb_phys, 1127 - GFP_KERNEL); 1126 + host->scb_virt = dma_alloc_coherent(&pdev->dev, sz, &host->scb_phys, 1127 + GFP_KERNEL); 1128 1128 if (!host->scb_virt) { 1129 1129 printk("inia100: SCB memory allocation error\n"); 1130 1130 goto out_host_put; ··· 1132 1132 1133 1133 /* Get total memory needed for ESCB */ 1134 1134 sz = ORC_MAXQUEUE * sizeof(struct orc_extended_scb); 1135 - host->escb_virt = dma_zalloc_coherent(&pdev->dev, sz, &host->escb_phys, 1136 - GFP_KERNEL); 1135 + host->escb_virt = dma_alloc_coherent(&pdev->dev, sz, &host->escb_phys, 1136 + GFP_KERNEL); 1137 1137 if (!host->escb_virt) { 1138 1138 printk("inia100: ESCB memory allocation error\n"); 1139 1139 goto out_free_scb_array;
+12 -6
drivers/scsi/arcmsr/arcmsr_hba.c
··· 587 587 case ACB_ADAPTER_TYPE_B: { 588 588 struct MessageUnit_B *reg; 589 589 acb->roundup_ccbsize = roundup(sizeof(struct MessageUnit_B), 32); 590 - dma_coherent = dma_zalloc_coherent(&pdev->dev, acb->roundup_ccbsize, 591 - &dma_coherent_handle, GFP_KERNEL); 590 + dma_coherent = dma_alloc_coherent(&pdev->dev, 591 + acb->roundup_ccbsize, 592 + &dma_coherent_handle, 593 + GFP_KERNEL); 592 594 if (!dma_coherent) { 593 595 pr_notice("arcmsr%d: DMA allocation failed\n", acb->host->host_no); 594 596 return false; ··· 619 617 struct MessageUnit_D *reg; 620 618 621 619 acb->roundup_ccbsize = roundup(sizeof(struct MessageUnit_D), 32); 622 - dma_coherent = dma_zalloc_coherent(&pdev->dev, acb->roundup_ccbsize, 623 - &dma_coherent_handle, GFP_KERNEL); 620 + dma_coherent = dma_alloc_coherent(&pdev->dev, 621 + acb->roundup_ccbsize, 622 + &dma_coherent_handle, 623 + GFP_KERNEL); 624 624 if (!dma_coherent) { 625 625 pr_notice("arcmsr%d: DMA allocation failed\n", acb->host->host_no); 626 626 return false; ··· 663 659 uint32_t completeQ_size; 664 660 completeQ_size = sizeof(struct deliver_completeQ) * ARCMSR_MAX_HBE_DONEQUEUE + 128; 665 661 acb->roundup_ccbsize = roundup(completeQ_size, 32); 666 - dma_coherent = dma_zalloc_coherent(&pdev->dev, acb->roundup_ccbsize, 667 - &dma_coherent_handle, GFP_KERNEL); 662 + dma_coherent = dma_alloc_coherent(&pdev->dev, 663 + acb->roundup_ccbsize, 664 + &dma_coherent_handle, 665 + GFP_KERNEL); 668 666 if (!dma_coherent){ 669 667 pr_notice("arcmsr%d: DMA allocation failed\n", acb->host->host_no); 670 668 return false;
+2 -2
drivers/scsi/be2iscsi/be_main.c
··· 3321 3321 q->len = len; 3322 3322 q->entry_size = entry_size; 3323 3323 mem->size = len * entry_size; 3324 - mem->va = dma_zalloc_coherent(&phba->pcidev->dev, mem->size, &mem->dma, 3325 - GFP_KERNEL); 3324 + mem->va = dma_alloc_coherent(&phba->pcidev->dev, mem->size, &mem->dma, 3325 + GFP_KERNEL); 3326 3326 if (!mem->va) 3327 3327 return -ENOMEM; 3328 3328 return 0;
+5 -6
drivers/scsi/be2iscsi/be_mgmt.c
··· 293 293 struct be_dma_mem *cmd, 294 294 u8 subsystem, u8 opcode, u32 size) 295 295 { 296 - cmd->va = dma_zalloc_coherent(&phba->ctrl.pdev->dev, size, &cmd->dma, 297 - GFP_KERNEL); 296 + cmd->va = dma_alloc_coherent(&phba->ctrl.pdev->dev, size, &cmd->dma, 297 + GFP_KERNEL); 298 298 if (!cmd->va) { 299 299 beiscsi_log(phba, KERN_ERR, BEISCSI_LOG_CONFIG, 300 300 "BG_%d : Failed to allocate memory for if info\n"); ··· 1510 1510 return -EINVAL; 1511 1511 1512 1512 nonemb_cmd.size = sizeof(union be_invldt_cmds_params); 1513 - nonemb_cmd.va = dma_zalloc_coherent(&phba->ctrl.pdev->dev, 1514 - nonemb_cmd.size, 1515 - &nonemb_cmd.dma, 1516 - GFP_KERNEL); 1513 + nonemb_cmd.va = dma_alloc_coherent(&phba->ctrl.pdev->dev, 1514 + nonemb_cmd.size, &nonemb_cmd.dma, 1515 + GFP_KERNEL); 1517 1516 if (!nonemb_cmd.va) { 1518 1517 beiscsi_log(phba, KERN_ERR, BEISCSI_LOG_EH, 1519 1518 "BM_%d : invldt_cmds_params alloc failed\n");
+3 -3
drivers/scsi/bfa/bfad_bsg.c
··· 3264 3264 /* Allocate dma coherent memory */ 3265 3265 buf_info = buf_base; 3266 3266 buf_info->size = payload_len; 3267 - buf_info->virt = dma_zalloc_coherent(&bfad->pcidev->dev, 3268 - buf_info->size, &buf_info->phys, 3269 - GFP_KERNEL); 3267 + buf_info->virt = dma_alloc_coherent(&bfad->pcidev->dev, 3268 + buf_info->size, &buf_info->phys, 3269 + GFP_KERNEL); 3270 3270 if (!buf_info->virt) 3271 3271 goto out_free_mem; 3272 3272
+24 -25
drivers/scsi/bnx2fc/bnx2fc_hwi.c
··· 1857 1857 * entries. Hence the limit with one page is 8192 task context 1858 1858 * entries. 1859 1859 */ 1860 - hba->task_ctx_bd_tbl = dma_zalloc_coherent(&hba->pcidev->dev, 1861 - PAGE_SIZE, 1862 - &hba->task_ctx_bd_dma, 1863 - GFP_KERNEL); 1860 + hba->task_ctx_bd_tbl = dma_alloc_coherent(&hba->pcidev->dev, 1861 + PAGE_SIZE, 1862 + &hba->task_ctx_bd_dma, 1863 + GFP_KERNEL); 1864 1864 if (!hba->task_ctx_bd_tbl) { 1865 1865 printk(KERN_ERR PFX "unable to allocate task context BDT\n"); 1866 1866 rc = -1; ··· 1894 1894 task_ctx_bdt = (struct regpair *)hba->task_ctx_bd_tbl; 1895 1895 for (i = 0; i < task_ctx_arr_sz; i++) { 1896 1896 1897 - hba->task_ctx[i] = dma_zalloc_coherent(&hba->pcidev->dev, 1898 - PAGE_SIZE, 1899 - &hba->task_ctx_dma[i], 1900 - GFP_KERNEL); 1897 + hba->task_ctx[i] = dma_alloc_coherent(&hba->pcidev->dev, 1898 + PAGE_SIZE, 1899 + &hba->task_ctx_dma[i], 1900 + GFP_KERNEL); 1901 1901 if (!hba->task_ctx[i]) { 1902 1902 printk(KERN_ERR PFX "unable to alloc task context\n"); 1903 1903 rc = -1; ··· 2031 2031 } 2032 2032 2033 2033 for (i = 0; i < segment_count; ++i) { 2034 - hba->hash_tbl_segments[i] = dma_zalloc_coherent(&hba->pcidev->dev, 2035 - BNX2FC_HASH_TBL_CHUNK_SIZE, 2036 - &dma_segment_array[i], 2037 - GFP_KERNEL); 2034 + hba->hash_tbl_segments[i] = dma_alloc_coherent(&hba->pcidev->dev, 2035 + BNX2FC_HASH_TBL_CHUNK_SIZE, 2036 + &dma_segment_array[i], 2037 + GFP_KERNEL); 2038 2038 if (!hba->hash_tbl_segments[i]) { 2039 2039 printk(KERN_ERR PFX "hash segment alloc failed\n"); 2040 2040 goto cleanup_dma; 2041 2041 } 2042 2042 } 2043 2043 2044 - hba->hash_tbl_pbl = dma_zalloc_coherent(&hba->pcidev->dev, PAGE_SIZE, 2045 - &hba->hash_tbl_pbl_dma, 2046 - GFP_KERNEL); 2044 + hba->hash_tbl_pbl = dma_alloc_coherent(&hba->pcidev->dev, PAGE_SIZE, 2045 + &hba->hash_tbl_pbl_dma, 2046 + GFP_KERNEL); 2047 2047 if (!hba->hash_tbl_pbl) { 2048 2048 printk(KERN_ERR PFX "hash table pbl alloc failed\n"); 2049 2049 goto cleanup_dma; ··· 2104 2104 return -ENOMEM; 
2105 2105 2106 2106 mem_size = BNX2FC_NUM_MAX_SESS * sizeof(struct regpair); 2107 - hba->t2_hash_tbl_ptr = dma_zalloc_coherent(&hba->pcidev->dev, 2108 - mem_size, 2109 - &hba->t2_hash_tbl_ptr_dma, 2110 - GFP_KERNEL); 2107 + hba->t2_hash_tbl_ptr = dma_alloc_coherent(&hba->pcidev->dev, mem_size, 2108 + &hba->t2_hash_tbl_ptr_dma, 2109 + GFP_KERNEL); 2111 2110 if (!hba->t2_hash_tbl_ptr) { 2112 2111 printk(KERN_ERR PFX "unable to allocate t2 hash table ptr\n"); 2113 2112 bnx2fc_free_fw_resc(hba); ··· 2115 2116 2116 2117 mem_size = BNX2FC_NUM_MAX_SESS * 2117 2118 sizeof(struct fcoe_t2_hash_table_entry); 2118 - hba->t2_hash_tbl = dma_zalloc_coherent(&hba->pcidev->dev, mem_size, 2119 - &hba->t2_hash_tbl_dma, 2120 - GFP_KERNEL); 2119 + hba->t2_hash_tbl = dma_alloc_coherent(&hba->pcidev->dev, mem_size, 2120 + &hba->t2_hash_tbl_dma, 2121 + GFP_KERNEL); 2121 2122 if (!hba->t2_hash_tbl) { 2122 2123 printk(KERN_ERR PFX "unable to allocate t2 hash table\n"); 2123 2124 bnx2fc_free_fw_resc(hba); ··· 2139 2140 return -ENOMEM; 2140 2141 } 2141 2142 2142 - hba->stats_buffer = dma_zalloc_coherent(&hba->pcidev->dev, PAGE_SIZE, 2143 - &hba->stats_buf_dma, 2144 - GFP_KERNEL); 2143 + hba->stats_buffer = dma_alloc_coherent(&hba->pcidev->dev, PAGE_SIZE, 2144 + &hba->stats_buf_dma, 2145 + GFP_KERNEL); 2145 2146 if (!hba->stats_buffer) { 2146 2147 printk(KERN_ERR PFX "unable to alloc Stats Buffer\n"); 2147 2148 bnx2fc_free_fw_resc(hba);
+22 -22
drivers/scsi/bnx2fc/bnx2fc_tgt.c
··· 672 672 tgt->sq_mem_size = (tgt->sq_mem_size + (CNIC_PAGE_SIZE - 1)) & 673 673 CNIC_PAGE_MASK; 674 674 675 - tgt->sq = dma_zalloc_coherent(&hba->pcidev->dev, tgt->sq_mem_size, 676 - &tgt->sq_dma, GFP_KERNEL); 675 + tgt->sq = dma_alloc_coherent(&hba->pcidev->dev, tgt->sq_mem_size, 676 + &tgt->sq_dma, GFP_KERNEL); 677 677 if (!tgt->sq) { 678 678 printk(KERN_ERR PFX "unable to allocate SQ memory %d\n", 679 679 tgt->sq_mem_size); ··· 685 685 tgt->cq_mem_size = (tgt->cq_mem_size + (CNIC_PAGE_SIZE - 1)) & 686 686 CNIC_PAGE_MASK; 687 687 688 - tgt->cq = dma_zalloc_coherent(&hba->pcidev->dev, tgt->cq_mem_size, 689 - &tgt->cq_dma, GFP_KERNEL); 688 + tgt->cq = dma_alloc_coherent(&hba->pcidev->dev, tgt->cq_mem_size, 689 + &tgt->cq_dma, GFP_KERNEL); 690 690 if (!tgt->cq) { 691 691 printk(KERN_ERR PFX "unable to allocate CQ memory %d\n", 692 692 tgt->cq_mem_size); ··· 698 698 tgt->rq_mem_size = (tgt->rq_mem_size + (CNIC_PAGE_SIZE - 1)) & 699 699 CNIC_PAGE_MASK; 700 700 701 - tgt->rq = dma_zalloc_coherent(&hba->pcidev->dev, tgt->rq_mem_size, 702 - &tgt->rq_dma, GFP_KERNEL); 701 + tgt->rq = dma_alloc_coherent(&hba->pcidev->dev, tgt->rq_mem_size, 702 + &tgt->rq_dma, GFP_KERNEL); 703 703 if (!tgt->rq) { 704 704 printk(KERN_ERR PFX "unable to allocate RQ memory %d\n", 705 705 tgt->rq_mem_size); ··· 710 710 tgt->rq_pbl_size = (tgt->rq_pbl_size + (CNIC_PAGE_SIZE - 1)) & 711 711 CNIC_PAGE_MASK; 712 712 713 - tgt->rq_pbl = dma_zalloc_coherent(&hba->pcidev->dev, tgt->rq_pbl_size, 714 - &tgt->rq_pbl_dma, GFP_KERNEL); 713 + tgt->rq_pbl = dma_alloc_coherent(&hba->pcidev->dev, tgt->rq_pbl_size, 714 + &tgt->rq_pbl_dma, GFP_KERNEL); 715 715 if (!tgt->rq_pbl) { 716 716 printk(KERN_ERR PFX "unable to allocate RQ PBL %d\n", 717 717 tgt->rq_pbl_size); ··· 735 735 tgt->xferq_mem_size = (tgt->xferq_mem_size + (CNIC_PAGE_SIZE - 1)) & 736 736 CNIC_PAGE_MASK; 737 737 738 - tgt->xferq = dma_zalloc_coherent(&hba->pcidev->dev, 739 - tgt->xferq_mem_size, &tgt->xferq_dma, 740 - GFP_KERNEL); 738 + 
tgt->xferq = dma_alloc_coherent(&hba->pcidev->dev, 739 + tgt->xferq_mem_size, &tgt->xferq_dma, 740 + GFP_KERNEL); 741 741 if (!tgt->xferq) { 742 742 printk(KERN_ERR PFX "unable to allocate XFERQ %d\n", 743 743 tgt->xferq_mem_size); ··· 749 749 tgt->confq_mem_size = (tgt->confq_mem_size + (CNIC_PAGE_SIZE - 1)) & 750 750 CNIC_PAGE_MASK; 751 751 752 - tgt->confq = dma_zalloc_coherent(&hba->pcidev->dev, 753 - tgt->confq_mem_size, &tgt->confq_dma, 754 - GFP_KERNEL); 752 + tgt->confq = dma_alloc_coherent(&hba->pcidev->dev, 753 + tgt->confq_mem_size, &tgt->confq_dma, 754 + GFP_KERNEL); 755 755 if (!tgt->confq) { 756 756 printk(KERN_ERR PFX "unable to allocate CONFQ %d\n", 757 757 tgt->confq_mem_size); ··· 763 763 tgt->confq_pbl_size = 764 764 (tgt->confq_pbl_size + (CNIC_PAGE_SIZE - 1)) & CNIC_PAGE_MASK; 765 765 766 - tgt->confq_pbl = dma_zalloc_coherent(&hba->pcidev->dev, 767 - tgt->confq_pbl_size, 768 - &tgt->confq_pbl_dma, GFP_KERNEL); 766 + tgt->confq_pbl = dma_alloc_coherent(&hba->pcidev->dev, 767 + tgt->confq_pbl_size, 768 + &tgt->confq_pbl_dma, GFP_KERNEL); 769 769 if (!tgt->confq_pbl) { 770 770 printk(KERN_ERR PFX "unable to allocate CONFQ PBL %d\n", 771 771 tgt->confq_pbl_size); ··· 787 787 /* Allocate and map ConnDB */ 788 788 tgt->conn_db_mem_size = sizeof(struct fcoe_conn_db); 789 789 790 - tgt->conn_db = dma_zalloc_coherent(&hba->pcidev->dev, 791 - tgt->conn_db_mem_size, 792 - &tgt->conn_db_dma, GFP_KERNEL); 790 + tgt->conn_db = dma_alloc_coherent(&hba->pcidev->dev, 791 + tgt->conn_db_mem_size, 792 + &tgt->conn_db_dma, GFP_KERNEL); 793 793 if (!tgt->conn_db) { 794 794 printk(KERN_ERR PFX "unable to allocate conn_db %d\n", 795 795 tgt->conn_db_mem_size); ··· 802 802 tgt->lcq_mem_size = (tgt->lcq_mem_size + (CNIC_PAGE_SIZE - 1)) & 803 803 CNIC_PAGE_MASK; 804 804 805 - tgt->lcq = dma_zalloc_coherent(&hba->pcidev->dev, tgt->lcq_mem_size, 806 - &tgt->lcq_dma, GFP_KERNEL); 805 + tgt->lcq = dma_alloc_coherent(&hba->pcidev->dev, tgt->lcq_mem_size, 806 + 
&tgt->lcq_dma, GFP_KERNEL); 807 807 808 808 if (!tgt->lcq) { 809 809 printk(KERN_ERR PFX "unable to allocate lcq %d\n",
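Nearly every hunk in this merge is the same mechanical conversion: dma_zalloc_coherent() was removed in 5.0 because dma_alloc_coherent() itself now returns zeroed memory, which left the zalloc wrapper as pure overhead. A hosted-C sketch of that relationship (calloc() stands in for the kernel allocator; these names are illustrative, not the kernel implementation):

```c
#include <stdlib.h>
#include <string.h>

/* Stand-in for dma_alloc_coherent(): since 5.0 the real function hands
 * back zeroed memory, modeled here with calloc(). */
static void *alloc_coherent(size_t size)
{
	return calloc(1, size);
}

/* What dma_zalloc_coherent() used to add on top: an explicit memset.
 * Once the underlying allocator zeroes, the wrapper is redundant and
 * callers can be converted by a simple rename. */
static void *zalloc_coherent(size_t size)
{
	void *p = alloc_coherent(size);

	if (p)
		memset(p, 0, size);
	return p;
}
```

The conversions in this series accordingly touch only the call-site name and argument indentation; behavior is unchanged because the zeroing moved into the callee.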
+4 -4
drivers/scsi/bnx2i/bnx2i_hwi.c
··· 1070 1070 1071 1071 /* Allocate memory area for actual SQ element */ 1072 1072 ep->qp.sq_virt = 1073 - dma_zalloc_coherent(&hba->pcidev->dev, ep->qp.sq_mem_size, 1074 - &ep->qp.sq_phys, GFP_KERNEL); 1073 + dma_alloc_coherent(&hba->pcidev->dev, ep->qp.sq_mem_size, 1074 + &ep->qp.sq_phys, GFP_KERNEL); 1075 1075 if (!ep->qp.sq_virt) { 1076 1076 printk(KERN_ALERT "bnx2i: unable to alloc SQ BD memory %d\n", 1077 1077 ep->qp.sq_mem_size); ··· 1106 1106 1107 1107 /* Allocate memory area for actual CQ element */ 1108 1108 ep->qp.cq_virt = 1109 - dma_zalloc_coherent(&hba->pcidev->dev, ep->qp.cq_mem_size, 1110 - &ep->qp.cq_phys, GFP_KERNEL); 1109 + dma_alloc_coherent(&hba->pcidev->dev, ep->qp.cq_mem_size, 1110 + &ep->qp.cq_phys, GFP_KERNEL); 1111 1111 if (!ep->qp.cq_virt) { 1112 1112 printk(KERN_ALERT "bnx2i: unable to alloc CQ BD memory %d\n", 1113 1113 ep->qp.cq_mem_size);
+2 -2
drivers/scsi/csiostor/csio_wr.c
··· 233 233 234 234 q = wrm->q_arr[free_idx]; 235 235 236 - q->vstart = dma_zalloc_coherent(&hw->pdev->dev, qsz, &q->pstart, 237 - GFP_KERNEL); 236 + q->vstart = dma_alloc_coherent(&hw->pdev->dev, qsz, &q->pstart, 237 + GFP_KERNEL); 238 238 if (!q->vstart) { 239 239 csio_err(hw, 240 240 "Failed to allocate DMA memory for "
+2 -2
drivers/scsi/lpfc/lpfc_bsg.c
··· 2730 2730 INIT_LIST_HEAD(&dmabuf->list); 2731 2731 2732 2732 /* now, allocate dma buffer */ 2733 - dmabuf->virt = dma_zalloc_coherent(&pcidev->dev, BSG_MBOX_SIZE, 2734 - &(dmabuf->phys), GFP_KERNEL); 2733 + dmabuf->virt = dma_alloc_coherent(&pcidev->dev, BSG_MBOX_SIZE, 2734 + &(dmabuf->phys), GFP_KERNEL); 2735 2735 2736 2736 if (!dmabuf->virt) { 2737 2737 kfree(dmabuf);
+7 -7
drivers/scsi/lpfc/lpfc_init.c
··· 6973 6973 if (!dmabuf) 6974 6974 return NULL; 6975 6975 6976 - dmabuf->virt = dma_zalloc_coherent(&phba->pcidev->dev, 6977 - LPFC_HDR_TEMPLATE_SIZE, 6978 - &dmabuf->phys, GFP_KERNEL); 6976 + dmabuf->virt = dma_alloc_coherent(&phba->pcidev->dev, 6977 + LPFC_HDR_TEMPLATE_SIZE, 6978 + &dmabuf->phys, GFP_KERNEL); 6979 6979 if (!dmabuf->virt) { 6980 6980 rpi_hdr = NULL; 6981 6981 goto err_free_dmabuf; ··· 7397 7397 } 7398 7398 7399 7399 /* Allocate memory for SLI-2 structures */ 7400 - phba->slim2p.virt = dma_zalloc_coherent(&pdev->dev, SLI2_SLIM_SIZE, 7401 - &phba->slim2p.phys, GFP_KERNEL); 7400 + phba->slim2p.virt = dma_alloc_coherent(&pdev->dev, SLI2_SLIM_SIZE, 7401 + &phba->slim2p.phys, GFP_KERNEL); 7402 7402 if (!phba->slim2p.virt) 7403 7403 goto out_iounmap; 7404 7404 ··· 7816 7816 * plus an alignment restriction of 16 bytes. 7817 7817 */ 7818 7818 bmbx_size = sizeof(struct lpfc_bmbx_create) + (LPFC_ALIGN_16_BYTE - 1); 7819 - dmabuf->virt = dma_zalloc_coherent(&phba->pcidev->dev, bmbx_size, 7820 - &dmabuf->phys, GFP_KERNEL); 7819 + dmabuf->virt = dma_alloc_coherent(&phba->pcidev->dev, bmbx_size, 7820 + &dmabuf->phys, GFP_KERNEL); 7821 7821 if (!dmabuf->virt) { 7822 7822 kfree(dmabuf); 7823 7823 return -ENOMEM;
+3 -3
drivers/scsi/lpfc/lpfc_mbox.c
··· 1827 1827 * page, this is used as a priori size of SLI4_PAGE_SIZE for 1828 1828 * the later DMA memory free. 1829 1829 */ 1830 - viraddr = dma_zalloc_coherent(&phba->pcidev->dev, 1831 - SLI4_PAGE_SIZE, &phyaddr, 1832 - GFP_KERNEL); 1830 + viraddr = dma_alloc_coherent(&phba->pcidev->dev, 1831 + SLI4_PAGE_SIZE, &phyaddr, 1832 + GFP_KERNEL); 1833 1833 /* In case of malloc fails, proceed with whatever we have */ 1834 1834 if (!viraddr) 1835 1835 break;
+7 -8
drivers/scsi/lpfc/lpfc_sli.c
··· 5362 5362 * mailbox command. 5363 5363 */ 5364 5364 dma_size = *vpd_size; 5365 - dmabuf->virt = dma_zalloc_coherent(&phba->pcidev->dev, dma_size, 5366 - &dmabuf->phys, GFP_KERNEL); 5365 + dmabuf->virt = dma_alloc_coherent(&phba->pcidev->dev, dma_size, 5366 + &dmabuf->phys, GFP_KERNEL); 5367 5367 if (!dmabuf->virt) { 5368 5368 kfree(dmabuf); 5369 5369 return -ENOMEM; ··· 6300 6300 goto free_mem; 6301 6301 } 6302 6302 6303 - dmabuf->virt = dma_zalloc_coherent(&phba->pcidev->dev, 6303 + dmabuf->virt = dma_alloc_coherent(&phba->pcidev->dev, 6304 6304 LPFC_RAS_MAX_ENTRY_SIZE, 6305 - &dmabuf->phys, 6306 - GFP_KERNEL); 6305 + &dmabuf->phys, GFP_KERNEL); 6307 6306 if (!dmabuf->virt) { 6308 6307 kfree(dmabuf); 6309 6308 rc = -ENOMEM; ··· 14612 14613 dmabuf = kzalloc(sizeof(struct lpfc_dmabuf), GFP_KERNEL); 14613 14614 if (!dmabuf) 14614 14615 goto out_fail; 14615 - dmabuf->virt = dma_zalloc_coherent(&phba->pcidev->dev, 14616 - hw_page_size, &dmabuf->phys, 14617 - GFP_KERNEL); 14616 + dmabuf->virt = dma_alloc_coherent(&phba->pcidev->dev, 14617 + hw_page_size, &dmabuf->phys, 14618 + GFP_KERNEL); 14618 14619 if (!dmabuf->virt) { 14619 14620 kfree(dmabuf); 14620 14621 goto out_fail;
+8 -7
drivers/scsi/megaraid/megaraid_mbox.c
··· 967 967 * Allocate the common 16-byte aligned memory for the handshake 968 968 * mailbox. 969 969 */ 970 - raid_dev->una_mbox64 = dma_zalloc_coherent(&adapter->pdev->dev, 971 - sizeof(mbox64_t), &raid_dev->una_mbox64_dma, 972 - GFP_KERNEL); 970 + raid_dev->una_mbox64 = dma_alloc_coherent(&adapter->pdev->dev, 971 + sizeof(mbox64_t), 972 + &raid_dev->una_mbox64_dma, 973 + GFP_KERNEL); 973 974 974 975 if (!raid_dev->una_mbox64) { 975 976 con_log(CL_ANN, (KERN_WARNING ··· 996 995 align; 997 996 998 997 // Allocate memory for commands issued internally 999 - adapter->ibuf = dma_zalloc_coherent(&pdev->dev, MBOX_IBUF_SIZE, 1000 - &adapter->ibuf_dma_h, GFP_KERNEL); 998 + adapter->ibuf = dma_alloc_coherent(&pdev->dev, MBOX_IBUF_SIZE, 999 + &adapter->ibuf_dma_h, GFP_KERNEL); 1001 1000 if (!adapter->ibuf) { 1002 1001 1003 1002 con_log(CL_ANN, (KERN_WARNING ··· 2898 2897 * Issue an ENQUIRY3 command to find out certain adapter parameters, 2899 2898 * e.g., max channels, max commands etc. 2900 2899 */ 2901 - pinfo = dma_zalloc_coherent(&adapter->pdev->dev, sizeof(mraid_pinfo_t), 2902 - &pinfo_dma_h, GFP_KERNEL); 2900 + pinfo = dma_alloc_coherent(&adapter->pdev->dev, sizeof(mraid_pinfo_t), 2901 + &pinfo_dma_h, GFP_KERNEL); 2903 2902 if (pinfo == NULL) { 2904 2903 con_log(CL_ANN, (KERN_WARNING 2905 2904 "megaraid: out of memory, %s %d\n", __func__,
+13 -13
drivers/scsi/megaraid/megaraid_sas_base.c
··· 2273 2273 sizeof(struct MR_LD_VF_AFFILIATION_111)); 2274 2274 else { 2275 2275 new_affiliation_111 = 2276 - dma_zalloc_coherent(&instance->pdev->dev, 2277 - sizeof(struct MR_LD_VF_AFFILIATION_111), 2278 - &new_affiliation_111_h, GFP_KERNEL); 2276 + dma_alloc_coherent(&instance->pdev->dev, 2277 + sizeof(struct MR_LD_VF_AFFILIATION_111), 2278 + &new_affiliation_111_h, GFP_KERNEL); 2279 2279 if (!new_affiliation_111) { 2280 2280 dev_printk(KERN_DEBUG, &instance->pdev->dev, "SR-IOV: Couldn't allocate " 2281 2281 "memory for new affiliation for scsi%d\n", ··· 2380 2380 sizeof(struct MR_LD_VF_AFFILIATION)); 2381 2381 else { 2382 2382 new_affiliation = 2383 - dma_zalloc_coherent(&instance->pdev->dev, 2384 - (MAX_LOGICAL_DRIVES + 1) * 2385 - sizeof(struct MR_LD_VF_AFFILIATION), 2386 - &new_affiliation_h, GFP_KERNEL); 2383 + dma_alloc_coherent(&instance->pdev->dev, 2384 + (MAX_LOGICAL_DRIVES + 1) * sizeof(struct MR_LD_VF_AFFILIATION), 2385 + &new_affiliation_h, GFP_KERNEL); 2387 2386 if (!new_affiliation) { 2388 2387 dev_printk(KERN_DEBUG, &instance->pdev->dev, "SR-IOV: Couldn't allocate " 2389 2388 "memory for new affiliation for scsi%d\n", ··· 2545 2546 2546 2547 if (initial) { 2547 2548 instance->hb_host_mem = 2548 - dma_zalloc_coherent(&instance->pdev->dev, 2549 - sizeof(struct MR_CTRL_HB_HOST_MEM), 2550 - &instance->hb_host_mem_h, GFP_KERNEL); 2549 + dma_alloc_coherent(&instance->pdev->dev, 2550 + sizeof(struct MR_CTRL_HB_HOST_MEM), 2551 + &instance->hb_host_mem_h, 2552 + GFP_KERNEL); 2551 2553 if (!instance->hb_host_mem) { 2552 2554 dev_printk(KERN_DEBUG, &instance->pdev->dev, "SR-IOV: Couldn't allocate" 2553 2555 " memory for heartbeat host memory for scsi%d\n", ··· 5816 5816 } 5817 5817 5818 5818 dcmd = &cmd->frame->dcmd; 5819 - el_info = dma_zalloc_coherent(&instance->pdev->dev, 5820 - sizeof(struct megasas_evt_log_info), &el_info_h, 5821 - GFP_KERNEL); 5819 + el_info = dma_alloc_coherent(&instance->pdev->dev, 5820 + sizeof(struct megasas_evt_log_info), 5821 + 
&el_info_h, GFP_KERNEL); 5822 5822 if (!el_info) { 5823 5823 megasas_return_cmd(instance, cmd); 5824 5824 return -ENOMEM;
+3 -2
drivers/scsi/megaraid/megaraid_sas_fusion.c
··· 689 689 array_size = sizeof(struct MPI2_IOC_INIT_RDPQ_ARRAY_ENTRY) * 690 690 MAX_MSIX_QUEUES_FUSION; 691 691 692 - fusion->rdpq_virt = dma_zalloc_coherent(&instance->pdev->dev, 693 - array_size, &fusion->rdpq_phys, GFP_KERNEL); 692 + fusion->rdpq_virt = dma_alloc_coherent(&instance->pdev->dev, 693 + array_size, &fusion->rdpq_phys, 694 + GFP_KERNEL); 694 695 if (!fusion->rdpq_virt) { 695 696 dev_err(&instance->pdev->dev, 696 697 "Failed from %s %d\n", __func__, __LINE__);
+3 -2
drivers/scsi/mesh.c
··· 1915 1915 /* We use the PCI APIs for now until the generic one gets fixed 1916 1916 * enough or until we get some macio-specific versions 1917 1917 */ 1918 - dma_cmd_space = dma_zalloc_coherent(&macio_get_pci_dev(mdev)->dev, 1919 - ms->dma_cmd_size, &dma_cmd_bus, GFP_KERNEL); 1918 + dma_cmd_space = dma_alloc_coherent(&macio_get_pci_dev(mdev)->dev, 1919 + ms->dma_cmd_size, &dma_cmd_bus, 1920 + GFP_KERNEL); 1920 1921 if (dma_cmd_space == NULL) { 1921 1922 printk(KERN_ERR "mesh: can't allocate DMA table\n"); 1922 1923 goto out_unmap;
+5 -4
drivers/scsi/mvumi.c
··· 143 143 144 144 case RESOURCE_UNCACHED_MEMORY: 145 145 size = round_up(size, 8); 146 - res->virt_addr = dma_zalloc_coherent(&mhba->pdev->dev, size, 147 - &res->bus_addr, GFP_KERNEL); 146 + res->virt_addr = dma_alloc_coherent(&mhba->pdev->dev, size, 147 + &res->bus_addr, 148 + GFP_KERNEL); 148 149 if (!res->virt_addr) { 149 150 dev_err(&mhba->pdev->dev, 150 151 "unable to allocate consistent mem," ··· 247 246 if (size == 0) 248 247 return 0; 249 248 250 - virt_addr = dma_zalloc_coherent(&mhba->pdev->dev, size, &phy_addr, 251 - GFP_KERNEL); 249 + virt_addr = dma_alloc_coherent(&mhba->pdev->dev, size, &phy_addr, 250 + GFP_KERNEL); 252 251 if (!virt_addr) 253 252 return -1; 254 253
+2 -2
drivers/scsi/pm8001/pm8001_sas.c
··· 116 116 u64 align_offset = 0; 117 117 if (align) 118 118 align_offset = (dma_addr_t)align - 1; 119 - mem_virt_alloc = dma_zalloc_coherent(&pdev->dev, mem_size + align, 120 - &mem_dma_handle, GFP_KERNEL); 119 + mem_virt_alloc = dma_alloc_coherent(&pdev->dev, mem_size + align, 120 + &mem_dma_handle, GFP_KERNEL); 121 121 if (!mem_virt_alloc) { 122 122 pm8001_printk("memory allocation error\n"); 123 123 return -1;
+17 -12
drivers/scsi/qedf/qedf_main.c
··· 1050 1050 sizeof(void *); 1051 1051 fcport->sq_pbl_size = fcport->sq_pbl_size + QEDF_PAGE_SIZE; 1052 1052 1053 - fcport->sq = dma_zalloc_coherent(&qedf->pdev->dev, 1054 - fcport->sq_mem_size, &fcport->sq_dma, GFP_KERNEL); 1053 + fcport->sq = dma_alloc_coherent(&qedf->pdev->dev, fcport->sq_mem_size, 1054 + &fcport->sq_dma, GFP_KERNEL); 1055 1055 if (!fcport->sq) { 1056 1056 QEDF_WARN(&(qedf->dbg_ctx), "Could not allocate send queue.\n"); 1057 1057 rval = 1; 1058 1058 goto out; 1059 1059 } 1060 1060 1061 - fcport->sq_pbl = dma_zalloc_coherent(&qedf->pdev->dev, 1062 - fcport->sq_pbl_size, &fcport->sq_pbl_dma, GFP_KERNEL); 1061 + fcport->sq_pbl = dma_alloc_coherent(&qedf->pdev->dev, 1062 + fcport->sq_pbl_size, 1063 + &fcport->sq_pbl_dma, GFP_KERNEL); 1063 1064 if (!fcport->sq_pbl) { 1064 1065 QEDF_WARN(&(qedf->dbg_ctx), "Could not allocate send queue PBL.\n"); 1065 1066 rval = 1; ··· 2681 2680 } 2682 2681 2683 2682 /* Allocate list of PBL pages */ 2684 - qedf->bdq_pbl_list = dma_zalloc_coherent(&qedf->pdev->dev, 2685 - QEDF_PAGE_SIZE, &qedf->bdq_pbl_list_dma, GFP_KERNEL); 2683 + qedf->bdq_pbl_list = dma_alloc_coherent(&qedf->pdev->dev, 2684 + QEDF_PAGE_SIZE, 2685 + &qedf->bdq_pbl_list_dma, 2686 + GFP_KERNEL); 2686 2687 if (!qedf->bdq_pbl_list) { 2687 2688 QEDF_ERR(&(qedf->dbg_ctx), "Could not allocate list of PBL pages.\n"); 2688 2689 return -ENOMEM; ··· 2773 2770 ALIGN(qedf->global_queues[i]->cq_pbl_size, QEDF_PAGE_SIZE); 2774 2771 2775 2772 qedf->global_queues[i]->cq = 2776 - dma_zalloc_coherent(&qedf->pdev->dev, 2777 - qedf->global_queues[i]->cq_mem_size, 2778 - &qedf->global_queues[i]->cq_dma, GFP_KERNEL); 2773 + dma_alloc_coherent(&qedf->pdev->dev, 2774 + qedf->global_queues[i]->cq_mem_size, 2775 + &qedf->global_queues[i]->cq_dma, 2776 + GFP_KERNEL); 2779 2777 2780 2778 if (!qedf->global_queues[i]->cq) { 2781 2779 QEDF_WARN(&(qedf->dbg_ctx), "Could not allocate cq.\n"); ··· 2785 2781 } 2786 2782 2787 2783 qedf->global_queues[i]->cq_pbl = 2788 - 
dma_zalloc_coherent(&qedf->pdev->dev, 2789 - qedf->global_queues[i]->cq_pbl_size, 2790 - &qedf->global_queues[i]->cq_pbl_dma, GFP_KERNEL); 2784 + dma_alloc_coherent(&qedf->pdev->dev, 2785 + qedf->global_queues[i]->cq_pbl_size, 2786 + &qedf->global_queues[i]->cq_pbl_dma, 2787 + GFP_KERNEL); 2791 2788 2792 2789 if (!qedf->global_queues[i]->cq_pbl) { 2793 2790 QEDF_WARN(&(qedf->dbg_ctx), "Could not allocate cq PBL.\n");
+19 -20
drivers/scsi/qedi/qedi_main.c
··· 1394 1394 { 1395 1395 struct qedi_nvm_iscsi_image nvm_image; 1396 1396 1397 - qedi->iscsi_image = dma_zalloc_coherent(&qedi->pdev->dev, 1398 - sizeof(nvm_image), 1399 - &qedi->nvm_buf_dma, 1400 - GFP_KERNEL); 1397 + qedi->iscsi_image = dma_alloc_coherent(&qedi->pdev->dev, 1398 + sizeof(nvm_image), 1399 + &qedi->nvm_buf_dma, GFP_KERNEL); 1401 1400 if (!qedi->iscsi_image) { 1402 1401 QEDI_ERR(&qedi->dbg_ctx, "Could not allocate NVM BUF.\n"); 1403 1402 return -ENOMEM; ··· 1509 1510 } 1510 1511 1511 1512 /* Allocate list of PBL pages */ 1512 - qedi->bdq_pbl_list = dma_zalloc_coherent(&qedi->pdev->dev, 1513 - QEDI_PAGE_SIZE, 1514 - &qedi->bdq_pbl_list_dma, 1515 - GFP_KERNEL); 1513 + qedi->bdq_pbl_list = dma_alloc_coherent(&qedi->pdev->dev, 1514 + QEDI_PAGE_SIZE, 1515 + &qedi->bdq_pbl_list_dma, 1516 + GFP_KERNEL); 1516 1517 if (!qedi->bdq_pbl_list) { 1517 1518 QEDI_ERR(&qedi->dbg_ctx, 1518 1519 "Could not allocate list of PBL pages.\n"); ··· 1608 1609 (qedi->global_queues[i]->cq_pbl_size + 1609 1610 (QEDI_PAGE_SIZE - 1)); 1610 1611 1611 - qedi->global_queues[i]->cq = dma_zalloc_coherent(&qedi->pdev->dev, 1612 - qedi->global_queues[i]->cq_mem_size, 1613 - &qedi->global_queues[i]->cq_dma, 1614 - GFP_KERNEL); 1612 + qedi->global_queues[i]->cq = dma_alloc_coherent(&qedi->pdev->dev, 1613 + qedi->global_queues[i]->cq_mem_size, 1614 + &qedi->global_queues[i]->cq_dma, 1615 + GFP_KERNEL); 1615 1616 1616 1617 if (!qedi->global_queues[i]->cq) { 1617 1618 QEDI_WARN(&qedi->dbg_ctx, ··· 1619 1620 status = -ENOMEM; 1620 1621 goto mem_alloc_failure; 1621 1622 } 1622 - qedi->global_queues[i]->cq_pbl = dma_zalloc_coherent(&qedi->pdev->dev, 1623 - qedi->global_queues[i]->cq_pbl_size, 1624 - &qedi->global_queues[i]->cq_pbl_dma, 1625 - GFP_KERNEL); 1623 + qedi->global_queues[i]->cq_pbl = dma_alloc_coherent(&qedi->pdev->dev, 1624 + qedi->global_queues[i]->cq_pbl_size, 1625 + &qedi->global_queues[i]->cq_pbl_dma, 1626 + GFP_KERNEL); 1626 1627 1627 1628 if (!qedi->global_queues[i]->cq_pbl) { 
1628 1629 QEDI_WARN(&qedi->dbg_ctx, ··· 1690 1691 ep->sq_pbl_size = (ep->sq_mem_size / QEDI_PAGE_SIZE) * sizeof(void *); 1691 1692 ep->sq_pbl_size = ep->sq_pbl_size + QEDI_PAGE_SIZE; 1692 1693 1693 - ep->sq = dma_zalloc_coherent(&qedi->pdev->dev, ep->sq_mem_size, 1694 - &ep->sq_dma, GFP_KERNEL); 1694 + ep->sq = dma_alloc_coherent(&qedi->pdev->dev, ep->sq_mem_size, 1695 + &ep->sq_dma, GFP_KERNEL); 1695 1696 if (!ep->sq) { 1696 1697 QEDI_WARN(&qedi->dbg_ctx, 1697 1698 "Could not allocate send queue.\n"); 1698 1699 rval = -ENOMEM; 1699 1700 goto out; 1700 1701 } 1701 - ep->sq_pbl = dma_zalloc_coherent(&qedi->pdev->dev, ep->sq_pbl_size, 1702 - &ep->sq_pbl_dma, GFP_KERNEL); 1702 + ep->sq_pbl = dma_alloc_coherent(&qedi->pdev->dev, ep->sq_pbl_size, 1703 + &ep->sq_pbl_dma, GFP_KERNEL); 1703 1704 if (!ep->sq_pbl) { 1704 1705 QEDI_WARN(&qedi->dbg_ctx, 1705 1706 "Could not allocate send queue PBL.\n");
+2 -2
drivers/scsi/qla2xxx/qla_attr.c
··· 2415 2415 if (qla2x00_chip_is_down(vha)) 2416 2416 goto done; 2417 2417 2418 - stats = dma_zalloc_coherent(&ha->pdev->dev, sizeof(*stats), 2419 - &stats_dma, GFP_KERNEL); 2418 + stats = dma_alloc_coherent(&ha->pdev->dev, sizeof(*stats), &stats_dma, 2419 + GFP_KERNEL); 2420 2420 if (!stats) { 2421 2421 ql_log(ql_log_warn, vha, 0x707d, 2422 2422 "Failed to allocate memory for stats.\n");
+2 -2
drivers/scsi/qla2xxx/qla_bsg.c
··· 2312 2312 if (!IS_FWI2_CAPABLE(ha)) 2313 2313 return -EPERM; 2314 2314 2315 - stats = dma_zalloc_coherent(&ha->pdev->dev, sizeof(*stats), 2316 - &stats_dma, GFP_KERNEL); 2315 + stats = dma_alloc_coherent(&ha->pdev->dev, sizeof(*stats), &stats_dma, 2316 + GFP_KERNEL); 2317 2317 if (!stats) { 2318 2318 ql_log(ql_log_warn, vha, 0x70e2, 2319 2319 "Failed to allocate memory for stats.\n");
+8 -6
drivers/scsi/qla2xxx/qla_gs.c
··· 4147 4147 return rval; 4148 4148 } 4149 4149 4150 - sp->u.iocb_cmd.u.ctarg.req = dma_zalloc_coherent( 4151 - &vha->hw->pdev->dev, sizeof(struct ct_sns_pkt), 4152 - &sp->u.iocb_cmd.u.ctarg.req_dma, GFP_KERNEL); 4150 + sp->u.iocb_cmd.u.ctarg.req = dma_alloc_coherent(&vha->hw->pdev->dev, 4151 + sizeof(struct ct_sns_pkt), 4152 + &sp->u.iocb_cmd.u.ctarg.req_dma, 4153 + GFP_KERNEL); 4153 4154 sp->u.iocb_cmd.u.ctarg.req_allocated_size = sizeof(struct ct_sns_pkt); 4154 4155 if (!sp->u.iocb_cmd.u.ctarg.req) { 4155 4156 ql_log(ql_log_warn, vha, 0xffff, ··· 4166 4165 ((vha->hw->max_fibre_devices - 1) * 4167 4166 sizeof(struct ct_sns_gpn_ft_data)); 4168 4167 4169 - sp->u.iocb_cmd.u.ctarg.rsp = dma_zalloc_coherent( 4170 - &vha->hw->pdev->dev, rspsz, 4171 - &sp->u.iocb_cmd.u.ctarg.rsp_dma, GFP_KERNEL); 4168 + sp->u.iocb_cmd.u.ctarg.rsp = dma_alloc_coherent(&vha->hw->pdev->dev, 4169 + rspsz, 4170 + &sp->u.iocb_cmd.u.ctarg.rsp_dma, 4171 + GFP_KERNEL); 4172 4172 sp->u.iocb_cmd.u.ctarg.rsp_allocated_size = sizeof(struct ct_sns_pkt); 4173 4173 if (!sp->u.iocb_cmd.u.ctarg.rsp) { 4174 4174 ql_log(ql_log_warn, vha, 0xffff,
+4 -4
drivers/scsi/qla2xxx/qla_init.c
··· 3099 3099 FCE_SIZE, ha->fce, ha->fce_dma); 3100 3100 3101 3101 /* Allocate memory for Fibre Channel Event Buffer. */ 3102 - tc = dma_zalloc_coherent(&ha->pdev->dev, FCE_SIZE, &tc_dma, 3103 - GFP_KERNEL); 3102 + tc = dma_alloc_coherent(&ha->pdev->dev, FCE_SIZE, &tc_dma, 3103 + GFP_KERNEL); 3104 3104 if (!tc) { 3105 3105 ql_log(ql_log_warn, vha, 0x00be, 3106 3106 "Unable to allocate (%d KB) for FCE.\n", ··· 3131 3131 EFT_SIZE, ha->eft, ha->eft_dma); 3132 3132 3133 3133 /* Allocate memory for Extended Trace Buffer. */ 3134 - tc = dma_zalloc_coherent(&ha->pdev->dev, EFT_SIZE, &tc_dma, 3135 - GFP_KERNEL); 3134 + tc = dma_alloc_coherent(&ha->pdev->dev, EFT_SIZE, &tc_dma, 3135 + GFP_KERNEL); 3136 3136 if (!tc) { 3137 3137 ql_log(ql_log_warn, vha, 0x00c1, 3138 3138 "Unable to allocate (%d KB) for EFT.\n",
+2 -2
drivers/scsi/qla4xxx/ql4_init.c
··· 153 153 dma_addr_t sys_info_dma; 154 154 int status = QLA_ERROR; 155 155 156 - sys_info = dma_zalloc_coherent(&ha->pdev->dev, sizeof(*sys_info), 157 - &sys_info_dma, GFP_KERNEL); 156 + sys_info = dma_alloc_coherent(&ha->pdev->dev, sizeof(*sys_info), 157 + &sys_info_dma, GFP_KERNEL); 158 158 if (sys_info == NULL) { 159 159 DEBUG2(printk("scsi%ld: %s: Unable to allocate dma buffer.\n", 160 160 ha->host_no, __func__));
+9 -9
drivers/scsi/qla4xxx/ql4_mbx.c
··· 625 625 uint32_t mbox_sts[MBOX_REG_COUNT]; 626 626 int status = QLA_ERROR; 627 627 628 - init_fw_cb = dma_zalloc_coherent(&ha->pdev->dev, 629 - sizeof(struct addr_ctrl_blk), 630 - &init_fw_cb_dma, GFP_KERNEL); 628 + init_fw_cb = dma_alloc_coherent(&ha->pdev->dev, 629 + sizeof(struct addr_ctrl_blk), 630 + &init_fw_cb_dma, GFP_KERNEL); 631 631 if (init_fw_cb == NULL) { 632 632 DEBUG2(printk("scsi%ld: %s: Unable to alloc init_cb\n", 633 633 ha->host_no, __func__)); ··· 709 709 uint32_t mbox_cmd[MBOX_REG_COUNT]; 710 710 uint32_t mbox_sts[MBOX_REG_COUNT]; 711 711 712 - init_fw_cb = dma_zalloc_coherent(&ha->pdev->dev, 713 - sizeof(struct addr_ctrl_blk), 714 - &init_fw_cb_dma, GFP_KERNEL); 712 + init_fw_cb = dma_alloc_coherent(&ha->pdev->dev, 713 + sizeof(struct addr_ctrl_blk), 714 + &init_fw_cb_dma, GFP_KERNEL); 715 715 if (init_fw_cb == NULL) { 716 716 printk("scsi%ld: %s: Unable to alloc init_cb\n", ha->host_no, 717 717 __func__); ··· 1340 1340 uint32_t mbox_sts[MBOX_REG_COUNT]; 1341 1341 int status = QLA_ERROR; 1342 1342 1343 - about_fw = dma_zalloc_coherent(&ha->pdev->dev, 1344 - sizeof(struct about_fw_info), 1345 - &about_fw_dma, GFP_KERNEL); 1343 + about_fw = dma_alloc_coherent(&ha->pdev->dev, 1344 + sizeof(struct about_fw_info), 1345 + &about_fw_dma, GFP_KERNEL); 1346 1346 if (!about_fw) { 1347 1347 DEBUG2(ql4_printk(KERN_ERR, ha, "%s: Unable to alloc memory " 1348 1348 "for about_fw\n", __func__));
+2 -2
drivers/scsi/qla4xxx/ql4_nx.c
··· 4052 4052 dma_addr_t sys_info_dma; 4053 4053 int status = QLA_ERROR; 4054 4054 4055 - sys_info = dma_zalloc_coherent(&ha->pdev->dev, sizeof(*sys_info), 4056 - &sys_info_dma, GFP_KERNEL); 4055 + sys_info = dma_alloc_coherent(&ha->pdev->dev, sizeof(*sys_info), 4056 + &sys_info_dma, GFP_KERNEL); 4057 4057 if (sys_info == NULL) { 4058 4058 DEBUG2(printk("scsi%ld: %s: Unable to allocate dma buffer.\n", 4059 4059 ha->host_no, __func__));
+5 -5
drivers/scsi/qla4xxx/ql4_os.c
··· 2704 2704 uint32_t rem = len; 2705 2705 struct nlattr *attr; 2706 2706 2707 - init_fw_cb = dma_zalloc_coherent(&ha->pdev->dev, 2708 - sizeof(struct addr_ctrl_blk), 2709 - &init_fw_cb_dma, GFP_KERNEL); 2707 + init_fw_cb = dma_alloc_coherent(&ha->pdev->dev, 2708 + sizeof(struct addr_ctrl_blk), 2709 + &init_fw_cb_dma, GFP_KERNEL); 2710 2710 if (!init_fw_cb) { 2711 2711 ql4_printk(KERN_ERR, ha, "%s: Unable to alloc init_cb\n", 2712 2712 __func__); ··· 4206 4206 sizeof(struct shadow_regs) + 4207 4207 MEM_ALIGN_VALUE + 4208 4208 (PAGE_SIZE - 1)) & ~(PAGE_SIZE - 1); 4209 - ha->queues = dma_zalloc_coherent(&ha->pdev->dev, ha->queues_len, 4210 - &ha->queues_dma, GFP_KERNEL); 4209 + ha->queues = dma_alloc_coherent(&ha->pdev->dev, ha->queues_len, 4210 + &ha->queues_dma, GFP_KERNEL); 4211 4211 if (ha->queues == NULL) { 4212 4212 ql4_printk(KERN_WARNING, ha, 4213 4213 "Memory Allocation failed - queues.\n");
+16 -16
drivers/scsi/smartpqi/smartpqi_init.c
··· 3576 3576 alloc_length += PQI_EXTRA_SGL_MEMORY; 3577 3577 3578 3578 ctrl_info->queue_memory_base = 3579 - dma_zalloc_coherent(&ctrl_info->pci_dev->dev, 3580 - alloc_length, 3581 - &ctrl_info->queue_memory_base_dma_handle, GFP_KERNEL); 3579 + dma_alloc_coherent(&ctrl_info->pci_dev->dev, alloc_length, 3580 + &ctrl_info->queue_memory_base_dma_handle, 3581 + GFP_KERNEL); 3582 3582 3583 3583 if (!ctrl_info->queue_memory_base) 3584 3584 return -ENOMEM; ··· 3715 3715 PQI_QUEUE_ELEMENT_ARRAY_ALIGNMENT; 3716 3716 3717 3717 ctrl_info->admin_queue_memory_base = 3718 - dma_zalloc_coherent(&ctrl_info->pci_dev->dev, 3719 - alloc_length, 3720 - &ctrl_info->admin_queue_memory_base_dma_handle, 3721 - GFP_KERNEL); 3718 + dma_alloc_coherent(&ctrl_info->pci_dev->dev, alloc_length, 3719 + &ctrl_info->admin_queue_memory_base_dma_handle, 3720 + GFP_KERNEL); 3722 3721 3723 3722 if (!ctrl_info->admin_queue_memory_base) 3724 3723 return -ENOMEM; ··· 4601 4602 4602 4603 static inline int pqi_alloc_error_buffer(struct pqi_ctrl_info *ctrl_info) 4603 4604 { 4604 - ctrl_info->error_buffer = dma_zalloc_coherent(&ctrl_info->pci_dev->dev, 4605 - ctrl_info->error_buffer_length, 4606 - &ctrl_info->error_buffer_dma_handle, GFP_KERNEL); 4605 + ctrl_info->error_buffer = dma_alloc_coherent(&ctrl_info->pci_dev->dev, 4606 + ctrl_info->error_buffer_length, 4607 + &ctrl_info->error_buffer_dma_handle, 4608 + GFP_KERNEL); 4607 4609 4608 4610 if (!ctrl_info->error_buffer) 4609 4611 return -ENOMEM; ··· 7487 7487 dma_addr_t dma_handle; 7488 7488 7489 7489 ctrl_info->pqi_ofa_chunk_virt_addr[i] = 7490 - dma_zalloc_coherent(dev, chunk_size, &dma_handle, 7491 - GFP_KERNEL); 7490 + dma_alloc_coherent(dev, chunk_size, &dma_handle, 7491 + GFP_KERNEL); 7492 7492 7493 7493 if (!ctrl_info->pqi_ofa_chunk_virt_addr[i]) 7494 7494 break; ··· 7545 7545 struct device *dev; 7546 7546 7547 7547 dev = &ctrl_info->pci_dev->dev; 7548 - pqi_ofa_memory = dma_zalloc_coherent(dev, 7549 - PQI_OFA_MEMORY_DESCRIPTOR_LENGTH, 7550 - 
&ctrl_info->pqi_ofa_mem_dma_handle, 7551 - GFP_KERNEL); 7548 + pqi_ofa_memory = dma_alloc_coherent(dev, 7549 + PQI_OFA_MEMORY_DESCRIPTOR_LENGTH, 7550 + &ctrl_info->pqi_ofa_mem_dma_handle, 7551 + GFP_KERNEL); 7552 7552 7553 7553 if (!pqi_ofa_memory) 7554 7554 return;
+1 -1
drivers/soc/fsl/qbman/dpaa_sys.c
··· 62 62 return -ENODEV; 63 63 } 64 64 65 - if (!dma_zalloc_coherent(dev, *size, addr, 0)) { 65 + if (!dma_alloc_coherent(dev, *size, addr, 0)) { 66 66 dev_err(dev, "DMA Alloc memory failed\n"); 67 67 return -ENODEV; 68 68 }
+1 -1
drivers/soc/renesas/Kconfig
··· 44 44 bool 45 45 select ARM_AMBA 46 46 47 - if ARM 47 + if ARM && ARCH_RENESAS 48 48 49 49 #comment "Renesas ARM SoCs System Type" 50 50
+4 -19
drivers/soc/renesas/r8a774c0-sysc.c
··· 28 28 { "3dg-b", 0x100, 1, R8A774C0_PD_3DG_B, R8A774C0_PD_3DG_A }, 29 29 }; 30 30 31 - static void __init rcar_sysc_fix_parent(struct rcar_sysc_area *areas, 32 - unsigned int num_areas, u8 id, 33 - int new_parent) 34 - { 35 - unsigned int i; 36 - 37 - for (i = 0; i < num_areas; i++) 38 - if (areas[i].isr_bit == id) { 39 - areas[i].parent = new_parent; 40 - return; 41 - } 42 - } 43 - 44 31 /* Fixups for RZ/G2E ES1.0 revision */ 45 32 static const struct soc_device_attribute r8a774c0[] __initconst = { 46 33 { .soc_id = "r8a774c0", .revision = "ES1.0" }, ··· 37 50 static int __init r8a774c0_sysc_init(void) 38 51 { 39 52 if (soc_device_match(r8a774c0)) { 40 - rcar_sysc_fix_parent(r8a774c0_areas, 41 - ARRAY_SIZE(r8a774c0_areas), 42 - R8A774C0_PD_3DG_A, R8A774C0_PD_3DG_B); 43 - rcar_sysc_fix_parent(r8a774c0_areas, 44 - ARRAY_SIZE(r8a774c0_areas), 45 - R8A774C0_PD_3DG_B, R8A774C0_PD_ALWAYS_ON); 53 + /* Fix incorrect 3DG hierarchy */ 54 + swap(r8a774c0_areas[6], r8a774c0_areas[7]); 55 + r8a774c0_areas[6].parent = R8A774C0_PD_ALWAYS_ON; 56 + r8a774c0_areas[7].parent = R8A774C0_PD_3DG_B; 46 57 } 47 58 48 59 return 0;
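The replacement fixup above relies on exactly two fixed table slots being wrong on ES1.0, so the generic search helper could be dropped: entries 6 and 7 are swapped in place and re-parented. A minimal userspace model of that fixup (the indices, names, and parent assignments mirror the hunk; the struct is trimmed for illustration):

```c
/* Trimmed stand-in for the driver's power-area table entry. */
enum { PD_ALWAYS_ON, PD_3DG_A, PD_3DG_B };

struct area {
	const char *name;
	int parent;
};

/* Default table order: 3dg-a under always-on, 3dg-b under 3dg-a. */
static struct area areas[8] = {
	[6] = { "3dg-a", PD_ALWAYS_ON },
	[7] = { "3dg-b", PD_3DG_A },
};

/* ES1.0 hierarchy fix: 3dg-b must come first and sit under always-on,
 * with 3dg-a re-parented beneath it. */
static void r8a774c0_es10_fixup(void)
{
	struct area tmp = areas[6];

	areas[6] = areas[7];
	areas[7] = tmp;
	areas[6].parent = PD_ALWAYS_ON;
	areas[7].parent = PD_3DG_B;
}
```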
+3 -3
drivers/spi/spi-pic32-sqi.c
··· 466 466 int i; 467 467 468 468 /* allocate coherent DMAable memory for hardware buffer descriptors. */ 469 - sqi->bd = dma_zalloc_coherent(&sqi->master->dev, 470 - sizeof(*bd) * PESQI_BD_COUNT, 471 - &sqi->bd_dma, GFP_KERNEL); 469 + sqi->bd = dma_alloc_coherent(&sqi->master->dev, 470 + sizeof(*bd) * PESQI_BD_COUNT, 471 + &sqi->bd_dma, GFP_KERNEL); 472 472 if (!sqi->bd) { 473 473 dev_err(&sqi->master->dev, "failed allocating dma buffer\n"); 474 474 return -ENOMEM;
+1 -2
drivers/staging/mt7621-eth/mtk_eth_soc.c
··· 1396 1396 if (!ring->tx_buf) 1397 1397 goto no_tx_mem; 1398 1398 1399 - ring->tx_dma = dma_zalloc_coherent(eth->dev, 1400 - ring->tx_ring_size * sz, 1399 + ring->tx_dma = dma_alloc_coherent(eth->dev, ring->tx_ring_size * sz, 1401 1400 &ring->tx_phys, 1402 1401 GFP_ATOMIC | __GFP_ZERO); 1403 1402 if (!ring->tx_dma)
+3 -3
drivers/staging/rtl8188eu/core/rtw_security.c
··· 154 154 155 155 pframe = ((struct xmit_frame *)pxmitframe)->buf_addr + hw_hdr_offset; 156 156 157 - crypto_ops = try_then_request_module(lib80211_get_crypto_ops("WEP"), "lib80211_crypt_wep"); 157 + crypto_ops = lib80211_get_crypto_ops("WEP"); 158 158 159 159 if (!crypto_ops) 160 160 return; ··· 210 210 void *crypto_private = NULL; 211 211 int status = _SUCCESS; 212 212 const int keyindex = prxattrib->key_index; 213 - struct lib80211_crypto_ops *crypto_ops = try_then_request_module(lib80211_get_crypto_ops("WEP"), "lib80211_crypt_wep"); 213 + struct lib80211_crypto_ops *crypto_ops = lib80211_get_crypto_ops("WEP"); 214 214 char iv[4], icv[4]; 215 215 216 216 if (!crypto_ops) { ··· 1291 1291 struct sk_buff *skb = ((struct recv_frame *)precvframe)->pkt; 1292 1292 void *crypto_private = NULL; 1293 1293 u8 *key, *pframe = skb->data; 1294 - struct lib80211_crypto_ops *crypto_ops = try_then_request_module(lib80211_get_crypto_ops("CCMP"), "lib80211_crypt_ccmp"); 1294 + struct lib80211_crypto_ops *crypto_ops = lib80211_get_crypto_ops("CCMP"); 1295 1295 struct security_priv *psecuritypriv = &padapter->securitypriv; 1296 1296 char iv[8], icv[8]; 1297 1297
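The dropped try_then_request_module() wrapper expresses a lookup, load-module-on-miss, retry pattern; the hunks above reduce it to a plain lib80211_get_crypto_ops() call, which suggests the module request was folded into the lookup path itself. The shape of that retry pattern, modeled with a toy registry (every name below is a stand-in, not the lib80211 API):

```c
#include <stddef.h>
#include <string.h>

struct crypto_ops {
	const char *name;
};

/* Toy registry: empty until a "module load" registers its ops. */
static const struct crypto_ops *registry[4];

static const struct crypto_ops *registry_lookup(const char *name)
{
	for (size_t i = 0; i < 4; i++)
		if (registry[i] && strcmp(registry[i]->name, name) == 0)
			return registry[i];
	return NULL;
}

/* Stand-in for request_module(): loading the module registers its ops. */
static const struct crypto_ops wep_ops = { "WEP" };

static void request_module_stub(const char *name)
{
	if (strcmp(name, "WEP") == 0)
		registry[0] = &wep_ops;
}

/* The retry dance try_then_request_module() expressed at the call site:
 * try the registry, and only on a miss load the module and look again. */
static const struct crypto_ops *get_crypto_ops(const char *name)
{
	const struct crypto_ops *ops = registry_lookup(name);

	if (!ops) {
		request_module_stub(name);
		ops = registry_lookup(name);
	}
	return ops;
}
```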
+1 -1
drivers/staging/rtl8723bs/os_dep/sdio_intf.c
··· 22 22 { SDIO_DEVICE(0x024c, 0xb723), }, 23 23 { /* end: all zeroes */ }, 24 24 }; 25 - static const struct acpi_device_id acpi_ids[] __used = { 25 + static const struct acpi_device_id acpi_ids[] = { 26 26 {"OBDA8723", 0x0000}, 27 27 {} 28 28 };
+2 -4
drivers/staging/vc04_services/interface/vchiq_arm/vchiq_2835_arm.c
··· 407 407 /* Allocate enough storage to hold the page pointers and the page 408 408 * list 409 409 */ 410 - pagelist = dma_zalloc_coherent(g_dev, 411 - pagelist_size, 412 - &dma_addr, 413 - GFP_KERNEL); 410 + pagelist = dma_alloc_coherent(g_dev, pagelist_size, &dma_addr, 411 + GFP_KERNEL); 414 412 415 413 vchiq_log_trace(vchiq_arm_log_level, "%s - %pK", __func__, pagelist); 416 414
+6 -13
drivers/staging/vt6655/device_main.c
··· 440 440 void *vir_pool; 441 441 442 442 /*allocate all RD/TD rings a single pool*/ 443 - vir_pool = dma_zalloc_coherent(&priv->pcid->dev, 444 - priv->opts.rx_descs0 * sizeof(struct vnt_rx_desc) + 445 - priv->opts.rx_descs1 * sizeof(struct vnt_rx_desc) + 446 - priv->opts.tx_descs[0] * sizeof(struct vnt_tx_desc) + 447 - priv->opts.tx_descs[1] * sizeof(struct vnt_tx_desc), 448 - &priv->pool_dma, GFP_ATOMIC); 443 + vir_pool = dma_alloc_coherent(&priv->pcid->dev, 444 + priv->opts.rx_descs0 * sizeof(struct vnt_rx_desc) + priv->opts.rx_descs1 * sizeof(struct vnt_rx_desc) + priv->opts.tx_descs[0] * sizeof(struct vnt_tx_desc) + priv->opts.tx_descs[1] * sizeof(struct vnt_tx_desc), 445 + &priv->pool_dma, GFP_ATOMIC); 449 446 if (!vir_pool) { 450 447 dev_err(&priv->pcid->dev, "allocate desc dma memory failed\n"); 451 448 return false; ··· 456 459 priv->rd1_pool_dma = priv->rd0_pool_dma + 457 460 priv->opts.rx_descs0 * sizeof(struct vnt_rx_desc); 458 461 459 - priv->tx0_bufs = dma_zalloc_coherent(&priv->pcid->dev, 460 - priv->opts.tx_descs[0] * PKT_BUF_SZ + 461 - priv->opts.tx_descs[1] * PKT_BUF_SZ + 462 - CB_BEACON_BUF_SIZE + 463 - CB_MAX_BUF_SIZE, 464 - &priv->tx_bufs_dma0, 465 - GFP_ATOMIC); 462 + priv->tx0_bufs = dma_alloc_coherent(&priv->pcid->dev, 463 + priv->opts.tx_descs[0] * PKT_BUF_SZ + priv->opts.tx_descs[1] * PKT_BUF_SZ + CB_BEACON_BUF_SIZE + CB_MAX_BUF_SIZE, 464 + &priv->tx_bufs_dma0, GFP_ATOMIC); 466 465 if (!priv->tx0_bufs) { 467 466 dev_err(&priv->pcid->dev, "allocate buf dma memory failed\n"); 468 467
+12
drivers/tty/serial/Kconfig
··· 85 85 with "earlycon=smh" on the kernel command line. The console is 86 86 enabled when early_param is processed. 87 87 88 + config SERIAL_EARLYCON_RISCV_SBI 89 + bool "Early console using RISC-V SBI" 90 + depends on RISCV 91 + select SERIAL_CORE 92 + select SERIAL_CORE_CONSOLE 93 + select SERIAL_EARLYCON 94 + help 95 + Support for early debug console using RISC-V SBI. This enables 96 + the console before standard serial driver is probed. This is enabled 97 + with "earlycon=sbi" on the kernel command line. The console is 98 + enabled when early_param is processed. 99 + 88 100 config SERIAL_SB1250_DUART 89 101 tristate "BCM1xxx on-chip DUART serial support" 90 102 depends on SIBYTE_SB1xxx_SOC=y
+1
drivers/tty/serial/Makefile
··· 7 7 8 8 obj-$(CONFIG_SERIAL_EARLYCON) += earlycon.o 9 9 obj-$(CONFIG_SERIAL_EARLYCON_ARM_SEMIHOST) += earlycon-arm-semihost.o 10 + obj-$(CONFIG_SERIAL_EARLYCON_RISCV_SBI) += earlycon-riscv-sbi.o 10 11 11 12 # These Sparc drivers have to appear before others such as 8250 12 13 # which share ttySx minor node space. Otherwise console device
+28
drivers/tty/serial/earlycon-riscv-sbi.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * RISC-V SBI based earlycon 4 + * 5 + * Copyright (C) 2018 Anup Patel <anup@brainfault.org> 6 + */ 7 + #include <linux/kernel.h> 8 + #include <linux/console.h> 9 + #include <linux/init.h> 10 + #include <linux/serial_core.h> 11 + #include <asm/sbi.h> 12 + 13 + static void sbi_console_write(struct console *con, 14 + const char *s, unsigned int n) 15 + { 16 + int i; 17 + 18 + for (i = 0; i < n; ++i) 19 + sbi_console_putchar(s[i]); 20 + } 21 + 22 + static int __init early_sbi_setup(struct earlycon_device *device, 23 + const char *opt) 24 + { 25 + device->con->write = sbi_console_write; 26 + return 0; 27 + } 28 + EARLYCON_DECLARE(sbi, early_sbi_setup);
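The write callback in the new earlycon is a one-byte-at-a-time loop, since the SBI console interface only offers a single-character putchar. The same loop in hostable form, with sbi_console_putchar() replaced by a capturing stand-in so it can run outside the kernel:

```c
#include <stddef.h>

/* Capturing stand-in for sbi_console_putchar(), which in the kernel
 * traps into the SBI firmware with one character per call. */
static char captured[64];
static size_t captured_len;

static void sbi_console_putchar(int ch)
{
	if (captured_len < sizeof(captured))
		captured[captured_len++] = (char)ch;
}

/* Same shape as the driver's sbi_console_write(): the console layer
 * hands over a buffer and a length, and each byte is emitted singly. */
static void sbi_console_write(const char *s, unsigned int n)
{
	for (unsigned int i = 0; i < n; ++i)
		sbi_console_putchar(s[i]);
}
```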
+19 -17
drivers/tty/serial/lantiq.c
··· 114 114 115 115 static inline void asc_update_bits(u32 clear, u32 set, void __iomem *reg) 116 116 { 117 - u32 tmp = readl(reg); 117 + u32 tmp = __raw_readl(reg); 118 118 119 - writel((tmp & ~clear) | set, reg); 119 + __raw_writel((tmp & ~clear) | set, reg); 120 120 121 121 122 122 static inline struct ··· 144 144 static void 145 145 lqasc_stop_rx(struct uart_port *port) 146 146 { 147 - writel(ASCWHBSTATE_CLRREN, port->membase + LTQ_ASC_WHBSTATE); 147 + __raw_writel(ASCWHBSTATE_CLRREN, port->membase + LTQ_ASC_WHBSTATE); 148 148 } 149 149 150 150 static int ··· 153 153 struct tty_port *tport = &port->state->port; 154 154 unsigned int ch = 0, rsr = 0, fifocnt; 155 155 156 - fifocnt = readl(port->membase + LTQ_ASC_FSTAT) & ASCFSTAT_RXFFLMASK; 156 + fifocnt = __raw_readl(port->membase + LTQ_ASC_FSTAT) & 157 + ASCFSTAT_RXFFLMASK; 157 158 while (fifocnt--) { 158 159 u8 flag = TTY_NORMAL; 159 160 ch = readb(port->membase + LTQ_ASC_RBUF); 160 - rsr = (readl(port->membase + LTQ_ASC_STATE) 161 + rsr = (__raw_readl(port->membase + LTQ_ASC_STATE) 161 162 & ASCSTATE_ANY) | UART_DUMMY_UER_RX; 162 163 tty_flip_buffer_push(tport); 163 164 port->icount.rx++; ··· 218 217 return; 219 218 } 220 219 221 - while (((readl(port->membase + LTQ_ASC_FSTAT) & 220 + while (((__raw_readl(port->membase + LTQ_ASC_FSTAT) & 222 221 ASCFSTAT_TXFREEMASK) >> ASCFSTAT_TXFREEOFF) != 0) { 223 222 if (port->x_char) { 224 223 writeb(port->x_char, port->membase + LTQ_ASC_TBUF); ··· 246 245 unsigned long flags; 247 246 struct uart_port *port = (struct uart_port *)_port; 248 247 spin_lock_irqsave(&ltq_asc_lock, flags); 249 - writel(ASC_IRNCR_TIR, port->membase + LTQ_ASC_IRNCR); 248 + __raw_writel(ASC_IRNCR_TIR, port->membase + LTQ_ASC_IRNCR); 250 249 spin_unlock_irqrestore(&ltq_asc_lock, flags); 251 250 lqasc_start_tx(port); 252 251 return IRQ_HANDLED; ··· 271 270 unsigned long flags; 272 271 struct uart_port *port = (struct uart_port *)_port; 273 272 spin_lock_irqsave(&ltq_asc_lock, flags);
274 - writel(ASC_IRNCR_RIR, port->membase + LTQ_ASC_IRNCR); 273 + __raw_writel(ASC_IRNCR_RIR, port->membase + LTQ_ASC_IRNCR); 275 274 lqasc_rx_chars(port); 276 275 spin_unlock_irqrestore(&ltq_asc_lock, flags); 277 276 return IRQ_HANDLED; ··· 281 280 lqasc_tx_empty(struct uart_port *port) 282 281 { 283 282 int status; 284 - status = readl(port->membase + LTQ_ASC_FSTAT) & ASCFSTAT_TXFFLMASK; 283 + status = __raw_readl(port->membase + LTQ_ASC_FSTAT) & 284 + ASCFSTAT_TXFFLMASK; 285 285 return status ? 0 : TIOCSER_TEMT; 286 286 } ··· 315 313 asc_update_bits(ASCCLC_DISS | ASCCLC_RMCMASK, (1 << ASCCLC_RMCOFFSET), 316 314 port->membase + LTQ_ASC_CLC); 317 315 318 - writel(0, port->membase + LTQ_ASC_PISEL); 319 - writel( 316 + __raw_writel(0, port->membase + LTQ_ASC_PISEL); 317 + __raw_writel( 320 318 ((TXFIFO_FL << ASCTXFCON_TXFITLOFF) & ASCTXFCON_TXFITLMASK) | 321 319 ASCTXFCON_TXFEN | ASCTXFCON_TXFFLU, 322 320 port->membase + LTQ_ASC_TXFCON); 323 - writel( 321 + __raw_writel( 324 322 ((RXFIFO_FL << ASCRXFCON_RXFITLOFF) & ASCRXFCON_RXFITLMASK) 325 323 | ASCRXFCON_RXFEN | ASCRXFCON_RXFFLU, 326 324 port->membase + LTQ_ASC_RXFCON); ··· 352 350 goto err2; 353 351 } 354 352 355 - writel(ASC_IRNREN_RX | ASC_IRNREN_ERR | ASC_IRNREN_TX, 353 + __raw_writel(ASC_IRNREN_RX | ASC_IRNREN_ERR | ASC_IRNREN_TX, 356 354 port->membase + LTQ_ASC_IRNREN); 357 355 return 0; 358 356 ··· 371 369 free_irq(ltq_port->rx_irq, port); 372 370 free_irq(ltq_port->err_irq, port); 373 371 374 - writel(0, port->membase + LTQ_ASC_CON); 372 + __raw_writel(0, port->membase + LTQ_ASC_CON); 375 373 asc_update_bits(ASCRXFCON_RXFEN, ASCRXFCON_RXFFLU, 376 374 port->membase + LTQ_ASC_RXFCON); 377 375 asc_update_bits(ASCTXFCON_TXFEN, ASCTXFCON_TXFFLU, ··· 463 461 asc_update_bits(ASCCON_BRS, 0, port->membase + LTQ_ASC_CON); 464 462 465 463 /* now we can write the new baudrate into the register */ 466 - writel(divisor, port->membase + LTQ_ASC_BG); 464 + __raw_writel(divisor, port->membase + LTQ_ASC_BG); 467 465
468 466 /* turn the baudrate generator back on */ 469 467 asc_update_bits(0, ASCCON_R, port->membase + LTQ_ASC_CON); 470 468 471 469 /* enable rx */ 472 - writel(ASCWHBSTATE_SETREN, port->membase + LTQ_ASC_WHBSTATE); 470 + __raw_writel(ASCWHBSTATE_SETREN, port->membase + LTQ_ASC_WHBSTATE); 473 471 474 472 spin_unlock_irqrestore(&ltq_asc_lock, flags); 475 473 ··· 580 578 return; 581 579 582 580 do { 583 - fifofree = (readl(port->membase + LTQ_ASC_FSTAT) 581 + fifofree = (__raw_readl(port->membase + LTQ_ASC_FSTAT) 584 582 & ASCFSTAT_TXFREEMASK) >> ASCFSTAT_TXFREEOFF; 585 583 } while (fifofree == 0); 586 584 writeb(ch, port->membase + LTQ_ASC_TBUF);
+13 -7
drivers/tty/tty_io.c
··· 1256 1256 static int tty_reopen(struct tty_struct *tty) 1257 1257 { 1258 1258 struct tty_driver *driver = tty->driver; 1259 - int retval; 1259 + struct tty_ldisc *ld; 1260 + int retval = 0; 1260 1261 1261 1262 if (driver->type == TTY_DRIVER_TYPE_PTY && 1262 1263 driver->subtype == PTY_TYPE_MASTER) ··· 1269 1268 if (test_bit(TTY_EXCLUSIVE, &tty->flags) && !capable(CAP_SYS_ADMIN)) 1270 1269 return -EBUSY; 1271 1270 1272 - retval = tty_ldisc_lock(tty, 5 * HZ); 1273 - if (retval) 1274 - return retval; 1271 + ld = tty_ldisc_ref_wait(tty); 1272 + if (ld) { 1273 + tty_ldisc_deref(ld); 1274 + } else { 1275 + retval = tty_ldisc_lock(tty, 5 * HZ); 1276 + if (retval) 1277 + return retval; 1275 1278 1276 - if (!tty->ldisc) 1277 - retval = tty_ldisc_reinit(tty, tty->termios.c_line); 1278 - tty_ldisc_unlock(tty); 1279 + if (!tty->ldisc) 1280 + retval = tty_ldisc_reinit(tty, tty->termios.c_line); 1281 + tty_ldisc_unlock(tty); 1282 + } 1279 1283 1280 1284 if (retval == 0) 1281 1285 tty->count++;
+7
drivers/usb/class/cdc-acm.c
··· 1865 1865 .driver_info = IGNORE_DEVICE, 1866 1866 }, 1867 1867 1868 + { USB_DEVICE(0x1bc7, 0x0021), /* Telit 3G ACM only composition */ 1869 + .driver_info = SEND_ZERO_PACKET, 1870 + }, 1871 + { USB_DEVICE(0x1bc7, 0x0023), /* Telit 3G ACM + ECM composition */ 1872 + .driver_info = SEND_ZERO_PACKET, 1873 + }, 1874 + 1868 1875 /* control interfaces without any protocol set */ 1869 1876 { USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_ACM, 1870 1877 USB_CDC_PROTO_NONE) },
+6 -3
drivers/usb/core/generic.c
··· 143 143 continue; 144 144 } 145 145 146 - if (i > 0 && desc && is_audio(desc) && is_uac3_config(desc)) { 147 - best = c; 148 - break; 146 + if (i > 0 && desc && is_audio(desc)) { 147 + if (is_uac3_config(desc)) { 148 + best = c; 149 + break; 150 + } 151 + continue; 149 152 } 150 153 151 154 /* From the remaining configs, choose the first one whose
+2 -1
drivers/usb/core/quirks.c
··· 394 394 { USB_DEVICE(0x1a40, 0x0101), .driver_info = USB_QUIRK_HUB_SLOW_RESET }, 395 395 396 396 /* Corsair K70 RGB */ 397 - { USB_DEVICE(0x1b1c, 0x1b13), .driver_info = USB_QUIRK_DELAY_INIT }, 397 + { USB_DEVICE(0x1b1c, 0x1b13), .driver_info = USB_QUIRK_DELAY_INIT | 398 + USB_QUIRK_DELAY_CTRL_MSG }, 398 399 399 400 /* Corsair Strafe */ 400 401 { USB_DEVICE(0x1b1c, 0x1b15), .driver_info = USB_QUIRK_DELAY_INIT |
+6 -7
drivers/usb/gadget/udc/bdc/bdc_core.c
··· 172 172 /* Refer to BDC spec, Table 4 for description of SPB */ 173 173 sp_buff_size = 1 << (sp_buff_size + 5); 174 174 dev_dbg(bdc->dev, "Allocating %d bytes for scratchpad\n", sp_buff_size); 175 - bdc->scratchpad.buff = dma_zalloc_coherent(bdc->dev, sp_buff_size, 176 - &bdc->scratchpad.sp_dma, GFP_KERNEL); 175 + bdc->scratchpad.buff = dma_alloc_coherent(bdc->dev, sp_buff_size, 176 + &bdc->scratchpad.sp_dma, 177 + GFP_KERNEL); 177 178 178 179 if (!bdc->scratchpad.buff) 179 180 goto fail; ··· 203 202 bdc_writel(bdc->regs, BDC_SRRINT(0), BDC_SRR_RWS | BDC_SRR_RST); 204 203 bdc->srr.dqp_index = 0; 205 204 /* allocate the status report descriptors */ 206 - bdc->srr.sr_bds = dma_zalloc_coherent( 207 - bdc->dev, 208 - NUM_SR_ENTRIES * sizeof(struct bdc_bd), 209 - &bdc->srr.dma_addr, 210 - GFP_KERNEL); 205 + bdc->srr.sr_bds = dma_alloc_coherent(bdc->dev, 206 + NUM_SR_ENTRIES * sizeof(struct bdc_bd), 207 + &bdc->srr.dma_addr, GFP_KERNEL); 211 208 if (!bdc->srr.sr_bds) 212 209 return -ENOMEM; 213 210
+3 -3
drivers/usb/host/uhci-hcd.c
··· 596 596 &uhci_debug_operations); 597 597 #endif 598 598 599 - uhci->frame = dma_zalloc_coherent(uhci_dev(uhci), 600 - UHCI_NUMFRAMES * sizeof(*uhci->frame), 601 - &uhci->frame_dma_handle, GFP_KERNEL); 599 + uhci->frame = dma_alloc_coherent(uhci_dev(uhci), 600 + UHCI_NUMFRAMES * sizeof(*uhci->frame), 601 + &uhci->frame_dma_handle, GFP_KERNEL); 602 602 if (!uhci->frame) { 603 603 dev_err(uhci_dev(uhci), 604 604 "unable to allocate consistent memory for frame list\n");
+4 -4
drivers/usb/host/xhci-mem.c
··· 1672 1672 xhci->dcbaa->dev_context_ptrs[0] = cpu_to_le64(xhci->scratchpad->sp_dma); 1673 1673 for (i = 0; i < num_sp; i++) { 1674 1674 dma_addr_t dma; 1675 - void *buf = dma_zalloc_coherent(dev, xhci->page_size, &dma, 1676 - flags); 1675 + void *buf = dma_alloc_coherent(dev, xhci->page_size, &dma, 1676 + flags); 1677 1677 if (!buf) 1678 1678 goto fail_sp4; 1679 1679 ··· 1799 1799 struct xhci_erst_entry *entry; 1800 1800 1801 1801 size = sizeof(struct xhci_erst_entry) * evt_ring->num_segs; 1802 - erst->entries = dma_zalloc_coherent(xhci_to_hcd(xhci)->self.sysdev, 1803 - size, &erst->erst_dma_addr, flags); 1802 + erst->entries = dma_alloc_coherent(xhci_to_hcd(xhci)->self.sysdev, 1803 + size, &erst->erst_dma_addr, flags); 1804 1804 if (!erst->entries) 1805 1805 return -ENOMEM; 1806 1806
+6 -2
drivers/usb/storage/scsiglue.c
··· 235 235 if (!(us->fflags & US_FL_NEEDS_CAP16)) 236 236 sdev->try_rc_10_first = 1; 237 237 238 - /* assume SPC3 or latter devices support sense size > 18 */ 239 - if (sdev->scsi_level > SCSI_SPC_2) 238 + /* 239 + * assume SPC3 or later devices support sense size > 18 240 + * unless the US_FL_BAD_SENSE quirk is specified. 241 + */ 242 + if (sdev->scsi_level > SCSI_SPC_2 && 243 + !(us->fflags & US_FL_BAD_SENSE)) 240 244 us->fflags |= US_FL_SANE_SENSE; 241 245 242 246 /*
+12
drivers/usb/storage/unusual_devs.h
··· 1266 1266 US_FL_FIX_CAPACITY ), 1267 1267 1268 1268 /* 1269 + * Reported by Icenowy Zheng <icenowy@aosc.io> 1270 + * The SMI SM3350 USB-UFS bridge controller will enter a wrong state 1271 + * in which it does not process read/write commands if a long sense is 1272 + * requested, so force it to use an 18-byte sense. 1273 + */ 1274 + UNUSUAL_DEV( 0x090c, 0x3350, 0x0000, 0xffff, 1275 + "SMI", 1276 + "SM3350 UFS-to-USB-Mass-Storage bridge", 1277 + USB_SC_DEVICE, USB_PR_DEVICE, NULL, 1278 + US_FL_BAD_SENSE ), 1279 + 1280 + /* 1269 1281 * Reported by Paul Hartman <paul.hartman+linux@gmail.com> 1270 1282 * This card reader returns "Illegal Request, Logical Block Address 1271 1283 * Out of Range" for the first READ(10) after a new card is inserted.
+1 -1
drivers/vfio/pci/trace.h
··· 94 94 #endif /* _TRACE_VFIO_PCI_H */ 95 95 96 96 #undef TRACE_INCLUDE_PATH 97 - #define TRACE_INCLUDE_PATH . 97 + #define TRACE_INCLUDE_PATH ../../drivers/vfio/pci 98 98 #undef TRACE_INCLUDE_FILE 99 99 #define TRACE_INCLUDE_FILE trace 100 100
+1 -1
drivers/vfio/vfio_iommu_type1.c
··· 878 878 return -EINVAL; 879 879 if (!unmap->size || unmap->size & mask) 880 880 return -EINVAL; 881 - if (unmap->iova + unmap->size < unmap->iova || 881 + if (unmap->iova + unmap->size - 1 < unmap->iova || 882 882 unmap->size > SIZE_MAX) 883 883 return -EINVAL; 884 884
+3 -3
drivers/video/fbdev/da8xx-fb.c
··· 1446 1446 da8xx_fb_fix.line_length - 1; 1447 1447 1448 1448 /* allocate palette buffer */ 1449 - par->v_palette_base = dma_zalloc_coherent(NULL, PALETTE_SIZE, 1450 - &par->p_palette_base, 1451 - GFP_KERNEL | GFP_DMA); 1449 + par->v_palette_base = dma_alloc_coherent(NULL, PALETTE_SIZE, 1450 + &par->p_palette_base, 1451 + GFP_KERNEL | GFP_DMA); 1452 1452 if (!par->v_palette_base) { 1453 1453 dev_err(&device->dev, 1454 1454 "GLCD: kmalloc for palette buffer failed\n");
+9 -7
fs/btrfs/ctree.c
··· 1016 1016 parent_start = parent->start; 1017 1017 1018 1018 /* 1019 - * If we are COWing a node/leaf from the extent, chunk or device trees, 1020 - * make sure that we do not finish block group creation of pending block 1021 - * groups. We do this to avoid a deadlock. 1019 + * If we are COWing a node/leaf from the extent, chunk, device or free 1020 + * space trees, make sure that we do not finish block group creation of 1021 + * pending block groups. We do this to avoid a deadlock. 1022 1022 * COWing can result in allocation of a new chunk, and flushing pending 1023 1023 * block groups (btrfs_create_pending_block_groups()) can be triggered 1024 1024 * when finishing allocation of a new chunk. Creation of a pending block 1025 - * group modifies the extent, chunk and device trees, therefore we could 1026 - * deadlock with ourselves since we are holding a lock on an extent 1027 - * buffer that btrfs_create_pending_block_groups() may try to COW later. 1025 + * group modifies the extent, chunk, device and free space trees, 1026 + * therefore we could deadlock with ourselves since we are holding a 1027 + * lock on an extent buffer that btrfs_create_pending_block_groups() may 1028 + * try to COW later. 1028 1029 */ 1029 1030 if (root == fs_info->extent_root || 1030 1031 root == fs_info->chunk_root || 1031 - root == fs_info->dev_root) 1032 + root == fs_info->dev_root || 1033 + root == fs_info->free_space_root) 1032 1034 trans->can_flush_pending_bgs = false; 1033 1035 1034 1036 cow = btrfs_alloc_tree_block(trans, root, parent_start,
+43 -6
fs/btrfs/ioctl.c
··· 3221 3221 inode_lock_nested(inode2, I_MUTEX_CHILD); 3222 3222 } 3223 3223 3224 + static void btrfs_double_extent_unlock(struct inode *inode1, u64 loff1, 3225 + struct inode *inode2, u64 loff2, u64 len) 3226 + { 3227 + unlock_extent(&BTRFS_I(inode1)->io_tree, loff1, loff1 + len - 1); 3228 + unlock_extent(&BTRFS_I(inode2)->io_tree, loff2, loff2 + len - 1); 3229 + } 3230 + 3231 + static void btrfs_double_extent_lock(struct inode *inode1, u64 loff1, 3232 + struct inode *inode2, u64 loff2, u64 len) 3233 + { 3234 + if (inode1 < inode2) { 3235 + swap(inode1, inode2); 3236 + swap(loff1, loff2); 3237 + } else if (inode1 == inode2 && loff2 < loff1) { 3238 + swap(loff1, loff2); 3239 + } 3240 + lock_extent(&BTRFS_I(inode1)->io_tree, loff1, loff1 + len - 1); 3241 + lock_extent(&BTRFS_I(inode2)->io_tree, loff2, loff2 + len - 1); 3242 + } 3243 + 3224 3244 static int btrfs_extent_same_range(struct inode *src, u64 loff, u64 olen, 3225 3245 struct inode *dst, u64 dst_loff) 3226 3246 { ··· 3262 3242 return -EINVAL; 3263 3243 3264 3244 /* 3265 - * Lock destination range to serialize with concurrent readpages(). 3245 + * Lock destination range to serialize with concurrent readpages() and 3246 + * source range to serialize with relocation. 
3266 3247 */ 3267 - lock_extent(&BTRFS_I(dst)->io_tree, dst_loff, dst_loff + len - 1); 3248 + btrfs_double_extent_lock(src, loff, dst, dst_loff, len); 3268 3249 ret = btrfs_clone(src, dst, loff, olen, len, dst_loff, 1); 3269 - unlock_extent(&BTRFS_I(dst)->io_tree, dst_loff, dst_loff + len - 1); 3250 + btrfs_double_extent_unlock(src, loff, dst, dst_loff, len); 3270 3251 3271 3252 return ret; 3272 3253 } ··· 3926 3905 len = ALIGN(src->i_size, bs) - off; 3927 3906 3928 3907 if (destoff > inode->i_size) { 3908 + const u64 wb_start = ALIGN_DOWN(inode->i_size, bs); 3909 + 3929 3910 ret = btrfs_cont_expand(inode, inode->i_size, destoff); 3911 + if (ret) 3912 + return ret; 3913 + /* 3914 + * We may have truncated the last block if the inode's size is 3915 + * not sector size aligned, so we need to wait for writeback to 3916 + * complete before proceeding further, otherwise we can race 3917 + * with cloning and attempt to increment a reference to an 3918 + * extent that no longer exists (writeback completed right after 3919 + * we found the previous extent covering eof and before we 3920 + * attempted to increment its reference count). 3921 + */ 3922 + ret = btrfs_wait_ordered_range(inode, wb_start, 3923 + destoff - wb_start); 3930 3924 if (ret) 3931 3925 return ret; 3932 3926 } 3933 3927 3934 3928 /* 3935 - * Lock destination range to serialize with concurrent readpages(). 3929 + * Lock destination range to serialize with concurrent readpages() and 3930 + * source range to serialize with relocation. 
3936 3931 */ 3937 - lock_extent(&BTRFS_I(inode)->io_tree, destoff, destoff + len - 1); 3932 + btrfs_double_extent_lock(src, off, inode, destoff, len); 3938 3933 ret = btrfs_clone(src, inode, off, olen, len, destoff, 0); 3939 - unlock_extent(&BTRFS_I(inode)->io_tree, destoff, destoff + len - 1); 3934 + btrfs_double_extent_unlock(src, off, inode, destoff, len); 3940 3935 /* 3941 3936 * Truncate page cache pages so that future reads will see the cloned 3942 3937 * data immediately and not the previous data.
+12
fs/btrfs/volumes.c
··· 7825 7825 ret = -EUCLEAN; 7826 7826 goto out; 7827 7827 } 7828 + 7829 + /* It's possible this device is a dummy for seed device */ 7830 + if (dev->disk_total_bytes == 0) { 7831 + dev = find_device(fs_info->fs_devices->seed, devid, NULL); 7832 + if (!dev) { 7833 + btrfs_err(fs_info, "failed to find seed devid %llu", 7834 + devid); 7835 + ret = -EUCLEAN; 7836 + goto out; 7837 + } 7838 + } 7839 + 7828 7840 if (physical_offset + physical_len > dev->disk_total_bytes) { 7829 7841 btrfs_err(fs_info, 7830 7842 "dev extent devid %llu physical offset %llu len %llu is beyond device boundary %llu",
+1 -4
fs/ceph/addr.c
··· 1494 1494 if (err < 0 || off >= i_size_read(inode)) { 1495 1495 unlock_page(page); 1496 1496 put_page(page); 1497 - if (err == -ENOMEM) 1498 - ret = VM_FAULT_OOM; 1499 - else 1500 - ret = VM_FAULT_SIGBUS; 1497 + ret = vmf_error(err); 1501 1498 goto out_inline; 1502 1499 } 1503 1500 if (err < PAGE_SIZE)
+2 -2
fs/ceph/super.c
··· 530 530 seq_putc(m, ','); 531 531 pos = m->count; 532 532 533 - ret = ceph_print_client_options(m, fsc->client); 533 + ret = ceph_print_client_options(m, fsc->client, false); 534 534 if (ret) 535 535 return ret; 536 536 ··· 640 640 opt = NULL; /* fsc->client now owns this */ 641 641 642 642 fsc->client->extra_mon_dispatch = extra_mon_dispatch; 643 - fsc->client->osdc.abort_on_full = true; 643 + ceph_set_opt(fsc->client, ABORT_ON_FULL); 644 644 645 645 if (!fsopt->mds_namespace) { 646 646 ceph_monc_want_map(&fsc->client->monc, CEPH_SUB_MDSMAP,
+1 -1
fs/cifs/cifsfs.h
··· 150 150 extern const struct export_operations cifs_export_ops; 151 151 #endif /* CONFIG_CIFS_NFSD_EXPORT */ 152 152 153 - #define CIFS_VERSION "2.15" 153 + #define CIFS_VERSION "2.16" 154 154 #endif /* _CIFSFS_H */
+20
fs/cifs/cifsglob.h
··· 1438 1438 int mid_state; /* wish this were enum but can not pass to wait_event */ 1439 1439 unsigned int mid_flags; 1440 1440 __le16 command; /* smb command code */ 1441 + unsigned int optype; /* operation type */ 1441 1442 bool large_buf:1; /* if valid response, is pointer to large buf */ 1442 1443 bool multiRsp:1; /* multiple trans2 responses for one request */ 1443 1444 bool multiEnd:1; /* both received */ ··· 1573 1572 kfree(param[i].node_name); 1574 1573 } 1575 1574 kfree(param); 1575 + } 1576 + 1577 + static inline bool is_interrupt_error(int error) 1578 + { 1579 + switch (error) { 1580 + case -EINTR: 1581 + case -ERESTARTSYS: 1582 + case -ERESTARTNOHAND: 1583 + case -ERESTARTNOINTR: 1584 + return true; 1585 + } 1586 + return false; 1587 + } 1588 + 1589 + static inline bool is_retryable_error(int error) 1590 + { 1591 + if (is_interrupt_error(error) || error == -EAGAIN) 1592 + return true; 1593 + return false; 1576 1594 } 1577 1595 1578 1596 #define MID_FREE 0
+20 -10
fs/cifs/cifssmb.c
··· 128 128 int rc; 129 129 struct dfs_cache_tgt_list tl; 130 130 struct dfs_cache_tgt_iterator *it = NULL; 131 - char tree[MAX_TREE_SIZE + 1]; 131 + char *tree; 132 132 const char *tcp_host; 133 133 size_t tcp_host_len; 134 134 const char *dfs_host; 135 135 size_t dfs_host_len; 136 136 137 + tree = kzalloc(MAX_TREE_SIZE, GFP_KERNEL); 138 + if (!tree) 139 + return -ENOMEM; 140 + 137 141 if (tcon->ipc) { 138 - snprintf(tree, sizeof(tree), "\\\\%s\\IPC$", 142 + snprintf(tree, MAX_TREE_SIZE, "\\\\%s\\IPC$", 139 143 tcon->ses->server->hostname); 140 - return CIFSTCon(0, tcon->ses, tree, tcon, nlsc); 144 + rc = CIFSTCon(0, tcon->ses, tree, tcon, nlsc); 145 + goto out; 141 146 } 142 147 143 - if (!tcon->dfs_path) 144 - return CIFSTCon(0, tcon->ses, tcon->treeName, tcon, nlsc); 148 + if (!tcon->dfs_path) { 149 + rc = CIFSTCon(0, tcon->ses, tcon->treeName, tcon, nlsc); 150 + goto out; 151 + } 145 152 146 153 rc = dfs_cache_noreq_find(tcon->dfs_path + 1, NULL, &tl); 147 154 if (rc) 148 - return rc; 155 + goto out; 149 156 150 157 extract_unc_hostname(tcon->ses->server->hostname, &tcp_host, 151 158 &tcp_host_len); ··· 172 165 continue; 173 166 } 174 167 175 - snprintf(tree, sizeof(tree), "\\%s", tgt); 168 + snprintf(tree, MAX_TREE_SIZE, "\\%s", tgt); 176 169 177 170 rc = CIFSTCon(0, tcon->ses, tree, tcon, nlsc); 178 171 if (!rc) ··· 189 182 rc = -ENOENT; 190 183 } 191 184 dfs_cache_free_tgts(&tl); 185 + out: 186 + kfree(tree); 192 187 return rc; 193 188 } 194 189 #else ··· 2123 2114 2124 2115 for (j = 0; j < nr_pages; j++) { 2125 2116 unlock_page(wdata2->pages[j]); 2126 - if (rc != 0 && rc != -EAGAIN) { 2117 + if (rc != 0 && !is_retryable_error(rc)) { 2127 2118 SetPageError(wdata2->pages[j]); 2128 2119 end_page_writeback(wdata2->pages[j]); 2129 2120 put_page(wdata2->pages[j]); ··· 2132 2123 2133 2124 if (rc) { 2134 2125 kref_put(&wdata2->refcount, cifs_writedata_release); 2135 - if (rc == -EAGAIN) 2126 + if (is_retryable_error(rc)) 2136 2127 continue; 2137 2128 break;
2138 2129 } ··· 2141 2132 i += nr_pages; 2142 2133 } while (i < wdata->nr_pages); 2143 2134 2144 - mapping_set_error(inode->i_mapping, rc); 2135 + if (rc != 0 && !is_retryable_error(rc)) 2136 + mapping_set_error(inode->i_mapping, rc); 2145 2137 kref_put(&wdata->refcount, cifs_writedata_release); 2146 2138 }
+4 -3
fs/cifs/connect.c
··· 433 433 kfree(server->hostname); 434 434 435 435 server->hostname = extract_hostname(name); 436 - if (!server->hostname) { 437 - cifs_dbg(FYI, "%s: failed to extract hostname from target: %d\n", 438 - __func__, -ENOMEM); 436 + if (IS_ERR(server->hostname)) { 437 + cifs_dbg(FYI, 438 + "%s: failed to extract hostname from target: %ld\n", 439 + __func__, PTR_ERR(server->hostname)); 439 440 } 440 441 } 441 442
+1
fs/cifs/dfs_cache.c
··· 776 776 it->it_name = kstrndup(t->t_name, strlen(t->t_name), 777 777 GFP_KERNEL); 778 778 if (!it->it_name) { 779 + kfree(it); 779 780 rc = -ENOMEM; 780 781 goto err_free_it; 781 782 }
+35 -10
fs/cifs/file.c
··· 733 733 734 734 if (can_flush) { 735 735 rc = filemap_write_and_wait(inode->i_mapping); 736 - mapping_set_error(inode->i_mapping, rc); 736 + if (!is_interrupt_error(rc)) 737 + mapping_set_error(inode->i_mapping, rc); 737 738 738 739 if (tcon->unix_ext) 739 740 rc = cifs_get_inode_info_unix(&inode, full_path, ··· 1133 1132 1134 1133 /* 1135 1134 * Accessing maxBuf is racy with cifs_reconnect - need to store value 1136 - * and check it for zero before using. 1135 + * and check it before using. 1137 1136 */ 1138 1137 max_buf = tcon->ses->server->maxBuf; 1139 - if (!max_buf) { 1138 + if (max_buf < (sizeof(struct smb_hdr) + sizeof(LOCKING_ANDX_RANGE))) { 1140 1139 free_xid(xid); 1141 1140 return -EINVAL; 1142 1141 } 1143 1142 1143 + BUILD_BUG_ON(sizeof(struct smb_hdr) + sizeof(LOCKING_ANDX_RANGE) > 1144 + PAGE_SIZE); 1145 + max_buf = min_t(unsigned int, max_buf - sizeof(struct smb_hdr), 1146 + PAGE_SIZE); 1144 1147 max_num = (max_buf - sizeof(struct smb_hdr)) / 1145 1148 sizeof(LOCKING_ANDX_RANGE); 1146 1149 buf = kcalloc(max_num, sizeof(LOCKING_ANDX_RANGE), GFP_KERNEL); ··· 1477 1472 1478 1473 /* 1479 1474 * Accessing maxBuf is racy with cifs_reconnect - need to store value 1480 - * and check it for zero before using. 1475 + * and check it before using. 
1481 1476 */ 1482 1477 max_buf = tcon->ses->server->maxBuf; 1483 - if (!max_buf) 1478 + if (max_buf < (sizeof(struct smb_hdr) + sizeof(LOCKING_ANDX_RANGE))) 1484 1479 return -EINVAL; 1485 1480 1481 + BUILD_BUG_ON(sizeof(struct smb_hdr) + sizeof(LOCKING_ANDX_RANGE) > 1482 + PAGE_SIZE); 1483 + max_buf = min_t(unsigned int, max_buf - sizeof(struct smb_hdr), 1484 + PAGE_SIZE); 1486 1485 max_num = (max_buf - sizeof(struct smb_hdr)) / 1487 1486 sizeof(LOCKING_ANDX_RANGE); 1488 1487 buf = kcalloc(max_num, sizeof(LOCKING_ANDX_RANGE), GFP_KERNEL); ··· 2119 2110 pgoff_t end, index; 2120 2111 struct cifs_writedata *wdata; 2121 2112 int rc = 0; 2113 + int saved_rc = 0; 2122 2114 unsigned int xid; 2123 2115 2124 2116 /* ··· 2148 2138 2149 2139 rc = server->ops->wait_mtu_credits(server, cifs_sb->wsize, 2150 2140 &wsize, &credits); 2151 - if (rc) 2141 + if (rc != 0) { 2142 + done = true; 2152 2143 break; 2144 + } 2153 2145 2154 2146 tofind = min((wsize / PAGE_SIZE) - 1, end - index) + 1; 2155 2147 ··· 2159 2147 &found_pages); 2160 2148 if (!wdata) { 2161 2149 rc = -ENOMEM; 2150 + done = true; 2162 2151 add_credits_and_wake_if(server, credits, 0); 2163 2152 break; 2164 2153 } ··· 2188 2175 if (rc != 0) { 2189 2176 add_credits_and_wake_if(server, wdata->credits, 0); 2190 2177 for (i = 0; i < nr_pages; ++i) { 2191 - if (rc == -EAGAIN) 2178 + if (is_retryable_error(rc)) 2192 2179 redirty_page_for_writepage(wbc, 2193 2180 wdata->pages[i]); 2194 2181 else ··· 2196 2183 end_page_writeback(wdata->pages[i]); 2197 2184 put_page(wdata->pages[i]); 2198 2185 } 2199 - if (rc != -EAGAIN) 2186 + if (!is_retryable_error(rc)) 2200 2187 mapping_set_error(mapping, rc); 2201 2188 } 2202 2189 kref_put(&wdata->refcount, cifs_writedata_release); ··· 2205 2192 index = saved_index; 2206 2193 continue; 2207 2194 } 2195 + 2196 + /* Return immediately if we received a signal during writing */ 2197 + if (is_interrupt_error(rc)) { 2198 + done = true; 2199 + break; 2200 + } 2201 + 2202 + if (rc != 0 && saved_rc == 0)
2203 + saved_rc = rc; 2208 2204 2209 2205 wbc->nr_to_write -= nr_pages; 2210 2206 if (wbc->nr_to_write <= 0) ··· 2231 2209 index = 0; 2232 2210 goto retry; 2233 2211 } 2212 + 2213 + if (saved_rc != 0) 2214 + rc = saved_rc; 2234 2215 2235 2216 if (wbc->range_cyclic || (range_whole && wbc->nr_to_write > 0)) 2236 2217 mapping->writeback_index = index; ··· 2267 2242 set_page_writeback(page); 2268 2243 retry_write: 2269 2244 rc = cifs_partialpagewrite(page, 0, PAGE_SIZE); 2270 - if (rc == -EAGAIN) { 2271 - if (wbc->sync_mode == WB_SYNC_ALL) 2245 + if (is_retryable_error(rc)) { 2246 + if (wbc->sync_mode == WB_SYNC_ALL && rc == -EAGAIN) 2272 2247 goto retry_write; 2273 2248 redirty_page_for_writepage(wbc, page); 2274 2249 } else if (rc != 0) {
+10
fs/cifs/inode.c
··· 2257 2257 * the flush returns error? 2258 2258 */ 2259 2259 rc = filemap_write_and_wait(inode->i_mapping); 2260 + if (is_interrupt_error(rc)) { 2261 + rc = -ERESTARTSYS; 2262 + goto out; 2263 + } 2264 + 2260 2265 mapping_set_error(inode->i_mapping, rc); 2261 2266 rc = 0; 2262 2267 ··· 2405 2400 * the flush returns error? 2406 2401 */ 2407 2402 rc = filemap_write_and_wait(inode->i_mapping); 2403 + if (is_interrupt_error(rc)) { 2404 + rc = -ERESTARTSYS; 2405 + goto cifs_setattr_exit; 2406 + } 2407 + 2408 2408 mapping_set_error(inode->i_mapping, rc); 2409 2409 rc = 0; 2410 2410
+6 -2
fs/cifs/smb2file.c
··· 122 122 123 123 /* 124 124 * Accessing maxBuf is racy with cifs_reconnect - need to store value 125 - * and check it for zero before using. 125 + * and check it before using. 126 126 */ 127 127 max_buf = tcon->ses->server->maxBuf; 128 - if (!max_buf) 128 + if (max_buf < sizeof(struct smb2_lock_element)) 129 129 return -EINVAL; 130 130 131 + BUILD_BUG_ON(sizeof(struct smb2_lock_element) > PAGE_SIZE); 132 + max_buf = min_t(unsigned int, max_buf, PAGE_SIZE); 131 133 max_num = max_buf / sizeof(struct smb2_lock_element); 132 134 buf = kcalloc(max_num, sizeof(struct smb2_lock_element), GFP_KERNEL); 133 135 if (!buf) ··· 266 264 return -EINVAL; 267 265 } 268 266 267 + BUILD_BUG_ON(sizeof(struct smb2_lock_element) > PAGE_SIZE); 268 + max_buf = min_t(unsigned int, max_buf, PAGE_SIZE); 269 269 max_num = max_buf / sizeof(struct smb2_lock_element); 270 270 buf = kcalloc(max_num, sizeof(struct smb2_lock_element), GFP_KERNEL); 271 271 if (!buf) {
+22 -9
fs/cifs/smb2pdu.c
··· 162 162 int rc; 163 163 struct dfs_cache_tgt_list tl; 164 164 struct dfs_cache_tgt_iterator *it = NULL; 165 - char tree[MAX_TREE_SIZE + 1]; 165 + char *tree; 166 166 const char *tcp_host; 167 167 size_t tcp_host_len; 168 168 const char *dfs_host; 169 169 size_t dfs_host_len; 170 170 171 + tree = kzalloc(MAX_TREE_SIZE, GFP_KERNEL); 172 + if (!tree) 173 + return -ENOMEM; 174 + 171 175 if (tcon->ipc) { 172 - snprintf(tree, sizeof(tree), "\\\\%s\\IPC$", 176 + snprintf(tree, MAX_TREE_SIZE, "\\\\%s\\IPC$", 173 177 tcon->ses->server->hostname); 174 - return SMB2_tcon(0, tcon->ses, tree, tcon, nlsc); 178 + rc = SMB2_tcon(0, tcon->ses, tree, tcon, nlsc); 179 + goto out; 175 180 } 176 181 177 - if (!tcon->dfs_path) 178 - return SMB2_tcon(0, tcon->ses, tcon->treeName, tcon, nlsc); 182 + if (!tcon->dfs_path) { 183 + rc = SMB2_tcon(0, tcon->ses, tcon->treeName, tcon, nlsc); 184 + goto out; 185 + } 179 186 180 187 rc = dfs_cache_noreq_find(tcon->dfs_path + 1, NULL, &tl); 181 188 if (rc) 182 - return rc; 189 + goto out; 183 190 184 191 extract_unc_hostname(tcon->ses->server->hostname, &tcp_host, 185 192 &tcp_host_len); ··· 206 199 continue; 207 200 } 208 201 209 - snprintf(tree, sizeof(tree), "\\%s", tgt); 202 + snprintf(tree, MAX_TREE_SIZE, "\\%s", tgt); 210 203 211 204 rc = SMB2_tcon(0, tcon->ses, tree, tcon, nlsc); 212 205 if (!rc) ··· 223 216 rc = -ENOENT; 224 217 } 225 218 dfs_cache_free_tgts(&tl); 219 + out: 220 + kfree(tree); 226 221 return rc; 227 222 } 228 223 #else ··· 3287 3278 if (rdata->credits) { 3288 3279 shdr->CreditCharge = cpu_to_le16(DIV_ROUND_UP(rdata->bytes, 3289 3280 SMB2_MAX_BUFFER_SIZE)); 3290 - shdr->CreditRequest = shdr->CreditCharge; 3281 + shdr->CreditRequest = 3282 + cpu_to_le16(le16_to_cpu(shdr->CreditCharge) + 1); 3291 3283 spin_lock(&server->req_lock); 3292 3284 server->credits += rdata->credits - 3293 3285 le16_to_cpu(shdr->CreditCharge); 3294 3286 spin_unlock(&server->req_lock); 3295 3287 wake_up(&server->request_q); 3288 + rdata->credits = le16_to_cpu(shdr->CreditCharge);
3296 3289 flags |= CIFS_HAS_CREDITS; 3297 3290 } 3298 3291 ··· 3566 3555 if (wdata->credits) { 3567 3556 shdr->CreditCharge = cpu_to_le16(DIV_ROUND_UP(wdata->bytes, 3568 3557 SMB2_MAX_BUFFER_SIZE)); 3569 - shdr->CreditRequest = shdr->CreditCharge; 3558 + shdr->CreditRequest = 3559 + cpu_to_le16(le16_to_cpu(shdr->CreditCharge) + 1); 3570 3560 spin_lock(&server->req_lock); 3571 3561 server->credits += wdata->credits - 3572 3562 le16_to_cpu(shdr->CreditCharge); 3573 3563 spin_unlock(&server->req_lock); 3574 3564 wake_up(&server->request_q); 3565 + wdata->credits = le16_to_cpu(shdr->CreditCharge); 3575 3566 flags |= CIFS_HAS_CREDITS; 3576 3567 } 3577 3568
+92 -30
fs/cifs/transport.c
··· 387 387 if (rc < 0 && rc != -EINTR) 388 388 cifs_dbg(VFS, "Error %d sending data on socket to server\n", 389 389 rc); 390 - else 390 + else if (rc > 0) 391 391 rc = 0; 392 392 393 393 return rc; ··· 783 783 } 784 784 785 785 static void 786 - cifs_noop_callback(struct mid_q_entry *mid) 786 + cifs_compound_callback(struct mid_q_entry *mid) 787 787 { 788 + struct TCP_Server_Info *server = mid->server; 789 + unsigned int optype = mid->optype; 790 + unsigned int credits_received = 0; 791 + 792 + if (mid->mid_state == MID_RESPONSE_RECEIVED) { 793 + if (mid->resp_buf) 794 + credits_received = server->ops->get_credits(mid); 795 + else 796 + cifs_dbg(FYI, "Bad state for cancelled MID\n"); 797 + } 798 + 799 + add_credits(server, credits_received, optype); 800 + } 801 + 802 + static void 803 + cifs_compound_last_callback(struct mid_q_entry *mid) 804 + { 805 + cifs_compound_callback(mid); 806 + cifs_wake_up_task(mid); 807 + } 808 + 809 + static void 810 + cifs_cancelled_callback(struct mid_q_entry *mid) 811 + { 812 + cifs_compound_callback(mid); 813 + DeleteMidQEntry(mid); 788 814 } 789 815 790 816 int ··· 821 795 int i, j, rc = 0; 822 796 int timeout, optype; 823 797 struct mid_q_entry *midQ[MAX_COMPOUND]; 824 - unsigned int credits = 0; 798 + bool cancelled_mid[MAX_COMPOUND] = {false}; 799 + unsigned int credits[MAX_COMPOUND] = {0}; 825 800 char *buf; 826 801 827 802 timeout = flags & CIFS_TIMEOUT_MASK; ··· 840 813 return -ENOENT; 841 814 842 815 /* 843 - * Ensure that we do not send more than 50 overlapping requests 844 - * to the same server. We may make this configurable later or 845 - * use ses->maxReq. 816 + * Ensure we obtain 1 credit per request in the compound chain. 
817 + * It can be optimized further by waiting for all the credits 818 + * at once but this can wait long enough if we don't have enough 819 + * credits due to some heavy operations in progress or the server 820 + * not granting us much, so a fallback to the current approach is 821 + * needed anyway. 846 822 */ 847 - rc = wait_for_free_request(ses->server, timeout, optype); 848 - if (rc) 849 - return rc; 823 + for (i = 0; i < num_rqst; i++) { 824 + rc = wait_for_free_request(ses->server, timeout, optype); 825 + if (rc) { 826 + /* 827 + * We haven't sent an SMB packet to the server yet but 828 + * we already obtained credits for i requests in the 829 + * compound chain - need to return those credits back 830 + * for future use. Note that we need to call add_credits 831 + * multiple times to match the way we obtained credits 832 + * in the first place and to account for in flight 833 + * requests correctly. 834 + */ 835 + for (j = 0; j < i; j++) 836 + add_credits(ses->server, 1, optype); 837 + return rc; 838 + } 839 + credits[i] = 1; 840 + } 850 841 851 842 /* 852 843 * Make sure that we sign in the same order that we send on this socket ··· 880 835 for (j = 0; j < i; j++) 881 836 cifs_delete_mid(midQ[j]); 882 837 mutex_unlock(&ses->server->srv_mutex); 838 + 883 839 /* Update # of requests on wire to server */ 884 - add_credits(ses->server, 1, optype); 840 + for (j = 0; j < num_rqst; j++) 841 + add_credits(ses->server, credits[j], optype); 885 842 return PTR_ERR(midQ[i]); 886 843 } 887 844 888 845 midQ[i]->mid_state = MID_REQUEST_SUBMITTED; 846 + midQ[i]->optype = optype; 889 847 /* 890 - * We don't invoke the callback compounds unless it is the last 891 - * request. 848 + * Invoke callback for every part of the compound chain 849 + * to calculate credits properly. Wake up this thread only when 850 + * the last element is received. 
892 851 */ 893 852 if (i < num_rqst - 1) 894 - midQ[i]->callback = cifs_noop_callback; 853 + midQ[i]->callback = cifs_compound_callback; 854 + else 855 + midQ[i]->callback = cifs_compound_last_callback; 895 856 } 896 857 cifs_in_send_inc(ses->server); 897 858 rc = smb_send_rqst(ses->server, num_rqst, rqst, flags); ··· 911 860 912 861 mutex_unlock(&ses->server->srv_mutex); 913 862 914 - if (rc < 0) 863 + if (rc < 0) { 864 + /* Sending failed for some reason - return credits back */ 865 + for (i = 0; i < num_rqst; i++) 866 + add_credits(ses->server, credits[i], optype); 915 867 goto out; 868 + } 869 + 870 + /* 871 + * At this point the request is passed to the network stack - we assume 872 + * that any credits taken from the server structure on the client have 873 + * been spent and we can't return them back. Once we receive responses 874 + * we will collect credits granted by the server in the mid callbacks 875 + * and add those credits to the server structure. 876 + */ 916 877 917 878 /* 918 879 * Compounding is never used during session establish. 
··· 938 875 939 876 for (i = 0; i < num_rqst; i++) { 940 877 rc = wait_for_response(ses->server, midQ[i]); 941 - if (rc != 0) { 878 + if (rc != 0) 879 + break; 880 + } 881 + if (rc != 0) { 882 + for (; i < num_rqst; i++) { 942 883 cifs_dbg(VFS, "Cancelling wait for mid %llu cmd: %d\n", 943 884 midQ[i]->mid, le16_to_cpu(midQ[i]->command)); 944 885 send_cancel(ses->server, &rqst[i], midQ[i]); 945 886 spin_lock(&GlobalMid_Lock); 946 887 if (midQ[i]->mid_state == MID_REQUEST_SUBMITTED) { 947 888 midQ[i]->mid_flags |= MID_WAIT_CANCELLED; 948 - midQ[i]->callback = DeleteMidQEntry; 949 - spin_unlock(&GlobalMid_Lock); 950 - add_credits(ses->server, 1, optype); 951 - return rc; 889 + midQ[i]->callback = cifs_cancelled_callback; 890 + cancelled_mid[i] = true; 891 + credits[i] = 0; 952 892 } 953 893 spin_unlock(&GlobalMid_Lock); 954 894 } 955 895 } 956 - 957 - for (i = 0; i < num_rqst; i++) 958 - if (midQ[i]->resp_buf) 959 - credits += ses->server->ops->get_credits(midQ[i]); 960 - if (!credits) 961 - credits = 1; 962 896 963 897 for (i = 0; i < num_rqst; i++) { 964 898 if (rc < 0) ··· 963 903 964 904 rc = cifs_sync_mid_result(midQ[i], ses->server); 965 905 if (rc != 0) { 966 - add_credits(ses->server, credits, optype); 967 - return rc; 906 + /* mark this mid as cancelled to not free it below */ 907 + cancelled_mid[i] = true; 908 + goto out; 968 909 } 969 910 970 911 if (!midQ[i]->resp_buf || ··· 1012 951 * This is prevented above by using a noop callback that will not 1013 952 * wake this thread except for the very last PDU. 1014 953 */ 1015 - for (i = 0; i < num_rqst; i++) 1016 - cifs_delete_mid(midQ[i]); 1017 - add_credits(ses->server, credits, optype); 954 + for (i = 0; i < num_rqst; i++) { 955 + if (!cancelled_mid[i]) 956 + cifs_delete_mid(midQ[i]); 957 + } 1018 958 1019 959 return rc; 1020 960 }
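The transport.c changes above replace a single up-front reservation with one credit per request in the compound chain, rolled back credit-by-credit if any acquisition fails, matching how the credits were obtained. A minimal userspace sketch of that take-then-rollback pattern (the names and the fixed-size pool here are hypothetical, not the CIFS API):

```c
#include <assert.h>

/* Hypothetical credit pool; the kernel tracks this in server state. */
static int server_credits = 3;

static int wait_for_free_credit(void)
{
	if (server_credits > 0) {
		server_credits--;
		return 0;
	}
	return -1; /* the real code would wait or time out here */
}

static void add_credit(void)
{
	server_credits++;
}

/*
 * Obtain one credit per request in the chain. On failure, return the
 * credits already taken one at a time, the same way they were obtained,
 * so the accounting of in-flight requests stays balanced.
 */
static int reserve_compound_credits(int num_rqst)
{
	int i, j;

	for (i = 0; i < num_rqst; i++) {
		if (wait_for_free_credit() != 0) {
			for (j = 0; j < i; j++)
				add_credit();
			return -1;
		}
	}
	return 0;
}
```

With a pool of 3, reserving a 2-request chain succeeds and leaves 1 credit; a second 2-request chain fails partway and rolls its partial reservation back, leaving the pool unchanged.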
+33 -28
fs/hugetlbfs/inode.c
··· 383 383 * truncation is indicated by end of range being LLONG_MAX 384 384 * In this case, we first scan the range and release found pages. 385 385 * After releasing pages, hugetlb_unreserve_pages cleans up region/reserv 386 - * maps and global counts. 386 + * maps and global counts. Page faults can not race with truncation 387 + * in this routine. hugetlb_no_page() prevents page faults in the 388 + * truncated range. It checks i_size before allocation, and again after 389 + * with the page table lock for the page held. The same lock must be 390 + * acquired to unmap a page. 387 391 * hole punch is indicated if end is not LLONG_MAX 388 392 * In the hole punch case we scan the range and release found pages. 389 393 * Only when releasing a page is the associated region/reserv map 390 394 * deleted. The region/reserv map for ranges without associated 391 - * pages are not modified. 392 - * 393 - * Callers of this routine must hold the i_mmap_rwsem in write mode to prevent 394 - * races with page faults. 395 - * 395 + * pages are not modified. Page faults can race with hole punch. 396 + * This is indicated if we find a mapped page. 396 397 * Note: If the passed end of range value is beyond the end of file, but 397 398 * not LLONG_MAX this routine still performs a hole punch operation. 398 399 */ ··· 423 422 424 423 for (i = 0; i < pagevec_count(&pvec); ++i) { 425 424 struct page *page = pvec.pages[i]; 425 + u32 hash; 426 426 427 427 index = page->index; 428 + hash = hugetlb_fault_mutex_hash(h, current->mm, 429 + &pseudo_vma, 430 + mapping, index, 0); 431 + mutex_lock(&hugetlb_fault_mutex_table[hash]); 432 + 428 433 /* 429 - * A mapped page is impossible as callers should unmap 430 - * all references before calling. And, i_mmap_rwsem 431 - * prevents the creation of additional mappings. 434 + * If page is mapped, it was faulted in after being 435 + * unmapped in caller. Unmap (again) now after taking 436 + * the fault mutex. 
The mutex will prevent faults 437 + * until we finish removing the page. 438 + * 439 + * This race can only happen in the hole punch case. 440 + * Getting here in a truncate operation is a bug. 432 441 */ 433 - VM_BUG_ON(page_mapped(page)); 442 + if (unlikely(page_mapped(page))) { 443 + BUG_ON(truncate_op); 444 + 445 + i_mmap_lock_write(mapping); 446 + hugetlb_vmdelete_list(&mapping->i_mmap, 447 + index * pages_per_huge_page(h), 448 + (index + 1) * pages_per_huge_page(h)); 449 + i_mmap_unlock_write(mapping); 450 + } 434 451 435 452 lock_page(page); 436 453 /* ··· 470 451 } 471 452 472 453 unlock_page(page); 454 + mutex_unlock(&hugetlb_fault_mutex_table[hash]); 473 455 } 474 456 huge_pagevec_release(&pvec); 475 457 cond_resched(); ··· 482 462 483 463 static void hugetlbfs_evict_inode(struct inode *inode) 484 464 { 485 - struct address_space *mapping = inode->i_mapping; 486 465 struct resv_map *resv_map; 487 466 488 - /* 489 - * The vfs layer guarantees that there are no other users of this 490 - * inode. Therefore, it would be safe to call remove_inode_hugepages 491 - * without holding i_mmap_rwsem. We acquire and hold here to be 492 - * consistent with other callers. Since there will be no contention 493 - * on the semaphore, overhead is negligible. 
494 - */ 495 - i_mmap_lock_write(mapping); 496 467 remove_inode_hugepages(inode, 0, LLONG_MAX); 497 - i_mmap_unlock_write(mapping); 498 - 499 468 resv_map = (struct resv_map *)inode->i_mapping->private_data; 500 469 /* root inode doesn't have the resv_map, so we should check it */ 501 470 if (resv_map) ··· 505 496 i_mmap_lock_write(mapping); 506 497 if (!RB_EMPTY_ROOT(&mapping->i_mmap.rb_root)) 507 498 hugetlb_vmdelete_list(&mapping->i_mmap, pgoff, 0); 508 - remove_inode_hugepages(inode, offset, LLONG_MAX); 509 499 i_mmap_unlock_write(mapping); 500 + remove_inode_hugepages(inode, offset, LLONG_MAX); 510 501 return 0; 511 502 } 512 503 ··· 540 531 hugetlb_vmdelete_list(&mapping->i_mmap, 541 532 hole_start >> PAGE_SHIFT, 542 533 hole_end >> PAGE_SHIFT); 543 - remove_inode_hugepages(inode, hole_start, hole_end); 544 534 i_mmap_unlock_write(mapping); 535 + remove_inode_hugepages(inode, hole_start, hole_end); 545 536 inode_unlock(inode); 546 537 } 547 538 ··· 624 615 /* addr is the offset within the file (zero based) */ 625 616 addr = index * hpage_size; 626 617 627 - /* 628 - * fault mutex taken here, protects against fault path 629 - * and hole punch. inode_lock previously taken protects 630 - * against truncation. 631 - */ 618 + /* mutex taken here, fault path and hole punch */ 632 619 hash = hugetlb_fault_mutex_hash(h, mm, &pseudo_vma, mapping, 633 620 index, addr); 634 621 mutex_lock(&hugetlb_fault_mutex_table[hash]);
+2 -1
fs/sysfs/dir.c
··· 43 43 kuid_t uid; 44 44 kgid_t gid; 45 45 46 - BUG_ON(!kobj); 46 + if (WARN_ON(!kobj)) 47 + return -EINVAL; 47 48 48 49 if (kobj->parent) 49 50 parent = kobj->parent->sd;
+4 -2
fs/sysfs/file.c
··· 325 325 kuid_t uid; 326 326 kgid_t gid; 327 327 328 - BUG_ON(!kobj || !kobj->sd || !attr); 328 + if (WARN_ON(!kobj || !kobj->sd || !attr)) 329 + return -EINVAL; 329 330 330 331 kobject_get_ownership(kobj, &uid, &gid); 331 332 return sysfs_add_file_mode_ns(kobj->sd, attr, false, attr->mode, ··· 538 537 kuid_t uid; 539 538 kgid_t gid; 540 539 541 - BUG_ON(!kobj || !kobj->sd || !attr); 540 + if (WARN_ON(!kobj || !kobj->sd || !attr)) 541 + return -EINVAL; 542 542 543 543 kobject_get_ownership(kobj, &uid, &gid); 544 544 return sysfs_add_file_mode_ns(kobj->sd, &attr->attr, true,
+2 -1
fs/sysfs/group.c
··· 112 112 kgid_t gid; 113 113 int error; 114 114 115 - BUG_ON(!kobj || (!update && !kobj->sd)); 115 + if (WARN_ON(!kobj || (!update && !kobj->sd))) 116 + return -EINVAL; 116 117 117 118 /* Updates may happen before the object has been instantiated */ 118 119 if (unlikely(update && !kobj->sd))
+2 -1
fs/sysfs/symlink.c
··· 23 23 { 24 24 struct kernfs_node *kn, *target = NULL; 25 25 26 - BUG_ON(!name || !parent); 26 + if (WARN_ON(!name || !parent)) 27 + return -EINVAL; 27 28 28 29 /* 29 30 * We don't own @target_kobj and it may be removed at any time.
+7
include/drm/drm_dp_helper.h
··· 1365 1365 * to 16 bits. So will give a constant value (0x8000) for compatability. 1366 1366 */ 1367 1367 DP_DPCD_QUIRK_CONSTANT_N, 1368 + /** 1369 + * @DP_DPCD_QUIRK_NO_PSR: 1370 + * 1371 + * The device does not support PSR even if it reports that it does, or 1372 + * the driver still needs to implement proper handling for such a device. 1373 + */ 1374 + DP_DPCD_QUIRK_NO_PSR, 1368 1375 }; 1369 1376 1370 1377 /**
+2 -1
include/drm/drm_dp_mst_helper.h
··· 616 616 struct drm_dp_mst_topology_mgr *mgr); 617 617 618 618 void drm_dp_mst_topology_mgr_suspend(struct drm_dp_mst_topology_mgr *mgr); 619 - int drm_dp_mst_topology_mgr_resume(struct drm_dp_mst_topology_mgr *mgr); 619 + int __must_check 620 + drm_dp_mst_topology_mgr_resume(struct drm_dp_mst_topology_mgr *mgr); 620 621 struct drm_dp_mst_topology_state *drm_atomic_get_mst_topology_state(struct drm_atomic_state *state, 621 622 struct drm_dp_mst_topology_mgr *mgr); 622 623 int drm_dp_atomic_find_vcpi_slots(struct drm_atomic_state *state,
+4 -2
include/linux/ceph/libceph.h
··· 35 35 #define CEPH_OPT_NOMSGAUTH (1<<4) /* don't require msg signing feat */ 36 36 #define CEPH_OPT_TCP_NODELAY (1<<5) /* TCP_NODELAY on TCP sockets */ 37 37 #define CEPH_OPT_NOMSGSIGN (1<<6) /* don't sign msgs */ 38 + #define CEPH_OPT_ABORT_ON_FULL (1<<7) /* abort w/ ENOSPC when full */ 38 39 39 40 #define CEPH_OPT_DEFAULT (CEPH_OPT_TCP_NODELAY) 40 41 ··· 54 53 unsigned long osd_request_timeout; /* jiffies */ 55 54 56 55 /* 57 - * any type that can't be simply compared or doesn't need need 56 + * any type that can't be simply compared or doesn't need 58 57 * to be compared should go beyond this point, 59 58 * ceph_compare_options() should be updated accordingly 60 59 */ ··· 282 281 const char *dev_name, const char *dev_name_end, 283 282 int (*parse_extra_token)(char *c, void *private), 284 283 void *private); 285 - int ceph_print_client_options(struct seq_file *m, struct ceph_client *client); 284 + int ceph_print_client_options(struct seq_file *m, struct ceph_client *client, 285 + bool show_all); 286 286 extern void ceph_destroy_options(struct ceph_options *opt); 287 287 extern int ceph_compare_options(struct ceph_options *new_opt, 288 288 struct ceph_client *client);
-1
include/linux/ceph/osd_client.h
··· 354 354 struct rb_root linger_map_checks; 355 355 atomic_t num_requests; 356 356 atomic_t num_homeless; 357 - bool abort_on_full; /* abort w/ ENOSPC when full */ 358 357 int abort_err; 359 358 struct delayed_work timeout_work; 360 359 struct delayed_work osds_timeout_work;
+1 -1
include/linux/compiler-gcc.h
··· 68 68 */ 69 69 #define uninitialized_var(x) x = x 70 70 71 - #ifdef RETPOLINE 71 + #ifdef CONFIG_RETPOLINE 72 72 #define __noretpoline __attribute__((__indirect_branch__("keep"))) 73 73 #endif 74 74
-9
include/linux/dma-mapping.h
··· 717 717 } 718 718 #endif 719 719 720 - /* 721 - * Please always use dma_alloc_coherent instead as it already zeroes the memory! 722 - */ 723 - static inline void *dma_zalloc_coherent(struct device *dev, size_t size, 724 - dma_addr_t *dma_handle, gfp_t flag) 725 - { 726 - return dma_alloc_coherent(dev, size, dma_handle, flag); 727 - } 728 - 729 720 static inline int dma_get_cache_alignment(void) 730 721 { 731 722 #ifdef ARCH_DMA_MINALIGN
+6
include/linux/mmzone.h
··· 520 520 PGDAT_RECLAIM_LOCKED, /* prevents concurrent reclaim */ 521 521 }; 522 522 523 + enum zone_flags { 524 + ZONE_BOOSTED_WATERMARK, /* zone recently boosted watermarks. 525 + * Cleared when kswapd is woken. 526 + */ 527 + }; 528 + 523 529 static inline unsigned long zone_managed_pages(struct zone *zone) 524 530 { 525 531 return (unsigned long)atomic_long_read(&zone->managed_pages);
+1 -1
include/linux/module.h
··· 828 828 static inline void module_bug_cleanup(struct module *mod) {} 829 829 #endif /* CONFIG_GENERIC_BUG */ 830 830 831 - #ifdef RETPOLINE 831 + #ifdef CONFIG_RETPOLINE 832 832 extern bool retpoline_module_ok(bool has_retpoline); 833 833 #else 834 834 static inline bool retpoline_module_ok(bool has_retpoline)
+1 -1
include/linux/pci-dma-compat.h
··· 24 24 pci_zalloc_consistent(struct pci_dev *hwdev, size_t size, 25 25 dma_addr_t *dma_handle) 26 26 { 27 - return dma_zalloc_coherent(&hwdev->dev, size, dma_handle, GFP_ATOMIC); 27 + return dma_alloc_coherent(&hwdev->dev, size, dma_handle, GFP_ATOMIC); 28 28 } 29 29 30 30 static inline void
+1
include/linux/phy/phy.h
··· 42 42 PHY_MODE_PCIE, 43 43 PHY_MODE_ETHERNET, 44 44 PHY_MODE_MIPI_DPHY, 45 + PHY_MODE_SATA 45 46 }; 46 47 47 48 /**
+5
include/linux/pm_opp.h
··· 108 108 int dev_pm_opp_add(struct device *dev, unsigned long freq, 109 109 unsigned long u_volt); 110 110 void dev_pm_opp_remove(struct device *dev, unsigned long freq); 111 + void dev_pm_opp_remove_all_dynamic(struct device *dev); 111 112 112 113 int dev_pm_opp_enable(struct device *dev, unsigned long freq); 113 114 ··· 215 214 } 216 215 217 216 static inline void dev_pm_opp_remove(struct device *dev, unsigned long freq) 217 + { 218 + } 219 + 220 + static inline void dev_pm_opp_remove_all_dynamic(struct device *dev) 218 221 { 219 222 } 220 223
+1
include/linux/qcom_scm.h
··· 13 13 #ifndef __QCOM_SCM_H 14 14 #define __QCOM_SCM_H 15 15 16 + #include <linux/err.h> 16 17 #include <linux/types.h> 17 18 #include <linux/cpumask.h> 18 19
+11 -4
include/linux/reset.h
··· 32 32 struct reset_control *of_reset_control_array_get(struct device_node *np, 33 33 bool shared, bool optional); 34 34 35 + int reset_control_get_count(struct device *dev); 36 + 35 37 #else 36 38 37 39 static inline int reset_control_reset(struct reset_control *rstc) ··· 99 97 return optional ? NULL : ERR_PTR(-ENOTSUPP); 100 98 } 101 99 100 + static inline int reset_control_get_count(struct device *dev) 101 + { 102 + return -ENOENT; 103 + } 104 + 102 105 #endif /* CONFIG_RESET_CONTROLLER */ 103 106 104 107 static inline int __must_check device_reset(struct device *dev) ··· 145 138 * 146 139 * Returns a struct reset_control or IS_ERR() condition containing errno. 147 140 * This function is intended for use with reset-controls which are shared 148 - * between hardware-blocks. 141 + * between hardware blocks. 149 142 * 150 143 * When a reset-control is shared, the behavior of reset_control_assert / 151 144 * deassert is changed, the reset-core will keep track of a deassert_count ··· 194 187 } 195 188 196 189 /** 197 - * of_reset_control_get_shared - Lookup and obtain an shared reference 190 + * of_reset_control_get_shared - Lookup and obtain a shared reference 198 191 * to a reset controller. 199 192 * @node: device to be reset by the controller 200 193 * @id: reset line name ··· 236 229 } 237 230 238 231 /** 239 - * of_reset_control_get_shared_by_index - Lookup and obtain an shared 232 + * of_reset_control_get_shared_by_index - Lookup and obtain a shared 240 233 * reference to a reset controller 241 234 * by index. 242 235 * @node: device to be reset by the controller ··· 329 322 330 323 /** 331 324 * devm_reset_control_get_shared_by_index - resource managed 332 - * reset_control_get_shared 325 + * reset_control_get_shared 333 326 * @dev: device to be reset by the controller 334 327 * @index: index of the reset controller 335 328 *
+1 -1
include/linux/sched.h
··· 995 995 /* cg_list protected by css_set_lock and tsk->alloc_lock: */ 996 996 struct list_head cg_list; 997 997 #endif 998 - #ifdef CONFIG_RESCTRL 998 + #ifdef CONFIG_X86_RESCTRL 999 999 u32 closid; 1000 1000 u32 rmid; 1001 1001 #endif
+2
include/uapi/linux/audit.h
··· 400 400 /* do not define AUDIT_ARCH_PPCLE since it is not supported by audit */ 401 401 #define AUDIT_ARCH_PPC64 (EM_PPC64|__AUDIT_ARCH_64BIT) 402 402 #define AUDIT_ARCH_PPC64LE (EM_PPC64|__AUDIT_ARCH_64BIT|__AUDIT_ARCH_LE) 403 + #define AUDIT_ARCH_RISCV32 (EM_RISCV|__AUDIT_ARCH_LE) 404 + #define AUDIT_ARCH_RISCV64 (EM_RISCV|__AUDIT_ARCH_64BIT|__AUDIT_ARCH_LE) 403 405 #define AUDIT_ARCH_S390 (EM_S390) 404 406 #define AUDIT_ARCH_S390X (EM_S390|__AUDIT_ARCH_64BIT) 405 407 #define AUDIT_ARCH_SH (EM_SH)
+12 -2
kernel/fork.c
··· 217 217 memset(s->addr, 0, THREAD_SIZE); 218 218 219 219 tsk->stack_vm_area = s; 220 + tsk->stack = s->addr; 220 221 return s->addr; 221 222 } 222 223 ··· 1834 1833 1835 1834 posix_cpu_timers_init(p); 1836 1835 1837 - p->start_time = ktime_get_ns(); 1838 - p->real_start_time = ktime_get_boot_ns(); 1839 1836 p->io_context = NULL; 1840 1837 audit_set_context(p, NULL); 1841 1838 cgroup_fork(p); ··· 1998 1999 retval = cgroup_can_fork(p); 1999 2000 if (retval) 2000 2001 goto bad_fork_free_pid; 2002 + 2003 + /* 2004 + * From this point on we must avoid any synchronous user-space 2005 + * communication until we take the tasklist-lock. In particular, we do 2006 + * not want user-space to be able to predict the process start-time by 2007 + * stalling fork(2) after we recorded the start_time but before it is 2008 + * visible to the system. 2009 + */ 2010 + 2011 + p->start_time = ktime_get_ns(); 2012 + p->real_start_time = ktime_get_boot_ns(); 2001 2013 2002 2014 /* 2003 2015 * Make it visible to the rest of the system, but dont wake it up yet.
+2 -1
kernel/sys.c
··· 1207 1207 /* 1208 1208 * Work around broken programs that cannot handle "Linux 3.0". 1209 1209 * Instead we map 3.x to 2.6.40+x, so e.g. 3.0 would be 2.6.40 1210 - * And we map 4.x to 2.6.60+x, so 4.0 would be 2.6.60. 1210 + * And we map 4.x and later versions to 2.6.60+x, so 4.0/5.0/6.0/... would be 1211 + * 2.6.60. 1211 1212 */ 1212 1213 static int override_release(char __user *release, size_t len) 1213 1214 {
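The version remapping described in that comment reduces to a small pure function. An illustrative sketch of the mapping only (`uname26_minor` is a hypothetical name, not the kernel's `override_release()` implementation):

```c
#include <assert.h>

/*
 * Map a kernel version to the minor number reported under the UNAME26
 * personality, per the comment above: 3.x is shown as 2.6.(40 + x) and
 * 4.x or any later major as 2.6.(60 + x), so 4.0, 5.0, 6.0, ... all
 * read as 2.6.60.
 */
static int uname26_minor(int major, int minor)
{
	if (major == 3)
		return 40 + minor;
	if (major >= 4)
		return 60 + minor;
	return minor; /* 2.6.x and earlier are reported unchanged */
}
```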
+24 -57
mm/hugetlb.c
··· 3238 3238 struct page *ptepage; 3239 3239 unsigned long addr; 3240 3240 int cow; 3241 - struct address_space *mapping = vma->vm_file->f_mapping; 3242 3241 struct hstate *h = hstate_vma(vma); 3243 3242 unsigned long sz = huge_page_size(h); 3244 3243 struct mmu_notifier_range range; ··· 3249 3250 mmu_notifier_range_init(&range, src, vma->vm_start, 3250 3251 vma->vm_end); 3251 3252 mmu_notifier_invalidate_range_start(&range); 3252 - } else { 3253 - /* 3254 - * For shared mappings i_mmap_rwsem must be held to call 3255 - * huge_pte_alloc, otherwise the returned ptep could go 3256 - * away if part of a shared pmd and another thread calls 3257 - * huge_pmd_unshare. 3258 - */ 3259 - i_mmap_lock_read(mapping); 3260 3253 } 3261 3254 3262 3255 for (addr = vma->vm_start; addr < vma->vm_end; addr += sz) { 3263 3256 spinlock_t *src_ptl, *dst_ptl; 3264 - 3265 3257 src_pte = huge_pte_offset(src, addr, sz); 3266 3258 if (!src_pte) 3267 3259 continue; 3268 - 3269 3260 dst_pte = huge_pte_alloc(dst, addr, sz); 3270 3261 if (!dst_pte) { 3271 3262 ret = -ENOMEM; ··· 3326 3337 3327 3338 if (cow) 3328 3339 mmu_notifier_invalidate_range_end(&range); 3329 - else 3330 - i_mmap_unlock_read(mapping); 3331 3340 3332 3341 return ret; 3333 3342 } ··· 3742 3755 } 3743 3756 3744 3757 /* 3745 - * We can not race with truncation due to holding i_mmap_rwsem. 3746 - * Check once here for faults beyond end of file. 3758 + * Use page lock to guard against racing truncation 3759 + * before we get page_table_lock. 
3747 3760 */ 3748 - size = i_size_read(mapping->host) >> huge_page_shift(h); 3749 - if (idx >= size) 3750 - goto out; 3751 - 3752 3761 retry: 3753 3762 page = find_lock_page(mapping, idx); 3754 3763 if (!page) { 3764 + size = i_size_read(mapping->host) >> huge_page_shift(h); 3765 + if (idx >= size) 3766 + goto out; 3767 + 3755 3768 /* 3756 3769 * Check for page in userfault range 3757 3770 */ ··· 3771 3784 }; 3772 3785 3773 3786 /* 3774 - * hugetlb_fault_mutex and i_mmap_rwsem must be 3775 - * dropped before handling userfault. Reacquire 3776 - * after handling fault to make calling code simpler. 3787 + * hugetlb_fault_mutex must be dropped before 3788 + * handling userfault. Reacquire after handling 3789 + * fault to make calling code simpler. 3777 3790 */ 3778 3791 hash = hugetlb_fault_mutex_hash(h, mm, vma, mapping, 3779 3792 idx, haddr); 3780 3793 mutex_unlock(&hugetlb_fault_mutex_table[hash]); 3781 - i_mmap_unlock_read(mapping); 3782 - 3783 3794 ret = handle_userfault(&vmf, VM_UFFD_MISSING); 3784 - 3785 - i_mmap_lock_read(mapping); 3786 3795 mutex_lock(&hugetlb_fault_mutex_table[hash]); 3787 3796 goto out; 3788 3797 } ··· 3837 3854 } 3838 3855 3839 3856 ptl = huge_pte_lock(h, mm, ptep); 3857 + size = i_size_read(mapping->host) >> huge_page_shift(h); 3858 + if (idx >= size) 3859 + goto backout; 3840 3860 3841 3861 ret = 0; 3842 3862 if (!huge_pte_none(huge_ptep_get(ptep))) ··· 3926 3940 3927 3941 ptep = huge_pte_offset(mm, haddr, huge_page_size(h)); 3928 3942 if (ptep) { 3929 - /* 3930 - * Since we hold no locks, ptep could be stale. That is 3931 - * OK as we are only making decisions based on content and 3932 - * not actually modifying content here. 
3933 - */ 3934 3943 entry = huge_ptep_get(ptep); 3935 3944 if (unlikely(is_hugetlb_entry_migration(entry))) { 3936 3945 migration_entry_wait_huge(vma, mm, ptep); ··· 3933 3952 } else if (unlikely(is_hugetlb_entry_hwpoisoned(entry))) 3934 3953 return VM_FAULT_HWPOISON_LARGE | 3935 3954 VM_FAULT_SET_HINDEX(hstate_index(h)); 3955 + } else { 3956 + ptep = huge_pte_alloc(mm, haddr, huge_page_size(h)); 3957 + if (!ptep) 3958 + return VM_FAULT_OOM; 3936 3959 } 3937 3960 3938 - /* 3939 - * Acquire i_mmap_rwsem before calling huge_pte_alloc and hold 3940 - * until finished with ptep. This serves two purposes: 3941 - * 1) It prevents huge_pmd_unshare from being called elsewhere 3942 - * and making the ptep no longer valid. 3943 - * 2) It synchronizes us with file truncation. 3944 - * 3945 - * ptep could have already be assigned via huge_pte_offset. That 3946 - * is OK, as huge_pte_alloc will return the same value unless 3947 - * something changed. 3948 - */ 3949 3961 mapping = vma->vm_file->f_mapping; 3950 - i_mmap_lock_read(mapping); 3951 - ptep = huge_pte_alloc(mm, haddr, huge_page_size(h)); 3952 - if (!ptep) { 3953 - i_mmap_unlock_read(mapping); 3954 - return VM_FAULT_OOM; 3955 - } 3962 + idx = vma_hugecache_offset(h, vma, haddr); 3956 3963 3957 3964 /* 3958 3965 * Serialize hugepage allocation and instantiation, so that we don't 3959 3966 * get spurious allocation failures if two CPUs race to instantiate 3960 3967 * the same page in the page cache. 3961 3968 */ 3962 - idx = vma_hugecache_offset(h, vma, haddr); 3963 3969 hash = hugetlb_fault_mutex_hash(h, mm, vma, mapping, idx, haddr); 3964 3970 mutex_lock(&hugetlb_fault_mutex_table[hash]); 3965 3971 ··· 4034 4066 } 4035 4067 out_mutex: 4036 4068 mutex_unlock(&hugetlb_fault_mutex_table[hash]); 4037 - i_mmap_unlock_read(mapping); 4038 4069 /* 4039 4070 * Generally it's safe to hold refcount during waiting page lock. 
But 4040 4071 * here we just wait to defer the next page fault to avoid busy loop and ··· 4638 4671 * Search for a shareable pmd page for hugetlb. In any case calls pmd_alloc() 4639 4672 * and returns the corresponding pte. While this is not necessary for the 4640 4673 * !shared pmd case because we can allocate the pmd later as well, it makes the 4641 - * code much cleaner. 4642 - * 4643 - * This routine must be called with i_mmap_rwsem held in at least read mode. 4644 - * For hugetlbfs, this prevents removal of any page table entries associated 4645 - * with the address space. This is important as we are setting up sharing 4646 - * based on existing page table entries (mappings). 4674 + * code much cleaner. pmd allocation is essential for the shared case because 4675 + * pud has to be populated inside the same i_mmap_rwsem section - otherwise 4676 + * racing tasks could either miss the sharing (see huge_pte_offset) or select a 4677 + * bad pmd for sharing. 4647 4678 */ 4648 4679 pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud) 4649 4680 { ··· 4658 4693 if (!vma_shareable(vma, addr)) 4659 4694 return (pte_t *)pmd_alloc(mm, pud, addr); 4660 4695 4696 + i_mmap_lock_write(mapping); 4661 4697 vma_interval_tree_foreach(svma, &mapping->i_mmap, idx, idx) { 4662 4698 if (svma == vma) 4663 4699 continue; ··· 4688 4722 spin_unlock(ptl); 4689 4723 out: 4690 4724 pte = (pte_t *)pmd_alloc(mm, pud, addr); 4725 + i_mmap_unlock_write(mapping); 4691 4726 return pte; 4692 4727 } 4693 4728 ··· 4699 4732 * indicated by page_count > 1, unmap is achieved by clearing pud and 4700 4733 * decrementing the ref count. If count == 1, the pte page is not shared. 4701 4734 * 4702 - * Called with page table lock held and i_mmap_rwsem held in write mode. 4735 + * called with page table lock held. 4703 4736 * 4704 4737 * returns: 1 successfully unmapped a shared pte page 4705 4738 * 0 the underlying pte page is not shared, or it is the last user
+44 -23
mm/kasan/common.c
··· 298 298 return; 299 299 } 300 300 301 - cache->align = round_up(cache->align, KASAN_SHADOW_SCALE_SIZE); 302 - 303 301 *flags |= SLAB_KASAN; 304 302 } 305 303 ··· 347 349 } 348 350 349 351 /* 350 - * Since it's desirable to only call object contructors once during slab 351 - * allocation, we preassign tags to all such objects. Also preassign tags for 352 - * SLAB_TYPESAFE_BY_RCU slabs to avoid use-after-free reports. 353 - * For SLAB allocator we can't preassign tags randomly since the freelist is 354 - * stored as an array of indexes instead of a linked list. Assign tags based 355 - * on objects indexes, so that objects that are next to each other get 356 - * different tags. 357 - * After a tag is assigned, the object always gets allocated with the same tag. 358 - * The reason is that we can't change tags for objects with constructors on 359 - * reallocation (even for non-SLAB_TYPESAFE_BY_RCU), because the constructor 360 - * code can save the pointer to the object somewhere (e.g. in the object 361 - * itself). Then if we retag it, the old saved pointer will become invalid. 352 + * This function assigns a tag to an object considering the following: 353 + * 1. A cache might have a constructor, which might save a pointer to a slab 354 + * object somewhere (e.g. in the object itself). We preassign a tag for 355 + * each object in caches with constructors during slab creation and reuse 356 + * the same tag each time a particular object is allocated. 357 + * 2. A cache might be SLAB_TYPESAFE_BY_RCU, which means objects can be 358 + * accessed after being freed. We preassign tags for objects in these 359 + * caches as well. 360 + * 3. For SLAB allocator we can't preassign tags randomly since the freelist 361 + * is stored as an array of indexes instead of a linked list. Assign tags 362 + * based on objects indexes, so that objects that are next to each other 363 + * get different tags. 
362 364 */ 363 - static u8 assign_tag(struct kmem_cache *cache, const void *object, bool new) 365 + static u8 assign_tag(struct kmem_cache *cache, const void *object, 366 + bool init, bool krealloc) 364 367 { 365 - if (!cache->ctor && !(cache->flags & SLAB_TYPESAFE_BY_RCU)) 366 - return new ? KASAN_TAG_KERNEL : random_tag(); 368 + /* Reuse the same tag for krealloc'ed objects. */ 369 + if (krealloc) 370 + return get_tag(object); 367 371 372 + /* 373 + * If the cache neither has a constructor nor has SLAB_TYPESAFE_BY_RCU 374 + * set, assign a tag when the object is being allocated (init == false). 375 + */ 376 + if (!cache->ctor && !(cache->flags & SLAB_TYPESAFE_BY_RCU)) 377 + return init ? KASAN_TAG_KERNEL : random_tag(); 378 + 379 + /* For caches that either have a constructor or SLAB_TYPESAFE_BY_RCU: */ 368 380 #ifdef CONFIG_SLAB 381 + /* For SLAB assign tags based on the object index in the freelist. */ 369 382 return (u8)obj_to_index(cache, virt_to_page(object), (void *)object); 370 383 #else 371 - return new ? random_tag() : get_tag(object); 384 + /* 385 + * For SLUB assign a random tag during slab creation, otherwise reuse 386 + * the already assigned tag. 387 + */ 388 + return init ? 
random_tag() : get_tag(object); 372 389 #endif 373 390 } 374 391 ··· 399 386 __memset(alloc_info, 0, sizeof(*alloc_info)); 400 387 401 388 if (IS_ENABLED(CONFIG_KASAN_SW_TAGS)) 402 - object = set_tag(object, assign_tag(cache, object, true)); 389 + object = set_tag(object, 390 + assign_tag(cache, object, true, false)); 403 391 404 392 return (void *)object; 405 393 } ··· 466 452 return __kasan_slab_free(cache, object, ip, true); 467 453 } 468 454 469 - void * __must_check kasan_kmalloc(struct kmem_cache *cache, const void *object, 470 - size_t size, gfp_t flags) 455 + static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object, 456 + size_t size, gfp_t flags, bool krealloc) 471 457 { 472 458 unsigned long redzone_start; 473 459 unsigned long redzone_end; ··· 485 471 KASAN_SHADOW_SCALE_SIZE); 486 472 487 473 if (IS_ENABLED(CONFIG_KASAN_SW_TAGS)) 488 - tag = assign_tag(cache, object, false); 474 + tag = assign_tag(cache, object, false, krealloc); 489 475 490 476 /* Tag is ignored in set_tag without CONFIG_KASAN_SW_TAGS */ 491 477 kasan_unpoison_shadow(set_tag(object, tag), size); ··· 496 482 set_track(&get_alloc_info(cache, object)->alloc_track, flags); 497 483 498 484 return set_tag(object, tag); 485 + } 486 + 487 + void * __must_check kasan_kmalloc(struct kmem_cache *cache, const void *object, 488 + size_t size, gfp_t flags) 489 + { 490 + return __kasan_kmalloc(cache, object, size, flags, false); 499 491 } 500 492 EXPORT_SYMBOL(kasan_kmalloc); 501 493 ··· 542 522 if (unlikely(!PageSlab(page))) 543 523 return kasan_kmalloc_large(object, size, flags); 544 524 else 545 - return kasan_kmalloc(page->slab_cache, object, size, flags); 525 + return __kasan_kmalloc(page->slab_cache, object, size, 526 + flags, true); 546 527 } 547 528 548 529 void kasan_poison_kfree(void *ptr, unsigned long ip)
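The tag-assignment policy spelled out in the reworked assign_tag() comment amounts to a small decision table. A hedged sketch of the SLUB-side logic (`pick_tag`, `TAG_KERNEL`, and the boolean parameters are illustrative stand-ins, not the KASAN API):

```c
#include <assert.h>
#include <stdbool.h>

#define TAG_KERNEL 0xff /* illustrative stand-in for KASAN_TAG_KERNEL */

/*
 * Decision table for the SLUB flavour of tag assignment as described
 * above:
 *  - krealloc reuses the object's existing tag;
 *  - caches with neither a constructor nor SLAB_TYPESAFE_BY_RCU get
 *    KASAN_TAG_KERNEL at slab creation (init) and a random tag at
 *    allocation;
 *  - other caches get a random tag at slab creation and keep it on
 *    every subsequent allocation.
 */
static unsigned char pick_tag(bool has_ctor, bool typesafe_by_rcu,
			      unsigned char cur_tag, unsigned char random_tag,
			      bool init, bool krealloc)
{
	if (krealloc)
		return cur_tag;
	if (!has_ctor && !typesafe_by_rcu)
		return init ? TAG_KERNEL : random_tag;
	return init ? random_tag : cur_tag;
}
```

The SLAB case differs only in deriving the tag from the object's freelist index instead of a random value, since SLAB's index-array freelist cannot tolerate random preassigned tags.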
+2 -14
mm/memory-failure.c
··· 966 966 enum ttu_flags ttu = TTU_IGNORE_MLOCK | TTU_IGNORE_ACCESS; 967 967 struct address_space *mapping; 968 968 LIST_HEAD(tokill); 969 - bool unmap_success = true; 969 + bool unmap_success; 970 970 int kill = 1, forcekill; 971 971 struct page *hpage = *hpagep; 972 972 bool mlocked = PageMlocked(hpage); ··· 1028 1028 if (kill) 1029 1029 collect_procs(hpage, &tokill, flags & MF_ACTION_REQUIRED); 1030 1030 1031 - if (!PageHuge(hpage)) { 1032 - unmap_success = try_to_unmap(hpage, ttu); 1033 - } else if (mapping) { 1034 - /* 1035 - * For hugetlb pages, try_to_unmap could potentially call 1036 - * huge_pmd_unshare. Because of this, take semaphore in 1037 - * write mode here and set TTU_RMAP_LOCKED to indicate we 1038 - * have taken the lock at this higer level. 1039 - */ 1040 - i_mmap_lock_write(mapping); 1041 - unmap_success = try_to_unmap(hpage, ttu|TTU_RMAP_LOCKED); 1042 - i_mmap_unlock_write(mapping); 1043 - } 1031 + unmap_success = try_to_unmap(hpage, ttu); 1044 1032 if (!unmap_success) 1045 1033 pr_err("Memory failure: %#lx: failed to unmap page (mapcount=%d)\n", 1046 1034 pfn, page_mapcount(hpage));
+24 -2
mm/memory.c
··· 2994 2994 struct vm_area_struct *vma = vmf->vma; 2995 2995 vm_fault_t ret; 2996 2996 2997 + /* 2998 + * Preallocate pte before we take page_lock because this might lead to 2999 + * deadlocks for memcg reclaim which waits for pages under writeback: 3000 + * lock_page(A) 3001 + * SetPageWriteback(A) 3002 + * unlock_page(A) 3003 + * lock_page(B) 3004 + * lock_page(B) 3005 + * pte_alloc_pne 3006 + * shrink_page_list 3007 + * wait_on_page_writeback(A) 3008 + * SetPageWriteback(B) 3009 + * unlock_page(B) 3010 + * # flush A, B to clear the writeback 3011 + */ 3012 + if (pmd_none(*vmf->pmd) && !vmf->prealloc_pte) { 3013 + vmf->prealloc_pte = pte_alloc_one(vmf->vma->vm_mm); 3014 + if (!vmf->prealloc_pte) 3015 + return VM_FAULT_OOM; 3016 + smp_wmb(); /* See comment in __pte_alloc() */ 3017 + } 3018 + 2997 3019 ret = vma->vm_ops->fault(vmf); 2998 3020 if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE | VM_FAULT_RETRY | 2999 3021 VM_FAULT_DONE_COW))) ··· 4099 4077 goto out; 4100 4078 4101 4079 if (range) { 4102 - range->start = address & PAGE_MASK; 4103 - range->end = range->start + PAGE_SIZE; 4080 + mmu_notifier_range_init(range, mm, address & PAGE_MASK, 4081 + (address & PAGE_MASK) + PAGE_SIZE); 4104 4082 mmu_notifier_invalidate_range_start(range); 4105 4083 } 4106 4084 ptep = pte_offset_map_lock(mm, pmd, address, ptlp);
+1 -12
mm/migrate.c
··· 1324 1324 goto put_anon; 1325 1325 1326 1326 if (page_mapped(hpage)) { 1327 - struct address_space *mapping = page_mapping(hpage); 1328 - 1329 - /* 1330 - * try_to_unmap could potentially call huge_pmd_unshare. 1331 - * Because of this, take semaphore in write mode here and 1332 - * set TTU_RMAP_LOCKED to let lower levels know we have 1333 - * taken the lock. 1334 - */ 1335 - i_mmap_lock_write(mapping); 1336 1327 try_to_unmap(hpage, 1337 - TTU_MIGRATION|TTU_IGNORE_MLOCK|TTU_IGNORE_ACCESS| 1338 - TTU_RMAP_LOCKED); 1339 - i_mmap_unlock_write(mapping); 1328 + TTU_MIGRATION|TTU_IGNORE_MLOCK|TTU_IGNORE_ACCESS); 1340 1329 page_was_mapped = 1; 1341 1330 } 1342 1331
+7 -1
mm/page_alloc.c
··· 2214 2214 */ 2215 2215 boost_watermark(zone); 2216 2216 if (alloc_flags & ALLOC_KSWAPD) 2217 - wakeup_kswapd(zone, 0, 0, zone_idx(zone)); 2217 + set_bit(ZONE_BOOSTED_WATERMARK, &zone->flags); 2218 2218 2219 2219 /* We are not allowed to try stealing from the whole block */ 2220 2220 if (!whole_block) ··· 3102 3102 local_irq_restore(flags); 3103 3103 3104 3104 out: 3105 + /* Separate test+clear to avoid unnecessary atomics */ 3106 + if (test_bit(ZONE_BOOSTED_WATERMARK, &zone->flags)) { 3107 + clear_bit(ZONE_BOOSTED_WATERMARK, &zone->flags); 3108 + wakeup_kswapd(zone, 0, 0, zone_idx(zone)); 3109 + } 3110 + 3105 3111 VM_BUG_ON_PAGE(page && bad_range(zone, page), page); 3106 3112 return page; 3107 3113
+2 -6
mm/rmap.c
··· 25 25 * page->flags PG_locked (lock_page) 26 26 * hugetlbfs_i_mmap_rwsem_key (in huge_pmd_share) 27 27 * mapping->i_mmap_rwsem 28 - * hugetlb_fault_mutex (hugetlbfs specific page fault mutex) 29 28 * anon_vma->rwsem 30 29 * mm->page_table_lock or pte_lock 31 30 * zone_lru_lock (in mark_page_accessed, isolate_lru_page) ··· 1371 1372 * Note that the page can not be free in this function as call of 1372 1373 * try_to_unmap() must hold a reference on the page. 1373 1374 */ 1374 - mmu_notifier_range_init(&range, vma->vm_mm, vma->vm_start, 1375 - min(vma->vm_end, vma->vm_start + 1375 + mmu_notifier_range_init(&range, vma->vm_mm, address, 1376 + min(vma->vm_end, address + 1376 1377 (PAGE_SIZE << compound_order(page)))); 1377 1378 if (PageHuge(page)) { 1378 1379 /* 1379 1380 * If sharing is possible, start and end will be adjusted 1380 1381 * accordingly. 1381 - * 1382 - * If called for a huge page, caller must hold i_mmap_rwsem 1383 - * in write mode as it is possible to call huge_pmd_unshare. 1384 1382 */ 1385 1383 adjust_range_if_pmd_sharing_possible(vma, &range.start, 1386 1384 &range.end);
+4 -2
mm/slab.c
··· 666 666 struct alien_cache *alc = NULL; 667 667 668 668 alc = kmalloc_node(memsize, gfp, node); 669 - init_arraycache(&alc->ac, entries, batch); 670 - spin_lock_init(&alc->lock); 669 + if (alc) { 670 + init_arraycache(&alc->ac, entries, batch); 671 + spin_lock_init(&alc->lock); 672 + } 671 673 return alc; 672 674 } 673 675
+2
mm/slub.c
··· 3846 3846 unsigned int offset; 3847 3847 size_t object_size; 3848 3848 3849 + ptr = kasan_reset_tag(ptr); 3850 + 3849 3851 /* Find object and usable object size. */ 3850 3852 s = page->slab_cache; 3851 3853
+5 -4
mm/usercopy.c
··· 247 247 /* 248 248 * Validates that the given object is: 249 249 * - not bogus address 250 - * - known-safe heap or stack object 250 + * - fully contained by stack (or stack frame, when available) 251 + * - fully within SLAB object (or object whitelist area, when available) 251 252 * - not in kernel text 252 253 */ 253 254 void __check_object_size(const void *ptr, unsigned long n, bool to_user) ··· 262 261 263 262 /* Check for invalid addresses. */ 264 263 check_bogus_address((const unsigned long)ptr, n, to_user); 265 - 266 - /* Check for bad heap object. */ 267 - check_heap_object(ptr, n, to_user); 268 264 269 265 /* Check for bad stack object. */ 270 266 switch (check_stack_object(ptr, n)) { ··· 279 281 default: 280 282 usercopy_abort("process stack", NULL, to_user, 0, n); 281 283 } 284 + 285 + /* Check for bad heap object. */ 286 + check_heap_object(ptr, n, to_user); 282 287 283 288 /* Check for object in kernel to avoid text exposure. */ 284 289 check_kernel_text_object((const unsigned long)ptr, n, to_user);
+2 -9
mm/userfaultfd.c
··· 267 267 VM_BUG_ON(dst_addr & ~huge_page_mask(h)); 268 268 269 269 /* 270 - * Serialize via i_mmap_rwsem and hugetlb_fault_mutex. 271 - * i_mmap_rwsem ensures the dst_pte remains valid even 272 - * in the case of shared pmds. fault mutex prevents 273 - * races with other faulting threads. 270 + * Serialize via hugetlb_fault_mutex 274 271 */ 275 - mapping = dst_vma->vm_file->f_mapping; 276 - i_mmap_lock_read(mapping); 277 272 idx = linear_page_index(dst_vma, dst_addr); 273 + mapping = dst_vma->vm_file->f_mapping; 278 274 hash = hugetlb_fault_mutex_hash(h, dst_mm, dst_vma, mapping, 279 275 idx, dst_addr); 280 276 mutex_lock(&hugetlb_fault_mutex_table[hash]); ··· 279 283 dst_pte = huge_pte_alloc(dst_mm, dst_addr, huge_page_size(h)); 280 284 if (!dst_pte) { 281 285 mutex_unlock(&hugetlb_fault_mutex_table[hash]); 282 - i_mmap_unlock_read(mapping); 283 286 goto out_unlock; 284 287 } 285 288 ··· 286 291 dst_pteval = huge_ptep_get(dst_pte); 287 292 if (!huge_pte_none(dst_pteval)) { 288 293 mutex_unlock(&hugetlb_fault_mutex_table[hash]); 289 - i_mmap_unlock_read(mapping); 290 294 goto out_unlock; 291 295 } 292 296 ··· 293 299 dst_addr, src_addr, &page); 294 300 295 301 mutex_unlock(&hugetlb_fault_mutex_table[hash]); 296 - i_mmap_unlock_read(mapping); 297 302 vm_alloc_shared = vm_shared; 298 303 299 304 cond_resched();
+1 -1
mm/util.c
··· 478 478 return true; 479 479 if (PageHuge(page)) 480 480 return false; 481 - for (i = 0; i < hpage_nr_pages(page); i++) { 481 + for (i = 0; i < (1 << compound_order(page)); i++) { 482 482 if (atomic_read(&page[i]._mapcount) >= 0) 483 483 return true; 484 484 }
+10 -1
net/ceph/ceph_common.c
··· 255 255 Opt_nocephx_sign_messages, 256 256 Opt_tcp_nodelay, 257 257 Opt_notcp_nodelay, 258 + Opt_abort_on_full, 258 259 }; 259 260 260 261 static match_table_t opt_tokens = { ··· 281 280 {Opt_nocephx_sign_messages, "nocephx_sign_messages"}, 282 281 {Opt_tcp_nodelay, "tcp_nodelay"}, 283 282 {Opt_notcp_nodelay, "notcp_nodelay"}, 283 + {Opt_abort_on_full, "abort_on_full"}, 284 284 {-1, NULL} 285 285 }; 286 286 ··· 537 535 opt->flags &= ~CEPH_OPT_TCP_NODELAY; 538 536 break; 539 537 538 + case Opt_abort_on_full: 539 + opt->flags |= CEPH_OPT_ABORT_ON_FULL; 540 + break; 541 + 540 542 default: 541 543 BUG_ON(token); 542 544 } ··· 555 549 } 556 550 EXPORT_SYMBOL(ceph_parse_options); 557 551 558 - int ceph_print_client_options(struct seq_file *m, struct ceph_client *client) 552 + int ceph_print_client_options(struct seq_file *m, struct ceph_client *client, 553 + bool show_all) 559 554 { 560 555 struct ceph_options *opt = client->options; 561 556 size_t pos = m->count; ··· 581 574 seq_puts(m, "nocephx_sign_messages,"); 582 575 if ((opt->flags & CEPH_OPT_TCP_NODELAY) == 0) 583 576 seq_puts(m, "notcp_nodelay,"); 577 + if (show_all && (opt->flags & CEPH_OPT_ABORT_ON_FULL)) 578 + seq_puts(m, "abort_on_full,"); 584 579 585 580 if (opt->mount_timeout != CEPH_MOUNT_TIMEOUT_DEFAULT) 586 581 seq_printf(m, "mount_timeout=%d,",
+1 -1
net/ceph/debugfs.c
··· 375 375 struct ceph_client *client = s->private; 376 376 int ret; 377 377 378 - ret = ceph_print_client_options(s, client); 378 + ret = ceph_print_client_options(s, client, true); 379 379 if (ret) 380 380 return ret; 381 381
+2 -2
net/ceph/osd_client.c
··· 2315 2315 (ceph_osdmap_flag(osdc, CEPH_OSDMAP_FULL) || 2316 2316 pool_full(osdc, req->r_t.base_oloc.pool))) { 2317 2317 dout("req %p full/pool_full\n", req); 2318 - if (osdc->abort_on_full) { 2318 + if (ceph_test_opt(osdc->client, ABORT_ON_FULL)) { 2319 2319 err = -ENOSPC; 2320 2320 } else { 2321 2321 pr_warn_ratelimited("FULL or reached pool quota\n"); ··· 2545 2545 { 2546 2546 bool victims = false; 2547 2547 2548 - if (osdc->abort_on_full && 2548 + if (ceph_test_opt(osdc->client, ABORT_ON_FULL) && 2549 2549 (ceph_osdmap_flag(osdc, CEPH_OSDMAP_FULL) || have_pool_full(osdc))) 2550 2550 for_each_request(osdc, abort_on_full_fn, &victims); 2551 2551 }
+1
samples/seccomp/Makefile
··· 34 34 HOSTCFLAGS_dropper.o += $(MFLAG) 35 35 HOSTCFLAGS_bpf-helper.o += $(MFLAG) 36 36 HOSTCFLAGS_bpf-fancy.o += $(MFLAG) 37 + HOSTCFLAGS_user-trap.o += $(MFLAG) 37 38 HOSTLDLIBS_bpf-direct += $(MFLAG) 38 39 HOSTLDLIBS_bpf-fancy += $(MFLAG) 39 40 HOSTLDLIBS_dropper += $(MFLAG)
+4 -4
scripts/coccinelle/api/alloc/alloc_cast.cocci
··· 32 32 (T *) 33 33 \(kmalloc\|kzalloc\|kcalloc\|kmem_cache_alloc\|kmem_cache_zalloc\| 34 34 kmem_cache_alloc_node\|kmalloc_node\|kzalloc_node\|vmalloc\|vzalloc\| 35 - dma_alloc_coherent\|dma_zalloc_coherent\|devm_kmalloc\|devm_kzalloc\| 35 + dma_alloc_coherent\|devm_kmalloc\|devm_kzalloc\| 36 36 kvmalloc\|kvzalloc\|kvmalloc_node\|kvzalloc_node\|pci_alloc_consistent\| 37 37 pci_zalloc_consistent\|kmem_alloc\|kmem_zalloc\|kmem_zone_alloc\| 38 38 kmem_zone_zalloc\|vmalloc_node\|vzalloc_node\)(...) ··· 55 55 * (T *) 56 56 \(kmalloc\|kzalloc\|kcalloc\|kmem_cache_alloc\|kmem_cache_zalloc\| 57 57 kmem_cache_alloc_node\|kmalloc_node\|kzalloc_node\|vmalloc\|vzalloc\| 58 - dma_alloc_coherent\|dma_zalloc_coherent\|devm_kmalloc\|devm_kzalloc\| 58 + dma_alloc_coherent\|devm_kmalloc\|devm_kzalloc\| 59 59 kvmalloc\|kvzalloc\|kvmalloc_node\|kvzalloc_node\|pci_alloc_consistent\| 60 60 pci_zalloc_consistent\|kmem_alloc\|kmem_zalloc\|kmem_zone_alloc\| 61 61 kmem_zone_zalloc\|vmalloc_node\|vzalloc_node\)(...) ··· 78 78 - (T *) 79 79 \(kmalloc\|kzalloc\|kcalloc\|kmem_cache_alloc\|kmem_cache_zalloc\| 80 80 kmem_cache_alloc_node\|kmalloc_node\|kzalloc_node\|vmalloc\|vzalloc\| 81 - dma_alloc_coherent\|dma_zalloc_coherent\|devm_kmalloc\|devm_kzalloc\| 81 + dma_alloc_coherent\|devm_kmalloc\|devm_kzalloc\| 82 82 kvmalloc\|kvzalloc\|kvmalloc_node\|kvzalloc_node\|pci_alloc_consistent\| 83 83 pci_zalloc_consistent\|kmem_alloc\|kmem_zalloc\|kmem_zone_alloc\| 84 84 kmem_zone_zalloc\|vmalloc_node\|vzalloc_node\)(...) 
··· 95 95 (T@p *) 96 96 \(kmalloc\|kzalloc\|kcalloc\|kmem_cache_alloc\|kmem_cache_zalloc\| 97 97 kmem_cache_alloc_node\|kmalloc_node\|kzalloc_node\|vmalloc\|vzalloc\| 98 - dma_alloc_coherent\|dma_zalloc_coherent\|devm_kmalloc\|devm_kzalloc\| 98 + dma_alloc_coherent\|devm_kmalloc\|devm_kzalloc\| 99 99 kvmalloc\|kvzalloc\|kvmalloc_node\|kvzalloc_node\|pci_alloc_consistent\| 100 100 pci_zalloc_consistent\|kmem_alloc\|kmem_zalloc\|kmem_zone_alloc\| 101 101 kmem_zone_zalloc\|vmalloc_node\|vzalloc_node\)(...)
+1 -10
scripts/coccinelle/api/alloc/zalloc-simple.cocci
··· 69 69 - x = (T)vmalloc(E1); 70 70 + x = (T)vzalloc(E1); 71 71 | 72 - - x = dma_alloc_coherent(E2,E1,E3,E4); 73 - + x = dma_zalloc_coherent(E2,E1,E3,E4); 74 - | 75 - - x = (T *)dma_alloc_coherent(E2,E1,E3,E4); 76 - + x = dma_zalloc_coherent(E2,E1,E3,E4); 77 - | 78 - - x = (T)dma_alloc_coherent(E2,E1,E3,E4); 79 - + x = (T)dma_zalloc_coherent(E2,E1,E3,E4); 80 - | 81 72 - x = kmalloc_node(E1,E2,E3); 82 73 + x = kzalloc_node(E1,E2,E3); 83 74 | ··· 216 225 x << r2.x; 217 226 @@ 218 227 219 - msg="WARNING: dma_zalloc_coherent should be used for %s, instead of dma_alloc_coherent/memset" % (x) 228 + msg="WARNING: dma_alloc_coherent use in %s already zeroes out memory, so memset is not needed" % (x) 220 229 coccilib.report.print_report(p[0], msg) 221 230 222 231 //-----------------------------------------------------------------
+1 -1
scripts/mod/modpost.c
··· 2185 2185 /* Cannot check for assembler */ 2186 2186 static void add_retpoline(struct buffer *b) 2187 2187 { 2188 - buf_printf(b, "\n#ifdef RETPOLINE\n"); 2188 + buf_printf(b, "\n#ifdef CONFIG_RETPOLINE\n"); 2189 2189 buf_printf(b, "MODULE_INFO(retpoline, \"Y\");\n"); 2190 2190 buf_printf(b, "#endif\n"); 2191 2191 }
+2 -2
sound/aoa/soundbus/i2sbus/core.c
··· 47 47 /* We use the PCI APIs for now until the generic one gets fixed 48 48 * enough or until we get some macio-specific versions 49 49 */ 50 - r->space = dma_zalloc_coherent(&macio_get_pci_dev(i2sdev->macio)->dev, 51 - r->size, &r->bus_addr, GFP_KERNEL); 50 + r->space = dma_alloc_coherent(&macio_get_pci_dev(i2sdev->macio)->dev, 51 + r->size, &r->bus_addr, GFP_KERNEL); 52 52 if (!r->space) 53 53 return -ENOMEM; 54 54
+3
sound/pci/cs46xx/dsp_spos.c
··· 903 903 struct dsp_spos_instance * ins = chip->dsp_spos_instance; 904 904 int i; 905 905 906 + if (!ins) 907 + return 0; 908 + 906 909 snd_info_free_entry(ins->proc_sym_info_entry); 907 910 ins->proc_sym_info_entry = NULL; 908 911
+17 -1
sound/pci/hda/patch_realtek.c
4102 4102 case 0x10ec0295: 4103 4103 case 0x10ec0289: 4104 4104 case 0x10ec0299: 4105 + alc_process_coef_fw(codec, alc225_pre_hsmode); 4105 4106 alc_process_coef_fw(codec, coef0225); 4106 4107 break; 4107 4108 case 0x10ec0867: ··· 5441 5440 } 5442 5441 } 5443 5442 5443 + static void alc_fixup_disable_mic_vref(struct hda_codec *codec, 5444 + const struct hda_fixup *fix, int action) 5445 + { 5446 + if (action == HDA_FIXUP_ACT_PRE_PROBE) 5447 + snd_hda_codec_set_pin_target(codec, 0x19, PIN_VREFHIZ); 5448 + } 5449 + 5444 5450 /* for hda_fixup_thinkpad_acpi() */ 5445 5451 #include "thinkpad_helper.c" 5446 5452 ··· 5557 5549 ALC293_FIXUP_LENOVO_SPK_NOISE, 5558 5550 ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY, 5559 5551 ALC255_FIXUP_DELL_SPK_NOISE, 5552 + ALC225_FIXUP_DISABLE_MIC_VREF, 5560 5553 ALC225_FIXUP_DELL1_MIC_NO_PRESENCE, 5561 5554 ALC295_FIXUP_DISABLE_DAC3, 5562 5555 ALC280_FIXUP_HP_HEADSET_MIC, ··· 6277 6268 .chained = true, 6278 6269 .chain_id = ALC255_FIXUP_DELL1_MIC_NO_PRESENCE 6279 6270 }, 6271 + [ALC225_FIXUP_DISABLE_MIC_VREF] = { 6272 + .type = HDA_FIXUP_FUNC, 6273 + .v.func = alc_fixup_disable_mic_vref, 6274 + .chained = true, 6275 + .chain_id = ALC269_FIXUP_DELL1_MIC_NO_PRESENCE 6276 + }, 6280 6277 [ALC225_FIXUP_DELL1_MIC_NO_PRESENCE] = { 6281 6278 .type = HDA_FIXUP_VERBS, 6282 6279 .v.verbs = (const struct hda_verb[]) { ··· 6292 6277 {} 6293 6278 }, 6294 6279 .chained = true, 6295 - .chain_id = ALC269_FIXUP_DELL1_MIC_NO_PRESENCE 6280 + .chain_id = ALC225_FIXUP_DISABLE_MIC_VREF 6296 6281 }, 6297 6282 [ALC280_FIXUP_HP_HEADSET_MIC] = { 6298 6283 .type = HDA_FIXUP_FUNC, ··· 6599 6584 SND_PCI_QUIRK(0x1028, 0x0871, "Dell Precision 3630", ALC255_FIXUP_DELL_HEADSET_MIC), 6600 6585 SND_PCI_QUIRK(0x1028, 0x0872, "Dell Precision 3630", ALC255_FIXUP_DELL_HEADSET_MIC), 6601 6586 SND_PCI_QUIRK(0x1028, 0x0873, "Dell Precision 3930", ALC255_FIXUP_DUMMY_LINEOUT_VERB), 6587 + SND_PCI_QUIRK(0x1028, 0x0935, "Dell", ALC274_FIXUP_DELL_AIO_LINEOUT_VERB), 6602 6588 SND_PCI_QUIRK(0x1028, 0x164a, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE), 6603 6589 SND_PCI_QUIRK(0x1028, 0x164b, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE), 6604 6590 SND_PCI_QUIRK(0x103c, 0x1586, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC2),
+2 -2
sound/sparc/dbri.c
··· 2541 2541 dbri->op = op; 2542 2542 dbri->irq = irq; 2543 2543 2544 - dbri->dma = dma_zalloc_coherent(&op->dev, sizeof(struct dbri_dma), 2545 - &dbri->dma_dvma, GFP_KERNEL); 2544 + dbri->dma = dma_alloc_coherent(&op->dev, sizeof(struct dbri_dma), 2545 + &dbri->dma_dvma, GFP_KERNEL); 2546 2546 if (!dbri->dma) 2547 2547 return -ENOMEM; 2548 2548
+1 -1
sound/usb/card.c
··· 246 246 h1 = snd_usb_find_csint_desc(host_iface->extra, 247 247 host_iface->extralen, 248 248 NULL, UAC_HEADER); 249 - if (!h1) { 249 + if (!h1 || h1->bLength < sizeof(*h1)) { 250 250 dev_err(&dev->dev, "cannot find UAC_HEADER\n"); 251 251 return -EINVAL; 252 252 }
+23 -6
sound/usb/mixer.c
753 753 struct uac_mixer_unit_descriptor *desc) 754 754 { 755 755 int mu_channels; 756 + void *c; 756 757 757 - if (desc->bLength < 11) 758 + if (desc->bLength < sizeof(*desc)) 758 759 return -EINVAL; 759 760 if (!desc->bNrInPins) 760 761 return -EINVAL; ··· 764 763 case UAC_VERSION_1: 765 764 case UAC_VERSION_2: 766 765 default: 766 + if (desc->bLength < sizeof(*desc) + desc->bNrInPins + 1) 767 + return 0; /* no bmControls -> skip */ 767 768 mu_channels = uac_mixer_unit_bNrChannels(desc); 768 769 break; 769 770 case UAC_VERSION_3: ··· 775 772 } 776 773 777 774 if (!mu_channels) 778 - return -EINVAL; 775 + return 0; 776 + 777 + c = uac_mixer_unit_bmControls(desc, state->mixer->protocol); 778 + if (c - (void *)desc + (mu_channels - 1) / 8 >= desc->bLength) 779 + return 0; /* no bmControls -> skip */ 779 780 780 781 return mu_channels; 781 782 } ··· 951 944 struct uac_mixer_unit_descriptor *d = p1; 952 945 953 946 err = uac_mixer_unit_get_channels(state, d); 954 - if (err < 0) 947 + if (err <= 0) 955 948 return err; 956 949 957 950 term->channels = err; ··· 2075 2068 2076 2069 if (state->mixer->protocol == UAC_VERSION_2) { 2077 2070 struct uac2_input_terminal_descriptor *d_v2 = raw_desc; 2071 + if (d_v2->bLength < sizeof(*d_v2)) 2072 + return -EINVAL; 2078 2073 control = UAC2_TE_CONNECTOR; 2079 2074 term_id = d_v2->bTerminalID; 2080 2075 bmctls = le16_to_cpu(d_v2->bmControls); 2081 2076 } else if (state->mixer->protocol == UAC_VERSION_3) { 2082 2077 struct uac3_input_terminal_descriptor *d_v3 = raw_desc; 2078 + if (d_v3->bLength < sizeof(*d_v3)) 2079 + return -EINVAL; 2083 2080 control = UAC3_TE_INSERTION; 2084 2081 term_id = d_v3->bTerminalID; 2085 2082 bmctls = le32_to_cpu(d_v3->bmControls); ··· 2129 2118 if (err < 0) 2130 2119 continue; 2131 2120 /* no bmControls field (e.g. Maya44) -> ignore */ 2132 - if (desc->bLength <= 10 + input_pins) 2121 + if (!num_outs) 2133 2122 continue; 2134 2123 err = check_input_term(state, desc->baSourceID[pin], &iterm); 2135 2124 if (err < 0) ··· 2325 2314 char *name) 2326 2315 { 2327 2316 struct uac_processing_unit_descriptor *desc = raw_desc; 2328 - int num_ins = desc->bNrInPins; 2317 + int num_ins; 2329 2318 struct usb_mixer_elem_info *cval; 2330 2319 struct snd_kcontrol *kctl; 2331 2320 int i, err, nameid, type, len; ··· 2340 2329 0, NULL, default_value_info 2341 2330 }; 2342 2331 2343 - if (desc->bLength < 13 || desc->bLength < 13 + num_ins || 2332 + if (desc->bLength < 13) { 2333 + usb_audio_err(state->chip, "invalid %s descriptor (id %d)\n", name, unitid); 2334 + return -EINVAL; 2335 + } 2336 + 2337 + num_ins = desc->bNrInPins; 2338 + if (desc->bLength < 13 + num_ins || 2344 2339 desc->bLength < num_ins + uac_processing_unit_bControlSize(desc, state->mixer->protocol)) { 2345 2340 usb_audio_err(state->chip, "invalid %s descriptor (id %d)\n", name, unitid); 2346 2341 return -EINVAL;
+6
sound/usb/quirks-table.h
··· 3326 3326 } 3327 3327 } 3328 3328 }, 3329 + { 3330 + .ifnum = -1 3331 + }, 3329 3332 } 3330 3333 } 3331 3334 }, ··· 3371 3368 48000 3372 3369 } 3373 3370 } 3371 + }, 3372 + { 3373 + .ifnum = -1 3374 3374 }, 3375 3375 } 3376 3376 }
+1 -1
sound/usb/quirks.c
··· 768 768 * REG1: PLL binary search enable, soft mute enable. 769 769 */ 770 770 CM6206_REG1_PLLBIN_EN | 771 - CM6206_REG1_SOFT_MUTE_EN | 771 + CM6206_REG1_SOFT_MUTE_EN, 772 772 /* 773 773 * REG2: enable output drivers, 774 774 * select front channels to the headphone output,
+25 -11
sound/usb/stream.c
596 596 csep = snd_usb_find_desc(alts->extra, alts->extralen, NULL, USB_DT_CS_ENDPOINT); 597 597 598 598 if (!csep || csep->bLength < 7 || 599 - csep->bDescriptorSubtype != UAC_EP_GENERAL) { 600 - usb_audio_warn(chip, 601 - "%u:%d : no or invalid class specific endpoint descriptor\n", 602 - iface_no, altsd->bAlternateSetting); 603 - return 0; 604 - } 599 + csep->bDescriptorSubtype != UAC_EP_GENERAL) 600 + goto error; 605 601 606 602 if (protocol == UAC_VERSION_1) { 607 603 attributes = csep->bmAttributes; ··· 605 609 struct uac2_iso_endpoint_descriptor *csep2 = 606 610 (struct uac2_iso_endpoint_descriptor *) csep; 607 611 612 + if (csep2->bLength < sizeof(*csep2)) 613 + goto error; 608 614 attributes = csep->bmAttributes & UAC_EP_CS_ATTR_FILL_MAX; 609 615 610 616 /* emulate the endpoint attributes of a v1 device */ ··· 616 618 struct uac3_iso_endpoint_descriptor *csep3 = 617 619 (struct uac3_iso_endpoint_descriptor *) csep; 618 620 621 + if (csep3->bLength < sizeof(*csep3)) 622 + goto error; 619 623 /* emulate the endpoint attributes of a v1 device */ 620 624 if (le32_to_cpu(csep3->bmControls) & UAC2_CONTROL_PITCH) 621 625 attributes |= UAC_EP_CS_ATTR_PITCH_CONTROL; 622 626 } 623 627 624 628 return attributes; 629 + 630 + error: 631 + usb_audio_warn(chip, 632 + "%u:%d : no or invalid class specific endpoint descriptor\n", 633 + iface_no, altsd->bAlternateSetting); 634 + return 0; 625 635 } 626 636 627 637 /* find an input terminal descriptor (either UAC1 or UAC2) with the given ··· 637 631 */ 638 632 static void * 639 633 snd_usb_find_input_terminal_descriptor(struct usb_host_interface *ctrl_iface, 640 - int terminal_id) 634 + int terminal_id, bool uac23) 641 635 { 642 636 struct uac2_input_terminal_descriptor *term = NULL; 637 + size_t minlen = uac23 ? sizeof(struct uac2_input_terminal_descriptor) : 638 + sizeof(struct uac_input_terminal_descriptor); 643 639 644 640 while ((term = snd_usb_find_csint_desc(ctrl_iface->extra, 645 641 ctrl_iface->extralen, 646 642 term, UAC_INPUT_TERMINAL))) { 643 + if (term->bLength < minlen) 644 + continue; 647 645 if (term->bTerminalID == terminal_id) 648 646 return term; 649 647 } ··· 665 655 while ((term = snd_usb_find_csint_desc(ctrl_iface->extra, 666 656 ctrl_iface->extralen, 667 657 term, UAC_OUTPUT_TERMINAL))) { 668 - if (term->bTerminalID == terminal_id) 658 + if (term->bLength >= sizeof(*term) && 659 + term->bTerminalID == terminal_id) 669 660 return term; 670 661 } ··· 740 729 format = le16_to_cpu(as->wFormatTag); /* remember the format value */ 741 730 742 731 iterm = snd_usb_find_input_terminal_descriptor(chip->ctrl_intf, 743 - as->bTerminalLink); 732 + as->bTerminalLink, 733 + false); 744 734 if (iterm) { 745 735 num_channels = iterm->bNrChannels; 746 736 chconfig = le16_to_cpu(iterm->wChannelConfig); ··· 776 764 * to extract the clock 777 765 */ 778 766 input_term = snd_usb_find_input_terminal_descriptor(chip->ctrl_intf, 779 - as->bTerminalLink); 767 + as->bTerminalLink, 768 + true); 780 769 if (input_term) { 781 770 clock = input_term->bCSourceID; 782 771 if (!chconfig && (num_channels == input_term->bNrChannels)) ··· 1011 998 * to extract the clock 1012 999 */ 1013 1000 input_term = snd_usb_find_input_terminal_descriptor(chip->ctrl_intf, 1014 - as->bTerminalLink); 1001 + as->bTerminalLink, 1002 + true); 1015 1003 if (input_term) { 1016 1004 clock = input_term->bCSourceID; 1017 1005 goto found_clock;
-404
tools/arch/powerpc/include/uapi/asm/unistd.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0+ WITH Linux-syscall-note */ 2 - /* 3 - * This file contains the system call numbers. 4 - * 5 - * This program is free software; you can redistribute it and/or 6 - * modify it under the terms of the GNU General Public License 7 - * as published by the Free Software Foundation; either version 8 - * 2 of the License, or (at your option) any later version. 9 - */ 10 - #ifndef _UAPI_ASM_POWERPC_UNISTD_H_ 11 - #define _UAPI_ASM_POWERPC_UNISTD_H_ 12 - 13 - 14 - #define __NR_restart_syscall 0 15 - #define __NR_exit 1 16 - #define __NR_fork 2 17 - #define __NR_read 3 18 - #define __NR_write 4 19 - #define __NR_open 5 20 - #define __NR_close 6 21 - #define __NR_waitpid 7 22 - #define __NR_creat 8 23 - #define __NR_link 9 24 - #define __NR_unlink 10 25 - #define __NR_execve 11 26 - #define __NR_chdir 12 27 - #define __NR_time 13 28 - #define __NR_mknod 14 29 - #define __NR_chmod 15 30 - #define __NR_lchown 16 31 - #define __NR_break 17 32 - #define __NR_oldstat 18 33 - #define __NR_lseek 19 34 - #define __NR_getpid 20 35 - #define __NR_mount 21 36 - #define __NR_umount 22 37 - #define __NR_setuid 23 38 - #define __NR_getuid 24 39 - #define __NR_stime 25 40 - #define __NR_ptrace 26 41 - #define __NR_alarm 27 42 - #define __NR_oldfstat 28 43 - #define __NR_pause 29 44 - #define __NR_utime 30 45 - #define __NR_stty 31 46 - #define __NR_gtty 32 47 - #define __NR_access 33 48 - #define __NR_nice 34 49 - #define __NR_ftime 35 50 - #define __NR_sync 36 51 - #define __NR_kill 37 52 - #define __NR_rename 38 53 - #define __NR_mkdir 39 54 - #define __NR_rmdir 40 55 - #define __NR_dup 41 56 - #define __NR_pipe 42 57 - #define __NR_times 43 58 - #define __NR_prof 44 59 - #define __NR_brk 45 60 - #define __NR_setgid 46 61 - #define __NR_getgid 47 62 - #define __NR_signal 48 63 - #define __NR_geteuid 49 64 - #define __NR_getegid 50 65 - #define __NR_acct 51 66 - #define __NR_umount2 52 67 - #define __NR_lock 53 68 - #define __NR_ioctl 54 69 - #define __NR_fcntl 55 70 - #define __NR_mpx 56 71 - #define __NR_setpgid 57 72 - #define __NR_ulimit 58 73 - #define __NR_oldolduname 59 74 - #define __NR_umask 60 75 - #define __NR_chroot 61 76 - #define __NR_ustat 62 77 - #define __NR_dup2 63 78 - #define __NR_getppid 64 79 - #define __NR_getpgrp 65 80 - #define __NR_setsid 66 81 - #define __NR_sigaction 67 82 - #define __NR_sgetmask 68 83 - #define __NR_ssetmask 69 84 - #define __NR_setreuid 70 85 - #define __NR_setregid 71 86 - #define __NR_sigsuspend 72 87 - #define __NR_sigpending 73 88 - #define __NR_sethostname 74 89 - #define __NR_setrlimit 75 90 - #define __NR_getrlimit 76 91 - #define __NR_getrusage 77 92 - #define __NR_gettimeofday 78 93 - #define __NR_settimeofday 79 94 - #define __NR_getgroups 80 95 - #define __NR_setgroups 81 96 - #define __NR_select 82 97 - #define __NR_symlink 83 98 - #define __NR_oldlstat 84 99 - #define __NR_readlink 85 100 - #define __NR_uselib 86 101 - #define __NR_swapon 87 102 - #define __NR_reboot 88 103 - #define __NR_readdir 89 104 - #define __NR_mmap 90 105 - #define __NR_munmap 91 106 - #define __NR_truncate 92 107 - #define __NR_ftruncate 93 108 - #define __NR_fchmod 94 109 - #define __NR_fchown 95 110 - #define __NR_getpriority 96 111 - #define __NR_setpriority 97 112 - #define __NR_profil 98 113 - #define __NR_statfs 99 114 - #define __NR_fstatfs 100 115 - #define __NR_ioperm 101 116 - #define __NR_socketcall 102 117 - #define __NR_syslog 103 118 - #define __NR_setitimer 104 119 - #define __NR_getitimer 105 120 - #define __NR_stat 106 121 - #define __NR_lstat 107 122 - #define __NR_fstat 108 123 - #define __NR_olduname 109 124 - #define __NR_iopl 110 125 - #define __NR_vhangup 111 126 - #define __NR_idle 112 127 - #define __NR_vm86 113 128 - #define __NR_wait4 114 129 - #define __NR_swapoff 115 130 - #define __NR_sysinfo 116 131 - #define __NR_ipc 117 132 - #define __NR_fsync 118 133 - #define __NR_sigreturn 119 134 - #define __NR_clone 120 135 - #define __NR_setdomainname 121 136 - #define __NR_uname 122 137 - #define __NR_modify_ldt 123 138 - #define __NR_adjtimex 124 139 - #define __NR_mprotect 125 140 - #define __NR_sigprocmask 126 141 - #define __NR_create_module 127 142 - #define __NR_init_module 128 143 - #define __NR_delete_module 129 144 - #define __NR_get_kernel_syms 130 145 - #define __NR_quotactl 131 146 - #define __NR_getpgid 132 147 - #define __NR_fchdir 133 148 - #define __NR_bdflush 134 149 - #define __NR_sysfs 135 150 - #define __NR_personality 136 151 - #define __NR_afs_syscall 137 /* Syscall for Andrew File System */ 152 - #define __NR_setfsuid 138 153 - #define __NR_setfsgid 139 154 - #define __NR__llseek 140 155 - #define __NR_getdents 141 156 - #define __NR__newselect 142 157 - #define __NR_flock 143 158 - #define __NR_msync 144 159 - #define __NR_readv 145 160 - #define __NR_writev 146 161 - #define __NR_getsid 147 162 - #define __NR_fdatasync 148 163 - #define __NR__sysctl 149 164 - #define __NR_mlock 150 165 - #define __NR_munlock 151 166 - #define __NR_mlockall 152 167 - #define __NR_munlockall 153 168 - #define __NR_sched_setparam 154 169 - #define __NR_sched_getparam 155 170 - #define __NR_sched_setscheduler 156 171 - #define __NR_sched_getscheduler 157 172 - #define __NR_sched_yield 158 173 - #define __NR_sched_get_priority_max 159 174 - #define __NR_sched_get_priority_min 160 175 - #define __NR_sched_rr_get_interval 161 176 - #define __NR_nanosleep 162 177 - #define __NR_mremap 163 178 - #define __NR_setresuid 164 179 - #define __NR_getresuid 165 180 - #define __NR_query_module 166 181 - #define __NR_poll 167 182 - #define __NR_nfsservctl 168 183 - #define __NR_setresgid 169 184 - #define __NR_getresgid 170 185 - #define __NR_prctl 171 186 - #define __NR_rt_sigreturn 172 187 - #define __NR_rt_sigaction 173 188 - #define __NR_rt_sigprocmask 174 189 - #define __NR_rt_sigpending 175 190 - #define __NR_rt_sigtimedwait 176 191 - #define __NR_rt_sigqueueinfo 177 192 - #define __NR_rt_sigsuspend 178 193 - #define __NR_pread64 179 194 - #define __NR_pwrite64 180 195 - #define __NR_chown 181 196 - #define __NR_getcwd 182 197 - #define __NR_capget 183 198 - #define __NR_capset 184 199 - #define __NR_sigaltstack 185 200 - #define __NR_sendfile 186 201 - #define __NR_getpmsg 187 /* some people actually want streams */ 202 - #define __NR_putpmsg 188 /* some people actually want streams */ 203 - #define __NR_vfork 189 204 - #define __NR_ugetrlimit 190 /* SuS compliant getrlimit */ 205 - #define __NR_readahead 191 206 - #ifndef __powerpc64__ /* these are 32-bit only */ 207 - #define __NR_mmap2 192 208 - #define __NR_truncate64 193 209 - #define __NR_ftruncate64 194 210 - #define __NR_stat64 195 211 - #define __NR_lstat64 196 212 - #define __NR_fstat64 197 213 - #endif 214 - #define __NR_pciconfig_read 198 215 - #define __NR_pciconfig_write 199 216 - #define __NR_pciconfig_iobase 200 217 - #define __NR_multiplexer 201 218 - #define __NR_getdents64 202 219 - #define __NR_pivot_root 203 220 - #ifndef __powerpc64__ 221 - #define __NR_fcntl64 204 222 - #endif 223 - #define __NR_madvise 205 224 - #define __NR_mincore 206 225 - #define __NR_gettid 207 226 - #define __NR_tkill 208 227 - #define __NR_setxattr 209 228 - #define __NR_lsetxattr 210 229 - #define __NR_fsetxattr 211 230 - #define __NR_getxattr 212 231 - #define __NR_lgetxattr 213 232 - #define __NR_fgetxattr 214 233 - #define __NR_listxattr 215 234 - #define __NR_llistxattr 216 235 - #define __NR_flistxattr 217 236 - #define __NR_removexattr 218 237 - #define __NR_lremovexattr 219 238 - #define __NR_fremovexattr 220 239 - #define __NR_futex 221 240 - #define __NR_sched_setaffinity 222 241 - #define __NR_sched_getaffinity 223 242 - /* 224 currently unused */ 243 - #define __NR_tuxcall 225 244 - #ifndef __powerpc64__ 245 - #define __NR_sendfile64 226 246 - #endif 247 - #define __NR_io_setup 227 248 - #define __NR_io_destroy 228 249 - #define __NR_io_getevents 229 250 - #define __NR_io_submit 230 251 - #define __NR_io_cancel 231 252 - #define __NR_set_tid_address 232 253 - #define __NR_fadvise64 233 254 - #define __NR_exit_group 234 255 - #define __NR_lookup_dcookie 235 256 - #define __NR_epoll_create 236 257 - #define __NR_epoll_ctl 237 258 - #define __NR_epoll_wait 238 259 - #define __NR_remap_file_pages 239 260 - #define __NR_timer_create 240 261 - #define __NR_timer_settime 241 262 - #define __NR_timer_gettime 242 263 - #define __NR_timer_getoverrun 243 264 - #define __NR_timer_delete 244 265 - #define __NR_clock_settime 245 266 - #define __NR_clock_gettime 246 267 - #define __NR_clock_getres 247 268 - #define __NR_clock_nanosleep 248 269 - #define __NR_swapcontext 249 270 - #define __NR_tgkill 250 271 - #define __NR_utimes 251 272 - #define __NR_statfs64 252 273 - #define __NR_fstatfs64 253 274 - #ifndef __powerpc64__ 275 - #define __NR_fadvise64_64 254 276 - #endif 277 - #define __NR_rtas 255 278 - #define __NR_sys_debug_setcontext 256 279 - /* Number 257 is reserved for vserver */ 280 - #define __NR_migrate_pages 258 281 - #define __NR_mbind 259 282 - #define __NR_get_mempolicy 260 283 - #define __NR_set_mempolicy 261 284 - #define __NR_mq_open 262 285 - #define __NR_mq_unlink 263 286 - #define __NR_mq_timedsend 264 287 - #define __NR_mq_timedreceive 265 288 - #define __NR_mq_notify 266 289 - #define __NR_mq_getsetattr 267 290 - #define __NR_kexec_load 268 291 - #define __NR_add_key 269 292 - #define __NR_request_key 270 293 - #define __NR_keyctl 271 294 - #define __NR_waitid 272 295 - #define __NR_ioprio_set 273 296 - #define __NR_ioprio_get 274 297 - #define __NR_inotify_init 275 298 - #define __NR_inotify_add_watch 276 299 - #define __NR_inotify_rm_watch 277 300 - #define __NR_spu_run 278 301 - #define __NR_spu_create 279 302 - #define __NR_pselect6 280 303 - #define __NR_ppoll 281 304 - #define __NR_unshare 282 305 - #define __NR_splice 283 306 - #define __NR_tee 284 307 - #define __NR_vmsplice 285 308 - #define __NR_openat 286 309 - #define
__NR_mkdirat 287 310 - #define __NR_mknodat 288 311 - #define __NR_fchownat 289 312 - #define __NR_futimesat 290 313 - #ifdef __powerpc64__ 314 - #define __NR_newfstatat 291 315 - #else 316 - #define __NR_fstatat64 291 317 - #endif 318 - #define __NR_unlinkat 292 319 - #define __NR_renameat 293 320 - #define __NR_linkat 294 321 - #define __NR_symlinkat 295 322 - #define __NR_readlinkat 296 323 - #define __NR_fchmodat 297 324 - #define __NR_faccessat 298 325 - #define __NR_get_robust_list 299 326 - #define __NR_set_robust_list 300 327 - #define __NR_move_pages 301 328 - #define __NR_getcpu 302 329 - #define __NR_epoll_pwait 303 330 - #define __NR_utimensat 304 331 - #define __NR_signalfd 305 332 - #define __NR_timerfd_create 306 333 - #define __NR_eventfd 307 334 - #define __NR_sync_file_range2 308 335 - #define __NR_fallocate 309 336 - #define __NR_subpage_prot 310 337 - #define __NR_timerfd_settime 311 338 - #define __NR_timerfd_gettime 312 339 - #define __NR_signalfd4 313 340 - #define __NR_eventfd2 314 341 - #define __NR_epoll_create1 315 342 - #define __NR_dup3 316 343 - #define __NR_pipe2 317 344 - #define __NR_inotify_init1 318 345 - #define __NR_perf_event_open 319 346 - #define __NR_preadv 320 347 - #define __NR_pwritev 321 348 - #define __NR_rt_tgsigqueueinfo 322 349 - #define __NR_fanotify_init 323 350 - #define __NR_fanotify_mark 324 351 - #define __NR_prlimit64 325 352 - #define __NR_socket 326 353 - #define __NR_bind 327 354 - #define __NR_connect 328 355 - #define __NR_listen 329 356 - #define __NR_accept 330 357 - #define __NR_getsockname 331 358 - #define __NR_getpeername 332 359 - #define __NR_socketpair 333 360 - #define __NR_send 334 361 - #define __NR_sendto 335 362 - #define __NR_recv 336 363 - #define __NR_recvfrom 337 364 - #define __NR_shutdown 338 365 - #define __NR_setsockopt 339 366 - #define __NR_getsockopt 340 367 - #define __NR_sendmsg 341 368 - #define __NR_recvmsg 342 369 - #define __NR_recvmmsg 343 370 - #define __NR_accept4 344 371 
- #define __NR_name_to_handle_at 345 372 - #define __NR_open_by_handle_at 346 373 - #define __NR_clock_adjtime 347 374 - #define __NR_syncfs 348 375 - #define __NR_sendmmsg 349 376 - #define __NR_setns 350 377 - #define __NR_process_vm_readv 351 378 - #define __NR_process_vm_writev 352 379 - #define __NR_finit_module 353 380 - #define __NR_kcmp 354 381 - #define __NR_sched_setattr 355 382 - #define __NR_sched_getattr 356 383 - #define __NR_renameat2 357 384 - #define __NR_seccomp 358 385 - #define __NR_getrandom 359 386 - #define __NR_memfd_create 360 387 - #define __NR_bpf 361 388 - #define __NR_execveat 362 389 - #define __NR_switch_endian 363 390 - #define __NR_userfaultfd 364 391 - #define __NR_membarrier 365 392 - #define __NR_mlock2 378 393 - #define __NR_copy_file_range 379 394 - #define __NR_preadv2 380 395 - #define __NR_pwritev2 381 396 - #define __NR_kexec_file_load 382 397 - #define __NR_statx 383 398 - #define __NR_pkey_alloc 384 399 - #define __NR_pkey_free 385 400 - #define __NR_pkey_mprotect 386 401 - #define __NR_rseq 387 402 - #define __NR_io_pgetevents 388 403 - 404 - #endif /* _UAPI_ASM_POWERPC_UNISTD_H_ */
+25
tools/arch/riscv/include/uapi/asm/bitsperlong.h
··· 1 + /* 2 + * Copyright (C) 2012 ARM Ltd. 3 + * Copyright (C) 2015 Regents of the University of California 4 + * 5 + * This program is free software; you can redistribute it and/or modify 6 + * it under the terms of the GNU General Public License version 2 as 7 + * published by the Free Software Foundation. 8 + * 9 + * This program is distributed in the hope that it will be useful, 10 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 11 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 12 + * GNU General Public License for more details. 13 + * 14 + * You should have received a copy of the GNU General Public License 15 + * along with this program. If not, see <http://www.gnu.org/licenses/>. 16 + */ 17 + 18 + #ifndef _UAPI_ASM_RISCV_BITSPERLONG_H 19 + #define _UAPI_ASM_RISCV_BITSPERLONG_H 20 + 21 + #define __BITS_PER_LONG (__SIZEOF_POINTER__ * 8) 22 + 23 + #include <asm-generic/bitsperlong.h> 24 + 25 + #endif /* _UAPI_ASM_RISCV_BITSPERLONG_H */
+2
tools/arch/x86/include/asm/cpufeatures.h
··· 281 281 #define X86_FEATURE_CLZERO (13*32+ 0) /* CLZERO instruction */ 282 282 #define X86_FEATURE_IRPERF (13*32+ 1) /* Instructions Retired Count */ 283 283 #define X86_FEATURE_XSAVEERPTR (13*32+ 2) /* Always save/restore FP error pointers */ 284 + #define X86_FEATURE_WBNOINVD (13*32+ 9) /* WBNOINVD instruction */ 284 285 #define X86_FEATURE_AMD_IBPB (13*32+12) /* "" Indirect Branch Prediction Barrier */ 285 286 #define X86_FEATURE_AMD_IBRS (13*32+14) /* "" Indirect Branch Restricted Speculation */ 286 287 #define X86_FEATURE_AMD_STIBP (13*32+15) /* "" Single Thread Indirect Branch Predictors */ 288 + #define X86_FEATURE_AMD_STIBP_ALWAYS_ON (13*32+17) /* "" Single Thread Indirect Branch Predictors always-on preferred */ 287 289 #define X86_FEATURE_AMD_SSBD (13*32+24) /* "" Speculative Store Bypass Disable */ 288 290 #define X86_FEATURE_VIRT_SSBD (13*32+25) /* Virtualized Speculative Store Bypass Disable */ 289 291 #define X86_FEATURE_AMD_SSB_NO (13*32+26) /* "" Speculative Store Bypass is fixed in hardware. */
+7 -1
tools/arch/x86/include/asm/disabled-features.h
··· 16 16 # define DISABLE_MPX (1<<(X86_FEATURE_MPX & 31)) 17 17 #endif 18 18 19 + #ifdef CONFIG_X86_SMAP 20 + # define DISABLE_SMAP 0 21 + #else 22 + # define DISABLE_SMAP (1<<(X86_FEATURE_SMAP & 31)) 23 + #endif 24 + 19 25 #ifdef CONFIG_X86_INTEL_UMIP 20 26 # define DISABLE_UMIP 0 21 27 #else ··· 74 68 #define DISABLED_MASK6 0 75 69 #define DISABLED_MASK7 (DISABLE_PTI) 76 70 #define DISABLED_MASK8 0 77 - #define DISABLED_MASK9 (DISABLE_MPX) 71 + #define DISABLED_MASK9 (DISABLE_MPX|DISABLE_SMAP) 78 72 #define DISABLED_MASK10 0 79 73 #define DISABLED_MASK11 0 80 74 #define DISABLED_MASK12 0
+3 -1
tools/include/uapi/asm-generic/unistd.h
··· 738 738 __SC_COMP(__NR_io_pgetevents, sys_io_pgetevents, compat_sys_io_pgetevents) 739 739 #define __NR_rseq 293 740 740 __SYSCALL(__NR_rseq, sys_rseq) 741 + #define __NR_kexec_file_load 294 742 + __SYSCALL(__NR_kexec_file_load, sys_kexec_file_load) 741 743 742 744 #undef __NR_syscalls 743 - #define __NR_syscalls 294 745 + #define __NR_syscalls 295 744 746 745 747 /* 746 748 * 32 bit systems traditionally used different
+2
tools/include/uapi/asm/bitsperlong.h
··· 13 13 #include "../../arch/mips/include/uapi/asm/bitsperlong.h" 14 14 #elif defined(__ia64__) 15 15 #include "../../arch/ia64/include/uapi/asm/bitsperlong.h" 16 + #elif defined(__riscv) 17 + #include "../../arch/riscv/include/uapi/asm/bitsperlong.h" 16 18 #else 17 19 #include <asm-generic/bitsperlong.h> 18 20 #endif
+8
tools/include/uapi/drm/i915_drm.h
··· 412 412 int irq_seq; 413 413 } drm_i915_irq_wait_t; 414 414 415 + /* 416 + * Different modes of per-process Graphics Translation Table, 417 + * see I915_PARAM_HAS_ALIASING_PPGTT 418 + */ 419 + #define I915_GEM_PPGTT_NONE 0 420 + #define I915_GEM_PPGTT_ALIASING 1 421 + #define I915_GEM_PPGTT_FULL 2 422 + 415 423 /* Ioctl to query kernel params: 416 424 */ 417 425 #define I915_PARAM_IRQ_ACTIVE 1
+8 -52
tools/include/uapi/linux/fs.h
··· 14 14 #include <linux/ioctl.h> 15 15 #include <linux/types.h> 16 16 17 + /* Use of MS_* flags within the kernel is restricted to core mount(2) code. */ 18 + #if !defined(__KERNEL__) 19 + #include <linux/mount.h> 20 + #endif 21 + 17 22 /* 18 23 * It's silly to have NR_OPEN bigger than NR_FILE, but you can change 19 24 * the file limit at runtime and only root can increase the per-process ··· 105 100 106 101 107 102 #define NR_FILE 8192 /* this can well be larger on a larger system */ 108 - 109 - 110 - /* 111 - * These are the fs-independent mount-flags: up to 32 flags are supported 112 - */ 113 - #define MS_RDONLY 1 /* Mount read-only */ 114 - #define MS_NOSUID 2 /* Ignore suid and sgid bits */ 115 - #define MS_NODEV 4 /* Disallow access to device special files */ 116 - #define MS_NOEXEC 8 /* Disallow program execution */ 117 - #define MS_SYNCHRONOUS 16 /* Writes are synced at once */ 118 - #define MS_REMOUNT 32 /* Alter flags of a mounted FS */ 119 - #define MS_MANDLOCK 64 /* Allow mandatory locks on an FS */ 120 - #define MS_DIRSYNC 128 /* Directory modifications are synchronous */ 121 - #define MS_NOATIME 1024 /* Do not update access times. */ 122 - #define MS_NODIRATIME 2048 /* Do not update directory access times */ 123 - #define MS_BIND 4096 124 - #define MS_MOVE 8192 125 - #define MS_REC 16384 126 - #define MS_VERBOSE 32768 /* War is peace. Verbosity is silence. 127 - MS_VERBOSE is deprecated. */ 128 - #define MS_SILENT 32768 129 - #define MS_POSIXACL (1<<16) /* VFS does not apply the umask */ 130 - #define MS_UNBINDABLE (1<<17) /* change to unbindable */ 131 - #define MS_PRIVATE (1<<18) /* change to private */ 132 - #define MS_SLAVE (1<<19) /* change to slave */ 133 - #define MS_SHARED (1<<20) /* change to shared */ 134 - #define MS_RELATIME (1<<21) /* Update atime relative to mtime/ctime. 
*/ 135 - #define MS_KERNMOUNT (1<<22) /* this is a kern_mount call */ 136 - #define MS_I_VERSION (1<<23) /* Update inode I_version field */ 137 - #define MS_STRICTATIME (1<<24) /* Always perform atime updates */ 138 - #define MS_LAZYTIME (1<<25) /* Update the on-disk [acm]times lazily */ 139 - 140 - /* These sb flags are internal to the kernel */ 141 - #define MS_SUBMOUNT (1<<26) 142 - #define MS_NOREMOTELOCK (1<<27) 143 - #define MS_NOSEC (1<<28) 144 - #define MS_BORN (1<<29) 145 - #define MS_ACTIVE (1<<30) 146 - #define MS_NOUSER (1<<31) 147 - 148 - /* 149 - * Superblock flags that can be altered by MS_REMOUNT 150 - */ 151 - #define MS_RMT_MASK (MS_RDONLY|MS_SYNCHRONOUS|MS_MANDLOCK|MS_I_VERSION|\ 152 - MS_LAZYTIME) 153 - 154 - /* 155 - * Old magic mount flag and mask 156 - */ 157 - #define MS_MGC_VAL 0xC0ED0000 158 - #define MS_MGC_MSK 0xffff0000 159 103 160 104 /* 161 105 * Structure for FS_IOC_FSGETXATTR[A] and FS_IOC_FSSETXATTR. ··· 223 269 #define FS_POLICY_FLAGS_PAD_16 0x02 224 270 #define FS_POLICY_FLAGS_PAD_32 0x03 225 271 #define FS_POLICY_FLAGS_PAD_MASK 0x03 226 - #define FS_POLICY_FLAGS_VALID 0x03 272 + #define FS_POLICY_FLAG_DIRECT_KEY 0x04 /* use master key directly */ 273 + #define FS_POLICY_FLAGS_VALID 0x07 227 274 228 275 /* Encryption algorithms */ 229 276 #define FS_ENCRYPTION_MODE_INVALID 0 ··· 236 281 #define FS_ENCRYPTION_MODE_AES_128_CTS 6 237 282 #define FS_ENCRYPTION_MODE_SPECK128_256_XTS 7 /* Removed, do not use. */ 238 283 #define FS_ENCRYPTION_MODE_SPECK128_256_CTS 8 /* Removed, do not use. */ 284 + #define FS_ENCRYPTION_MODE_ADIANTUM 9 239 285 240 286 struct fscrypt_policy { 241 287 __u8 version;
+19
tools/include/uapi/linux/if_link.h
··· 288 288 IFLA_BR_MCAST_IGMP_VERSION, 289 289 IFLA_BR_MCAST_MLD_VERSION, 290 290 IFLA_BR_VLAN_STATS_PER_PORT, 291 + IFLA_BR_MULTI_BOOLOPT, 291 292 __IFLA_BR_MAX, 292 293 }; 293 294 ··· 534 533 IFLA_VXLAN_LABEL, 535 534 IFLA_VXLAN_GPE, 536 535 IFLA_VXLAN_TTL_INHERIT, 536 + IFLA_VXLAN_DF, 537 537 __IFLA_VXLAN_MAX 538 538 }; 539 539 #define IFLA_VXLAN_MAX (__IFLA_VXLAN_MAX - 1) ··· 542 540 struct ifla_vxlan_port_range { 543 541 __be16 low; 544 542 __be16 high; 543 + }; 544 + 545 + enum ifla_vxlan_df { 546 + VXLAN_DF_UNSET = 0, 547 + VXLAN_DF_SET, 548 + VXLAN_DF_INHERIT, 549 + __VXLAN_DF_END, 550 + VXLAN_DF_MAX = __VXLAN_DF_END - 1, 545 551 }; 546 552 547 553 /* GENEVE section */ ··· 567 557 IFLA_GENEVE_UDP_ZERO_CSUM6_RX, 568 558 IFLA_GENEVE_LABEL, 569 559 IFLA_GENEVE_TTL_INHERIT, 560 + IFLA_GENEVE_DF, 570 561 __IFLA_GENEVE_MAX 571 562 }; 572 563 #define IFLA_GENEVE_MAX (__IFLA_GENEVE_MAX - 1) 564 + 565 + enum ifla_geneve_df { 566 + GENEVE_DF_UNSET = 0, 567 + GENEVE_DF_SET, 568 + GENEVE_DF_INHERIT, 569 + __GENEVE_DF_END, 570 + GENEVE_DF_MAX = __GENEVE_DF_END - 1, 571 + }; 573 572 574 573 /* PPP section */ 575 574 enum {
+7 -3
tools/include/uapi/linux/in.h
··· 266 266 267 267 #define IN_CLASSD(a) ((((long int) (a)) & 0xf0000000) == 0xe0000000) 268 268 #define IN_MULTICAST(a) IN_CLASSD(a) 269 - #define IN_MULTICAST_NET 0xF0000000 269 + #define IN_MULTICAST_NET 0xe0000000 270 270 271 - #define IN_EXPERIMENTAL(a) ((((long int) (a)) & 0xf0000000) == 0xf0000000) 272 - #define IN_BADCLASS(a) IN_EXPERIMENTAL((a)) 271 + #define IN_BADCLASS(a) (((long int) (a)) == 0xffffffff) 272 + #define IN_EXPERIMENTAL(a) IN_BADCLASS((a)) 273 + 274 + #define IN_CLASSE(a) ((((long int) (a)) & 0xf0000000) == 0xf0000000) 275 + #define IN_CLASSE_NET 0xffffffff 276 + #define IN_CLASSE_NSHIFT 0 273 277 274 278 /* Address to accept any incoming messages. */ 275 279 #define INADDR_ANY ((unsigned long int) 0x00000000)
+19
tools/include/uapi/linux/kvm.h
··· 492 492 }; 493 493 }; 494 494 495 + /* for KVM_CLEAR_DIRTY_LOG */ 496 + struct kvm_clear_dirty_log { 497 + __u32 slot; 498 + __u32 num_pages; 499 + __u64 first_page; 500 + union { 501 + void __user *dirty_bitmap; /* one bit per page */ 502 + __u64 padding2; 503 + }; 504 + }; 505 + 495 506 /* for KVM_SET_SIGNAL_MASK */ 496 507 struct kvm_signal_mask { 497 508 __u32 len; ··· 986 975 #define KVM_CAP_HYPERV_ENLIGHTENED_VMCS 163 987 976 #define KVM_CAP_EXCEPTION_PAYLOAD 164 988 977 #define KVM_CAP_ARM_VM_IPA_SIZE 165 978 + #define KVM_CAP_MANUAL_DIRTY_LOG_PROTECT 166 979 + #define KVM_CAP_HYPERV_CPUID 167 989 980 990 981 #ifdef KVM_CAP_IRQ_ROUTING 991 982 ··· 1433 1420 /* Available with KVM_CAP_NESTED_STATE */ 1434 1421 #define KVM_GET_NESTED_STATE _IOWR(KVMIO, 0xbe, struct kvm_nested_state) 1435 1422 #define KVM_SET_NESTED_STATE _IOW(KVMIO, 0xbf, struct kvm_nested_state) 1423 + 1424 + /* Available with KVM_CAP_MANUAL_DIRTY_LOG_PROTECT */ 1425 + #define KVM_CLEAR_DIRTY_LOG _IOWR(KVMIO, 0xc0, struct kvm_clear_dirty_log) 1426 + 1427 + /* Available with KVM_CAP_HYPERV_CPUID */ 1428 + #define KVM_GET_SUPPORTED_HV_CPUID _IOWR(KVMIO, 0xc1, struct kvm_cpuid2) 1436 1429 1437 1430 /* Secure Encrypted Virtualization command */ 1438 1431 enum sev_cmd_id {
+58
tools/include/uapi/linux/mount.h
··· 1 + #ifndef _UAPI_LINUX_MOUNT_H 2 + #define _UAPI_LINUX_MOUNT_H 3 + 4 + /* 5 + * These are the fs-independent mount-flags: up to 32 flags are supported 6 + * 7 + * Usage of these is restricted within the kernel to core mount(2) code and 8 + * callers of sys_mount() only. Filesystems should be using the SB_* 9 + * equivalent instead. 10 + */ 11 + #define MS_RDONLY 1 /* Mount read-only */ 12 + #define MS_NOSUID 2 /* Ignore suid and sgid bits */ 13 + #define MS_NODEV 4 /* Disallow access to device special files */ 14 + #define MS_NOEXEC 8 /* Disallow program execution */ 15 + #define MS_SYNCHRONOUS 16 /* Writes are synced at once */ 16 + #define MS_REMOUNT 32 /* Alter flags of a mounted FS */ 17 + #define MS_MANDLOCK 64 /* Allow mandatory locks on an FS */ 18 + #define MS_DIRSYNC 128 /* Directory modifications are synchronous */ 19 + #define MS_NOATIME 1024 /* Do not update access times. */ 20 + #define MS_NODIRATIME 2048 /* Do not update directory access times */ 21 + #define MS_BIND 4096 22 + #define MS_MOVE 8192 23 + #define MS_REC 16384 24 + #define MS_VERBOSE 32768 /* War is peace. Verbosity is silence. 25 + MS_VERBOSE is deprecated. */ 26 + #define MS_SILENT 32768 27 + #define MS_POSIXACL (1<<16) /* VFS does not apply the umask */ 28 + #define MS_UNBINDABLE (1<<17) /* change to unbindable */ 29 + #define MS_PRIVATE (1<<18) /* change to private */ 30 + #define MS_SLAVE (1<<19) /* change to slave */ 31 + #define MS_SHARED (1<<20) /* change to shared */ 32 + #define MS_RELATIME (1<<21) /* Update atime relative to mtime/ctime. 
*/ 33 + #define MS_KERNMOUNT (1<<22) /* this is a kern_mount call */ 34 + #define MS_I_VERSION (1<<23) /* Update inode I_version field */ 35 + #define MS_STRICTATIME (1<<24) /* Always perform atime updates */ 36 + #define MS_LAZYTIME (1<<25) /* Update the on-disk [acm]times lazily */ 37 + 38 + /* These sb flags are internal to the kernel */ 39 + #define MS_SUBMOUNT (1<<26) 40 + #define MS_NOREMOTELOCK (1<<27) 41 + #define MS_NOSEC (1<<28) 42 + #define MS_BORN (1<<29) 43 + #define MS_ACTIVE (1<<30) 44 + #define MS_NOUSER (1<<31) 45 + 46 + /* 47 + * Superblock flags that can be altered by MS_REMOUNT 48 + */ 49 + #define MS_RMT_MASK (MS_RDONLY|MS_SYNCHRONOUS|MS_MANDLOCK|MS_I_VERSION|\ 50 + MS_LAZYTIME) 51 + 52 + /* 53 + * Old magic mount flag and mask 54 + */ 55 + #define MS_MGC_VAL 0xC0ED0000 56 + #define MS_MGC_MSK 0xffff0000 57 + 58 + #endif /* _UAPI_LINUX_MOUNT_H */
+8
tools/include/uapi/linux/prctl.h
··· 220 220 # define PR_SPEC_DISABLE (1UL << 2) 221 221 # define PR_SPEC_FORCE_DISABLE (1UL << 3) 222 222 223 + /* Reset arm64 pointer authentication keys */ 224 + #define PR_PAC_RESET_KEYS 54 225 + # define PR_PAC_APIAKEY (1UL << 0) 226 + # define PR_PAC_APIBKEY (1UL << 1) 227 + # define PR_PAC_APDAKEY (1UL << 2) 228 + # define PR_PAC_APDBKEY (1UL << 3) 229 + # define PR_PAC_APGAKEY (1UL << 4) 230 + 223 231 #endif /* _LINUX_PRCTL_H */
+2 -111
tools/include/uapi/linux/vhost.h
··· 11 11 * device configuration. 12 12 */ 13 13 14 + #include <linux/vhost_types.h> 14 15 #include <linux/types.h> 15 - #include <linux/compiler.h> 16 16 #include <linux/ioctl.h> 17 - #include <linux/virtio_config.h> 18 - #include <linux/virtio_ring.h> 19 - 20 - struct vhost_vring_state { 21 - unsigned int index; 22 - unsigned int num; 23 - }; 24 - 25 - struct vhost_vring_file { 26 - unsigned int index; 27 - int fd; /* Pass -1 to unbind from file. */ 28 - 29 - }; 30 - 31 - struct vhost_vring_addr { 32 - unsigned int index; 33 - /* Option flags. */ 34 - unsigned int flags; 35 - /* Flag values: */ 36 - /* Whether log address is valid. If set enables logging. */ 37 - #define VHOST_VRING_F_LOG 0 38 - 39 - /* Start of array of descriptors (virtually contiguous) */ 40 - __u64 desc_user_addr; 41 - /* Used structure address. Must be 32 bit aligned */ 42 - __u64 used_user_addr; 43 - /* Available structure address. Must be 16 bit aligned */ 44 - __u64 avail_user_addr; 45 - /* Logging support. */ 46 - /* Log writes to used structure, at offset calculated from specified 47 - * address. Address must be 32 bit aligned. 
*/ 48 - __u64 log_guest_addr; 49 - }; 50 - 51 - /* no alignment requirement */ 52 - struct vhost_iotlb_msg { 53 - __u64 iova; 54 - __u64 size; 55 - __u64 uaddr; 56 - #define VHOST_ACCESS_RO 0x1 57 - #define VHOST_ACCESS_WO 0x2 58 - #define VHOST_ACCESS_RW 0x3 59 - __u8 perm; 60 - #define VHOST_IOTLB_MISS 1 61 - #define VHOST_IOTLB_UPDATE 2 62 - #define VHOST_IOTLB_INVALIDATE 3 63 - #define VHOST_IOTLB_ACCESS_FAIL 4 64 - __u8 type; 65 - }; 66 - 67 - #define VHOST_IOTLB_MSG 0x1 68 - #define VHOST_IOTLB_MSG_V2 0x2 69 - 70 - struct vhost_msg { 71 - int type; 72 - union { 73 - struct vhost_iotlb_msg iotlb; 74 - __u8 padding[64]; 75 - }; 76 - }; 77 - 78 - struct vhost_msg_v2 { 79 - __u32 type; 80 - __u32 reserved; 81 - union { 82 - struct vhost_iotlb_msg iotlb; 83 - __u8 padding[64]; 84 - }; 85 - }; 86 - 87 - struct vhost_memory_region { 88 - __u64 guest_phys_addr; 89 - __u64 memory_size; /* bytes */ 90 - __u64 userspace_addr; 91 - __u64 flags_padding; /* No flags are currently specified. */ 92 - }; 93 - 94 - /* All region addresses and sizes must be 4K aligned. */ 95 - #define VHOST_PAGE_SIZE 0x1000 96 - 97 - struct vhost_memory { 98 - __u32 nregions; 99 - __u32 padding; 100 - struct vhost_memory_region regions[0]; 101 - }; 102 17 103 18 /* ioctls */ 104 19 ··· 101 186 * device. This can be used to stop the ring (e.g. for migration). */ 102 187 #define VHOST_NET_SET_BACKEND _IOW(VHOST_VIRTIO, 0x30, struct vhost_vring_file) 103 188 104 - /* Feature bits */ 105 - /* Log all write descriptors. Can be changed while device is active. */ 106 - #define VHOST_F_LOG_ALL 26 107 - /* vhost-net should add virtio_net_hdr for RX, and strip for TX packets. */ 108 - #define VHOST_NET_F_VIRTIO_NET_HDR 27 109 - 110 - /* VHOST_SCSI specific definitions */ 111 - 112 - /* 113 - * Used by QEMU userspace to ensure a consistent vhost-scsi ABI. 114 - * 115 - * ABI Rev 0: July 2012 version starting point for v3.6-rc merge candidate + 116 - * RFC-v2 vhost-scsi userspace. 
Add GET_ABI_VERSION ioctl usage 117 - * ABI Rev 1: January 2013. Ignore vhost_tpgt filed in struct vhost_scsi_target. 118 - * All the targets under vhost_wwpn can be seen and used by guset. 119 - */ 120 - 121 - #define VHOST_SCSI_ABI_VERSION 1 122 - 123 - struct vhost_scsi_target { 124 - int abi_version; 125 - char vhost_wwpn[224]; /* TRANSPORT_IQN_LEN */ 126 - unsigned short vhost_tpgt; 127 - unsigned short reserved; 128 - }; 189 + /* VHOST_SCSI specific defines */ 129 190 130 191 #define VHOST_SCSI_SET_ENDPOINT _IOW(VHOST_VIRTIO, 0x40, struct vhost_scsi_target) 131 192 #define VHOST_SCSI_CLEAR_ENDPOINT _IOW(VHOST_VIRTIO, 0x41, struct vhost_scsi_target)
+2 -2
tools/lib/traceevent/event-parse-api.c
··· 194 194 } 195 195 196 196 /** 197 - * tep_is_file_bigendian - get if the file is in big endian order 197 + * tep_file_bigendian - get if the file is in big endian order 198 198 * @pevent: a handle to the tep_handle 199 199 * 200 200 * This returns if the file is in big endian order 201 201 * If @pevent is NULL, 0 is returned. 202 202 */ 203 - int tep_is_file_bigendian(struct tep_handle *pevent) 203 + int tep_file_bigendian(struct tep_handle *pevent) 204 204 { 205 205 if(pevent) 206 206 return pevent->file_bigendian;
+2 -2
tools/lib/traceevent/event-parse-local.h
··· 7 7 #ifndef _PARSE_EVENTS_INT_H 8 8 #define _PARSE_EVENTS_INT_H 9 9 10 - struct cmdline; 10 + struct tep_cmdline; 11 11 struct cmdline_list; 12 12 struct func_map; 13 13 struct func_list; ··· 36 36 int long_size; 37 37 int page_size; 38 38 39 - struct cmdline *cmdlines; 39 + struct tep_cmdline *cmdlines; 40 40 struct cmdline_list *cmdlist; 41 41 int cmdline_count; 42 42
+82 -47
tools/lib/traceevent/event-parse.c
··· 124 124 return calloc(1, sizeof(struct tep_print_arg)); 125 125 } 126 126 127 - struct cmdline { 127 + struct tep_cmdline { 128 128 char *comm; 129 129 int pid; 130 130 }; 131 131 132 132 static int cmdline_cmp(const void *a, const void *b) 133 133 { 134 - const struct cmdline *ca = a; 135 - const struct cmdline *cb = b; 134 + const struct tep_cmdline *ca = a; 135 + const struct tep_cmdline *cb = b; 136 136 137 137 if (ca->pid < cb->pid) 138 138 return -1; ··· 152 152 { 153 153 struct cmdline_list *cmdlist = pevent->cmdlist; 154 154 struct cmdline_list *item; 155 - struct cmdline *cmdlines; 155 + struct tep_cmdline *cmdlines; 156 156 int i; 157 157 158 158 cmdlines = malloc(sizeof(*cmdlines) * pevent->cmdline_count); ··· 179 179 180 180 static const char *find_cmdline(struct tep_handle *pevent, int pid) 181 181 { 182 - const struct cmdline *comm; 183 - struct cmdline key; 182 + const struct tep_cmdline *comm; 183 + struct tep_cmdline key; 184 184 185 185 if (!pid) 186 186 return "<idle>"; ··· 208 208 */ 209 209 int tep_pid_is_registered(struct tep_handle *pevent, int pid) 210 210 { 211 - const struct cmdline *comm; 212 - struct cmdline key; 211 + const struct tep_cmdline *comm; 212 + struct tep_cmdline key; 213 213 214 214 if (!pid) 215 215 return 1; ··· 232 232 * we must add this pid. This is much slower than when cmdlines 233 233 * are added before the array is initialized. 
234 234 */ 235 - static int add_new_comm(struct tep_handle *pevent, const char *comm, int pid) 235 + static int add_new_comm(struct tep_handle *pevent, 236 + const char *comm, int pid, bool override) 236 237 { 237 - struct cmdline *cmdlines = pevent->cmdlines; 238 - const struct cmdline *cmdline; 239 - struct cmdline key; 238 + struct tep_cmdline *cmdlines = pevent->cmdlines; 239 + struct tep_cmdline *cmdline; 240 + struct tep_cmdline key; 241 + char *new_comm; 240 242 241 243 if (!pid) 242 244 return 0; ··· 249 247 cmdline = bsearch(&key, pevent->cmdlines, pevent->cmdline_count, 250 248 sizeof(*pevent->cmdlines), cmdline_cmp); 251 249 if (cmdline) { 252 - errno = EEXIST; 253 - return -1; 250 + if (!override) { 251 + errno = EEXIST; 252 + return -1; 253 + } 254 + new_comm = strdup(comm); 255 + if (!new_comm) { 256 + errno = ENOMEM; 257 + return -1; 258 + } 259 + free(cmdline->comm); 260 + cmdline->comm = new_comm; 261 + 262 + return 0; 254 263 } 255 264 256 265 cmdlines = realloc(cmdlines, sizeof(*cmdlines) * (pevent->cmdline_count + 1)); ··· 288 275 return 0; 289 276 } 290 277 291 - /** 292 - * tep_register_comm - register a pid / comm mapping 293 - * @pevent: handle for the pevent 294 - * @comm: the command line to register 295 - * @pid: the pid to map the command line to 296 - * 297 - * This adds a mapping to search for command line names with 298 - * a given pid. The comm is duplicated. 
299 - */ 300 - int tep_register_comm(struct tep_handle *pevent, const char *comm, int pid) 278 + static int _tep_register_comm(struct tep_handle *pevent, 279 + const char *comm, int pid, bool override) 301 280 { 302 281 struct cmdline_list *item; 303 282 304 283 if (pevent->cmdlines) 305 - return add_new_comm(pevent, comm, pid); 284 + return add_new_comm(pevent, comm, pid, override); 306 285 307 286 item = malloc(sizeof(*item)); 308 287 if (!item) ··· 315 310 pevent->cmdline_count++; 316 311 317 312 return 0; 313 + } 314 + 315 + /** 316 + * tep_register_comm - register a pid / comm mapping 317 + * @pevent: handle for the pevent 318 + * @comm: the command line to register 319 + * @pid: the pid to map the command line to 320 + * 321 + * This adds a mapping to search for command line names with 322 + * a given pid. The comm is duplicated. If a command with the same pid 323 + * already exists, -1 is returned and errno is set to EEXIST 324 + */ 325 + int tep_register_comm(struct tep_handle *pevent, const char *comm, int pid) 326 + { 327 + return _tep_register_comm(pevent, comm, pid, false); 328 + } 329 + 330 + /** 331 + * tep_override_comm - register a pid / comm mapping 332 + * @pevent: handle for the pevent 333 + * @comm: the command line to register 334 + * @pid: the pid to map the command line to 335 + * 336 + * This adds a mapping to search for command line names with 337 + * a given pid. The comm is duplicated. 
If a command with the same pid 338 + * already exists, the command string is updated with the new one 339 + */ 340 + int tep_override_comm(struct tep_handle *pevent, const char *comm, int pid) 341 + { 342 + if (!pevent->cmdlines && cmdline_init(pevent)) { 343 + errno = ENOMEM; 344 + return -1; 345 + } 346 + return _tep_register_comm(pevent, comm, pid, true); 318 347 } 319 348 320 349 int tep_register_trace_clock(struct tep_handle *pevent, const char *trace_clock) ··· 5266 5227 } 5267 5228 5268 5229 /** 5269 - * tep_data_event_from_type - find the event by a given type 5270 - * @pevent: a handle to the pevent 5271 - * @type: the type of the event. 5272 - * 5273 - * This returns the event form a given @type; 5274 - */ 5275 - struct tep_event *tep_data_event_from_type(struct tep_handle *pevent, int type) 5276 - { 5277 - return tep_find_event(pevent, type); 5278 - } 5279 - 5280 - /** 5281 5230 * tep_data_pid - parse the PID from record 5282 5231 * @pevent: a handle to the pevent 5283 5232 * @rec: the record to parse ··· 5319 5292 return comm; 5320 5293 } 5321 5294 5322 - static struct cmdline * 5323 - pid_from_cmdlist(struct tep_handle *pevent, const char *comm, struct cmdline *next) 5295 + static struct tep_cmdline * 5296 + pid_from_cmdlist(struct tep_handle *pevent, const char *comm, struct tep_cmdline *next) 5324 5297 { 5325 5298 struct cmdline_list *cmdlist = (struct cmdline_list *)next; 5326 5299 ··· 5332 5305 while (cmdlist && strcmp(cmdlist->comm, comm) != 0) 5333 5306 cmdlist = cmdlist->next; 5334 5307 5335 - return (struct cmdline *)cmdlist; 5308 + return (struct tep_cmdline *)cmdlist; 5336 5309 } 5337 5310 5338 5311 /**
5350 5323 */ 5351 - struct cmdline *tep_data_pid_from_comm(struct tep_handle *pevent, const char *comm, 5352 - struct cmdline *next) 5324 + struct tep_cmdline *tep_data_pid_from_comm(struct tep_handle *pevent, const char *comm, 5325 + struct tep_cmdline *next) 5353 5326 { 5354 - struct cmdline *cmdline; 5327 + struct tep_cmdline *cmdline; 5355 5328 5356 5329 /* 5357 5330 * If the cmdlines have not been converted yet, then use ··· 5390 5363 * Returns the pid for a give cmdline. If @cmdline is NULL, then 5391 5364 * -1 is returned. 5392 5365 */ 5393 - int tep_cmdline_pid(struct tep_handle *pevent, struct cmdline *cmdline) 5366 + int tep_cmdline_pid(struct tep_handle *pevent, struct tep_cmdline *cmdline) 5394 5367 { 5395 5368 struct cmdline_list *cmdlist = (struct cmdline_list *)cmdline; 5396 5369 ··· 6620 6593 * 6621 6594 * If @id is >= 0, then it is used to find the event. 6622 6595 * else @sys_name and @event_name are used. 6596 + * 6597 + * Returns: 6598 + * TEP_REGISTER_SUCCESS_OVERWRITE if an existing handler is overwritten 6599 + * TEP_REGISTER_SUCCESS if a new handler is registered successfully 6600 + * negative TEP_ERRNO_... in case of an error 6601 + * 6623 6602 */ 6624 6603 int tep_register_event_handler(struct tep_handle *pevent, int id, 6625 6604 const char *sys_name, const char *event_name, ··· 6643 6610 6644 6611 event->handler = func; 6645 6612 event->context = context; 6646 - return 0; 6613 + return TEP_REGISTER_SUCCESS_OVERWRITE; 6647 6614 6648 6615 not_found: 6649 6616 /* Save for later use. 
*/ ··· 6673 6640 pevent->handlers = handle; 6674 6641 handle->context = context; 6675 6642 6676 - return -1; 6643 + return TEP_REGISTER_SUCCESS; 6677 6644 } 6678 6645 6679 6646 static int handle_matches(struct event_handler *handler, int id, ··· 6756 6723 { 6757 6724 struct tep_handle *pevent = calloc(1, sizeof(*pevent)); 6758 6725 6759 - if (pevent) 6726 + if (pevent) { 6760 6727 pevent->ref_count = 1; 6728 + pevent->host_bigendian = tep_host_bigendian(); 6729 + } 6761 6730 6762 6731 return pevent; 6763 6732 }
+11 -6
tools/lib/traceevent/event-parse.h
···
432 432 					   tep_func_resolver_t *func, void *priv);
433 433 void tep_reset_function_resolver(struct tep_handle *pevent);
434 434 int tep_register_comm(struct tep_handle *pevent, const char *comm, int pid);
435 + int tep_override_comm(struct tep_handle *pevent, const char *comm, int pid);
435 436 int tep_register_trace_clock(struct tep_handle *pevent, const char *trace_clock);
436 437 int tep_register_function(struct tep_handle *pevent, char *name,
437 438 			  unsigned long long addr, char *mod);
···
485 484 				struct tep_event *event, const char *name,
486 485 				struct tep_record *record, int err);
487 486 
487 + enum tep_reg_handler {
488 + 	TEP_REGISTER_SUCCESS = 0,
489 + 	TEP_REGISTER_SUCCESS_OVERWRITE,
490 + };
491 + 
488 492 int tep_register_event_handler(struct tep_handle *pevent, int id,
489 493 			       const char *sys_name, const char *event_name,
490 494 			       tep_event_handler_func func, void *context);
···
526 520 void tep_data_lat_fmt(struct tep_handle *pevent,
527 521 		      struct trace_seq *s, struct tep_record *record);
528 522 int tep_data_type(struct tep_handle *pevent, struct tep_record *rec);
529 - struct tep_event *tep_data_event_from_type(struct tep_handle *pevent, int type);
530 523 int tep_data_pid(struct tep_handle *pevent, struct tep_record *rec);
531 524 int tep_data_preempt_count(struct tep_handle *pevent, struct tep_record *rec);
532 525 int tep_data_flags(struct tep_handle *pevent, struct tep_record *rec);
533 526 const char *tep_data_comm_from_pid(struct tep_handle *pevent, int pid);
534 - struct cmdline;
535 - struct cmdline *tep_data_pid_from_comm(struct tep_handle *pevent, const char *comm,
536 - 				       struct cmdline *next);
537 - int tep_cmdline_pid(struct tep_handle *pevent, struct cmdline *cmdline);
527 + struct tep_cmdline;
528 + struct tep_cmdline *tep_data_pid_from_comm(struct tep_handle *pevent, const char *comm,
529 + 					   struct tep_cmdline *next);
530 + int tep_cmdline_pid(struct tep_handle *pevent, struct tep_cmdline *cmdline);
538 531 
539 532 void tep_print_field(struct trace_seq *s, void *data,
540 533 		     struct tep_format_field *field);
···
558 553 void tep_set_long_size(struct tep_handle *pevent, int long_size);
559 554 int tep_get_page_size(struct tep_handle *pevent);
560 555 void tep_set_page_size(struct tep_handle *pevent, int _page_size);
561 - int tep_is_file_bigendian(struct tep_handle *pevent);
556 + int tep_file_bigendian(struct tep_handle *pevent);
562 557 void tep_set_file_bigendian(struct tep_handle *pevent, enum tep_endian endian);
563 558 int tep_is_host_bigendian(struct tep_handle *pevent);
564 559 void tep_set_host_bigendian(struct tep_handle *pevent, enum tep_endian endian);
+1 -1
tools/lib/traceevent/plugin_kvm.c
···
389 389 	 * We can only use the structure if file is of the same
390 390 	 * endianness.
391 391 	 */
392 - 	if (tep_is_file_bigendian(event->pevent) ==
392 + 	if (tep_file_bigendian(event->pevent) ==
393 393 	    tep_is_host_bigendian(event->pevent)) {
394 394 
395 395 		trace_seq_printf(s, "%u q%u%s %s%s %spae %snxe %swp%s%s%s",
+12 -5
tools/lib/traceevent/trace-seq.c
···
100 100  * @fmt: printf format string
101 101  *
102 102  * It returns 0 if the trace oversizes the buffer's free
103 - * space, 1 otherwise.
103 + * space, the number of characters printed, or a negative
104 + * value in case of an error.
104 105  *
105 106  * The tracer may use either sequence operations or its own
106 107  * copy to user routines. To simplify formating of a trace
···
130 129 		goto try_again;
131 130 	}
132 131 
133 - 	s->len += ret;
132 + 	if (ret > 0)
133 + 		s->len += ret;
134 134 
135 - 	return 1;
135 + 	return ret;
136 136 }
137 137 
138 138 /**
···
141 139  * @s: trace sequence descriptor
142 140  * @fmt: printf format string
143 141  *
142 + * It returns 0 if the trace oversizes the buffer's free
143 + * space, the number of characters printed, or a negative
144 + * value in case of an error.
145 + * *
144 146  * The tracer may use either sequence operations or its own
145 147  * copy to user routines. To simplify formating of a trace
146 148  * trace_seq_printf is used to store strings into a special
···
169 163 		goto try_again;
170 164 	}
171 165 
172 - 	s->len += ret;
166 + 	if (ret > 0)
167 + 		s->len += ret;
173 168 
174 - 	return len;
169 + 	return ret;
175 170 }
176 171 
177 172 /**
+5 -3
tools/perf/Makefile.perf
···
524 524 
525 525 all: shell_compatibility_test $(ALL_PROGRAMS) $(LANG_BINDINGS) $(OTHER_PROGRAMS)
526 526 
527 + # Create python binding output directory if not already present
528 + _dummy := $(shell [ -d '$(OUTPUT)python' ] || mkdir -p '$(OUTPUT)python')
529 + 
527 530 $(OUTPUT)python/perf.so: $(PYTHON_EXT_SRCS) $(PYTHON_EXT_DEPS) $(LIBTRACEEVENT_DYNAMIC_LIST)
528 531 	$(QUIET_GEN)LDSHARED="$(CC) -pthread -shared" \
529 532 	  CFLAGS='$(CFLAGS)' LDFLAGS='$(LDFLAGS) $(LIBTRACEEVENT_DYNAMIC_LIST_LDFLAGS)' \
530 533 	  $(PYTHON_WORD) util/setup.py \
531 534 	  --quiet build_ext; \
532 - 	mkdir -p $(OUTPUT)python && \
533 535 	cp $(PYTHON_EXTBUILD_LIB)perf*.so $(OUTPUT)python/
534 536 
535 537 please_set_SHELL_PATH_to_a_more_modern_shell:
···
662 660 	$(QUIET_LINK)$(CC) $(CFLAGS) -o $@ $(LDFLAGS) $(filter %.o,$^) $(LIBS)
663 661 
664 662 ifndef NO_PERF_READ_VDSO32
665 - $(OUTPUT)perf-read-vdso32: perf-read-vdso.c util/find-vdso-map.c
663 + $(OUTPUT)perf-read-vdso32: perf-read-vdso.c util/find-map.c
666 664 	$(QUIET_CC)$(CC) -m32 $(filter -static,$(LDFLAGS)) -Wall -Werror -o $@ perf-read-vdso.c
667 665 endif
668 666 
669 667 ifndef NO_PERF_READ_VDSOX32
670 - $(OUTPUT)perf-read-vdsox32: perf-read-vdso.c util/find-vdso-map.c
668 + $(OUTPUT)perf-read-vdsox32: perf-read-vdso.c util/find-map.c
671 669 	$(QUIET_CC)$(CC) -mx32 $(filter -static,$(LDFLAGS)) -Wall -Werror -o $@ perf-read-vdso.c
672 670 endif
673 671 
+1
tools/perf/arch/arm/tests/Build
···
1 1 libperf-y += regs_load.o
2 2 libperf-y += dwarf-unwind.o
3 + libperf-y += vectors-page.o
3 4 
4 5 libperf-y += arch-tests.o
+4
tools/perf/arch/arm/tests/arch-tests.c
···
11 11 	},
12 12 #endif
13 13 	{
14 + 		.desc = "Vectors page",
15 + 		.func = test__vectors_page,
16 + 	},
17 + 	{
14 18 		.func = NULL,
15 19 	},
16 20 };
+24
tools/perf/arch/arm/tests/vectors-page.c
···
1 + // SPDX-License-Identifier: GPL-2.0
2 + #include <stdio.h>
3 + #include <string.h>
4 + #include <linux/compiler.h>
5 + 
6 + #include "debug.h"
7 + #include "tests/tests.h"
8 + #include "util/find-map.c"
9 + 
10 + #define VECTORS__MAP_NAME "[vectors]"
11 + 
12 + int test__vectors_page(struct test *test __maybe_unused,
13 + 		       int subtest __maybe_unused)
14 + {
15 + 	void *start, *end;
16 + 
17 + 	if (find_map(&start, &end, VECTORS__MAP_NAME)) {
18 + 		pr_err("%s not found, is CONFIG_KUSER_HELPERS enabled?\n",
19 + 		       VECTORS__MAP_NAME);
20 + 		return TEST_FAIL;
21 + 	}
22 + 
23 + 	return TEST_OK;
24 + }
+11 -4
tools/perf/arch/powerpc/Makefile
···
14 14 out      := $(OUTPUT)arch/powerpc/include/generated/asm
15 15 header32 := $(out)/syscalls_32.c
16 16 header64 := $(out)/syscalls_64.c
17 - sysdef := $(srctree)/tools/arch/powerpc/include/uapi/asm/unistd.h
18 - sysprf := $(srctree)/tools/perf/arch/powerpc/entry/syscalls/
17 + syskrn := $(srctree)/arch/powerpc/kernel/syscalls/syscall.tbl
18 + sysprf := $(srctree)/tools/perf/arch/powerpc/entry/syscalls
19 + sysdef := $(sysprf)/syscall.tbl
19 20 systbl := $(sysprf)/mksyscalltbl
20 21 
21 22 # Create output directory if not already present
22 23 _dummy := $(shell [ -d '$(out)' ] || mkdir -p '$(out)')
23 24 
24 25 $(header64): $(sysdef) $(systbl)
25 - 	$(Q)$(SHELL) '$(systbl)' '64' '$(CC)' $(sysdef) > $@
26 + 	@(test -d ../../kernel -a -d ../../tools -a -d ../perf && ( \
27 + 	(diff -B $(sysdef) $(syskrn) >/dev/null) \
28 + 	|| echo "Warning: Kernel ABI header at '$(sysdef)' differs from latest version at '$(syskrn)'" >&2 )) || true
29 + 	$(Q)$(SHELL) '$(systbl)' '64' $(sysdef) > $@
26 30 
27 31 $(header32): $(sysdef) $(systbl)
28 - 	$(Q)$(SHELL) '$(systbl)' '32' '$(CC)' $(sysdef) > $@
32 + 	@(test -d ../../kernel -a -d ../../tools -a -d ../perf && ( \
33 + 	(diff -B $(sysdef) $(syskrn) >/dev/null) \
34 + 	|| echo "Warning: Kernel ABI header at '$(sysdef)' differs from latest version at '$(syskrn)'" >&2 )) || true
35 + 	$(Q)$(SHELL) '$(systbl)' '32' $(sysdef) > $@
29 36 
30 37 clean::
31 38 	$(call QUIET_CLEAN, powerpc) $(RM) $(header32) $(header64)
+12 -10
tools/perf/arch/powerpc/entry/syscalls/mksyscalltbl
···
9 9 # Changed by: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
10 10 
11 11 wordsize=$1
12 - gcc=$2
13 - input=$3
12 + SYSCALL_TBL=$2
14 13 
15 - if ! test -r $input; then
14 + if ! test -r $SYSCALL_TBL; then
16 15 	echo "Could not read input file" >&2
17 16 	exit 1
18 17 fi
···
19 20 create_table()
20 21 {
21 22 	local wordsize=$1
22 - 	local max_nr
23 + 	local max_nr nr abi sc discard
24 + 	max_nr=-1
25 + 	nr=0
23 26 
24 27 	echo "static const char *syscalltbl_powerpc_${wordsize}[] = {"
25 - 	while read sc nr; do
26 - 		printf '\t[%d] = "%s",\n' $nr $sc
27 - 		max_nr=$nr
28 + 	while read nr abi sc discard; do
29 + 		if [ "$max_nr" -lt "$nr" ]; then
30 + 			printf '\t[%d] = "%s",\n' $nr $sc
31 + 			max_nr=$nr
32 + 		fi
28 33 	done
29 34 	echo '};'
30 35 	echo "#define SYSCALLTBL_POWERPC_${wordsize}_MAX_ID $max_nr"
31 36 }
32 37 
33 - $gcc -m${wordsize} -E -dM -x c $input \
34 - 	|sed -ne 's/^#define __NR_//p' \
35 - 	|sort -t' ' -k2 -nu \
38 + grep -E "^[[:digit:]]+[[:space:]]+(common|spu|nospu|${wordsize})" $SYSCALL_TBL \
39 + 	|sort -k1 -n \
36 40 	|create_table ${wordsize}
+427
tools/perf/arch/powerpc/entry/syscalls/syscall.tbl
···
1 + # SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note
2 + #
3 + # system call numbers and entry vectors for powerpc
4 + #
5 + # The format is:
6 + # <number> <abi> <name> <entry point> <compat entry point>
7 + #
8 + # The <abi> can be common, spu, nospu, 64, or 32 for this file.
9 + #
10 + 0	nospu	restart_syscall		sys_restart_syscall
11 + 1	nospu	exit			sys_exit
12 + 2	nospu	fork			ppc_fork
13 + 3	common	read			sys_read
14 + 4	common	write			sys_write
15 + 5	common	open			sys_open		compat_sys_open
16 + 6	common	close			sys_close
17 + 7	common	waitpid			sys_waitpid
18 + 8	common	creat			sys_creat
19 + 9	common	link			sys_link
20 + 10	common	unlink			sys_unlink
21 + 11	nospu	execve			sys_execve		compat_sys_execve
22 + 12	common	chdir			sys_chdir
23 + 13	common	time			sys_time		compat_sys_time
24 + 14	common	mknod			sys_mknod
25 + 15	common	chmod			sys_chmod
26 + 16	common	lchown			sys_lchown
27 + 17	common	break			sys_ni_syscall
28 + 18	32	oldstat			sys_stat		sys_ni_syscall
29 + 18	64	oldstat			sys_ni_syscall
30 + 18	spu	oldstat			sys_ni_syscall
31 + 19	common	lseek			sys_lseek		compat_sys_lseek
32 + 20	common	getpid			sys_getpid
33 + 21	nospu	mount			sys_mount		compat_sys_mount
34 + 22	32	umount			sys_oldumount
35 + 22	64	umount			sys_ni_syscall
36 + 22	spu	umount			sys_ni_syscall
37 + 23	common	setuid			sys_setuid
38 + 24	common	getuid			sys_getuid
39 + 25	common	stime			sys_stime		compat_sys_stime
40 + 26	nospu	ptrace			sys_ptrace		compat_sys_ptrace
41 + 27	common	alarm			sys_alarm
42 + 28	32	oldfstat		sys_fstat		sys_ni_syscall
43 + 28	64	oldfstat		sys_ni_syscall
44 + 28	spu	oldfstat		sys_ni_syscall
45 + 29	nospu	pause			sys_pause
46 + 30	nospu	utime			sys_utime		compat_sys_utime
47 + 31	common	stty			sys_ni_syscall
48 + 32	common	gtty			sys_ni_syscall
49 + 33	common	access			sys_access
50 + 34	common	nice			sys_nice
51 + 35	common	ftime			sys_ni_syscall
52 + 36	common	sync			sys_sync
53 + 37	common	kill			sys_kill
54 + 38	common	rename			sys_rename
55 + 39	common	mkdir			sys_mkdir
56 + 40	common	rmdir			sys_rmdir
57 + 41	common	dup			sys_dup
58 + 42	common	pipe			sys_pipe
59 + 43	common	times			sys_times		compat_sys_times
60 + 44	common	prof			sys_ni_syscall
61 + 45	common	brk			sys_brk
62 + 46	common	setgid			sys_setgid
63 + 47	common	getgid			sys_getgid
64 + 48	nospu	signal			sys_signal
65 + 49	common	geteuid			sys_geteuid
66 + 50	common	getegid			sys_getegid
67 + 51	nospu	acct			sys_acct
68 + 52	nospu	umount2			sys_umount
69 + 53	common	lock			sys_ni_syscall
70 + 54	common	ioctl			sys_ioctl		compat_sys_ioctl
71 + 55	common	fcntl			sys_fcntl		compat_sys_fcntl
72 + 56	common	mpx			sys_ni_syscall
73 + 57	common	setpgid			sys_setpgid
74 + 58	common	ulimit			sys_ni_syscall
75 + 59	32	oldolduname		sys_olduname
76 + 59	64	oldolduname		sys_ni_syscall
77 + 59	spu	oldolduname		sys_ni_syscall
78 + 60	common	umask			sys_umask
79 + 61	common	chroot			sys_chroot
80 + 62	nospu	ustat			sys_ustat		compat_sys_ustat
81 + 63	common	dup2			sys_dup2
82 + 64	common	getppid			sys_getppid
83 + 65	common	getpgrp			sys_getpgrp
84 + 66	common	setsid			sys_setsid
85 + 67	32	sigaction		sys_sigaction		compat_sys_sigaction
86 + 67	64	sigaction		sys_ni_syscall
87 + 67	spu	sigaction		sys_ni_syscall
88 + 68	common	sgetmask		sys_sgetmask
89 + 69	common	ssetmask		sys_ssetmask
90 + 70	common	setreuid		sys_setreuid
91 + 71	common	setregid		sys_setregid
92 + 72	32	sigsuspend		sys_sigsuspend
93 + 72	64	sigsuspend		sys_ni_syscall
94 + 72	spu	sigsuspend		sys_ni_syscall
95 + 73	32	sigpending		sys_sigpending		compat_sys_sigpending
96 + 73	64	sigpending		sys_ni_syscall
97 + 73	spu	sigpending		sys_ni_syscall
98 + 74	common	sethostname		sys_sethostname
99 + 75	common	setrlimit		sys_setrlimit		compat_sys_setrlimit
100 + 76	32	getrlimit		sys_old_getrlimit	compat_sys_old_getrlimit
101 + 76	64	getrlimit		sys_ni_syscall
102 + 76	spu	getrlimit		sys_ni_syscall
103 + 77	common	getrusage		sys_getrusage		compat_sys_getrusage
104 + 78	common	gettimeofday		sys_gettimeofday	compat_sys_gettimeofday
105 + 79	common	settimeofday		sys_settimeofday	compat_sys_settimeofday
106 + 80	common	getgroups		sys_getgroups
107 + 81	common	setgroups		sys_setgroups
108 + 82	32	select			ppc_select		sys_ni_syscall
109 + 82	64	select			sys_ni_syscall
110 + 82	spu	select			sys_ni_syscall
111 + 83	common	symlink			sys_symlink
112 + 84	32	oldlstat		sys_lstat		sys_ni_syscall
113 + 84	64	oldlstat		sys_ni_syscall
114 + 84	spu	oldlstat		sys_ni_syscall
115 + 85	common	readlink		sys_readlink
116 + 86	nospu	uselib			sys_uselib
117 + 87	nospu	swapon			sys_swapon
118 + 88	nospu	reboot			sys_reboot
119 + 89	32	readdir			sys_old_readdir		compat_sys_old_readdir
120 + 89	64	readdir			sys_ni_syscall
121 + 89	spu	readdir			sys_ni_syscall
122 + 90	common	mmap			sys_mmap
123 + 91	common	munmap			sys_munmap
124 + 92	common	truncate		sys_truncate		compat_sys_truncate
125 + 93	common	ftruncate		sys_ftruncate		compat_sys_ftruncate
126 + 94	common	fchmod			sys_fchmod
127 + 95	common	fchown			sys_fchown
128 + 96	common	getpriority		sys_getpriority
129 + 97	common	setpriority		sys_setpriority
130 + 98	common	profil			sys_ni_syscall
131 + 99	nospu	statfs			sys_statfs		compat_sys_statfs
132 + 100	nospu	fstatfs			sys_fstatfs		compat_sys_fstatfs
133 + 101	common	ioperm			sys_ni_syscall
134 + 102	common	socketcall		sys_socketcall		compat_sys_socketcall
135 + 103	common	syslog			sys_syslog
136 + 104	common	setitimer		sys_setitimer		compat_sys_setitimer
137 + 105	common	getitimer		sys_getitimer		compat_sys_getitimer
138 + 106	common	stat			sys_newstat		compat_sys_newstat
139 + 107	common	lstat			sys_newlstat		compat_sys_newlstat
140 + 108	common	fstat			sys_newfstat		compat_sys_newfstat
141 + 109	32	olduname		sys_uname
142 + 109	64	olduname		sys_ni_syscall
143 + 109	spu	olduname		sys_ni_syscall
144 + 110	common	iopl			sys_ni_syscall
145 + 111	common	vhangup			sys_vhangup
146 + 112	common	idle			sys_ni_syscall
147 + 113	common	vm86			sys_ni_syscall
148 + 114	common	wait4			sys_wait4		compat_sys_wait4
149 + 115	nospu	swapoff			sys_swapoff
150 + 116	common	sysinfo			sys_sysinfo		compat_sys_sysinfo
151 + 117	nospu	ipc			sys_ipc			compat_sys_ipc
152 + 118	common	fsync			sys_fsync
153 + 119	32	sigreturn		sys_sigreturn		compat_sys_sigreturn
154 + 119	64	sigreturn		sys_ni_syscall
155 + 119	spu	sigreturn		sys_ni_syscall
156 + 120	nospu	clone			ppc_clone
157 + 121	common	setdomainname		sys_setdomainname
158 + 122	common	uname			sys_newuname
159 + 123	common	modify_ldt		sys_ni_syscall
160 + 124	common	adjtimex		sys_adjtimex		compat_sys_adjtimex
161 + 125	common	mprotect		sys_mprotect
162 + 126	32	sigprocmask		sys_sigprocmask		compat_sys_sigprocmask
163 + 126	64	sigprocmask		sys_ni_syscall
164 + 126	spu	sigprocmask		sys_ni_syscall
165 + 127	common	create_module		sys_ni_syscall
166 + 128	nospu	init_module		sys_init_module
167 + 129	nospu	delete_module		sys_delete_module
168 + 130	common	get_kernel_syms		sys_ni_syscall
169 + 131	nospu	quotactl		sys_quotactl
170 + 132	common	getpgid			sys_getpgid
171 + 133	common	fchdir			sys_fchdir
172 + 134	common	bdflush			sys_bdflush
173 + 135	common	sysfs			sys_sysfs
174 + 136	32	personality		sys_personality		ppc64_personality
175 + 136	64	personality		ppc64_personality
176 + 136	spu	personality		ppc64_personality
177 + 137	common	afs_syscall		sys_ni_syscall
178 + 138	common	setfsuid		sys_setfsuid
179 + 139	common	setfsgid		sys_setfsgid
180 + 140	common	_llseek			sys_llseek
181 + 141	common	getdents		sys_getdents		compat_sys_getdents
182 + 142	common	_newselect		sys_select		compat_sys_select
183 + 143	common	flock			sys_flock
184 + 144	common	msync			sys_msync
185 + 145	common	readv			sys_readv		compat_sys_readv
186 + 146	common	writev			sys_writev		compat_sys_writev
187 + 147	common	getsid			sys_getsid
188 + 148	common	fdatasync		sys_fdatasync
189 + 149	nospu	_sysctl			sys_sysctl		compat_sys_sysctl
190 + 150	common	mlock			sys_mlock
191 + 151	common	munlock			sys_munlock
192 + 152	common	mlockall		sys_mlockall
193 + 153	common	munlockall		sys_munlockall
194 + 154	common	sched_setparam		sys_sched_setparam
195 + 155	common	sched_getparam		sys_sched_getparam
196 + 156	common	sched_setscheduler	sys_sched_setscheduler
197 + 157	common	sched_getscheduler	sys_sched_getscheduler
198 + 158	common	sched_yield		sys_sched_yield
199 + 159	common	sched_get_priority_max	sys_sched_get_priority_max
200 + 160	common	sched_get_priority_min	sys_sched_get_priority_min
201 + 161	common	sched_rr_get_interval	sys_sched_rr_get_interval	compat_sys_sched_rr_get_interval
202 + 162	common	nanosleep		sys_nanosleep		compat_sys_nanosleep
203 + 163	common	mremap			sys_mremap
204 + 164	common	setresuid		sys_setresuid
205 + 165	common	getresuid		sys_getresuid
206 + 166	common	query_module		sys_ni_syscall
207 + 167	common	poll			sys_poll
208 + 168	common	nfsservctl		sys_ni_syscall
209 + 169	common	setresgid		sys_setresgid
210 + 170	common	getresgid		sys_getresgid
211 + 171	common	prctl			sys_prctl
212 + 172	nospu	rt_sigreturn		sys_rt_sigreturn	compat_sys_rt_sigreturn
213 + 173	nospu	rt_sigaction		sys_rt_sigaction	compat_sys_rt_sigaction
214 + 174	nospu	rt_sigprocmask		sys_rt_sigprocmask	compat_sys_rt_sigprocmask
215 + 175	nospu	rt_sigpending		sys_rt_sigpending	compat_sys_rt_sigpending
216 + 176	nospu	rt_sigtimedwait		sys_rt_sigtimedwait	compat_sys_rt_sigtimedwait
217 + 177	nospu	rt_sigqueueinfo		sys_rt_sigqueueinfo	compat_sys_rt_sigqueueinfo
218 + 178	nospu	rt_sigsuspend		sys_rt_sigsuspend	compat_sys_rt_sigsuspend
219 + 179	common	pread64			sys_pread64		compat_sys_pread64
220 + 180	common	pwrite64		sys_pwrite64		compat_sys_pwrite64
221 + 181	common	chown			sys_chown
222 + 182	common	getcwd			sys_getcwd
223 + 183	common	capget			sys_capget
224 + 184	common	capset			sys_capset
225 + 185	nospu	sigaltstack		sys_sigaltstack		compat_sys_sigaltstack
226 + 186	32	sendfile		sys_sendfile		compat_sys_sendfile
227 + 186	64	sendfile		sys_sendfile64
228 + 186	spu	sendfile		sys_sendfile64
229 + 187	common	getpmsg			sys_ni_syscall
230 + 188	common	putpmsg			sys_ni_syscall
231 + 189	nospu	vfork			ppc_vfork
232 + 190	common	ugetrlimit		sys_getrlimit		compat_sys_getrlimit
233 + 191	common	readahead		sys_readahead		compat_sys_readahead
234 + 192	32	mmap2			sys_mmap2		compat_sys_mmap2
235 + 193	32	truncate64		sys_truncate64		compat_sys_truncate64
236 + 194	32	ftruncate64		sys_ftruncate64		compat_sys_ftruncate64
237 + 195	32	stat64			sys_stat64
238 + 196	32	lstat64			sys_lstat64
239 + 197	32	fstat64			sys_fstat64
240 + 198	nospu	pciconfig_read		sys_pciconfig_read
241 + 199	nospu	pciconfig_write		sys_pciconfig_write
242 + 200	nospu	pciconfig_iobase	sys_pciconfig_iobase
243 + 201	common	multiplexer		sys_ni_syscall
244 + 202	common	getdents64		sys_getdents64
245 + 203	common	pivot_root		sys_pivot_root
246 + 204	32	fcntl64			sys_fcntl64		compat_sys_fcntl64
247 + 205	common	madvise			sys_madvise
248 + 206	common	mincore			sys_mincore
249 + 207	common	gettid			sys_gettid
250 + 208	common	tkill			sys_tkill
251 + 209	common	setxattr		sys_setxattr
252 + 210	common	lsetxattr		sys_lsetxattr
253 + 211	common	fsetxattr		sys_fsetxattr
254 + 212	common	getxattr		sys_getxattr
255 + 213	common	lgetxattr		sys_lgetxattr
256 + 214	common	fgetxattr		sys_fgetxattr
257 + 215	common	listxattr		sys_listxattr
258 + 216	common	llistxattr		sys_llistxattr
259 + 217	common	flistxattr		sys_flistxattr
260 + 218	common	removexattr		sys_removexattr
261 + 219	common	lremovexattr		sys_lremovexattr
262 + 220	common	fremovexattr		sys_fremovexattr
263 + 221	common	futex			sys_futex		compat_sys_futex
264 + 222	common	sched_setaffinity	sys_sched_setaffinity	compat_sys_sched_setaffinity
265 + 223	common	sched_getaffinity	sys_sched_getaffinity	compat_sys_sched_getaffinity
266 + # 224 unused
267 + 225	common	tuxcall			sys_ni_syscall
268 + 226	32	sendfile64		sys_sendfile64		compat_sys_sendfile64
269 + 227	common	io_setup		sys_io_setup		compat_sys_io_setup
270 + 228	common	io_destroy		sys_io_destroy
271 + 229	common	io_getevents		sys_io_getevents	compat_sys_io_getevents
272 + 230	common	io_submit		sys_io_submit		compat_sys_io_submit
273 + 231	common	io_cancel		sys_io_cancel
274 + 232	nospu	set_tid_address		sys_set_tid_address
275 + 233	common	fadvise64		sys_fadvise64		ppc32_fadvise64
276 + 234	nospu	exit_group		sys_exit_group
277 + 235	nospu	lookup_dcookie		sys_lookup_dcookie	compat_sys_lookup_dcookie
278 + 236	common	epoll_create		sys_epoll_create
279 + 237	common	epoll_ctl		sys_epoll_ctl
280 + 238	common	epoll_wait		sys_epoll_wait
281 + 239	common	remap_file_pages	sys_remap_file_pages
282 + 240	common	timer_create		sys_timer_create	compat_sys_timer_create
283 + 241	common	timer_settime		sys_timer_settime	compat_sys_timer_settime
284 + 242	common	timer_gettime		sys_timer_gettime	compat_sys_timer_gettime
285 + 243	common	timer_getoverrun	sys_timer_getoverrun
286 + 244	common	timer_delete		sys_timer_delete
287 + 245	common	clock_settime		sys_clock_settime	compat_sys_clock_settime
288 + 246	common	clock_gettime		sys_clock_gettime	compat_sys_clock_gettime
289 + 247	common	clock_getres		sys_clock_getres	compat_sys_clock_getres
290 + 248	common	clock_nanosleep		sys_clock_nanosleep	compat_sys_clock_nanosleep
291 + 249	32	swapcontext		ppc_swapcontext		ppc32_swapcontext
292 + 249	64	swapcontext		ppc64_swapcontext
293 + 249	spu	swapcontext		sys_ni_syscall
294 + 250	common	tgkill			sys_tgkill
295 + 251	common	utimes			sys_utimes		compat_sys_utimes
296 + 252	common	statfs64		sys_statfs64		compat_sys_statfs64
297 + 253	common	fstatfs64		sys_fstatfs64		compat_sys_fstatfs64
298 + 254	32	fadvise64_64		ppc_fadvise64_64
299 + 254	spu	fadvise64_64		sys_ni_syscall
300 + 255	common	rtas			sys_rtas
301 + 256	32	sys_debug_setcontext	sys_debug_setcontext	sys_ni_syscall
302 + 256	64	sys_debug_setcontext	sys_ni_syscall
303 + 256	spu	sys_debug_setcontext	sys_ni_syscall
304 + # 257 reserved for vserver
305 + 258	nospu	migrate_pages		sys_migrate_pages	compat_sys_migrate_pages
306 + 259	nospu	mbind			sys_mbind		compat_sys_mbind
307 + 260	nospu	get_mempolicy		sys_get_mempolicy	compat_sys_get_mempolicy
308 + 261	nospu	set_mempolicy		sys_set_mempolicy	compat_sys_set_mempolicy
309 + 262	nospu	mq_open			sys_mq_open		compat_sys_mq_open
310 + 263	nospu	mq_unlink		sys_mq_unlink
311 + 264	nospu	mq_timedsend		sys_mq_timedsend	compat_sys_mq_timedsend
312 + 265	nospu	mq_timedreceive		sys_mq_timedreceive	compat_sys_mq_timedreceive
313 + 266	nospu	mq_notify		sys_mq_notify		compat_sys_mq_notify
314 + 267	nospu	mq_getsetattr		sys_mq_getsetattr	compat_sys_mq_getsetattr
315 + 268	nospu	kexec_load		sys_kexec_load		compat_sys_kexec_load
316 + 269	nospu	add_key			sys_add_key
317 + 270	nospu	request_key		sys_request_key
318 + 271	nospu	keyctl			sys_keyctl		compat_sys_keyctl
319 + 272	nospu	waitid			sys_waitid		compat_sys_waitid
320 + 273	nospu	ioprio_set		sys_ioprio_set
321 + 274	nospu	ioprio_get		sys_ioprio_get
322 + 275	nospu	inotify_init		sys_inotify_init
323 + 276	nospu	inotify_add_watch	sys_inotify_add_watch
324 + 277	nospu	inotify_rm_watch	sys_inotify_rm_watch
325 + 278	nospu	spu_run			sys_spu_run
326 + 279	nospu	spu_create		sys_spu_create
327 + 280	nospu	pselect6		sys_pselect6		compat_sys_pselect6
328 + 281	nospu	ppoll			sys_ppoll		compat_sys_ppoll
329 + 282	common	unshare			sys_unshare
330 + 283	common	splice			sys_splice
331 + 284	common	tee			sys_tee
332 + 285	common	vmsplice		sys_vmsplice		compat_sys_vmsplice
333 + 286	common	openat			sys_openat		compat_sys_openat
334 + 287	common	mkdirat			sys_mkdirat
335 + 288	common	mknodat			sys_mknodat
336 + 289	common	fchownat		sys_fchownat
337 + 290	common	futimesat		sys_futimesat		compat_sys_futimesat
338 + 291	32	fstatat64		sys_fstatat64
339 + 291	64	newfstatat		sys_newfstatat
340 + 291	spu	newfstatat		sys_newfstatat
341 + 292	common	unlinkat		sys_unlinkat
342 + 293	common	renameat		sys_renameat
343 + 294	common	linkat			sys_linkat
344 + 295	common	symlinkat		sys_symlinkat
345 + 296	common	readlinkat		sys_readlinkat
346 + 297	common	fchmodat		sys_fchmodat
347 + 298	common	faccessat		sys_faccessat
348 + 299	common	get_robust_list		sys_get_robust_list	compat_sys_get_robust_list
349 + 300	common	set_robust_list		sys_set_robust_list	compat_sys_set_robust_list
350 + 301	common	move_pages		sys_move_pages		compat_sys_move_pages
351 + 302	common	getcpu			sys_getcpu
352 + 303	nospu	epoll_pwait		sys_epoll_pwait		compat_sys_epoll_pwait
353 + 304	common	utimensat		sys_utimensat		compat_sys_utimensat
354 + 305	common	signalfd		sys_signalfd		compat_sys_signalfd
355 + 306	common	timerfd_create		sys_timerfd_create
356 + 307	common	eventfd			sys_eventfd
357 + 308	common	sync_file_range2	sys_sync_file_range2	compat_sys_sync_file_range2
358 + 309	nospu	fallocate		sys_fallocate		compat_sys_fallocate
359 + 310	nospu	subpage_prot		sys_subpage_prot
360 + 311	common	timerfd_settime		sys_timerfd_settime	compat_sys_timerfd_settime
361 + 312	common	timerfd_gettime		sys_timerfd_gettime	compat_sys_timerfd_gettime
362 + 313	common	signalfd4		sys_signalfd4		compat_sys_signalfd4
363 + 314	common	eventfd2		sys_eventfd2
364 + 315	common	epoll_create1		sys_epoll_create1
365 + 316	common	dup3			sys_dup3
366 + 317	common	pipe2			sys_pipe2
367 + 318	nospu	inotify_init1		sys_inotify_init1
368 + 319	common	perf_event_open		sys_perf_event_open
369 + 320	common	preadv			sys_preadv		compat_sys_preadv
370 + 321	common	pwritev			sys_pwritev		compat_sys_pwritev
371 + 322	nospu	rt_tgsigqueueinfo	sys_rt_tgsigqueueinfo	compat_sys_rt_tgsigqueueinfo
372 + 323	nospu	fanotify_init		sys_fanotify_init
373 + 324	nospu	fanotify_mark		sys_fanotify_mark	compat_sys_fanotify_mark
374 + 325	common	prlimit64		sys_prlimit64
375 + 326	common	socket			sys_socket
376 + 327	common	bind			sys_bind
377 + 328	common	connect			sys_connect
378 + 329	common	listen			sys_listen
379 + 330	common	accept			sys_accept
380 + 331	common	getsockname		sys_getsockname
381 + 332	common	getpeername		sys_getpeername
382 + 333	common	socketpair		sys_socketpair
383 + 334	common	send			sys_send
384 + 335	common	sendto			sys_sendto
385 + 336	common	recv			sys_recv		compat_sys_recv
386 + 337	common	recvfrom		sys_recvfrom		compat_sys_recvfrom
387 + 338	common	shutdown		sys_shutdown
388 + 339	common	setsockopt		sys_setsockopt		compat_sys_setsockopt
389 + 340	common	getsockopt		sys_getsockopt		compat_sys_getsockopt
390 + 341	common	sendmsg			sys_sendmsg		compat_sys_sendmsg
391 + 342	common	recvmsg			sys_recvmsg		compat_sys_recvmsg
392 + 343	common	recvmmsg		sys_recvmmsg		compat_sys_recvmmsg
393 + 344	common	accept4			sys_accept4
394 + 345	common	name_to_handle_at	sys_name_to_handle_at
395 + 346	common	open_by_handle_at	sys_open_by_handle_at	compat_sys_open_by_handle_at
396 + 347	common	clock_adjtime		sys_clock_adjtime	compat_sys_clock_adjtime
397 + 348	common	syncfs			sys_syncfs
398 + 349	common	sendmmsg		sys_sendmmsg		compat_sys_sendmmsg
399 + 350	common	setns			sys_setns
400 + 351	nospu	process_vm_readv	sys_process_vm_readv	compat_sys_process_vm_readv
401 + 352	nospu	process_vm_writev	sys_process_vm_writev	compat_sys_process_vm_writev
402 + 353	nospu	finit_module		sys_finit_module
403 + 354	nospu	kcmp			sys_kcmp
404 + 355	common	sched_setattr		sys_sched_setattr
405 + 356	common	sched_getattr		sys_sched_getattr
406 + 357	common	renameat2		sys_renameat2
407 + 358	common	seccomp			sys_seccomp
408 + 359	common	getrandom		sys_getrandom
409 + 360	common	memfd_create		sys_memfd_create
410 + 361	common	bpf			sys_bpf
411 + 362	nospu	execveat		sys_execveat		compat_sys_execveat
412 + 363	32	switch_endian		sys_ni_syscall
413 + 363	64	switch_endian		ppc_switch_endian
414 + 363	spu	switch_endian		sys_ni_syscall
415 + 364	common	userfaultfd		sys_userfaultfd
416 + 365	common	membarrier		sys_membarrier
417 + 378	nospu	mlock2			sys_mlock2
418 + 379	nospu	copy_file_range		sys_copy_file_range
419 + 380	common	preadv2			sys_preadv2		compat_sys_preadv2
420 + 381	common	pwritev2		sys_pwritev2		compat_sys_pwritev2
421 + 382	nospu	kexec_file_load		sys_kexec_file_load
422 + 383	nospu	statx			sys_statx
423 + 384	nospu	pkey_alloc		sys_pkey_alloc
424 + 385	nospu	pkey_free		sys_pkey_free
425 + 386	nospu	pkey_mprotect		sys_pkey_mprotect
426 + 387	nospu	rseq			sys_rseq
427 + 388	nospu	io_pgetevents		sys_io_pgetevents	compat_sys_io_pgetevents
+2 -1
tools/perf/builtin-stat.c
···
561 561 			break;
562 562 		}
563 563 	}
564 - 	wait4(child_pid, &status, 0, &stat_config.ru_data);
564 + 	if (child_pid != -1)
565 + 		wait4(child_pid, &status, 0, &stat_config.ru_data);
565 566 
566 567 	if (workload_exec_errno) {
567 568 		const char *emsg = str_error_r(workload_exec_errno, msg, sizeof(msg));
+1 -6
tools/perf/builtin-top.c
···
1028 1028 
1029 1029 static int callchain_param__setup_sample_type(struct callchain_param *callchain)
1030 1030 {
1031 - 	if (!perf_hpp_list.sym) {
1032 - 		if (callchain->enabled) {
1033 - 			ui__error("Selected -g but \"sym\" not present in --sort/-s.");
1034 - 			return -EINVAL;
1035 - 		}
1036 - 	} else if (callchain->mode != CHAIN_NONE) {
1031 + 	if (callchain->mode != CHAIN_NONE) {
1037 1032 		if (callchain_register_param(callchain) < 0) {
1038 1033 			ui__error("Can't register callchain params.\n");
1039 1034 			return -EINVAL;
+11 -4
tools/perf/builtin-trace.c
···
1758 1758 {
1759 1759 	struct thread_trace *ttrace;
1760 1760 	size_t printed;
1761 + 	int len;
1761 1762 
1762 1763 	if (trace->failure_only || trace->current == NULL)
1763 1764 		return 0;
···
1769 1768 		return 0;
1770 1769 
1771 1770 	printed  = trace__fprintf_entry_head(trace, trace->current, 0, false, ttrace->entry_time, trace->output);
1772 - 	printed += fprintf(trace->output, ")%-*s ...\n", trace->args_alignment, ttrace->entry_str);
1773 - 	ttrace->entry_pending = false;
1771 + 	printed += len = fprintf(trace->output, "%s)", ttrace->entry_str);
1774 1772 
1773 + 	if (len < trace->args_alignment - 4)
1774 + 		printed += fprintf(trace->output, "%-*s", trace->args_alignment - 4 - len, " ");
1775 + 
1776 + 	printed += fprintf(trace->output, " ...\n");
1777 + 
1778 + 	ttrace->entry_pending = false;
1775 1779 	++trace->nr_events_printed;
1776 1780 
1777 1781 	return printed;
···
2032 2026 	if (ttrace->entry_pending) {
2033 2027 		printed = fprintf(trace->output, "%s", ttrace->entry_str);
2034 2028 	} else {
2035 - 		fprintf(trace->output, " ... [");
2029 + 		printed += fprintf(trace->output, " ... [");
2036 2030 		color_fprintf(trace->output, PERF_COLOR_YELLOW, "continued");
2037 - 		fprintf(trace->output, "]: %s()", sc->name);
2031 + 		printed += 9;
2032 + 		printed += fprintf(trace->output, "]: %s()", sc->name);
2038 2033 	}
2039 2034 
2040 2035 	printed++; /* the closing ')' */
+1 -1
tools/perf/check-headers.sh
···
10 10 include/uapi/linux/kcmp.h
11 11 include/uapi/linux/kvm.h
12 12 include/uapi/linux/in.h
13 + include/uapi/linux/mount.h
13 14 include/uapi/linux/perf_event.h
14 15 include/uapi/linux/prctl.h
15 16 include/uapi/linux/sched.h
···
50 49 arch/powerpc/include/uapi/asm/errno.h
51 50 arch/sparc/include/uapi/asm/errno.h
52 51 arch/x86/include/uapi/asm/errno.h
53 - arch/powerpc/include/uapi/asm/unistd.h
54 52 include/asm-generic/bitops/arch_hweight.h
55 53 include/asm-generic/bitops/const_hweight.h
56 54 include/asm-generic/bitops/__fls.h
+3 -3
tools/perf/perf-read-vdso.c
···
5 5 #define VDSO__MAP_NAME "[vdso]"
6 6 
7 7 /*
8 - * Include definition of find_vdso_map() also used in util/vdso.c for
8 + * Include definition of find_map() also used in util/vdso.c for
9 9  * building perf.
10 10  */
11 - #include "util/find-vdso-map.c"
11 + #include "util/find-map.c"
12 12 
13 13 int main(void)
14 14 {
15 15 	void *start, *end;
16 16 	size_t size, written;
17 17 
18 - 	if (find_vdso_map(&start, &end))
18 + 	if (find_map(&start, &end, VDSO__MAP_NAME))
19 19 		return 1;
20 20 
21 21 	size = end - start;
+2 -1
tools/perf/tests/shell/lib/probe_vfs_getname.sh
···
13 13 	local verbose=$1
14 14 	if [ $had_vfs_getname -eq 1 ] ; then
15 15 		line=$(perf probe -L getname_flags 2>&1 | egrep 'result.*=.*filename;' | sed -r 's/[[:space:]]+([[:digit:]]+)[[:space:]]+result->uptr.*/\1/')
16 - 		perf probe $verbose "vfs_getname=getname_flags:${line} pathname=result->name:string"
16 + 		perf probe -q       "vfs_getname=getname_flags:${line} pathname=result->name:string" || \
17 + 		perf probe $verbose "vfs_getname=getname_flags:${line} pathname=filename:string"
17 18 	fi
18 19 }
19 20 
+5
tools/perf/tests/tests.h
···
119 119 			   struct thread *thread);
120 120 #endif
121 121 #endif
122 + 
123 + #if defined(__arm__)
124 + int test__vectors_page(struct test *test, int subtest);
125 + #endif
126 + 
122 127 #endif /* TESTS_H */
+2 -2
tools/perf/trace/beauty/mount_flags.sh
···
5 5 
6 6 printf "static const char *mount_flags[] = {\n"
7 7 regex='^[[:space:]]*#[[:space:]]*define[[:space:]]+MS_([[:alnum:]_]+)[[:space:]]+([[:digit:]]+)[[:space:]]*.*'
8 - egrep $regex ${header_dir}/fs.h | egrep -v '(MSK|VERBOSE|MGC_VAL)\>' | \
8 + egrep $regex ${header_dir}/mount.h | egrep -v '(MSK|VERBOSE|MGC_VAL)\>' | \
9 9 	sed -r "s/$regex/\2 \2 \1/g" | sort -n | \
10 10 	xargs printf "\t[%s ? (ilog2(%s) + 1) : 0] = \"%s\",\n"
11 11 regex='^[[:space:]]*#[[:space:]]*define[[:space:]]+MS_([[:alnum:]_]+)[[:space:]]+\(1<<([[:digit:]]+)\)[[:space:]]*.*'
12 - egrep $regex ${header_dir}/fs.h | \
12 + egrep $regex ${header_dir}/mount.h | \
13 13 	sed -r "s/$regex/\2 \1/g" | \
14 14 	xargs printf "\t[%s + 1] = \"%s\",\n"
15 15 printf "};\n"
+1 -1
tools/perf/trace/beauty/prctl_option.sh
··· 4 4 [ $# -eq 1 ] && header_dir=$1 || header_dir=tools/include/uapi/linux/ 5 5 6 6 printf "static const char *prctl_options[] = {\n" 7 - regex='^#define[[:space:]]+PR_([GS]ET\w+)[[:space:]]*([[:xdigit:]]+).*' 7 + regex='^#define[[:space:]]+PR_(\w+)[[:space:]]*([[:xdigit:]]+).*' 8 8 egrep $regex ${header_dir}/prctl.h | grep -v PR_SET_PTRACER | \ 9 9 sed -r "s/$regex/\2 \1/g" | \ 10 10 sort -n | xargs printf "\t[%s] = \"%s\",\n"
+4 -4
tools/perf/util/annotate.c
··· 1723 1723 err = asprintf(&command, 1724 1724 "%s %s%s --start-address=0x%016" PRIx64 1725 1725 " --stop-address=0x%016" PRIx64 1726 - " -l -d %s %s -C \"%s\" 2>/dev/null|grep -v \"%s:\"|expand", 1726 + " -l -d %s %s -C \"$1\" 2>/dev/null|grep -v \"$1:\"|expand", 1727 1727 opts->objdump_path ?: "objdump", 1728 1728 opts->disassembler_style ? "-M " : "", 1729 1729 opts->disassembler_style ?: "", 1730 1730 map__rip_2objdump(map, sym->start), 1731 1731 map__rip_2objdump(map, sym->end), 1732 1732 opts->show_asm_raw ? "" : "--no-show-raw", 1733 - opts->annotate_src ? "-S" : "", 1734 - symfs_filename, symfs_filename); 1733 + opts->annotate_src ? "-S" : ""); 1735 1734 1736 1735 if (err < 0) { 1737 1736 pr_err("Failure allocating memory for the command to run\n"); ··· 1755 1756 close(stdout_fd[0]); 1756 1757 dup2(stdout_fd[1], 1); 1757 1758 close(stdout_fd[1]); 1758 - execl("/bin/sh", "sh", "-c", command, NULL); 1759 + execl("/bin/sh", "sh", "-c", command, "--", symfs_filename, 1760 + NULL); 1759 1761 perror(command); 1760 1762 exit(-1); 1761 1763 }
+20 -12
tools/perf/util/callchain.c
··· 766 766 cnode->cycles_count += node->branch_flags.cycles; 767 767 cnode->iter_count += node->nr_loop_iter; 768 768 cnode->iter_cycles += node->iter_cycles; 769 + cnode->from_count++; 769 770 } 770 771 } 771 772 ··· 1346 1345 static int branch_from_str(char *bf, int bfsize, 1347 1346 u64 branch_count, 1348 1347 u64 cycles_count, u64 iter_count, 1349 - u64 iter_cycles) 1348 + u64 iter_cycles, u64 from_count) 1350 1349 { 1351 1350 int printed = 0, i = 0; 1352 - u64 cycles; 1351 + u64 cycles, v = 0; 1353 1352 1354 1353 cycles = cycles_count / branch_count; 1355 1354 if (cycles) { ··· 1358 1357 bf + printed, bfsize - printed); 1359 1358 } 1360 1359 1361 - if (iter_count) { 1362 - printed += count_pri64_printf(i++, "iter", 1363 - iter_count, 1364 - bf + printed, bfsize - printed); 1360 + if (iter_count && from_count) { 1361 + v = iter_count / from_count; 1362 + if (v) { 1363 + printed += count_pri64_printf(i++, "iter", 1364 + v, bf + printed, bfsize - printed); 1365 1365 1366 - printed += count_pri64_printf(i++, "avg_cycles", 1367 - iter_cycles / iter_count, 1368 - bf + printed, bfsize - printed); 1366 + printed += count_pri64_printf(i++, "avg_cycles", 1367 + iter_cycles / iter_count, 1368 + bf + printed, bfsize - printed); 1369 + } 1369 1370 } 1370 1371 1371 1372 if (i) ··· 1380 1377 u64 branch_count, u64 predicted_count, 1381 1378 u64 abort_count, u64 cycles_count, 1382 1379 u64 iter_count, u64 iter_cycles, 1380 + u64 from_count, 1383 1381 struct branch_type_stat *brtype_stat) 1384 1382 { 1385 1383 int printed; ··· 1393 1389 predicted_count, abort_count, brtype_stat); 1394 1390 } else { 1395 1391 printed = branch_from_str(bf, bfsize, branch_count, 1396 - cycles_count, iter_count, iter_cycles); 1392 + cycles_count, iter_count, iter_cycles, 1393 + from_count); 1397 1394 } 1398 1395 1399 1396 if (!printed) ··· 1407 1402 u64 branch_count, u64 predicted_count, 1408 1403 u64 abort_count, u64 cycles_count, 1409 1404 u64 iter_count, u64 iter_cycles, 1405 + u64 from_count, 
1410 1406 struct branch_type_stat *brtype_stat) 1411 1407 { 1412 1408 char str[256]; 1413 1409 1414 1410 counts_str_build(str, sizeof(str), branch_count, 1415 1411 predicted_count, abort_count, cycles_count, 1416 - iter_count, iter_cycles, brtype_stat); 1412 + iter_count, iter_cycles, from_count, brtype_stat); 1417 1413 1418 1414 if (fp) 1419 1415 return fprintf(fp, "%s", str); ··· 1428 1422 u64 branch_count, predicted_count; 1429 1423 u64 abort_count, cycles_count; 1430 1424 u64 iter_count, iter_cycles; 1425 + u64 from_count; 1431 1426 1432 1427 branch_count = clist->branch_count; 1433 1428 predicted_count = clist->predicted_count; ··· 1436 1429 cycles_count = clist->cycles_count; 1437 1430 iter_count = clist->iter_count; 1438 1431 iter_cycles = clist->iter_cycles; 1432 + from_count = clist->from_count; 1439 1433 1440 1434 return callchain_counts_printf(fp, bf, bfsize, branch_count, 1441 1435 predicted_count, abort_count, 1442 1436 cycles_count, iter_count, iter_cycles, 1443 - &clist->brtype_stat); 1437 + from_count, &clist->brtype_stat); 1444 1438 } 1445 1439 1446 1440 static void free_callchain_node(struct callchain_node *node)
+1
tools/perf/util/callchain.h
··· 118 118 bool has_children; 119 119 }; 120 120 u64 branch_count; 121 + u64 from_count; 121 122 u64 predicted_count; 122 123 u64 abort_count; 123 124 u64 cycles_count;
+3 -4
tools/perf/util/find-vdso-map.c → tools/perf/util/find-map.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 - static int find_vdso_map(void **start, void **end) 2 + static int find_map(void **start, void **end, const char *name) 3 3 { 4 4 FILE *maps; 5 5 char line[128]; ··· 7 7 8 8 maps = fopen("/proc/self/maps", "r"); 9 9 if (!maps) { 10 - fprintf(stderr, "vdso: cannot open maps\n"); 10 + fprintf(stderr, "cannot open maps\n"); 11 11 return -1; 12 12 } 13 13 ··· 21 21 if (m < 0) 22 22 continue; 23 23 24 - if (!strncmp(&line[m], VDSO__MAP_NAME, 25 - sizeof(VDSO__MAP_NAME) - 1)) 24 + if (!strncmp(&line[m], name, strlen(name))) 26 25 found = 1; 27 26 } 28 27
+1 -1
tools/perf/util/machine.c
··· 2005 2005 { 2006 2006 int i; 2007 2007 2008 - iter->nr_loop_iter = nr; 2008 + iter->nr_loop_iter++; 2009 2009 iter->cycles = 0; 2010 2010 2011 2011 for (i = 0; i < nr; i++)
-1
tools/perf/util/strbuf.c
··· 109 109 return ret; 110 110 } 111 111 len = vsnprintf(sb->buf + sb->len, sb->alloc - sb->len, fmt, ap_saved); 112 - va_end(ap_saved); 113 112 if (len > strbuf_avail(sb)) { 114 113 pr_debug("this should not happen, your vsnprintf is broken"); 115 114 va_end(ap_saved);
+1
tools/perf/util/symbol.c
··· 614 614 static bool symbol__is_idle(const char *name) 615 615 { 616 616 const char * const idle_symbols[] = { 617 + "arch_cpu_idle", 617 618 "cpu_idle", 618 619 "cpu_startup_entry", 619 620 "intel_idle",
+3 -3
tools/perf/util/vdso.c
··· 18 18 #include "debug.h" 19 19 20 20 /* 21 - * Include definition of find_vdso_map() also used in perf-read-vdso.c for 21 + * Include definition of find_map() also used in perf-read-vdso.c for 22 22 * building perf-read-vdso32 and perf-read-vdsox32. 23 23 */ 24 - #include "find-vdso-map.c" 24 + #include "find-map.c" 25 25 26 26 #define VDSO__TEMP_FILE_NAME "/tmp/perf-vdso.so-XXXXXX" 27 27 ··· 76 76 if (vdso_file->found) 77 77 return vdso_file->temp_file_name; 78 78 79 - if (vdso_file->error || find_vdso_map(&start, &end)) 79 + if (vdso_file->error || find_map(&start, &end, VDSO__MAP_NAME)) 80 80 return NULL; 81 81 82 82 size = end - start;
+1 -1
tools/thermal/tmon/Makefile
··· 6 6 7 7 BINDIR=usr/bin 8 8 WARNFLAGS=-Wall -Wshadow -W -Wformat -Wimplicit-function-declaration -Wimplicit-int 9 - override CFLAGS+= -O1 ${WARNFLAGS} 9 + override CFLAGS+= $(call cc-option,-O3,-O1) ${WARNFLAGS} 10 10 # Add "-fstack-protector" only if toolchain supports it. 11 11 override CFLAGS+= $(call cc-option,-fstack-protector-strong) 12 12 CC?= $(CROSS_COMPILE)gcc
+3 -1
tools/vm/page_owner_sort.c
··· 5 5 * Example use: 6 6 * cat /sys/kernel/debug/page_owner > page_owner_full.txt 7 7 * grep -v ^PFN page_owner_full.txt > page_owner.txt 8 - * ./sort page_owner.txt sorted_page_owner.txt 8 + * ./page_owner_sort page_owner.txt sorted_page_owner.txt 9 + * 10 + * See Documentation/vm/page_owner.rst 9 11 */ 10 12 11 13 #include <stdio.h>
+7 -2
virt/kvm/kvm_main.c
··· 1227 1227 { 1228 1228 struct kvm_memslots *slots; 1229 1229 struct kvm_memory_slot *memslot; 1230 - int as_id, id, n; 1230 + int as_id, id; 1231 1231 gfn_t offset; 1232 - unsigned long i; 1232 + unsigned long i, n; 1233 1233 unsigned long *dirty_bitmap; 1234 1234 unsigned long *dirty_bitmap_buffer; 1235 1235 ··· 1249 1249 return -ENOENT; 1250 1250 1251 1251 n = kvm_dirty_bitmap_bytes(memslot); 1252 + 1253 + if (log->first_page > memslot->npages || 1254 + log->num_pages > memslot->npages - log->first_page) 1255 + return -EINVAL; 1256 + 1252 1257 *flush = false; 1253 1258 dirty_bitmap_buffer = kvm_second_dirty_bitmap(memslot); 1254 1259 if (copy_from_user(dirty_bitmap_buffer, log->dirty_bitmap, n))