Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'akpm' (patches from Andrew)

Merge third patch-bomb from Andrew Morton:
"I'm pretty much done for -rc1 now:

- the rest of MM, basically

- lib/ updates

- checkpatch, epoll, hfs, fatfs, ptrace, coredump, exit

- cpu_mask simplifications

- kexec, rapidio, MAINTAINERS etc, etc.

- more dma-mapping cleanups/simplifications from hch"

* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (109 commits)
MAINTAINERS: add/fix git URLs for various subsystems
mm: memcontrol: add "sock" to cgroup2 memory.stat
mm: memcontrol: basic memory statistics in cgroup2 memory controller
mm: memcontrol: do not uncharge old page in page cache replacement
Documentation: cgroup: add memory.swap.{current,max} description
mm: free swap cache aggressively if memcg swap is full
mm: vmscan: do not scan anon pages if memcg swap limit is hit
swap.h: move memcg related stuff to the end of the file
mm: memcontrol: replace mem_cgroup_lruvec_online with mem_cgroup_online
mm: vmscan: pass memcg to get_scan_count()
mm: memcontrol: charge swap to cgroup2
mm: memcontrol: clean up alloc, online, offline, free functions
mm: memcontrol: flatten struct cg_proto
mm: memcontrol: rein in the CONFIG space madness
net: drop tcp_memcontrol.c
mm: memcontrol: introduce CONFIG_MEMCG_LEGACY_KMEM
mm: memcontrol: allow to disable kmem accounting for cgroup2
mm: memcontrol: account "kmem" consumers in cgroup2 memory controller
mm: memcontrol: move kmem accounting code to CONFIG_MEMCG
mm: memcontrol: separate kmem code from legacy tcp accounting code
...

+3665 -4014
+10
CREDITS
··· 1856 1856 S: 1403 ND BUSSUM 1857 1857 S: The Netherlands 1858 1858 1859 + N: Martin Kepplinger 1860 + E: martink@posteo.de 1861 + E: martin.kepplinger@theobroma-systems.com 1862 + W: http://www.martinkepplinger.com 1863 + D: mma8452 accelerators iio driver 1864 + D: Kernel cleanups 1865 + S: Garnisonstraße 26 1866 + S: 4020 Linz 1867 + S: Austria 1868 + 1859 1869 N: Karl Keyte 1860 1870 E: karl@koft.com 1861 1871 D: Disk usage statistics and modifications to line printer driver
-10
Documentation/DMA-API-HOWTO.txt
··· 951 951 alignment constraints (e.g. the alignment constraints about 64-bit 952 952 objects). 953 953 954 - 3) Supporting multiple types of IOMMUs 955 - 956 - If your architecture needs to support multiple types of IOMMUs, you 957 - can use include/linux/asm-generic/dma-mapping-common.h. It's a 958 - library to support the DMA API with multiple types of IOMMUs. Lots 959 - of architectures (x86, powerpc, sh, alpha, ia64, microblaze and 960 - sparc) use it. Choose one to see how it can be used. If you need to 961 - support multiple types of IOMMUs in a single system, the example of 962 - x86 or powerpc helps. 963 - 964 954 Closing 965 955 966 956 This document, and the API itself, would not be in its current
+89
Documentation/cgroup-v2.txt
··· 819 819 the cgroup. This may not exactly match the number of 820 820 processes killed but should generally be close. 821 821 822 + memory.stat 823 + 824 + A read-only flat-keyed file which exists on non-root cgroups. 825 + 826 + This breaks down the cgroup's memory footprint into different 827 + types of memory, type-specific details, and other information 828 + on the state and past events of the memory management system. 829 + 830 + All memory amounts are in bytes. 831 + 832 + The entries are ordered to be human readable, and new entries 833 + can show up in the middle. Don't rely on items remaining in a 834 + fixed position; use the keys to look up specific values! 835 + 836 + anon 837 + 838 + Amount of memory used in anonymous mappings such as 839 + brk(), sbrk(), and mmap(MAP_ANONYMOUS) 840 + 841 + file 842 + 843 + Amount of memory used to cache filesystem data, 844 + including tmpfs and shared memory. 845 + 846 + file_mapped 847 + 848 + Amount of cached filesystem data mapped with mmap() 849 + 850 + file_dirty 851 + 852 + Amount of cached filesystem data that was modified but 853 + not yet written back to disk 854 + 855 + file_writeback 856 + 857 + Amount of cached filesystem data that was modified and 858 + is currently being written back to disk 859 + 860 + inactive_anon 861 + active_anon 862 + inactive_file 863 + active_file 864 + unevictable 865 + 866 + Amount of memory, swap-backed and filesystem-backed, 867 + on the internal memory management lists used by the 868 + page reclaim algorithm 869 + 870 + pgfault 871 + 872 + Total number of page faults incurred 873 + 874 + pgmajfault 875 + 876 + Number of major page faults incurred 877 + 878 + memory.swap.current 879 + 880 + A read-only single value file which exists on non-root 881 + cgroups. 882 + 883 + The total amount of swap currently being used by the cgroup 884 + and its descendants. 885 + 886 + memory.swap.max 887 + 888 + A read-write single value file which exists on non-root 889 + cgroups. 
The default is "max". 890 + 891 + Swap usage hard limit. If a cgroup's swap usage reaches this 892 + limit, anonymous memory of the cgroup will not be swapped out. 893 + 822 894 823 895 5-2-2. General Usage 824 896 ··· 1363 1291 system than killing the group. Otherwise, memory.max is there to 1364 1292 limit this type of spillover and ultimately contain buggy or even 1365 1293 malicious applications. 1294 + 1295 + The combined memory+swap accounting and limiting is replaced by real 1296 + control over swap space. 1297 + 1298 + The main argument for a combined memory+swap facility in the original 1299 + cgroup design was that global or parental pressure would always be 1300 + able to swap all anonymous memory of a child group, regardless of the 1301 + child's own (possibly untrusted) configuration. However, untrusted 1302 + groups can sabotage swapping by other means - such as referencing their 1303 + anonymous memory in a tight loop - and an admin cannot assume full 1304 + swappability when overcommitting untrusted jobs. 1305 + 1306 + For trusted jobs, on the other hand, a combined counter is not an 1307 + intuitive userspace interface, and it flies in the face of the idea 1308 + that cgroup controllers should account and limit specific physical 1309 + resources. Swap space is a resource like all others in the system, 1310 + and that's why unified hierarchy allows distributing it separately.
-40
Documentation/features/io/dma_map_attrs/arch-support.txt
··· 1 - # 2 - # Feature name: dma_map_attrs 3 - # Kconfig: HAVE_DMA_ATTRS 4 - # description: arch provides dma_*map*_attrs() APIs 5 - # 6 - ----------------------- 7 - | arch |status| 8 - ----------------------- 9 - | alpha: | ok | 10 - | arc: | TODO | 11 - | arm: | ok | 12 - | arm64: | ok | 13 - | avr32: | TODO | 14 - | blackfin: | TODO | 15 - | c6x: | TODO | 16 - | cris: | TODO | 17 - | frv: | TODO | 18 - | h8300: | ok | 19 - | hexagon: | ok | 20 - | ia64: | ok | 21 - | m32r: | TODO | 22 - | m68k: | TODO | 23 - | metag: | TODO | 24 - | microblaze: | ok | 25 - | mips: | ok | 26 - | mn10300: | TODO | 27 - | nios2: | TODO | 28 - | openrisc: | ok | 29 - | parisc: | TODO | 30 - | powerpc: | ok | 31 - | s390: | ok | 32 - | score: | TODO | 33 - | sh: | ok | 34 - | sparc: | ok | 35 - | tile: | ok | 36 - | um: | TODO | 37 - | unicore32: | ok | 38 - | x86: | ok | 39 - | xtensa: | TODO | 40 - -----------------------
+10
Documentation/filesystems/vfat.txt
··· 180 180 181 181 <bool>: 0,1,yes,no,true,false 182 182 183 + LIMITATION 184 + --------------------------------------------------------------------- 185 + * The fallocated region of a file is discarded at umount/evict time 186 + when using fallocate with FALLOC_FL_KEEP_SIZE. 187 + So, users should assume that the fallocated region can be discarded 188 + at last close if there is memory pressure resulting in eviction of 189 + the inode from memory. As a result, for any dependency on 190 + the fallocated region, users should recheck the fallocated region 191 + after reopening the file. 192 + 183 193 TODO 184 194 ---------------------------------------------------------------------- 185 195 * Need to get rid of the raw scanning stuff. Instead, always use
+1
Documentation/kernel-parameters.txt
··· 611 611 cgroup.memory= [KNL] Pass options to the cgroup memory controller. 612 612 Format: <string> 613 613 nosocket -- Disable socket memory accounting. 614 + nokmem -- Disable kernel memory accounting. 614 615 615 616 checkreqprot [SELINUX] Set initial checkreqprot flag value. 616 617 Format: { "0" | "1" }
+7 -8
Documentation/sysctl/kernel.txt
··· 825 825 Each write syscall must fully contain the sysctl value to be 826 826 written, and multiple writes on the same sysctl file descriptor 827 827 will rewrite the sysctl value, regardless of file position. 828 - 0 - (default) Same behavior as above, but warn about processes that 829 - perform writes to a sysctl file descriptor when the file position 830 - is not 0. 831 - 1 - Respect file position when writing sysctl strings. Multiple writes 832 - will append to the sysctl value buffer. Anything past the max length 833 - of the sysctl value buffer will be ignored. Writes to numeric sysctl 834 - entries must always be at file position 0 and the value must be 835 - fully contained in the buffer sent in the write syscall. 828 + 0 - Same behavior as above, but warn about processes that perform writes 829 + to a sysctl file descriptor when the file position is not 0. 830 + 1 - (default) Respect file position when writing sysctl strings. Multiple 831 + writes will append to the sysctl value buffer. Anything past the max 832 + length of the sysctl value buffer will be ignored. Writes to numeric 833 + sysctl entries must always be at file position 0 and the value must 834 + be fully contained in the buffer sent in the write syscall. 836 835 837 836 ============================================================== 838 837
+84
Documentation/ubsan.txt
··· 1 + Undefined Behavior Sanitizer - UBSAN 2 + 3 + Overview 4 + -------- 5 + 6 + UBSAN is a runtime undefined behaviour checker. 7 + 8 + UBSAN uses compile-time instrumentation to catch undefined behavior (UB). 9 + The compiler inserts code that performs certain kinds of checks before operations 10 + that may cause UB. If a check fails (i.e. UB is detected), a __ubsan_handle_* 11 + function is called to print an error message. 12 + 13 + GCC has supported this feature since 4.9.x [1] (see the -fsanitize=undefined option and 14 + its suboptions). GCC 5.x has more checkers implemented [2]. 15 + 16 + Report example 17 + --------------- 18 + 19 + ================================================================================ 20 + UBSAN: Undefined behaviour in ../include/linux/bitops.h:110:33 21 + shift exponent 32 is too large for 32-bit type 'unsigned int' 22 + CPU: 0 PID: 0 Comm: swapper Not tainted 4.4.0-rc1+ #26 23 + 0000000000000000 ffffffff82403cc8 ffffffff815e6cd6 0000000000000001 24 + ffffffff82403cf8 ffffffff82403ce0 ffffffff8163a5ed 0000000000000020 25 + ffffffff82403d78 ffffffff8163ac2b ffffffff815f0001 0000000000000002 26 + Call Trace: 27 + [<ffffffff815e6cd6>] dump_stack+0x45/0x5f 28 + [<ffffffff8163a5ed>] ubsan_epilogue+0xd/0x40 29 + [<ffffffff8163ac2b>] __ubsan_handle_shift_out_of_bounds+0xeb/0x130 30 + [<ffffffff815f0001>] ? radix_tree_gang_lookup_slot+0x51/0x150 31 + [<ffffffff8173c586>] _mix_pool_bytes+0x1e6/0x480 32 + [<ffffffff83105653>] ? dmi_walk_early+0x48/0x5c 33 + [<ffffffff8173c881>] add_device_randomness+0x61/0x130 34 + [<ffffffff83105b35>] ? dmi_save_one_device+0xaa/0xaa 35 + [<ffffffff83105653>] dmi_walk_early+0x48/0x5c 36 + [<ffffffff831066ae>] dmi_scan_machine+0x278/0x4b4 37 + [<ffffffff8111d58a>] ? vprintk_default+0x1a/0x20 38 + [<ffffffff830ad120>] ? early_idt_handler_array+0x120/0x120 39 + [<ffffffff830b2240>] setup_arch+0x405/0xc2c 40 + [<ffffffff830ad120>] ?
early_idt_handler_array+0x120/0x120 41 + [<ffffffff830ae053>] start_kernel+0x83/0x49a 42 + [<ffffffff830ad120>] ? early_idt_handler_array+0x120/0x120 43 + [<ffffffff830ad386>] x86_64_start_reservations+0x2a/0x2c 44 + [<ffffffff830ad4f3>] x86_64_start_kernel+0x16b/0x17a 45 + ================================================================================ 46 + 47 + Usage 48 + ----- 49 + 50 + To enable UBSAN, configure the kernel with: 51 + 52 + CONFIG_UBSAN=y 53 + 54 + and to check the entire kernel: 55 + 56 + CONFIG_UBSAN_SANITIZE_ALL=y 57 + 58 + To enable instrumentation for specific files or directories, add a line 59 + similar to the following to the respective kernel Makefile: 60 + 61 + For a single file (e.g. main.o): 62 + UBSAN_SANITIZE_main.o := y 63 + 64 + For all files in one directory: 65 + UBSAN_SANITIZE := y 66 + 67 + To exclude files from being instrumented even if 68 + CONFIG_UBSAN_SANITIZE_ALL=y, use: 69 + 70 + UBSAN_SANITIZE_main.o := n 71 + and: 72 + UBSAN_SANITIZE := n 73 + 74 + Detection of unaligned accesses is controlled through a separate option, 75 + CONFIG_UBSAN_ALIGNMENT. It's off by default on architectures that support 76 + unaligned accesses (CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y). One could 77 + still enable it in config, just note that it will produce a lot of UBSAN 78 + reports. 79 + 80 + References 81 + ---------- 82 + 83 + [1] - https://gcc.gnu.org/onlinedocs/gcc-4.9.0/gcc/Debugging-Options.html 84 + [2] - https://gcc.gnu.org/onlinedocs/gcc/Debugging-Options.html
+42 -11
MAINTAINERS
··· 781 781 APM DRIVER 782 782 M: Jiri Kosina <jikos@kernel.org> 783 783 S: Odd fixes 784 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/jikos/apm.git 784 785 F: arch/x86/kernel/apm_32.c 785 786 F: include/linux/apm_bios.h 786 787 F: include/uapi/linux/apm_bios.h ··· 947 946 M: Jean-Christophe Plagniol-Villard <plagnioj@jcrosoft.com> 948 947 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 949 948 W: http://www.linux4sam.org 949 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/nferre/linux-at91.git 950 950 S: Supported 951 951 F: arch/arm/mach-at91/ 952 952 F: include/soc/at91/ ··· 1466 1464 M: Heiko Stuebner <heiko@sntech.de> 1467 1465 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 1468 1466 L: linux-rockchip@lists.infradead.org 1467 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/mmind/linux-rockchip.git 1469 1468 S: Maintained 1470 1469 F: arch/arm/boot/dts/rk3* 1471 1470 F: arch/arm/mach-rockchip/ ··· 1799 1796 M: Catalin Marinas <catalin.marinas@arm.com> 1800 1797 M: Will Deacon <will.deacon@arm.com> 1801 1798 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 1799 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git 1802 1800 S: Maintained 1803 1801 F: arch/arm64/ 1804 1802 F: Documentation/arm64/ ··· 1885 1881 M: Kalle Valo <kvalo@qca.qualcomm.com> 1886 1882 L: linux-wireless@vger.kernel.org 1887 1883 W: http://wireless.kernel.org/en/users/Drivers/ath6kl 1888 - T: git git://github.com/kvalo/ath.git 1884 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/ath.git 1889 1885 S: Supported 1890 1886 F: drivers/net/wireless/ath/ath6kl/ 1891 1887 ··· 2137 2133 BACKLIGHT CLASS/SUBSYSTEM 2138 2134 M: Jingoo Han <jingoohan1@gmail.com> 2139 2135 M: Lee Jones <lee.jones@linaro.org> 2136 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/lee/backlight.git 2140 2137 S: Maintained 2141 2138 F: drivers/video/backlight/ 2142 2139 F: 
include/linux/backlight.h ··· 2820 2815 CHROME HARDWARE PLATFORM SUPPORT 2821 2816 M: Olof Johansson <olof@lixom.net> 2822 2817 S: Maintained 2818 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/olof/chrome-platform.git 2823 2819 F: drivers/platform/chrome/ 2824 2820 2825 2821 CISCO VIC ETHERNET NIC DRIVER ··· 3119 3113 M: Jesper Nilsson <jesper.nilsson@axis.com> 3120 3114 L: linux-cris-kernel@axis.com 3121 3115 W: http://developer.axis.com 3116 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/jesper/cris.git 3122 3117 S: Maintained 3123 3118 F: arch/cris/ 3124 3119 F: drivers/tty/serial/crisv10.* ··· 3128 3121 M: Herbert Xu <herbert@gondor.apana.org.au> 3129 3122 M: "David S. Miller" <davem@davemloft.net> 3130 3123 L: linux-crypto@vger.kernel.org 3124 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/herbert/cryptodev-2.6.git 3131 3125 T: git git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6.git 3132 3126 S: Maintained 3133 3127 F: Documentation/crypto/ ··· 3591 3583 M: David Teigland <teigland@redhat.com> 3592 3584 L: cluster-devel@redhat.com 3593 3585 W: http://sources.redhat.com/cluster/ 3594 - T: git git://git.kernel.org/pub/scm/linux/kernel/git/teigland/dlm.git 3586 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/teigland/linux-dlm.git 3595 3587 S: Supported 3596 3588 F: fs/dlm/ 3597 3589 ··· 4005 3997 L: ecryptfs@vger.kernel.org 4006 3998 W: http://ecryptfs.org 4007 3999 W: https://launchpad.net/ecryptfs 4000 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/tyhicks/ecryptfs.git 4008 4001 S: Supported 4009 4002 F: Documentation/filesystems/ecryptfs.txt 4010 4003 F: fs/ecryptfs/ ··· 4284 4275 L: linux-ext4@vger.kernel.org 4285 4276 W: http://ext4.wiki.kernel.org 4286 4277 Q: http://patchwork.ozlabs.org/project/linux-ext4/list/ 4278 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4.git 4287 4279 S: Maintained 4288 4280 F: Documentation/filesystems/ext4.txt 4289 4281 F: fs/ext4/ ··· 4967 4957 HARDWARE 
SPINLOCK CORE 4968 4958 M: Ohad Ben-Cohen <ohad@wizery.com> 4969 4959 S: Maintained 4960 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/ohad/hwspinlock.git 4970 4961 F: Documentation/hwspinlock.txt 4971 4962 F: drivers/hwspinlock/hwspinlock_* 4972 4963 F: include/linux/hwspinlock.h ··· 5506 5495 L: linux-ima-devel@lists.sourceforge.net 5507 5496 L: linux-ima-user@lists.sourceforge.net 5508 5497 L: linux-security-module@vger.kernel.org 5498 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/zohar/linux-integrity.git 5509 5499 S: Supported 5510 5500 F: security/integrity/ima/ 5511 5501 ··· 5762 5750 F: include/linux/scif.h 5763 5751 F: include/uapi/linux/mic_common.h 5764 5752 F: include/uapi/linux/mic_ioctl.h 5765 - F include/uapi/linux/scif_ioctl.h 5753 + F: include/uapi/linux/scif_ioctl.h 5766 5754 F: drivers/misc/mic/ 5767 5755 F: drivers/dma/mic_x100_dma.c 5768 5756 F: drivers/dma/mic_x100_dma.h 5769 - F Documentation/mic/ 5757 + F: Documentation/mic/ 5770 5758 5771 5759 INTEL PMC/P-Unit IPC DRIVER 5772 5760 M: Zha Qipeng<qipeng.zha@intel.com> ··· 5847 5835 L: netdev@vger.kernel.org 5848 5836 L: lvs-devel@vger.kernel.org 5849 5837 S: Maintained 5838 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/horms/ipvs-next.git 5839 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/horms/ipvs.git 5850 5840 F: Documentation/networking/ipvs-sysctl.txt 5851 5841 F: include/net/ip_vs.h 5852 5842 F: include/uapi/linux/ip_vs.h ··· 6132 6118 M: Jeff Layton <jlayton@poochiereds.net> 6133 6119 L: linux-nfs@vger.kernel.org 6134 6120 W: http://nfs.sourceforge.net/ 6121 + T: git git://linux-nfs.org/~bfields/linux.git 6135 6122 S: Supported 6136 6123 F: fs/nfsd/ 6137 6124 F: include/uapi/linux/nfsd/ ··· 6189 6174 M: Cornelia Huck <cornelia.huck@de.ibm.com> 6190 6175 L: linux-s390@vger.kernel.org 6191 6176 W: http://www.ibm.com/developerworks/linux/linux390/ 6177 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux.git 6192 6178 S: Supported 6193 
6179 F: Documentation/s390/kvm.txt 6194 6180 F: arch/s390/include/asm/kvm* ··· 6263 6247 M: Jason Wessel <jason.wessel@windriver.com> 6264 6248 W: http://kgdb.wiki.kernel.org/ 6265 6249 L: kgdb-bugreport@lists.sourceforge.net 6250 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/jwessel/kgdb.git 6266 6251 S: Maintained 6267 6252 F: Documentation/DocBook/kgdb.tmpl 6268 6253 F: drivers/misc/kgdbts.c ··· 6435 6418 M: Dan Williams <dan.j.williams@intel.com> 6436 6419 L: linux-nvdimm@lists.01.org 6437 6420 Q: https://patchwork.kernel.org/project/linux-nvdimm/list/ 6421 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm.git 6438 6422 S: Supported 6439 6423 F: drivers/nvdimm/* 6440 6424 F: include/linux/nd.h ··· 7105 7087 METAG ARCHITECTURE 7106 7088 M: James Hogan <james.hogan@imgtec.com> 7107 7089 L: linux-metag@vger.kernel.org 7090 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/jhogan/metag.git 7108 7091 S: Odd Fixes 7109 7092 F: arch/metag/ 7110 7093 F: Documentation/metag/ ··· 7587 7568 M: Kalle Valo <kvalo@codeaurora.org> 7588 7569 L: linux-wireless@vger.kernel.org 7589 7570 Q: http://patchwork.kernel.org/project/linux-wireless/list/ 7590 - T: git git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/wireless-drivers.git/ 7571 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/wireless-drivers.git 7572 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/wireless-drivers-next.git 7591 7573 S: Maintained 7592 7574 F: drivers/net/wireless/ 7593 7575 ··· 7994 7974 M: Ian Campbell <ijc+devicetree@hellion.org.uk> 7995 7975 M: Kumar Gala <galak@codeaurora.org> 7996 7976 L: devicetree@vger.kernel.org 7977 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux.git 7997 7978 S: Maintained 7998 7979 F: Documentation/devicetree/ 7999 7980 F: arch/*/boot/dts/ ··· 8385 8364 P: Linux PCMCIA Team 8386 8365 L: linux-pcmcia@lists.infradead.org 8387 8366 W: http://lists.infradead.org/mailman/listinfo/linux-pcmcia 8388 - T: 
git git://git.kernel.org/pub/scm/linux/kernel/git/brodo/pcmcia-2.6.git 8367 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/brodo/pcmcia.git 8389 8368 S: Maintained 8390 8369 F: Documentation/pcmcia/ 8391 8370 F: drivers/pcmcia/ ··· 8707 8686 M: Kees Cook <keescook@chromium.org> 8708 8687 M: Tony Luck <tony.luck@intel.com> 8709 8688 S: Maintained 8710 - T: git git://git.infradead.org/users/cbou/linux-pstore.git 8689 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux.git 8711 8690 F: fs/pstore/ 8712 8691 F: include/linux/pstore* 8713 8692 F: drivers/firmware/efi/efi-pstore.c ··· 8916 8895 M: Kalle Valo <kvalo@qca.qualcomm.com> 8917 8896 L: ath10k@lists.infradead.org 8918 8897 W: http://wireless.kernel.org/en/users/Drivers/ath10k 8919 - T: git git://github.com/kvalo/ath.git 8898 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/ath.git 8920 8899 S: Supported 8921 8900 F: drivers/net/wireless/ath/ath10k/ 8922 8901 8923 8902 QUALCOMM HEXAGON ARCHITECTURE 8924 8903 M: Richard Kuo <rkuo@codeaurora.org> 8925 8904 L: linux-hexagon@vger.kernel.org 8905 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/rkuo/linux-hexagon-kernel.git 8926 8906 S: Supported 8927 8907 F: arch/hexagon/ 8928 8908 ··· 9122 9100 9123 9101 RESET CONTROLLER FRAMEWORK 9124 9102 M: Philipp Zabel <p.zabel@pengutronix.de> 9103 + T: git git://git.pengutronix.de/git/pza/linux 9125 9104 S: Maintained 9126 9105 F: drivers/reset/ 9127 9106 F: Documentation/devicetree/bindings/reset/ ··· 9270 9247 M: Heiko Carstens <heiko.carstens@de.ibm.com> 9271 9248 L: linux-s390@vger.kernel.org 9272 9249 W: http://www.ibm.com/developerworks/linux/linux390/ 9250 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux.git 9273 9251 S: Supported 9274 9252 F: arch/s390/ 9275 9253 F: drivers/s390/ ··· 9463 9439 L: linux-pm@vger.kernel.org 9464 9440 L: linux-samsung-soc@vger.kernel.org 9465 9441 S: Supported 9466 - T: https://github.com/lmajewski/linux-samsung-thermal.git 9442 + T: 
git https://github.com/lmajewski/linux-samsung-thermal.git 9467 9443 F: drivers/thermal/samsung/ 9468 9444 9469 9445 SAMSUNG USB2 PHY DRIVER ··· 10116 10092 10117 10093 SOFTWARE RAID (Multiple Disks) SUPPORT 10118 10094 L: linux-raid@vger.kernel.org 10095 + T: git git://neil.brown.name/md 10119 10096 S: Supported 10120 10097 F: drivers/md/ 10121 10098 F: include/linux/raid/ ··· 10288 10263 M: Phillip Lougher <phillip@squashfs.org.uk> 10289 10264 L: squashfs-devel@lists.sourceforge.net (subscribers-only) 10290 10265 W: http://squashfs.org.uk 10266 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/pkl/squashfs-next.git 10291 10267 S: Maintained 10292 10268 F: Documentation/filesystems/squashfs.txt 10293 10269 F: fs/squashfs/ ··· 10485 10459 SWIOTLB SUBSYSTEM 10486 10460 M: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> 10487 10461 L: linux-kernel@vger.kernel.org 10462 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/konrad/swiotlb.git 10488 10463 S: Supported 10489 10464 F: lib/swiotlb.c 10490 10465 F: arch/*/kernel/pci-swiotlb.c ··· 10749 10722 M: Chris Zankel <chris@zankel.net> 10750 10723 M: Max Filippov <jcmvbkbc@gmail.com> 10751 10724 L: linux-xtensa@linux-xtensa.org 10725 + T: git git://github.com/czankel/xtensa-linux.git 10752 10726 S: Maintained 10753 10727 F: arch/xtensa/ 10754 10728 F: drivers/irqchip/irq-xtensa-* ··· 11032 11004 W: http://tpmdd.sourceforge.net 11033 11005 L: tpmdd-devel@lists.sourceforge.net (moderated for non-subscribers) 11034 11006 Q: git git://github.com/PeterHuewe/linux-tpmdd.git 11035 - T: https://github.com/PeterHuewe/linux-tpmdd 11007 + T: git https://github.com/PeterHuewe/linux-tpmdd 11036 11008 S: Maintained 11037 11009 F: drivers/char/tpm/ 11038 11010 ··· 11489 11461 L: user-mode-linux-devel@lists.sourceforge.net 11490 11462 L: user-mode-linux-user@lists.sourceforge.net 11491 11463 W: http://user-mode-linux.sourceforge.net 11464 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/rw/uml.git 11492 11465 S: 
Maintained 11493 11466 F: Documentation/virtual/uml/ 11494 11467 F: arch/um/ ··· 11536 11507 VFIO DRIVER 11537 11508 M: Alex Williamson <alex.williamson@redhat.com> 11538 11509 L: kvm@vger.kernel.org 11510 + T: git git://github.com/awilliam/linux-vfio.git 11539 11511 S: Maintained 11540 11512 F: Documentation/vfio.txt 11541 11513 F: drivers/vfio/ ··· 11606 11576 L: kvm@vger.kernel.org 11607 11577 L: virtualization@lists.linux-foundation.org 11608 11578 L: netdev@vger.kernel.org 11579 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git 11609 11580 S: Maintained 11610 11581 F: drivers/vhost/ 11611 11582 F: include/uapi/linux/vhost.h ··· 12023 11992 M: xfs@oss.sgi.com 12024 11993 L: xfs@oss.sgi.com 12025 11994 W: http://oss.sgi.com/projects/xfs 12026 - T: git git://oss.sgi.com/xfs/xfs.git 11995 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/dgc/linux-xfs.git 12027 11996 S: Supported 12028 11997 F: Documentation/filesystems/xfs.txt 12029 11998 F: fs/xfs/
+2 -1
Makefile
··· 411 411 export HOSTCXX HOSTCXXFLAGS LDFLAGS_MODULE CHECK CHECKFLAGS 412 412 413 413 export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS LDFLAGS 414 - export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV CFLAGS_KASAN 414 + export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV CFLAGS_KASAN CFLAGS_UBSAN 415 415 export KBUILD_AFLAGS AFLAGS_KERNEL AFLAGS_MODULE 416 416 export KBUILD_AFLAGS_MODULE KBUILD_CFLAGS_MODULE KBUILD_LDFLAGS_MODULE 417 417 export KBUILD_AFLAGS_KERNEL KBUILD_CFLAGS_KERNEL ··· 784 784 785 785 include scripts/Makefile.kasan 786 786 include scripts/Makefile.extrawarn 787 + include scripts/Makefile.ubsan 787 788 788 789 # Add any arch overrides and user supplied CPPFLAGS, AFLAGS and CFLAGS as the 789 790 # last assignments
+3 -3
arch/Kconfig
··· 205 205 config HAVE_ARCH_TRACEHOOK 206 206 bool 207 207 208 - config HAVE_DMA_ATTRS 209 - bool 210 - 211 208 config HAVE_DMA_CONTIGUOUS 212 209 bool 213 210 ··· 627 630 compatibility... 628 631 629 632 config COMPAT_OLD_SIGACTION 633 + bool 634 + 635 + config ARCH_NO_COHERENT_DMA_MMAP 630 636 bool 631 637 632 638 source "kernel/gcov/Kconfig"
-1
arch/alpha/Kconfig
··· 9 9 select HAVE_OPROFILE 10 10 select HAVE_PCSPKR_PLATFORM 11 11 select HAVE_PERF_EVENTS 12 - select HAVE_DMA_ATTRS 13 12 select VIRT_TO_BUS 14 13 select GENERIC_IRQ_PROBE 15 14 select AUTO_IRQ_AFFINITY if SMP
-2
arch/alpha/include/asm/dma-mapping.h
··· 10 10 return dma_ops; 11 11 } 12 12 13 - #include <asm-generic/dma-mapping-common.h> 14 - 15 13 #define dma_cache_sync(dev, va, size, dir) ((void)0) 16 14 17 15 #endif /* _ALPHA_DMA_MAPPING_H */
-1
arch/alpha/include/uapi/asm/mman.h
··· 47 47 #define MADV_WILLNEED 3 /* will need these pages */ 48 48 #define MADV_SPACEAVAIL 5 /* ensure resources are available */ 49 49 #define MADV_DONTNEED 6 /* don't need these pages */ 50 - #define MADV_FREE 7 /* free pages only if memory pressure */ 51 50 52 51 /* common/generic parameters */ 53 52 #define MADV_FREE 8 /* free pages only if memory pressure */
+3 -184
arch/arc/include/asm/dma-mapping.h
··· 11 11 #ifndef ASM_ARC_DMA_MAPPING_H 12 12 #define ASM_ARC_DMA_MAPPING_H 13 13 14 - #include <asm-generic/dma-coherent.h> 15 - #include <asm/cacheflush.h> 14 + extern struct dma_map_ops arc_dma_ops; 16 15 17 - void *dma_alloc_noncoherent(struct device *dev, size_t size, 18 - dma_addr_t *dma_handle, gfp_t gfp); 19 - 20 - void dma_free_noncoherent(struct device *dev, size_t size, void *vaddr, 21 - dma_addr_t dma_handle); 22 - 23 - void *dma_alloc_coherent(struct device *dev, size_t size, 24 - dma_addr_t *dma_handle, gfp_t gfp); 25 - 26 - void dma_free_coherent(struct device *dev, size_t size, void *kvaddr, 27 - dma_addr_t dma_handle); 28 - 29 - /* drivers/base/dma-mapping.c */ 30 - extern int dma_common_mmap(struct device *dev, struct vm_area_struct *vma, 31 - void *cpu_addr, dma_addr_t dma_addr, size_t size); 32 - extern int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt, 33 - void *cpu_addr, dma_addr_t dma_addr, 34 - size_t size); 35 - 36 - #define dma_mmap_coherent(d, v, c, h, s) dma_common_mmap(d, v, c, h, s) 37 - #define dma_get_sgtable(d, t, v, h, s) dma_common_get_sgtable(d, t, v, h, s) 38 - 39 - /* 40 - * streaming DMA Mapping API... 
- * CPU accesses page via normal paddr, thus needs to explicitly made
- * consistent before each use
- */
-
-static inline void __inline_dma_cache_sync(unsigned long paddr, size_t size,
-		enum dma_data_direction dir)
+static inline struct dma_map_ops *get_dma_ops(struct device *dev)
 {
-	switch (dir) {
-	case DMA_FROM_DEVICE:
-		dma_cache_inv(paddr, size);
-		break;
-	case DMA_TO_DEVICE:
-		dma_cache_wback(paddr, size);
-		break;
-	case DMA_BIDIRECTIONAL:
-		dma_cache_wback_inv(paddr, size);
-		break;
-	default:
-		pr_err("Invalid DMA dir [%d] for OP @ %lx\n", dir, paddr);
-	}
-}
-
-void __arc_dma_cache_sync(unsigned long paddr, size_t size,
-			  enum dma_data_direction dir);
-
-#define _dma_cache_sync(addr, sz, dir)			\
-do {							\
-	if (__builtin_constant_p(dir))			\
-		__inline_dma_cache_sync(addr, sz, dir);	\
-	else						\
-		__arc_dma_cache_sync(addr, sz, dir);	\
-}							\
-while (0);
-
-static inline dma_addr_t
-dma_map_single(struct device *dev, void *cpu_addr, size_t size,
-	       enum dma_data_direction dir)
-{
-	_dma_cache_sync((unsigned long)cpu_addr, size, dir);
-	return (dma_addr_t)cpu_addr;
-}
-
-static inline void
-dma_unmap_single(struct device *dev, dma_addr_t dma_addr,
-		 size_t size, enum dma_data_direction dir)
-{
-}
-
-static inline dma_addr_t
-dma_map_page(struct device *dev, struct page *page,
-	     unsigned long offset, size_t size,
-	     enum dma_data_direction dir)
-{
-	unsigned long paddr = page_to_phys(page) + offset;
-	return dma_map_single(dev, (void *)paddr, size, dir);
-}
-
-static inline void
-dma_unmap_page(struct device *dev, dma_addr_t dma_handle,
-	       size_t size, enum dma_data_direction dir)
-{
-}
-
-static inline int
-dma_map_sg(struct device *dev, struct scatterlist *sg,
-	   int nents, enum dma_data_direction dir)
-{
-	struct scatterlist *s;
-	int i;
-
-	for_each_sg(sg, s, nents, i)
-		s->dma_address = dma_map_page(dev, sg_page(s), s->offset,
-					      s->length, dir);
-
-	return nents;
-}
-
-static inline void
-dma_unmap_sg(struct device *dev, struct scatterlist *sg,
-	     int nents, enum dma_data_direction dir)
-{
-	struct scatterlist *s;
-	int i;
-
-	for_each_sg(sg, s, nents, i)
-		dma_unmap_page(dev, sg_dma_address(s), sg_dma_len(s), dir);
-}
-
-static inline void
-dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle,
-			size_t size, enum dma_data_direction dir)
-{
-	_dma_cache_sync(dma_handle, size, DMA_FROM_DEVICE);
-}
-
-static inline void
-dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle,
-			   size_t size, enum dma_data_direction dir)
-{
-	_dma_cache_sync(dma_handle, size, DMA_TO_DEVICE);
-}
-
-static inline void
-dma_sync_single_range_for_cpu(struct device *dev, dma_addr_t dma_handle,
-			      unsigned long offset, size_t size,
-			      enum dma_data_direction direction)
-{
-	_dma_cache_sync(dma_handle + offset, size, DMA_FROM_DEVICE);
-}
-
-static inline void
-dma_sync_single_range_for_device(struct device *dev, dma_addr_t dma_handle,
-				 unsigned long offset, size_t size,
-				 enum dma_data_direction direction)
-{
-	_dma_cache_sync(dma_handle + offset, size, DMA_TO_DEVICE);
-}
-
-static inline void
-dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sglist, int nelems,
-		    enum dma_data_direction dir)
-{
-	int i;
-	struct scatterlist *sg;
-
-	for_each_sg(sglist, sg, nelems, i)
-		_dma_cache_sync((unsigned int)sg_virt(sg), sg->length, dir);
-}
-
-static inline void
-dma_sync_sg_for_device(struct device *dev, struct scatterlist *sglist,
-		       int nelems, enum dma_data_direction dir)
-{
-	int i;
-	struct scatterlist *sg;
-
-	for_each_sg(sglist, sg, nelems, i)
-		_dma_cache_sync((unsigned int)sg_virt(sg), sg->length, dir);
-}
-
-static inline int dma_supported(struct device *dev, u64 dma_mask)
-{
-	/* Support 32 bit DMA mask exclusively */
-	return dma_mask == DMA_BIT_MASK(32);
-}
-
-static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
-{
-	return 0;
-}
-
-static inline int dma_set_mask(struct device *dev, u64 dma_mask)
-{
-	if (!dev->dma_mask || !dma_supported(dev, dma_mask))
-		return -EIO;
-
-	*dev->dma_mask = dma_mask;
-
-	return 0;
+	return &arc_dma_ops;
 }
 
 #endif
+105 -47
arch/arc/mm/dma.c
···
  */
 
 #include <linux/dma-mapping.h>
-#include <linux/dma-debug.h>
-#include <linux/export.h>
 #include <asm/cache.h>
 #include <asm/cacheflush.h>
 
-/*
- * Helpers for Coherent DMA API.
- */
-void *dma_alloc_noncoherent(struct device *dev, size_t size,
-		dma_addr_t *dma_handle, gfp_t gfp)
+
+static void *arc_dma_alloc(struct device *dev, size_t size,
+		dma_addr_t *dma_handle, gfp_t gfp, struct dma_attrs *attrs)
 {
-	void *paddr;
+	void *paddr, *kvaddr;
 
 	/* This is linear addr (0x8000_0000 based) */
 	paddr = alloc_pages_exact(size, gfp);
···
 
 	/* This is bus address, platform dependent */
 	*dma_handle = (dma_addr_t)paddr;
-
-	return paddr;
-}
-EXPORT_SYMBOL(dma_alloc_noncoherent);
-
-void dma_free_noncoherent(struct device *dev, size_t size, void *vaddr,
-		dma_addr_t dma_handle)
-{
-	free_pages_exact((void *)dma_handle, size);
-}
-EXPORT_SYMBOL(dma_free_noncoherent);
-
-void *dma_alloc_coherent(struct device *dev, size_t size,
-		dma_addr_t *dma_handle, gfp_t gfp)
-{
-	void *paddr, *kvaddr;
 
 	/*
 	 * IOC relies on all data (even coherent DMA data) being in cache
···
 	 * -For coherent data, Read/Write to buffers terminate early in cache
 	 *   (vs. always going to memory - thus are faster)
 	 */
-	if (is_isa_arcv2() && ioc_exists)
-		return dma_alloc_noncoherent(dev, size, dma_handle, gfp);
-
-	/* This is linear addr (0x8000_0000 based) */
-	paddr = alloc_pages_exact(size, gfp);
-	if (!paddr)
-		return NULL;
+	if ((is_isa_arcv2() && ioc_exists) ||
+	    dma_get_attr(DMA_ATTR_NON_CONSISTENT, attrs))
+		return paddr;
 
 	/* This is kernel Virtual address (0x7000_0000 based) */
 	kvaddr = ioremap_nocache((unsigned long)paddr, size);
 	if (kvaddr == NULL)
 		return NULL;
-
-	/* This is bus address, platform dependent */
-	*dma_handle = (dma_addr_t)paddr;
 
 	/*
 	 * Evict any existing L1 and/or L2 lines for the backing page
···
 
 	return kvaddr;
 }
-EXPORT_SYMBOL(dma_alloc_coherent);
 
-void dma_free_coherent(struct device *dev, size_t size, void *kvaddr,
-		dma_addr_t dma_handle)
+static void arc_dma_free(struct device *dev, size_t size, void *vaddr,
+		dma_addr_t dma_handle, struct dma_attrs *attrs)
 {
-	if (is_isa_arcv2() && ioc_exists)
-		return dma_free_noncoherent(dev, size, kvaddr, dma_handle);
-
-	iounmap((void __force __iomem *)kvaddr);
+	if (!dma_get_attr(DMA_ATTR_NON_CONSISTENT, attrs) &&
+	    !(is_isa_arcv2() && ioc_exists))
+		iounmap((void __force __iomem *)vaddr);
 
 	free_pages_exact((void *)dma_handle, size);
 }
-EXPORT_SYMBOL(dma_free_coherent);
 
 /*
- * Helper for streaming DMA...
+ * streaming DMA Mapping API...
+ * CPU accesses page via normal paddr, thus needs to explicitly made
+ * consistent before each use
  */
-void __arc_dma_cache_sync(unsigned long paddr, size_t size,
-		enum dma_data_direction dir)
+static void _dma_cache_sync(unsigned long paddr, size_t size,
+		enum dma_data_direction dir)
 {
-	__inline_dma_cache_sync(paddr, size, dir);
+	switch (dir) {
+	case DMA_FROM_DEVICE:
+		dma_cache_inv(paddr, size);
+		break;
+	case DMA_TO_DEVICE:
+		dma_cache_wback(paddr, size);
+		break;
+	case DMA_BIDIRECTIONAL:
+		dma_cache_wback_inv(paddr, size);
+		break;
+	default:
+		pr_err("Invalid DMA dir [%d] for OP @ %lx\n", dir, paddr);
+	}
 }
-EXPORT_SYMBOL(__arc_dma_cache_sync);
+
+static dma_addr_t arc_dma_map_page(struct device *dev, struct page *page,
+		unsigned long offset, size_t size, enum dma_data_direction dir,
+		struct dma_attrs *attrs)
+{
+	unsigned long paddr = page_to_phys(page) + offset;
+	_dma_cache_sync(paddr, size, dir);
+	return (dma_addr_t)paddr;
+}
+
+static int arc_dma_map_sg(struct device *dev, struct scatterlist *sg,
+		int nents, enum dma_data_direction dir, struct dma_attrs *attrs)
+{
+	struct scatterlist *s;
+	int i;
+
+	for_each_sg(sg, s, nents, i)
+		s->dma_address = dma_map_page(dev, sg_page(s), s->offset,
+					      s->length, dir);
+
+	return nents;
+}
+
+static void arc_dma_sync_single_for_cpu(struct device *dev,
+		dma_addr_t dma_handle, size_t size, enum dma_data_direction dir)
+{
+	_dma_cache_sync(dma_handle, size, DMA_FROM_DEVICE);
+}
+
+static void arc_dma_sync_single_for_device(struct device *dev,
+		dma_addr_t dma_handle, size_t size, enum dma_data_direction dir)
+{
+	_dma_cache_sync(dma_handle, size, DMA_TO_DEVICE);
+}
+
+static void arc_dma_sync_sg_for_cpu(struct device *dev,
+		struct scatterlist *sglist, int nelems,
+		enum dma_data_direction dir)
+{
+	int i;
+	struct scatterlist *sg;
+
+	for_each_sg(sglist, sg, nelems, i)
+		_dma_cache_sync((unsigned int)sg_virt(sg), sg->length, dir);
+}
+
+static void arc_dma_sync_sg_for_device(struct device *dev,
+		struct scatterlist *sglist, int nelems,
+		enum dma_data_direction dir)
+{
+	int i;
+	struct scatterlist *sg;
+
+	for_each_sg(sglist, sg, nelems, i)
+		_dma_cache_sync((unsigned int)sg_virt(sg), sg->length, dir);
+}
+
+static int arc_dma_supported(struct device *dev, u64 dma_mask)
+{
+	/* Support 32 bit DMA mask exclusively */
+	return dma_mask == DMA_BIT_MASK(32);
+}
+
+struct dma_map_ops arc_dma_ops = {
+	.alloc			= arc_dma_alloc,
+	.free			= arc_dma_free,
+	.map_page		= arc_dma_map_page,
+	.map_sg			= arc_dma_map_sg,
+	.sync_single_for_device	= arc_dma_sync_single_for_device,
+	.sync_single_for_cpu	= arc_dma_sync_single_for_cpu,
+	.sync_sg_for_cpu	= arc_dma_sync_sg_for_cpu,
+	.sync_sg_for_device	= arc_dma_sync_sg_for_device,
+	.dma_supported		= arc_dma_supported,
+};
+EXPORT_SYMBOL(arc_dma_ops);
-1
arch/arm/Kconfig
···
 	select HAVE_C_RECORDMCOUNT
 	select HAVE_DEBUG_KMEMLEAK
 	select HAVE_DMA_API_DEBUG
-	select HAVE_DMA_ATTRS
 	select HAVE_DMA_CONTIGUOUS if MMU
 	select HAVE_DYNAMIC_FTRACE if (!XIP_KERNEL) && !CPU_ENDIAN_BE32 && MMU
 	select HAVE_EFFICIENT_UNALIGNED_ACCESS if (CPU_V6 || CPU_V6K || CPU_V7) && MMU
-7
arch/arm/include/asm/dma-mapping.h
···
 #define HAVE_ARCH_DMA_SUPPORTED 1
 extern int dma_supported(struct device *dev, u64 mask);
 
-/*
- * Note that while the generic code provides dummy dma_{alloc,free}_noncoherent
- * implementations, we don't provide a dma_cache_sync function so drivers using
- * this API are highlighted with build warnings.
- */
-#include <asm-generic/dma-mapping-common.h>
-
 #ifdef __arch_page_to_dma
 #error Please update to __arch_pfn_to_dma
 #endif
-1
arch/arm64/Kconfig
···
 	select HAVE_DEBUG_BUGVERBOSE
 	select HAVE_DEBUG_KMEMLEAK
 	select HAVE_DMA_API_DEBUG
-	select HAVE_DMA_ATTRS
 	select HAVE_DMA_CONTIGUOUS
 	select HAVE_DYNAMIC_FTRACE
 	select HAVE_EFFICIENT_UNALIGNED_ACCESS
-2
arch/arm64/include/asm/dma-mapping.h
···
 	return dev->archdata.dma_coherent;
 }
 
-#include <asm-generic/dma-mapping-common.h>
-
 static inline dma_addr_t phys_to_dma(struct device *dev, phys_addr_t paddr)
 {
 	return (dma_addr_t)paddr;
+4 -340
arch/avr32/include/asm/dma-mapping.h
···
 #ifndef __ASM_AVR32_DMA_MAPPING_H
 #define __ASM_AVR32_DMA_MAPPING_H
 
-#include <linux/mm.h>
-#include <linux/device.h>
-#include <linux/scatterlist.h>
-#include <asm/processor.h>
-#include <asm/cacheflush.h>
-#include <asm/io.h>
-
 extern void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
 	int direction);
 
-/*
- * Return whether the given device DMA address mask can be supported
- * properly. For example, if your device can only drive the low 24-bits
- * during bus mastering, then you would pass 0x00ffffff as the mask
- * to this function.
- */
-static inline int dma_supported(struct device *dev, u64 mask)
+extern struct dma_map_ops avr32_dma_ops;
+
+static inline struct dma_map_ops *get_dma_ops(struct device *dev)
 {
-	/* Fix when needed. I really don't know of any limitations */
-	return 1;
+	return &avr32_dma_ops;
 }
-
-static inline int dma_set_mask(struct device *dev, u64 dma_mask)
-{
-	if (!dev->dma_mask || !dma_supported(dev, dma_mask))
-		return -EIO;
-
-	*dev->dma_mask = dma_mask;
-	return 0;
-}
-
-/*
- * dma_map_single can't fail as it is implemented now.
- */
-static inline int dma_mapping_error(struct device *dev, dma_addr_t addr)
-{
-	return 0;
-}
-
-/**
- * dma_alloc_coherent - allocate consistent memory for DMA
- * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
- * @size: required memory size
- * @handle: bus-specific DMA address
- *
- * Allocate some uncached, unbuffered memory for a device for
- * performing DMA. This function allocates pages, and will
- * return the CPU-viewed address, and sets @handle to be the
- * device-viewed address.
- */
-extern void *dma_alloc_coherent(struct device *dev, size_t size,
-		dma_addr_t *handle, gfp_t gfp);
-
-/**
- * dma_free_coherent - free memory allocated by dma_alloc_coherent
- * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
- * @size: size of memory originally requested in dma_alloc_coherent
- * @cpu_addr: CPU-view address returned from dma_alloc_coherent
- * @handle: device-view address returned from dma_alloc_coherent
- *
- * Free (and unmap) a DMA buffer previously allocated by
- * dma_alloc_coherent().
- *
- * References to memory and mappings associated with cpu_addr/handle
- * during and after this call executing are illegal.
- */
-extern void dma_free_coherent(struct device *dev, size_t size,
-		void *cpu_addr, dma_addr_t handle);
-
-/**
- * dma_alloc_writecombine - allocate write-combining memory for DMA
- * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
- * @size: required memory size
- * @handle: bus-specific DMA address
- *
- * Allocate some uncached, buffered memory for a device for
- * performing DMA. This function allocates pages, and will
- * return the CPU-viewed address, and sets @handle to be the
- * device-viewed address.
- */
-extern void *dma_alloc_writecombine(struct device *dev, size_t size,
-		dma_addr_t *handle, gfp_t gfp);
-
-/**
- * dma_free_coherent - free memory allocated by dma_alloc_writecombine
- * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
- * @size: size of memory originally requested in dma_alloc_writecombine
- * @cpu_addr: CPU-view address returned from dma_alloc_writecombine
- * @handle: device-view address returned from dma_alloc_writecombine
- *
- * Free (and unmap) a DMA buffer previously allocated by
- * dma_alloc_writecombine().
- *
- * References to memory and mappings associated with cpu_addr/handle
- * during and after this call executing are illegal.
- */
-extern void dma_free_writecombine(struct device *dev, size_t size,
-		void *cpu_addr, dma_addr_t handle);
-
-/**
- * dma_map_single - map a single buffer for streaming DMA
- * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
- * @cpu_addr: CPU direct mapped address of buffer
- * @size: size of buffer to map
- * @dir: DMA transfer direction
- *
- * Ensure that any data held in the cache is appropriately discarded
- * or written back.
- *
- * The device owns this memory once this call has completed. The CPU
- * can regain ownership by calling dma_unmap_single() or dma_sync_single().
- */
-static inline dma_addr_t
-dma_map_single(struct device *dev, void *cpu_addr, size_t size,
-	       enum dma_data_direction direction)
-{
-	dma_cache_sync(dev, cpu_addr, size, direction);
-	return virt_to_bus(cpu_addr);
-}
-
-/**
- * dma_unmap_single - unmap a single buffer previously mapped
- * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
- * @handle: DMA address of buffer
- * @size: size of buffer to map
- * @dir: DMA transfer direction
- *
- * Unmap a single streaming mode DMA translation. The handle and size
- * must match what was provided in the previous dma_map_single() call.
- * All other usages are undefined.
- *
- * After this call, reads by the CPU to the buffer are guaranteed to see
- * whatever the device wrote there.
- */
-static inline void
-dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
-		 enum dma_data_direction direction)
-{
-
-}
-
-/**
- * dma_map_page - map a portion of a page for streaming DMA
- * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
- * @page: page that buffer resides in
- * @offset: offset into page for start of buffer
- * @size: size of buffer to map
- * @dir: DMA transfer direction
- *
- * Ensure that any data held in the cache is appropriately discarded
- * or written back.
- *
- * The device owns this memory once this call has completed. The CPU
- * can regain ownership by calling dma_unmap_page() or dma_sync_single().
- */
-static inline dma_addr_t
-dma_map_page(struct device *dev, struct page *page,
-	     unsigned long offset, size_t size,
-	     enum dma_data_direction direction)
-{
-	return dma_map_single(dev, page_address(page) + offset,
-			      size, direction);
-}
-
-/**
- * dma_unmap_page - unmap a buffer previously mapped through dma_map_page()
- * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
- * @handle: DMA address of buffer
- * @size: size of buffer to map
- * @dir: DMA transfer direction
- *
- * Unmap a single streaming mode DMA translation. The handle and size
- * must match what was provided in the previous dma_map_single() call.
- * All other usages are undefined.
- *
- * After this call, reads by the CPU to the buffer are guaranteed to see
- * whatever the device wrote there.
- */
-static inline void
-dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
-	       enum dma_data_direction direction)
-{
-	dma_unmap_single(dev, dma_address, size, direction);
-}
-
-/**
- * dma_map_sg - map a set of SG buffers for streaming mode DMA
- * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
- * @sg: list of buffers
- * @nents: number of buffers to map
- * @dir: DMA transfer direction
- *
- * Map a set of buffers described by scatterlist in streaming
- * mode for DMA. This is the scatter-gather version of the
- * above pci_map_single interface. Here the scatter gather list
- * elements are each tagged with the appropriate dma address
- * and length. They are obtained via sg_dma_{address,length}(SG).
- *
- * NOTE: An implementation may be able to use a smaller number of
- * DMA address/length pairs than there are SG table elements.
- * (for example via virtual mapping capabilities)
- * The routine returns the number of addr/length pairs actually
- * used, at most nents.
- *
- * Device ownership issues as mentioned above for pci_map_single are
- * the same here.
- */
-static inline int
-dma_map_sg(struct device *dev, struct scatterlist *sglist, int nents,
-	   enum dma_data_direction direction)
-{
-	int i;
-	struct scatterlist *sg;
-
-	for_each_sg(sglist, sg, nents, i) {
-		char *virt;
-
-		sg->dma_address = page_to_bus(sg_page(sg)) + sg->offset;
-		virt = sg_virt(sg);
-		dma_cache_sync(dev, virt, sg->length, direction);
-	}
-
-	return nents;
-}
-
-/**
- * dma_unmap_sg - unmap a set of SG buffers mapped by dma_map_sg
- * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
- * @sg: list of buffers
- * @nents: number of buffers to map
- * @dir: DMA transfer direction
- *
- * Unmap a set of streaming mode DMA translations.
- * Again, CPU read rules concerning calls here are the same as for
- * pci_unmap_single() above.
- */
-static inline void
-dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nhwentries,
-	     enum dma_data_direction direction)
-{
-
-}
-
-/**
- * dma_sync_single_for_cpu
- * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
- * @handle: DMA address of buffer
- * @size: size of buffer to map
- * @dir: DMA transfer direction
- *
- * Make physical memory consistent for a single streaming mode DMA
- * translation after a transfer.
- *
- * If you perform a dma_map_single() but wish to interrogate the
- * buffer using the cpu, yet do not wish to teardown the DMA mapping,
- * you must call this function before doing so. At the next point you
- * give the DMA address back to the card, you must first perform a
- * dma_sync_single_for_device, and then the device again owns the
- * buffer.
- */
-static inline void
-dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle,
-			size_t size, enum dma_data_direction direction)
-{
-	/*
-	 * No need to do anything since the CPU isn't supposed to
-	 * touch this memory after we flushed it at mapping- or
-	 * sync-for-device time.
-	 */
-}
-
-static inline void
-dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle,
-			   size_t size, enum dma_data_direction direction)
-{
-	dma_cache_sync(dev, bus_to_virt(dma_handle), size, direction);
-}
-
-static inline void
-dma_sync_single_range_for_cpu(struct device *dev, dma_addr_t dma_handle,
-			      unsigned long offset, size_t size,
-			      enum dma_data_direction direction)
-{
-	/* just sync everything, that's all the pci API can do */
-	dma_sync_single_for_cpu(dev, dma_handle, offset+size, direction);
-}
-
-static inline void
-dma_sync_single_range_for_device(struct device *dev, dma_addr_t dma_handle,
-				 unsigned long offset, size_t size,
-				 enum dma_data_direction direction)
-{
-	/* just sync everything, that's all the pci API can do */
-	dma_sync_single_for_device(dev, dma_handle, offset+size, direction);
-}
-
-/**
- * dma_sync_sg_for_cpu
- * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
- * @sg: list of buffers
- * @nents: number of buffers to map
- * @dir: DMA transfer direction
- *
- * Make physical memory consistent for a set of streaming
- * mode DMA translations after a transfer.
- *
- * The same as dma_sync_single_for_* but for a scatter-gather list,
- * same rules and usage.
- */
-static inline void
-dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
-		    int nents, enum dma_data_direction direction)
-{
-	/*
-	 * No need to do anything since the CPU isn't supposed to
-	 * touch this memory after we flushed it at mapping- or
-	 * sync-for-device time.
-	 */
-}
-
-static inline void
-dma_sync_sg_for_device(struct device *dev, struct scatterlist *sglist,
-		       int nents, enum dma_data_direction direction)
-{
-	int i;
-	struct scatterlist *sg;
-
-	for_each_sg(sglist, sg, nents, i)
-		dma_cache_sync(dev, sg_virt(sg), sg->length, direction);
-}
-
-/* Now for the API extensions over the pci_ one */
-
-#define dma_alloc_noncoherent(d, s, h, f)	dma_alloc_coherent(d, s, h, f)
-#define dma_free_noncoherent(d, s, v, h)	dma_free_coherent(d, s, v, h)
-
-/* drivers/base/dma-mapping.c */
-extern int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
-			   void *cpu_addr, dma_addr_t dma_addr, size_t size);
-extern int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
-				  void *cpu_addr, dma_addr_t dma_addr,
-				  size_t size);
-
-#define dma_mmap_coherent(d, v, c, h, s) dma_common_mmap(d, v, c, h, s)
-#define dma_get_sgtable(d, t, v, h, s) dma_common_get_sgtable(d, t, v, h, s)
 
 #endif /* __ASM_AVR32_DMA_MAPPING_H */
+86 -41
arch/avr32/mm/dma-coherent.c
···
 #include <linux/dma-mapping.h>
 #include <linux/gfp.h>
 #include <linux/export.h>
+#include <linux/mm.h>
+#include <linux/device.h>
+#include <linux/scatterlist.h>
 
-#include <asm/addrspace.h>
+#include <asm/processor.h>
 #include <asm/cacheflush.h>
+#include <asm/io.h>
+#include <asm/addrspace.h>
 
 void dma_cache_sync(struct device *dev, void *vaddr, size_t size, int direction)
 {
···
 		__free_page(page++);
 }
 
-void *dma_alloc_coherent(struct device *dev, size_t size,
-			 dma_addr_t *handle, gfp_t gfp)
-{
-	struct page *page;
-	void *ret = NULL;
-
-	page = __dma_alloc(dev, size, handle, gfp);
-	if (page)
-		ret = phys_to_uncached(page_to_phys(page));
-
-	return ret;
-}
-EXPORT_SYMBOL(dma_alloc_coherent);
-
-void dma_free_coherent(struct device *dev, size_t size,
-		       void *cpu_addr, dma_addr_t handle)
-{
-	void *addr = phys_to_cached(uncached_to_phys(cpu_addr));
-	struct page *page;
-
-	pr_debug("dma_free_coherent addr %p (phys %08lx) size %u\n",
-		 cpu_addr, (unsigned long)handle, (unsigned)size);
-	BUG_ON(!virt_addr_valid(addr));
-	page = virt_to_page(addr);
-	__dma_free(dev, size, page, handle);
-}
-EXPORT_SYMBOL(dma_free_coherent);
-
-void *dma_alloc_writecombine(struct device *dev, size_t size,
-			     dma_addr_t *handle, gfp_t gfp)
+static void *avr32_dma_alloc(struct device *dev, size_t size,
+		dma_addr_t *handle, gfp_t gfp, struct dma_attrs *attrs)
 {
 	struct page *page;
 	dma_addr_t phys;
···
 	page = __dma_alloc(dev, size, handle, gfp);
 	if (!page)
 		return NULL;
-
 	phys = page_to_phys(page);
-	*handle = phys;
 
-	/* Now, map the page into P3 with write-combining turned on */
-	return __ioremap(phys, size, _PAGE_BUFFER);
+	if (dma_get_attr(DMA_ATTR_WRITE_COMBINE, attrs)) {
+		/* Now, map the page into P3 with write-combining turned on */
+		*handle = phys;
+		return __ioremap(phys, size, _PAGE_BUFFER);
+	} else {
+		return phys_to_uncached(phys);
+	}
 }
-EXPORT_SYMBOL(dma_alloc_writecombine);
 
-void dma_free_writecombine(struct device *dev, size_t size,
-			   void *cpu_addr, dma_addr_t handle)
+static void avr32_dma_free(struct device *dev, size_t size,
+		void *cpu_addr, dma_addr_t handle, struct dma_attrs *attrs)
 {
 	struct page *page;
 
-	iounmap(cpu_addr);
+	if (dma_get_attr(DMA_ATTR_WRITE_COMBINE, attrs)) {
+		iounmap(cpu_addr);
 
-	page = phys_to_page(handle);
+		page = phys_to_page(handle);
+	} else {
+		void *addr = phys_to_cached(uncached_to_phys(cpu_addr));
+
+		pr_debug("avr32_dma_free addr %p (phys %08lx) size %u\n",
+			 cpu_addr, (unsigned long)handle, (unsigned)size);
+
+		BUG_ON(!virt_addr_valid(addr));
+		page = virt_to_page(addr);
+	}
+
 	__dma_free(dev, size, page, handle);
 }
-EXPORT_SYMBOL(dma_free_writecombine);
+
+static dma_addr_t avr32_dma_map_page(struct device *dev, struct page *page,
+		unsigned long offset, size_t size,
+		enum dma_data_direction direction, struct dma_attrs *attrs)
+{
+	void *cpu_addr = page_address(page) + offset;
+
+	dma_cache_sync(dev, cpu_addr, size, direction);
+	return virt_to_bus(cpu_addr);
+}
+
+static int avr32_dma_map_sg(struct device *dev, struct scatterlist *sglist,
+		int nents, enum dma_data_direction direction,
+		struct dma_attrs *attrs)
+{
+	int i;
+	struct scatterlist *sg;
+
+	for_each_sg(sglist, sg, nents, i) {
+		char *virt;
+
+		sg->dma_address = page_to_bus(sg_page(sg)) + sg->offset;
+		virt = sg_virt(sg);
+		dma_cache_sync(dev, virt, sg->length, direction);
+	}
+
+	return nents;
+}
+
+static void avr32_dma_sync_single_for_device(struct device *dev,
+		dma_addr_t dma_handle, size_t size,
+		enum dma_data_direction direction)
+{
+	dma_cache_sync(dev, bus_to_virt(dma_handle), size, direction);
+}
+
+static void avr32_dma_sync_sg_for_device(struct device *dev,
+		struct scatterlist *sglist, int nents,
+		enum dma_data_direction direction)
+{
+	int i;
+	struct scatterlist *sg;
+
+	for_each_sg(sglist, sg, nents, i)
+		dma_cache_sync(dev, sg_virt(sg), sg->length, direction);
+}
+
+struct dma_map_ops avr32_dma_ops = {
+	.alloc			= avr32_dma_alloc,
+	.free			= avr32_dma_free,
+	.map_page		= avr32_dma_map_page,
+	.map_sg			= avr32_dma_map_sg,
+	.sync_single_for_device	= avr32_dma_sync_single_for_device,
+	.sync_sg_for_device	= avr32_dma_sync_sg_for_device,
+};
+EXPORT_SYMBOL(avr32_dma_ops);
+4 -125
arch/blackfin/include/asm/dma-mapping.h
···
 #define _BLACKFIN_DMA_MAPPING_H
 
 #include <asm/cacheflush.h>
-struct scatterlist;
-
-void *dma_alloc_coherent(struct device *dev, size_t size,
-			 dma_addr_t *dma_handle, gfp_t gfp);
-void dma_free_coherent(struct device *dev, size_t size, void *vaddr,
-		       dma_addr_t dma_handle);
-
-/*
- * Now for the API extensions over the pci_ one
- */
-#define dma_alloc_noncoherent(d, s, h, f) dma_alloc_coherent(d, s, h, f)
-#define dma_free_noncoherent(d, s, v, h) dma_free_coherent(d, s, v, h)
-#define dma_supported(d, m)         (1)
-
-static inline int
-dma_set_mask(struct device *dev, u64 dma_mask)
-{
-	if (!dev->dma_mask || !dma_supported(dev, dma_mask))
-		return -EIO;
-
-	*dev->dma_mask = dma_mask;
-
-	return 0;
-}
-
-static inline int
-dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
-{
-	return 0;
-}
 
 extern void
 __dma_sync(dma_addr_t addr, size_t size, enum dma_data_direction dir);
···
 	__dma_sync(addr, size, dir);
 }
 
-static inline dma_addr_t
-dma_map_single(struct device *dev, void *ptr, size_t size,
-	       enum dma_data_direction dir)
+extern struct dma_map_ops bfin_dma_ops;
+
+static inline struct dma_map_ops *get_dma_ops(struct device *dev)
 {
-	_dma_sync((dma_addr_t)ptr, size, dir);
-	return (dma_addr_t) ptr;
+	return &bfin_dma_ops;
 }
-
-static inline dma_addr_t
-dma_map_page(struct device *dev, struct page *page,
-	     unsigned long offset, size_t size,
-	     enum dma_data_direction dir)
-{
-	return dma_map_single(dev, page_address(page) + offset, size, dir);
-}
-
-static inline void
-dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
-		 enum dma_data_direction dir)
-{
-	BUG_ON(!valid_dma_direction(dir));
-}
-
-static inline void
-dma_unmap_page(struct device *dev, dma_addr_t dma_addr, size_t size,
-	       enum dma_data_direction dir)
-{
-	dma_unmap_single(dev, dma_addr, size, dir);
-}
-
-extern int dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
-		      enum dma_data_direction dir);
-
-static inline void
-dma_unmap_sg(struct device *dev, struct scatterlist *sg,
-	     int nhwentries, enum dma_data_direction dir)
-{
-	BUG_ON(!valid_dma_direction(dir));
-}
-
-static inline void
-dma_sync_single_range_for_cpu(struct device *dev, dma_addr_t handle,
-			      unsigned long offset, size_t size,
-			      enum dma_data_direction dir)
-{
-	BUG_ON(!valid_dma_direction(dir));
-}
-
-static inline void
-dma_sync_single_range_for_device(struct device *dev, dma_addr_t handle,
-				 unsigned long offset, size_t size,
-				 enum dma_data_direction dir)
-{
-	_dma_sync(handle + offset, size, dir);
-}
-
-static inline void
-dma_sync_single_for_cpu(struct device *dev, dma_addr_t handle, size_t size,
-			enum dma_data_direction dir)
-{
-	dma_sync_single_range_for_cpu(dev, handle, 0, size, dir);
-}
-
-static inline void
-dma_sync_single_for_device(struct device *dev, dma_addr_t handle, size_t size,
-			   enum dma_data_direction dir)
-{
-	dma_sync_single_range_for_device(dev, handle, 0, size, dir);
-}
-
-static inline void
-dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, int nents,
-		    enum dma_data_direction dir)
-{
-	BUG_ON(!valid_dma_direction(dir));
-}
-
-extern void
-dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
-		       int nents, enum dma_data_direction dir);
-
-static inline void
-dma_cache_sync(struct device *dev, void *vaddr, size_t size,
-	       enum dma_data_direction dir)
-{
-	_dma_sync((dma_addr_t)vaddr, size, dir);
-}
-
-/* drivers/base/dma-mapping.c */
-extern int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
-			   void *cpu_addr, dma_addr_t dma_addr, size_t size);
-extern int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
-				  void *cpu_addr, dma_addr_t dma_addr,
-				  size_t size);
-
-#define dma_mmap_coherent(d, v, c, h, s) dma_common_mmap(d, v, c, h, s)
-#define dma_get_sgtable(d, t, v, h, s) dma_common_get_sgtable(d, t, v, h, s)
 
 #endif /* _BLACKFIN_DMA_MAPPING_H */
+38 -14
arch/blackfin/kernel/dma-mapping.c
···
 	spin_unlock_irqrestore(&dma_page_lock, flags);
 }
 
-void *dma_alloc_coherent(struct device *dev, size_t size,
-			 dma_addr_t *dma_handle, gfp_t gfp)
+static void *bfin_dma_alloc(struct device *dev, size_t size,
+		dma_addr_t *dma_handle, gfp_t gfp, struct dma_attrs *attrs)
 {
 	void *ret;
 
···
 
 	return ret;
 }
-EXPORT_SYMBOL(dma_alloc_coherent);
 
-void
-dma_free_coherent(struct device *dev, size_t size, void *vaddr,
-		  dma_addr_t dma_handle)
+static void bfin_dma_free(struct device *dev, size_t size, void *vaddr,
+		dma_addr_t dma_handle, struct dma_attrs *attrs)
 {
 	__free_dma_pages((unsigned long)vaddr, get_pages(size));
 }
-EXPORT_SYMBOL(dma_free_coherent);
 
 /*
  * Streaming DMA mappings
···
 }
 EXPORT_SYMBOL(__dma_sync);
 
-int
-dma_map_sg(struct device *dev, struct scatterlist *sg_list, int nents,
-	   enum dma_data_direction direction)
+static int bfin_dma_map_sg(struct device *dev, struct scatterlist *sg_list,
+		int nents, enum dma_data_direction direction,
+		struct dma_attrs *attrs)
 {
 	struct scatterlist *sg;
 	int i;
···
 
 	return nents;
 }
-EXPORT_SYMBOL(dma_map_sg);
 
-void dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg_list,
-			    int nelems, enum dma_data_direction direction)
+static void bfin_dma_sync_sg_for_device(struct device *dev,
+		struct scatterlist *sg_list, int nelems,
+		enum dma_data_direction direction)
 {
 	struct scatterlist *sg;
 	int i;
···
 		__dma_sync(sg_dma_address(sg), sg_dma_len(sg), direction);
 	}
 }
-EXPORT_SYMBOL(dma_sync_sg_for_device);
+
+static dma_addr_t bfin_dma_map_page(struct device *dev, struct page *page,
+		unsigned long offset, size_t size, enum dma_data_direction dir,
+		struct dma_attrs *attrs)
+{
+	dma_addr_t handle = (dma_addr_t)(page_address(page) + offset);
+
+	_dma_sync(handle, size, dir);
+	return handle;
+}
+
+static inline void bfin_dma_sync_single_for_device(struct device *dev,
+		dma_addr_t handle, size_t size, enum dma_data_direction dir)
+{
+	_dma_sync(handle, size, dir);
+}
+
+struct dma_map_ops bfin_dma_ops = {
+	.alloc			= bfin_dma_alloc,
+	.free			= bfin_dma_free,
+
+	.map_page		= bfin_dma_map_page,
+	.map_sg			= bfin_dma_map_sg,
+
+	.sync_single_for_device	= bfin_dma_sync_single_for_device,
+	.sync_sg_for_device	= bfin_dma_sync_sg_for_device,
+};
+EXPORT_SYMBOL(bfin_dma_ops);
+1
arch/c6x/Kconfig
··· 17 17 select OF_EARLY_FLATTREE
18 18 select GENERIC_CLOCKEVENTS
19 19 select MODULES_USE_ELF_RELA
20 + select ARCH_NO_COHERENT_DMA_MMAP
20 21 
21 22 config MMU
22 23 def_bool n
+10 -92
arch/c6x/include/asm/dma-mapping.h
··· 12 12 #ifndef _ASM_C6X_DMA_MAPPING_H
13 13 #define _ASM_C6X_DMA_MAPPING_H
14 14 
15 - #include <linux/dma-debug.h>
16 - #include <asm-generic/dma-coherent.h>
17 - 
18 - #define dma_supported(d, m) 1
19 - 
20 - static inline void dma_sync_single_range_for_device(struct device *dev,
21 - dma_addr_t addr,
22 - unsigned long offset,
23 - size_t size,
24 - enum dma_data_direction dir)
25 - {
26 - }
27 - 
28 - static inline int dma_set_mask(struct device *dev, u64 dma_mask)
29 - {
30 - if (!dev->dma_mask || !dma_supported(dev, dma_mask))
31 - return -EIO;
32 - 
33 - *dev->dma_mask = dma_mask;
34 - 
35 - return 0;
36 - }
37 - 
38 15 /*
39 16 * DMA errors are defined by all-bits-set in the DMA address.
40 17 */
41 - static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
18 + #define DMA_ERROR_CODE ~0
19 + 
20 + extern struct dma_map_ops c6x_dma_ops;
21 + 
22 + static inline struct dma_map_ops *get_dma_ops(struct device *dev)
42 23 {
43 - debug_dma_mapping_error(dev, dma_addr);
44 - return dma_addr == ~0;
24 + return &c6x_dma_ops;
45 25 }
46 - 
47 - extern dma_addr_t dma_map_single(struct device *dev, void *cpu_addr,
48 - size_t size, enum dma_data_direction dir);
49 - 
50 - extern void dma_unmap_single(struct device *dev, dma_addr_t handle,
51 - size_t size, enum dma_data_direction dir);
52 - 
53 - extern int dma_map_sg(struct device *dev, struct scatterlist *sglist,
54 - int nents, enum dma_data_direction direction);
55 - 
56 - extern void dma_unmap_sg(struct device *dev, struct scatterlist *sglist,
57 - int nents, enum dma_data_direction direction);
58 - 
59 - static inline dma_addr_t dma_map_page(struct device *dev, struct page *page,
60 - unsigned long offset, size_t size,
61 - enum dma_data_direction dir)
62 - {
63 - dma_addr_t handle;
64 - 
65 - handle = dma_map_single(dev, page_address(page) + offset, size, dir);
66 - 
67 - debug_dma_map_page(dev, page, offset, size, dir, handle, false);
68 - 
69 - return handle;
70 - }
71 - 
72 - static inline void dma_unmap_page(struct device *dev, dma_addr_t handle,
73 - size_t size, enum dma_data_direction dir)
74 - {
75 - dma_unmap_single(dev, handle, size, dir);
76 - 
77 - debug_dma_unmap_page(dev, handle, size, dir, false);
78 - }
79 - 
80 - extern void dma_sync_single_for_cpu(struct device *dev, dma_addr_t handle,
81 - size_t size, enum dma_data_direction dir);
82 - 
83 - extern void dma_sync_single_for_device(struct device *dev, dma_addr_t handle,
84 - size_t size,
85 - enum dma_data_direction dir);
86 - 
87 - extern void dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
88 - int nents, enum dma_data_direction dir);
89 - 
90 - extern void dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
91 - int nents, enum dma_data_direction dir);
92 26 
93 27 extern void coherent_mem_init(u32 start, u32 size);
94 - extern void *dma_alloc_coherent(struct device *, size_t, dma_addr_t *, gfp_t);
95 - extern void dma_free_coherent(struct device *, size_t, void *, dma_addr_t);
96 - 
97 - #define dma_alloc_noncoherent(d, s, h, f) dma_alloc_coherent((d), (s), (h), (f))
98 - #define dma_free_noncoherent(d, s, v, h) dma_free_coherent((d), (s), (v), (h))
99 - 
100 - /* Not supported for now */
101 - static inline int dma_mmap_coherent(struct device *dev,
102 - struct vm_area_struct *vma, void *cpu_addr,
103 - dma_addr_t dma_addr, size_t size)
104 - {
105 - return -EINVAL;
106 - }
107 - 
108 - static inline int dma_get_sgtable(struct device *dev, struct sg_table *sgt,
109 - void *cpu_addr, dma_addr_t dma_addr,
110 - size_t size)
111 - {
112 - return -EINVAL;
113 - }
28 + void *c6x_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
29 + gfp_t gfp, struct dma_attrs *attrs);
30 + void c6x_dma_free(struct device *dev, size_t size, void *vaddr,
31 + dma_addr_t dma_handle, struct dma_attrs *attrs);
114 32 
115 33 #endif /* _ASM_C6X_DMA_MAPPING_H */
+43 -52
arch/c6x/kernel/dma.c
··· 36 36 }
37 37 }
38 38 
39 - dma_addr_t dma_map_single(struct device *dev, void *ptr, size_t size,
40 - enum dma_data_direction dir)
39 + static dma_addr_t c6x_dma_map_page(struct device *dev, struct page *page,
40 + unsigned long offset, size_t size, enum dma_data_direction dir,
41 + struct dma_attrs *attrs)
41 42 {
42 - dma_addr_t addr = virt_to_phys(ptr);
43 + dma_addr_t handle = virt_to_phys(page_address(page) + offset);
43 44 
44 - c6x_dma_sync(addr, size, dir);
45 - 
46 - debug_dma_map_page(dev, virt_to_page(ptr),
47 - (unsigned long)ptr & ~PAGE_MASK, size,
48 - dir, addr, true);
49 - return addr;
45 + c6x_dma_sync(handle, size, dir);
46 + return handle;
50 47 }
51 - EXPORT_SYMBOL(dma_map_single);
52 48 
53 - 
54 - void dma_unmap_single(struct device *dev, dma_addr_t handle,
55 - size_t size, enum dma_data_direction dir)
49 + static void c6x_dma_unmap_page(struct device *dev, dma_addr_t handle,
50 + size_t size, enum dma_data_direction dir, struct dma_attrs *attrs)
56 51 {
57 52 c6x_dma_sync(handle, size, dir);
58 - 
59 - debug_dma_unmap_page(dev, handle, size, dir, true);
60 53 }
61 - EXPORT_SYMBOL(dma_unmap_single);
62 54 
63 - 
64 - int dma_map_sg(struct device *dev, struct scatterlist *sglist,
65 - int nents, enum dma_data_direction dir)
55 + static int c6x_dma_map_sg(struct device *dev, struct scatterlist *sglist,
56 + int nents, enum dma_data_direction dir, struct dma_attrs *attrs)
66 57 {
67 58 struct scatterlist *sg;
68 59 int i;
69 60 
70 - for_each_sg(sglist, sg, nents, i)
71 - sg->dma_address = dma_map_single(dev, sg_virt(sg), sg->length,
72 - dir);
73 - 
74 - debug_dma_map_sg(dev, sglist, nents, nents, dir);
61 + for_each_sg(sglist, sg, nents, i) {
62 + sg->dma_address = sg_phys(sg);
63 + c6x_dma_sync(sg->dma_address, sg->length, dir);
64 + }
75 65 
76 66 return nents;
77 67 }
78 - EXPORT_SYMBOL(dma_map_sg);
79 68 
80 - 
81 - void dma_unmap_sg(struct device *dev, struct scatterlist *sglist,
82 - int nents, enum dma_data_direction dir)
69 + static void c6x_dma_unmap_sg(struct device *dev, struct scatterlist *sglist,
70 + int nents, enum dma_data_direction dir,
71 + struct dma_attrs *attrs)
83 72 {
84 73 struct scatterlist *sg;
85 74 int i;
86 75 
87 76 for_each_sg(sglist, sg, nents, i)
88 - dma_unmap_single(dev, sg_dma_address(sg), sg->length, dir);
77 + c6x_dma_sync(sg_dma_address(sg), sg->length, dir);
89 78 
90 - debug_dma_unmap_sg(dev, sglist, nents, dir);
91 79 }
92 - EXPORT_SYMBOL(dma_unmap_sg);
93 80 
94 - void dma_sync_single_for_cpu(struct device *dev, dma_addr_t handle,
95 - size_t size, enum dma_data_direction dir)
81 + static void c6x_dma_sync_single_for_cpu(struct device *dev, dma_addr_t handle,
82 + size_t size, enum dma_data_direction dir)
96 83 {
97 84 c6x_dma_sync(handle, size, dir);
98 85 
99 - debug_dma_sync_single_for_cpu(dev, handle, size, dir);
100 86 }
101 - EXPORT_SYMBOL(dma_sync_single_for_cpu);
102 87 
103 - 
104 - void dma_sync_single_for_device(struct device *dev, dma_addr_t handle,
105 - size_t size, enum dma_data_direction dir)
88 + static void c6x_dma_sync_single_for_device(struct device *dev,
89 + dma_addr_t handle, size_t size, enum dma_data_direction dir)
106 90 {
107 91 c6x_dma_sync(handle, size, dir);
108 92 
109 - debug_dma_sync_single_for_device(dev, handle, size, dir);
110 93 }
111 - EXPORT_SYMBOL(dma_sync_single_for_device);
112 94 
113 - 
114 - void dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sglist,
115 - int nents, enum dma_data_direction dir)
95 + static void c6x_dma_sync_sg_for_cpu(struct device *dev,
96 + struct scatterlist *sglist, int nents,
97 + enum dma_data_direction dir)
116 98 {
117 99 struct scatterlist *sg;
118 100 int i;
119 101 
120 102 for_each_sg(sglist, sg, nents, i)
121 - dma_sync_single_for_cpu(dev, sg_dma_address(sg),
103 + c6x_dma_sync_single_for_cpu(dev, sg_dma_address(sg),
122 104 sg->length, dir);
123 105 
124 - debug_dma_sync_sg_for_cpu(dev, sglist, nents, dir);
125 106 }
126 - EXPORT_SYMBOL(dma_sync_sg_for_cpu);
127 107 
128 - 
129 - void dma_sync_sg_for_device(struct device *dev, struct scatterlist *sglist,
130 - int nents, enum dma_data_direction dir)
108 + static void c6x_dma_sync_sg_for_device(struct device *dev,
109 + struct scatterlist *sglist, int nents,
110 + enum dma_data_direction dir)
131 111 {
132 112 struct scatterlist *sg;
133 113 int i;
134 114 
135 115 for_each_sg(sglist, sg, nents, i)
136 - dma_sync_single_for_device(dev, sg_dma_address(sg),
116 + c6x_dma_sync_single_for_device(dev, sg_dma_address(sg),
137 117 sg->length, dir);
138 118 
139 - debug_dma_sync_sg_for_device(dev, sglist, nents, dir);
140 119 }
141 - EXPORT_SYMBOL(dma_sync_sg_for_device);
142 120 
121 + struct dma_map_ops c6x_dma_ops = {
122 + .alloc = c6x_dma_alloc,
123 + .free = c6x_dma_free,
124 + .map_page = c6x_dma_map_page,
125 + .unmap_page = c6x_dma_unmap_page,
126 + .map_sg = c6x_dma_map_sg,
127 + .unmap_sg = c6x_dma_unmap_sg,
128 + .sync_single_for_device = c6x_dma_sync_single_for_device,
129 + .sync_single_for_cpu = c6x_dma_sync_single_for_cpu,
130 + .sync_sg_for_device = c6x_dma_sync_sg_for_device,
131 + .sync_sg_for_cpu = c6x_dma_sync_sg_for_cpu,
132 + };
133 + EXPORT_SYMBOL(c6x_dma_ops);
143 134 
144 135 /* Number of entries preallocated for DMA-API debugging */
145 136 #define PREALLOC_DMA_DEBUG_ENTRIES (1 << 16)
+4 -6
arch/c6x/mm/dma-coherent.c
··· 73 73 * Allocate DMA coherent memory space and return both the kernel
74 74 * virtual and DMA address for that space.
75 75 */
76 - void *dma_alloc_coherent(struct device *dev, size_t size,
77 - dma_addr_t *handle, gfp_t gfp)
76 + void *c6x_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
77 + gfp_t gfp, struct dma_attrs *attrs)
78 78 {
79 79 u32 paddr;
80 80 int order;
··· 94 94 
95 95 return phys_to_virt(paddr);
96 96 }
97 - EXPORT_SYMBOL(dma_alloc_coherent);
98 97 
99 98 /*
100 99 * Free DMA coherent memory as defined by the above mapping.
101 100 */
102 - void dma_free_coherent(struct device *dev, size_t size, void *vaddr,
103 - dma_addr_t dma_handle)
101 + void c6x_dma_free(struct device *dev, size_t size, void *vaddr,
102 + dma_addr_t dma_handle, struct dma_attrs *attrs)
104 103 {
105 104 int order;
106 105 
··· 110 111 
111 112 __free_dma_pages(virt_to_phys(vaddr), order);
112 113 }
113 - EXPORT_SYMBOL(dma_free_coherent);
114 114 
115 115 /*
116 116 * Initialise the coherent DMA memory allocator using the given uncached region.
+43 -13
arch/cris/arch-v32/drivers/pci/dma.c
··· 16 16 #include <linux/gfp.h>
17 17 #include <asm/io.h>
18 18 
19 - void *dma_alloc_coherent(struct device *dev, size_t size,
20 - dma_addr_t *dma_handle, gfp_t gfp)
19 + static void *v32_dma_alloc(struct device *dev, size_t size,
20 + dma_addr_t *dma_handle, gfp_t gfp, struct dma_attrs *attrs)
21 21 {
22 22 void *ret;
23 - int order = get_order(size);
23 + 
24 24 /* ignore region specifiers */
25 25 gfp &= ~(__GFP_DMA | __GFP_HIGHMEM);
26 - 
27 - if (dma_alloc_from_coherent(dev, size, dma_handle, &ret))
28 - return ret;
29 26 
30 27 if (dev == NULL || (dev->coherent_dma_mask < 0xffffffff))
31 28 gfp |= GFP_DMA;
32 29 
33 - ret = (void *)__get_free_pages(gfp, order);
30 + ret = (void *)__get_free_pages(gfp, get_order(size));
34 31 
35 32 if (ret != NULL) {
36 33 memset(ret, 0, size);
··· 36 39 return ret;
37 40 }
38 41 
39 - void dma_free_coherent(struct device *dev, size_t size,
40 - void *vaddr, dma_addr_t dma_handle)
42 + static void v32_dma_free(struct device *dev, size_t size, void *vaddr,
43 + dma_addr_t dma_handle, struct dma_attrs *attrs)
41 44 {
42 - int order = get_order(size);
43 - 
44 - if (!dma_release_from_coherent(dev, order, vaddr))
45 - free_pages((unsigned long)vaddr, order);
45 + free_pages((unsigned long)vaddr, get_order(size));
46 46 }
47 47 
48 + static inline dma_addr_t v32_dma_map_page(struct device *dev,
49 + struct page *page, unsigned long offset, size_t size,
50 + enum dma_data_direction direction,
51 + struct dma_attrs *attrs)
52 + {
53 + return page_to_phys(page) + offset;
54 + }
55 + 
56 + static inline int v32_dma_map_sg(struct device *dev, struct scatterlist *sg,
57 + int nents, enum dma_data_direction direction,
58 + struct dma_attrs *attrs)
59 + {
60 + printk("Map sg\n");
61 + return nents;
62 + }
63 + 
64 + static inline int v32_dma_supported(struct device *dev, u64 mask)
65 + {
66 + /*
67 + * we fall back to GFP_DMA when the mask isn't all 1s,
68 + * so we can't guarantee allocations that must be
69 + * within a tighter range than GFP_DMA..
70 + */
71 + if (mask < 0x00ffffff)
72 + return 0;
73 + return 1;
74 + }
75 + 
76 + struct dma_map_ops v32_dma_ops = {
77 + .alloc = v32_dma_alloc,
78 + .free = v32_dma_free,
79 + .map_page = v32_dma_map_page,
80 + .map_sg = v32_dma_map_sg,
81 + .dma_supported = v32_dma_supported,
82 + };
83 + EXPORT_SYMBOL(v32_dma_ops);
+7 -154
arch/cris/include/asm/dma-mapping.h
··· 1 - /* DMA mapping. Nothing tricky here, just virt_to_phys */
2 - 
3 1 #ifndef _ASM_CRIS_DMA_MAPPING_H
4 2 #define _ASM_CRIS_DMA_MAPPING_H
5 3 
6 - #include <linux/mm.h>
7 - #include <linux/kernel.h>
8 - #include <linux/scatterlist.h>
9 - 
10 - #include <asm/cache.h>
11 - #include <asm/io.h>
12 - 
13 - #define dma_alloc_noncoherent(d, s, h, f) dma_alloc_coherent(d, s, h, f)
14 - #define dma_free_noncoherent(d, s, v, h) dma_free_coherent(d, s, v, h)
15 - 
16 4 #ifdef CONFIG_PCI
17 - #include <asm-generic/dma-coherent.h>
5 + extern struct dma_map_ops v32_dma_ops;
18 6 
19 - void *dma_alloc_coherent(struct device *dev, size_t size,
20 - dma_addr_t *dma_handle, gfp_t flag);
21 - 
22 - void dma_free_coherent(struct device *dev, size_t size,
23 - void *vaddr, dma_addr_t dma_handle);
24 - #else
25 - static inline void *
26 - dma_alloc_coherent(struct device *dev, size_t size, dma_addr_t *dma_handle,
27 - gfp_t flag)
7 + static inline struct dma_map_ops *get_dma_ops(struct device *dev)
28 8 {
29 - BUG();
30 - return NULL;
9 + return &v32_dma_ops;
31 10 }
32 - 
33 - static inline void
34 - dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
35 - dma_addr_t dma_handle)
11 + #else
12 + static inline struct dma_map_ops *get_dma_ops(struct device *dev)
36 13 {
37 - BUG();
14 + BUG();
15 + return NULL;
38 16 }
39 17 #endif
40 - static inline dma_addr_t
41 - dma_map_single(struct device *dev, void *ptr, size_t size,
42 - enum dma_data_direction direction)
43 - {
44 - BUG_ON(direction == DMA_NONE);
45 - return virt_to_phys(ptr);
46 - }
47 - 
48 - static inline void
49 - dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
50 - enum dma_data_direction direction)
51 - {
52 - BUG_ON(direction == DMA_NONE);
53 - }
54 - 
55 - static inline int
56 - dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
57 - enum dma_data_direction direction)
58 - {
59 - printk("Map sg\n");
60 - return nents;
61 - }
62 - 
63 - static inline dma_addr_t
64 - dma_map_page(struct device *dev, struct page *page, unsigned long offset,
65 - size_t size, enum dma_data_direction direction)
66 - {
67 - BUG_ON(direction == DMA_NONE);
68 - return page_to_phys(page) + offset;
69 - }
70 - 
71 - static inline void
72 - dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
73 - enum dma_data_direction direction)
74 - {
75 - BUG_ON(direction == DMA_NONE);
76 - }
77 - 
78 - 
79 - static inline void
80 - dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nhwentries,
81 - enum dma_data_direction direction)
82 - {
83 - BUG_ON(direction == DMA_NONE);
84 - }
85 - 
86 - static inline void
87 - dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle, size_t size,
88 - enum dma_data_direction direction)
89 - {
90 - }
91 - 
92 - static inline void
93 - dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle, size_t size,
94 - enum dma_data_direction direction)
95 - {
96 - }
97 - 
98 - static inline void
99 - dma_sync_single_range_for_cpu(struct device *dev, dma_addr_t dma_handle,
100 - unsigned long offset, size_t size,
101 - enum dma_data_direction direction)
102 - {
103 - }
104 - 
105 - static inline void
106 - dma_sync_single_range_for_device(struct device *dev, dma_addr_t dma_handle,
107 - unsigned long offset, size_t size,
108 - enum dma_data_direction direction)
109 - {
110 - }
111 - 
112 - static inline void
113 - dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, int nelems,
114 - enum dma_data_direction direction)
115 - {
116 - }
117 - 
118 - static inline void
119 - dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg, int nelems,
120 - enum dma_data_direction direction)
121 - {
122 - }
123 - 
124 - static inline int
125 - dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
126 - {
127 - return 0;
128 - }
129 - 
130 - static inline int
131 - dma_supported(struct device *dev, u64 mask)
132 - {
133 - /*
134 - * we fall back to GFP_DMA when the mask isn't all 1s,
135 - * so we can't guarantee allocations that must be
136 - * within a tighter range than GFP_DMA..
137 - */
138 - if(mask < 0x00ffffff)
139 - return 0;
140 - 
141 - return 1;
142 - }
143 - 
144 - static inline int
145 - dma_set_mask(struct device *dev, u64 mask)
146 - {
147 - if(!dev->dma_mask || !dma_supported(dev, mask))
148 - return -EIO;
149 - 
150 - *dev->dma_mask = mask;
151 - 
152 - return 0;
153 - }
154 18 
155 19 static inline void
156 20 dma_cache_sync(struct device *dev, void *vaddr, size_t size,
157 21 enum dma_data_direction direction)
158 22 {
159 23 }
160 - 
161 - /* drivers/base/dma-mapping.c */
162 - extern int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
163 - void *cpu_addr, dma_addr_t dma_addr, size_t size);
164 - extern int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
165 - void *cpu_addr, dma_addr_t dma_addr,
166 - size_t size);
167 - 
168 - #define dma_mmap_coherent(d, v, c, h, s) dma_common_mmap(d, v, c, h, s)
169 - #define dma_get_sgtable(d, t, v, h, s) dma_common_get_sgtable(d, t, v, h, s)
170 - 
171 24 
172 25 #endif
+1
arch/frv/Kconfig
··· 15 15 select OLD_SIGSUSPEND3
16 16 select OLD_SIGACTION
17 17 select HAVE_DEBUG_STACKOVERFLOW
18 + select ARCH_NO_COHERENT_DMA_MMAP
18 19 
19 20 config ZONE_DMA
20 21 bool
+3 -129
arch/frv/include/asm/dma-mapping.h
··· 1 1 #ifndef _ASM_DMA_MAPPING_H
2 2 #define _ASM_DMA_MAPPING_H
3 3 
4 - #include <linux/device.h>
5 - #include <linux/scatterlist.h>
6 4 #include <asm/cache.h>
7 5 #include <asm/cacheflush.h>
8 - #include <asm/io.h>
9 - 
10 - /*
11 - * See Documentation/DMA-API.txt for the description of how the
12 - * following DMA API should work.
13 - */
14 - 
15 - #define dma_alloc_noncoherent(d, s, h, f) dma_alloc_coherent(d, s, h, f)
16 - #define dma_free_noncoherent(d, s, v, h) dma_free_coherent(d, s, v, h)
17 6 
18 7 extern unsigned long __nongprelbss dma_coherent_mem_start;
19 8 extern unsigned long __nongprelbss dma_coherent_mem_end;
20 9 
21 - void *dma_alloc_coherent(struct device *dev, size_t size, dma_addr_t *dma_handle, gfp_t gfp);
22 - void dma_free_coherent(struct device *dev, size_t size, void *vaddr, dma_addr_t dma_handle);
10 + extern struct dma_map_ops frv_dma_ops;
23 11 
24 - extern dma_addr_t dma_map_single(struct device *dev, void *ptr, size_t size,
25 - enum dma_data_direction direction);
26 - 
27 - static inline
28 - void dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
29 - enum dma_data_direction direction)
12 + static inline struct dma_map_ops *get_dma_ops(struct device *dev)
30 13 {
31 - BUG_ON(direction == DMA_NONE);
32 - }
33 - 
34 - extern int dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
35 - enum dma_data_direction direction);
36 - 
37 - static inline
38 - void dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nhwentries,
39 - enum dma_data_direction direction)
40 - {
41 - BUG_ON(direction == DMA_NONE);
42 - }
43 - 
44 - extern
45 - dma_addr_t dma_map_page(struct device *dev, struct page *page, unsigned long offset,
46 - size_t size, enum dma_data_direction direction);
47 - 
48 - static inline
49 - void dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
50 - enum dma_data_direction direction)
51 - {
52 - BUG_ON(direction == DMA_NONE);
53 - }
54 - 
55 - 
56 - static inline
57 - void dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle, size_t size,
58 - enum dma_data_direction direction)
59 - {
60 - }
61 - 
62 - static inline
63 - void dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle, size_t size,
64 - enum dma_data_direction direction)
65 - {
66 - flush_write_buffers();
67 - }
68 - 
69 - static inline
70 - void dma_sync_single_range_for_cpu(struct device *dev, dma_addr_t dma_handle,
71 - unsigned long offset, size_t size,
72 - enum dma_data_direction direction)
73 - {
74 - }
75 - 
76 - static inline
77 - void dma_sync_single_range_for_device(struct device *dev, dma_addr_t dma_handle,
78 - unsigned long offset, size_t size,
79 - enum dma_data_direction direction)
80 - {
81 - flush_write_buffers();
82 - }
83 - 
84 - static inline
85 - void dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, int nelems,
86 - enum dma_data_direction direction)
87 - {
88 - }
89 - 
90 - static inline
91 - void dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg, int nelems,
92 - enum dma_data_direction direction)
93 - {
94 - flush_write_buffers();
95 - }
96 - 
97 - static inline
98 - int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
99 - {
100 - return 0;
101 - }
102 - 
103 - static inline
104 - int dma_supported(struct device *dev, u64 mask)
105 - {
106 - /*
107 - * we fall back to GFP_DMA when the mask isn't all 1s,
108 - * so we can't guarantee allocations that must be
109 - * within a tighter range than GFP_DMA..
110 - */
111 - if (mask < 0x00ffffff)
112 - return 0;
113 - 
114 - return 1;
115 - }
116 - 
117 - static inline
118 - int dma_set_mask(struct device *dev, u64 mask)
119 - {
120 - if (!dev->dma_mask || !dma_supported(dev, mask))
121 - return -EIO;
122 - 
123 - *dev->dma_mask = mask;
124 - 
125 - return 0;
14 + return &frv_dma_ops;
126 15 }
127 16 
128 17 static inline
··· 19 130 enum dma_data_direction direction)
20 131 {
21 132 flush_write_buffers();
22 - }
23 - 
24 - /* Not supported for now */
25 - static inline int dma_mmap_coherent(struct device *dev,
26 - struct vm_area_struct *vma, void *cpu_addr,
27 - dma_addr_t dma_addr, size_t size)
28 - {
29 - return -EINVAL;
30 - }
31 - 
32 - static inline int dma_get_sgtable(struct device *dev, struct sg_table *sgt,
33 - void *cpu_addr, dma_addr_t dma_addr,
34 - size_t size)
35 - {
36 - return -EINVAL;
37 - }
38 134 
39 135 #endif /* _ASM_DMA_MAPPING_H */
+14 -3
arch/frv/include/asm/io.h
··· 43 43 //#define __iormb() asm volatile("membar")
44 44 //#define __iowmb() asm volatile("membar")
45 45 
46 - #define __raw_readb __builtin_read8
47 - #define __raw_readw __builtin_read16
48 - #define __raw_readl __builtin_read32
46 + static inline u8 __raw_readb(const volatile void __iomem *addr)
47 + {
48 + return __builtin_read8((volatile void __iomem *)addr);
49 + }
50 + 
51 + static inline u16 __raw_readw(const volatile void __iomem *addr)
52 + {
53 + return __builtin_read16((volatile void __iomem *)addr);
54 + }
55 + 
56 + static inline u32 __raw_readl(const volatile void __iomem *addr)
57 + {
58 + return __builtin_read32((volatile void __iomem *)addr);
59 + }
49 60 
50 61 #define __raw_writeb(datum, addr) __builtin_write8(addr, datum)
51 62 #define __raw_writew(datum, addr) __builtin_write16(addr, datum)
+47 -25
arch/frv/mb93090-mb00/pci-dma-nommu.c
··· 34 34 static DEFINE_SPINLOCK(dma_alloc_lock);
35 35 static LIST_HEAD(dma_alloc_list);
36 36 
37 - void *dma_alloc_coherent(struct device *hwdev, size_t size, dma_addr_t *dma_handle, gfp_t gfp)
37 + static void *frv_dma_alloc(struct device *hwdev, size_t size, dma_addr_t *dma_handle,
38 + gfp_t gfp, struct dma_attrs *attrs)
38 39 {
39 40 struct dma_alloc_record *new;
40 41 struct list_head *this = &dma_alloc_list;
··· 85 84 return NULL;
86 85 }
87 86 
88 - EXPORT_SYMBOL(dma_alloc_coherent);
89 - 
90 - void dma_free_coherent(struct device *hwdev, size_t size, void *vaddr, dma_addr_t dma_handle)
87 + static void frv_dma_free(struct device *hwdev, size_t size, void *vaddr,
88 + dma_addr_t dma_handle, struct dma_attrs *attrs)
91 89 {
92 90 struct dma_alloc_record *rec;
93 91 unsigned long flags;
··· 105 105 BUG();
106 106 }
107 107 
108 - EXPORT_SYMBOL(dma_free_coherent);
109 - 
110 - dma_addr_t dma_map_single(struct device *dev, void *ptr, size_t size,
111 - enum dma_data_direction direction)
112 - {
113 - BUG_ON(direction == DMA_NONE);
114 - 
115 - frv_cache_wback_inv((unsigned long) ptr, (unsigned long) ptr + size);
116 - 
117 - return virt_to_bus(ptr);
118 - }
119 - 
120 - EXPORT_SYMBOL(dma_map_single);
121 - 
122 - int dma_map_sg(struct device *dev, struct scatterlist *sglist, int nents,
123 - enum dma_data_direction direction)
108 + static int frv_dma_map_sg(struct device *dev, struct scatterlist *sglist,
109 + int nents, enum dma_data_direction direction,
110 + struct dma_attrs *attrs)
124 111 {
125 112 int i;
126 113 struct scatterlist *sg;
··· 122 135 return nents;
123 136 }
124 137 
125 - EXPORT_SYMBOL(dma_map_sg);
126 - 
127 - dma_addr_t dma_map_page(struct device *dev, struct page *page, unsigned long offset,
128 - size_t size, enum dma_data_direction direction)
138 + static dma_addr_t frv_dma_map_page(struct device *dev, struct page *page,
139 + unsigned long offset, size_t size,
140 + enum dma_data_direction direction, struct dma_attrs *attrs)
129 141 {
130 142 BUG_ON(direction == DMA_NONE);
131 143 flush_dcache_page(page);
132 144 return (dma_addr_t) page_to_phys(page) + offset;
133 145 }
134 146 
135 - EXPORT_SYMBOL(dma_map_page);
147 + static void frv_dma_sync_single_for_device(struct device *dev,
148 + dma_addr_t dma_handle, size_t size,
149 + enum dma_data_direction direction)
150 + {
151 + flush_write_buffers();
152 + }
153 + 
154 + static void frv_dma_sync_sg_for_device(struct device *dev,
155 + struct scatterlist *sg, int nelems,
156 + enum dma_data_direction direction)
157 + {
158 + flush_write_buffers();
159 + }
160 + 
161 + 
162 + static int frv_dma_supported(struct device *dev, u64 mask)
163 + {
164 + /*
165 + * we fall back to GFP_DMA when the mask isn't all 1s,
166 + * so we can't guarantee allocations that must be
167 + * within a tighter range than GFP_DMA..
168 + */
169 + if (mask < 0x00ffffff)
170 + return 0;
171 + return 1;
172 + }
173 + 
174 + struct dma_map_ops frv_dma_ops = {
175 + .alloc = frv_dma_alloc,
176 + .free = frv_dma_free,
177 + .map_page = frv_dma_map_page,
178 + .map_sg = frv_dma_map_sg,
179 + .sync_single_for_device = frv_dma_sync_single_for_device,
180 + .sync_sg_for_device = frv_dma_sync_sg_for_device,
181 + .dma_supported = frv_dma_supported,
182 + };
183 + EXPORT_SYMBOL(frv_dma_ops);
+48 -26
arch/frv/mb93090-mb00/pci-dma.c
··· 18 18 #include <linux/scatterlist.h>
19 19 #include <asm/io.h>
20 20 
21 - void *dma_alloc_coherent(struct device *hwdev, size_t size, dma_addr_t *dma_handle, gfp_t gfp)
21 + static void *frv_dma_alloc(struct device *hwdev, size_t size,
22 + dma_addr_t *dma_handle, gfp_t gfp,
23 + struct dma_attrs *attrs)
22 24 {
23 25 void *ret;
24 26 
··· 31 29 return ret;
32 30 }
33 31 
34 - EXPORT_SYMBOL(dma_alloc_coherent);
35 - 
36 - void dma_free_coherent(struct device *hwdev, size_t size, void *vaddr, dma_addr_t dma_handle)
32 + static void frv_dma_free(struct device *hwdev, size_t size, void *vaddr,
33 + dma_addr_t dma_handle, struct dma_attrs *attrs)
37 34 {
38 35 consistent_free(vaddr);
39 36 }
40 37 
41 - EXPORT_SYMBOL(dma_free_coherent);
42 - 
43 - dma_addr_t dma_map_single(struct device *dev, void *ptr, size_t size,
44 - enum dma_data_direction direction)
45 - {
46 - BUG_ON(direction == DMA_NONE);
47 - 
48 - frv_cache_wback_inv((unsigned long) ptr, (unsigned long) ptr + size);
49 - 
50 - return virt_to_bus(ptr);
51 - }
52 - 
53 - EXPORT_SYMBOL(dma_map_single);
54 - 
55 - int dma_map_sg(struct device *dev, struct scatterlist *sglist, int nents,
56 - enum dma_data_direction direction)
38 + static int frv_dma_map_sg(struct device *dev, struct scatterlist *sglist,
39 + int nents, enum dma_data_direction direction,
40 + struct dma_attrs *attrs)
57 41 {
58 42 unsigned long dampr2;
59 43 void *vaddr;
··· 67 79 return nents;
68 80 }
69 81 
70 - EXPORT_SYMBOL(dma_map_sg);
71 - 
72 - dma_addr_t dma_map_page(struct device *dev, struct page *page, unsigned long offset,
73 - size_t size, enum dma_data_direction direction)
82 + static dma_addr_t frv_dma_map_page(struct device *dev, struct page *page,
83 + unsigned long offset, size_t size,
84 + enum dma_data_direction direction, struct dma_attrs *attrs)
74 85 {
75 - BUG_ON(direction == DMA_NONE);
76 86 flush_dcache_page(page);
77 87 return (dma_addr_t) page_to_phys(page) + offset;
78 88 }
79 89 
80 - EXPORT_SYMBOL(dma_map_page);
90 + static void frv_dma_sync_single_for_device(struct device *dev,
91 + dma_addr_t dma_handle, size_t size,
92 + enum dma_data_direction direction)
93 + {
94 + flush_write_buffers();
95 + }
96 + 
97 + static void frv_dma_sync_sg_for_device(struct device *dev,
98 + struct scatterlist *sg, int nelems,
99 + enum dma_data_direction direction)
100 + {
101 + flush_write_buffers();
102 + }
103 + 
104 + 
105 + static int frv_dma_supported(struct device *dev, u64 mask)
106 + {
107 + /*
108 + * we fall back to GFP_DMA when the mask isn't all 1s,
109 + * so we can't guarantee allocations that must be
110 + * within a tighter range than GFP_DMA..
111 + */
112 + if (mask < 0x00ffffff)
113 + return 0;
114 + return 1;
115 + }
116 + 
117 + struct dma_map_ops frv_dma_ops = {
118 + .alloc = frv_dma_alloc,
119 + .free = frv_dma_free,
120 + .map_page = frv_dma_map_page,
121 + .map_sg = frv_dma_map_sg,
122 + .sync_single_for_device = frv_dma_sync_single_for_device,
123 + .sync_sg_for_device = frv_dma_sync_sg_for_device,
124 + .dma_supported = frv_dma_supported,
125 + };
126 + EXPORT_SYMBOL(frv_dma_ops);
-1
arch/h8300/Kconfig
··· 15 15 select OF_IRQ
16 16 select OF_EARLY_FLATTREE
17 17 select HAVE_MEMBLOCK
18 - select HAVE_DMA_ATTRS
19 18 select CLKSRC_OF
20 19 select H8300_TMR8
21 20 select HAVE_KERNEL_GZIP
-2
arch/h8300/include/asm/dma-mapping.h
··· 8 8 return &h8300_dma_map_ops;
9 9 }
10 10 
11 - #include <asm-generic/dma-mapping-common.h>
12 - 
13 11 #endif
-1
arch/hexagon/Kconfig
··· 27 27 select GENERIC_CLOCKEVENTS_BROADCAST
28 28 select MODULES_USE_ELF_RELA
29 29 select GENERIC_CPU_DEVICES
30 - select HAVE_DMA_ATTRS
31 30 ---help---
32 31 Qualcomm Hexagon is a processor architecture designed for high
33 32 performance and low power across a wide variety of applications.
-2
arch/hexagon/include/asm/dma-mapping.h
··· 49 49 extern void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
50 50 enum dma_data_direction direction);
51 51 
52 - #include <asm-generic/dma-mapping-common.h>
53 - 
54 52 static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size)
55 53 {
56 54 if (!dev->dma_mask)
-1
arch/ia64/Kconfig
··· 25 25 select HAVE_FTRACE_MCOUNT_RECORD
26 26 select HAVE_DYNAMIC_FTRACE if (!ITANIUM)
27 27 select HAVE_FUNCTION_TRACER
28 - select HAVE_DMA_ATTRS
29 28 select TTY
30 29 select HAVE_ARCH_TRACEHOOK
31 30 select HAVE_DMA_API_DEBUG
-2
arch/ia64/include/asm/dma-mapping.h
···
25 25
26 26 #define get_dma_ops(dev) platform_dma_get_ops(dev)
27 27
28 - #include <asm-generic/dma-mapping-common.h>
29 -
30 28 static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size)
31 29 {
32 30 	if (!dev->dma_mask)
+3 -109
arch/m68k/include/asm/dma-mapping.h
···
1 1 #ifndef _M68K_DMA_MAPPING_H
2 2 #define _M68K_DMA_MAPPING_H
3 3
4 - #include <asm/cache.h>
4 + extern struct dma_map_ops m68k_dma_ops;
5 5
6 - struct scatterlist;
7 -
8 - static inline int dma_supported(struct device *dev, u64 mask)
6 + static inline struct dma_map_ops *get_dma_ops(struct device *dev)
9 7 {
10 - 	return 1;
8 + 	return &m68k_dma_ops;
11 9 }
12 10
13 - static inline int dma_set_mask(struct device *dev, u64 mask)
14 - {
15 - 	return 0;
16 - }
17 -
18 - extern void *dma_alloc_coherent(struct device *, size_t,
19 - 		dma_addr_t *, gfp_t);
20 - extern void dma_free_coherent(struct device *, size_t,
21 - 		void *, dma_addr_t);
22 -
23 - static inline void *dma_alloc_attrs(struct device *dev, size_t size,
24 - 		dma_addr_t *dma_handle, gfp_t flag,
25 - 		struct dma_attrs *attrs)
26 - {
27 - 	/* attrs is not supported and ignored */
28 - 	return dma_alloc_coherent(dev, size, dma_handle, flag);
29 - }
30 -
31 - static inline void dma_free_attrs(struct device *dev, size_t size,
32 - 		void *cpu_addr, dma_addr_t dma_handle,
33 - 		struct dma_attrs *attrs)
34 - {
35 - 	/* attrs is not supported and ignored */
36 - 	dma_free_coherent(dev, size, cpu_addr, dma_handle);
37 - }
38 -
39 - static inline void *dma_alloc_noncoherent(struct device *dev, size_t size,
40 - 		dma_addr_t *handle, gfp_t flag)
41 - {
42 - 	return dma_alloc_coherent(dev, size, handle, flag);
43 - }
44 - static inline void dma_free_noncoherent(struct device *dev, size_t size,
45 - 		void *addr, dma_addr_t handle)
46 - {
47 - 	dma_free_coherent(dev, size, addr, handle);
48 - }
49 11 static inline void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
50 12 		enum dma_data_direction dir)
51 13 {
52 14 	/* we use coherent allocation, so not much to do here. */
53 15 }
54 -
55 - extern dma_addr_t dma_map_single(struct device *, void *, size_t,
56 - 		enum dma_data_direction);
57 - static inline void dma_unmap_single(struct device *dev, dma_addr_t addr,
58 - 		size_t size, enum dma_data_direction dir)
59 - {
60 - }
61 -
62 - extern dma_addr_t dma_map_page(struct device *, struct page *,
63 - 		unsigned long, size_t size,
64 - 		enum dma_data_direction);
65 - static inline void dma_unmap_page(struct device *dev, dma_addr_t address,
66 - 		size_t size, enum dma_data_direction dir)
67 - {
68 - }
69 -
70 - extern int dma_map_sg(struct device *, struct scatterlist *, int,
71 - 		enum dma_data_direction);
72 - static inline void dma_unmap_sg(struct device *dev, struct scatterlist *sg,
73 - 		int nhwentries, enum dma_data_direction dir)
74 - {
75 - }
76 -
77 - extern void dma_sync_single_for_device(struct device *, dma_addr_t, size_t,
78 - 		enum dma_data_direction);
79 - extern void dma_sync_sg_for_device(struct device *, struct scatterlist *, int,
80 - 		enum dma_data_direction);
81 -
82 - static inline void dma_sync_single_range_for_device(struct device *dev,
83 - 		dma_addr_t dma_handle, unsigned long offset, size_t size,
84 - 		enum dma_data_direction direction)
85 - {
86 - 	/* just sync everything for now */
87 - 	dma_sync_single_for_device(dev, dma_handle, offset + size, direction);
88 - }
89 -
90 - static inline void dma_sync_single_for_cpu(struct device *dev, dma_addr_t handle,
91 - 		size_t size, enum dma_data_direction dir)
92 - {
93 - }
94 -
95 - static inline void dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
96 - 		int nents, enum dma_data_direction dir)
97 - {
98 - }
99 -
100 - static inline void dma_sync_single_range_for_cpu(struct device *dev,
101 - 		dma_addr_t dma_handle, unsigned long offset, size_t size,
102 - 		enum dma_data_direction direction)
103 - {
104 - 	/* just sync everything for now */
105 - 	dma_sync_single_for_cpu(dev, dma_handle, offset + size, direction);
106 - }
107 -
108 - static inline int dma_mapping_error(struct device *dev, dma_addr_t handle)
109 - {
110 - 	return 0;
111 - }
112 -
113 - /* drivers/base/dma-mapping.c */
114 - extern int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
115 - 		void *cpu_addr, dma_addr_t dma_addr, size_t size);
116 - extern int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
117 - 		void *cpu_addr, dma_addr_t dma_addr,
118 - 		size_t size);
119 -
120 - #define dma_mmap_coherent(d, v, c, h, s) dma_common_mmap(d, v, c, h, s)
121 - #define dma_get_sgtable(d, t, v, h, s) dma_common_get_sgtable(d, t, v, h, s)
122 16
123 17 #endif /* _M68K_DMA_MAPPING_H */
+27 -34
arch/m68k/kernel/dma.c
···
18 18
19 19 #if defined(CONFIG_MMU) && !defined(CONFIG_COLDFIRE)
20 20
21 - void *dma_alloc_coherent(struct device *dev, size_t size,
22 - 		dma_addr_t *handle, gfp_t flag)
21 + static void *m68k_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
22 + 		gfp_t flag, struct dma_attrs *attrs)
23 23 {
24 24 	struct page *page, **map;
25 25 	pgprot_t pgprot;
···
61 61 	return addr;
62 62 }
63 63
64 - void dma_free_coherent(struct device *dev, size_t size,
65 - 		void *addr, dma_addr_t handle)
64 + static void m68k_dma_free(struct device *dev, size_t size, void *addr,
65 + 		dma_addr_t handle, struct dma_attrs *attrs)
66 66 {
67 67 	pr_debug("dma_free_coherent: %p, %x\n", addr, handle);
68 68 	vfree(addr);
···
72 72
73 73 #include <asm/cacheflush.h>
74 74
75 - void *dma_alloc_coherent(struct device *dev, size_t size,
76 - 		dma_addr_t *dma_handle, gfp_t gfp)
75 + static void *m68k_dma_alloc(struct device *dev, size_t size,
76 + 		dma_addr_t *dma_handle, gfp_t gfp, struct dma_attrs *attrs)
77 77 {
78 78 	void *ret;
79 79 	/* ignore region specifiers */
···
90 90 	return ret;
91 91 }
92 92
93 - void dma_free_coherent(struct device *dev, size_t size,
94 - 		void *vaddr, dma_addr_t dma_handle)
93 + static void m68k_dma_free(struct device *dev, size_t size, void *vaddr,
94 + 		dma_addr_t dma_handle, struct dma_attrs *attrs)
95 95 {
96 96 	free_pages((unsigned long)vaddr, get_order(size));
97 97 }
98 98
99 99 #endif /* CONFIG_MMU && !CONFIG_COLDFIRE */
100 100
101 - EXPORT_SYMBOL(dma_alloc_coherent);
102 - EXPORT_SYMBOL(dma_free_coherent);
103 -
104 - void dma_sync_single_for_device(struct device *dev, dma_addr_t handle,
105 - 		size_t size, enum dma_data_direction dir)
101 + static void m68k_dma_sync_single_for_device(struct device *dev,
102 + 		dma_addr_t handle, size_t size, enum dma_data_direction dir)
106 103 {
107 104 	switch (dir) {
108 105 	case DMA_BIDIRECTIONAL:
···
115 118 		break;
116 119 	}
117 120 }
118 - EXPORT_SYMBOL(dma_sync_single_for_device);
119 121
120 - void dma_sync_sg_for_device(struct device *dev, struct scatterlist *sglist,
121 - 		int nents, enum dma_data_direction dir)
122 + static void m68k_dma_sync_sg_for_device(struct device *dev,
123 + 		struct scatterlist *sglist, int nents, enum dma_data_direction dir)
122 124 {
123 125 	int i;
124 126 	struct scatterlist *sg;
···
127 131 			dir);
128 132 	}
129 133 }
130 - EXPORT_SYMBOL(dma_sync_sg_for_device);
131 134
132 - dma_addr_t dma_map_single(struct device *dev, void *addr, size_t size,
133 - 		enum dma_data_direction dir)
134 - {
135 - 	dma_addr_t handle = virt_to_bus(addr);
136 -
137 - 	dma_sync_single_for_device(dev, handle, size, dir);
138 - 	return handle;
139 - }
140 - EXPORT_SYMBOL(dma_map_single);
141 -
142 - dma_addr_t dma_map_page(struct device *dev, struct page *page,
143 - 		unsigned long offset, size_t size,
144 - 		enum dma_data_direction dir)
135 + static dma_addr_t m68k_dma_map_page(struct device *dev, struct page *page,
136 + 		unsigned long offset, size_t size, enum dma_data_direction dir,
137 + 		struct dma_attrs *attrs)
145 138 {
146 139 	dma_addr_t handle = page_to_phys(page) + offset;
147 140
148 141 	dma_sync_single_for_device(dev, handle, size, dir);
149 142 	return handle;
150 143 }
151 - EXPORT_SYMBOL(dma_map_page);
152 144
153 - int dma_map_sg(struct device *dev, struct scatterlist *sglist, int nents,
154 - 		enum dma_data_direction dir)
145 + static int m68k_dma_map_sg(struct device *dev, struct scatterlist *sglist,
146 + 		int nents, enum dma_data_direction dir, struct dma_attrs *attrs)
155 147 {
156 148 	int i;
157 149 	struct scatterlist *sg;
···
151 167 	}
152 168 	return nents;
153 169 }
154 - EXPORT_SYMBOL(dma_map_sg);
170 +
171 + struct dma_map_ops m68k_dma_ops = {
172 + 	.alloc = m68k_dma_alloc,
173 + 	.free = m68k_dma_free,
174 + 	.map_page = m68k_dma_map_page,
175 + 	.map_sg = m68k_dma_map_sg,
176 + 	.sync_single_for_device = m68k_dma_sync_single_for_device,
177 + 	.sync_sg_for_device = m68k_dma_sync_sg_for_device,
178 + };
179 + EXPORT_SYMBOL(m68k_dma_ops);
+3 -176
arch/metag/include/asm/dma-mapping.h
···
1 1 #ifndef _ASM_METAG_DMA_MAPPING_H
2 2 #define _ASM_METAG_DMA_MAPPING_H
3 3
4 - #include <linux/mm.h>
4 + extern struct dma_map_ops metag_dma_ops;
5 5
6 - #include <asm/cache.h>
7 - #include <asm/io.h>
8 - #include <linux/scatterlist.h>
9 - #include <asm/bug.h>
10 -
11 - #define dma_alloc_noncoherent(d, s, h, f) dma_alloc_coherent(d, s, h, f)
12 - #define dma_free_noncoherent(d, s, v, h) dma_free_coherent(d, s, v, h)
13 -
14 - void *dma_alloc_coherent(struct device *dev, size_t size,
15 - 		dma_addr_t *dma_handle, gfp_t flag);
16 -
17 - void dma_free_coherent(struct device *dev, size_t size,
18 - 		void *vaddr, dma_addr_t dma_handle);
19 -
20 - void dma_sync_for_device(void *vaddr, size_t size, int dma_direction);
21 - void dma_sync_for_cpu(void *vaddr, size_t size, int dma_direction);
22 -
23 - int dma_mmap_coherent(struct device *dev, struct vm_area_struct *vma,
24 - 		void *cpu_addr, dma_addr_t dma_addr, size_t size);
25 -
26 - int dma_mmap_writecombine(struct device *dev, struct vm_area_struct *vma,
27 - 		void *cpu_addr, dma_addr_t dma_addr, size_t size);
28 -
29 - static inline dma_addr_t
30 - dma_map_single(struct device *dev, void *ptr, size_t size,
31 - 		enum dma_data_direction direction)
6 + static inline struct dma_map_ops *get_dma_ops(struct device *dev)
32 7 {
33 - 	BUG_ON(!valid_dma_direction(direction));
34 - 	WARN_ON(size == 0);
35 - 	dma_sync_for_device(ptr, size, direction);
36 - 	return virt_to_phys(ptr);
37 - }
38 -
39 - static inline void
40 - dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
41 - 		enum dma_data_direction direction)
42 - {
43 - 	BUG_ON(!valid_dma_direction(direction));
44 - 	dma_sync_for_cpu(phys_to_virt(dma_addr), size, direction);
45 - }
46 -
47 - static inline int
48 - dma_map_sg(struct device *dev, struct scatterlist *sglist, int nents,
49 - 		enum dma_data_direction direction)
50 - {
51 - 	struct scatterlist *sg;
52 - 	int i;
53 -
54 - 	BUG_ON(!valid_dma_direction(direction));
55 - 	WARN_ON(nents == 0 || sglist[0].length == 0);
56 -
57 - 	for_each_sg(sglist, sg, nents, i) {
58 - 		BUG_ON(!sg_page(sg));
59 -
60 - 		sg->dma_address = sg_phys(sg);
61 - 		dma_sync_for_device(sg_virt(sg), sg->length, direction);
62 - 	}
63 -
64 - 	return nents;
65 - }
66 -
67 - static inline dma_addr_t
68 - dma_map_page(struct device *dev, struct page *page, unsigned long offset,
69 - 		size_t size, enum dma_data_direction direction)
70 - {
71 - 	BUG_ON(!valid_dma_direction(direction));
72 - 	dma_sync_for_device((void *)(page_to_phys(page) + offset), size,
73 - 		direction);
74 - 	return page_to_phys(page) + offset;
75 - }
76 -
77 - static inline void
78 - dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
79 - 		enum dma_data_direction direction)
80 - {
81 - 	BUG_ON(!valid_dma_direction(direction));
82 - 	dma_sync_for_cpu(phys_to_virt(dma_address), size, direction);
83 - }
84 -
85 -
86 - static inline void
87 - dma_unmap_sg(struct device *dev, struct scatterlist *sglist, int nhwentries,
88 - 		enum dma_data_direction direction)
89 - {
90 - 	struct scatterlist *sg;
91 - 	int i;
92 -
93 - 	BUG_ON(!valid_dma_direction(direction));
94 - 	WARN_ON(nhwentries == 0 || sglist[0].length == 0);
95 -
96 - 	for_each_sg(sglist, sg, nhwentries, i) {
97 - 		BUG_ON(!sg_page(sg));
98 -
99 - 		sg->dma_address = sg_phys(sg);
100 - 		dma_sync_for_cpu(sg_virt(sg), sg->length, direction);
101 - 	}
102 - }
103 -
104 - static inline void
105 - dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle, size_t size,
106 - 		enum dma_data_direction direction)
107 - {
108 - 	dma_sync_for_cpu(phys_to_virt(dma_handle), size, direction);
109 - }
110 -
111 - static inline void
112 - dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle,
113 - 		size_t size, enum dma_data_direction direction)
114 - {
115 - 	dma_sync_for_device(phys_to_virt(dma_handle), size, direction);
116 - }
117 -
118 - static inline void
119 - dma_sync_single_range_for_cpu(struct device *dev, dma_addr_t dma_handle,
120 - 		unsigned long offset, size_t size,
121 - 		enum dma_data_direction direction)
122 - {
123 - 	dma_sync_for_cpu(phys_to_virt(dma_handle)+offset, size,
124 - 		direction);
125 - }
126 -
127 - static inline void
128 - dma_sync_single_range_for_device(struct device *dev, dma_addr_t dma_handle,
129 - 		unsigned long offset, size_t size,
130 - 		enum dma_data_direction direction)
131 - {
132 - 	dma_sync_for_device(phys_to_virt(dma_handle)+offset, size,
133 - 		direction);
134 - }
135 -
136 - static inline void
137 - dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sglist, int nelems,
138 - 		enum dma_data_direction direction)
139 - {
140 - 	int i;
141 - 	struct scatterlist *sg;
142 -
143 - 	for_each_sg(sglist, sg, nelems, i)
144 - 		dma_sync_for_cpu(sg_virt(sg), sg->length, direction);
145 - }
146 -
147 - static inline void
148 - dma_sync_sg_for_device(struct device *dev, struct scatterlist *sglist,
149 - 		int nelems, enum dma_data_direction direction)
150 - {
151 - 	int i;
152 - 	struct scatterlist *sg;
153 -
154 - 	for_each_sg(sglist, sg, nelems, i)
155 - 		dma_sync_for_device(sg_virt(sg), sg->length, direction);
156 - }
157 -
158 - static inline int
159 - dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
160 - {
161 - 	return 0;
162 - }
163 -
164 - #define dma_supported(dev, mask) (1)
165 -
166 - static inline int
167 - dma_set_mask(struct device *dev, u64 mask)
168 - {
169 - 	if (!dev->dma_mask || !dma_supported(dev, mask))
170 - 		return -EIO;
171 -
172 - 	*dev->dma_mask = mask;
173 -
174 - 	return 0;
8 + 	return &metag_dma_ops;
175 9 }
176 10
177 11 /*
···
17 183 		enum dma_data_direction direction)
18 184 {
19 185 }
20 -
21 - /* drivers/base/dma-mapping.c */
22 - extern int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
23 - 		void *cpu_addr, dma_addr_t dma_addr,
24 - 		size_t size);
25 -
26 - #define dma_get_sgtable(d, t, v, h, s) dma_common_get_sgtable(d, t, v, h, s)
27 186
28 187 #endif
+112 -34
arch/metag/kernel/dma.c
···
171 171  * Allocate DMA-coherent memory space and return both the kernel remapped
172 172  * virtual and bus address for that space.
173 173  */
174 - void *dma_alloc_coherent(struct device *dev, size_t size,
175 - 		dma_addr_t *handle, gfp_t gfp)
174 + static void *metag_dma_alloc(struct device *dev, size_t size,
175 + 		dma_addr_t *handle, gfp_t gfp, struct dma_attrs *attrs)
176 176 {
177 177 	struct page *page;
178 178 	struct metag_vm_region *c;
···
263 263 no_page:
264 264 	return NULL;
265 265 }
266 - EXPORT_SYMBOL(dma_alloc_coherent);
267 266
268 267 /*
269 268  * free a page as defined by the above mapping.
270 269  */
271 - void dma_free_coherent(struct device *dev, size_t size,
272 - 		void *vaddr, dma_addr_t dma_handle)
270 + static void metag_dma_free(struct device *dev, size_t size, void *vaddr,
271 + 		dma_addr_t dma_handle, struct dma_attrs *attrs)
273 272 {
274 273 	struct metag_vm_region *c;
275 274 	unsigned long flags, addr;
···
328 329 		__func__, vaddr);
329 330 	dump_stack();
330 331 }
331 - EXPORT_SYMBOL(dma_free_coherent);
332 332
333 -
334 - static int dma_mmap(struct device *dev, struct vm_area_struct *vma,
335 - 		void *cpu_addr, dma_addr_t dma_addr, size_t size)
333 + static int metag_dma_mmap(struct device *dev, struct vm_area_struct *vma,
334 + 		void *cpu_addr, dma_addr_t dma_addr, size_t size,
335 + 		struct dma_attrs *attrs)
336 336 {
337 - 	int ret = -ENXIO;
338 -
339 337 	unsigned long flags, user_size, kern_size;
340 338 	struct metag_vm_region *c;
339 + 	int ret = -ENXIO;
340 +
341 + 	if (dma_get_attr(DMA_ATTR_WRITE_COMBINE, attrs))
342 + 		vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
343 + 	else
344 + 		vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
341 345
342 346 	user_size = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
343 347
···
365 363
366 364 	return ret;
367 365 }
368 -
369 - int dma_mmap_coherent(struct device *dev, struct vm_area_struct *vma,
370 - 		void *cpu_addr, dma_addr_t dma_addr, size_t size)
371 - {
372 - 	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
373 - 	return dma_mmap(dev, vma, cpu_addr, dma_addr, size);
374 - }
375 - EXPORT_SYMBOL(dma_mmap_coherent);
376 -
377 - int dma_mmap_writecombine(struct device *dev, struct vm_area_struct *vma,
378 - 		void *cpu_addr, dma_addr_t dma_addr, size_t size)
379 - {
380 - 	vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
381 - 	return dma_mmap(dev, vma, cpu_addr, dma_addr, size);
382 - }
383 - EXPORT_SYMBOL(dma_mmap_writecombine);
384 -
385 -
386 -
387 366
388 367 /*
389 368  * Initialise the consistent memory allocation.
···
406 423 /*
407 424  * make an area consistent to devices.
408 425  */
409 - void dma_sync_for_device(void *vaddr, size_t size, int dma_direction)
426 + static void dma_sync_for_device(void *vaddr, size_t size, int dma_direction)
410 427 {
411 428 	/*
412 429 	 * Ensure any writes get through the write combiner. This is necessary
···
448 465
449 466 	wmb();
450 467 }
451 - EXPORT_SYMBOL(dma_sync_for_device);
452 468
453 469 /*
454 470  * make an area consistent to the core.
455 471  */
456 - void dma_sync_for_cpu(void *vaddr, size_t size, int dma_direction)
472 + static void dma_sync_for_cpu(void *vaddr, size_t size, int dma_direction)
457 473 {
458 474 	/*
459 475 	 * Hardware L2 cache prefetch doesn't occur across 4K physical
···
479 497
480 498 	rmb();
481 499 }
482 - EXPORT_SYMBOL(dma_sync_for_cpu);
500 +
501 + static dma_addr_t metag_dma_map_page(struct device *dev, struct page *page,
502 + 		unsigned long offset, size_t size,
503 + 		enum dma_data_direction direction, struct dma_attrs *attrs)
504 + {
505 + 	dma_sync_for_device((void *)(page_to_phys(page) + offset), size,
506 + 		direction);
507 + 	return page_to_phys(page) + offset;
508 + }
509 +
510 + static void metag_dma_unmap_page(struct device *dev, dma_addr_t dma_address,
511 + 		size_t size, enum dma_data_direction direction,
512 + 		struct dma_attrs *attrs)
513 + {
514 + 	dma_sync_for_cpu(phys_to_virt(dma_address), size, direction);
515 + }
516 +
517 + static int metag_dma_map_sg(struct device *dev, struct scatterlist *sglist,
518 + 		int nents, enum dma_data_direction direction,
519 + 		struct dma_attrs *attrs)
520 + {
521 + 	struct scatterlist *sg;
522 + 	int i;
523 +
524 + 	for_each_sg(sglist, sg, nents, i) {
525 + 		BUG_ON(!sg_page(sg));
526 +
527 + 		sg->dma_address = sg_phys(sg);
528 + 		dma_sync_for_device(sg_virt(sg), sg->length, direction);
529 + 	}
530 +
531 + 	return nents;
532 + }
533 +
534 +
535 + static void metag_dma_unmap_sg(struct device *dev, struct scatterlist *sglist,
536 + 		int nhwentries, enum dma_data_direction direction,
537 + 		struct dma_attrs *attrs)
538 + {
539 + 	struct scatterlist *sg;
540 + 	int i;
541 +
542 + 	for_each_sg(sglist, sg, nhwentries, i) {
543 + 		BUG_ON(!sg_page(sg));
544 +
545 + 		sg->dma_address = sg_phys(sg);
546 + 		dma_sync_for_cpu(sg_virt(sg), sg->length, direction);
547 + 	}
548 + }
549 +
550 + static void metag_dma_sync_single_for_cpu(struct device *dev,
551 + 		dma_addr_t dma_handle, size_t size,
552 + 		enum dma_data_direction direction)
553 + {
554 + 	dma_sync_for_cpu(phys_to_virt(dma_handle), size, direction);
555 + }
556 +
557 + static void metag_dma_sync_single_for_device(struct device *dev,
558 + 		dma_addr_t dma_handle, size_t size,
559 + 		enum dma_data_direction direction)
560 + {
561 + 	dma_sync_for_device(phys_to_virt(dma_handle), size, direction);
562 + }
563 +
564 + static void metag_dma_sync_sg_for_cpu(struct device *dev,
565 + 		struct scatterlist *sglist, int nelems,
566 + 		enum dma_data_direction direction)
567 + {
568 + 	int i;
569 + 	struct scatterlist *sg;
570 +
571 + 	for_each_sg(sglist, sg, nelems, i)
572 + 		dma_sync_for_cpu(sg_virt(sg), sg->length, direction);
573 + }
574 +
575 + static void metag_dma_sync_sg_for_device(struct device *dev,
576 + 		struct scatterlist *sglist, int nelems,
577 + 		enum dma_data_direction direction)
578 + {
579 + 	int i;
580 + 	struct scatterlist *sg;
581 +
582 + 	for_each_sg(sglist, sg, nelems, i)
583 + 		dma_sync_for_device(sg_virt(sg), sg->length, direction);
584 + }
585 +
586 + struct dma_map_ops metag_dma_ops = {
587 + 	.alloc = metag_dma_alloc,
588 + 	.free = metag_dma_free,
589 + 	.map_page = metag_dma_map_page,
590 + 	.map_sg = metag_dma_map_sg,
591 + 	.sync_single_for_device = metag_dma_sync_single_for_device,
592 + 	.sync_single_for_cpu = metag_dma_sync_single_for_cpu,
593 + 	.sync_sg_for_cpu = metag_dma_sync_sg_for_cpu,
594 + 	.mmap = metag_dma_mmap,
595 + };
596 + EXPORT_SYMBOL(metag_dma_ops);
-1
arch/microblaze/Kconfig
···
19 19 	select HAVE_ARCH_KGDB
20 20 	select HAVE_DEBUG_KMEMLEAK
21 21 	select HAVE_DMA_API_DEBUG
22 - 	select HAVE_DMA_ATTRS
23 22 	select HAVE_DYNAMIC_FTRACE
24 23 	select HAVE_FTRACE_MCOUNT_RECORD
25 24 	select HAVE_FUNCTION_GRAPH_TRACER
-2
arch/microblaze/include/asm/dma-mapping.h
···
44 44 	return &dma_direct_ops;
45 45 }
46 46
47 - #include <asm-generic/dma-mapping-common.h>
48 -
49 47 static inline void __dma_sync(unsigned long paddr,
50 48 		size_t size, enum dma_data_direction direction)
51 49 {
-1
arch/mips/Kconfig
···
31 31 	select RTC_LIB if !MACH_LOONGSON64
32 32 	select GENERIC_ATOMIC64 if !64BIT
33 33 	select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
34 - 	select HAVE_DMA_ATTRS
35 34 	select HAVE_DMA_CONTIGUOUS
36 35 	select HAVE_DMA_API_DEBUG
37 36 	select GENERIC_IRQ_PROBE
-2
arch/mips/include/asm/dma-mapping.h
···
29 29
30 30 static inline void dma_mark_clean(void *addr, size_t size) {}
31 31
32 - #include <asm-generic/dma-mapping-common.h>
33 -
34 32 extern void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
35 33 		enum dma_data_direction direction);
36 34
-1
arch/mips/include/uapi/asm/mman.h
···
73 73 #define MADV_SEQUENTIAL 2 /* expect sequential page references */
74 74 #define MADV_WILLNEED 3 /* will need these pages */
75 75 #define MADV_DONTNEED 4 /* don't need these pages */
76 - #define MADV_FREE 5 /* free pages only if memory pressure */
77 76
78 77 /* common parameters: try to keep these consistent across architectures */
79 78 #define MADV_FREE 8 /* free pages only if memory pressure */
+1
arch/mn10300/Kconfig
···
14 14 	select OLD_SIGSUSPEND3
15 15 	select OLD_SIGACTION
16 16 	select HAVE_DEBUG_STACKOVERFLOW
17 + 	select ARCH_NO_COHERENT_DMA_MMAP
17 18
18 19 config AM33_2
19 20 	def_bool n
+3 -158
arch/mn10300/include/asm/dma-mapping.h
···
11 11 #ifndef _ASM_DMA_MAPPING_H
12 12 #define _ASM_DMA_MAPPING_H
13 13
14 - #include <linux/mm.h>
15 - #include <linux/scatterlist.h>
16 -
17 14 #include <asm/cache.h>
18 15 #include <asm/io.h>
19 16
20 - /*
21 -  * See Documentation/DMA-API.txt for the description of how the
22 -  * following DMA API should work.
23 -  */
17 + extern struct dma_map_ops mn10300_dma_ops;
24 18
25 - extern void *dma_alloc_coherent(struct device *dev, size_t size,
26 - 		dma_addr_t *dma_handle, int flag);
27 -
28 - extern void dma_free_coherent(struct device *dev, size_t size,
29 - 		void *vaddr, dma_addr_t dma_handle);
30 -
31 - #define dma_alloc_noncoherent(d, s, h, f) dma_alloc_coherent((d), (s), (h), (f))
32 - #define dma_free_noncoherent(d, s, v, h) dma_free_coherent((d), (s), (v), (h))
33 -
34 - static inline
35 - dma_addr_t dma_map_single(struct device *dev, void *ptr, size_t size,
36 - 		enum dma_data_direction direction)
19 + static inline struct dma_map_ops *get_dma_ops(struct device *dev)
37 20 {
38 - 	BUG_ON(direction == DMA_NONE);
39 - 	mn10300_dcache_flush_inv();
40 - 	return virt_to_bus(ptr);
41 - }
42 -
43 - static inline
44 - void dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
45 - 		enum dma_data_direction direction)
46 - {
47 - 	BUG_ON(direction == DMA_NONE);
48 - }
49 -
50 - static inline
51 - int dma_map_sg(struct device *dev, struct scatterlist *sglist, int nents,
52 - 		enum dma_data_direction direction)
53 - {
54 - 	struct scatterlist *sg;
55 - 	int i;
56 -
57 - 	BUG_ON(!valid_dma_direction(direction));
58 - 	WARN_ON(nents == 0 || sglist[0].length == 0);
59 -
60 - 	for_each_sg(sglist, sg, nents, i) {
61 - 		BUG_ON(!sg_page(sg));
62 -
63 - 		sg->dma_address = sg_phys(sg);
64 - 	}
65 -
66 - 	mn10300_dcache_flush_inv();
67 - 	return nents;
68 - }
69 -
70 - static inline
71 - void dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nhwentries,
72 - 		enum dma_data_direction direction)
73 - {
74 - 	BUG_ON(!valid_dma_direction(direction));
75 - }
76 -
77 - static inline
78 - dma_addr_t dma_map_page(struct device *dev, struct page *page,
79 - 		unsigned long offset, size_t size,
80 - 		enum dma_data_direction direction)
81 - {
82 - 	BUG_ON(direction == DMA_NONE);
83 - 	return page_to_bus(page) + offset;
84 - }
85 -
86 - static inline
87 - void dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
88 - 		enum dma_data_direction direction)
89 - {
90 - 	BUG_ON(direction == DMA_NONE);
91 - }
92 -
93 - static inline
94 - void dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle,
95 - 		size_t size, enum dma_data_direction direction)
96 - {
97 - }
98 -
99 - static inline
100 - void dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle,
101 - 		size_t size, enum dma_data_direction direction)
102 - {
103 - 	mn10300_dcache_flush_inv();
104 - }
105 -
106 - static inline
107 - void dma_sync_single_range_for_cpu(struct device *dev, dma_addr_t dma_handle,
108 - 		unsigned long offset, size_t size,
109 - 		enum dma_data_direction direction)
110 - {
111 - }
112 -
113 - static inline void
114 - dma_sync_single_range_for_device(struct device *dev, dma_addr_t dma_handle,
115 - 		unsigned long offset, size_t size,
116 - 		enum dma_data_direction direction)
117 - {
118 - 	mn10300_dcache_flush_inv();
119 - }
120 -
121 -
122 - static inline
123 - void dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
124 - 		int nelems, enum dma_data_direction direction)
125 - {
126 - }
127 -
128 - static inline
129 - void dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
130 - 		int nelems, enum dma_data_direction direction)
131 - {
132 - 	mn10300_dcache_flush_inv();
133 - }
134 -
135 - static inline
136 - int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
137 - {
138 - 	return 0;
139 - }
140 -
141 - static inline
142 - int dma_supported(struct device *dev, u64 mask)
143 - {
144 - 	/*
145 - 	 * we fall back to GFP_DMA when the mask isn't all 1s, so we can't
146 - 	 * guarantee allocations that must be within a tighter range than
147 - 	 * GFP_DMA
148 - 	 */
149 - 	if (mask < 0x00ffffff)
150 - 		return 0;
151 - 	return 1;
152 - }
153 -
154 - static inline
155 - int dma_set_mask(struct device *dev, u64 mask)
156 - {
157 - 	if (!dev->dma_mask || !dma_supported(dev, mask))
158 - 		return -EIO;
159 -
160 - 	*dev->dma_mask = mask;
161 - 	return 0;
21 + 	return &mn10300_dma_ops;
162 22 }
163 23
164 24 static inline
···
26 166 		enum dma_data_direction direction)
27 167 {
28 168 	mn10300_dcache_flush_inv();
29 - }
30 -
31 - /* Not supported for now */
32 - static inline int dma_mmap_coherent(struct device *dev,
33 - 		struct vm_area_struct *vma, void *cpu_addr,
34 - 		dma_addr_t dma_addr, size_t size)
35 - {
36 - 	return -EINVAL;
37 - }
38 -
39 - static inline int dma_get_sgtable(struct device *dev, struct sg_table *sgt,
40 - 		void *cpu_addr, dma_addr_t dma_addr,
41 - 		size_t size)
42 - {
43 - 	return -EINVAL;
44 169 }
45 170
46 171 #endif
+61 -6
arch/mn10300/mm/dma-alloc.c
···
20 20
21 21 static unsigned long pci_sram_allocated = 0xbc000000;
22 22
23 - void *dma_alloc_coherent(struct device *dev, size_t size,
24 - 		dma_addr_t *dma_handle, int gfp)
23 + static void *mn10300_dma_alloc(struct device *dev, size_t size,
24 + 		dma_addr_t *dma_handle, gfp_t gfp, struct dma_attrs *attrs)
25 25 {
26 26 	unsigned long addr;
27 27 	void *ret;
···
61 61 	printk("dma_alloc_coherent() = %p [%x]\n", ret, *dma_handle);
62 62 	return ret;
63 63 }
64 - EXPORT_SYMBOL(dma_alloc_coherent);
65 64
66 - void dma_free_coherent(struct device *dev, size_t size, void *vaddr,
67 - 		dma_addr_t dma_handle)
65 + static void mn10300_dma_free(struct device *dev, size_t size, void *vaddr,
66 + 		dma_addr_t dma_handle, struct dma_attrs *attrs)
68 67 {
69 68 	unsigned long addr = (unsigned long) vaddr & ~0x20000000;
70 69
···
72 73
73 74 	free_pages(addr, get_order(size));
74 75 }
75 - EXPORT_SYMBOL(dma_free_coherent);
76 +
77 + static int mn10300_dma_map_sg(struct device *dev, struct scatterlist *sglist,
78 + 		int nents, enum dma_data_direction direction,
79 + 		struct dma_attrs *attrs)
80 + {
81 + 	struct scatterlist *sg;
82 + 	int i;
83 +
84 + 	for_each_sg(sglist, sg, nents, i) {
85 + 		BUG_ON(!sg_page(sg));
86 +
87 + 		sg->dma_address = sg_phys(sg);
88 + 	}
89 +
90 + 	mn10300_dcache_flush_inv();
91 + 	return nents;
92 + }
93 +
94 + static dma_addr_t mn10300_dma_map_page(struct device *dev, struct page *page,
95 + 		unsigned long offset, size_t size,
96 + 		enum dma_data_direction direction, struct dma_attrs *attrs)
97 + {
98 + 	return page_to_bus(page) + offset;
99 + }
100 +
101 + static void mn10300_dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle,
102 + 		size_t size, enum dma_data_direction direction)
103 + {
104 + 	mn10300_dcache_flush_inv();
105 + }
106 +
107 + static void mn10300_dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
108 + 		int nelems, enum dma_data_direction direction)
109 + {
110 + 	mn10300_dcache_flush_inv();
111 + }
112 +
113 + static int mn10300_dma_supported(struct device *dev, u64 mask)
114 + {
115 + 	/*
116 + 	 * we fall back to GFP_DMA when the mask isn't all 1s, so we can't
117 + 	 * guarantee allocations that must be within a tighter range than
118 + 	 * GFP_DMA
119 + 	 */
120 + 	if (mask < 0x00ffffff)
121 + 		return 0;
122 + 	return 1;
123 + }
124 +
125 + struct dma_map_ops mn10300_dma_ops = {
126 + 	.alloc = mn10300_dma_alloc,
127 + 	.free = mn10300_dma_free,
128 + 	.map_page = mn10300_dma_map_page,
129 + 	.map_sg = mn10300_dma_map_sg,
130 + 	.sync_single_for_device = mn10300_dma_sync_single_for_device,
131 + 	.sync_sg_for_device = mn10300_dma_sync_sg_for_device,
132 + };
+6 -117
arch/nios2/include/asm/dma-mapping.h
···
10 10 #ifndef _ASM_NIOS2_DMA_MAPPING_H
11 11 #define _ASM_NIOS2_DMA_MAPPING_H
12 12
13 - #include <linux/scatterlist.h>
14 - #include <linux/cache.h>
15 - #include <asm/cacheflush.h>
13 + extern struct dma_map_ops nios2_dma_ops;
16 14
17 - static inline void __dma_sync_for_device(void *vaddr, size_t size,
18 - 		enum dma_data_direction direction)
15 + static inline struct dma_map_ops *get_dma_ops(struct device *dev)
19 16 {
20 - 	switch (direction) {
21 - 	case DMA_FROM_DEVICE:
22 - 		invalidate_dcache_range((unsigned long)vaddr,
23 - 			(unsigned long)(vaddr + size));
24 - 		break;
25 - 	case DMA_TO_DEVICE:
26 - 		/*
27 - 		 * We just need to flush the caches here , but Nios2 flush
28 - 		 * instruction will do both writeback and invalidate.
29 - 		 */
30 - 	case DMA_BIDIRECTIONAL: /* flush and invalidate */
31 - 		flush_dcache_range((unsigned long)vaddr,
32 - 			(unsigned long)(vaddr + size));
33 - 		break;
34 - 	default:
35 - 		BUG();
36 - 	}
37 - }
38 -
39 - static inline void __dma_sync_for_cpu(void *vaddr, size_t size,
40 - 		enum dma_data_direction direction)
41 - {
42 - 	switch (direction) {
43 - 	case DMA_BIDIRECTIONAL:
44 - 	case DMA_FROM_DEVICE:
45 - 		invalidate_dcache_range((unsigned long)vaddr,
46 - 			(unsigned long)(vaddr + size));
47 - 		break;
48 - 	case DMA_TO_DEVICE:
49 - 		break;
50 - 	default:
51 - 		BUG();
52 - 	}
53 - }
54 -
55 - #define dma_alloc_noncoherent(d, s, h, f) dma_alloc_coherent(d, s, h, f)
56 - #define dma_free_noncoherent(d, s, v, h) dma_free_coherent(d, s, v, h)
57 -
58 - void *dma_alloc_coherent(struct device *dev, size_t size,
59 - 		dma_addr_t *dma_handle, gfp_t flag);
60 -
61 - void dma_free_coherent(struct device *dev, size_t size,
62 - 		void *vaddr, dma_addr_t dma_handle);
63 -
64 - static inline dma_addr_t dma_map_single(struct device *dev, void *ptr,
65 - 		size_t size,
66 - 		enum dma_data_direction direction)
67 - {
68 - 	BUG_ON(!valid_dma_direction(direction));
69 - 	__dma_sync_for_device(ptr, size, direction);
70 - 	return virt_to_phys(ptr);
71 - }
72 -
73 - static inline void dma_unmap_single(struct device *dev, dma_addr_t dma_addr,
74 - 		size_t size, enum dma_data_direction direction)
75 - {
76 - }
77 -
78 - extern int dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
79 - 		enum dma_data_direction direction);
80 - extern dma_addr_t dma_map_page(struct device *dev, struct page *page,
81 - 		unsigned long offset, size_t size, enum dma_data_direction direction);
82 - extern void dma_unmap_page(struct device *dev, dma_addr_t dma_address,
83 - 		size_t size, enum dma_data_direction direction);
84 - extern void dma_unmap_sg(struct device *dev, struct scatterlist *sg,
85 - 		int nhwentries, enum dma_data_direction direction);
86 - extern void dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle,
87 - 		size_t size, enum dma_data_direction direction);
88 - extern void dma_sync_single_for_device(struct device *dev,
89 - 		dma_addr_t dma_handle, size_t size, enum dma_data_direction direction);
90 - extern void dma_sync_single_range_for_cpu(struct device *dev,
91 - 		dma_addr_t dma_handle, unsigned long offset, size_t size,
92 - 		enum dma_data_direction direction);
93 - extern void dma_sync_single_range_for_device(struct device *dev,
94 - 		dma_addr_t dma_handle, unsigned long offset, size_t size,
95 - 		enum dma_data_direction direction);
96 - extern void dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
97 - 		int nelems, enum dma_data_direction direction);
98 - extern void dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
99 - 		int nelems, enum dma_data_direction direction);
100 -
101 - static inline int dma_supported(struct device *dev, u64 mask)
102 - {
103 - 	return 1;
104 - }
105 -
106 - static inline int dma_set_mask(struct device *dev, u64 mask)
107 - {
108 - 	if (!dev->dma_mask || !dma_supported(dev, mask))
109 - 		return -EIO;
110 -
111 - 	*dev->dma_mask = mask;
112 -
113 - 	return 0;
114 - }
115 -
116 - static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
117 - {
118 - 	return 0;
17 + 	return &nios2_dma_ops;
119 18 }
120 19
121 20 /*
122 -  * dma_alloc_noncoherent() returns non-cacheable memory, so there's no need to
123 -  * do any flushing here.
124 -  */
21 +  * dma_alloc_noncoherent() returns non-cacheable memory, so there's no need to
22 +  * do any flushing here.
23 +  */
125 24 static inline void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
126 25 		enum dma_data_direction direction)
127 26 {
128 27 }
129 -
130 - /* drivers/base/dma-mapping.c */
131 - extern int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
132 - 		void *cpu_addr, dma_addr_t dma_addr, size_t size);
133 - extern int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
134 - 		void *cpu_addr, dma_addr_t dma_addr,
135 - 		size_t size);
136 -
137 - #define dma_mmap_coherent(d, v, c, h, s) dma_common_mmap(d, v, c, h, s)
138 - #define dma_get_sgtable(d, t, v, h, s) dma_common_get_sgtable(d, t, v, h, s)
139 28
140 29 #endif /* _ASM_NIOS2_DMA_MAPPING_H */
+80 -69
arch/nios2/mm/dma-mapping.c
··· 20 20 #include <linux/cache.h> 21 21 #include <asm/cacheflush.h> 22 22 23 + static inline void __dma_sync_for_device(void *vaddr, size_t size, 24 + enum dma_data_direction direction) 25 + { 26 + switch (direction) { 27 + case DMA_FROM_DEVICE: 28 + invalidate_dcache_range((unsigned long)vaddr, 29 + (unsigned long)(vaddr + size)); 30 + break; 31 + case DMA_TO_DEVICE: 32 + /* 33 + * We just need to flush the caches here, but Nios2 flush 34 + * instruction will do both writeback and invalidate. 35 + */ 36 + case DMA_BIDIRECTIONAL: /* flush and invalidate */ 37 + flush_dcache_range((unsigned long)vaddr, 38 + (unsigned long)(vaddr + size)); 39 + break; 40 + default: 41 + BUG(); 42 + } 43 + } 23 44 24 - void *dma_alloc_coherent(struct device *dev, size_t size, 25 - dma_addr_t *dma_handle, gfp_t gfp) 45 + static inline void __dma_sync_for_cpu(void *vaddr, size_t size, 46 + enum dma_data_direction direction) 47 + { 48 + switch (direction) { 49 + case DMA_BIDIRECTIONAL: 50 + case DMA_FROM_DEVICE: 51 + invalidate_dcache_range((unsigned long)vaddr, 52 + (unsigned long)(vaddr + size)); 53 + break; 54 + case DMA_TO_DEVICE: 55 + break; 56 + default: 57 + BUG(); 58 + } 59 + } 60 + 61 + static void *nios2_dma_alloc(struct device *dev, size_t size, 62 + dma_addr_t *dma_handle, gfp_t gfp, struct dma_attrs *attrs) 26 63 { 27 64 void *ret; 28 65 ··· 82 45 83 46 return ret; 84 47 } 85 - EXPORT_SYMBOL(dma_alloc_coherent); 86 48 87 - void dma_free_coherent(struct device *dev, size_t size, void *vaddr, 88 - dma_addr_t dma_handle) 49 + static void nios2_dma_free(struct device *dev, size_t size, void *vaddr, 50 + dma_addr_t dma_handle, struct dma_attrs *attrs) 89 51 { 90 52 unsigned long addr = (unsigned long) CAC_ADDR((unsigned long) vaddr); 91 53 92 54 free_pages(addr, get_order(size)); 93 55 } 94 - EXPORT_SYMBOL(dma_free_coherent); 95 56 96 - int dma_map_sg(struct device *dev, struct scatterlist *sg, int nents, 97 - enum dma_data_direction direction) 57 + static int 
nios2_dma_map_sg(struct device *dev, struct scatterlist *sg, 58 + int nents, enum dma_data_direction direction, 59 + struct dma_attrs *attrs) 98 60 { 99 61 int i; 100 - 101 - BUG_ON(!valid_dma_direction(direction)); 102 62 103 63 for_each_sg(sg, sg, nents, i) { 104 64 void *addr; ··· 109 75 110 76 return nents; 111 77 } 112 - EXPORT_SYMBOL(dma_map_sg); 113 78 114 - dma_addr_t dma_map_page(struct device *dev, struct page *page, 79 + static dma_addr_t nios2_dma_map_page(struct device *dev, struct page *page, 115 80 unsigned long offset, size_t size, 116 - enum dma_data_direction direction) 81 + enum dma_data_direction direction, 82 + struct dma_attrs *attrs) 117 83 { 118 - void *addr; 84 + void *addr = page_address(page) + offset; 119 85 120 - BUG_ON(!valid_dma_direction(direction)); 121 - 122 - addr = page_address(page) + offset; 123 86 __dma_sync_for_device(addr, size, direction); 124 - 125 87 return page_to_phys(page) + offset; 126 88 } 127 - EXPORT_SYMBOL(dma_map_page); 128 89 129 - void dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size, 130 - enum dma_data_direction direction) 90 + static void nios2_dma_unmap_page(struct device *dev, dma_addr_t dma_address, 91 + size_t size, enum dma_data_direction direction, 92 + struct dma_attrs *attrs) 131 93 { 132 - BUG_ON(!valid_dma_direction(direction)); 133 - 134 94 __dma_sync_for_cpu(phys_to_virt(dma_address), size, direction); 135 95 } 136 - EXPORT_SYMBOL(dma_unmap_page); 137 96 138 - void dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nhwentries, 139 - enum dma_data_direction direction) 97 + static void nios2_dma_unmap_sg(struct device *dev, struct scatterlist *sg, 98 + int nhwentries, enum dma_data_direction direction, 99 + struct dma_attrs *attrs) 140 100 { 141 101 void *addr; 142 102 int i; 143 - 144 - BUG_ON(!valid_dma_direction(direction)); 145 103 146 104 if (direction == DMA_TO_DEVICE) 147 105 return; ··· 144 118 __dma_sync_for_cpu(addr, sg->length, direction); 145 119 } 146 
120 } 147 - EXPORT_SYMBOL(dma_unmap_sg); 148 121 149 - void dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle, 150 - size_t size, enum dma_data_direction direction) 122 + static void nios2_dma_sync_single_for_cpu(struct device *dev, 123 + dma_addr_t dma_handle, size_t size, 124 + enum dma_data_direction direction) 151 125 { 152 - BUG_ON(!valid_dma_direction(direction)); 153 - 154 126 __dma_sync_for_cpu(phys_to_virt(dma_handle), size, direction); 155 127 } 156 - EXPORT_SYMBOL(dma_sync_single_for_cpu); 157 128 158 - void dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle, 159 - size_t size, enum dma_data_direction direction) 129 + static void nios2_dma_sync_single_for_device(struct device *dev, 130 + dma_addr_t dma_handle, size_t size, 131 + enum dma_data_direction direction) 160 132 { 161 - BUG_ON(!valid_dma_direction(direction)); 162 - 163 133 __dma_sync_for_device(phys_to_virt(dma_handle), size, direction); 164 134 } 165 - EXPORT_SYMBOL(dma_sync_single_for_device); 166 135 167 - void dma_sync_single_range_for_cpu(struct device *dev, dma_addr_t dma_handle, 168 - unsigned long offset, size_t size, 169 - enum dma_data_direction direction) 170 - { 171 - BUG_ON(!valid_dma_direction(direction)); 172 - 173 - __dma_sync_for_cpu(phys_to_virt(dma_handle), size, direction); 174 - } 175 - EXPORT_SYMBOL(dma_sync_single_range_for_cpu); 176 - 177 - void dma_sync_single_range_for_device(struct device *dev, dma_addr_t dma_handle, 178 - unsigned long offset, size_t size, 179 - enum dma_data_direction direction) 180 - { 181 - BUG_ON(!valid_dma_direction(direction)); 182 - 183 - __dma_sync_for_device(phys_to_virt(dma_handle), size, direction); 184 - } 185 - EXPORT_SYMBOL(dma_sync_single_range_for_device); 186 - 187 - void dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, int nelems, 188 - enum dma_data_direction direction) 136 + static void nios2_dma_sync_sg_for_cpu(struct device *dev, 137 + struct scatterlist *sg, int nelems, 138 + 
enum dma_data_direction direction) 189 139 { 190 140 int i; 191 - 192 - BUG_ON(!valid_dma_direction(direction)); 193 141 194 142 /* Make sure that gcc doesn't leave the empty loop body. */ 195 143 for_each_sg(sg, sg, nelems, i) 196 144 __dma_sync_for_cpu(sg_virt(sg), sg->length, direction); 197 145 } 198 - EXPORT_SYMBOL(dma_sync_sg_for_cpu); 199 146 200 - void dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg, 201 - int nelems, enum dma_data_direction direction) 147 + static void nios2_dma_sync_sg_for_device(struct device *dev, 148 + struct scatterlist *sg, int nelems, 149 + enum dma_data_direction direction) 202 150 { 203 151 int i; 204 - 205 - BUG_ON(!valid_dma_direction(direction)); 206 152 207 153 /* Make sure that gcc doesn't leave the empty loop body. */ 208 154 for_each_sg(sg, sg, nelems, i) 209 155 __dma_sync_for_device(sg_virt(sg), sg->length, direction); 210 156 211 157 } 212 - EXPORT_SYMBOL(dma_sync_sg_for_device); 158 + 159 + struct dma_map_ops nios2_dma_ops = { 160 + .alloc = nios2_dma_alloc, 161 + .free = nios2_dma_free, 162 + .map_page = nios2_dma_map_page, 163 + .unmap_page = nios2_dma_unmap_page, 164 + .map_sg = nios2_dma_map_sg, 165 + .unmap_sg = nios2_dma_unmap_sg, 166 + .sync_single_for_device = nios2_dma_sync_single_for_device, 167 + .sync_single_for_cpu = nios2_dma_sync_single_for_cpu, 168 + .sync_sg_for_cpu = nios2_dma_sync_sg_for_cpu, 169 + .sync_sg_for_device = nios2_dma_sync_sg_for_device, 170 + }; 171 + EXPORT_SYMBOL(nios2_dma_ops);
-3
arch/openrisc/Kconfig
··· 29 29 config MMU 30 30 def_bool y 31 31 32 - config HAVE_DMA_ATTRS 33 - def_bool y 34 - 35 32 config RWSEM_GENERIC_SPINLOCK 36 33 def_bool y 37 34
-2
arch/openrisc/include/asm/dma-mapping.h
··· 42 42 return dma_mask == DMA_BIT_MASK(32); 43 43 } 44 44 45 - #include <asm-generic/dma-mapping-common.h> 46 - 47 45 #endif /* __ASM_OPENRISC_DMA_MAPPING_H */
+1
arch/parisc/Kconfig
··· 29 29 select TTY # Needed for pdc_cons.c 30 30 select HAVE_DEBUG_STACKOVERFLOW 31 31 select HAVE_ARCH_AUDITSYSCALL 32 + select ARCH_NO_COHERENT_DMA_MMAP 32 33 33 34 help 34 35 The PA-RISC microprocessor is designed by Hewlett-Packard and used
+8 -181
arch/parisc/include/asm/dma-mapping.h
··· 1 1 #ifndef _PARISC_DMA_MAPPING_H 2 2 #define _PARISC_DMA_MAPPING_H 3 3 4 - #include <linux/mm.h> 5 - #include <linux/scatterlist.h> 6 4 #include <asm/cacheflush.h> 7 5 8 - /* See Documentation/DMA-API-HOWTO.txt */ 9 - struct hppa_dma_ops { 10 - int (*dma_supported)(struct device *dev, u64 mask); 11 - void *(*alloc_consistent)(struct device *dev, size_t size, dma_addr_t *iova, gfp_t flag); 12 - void *(*alloc_noncoherent)(struct device *dev, size_t size, dma_addr_t *iova, gfp_t flag); 13 - void (*free_consistent)(struct device *dev, size_t size, void *vaddr, dma_addr_t iova); 14 - dma_addr_t (*map_single)(struct device *dev, void *addr, size_t size, enum dma_data_direction direction); 15 - void (*unmap_single)(struct device *dev, dma_addr_t iova, size_t size, enum dma_data_direction direction); 16 - int (*map_sg)(struct device *dev, struct scatterlist *sg, int nents, enum dma_data_direction direction); 17 - void (*unmap_sg)(struct device *dev, struct scatterlist *sg, int nhwents, enum dma_data_direction direction); 18 - void (*dma_sync_single_for_cpu)(struct device *dev, dma_addr_t iova, unsigned long offset, size_t size, enum dma_data_direction direction); 19 - void (*dma_sync_single_for_device)(struct device *dev, dma_addr_t iova, unsigned long offset, size_t size, enum dma_data_direction direction); 20 - void (*dma_sync_sg_for_cpu)(struct device *dev, struct scatterlist *sg, int nelems, enum dma_data_direction direction); 21 - void (*dma_sync_sg_for_device)(struct device *dev, struct scatterlist *sg, int nelems, enum dma_data_direction direction); 22 - }; 23 - 24 6 /* 25 - ** We could live without the hppa_dma_ops indirection if we didn't want 26 - ** to support 4 different coherent dma models with one binary (they will 27 - ** someday be loadable modules): 7 + ** We need to support 4 different coherent dma models with one binary: 8 + ** 28 9 ** I/O MMU consistent method dma_sync behavior 29 10 ** ============= ====================== ======================= 
30 11 ** a) PA-7x00LC uncachable host memory flush/purge ··· 21 40 */ 22 41 23 42 #ifdef CONFIG_PA11 24 - extern struct hppa_dma_ops pcxl_dma_ops; 25 - extern struct hppa_dma_ops pcx_dma_ops; 43 + extern struct dma_map_ops pcxl_dma_ops; 44 + extern struct dma_map_ops pcx_dma_ops; 26 45 #endif 27 46 28 - extern struct hppa_dma_ops *hppa_dma_ops; 47 + extern struct dma_map_ops *hppa_dma_ops; 29 48 30 - #define dma_alloc_attrs(d, s, h, f, a) dma_alloc_coherent(d, s, h, f) 31 - #define dma_free_attrs(d, s, h, f, a) dma_free_coherent(d, s, h, f) 32 - 33 - static inline void * 34 - dma_alloc_coherent(struct device *dev, size_t size, dma_addr_t *dma_handle, 35 - gfp_t flag) 49 + static inline struct dma_map_ops *get_dma_ops(struct device *dev) 36 50 { 37 - return hppa_dma_ops->alloc_consistent(dev, size, dma_handle, flag); 38 - } 39 - 40 - static inline void * 41 - dma_alloc_noncoherent(struct device *dev, size_t size, dma_addr_t *dma_handle, 42 - gfp_t flag) 43 - { 44 - return hppa_dma_ops->alloc_noncoherent(dev, size, dma_handle, flag); 45 - } 46 - 47 - static inline void 48 - dma_free_coherent(struct device *dev, size_t size, 49 - void *vaddr, dma_addr_t dma_handle) 50 - { 51 - hppa_dma_ops->free_consistent(dev, size, vaddr, dma_handle); 52 - } 53 - 54 - static inline void 55 - dma_free_noncoherent(struct device *dev, size_t size, 56 - void *vaddr, dma_addr_t dma_handle) 57 - { 58 - hppa_dma_ops->free_consistent(dev, size, vaddr, dma_handle); 59 - } 60 - 61 - static inline dma_addr_t 62 - dma_map_single(struct device *dev, void *ptr, size_t size, 63 - enum dma_data_direction direction) 64 - { 65 - return hppa_dma_ops->map_single(dev, ptr, size, direction); 66 - } 67 - 68 - static inline void 69 - dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size, 70 - enum dma_data_direction direction) 71 - { 72 - hppa_dma_ops->unmap_single(dev, dma_addr, size, direction); 73 - } 74 - 75 - static inline int 76 - dma_map_sg(struct device *dev, struct scatterlist *sg, 
int nents, 77 - enum dma_data_direction direction) 78 - { 79 - return hppa_dma_ops->map_sg(dev, sg, nents, direction); 80 - } 81 - 82 - static inline void 83 - dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nhwentries, 84 - enum dma_data_direction direction) 85 - { 86 - hppa_dma_ops->unmap_sg(dev, sg, nhwentries, direction); 87 - } 88 - 89 - static inline dma_addr_t 90 - dma_map_page(struct device *dev, struct page *page, unsigned long offset, 91 - size_t size, enum dma_data_direction direction) 92 - { 93 - return dma_map_single(dev, (page_address(page) + (offset)), size, direction); 94 - } 95 - 96 - static inline void 97 - dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size, 98 - enum dma_data_direction direction) 99 - { 100 - dma_unmap_single(dev, dma_address, size, direction); 101 - } 102 - 103 - 104 - static inline void 105 - dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle, size_t size, 106 - enum dma_data_direction direction) 107 - { 108 - if(hppa_dma_ops->dma_sync_single_for_cpu) 109 - hppa_dma_ops->dma_sync_single_for_cpu(dev, dma_handle, 0, size, direction); 110 - } 111 - 112 - static inline void 113 - dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle, size_t size, 114 - enum dma_data_direction direction) 115 - { 116 - if(hppa_dma_ops->dma_sync_single_for_device) 117 - hppa_dma_ops->dma_sync_single_for_device(dev, dma_handle, 0, size, direction); 118 - } 119 - 120 - static inline void 121 - dma_sync_single_range_for_cpu(struct device *dev, dma_addr_t dma_handle, 122 - unsigned long offset, size_t size, 123 - enum dma_data_direction direction) 124 - { 125 - if(hppa_dma_ops->dma_sync_single_for_cpu) 126 - hppa_dma_ops->dma_sync_single_for_cpu(dev, dma_handle, offset, size, direction); 127 - } 128 - 129 - static inline void 130 - dma_sync_single_range_for_device(struct device *dev, dma_addr_t dma_handle, 131 - unsigned long offset, size_t size, 132 - enum dma_data_direction direction) 133 - 
{ 134 - if(hppa_dma_ops->dma_sync_single_for_device) 135 - hppa_dma_ops->dma_sync_single_for_device(dev, dma_handle, offset, size, direction); 136 - } 137 - 138 - static inline void 139 - dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, int nelems, 140 - enum dma_data_direction direction) 141 - { 142 - if(hppa_dma_ops->dma_sync_sg_for_cpu) 143 - hppa_dma_ops->dma_sync_sg_for_cpu(dev, sg, nelems, direction); 144 - } 145 - 146 - static inline void 147 - dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg, int nelems, 148 - enum dma_data_direction direction) 149 - { 150 - if(hppa_dma_ops->dma_sync_sg_for_device) 151 - hppa_dma_ops->dma_sync_sg_for_device(dev, sg, nelems, direction); 152 - } 153 - 154 - static inline int 155 - dma_supported(struct device *dev, u64 mask) 156 - { 157 - return hppa_dma_ops->dma_supported(dev, mask); 158 - } 159 - 160 - static inline int 161 - dma_set_mask(struct device *dev, u64 mask) 162 - { 163 - if(!dev->dma_mask || !dma_supported(dev, mask)) 164 - return -EIO; 165 - 166 - *dev->dma_mask = mask; 167 - 168 - return 0; 51 + return hppa_dma_ops; 169 52 } 170 53 171 54 static inline void 172 55 dma_cache_sync(struct device *dev, void *vaddr, size_t size, 173 56 enum dma_data_direction direction) 174 57 { 175 - if(hppa_dma_ops->dma_sync_single_for_cpu) 58 + if (hppa_dma_ops->sync_single_for_cpu) 176 59 flush_kernel_dcache_range((unsigned long)vaddr, size); 177 60 } 178 61 ··· 82 237 struct parisc_device; 83 238 void * sba_get_iommu(struct parisc_device *dev); 84 239 #endif 85 - 86 - /* At the moment, we panic on error for IOMMU resource exaustion */ 87 - #define dma_mapping_error(dev, x) 0 88 - 89 - /* This API cannot be supported on PA-RISC */ 90 - static inline int dma_mmap_coherent(struct device *dev, 91 - struct vm_area_struct *vma, void *cpu_addr, 92 - dma_addr_t dma_addr, size_t size) 93 - { 94 - return -EINVAL; 95 - } 96 - 97 - static inline int dma_get_sgtable(struct device *dev, struct sg_table *sgt, 98 
- void *cpu_addr, dma_addr_t dma_addr, 99 - size_t size) 100 - { 101 - return -EINVAL; 102 - } 103 240 104 241 #endif
-1
arch/parisc/include/uapi/asm/mman.h
··· 43 43 #define MADV_SPACEAVAIL 5 /* insure that resources are reserved */ 44 44 #define MADV_VPS_PURGE 6 /* Purge pages from VM page cache */ 45 45 #define MADV_VPS_INHERIT 7 /* Inherit parents page size */ 46 - #define MADV_FREE 8 /* free pages only if memory pressure */ 47 46 48 47 /* common/generic parameters */ 49 48 #define MADV_FREE 8 /* free pages only if memory pressure */
+1 -1
arch/parisc/kernel/drivers.c
··· 40 40 #include <asm/parisc-device.h> 41 41 42 42 /* See comments in include/asm-parisc/pci.h */ 43 - struct hppa_dma_ops *hppa_dma_ops __read_mostly; 43 + struct dma_map_ops *hppa_dma_ops __read_mostly; 44 44 EXPORT_SYMBOL(hppa_dma_ops); 45 45 46 46 static struct device root = {
+52 -40
arch/parisc/kernel/pci-dma.c
··· 413 413 414 414 __initcall(pcxl_dma_init); 415 415 416 - static void * pa11_dma_alloc_consistent (struct device *dev, size_t size, dma_addr_t *dma_handle, gfp_t flag) 416 + static void *pa11_dma_alloc(struct device *dev, size_t size, 417 + dma_addr_t *dma_handle, gfp_t flag, struct dma_attrs *attrs) 417 418 { 418 419 unsigned long vaddr; 419 420 unsigned long paddr; ··· 440 439 return (void *)vaddr; 441 440 } 442 441 443 - static void pa11_dma_free_consistent (struct device *dev, size_t size, void *vaddr, dma_addr_t dma_handle) 442 + static void pa11_dma_free(struct device *dev, size_t size, void *vaddr, 443 + dma_addr_t dma_handle, struct dma_attrs *attrs) 444 444 { 445 445 int order; 446 446 ··· 452 450 free_pages((unsigned long)__va(dma_handle), order); 453 451 } 454 452 455 - static dma_addr_t pa11_dma_map_single(struct device *dev, void *addr, size_t size, enum dma_data_direction direction) 453 + static dma_addr_t pa11_dma_map_page(struct device *dev, struct page *page, 454 + unsigned long offset, size_t size, 455 + enum dma_data_direction direction, struct dma_attrs *attrs) 456 456 { 457 + void *addr = page_address(page) + offset; 457 458 BUG_ON(direction == DMA_NONE); 458 459 459 460 flush_kernel_dcache_range((unsigned long) addr, size); 460 461 return virt_to_phys(addr); 461 462 } 462 463 463 - static void pa11_dma_unmap_single(struct device *dev, dma_addr_t dma_handle, size_t size, enum dma_data_direction direction) 464 + static void pa11_dma_unmap_page(struct device *dev, dma_addr_t dma_handle, 465 + size_t size, enum dma_data_direction direction, 466 + struct dma_attrs *attrs) 464 467 { 465 468 BUG_ON(direction == DMA_NONE); 466 469 ··· 482 475 return; 483 476 } 484 477 485 - static int pa11_dma_map_sg(struct device *dev, struct scatterlist *sglist, int nents, enum dma_data_direction direction) 478 + static int pa11_dma_map_sg(struct device *dev, struct scatterlist *sglist, 479 + int nents, enum dma_data_direction direction, 480 + struct dma_attrs 
*attrs) 486 481 { 487 482 int i; 488 483 struct scatterlist *sg; ··· 501 492 return nents; 502 493 } 503 494 504 - static void pa11_dma_unmap_sg(struct device *dev, struct scatterlist *sglist, int nents, enum dma_data_direction direction) 495 + static void pa11_dma_unmap_sg(struct device *dev, struct scatterlist *sglist, 496 + int nents, enum dma_data_direction direction, 497 + struct dma_attrs *attrs) 505 498 { 506 499 int i; 507 500 struct scatterlist *sg; ··· 520 509 return; 521 510 } 522 511 523 - static void pa11_dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle, unsigned long offset, size_t size, enum dma_data_direction direction) 512 + static void pa11_dma_sync_single_for_cpu(struct device *dev, 513 + dma_addr_t dma_handle, size_t size, 514 + enum dma_data_direction direction) 524 515 { 525 516 BUG_ON(direction == DMA_NONE); 526 517 527 - flush_kernel_dcache_range((unsigned long) phys_to_virt(dma_handle) + offset, size); 518 + flush_kernel_dcache_range((unsigned long) phys_to_virt(dma_handle), 519 + size); 528 520 } 529 521 530 - static void pa11_dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle, unsigned long offset, size_t size, enum dma_data_direction direction) 522 + static void pa11_dma_sync_single_for_device(struct device *dev, 523 + dma_addr_t dma_handle, size_t size, 524 + enum dma_data_direction direction) 531 525 { 532 526 BUG_ON(direction == DMA_NONE); 533 527 534 - flush_kernel_dcache_range((unsigned long) phys_to_virt(dma_handle) + offset, size); 528 + flush_kernel_dcache_range((unsigned long) phys_to_virt(dma_handle), 529 + size); 535 530 } 536 531 537 532 static void pa11_dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sglist, int nents, enum dma_data_direction direction) ··· 562 545 flush_kernel_vmap_range(sg_virt(sg), sg->length); 563 546 } 564 547 565 - struct hppa_dma_ops pcxl_dma_ops = { 548 + struct dma_map_ops pcxl_dma_ops = { 566 549 .dma_supported = pa11_dma_supported, 567 - 
.alloc_consistent = pa11_dma_alloc_consistent, 568 - .alloc_noncoherent = pa11_dma_alloc_consistent, 569 - .free_consistent = pa11_dma_free_consistent, 570 - .map_single = pa11_dma_map_single, 571 - .unmap_single = pa11_dma_unmap_single, 550 + .alloc = pa11_dma_alloc, 551 + .free = pa11_dma_free, 552 + .map_page = pa11_dma_map_page, 553 + .unmap_page = pa11_dma_unmap_page, 572 554 .map_sg = pa11_dma_map_sg, 573 555 .unmap_sg = pa11_dma_unmap_sg, 574 - .dma_sync_single_for_cpu = pa11_dma_sync_single_for_cpu, 575 - .dma_sync_single_for_device = pa11_dma_sync_single_for_device, 576 - .dma_sync_sg_for_cpu = pa11_dma_sync_sg_for_cpu, 577 - .dma_sync_sg_for_device = pa11_dma_sync_sg_for_device, 556 + .sync_single_for_cpu = pa11_dma_sync_single_for_cpu, 557 + .sync_single_for_device = pa11_dma_sync_single_for_device, 558 + .sync_sg_for_cpu = pa11_dma_sync_sg_for_cpu, 559 + .sync_sg_for_device = pa11_dma_sync_sg_for_device, 578 560 }; 579 561 580 - static void *fail_alloc_consistent(struct device *dev, size_t size, 581 - dma_addr_t *dma_handle, gfp_t flag) 582 - { 583 - return NULL; 584 - } 585 - 586 - static void *pa11_dma_alloc_noncoherent(struct device *dev, size_t size, 587 - dma_addr_t *dma_handle, gfp_t flag) 562 + static void *pcx_dma_alloc(struct device *dev, size_t size, 563 + dma_addr_t *dma_handle, gfp_t flag, struct dma_attrs *attrs) 588 564 { 589 565 void *addr; 566 + 567 + if (!dma_get_attr(DMA_ATTR_NON_CONSISTENT, attrs)) 568 + return NULL; 590 569 591 570 addr = (void *)__get_free_pages(flag, get_order(size)); 592 571 if (addr) ··· 591 578 return addr; 592 579 } 593 580 594 - static void pa11_dma_free_noncoherent(struct device *dev, size_t size, 595 - void *vaddr, dma_addr_t iova) 581 + static void pcx_dma_free(struct device *dev, size_t size, void *vaddr, 582 + dma_addr_t iova, struct dma_attrs *attrs) 596 583 { 597 584 free_pages((unsigned long)vaddr, get_order(size)); 598 585 return; 599 586 } 600 587 601 - struct hppa_dma_ops pcx_dma_ops = { 588 + 
struct dma_map_ops pcx_dma_ops = { 602 589 .dma_supported = pa11_dma_supported, 603 - .alloc_consistent = fail_alloc_consistent, 604 - .alloc_noncoherent = pa11_dma_alloc_noncoherent, 605 - .free_consistent = pa11_dma_free_noncoherent, 606 - .map_single = pa11_dma_map_single, 607 - .unmap_single = pa11_dma_unmap_single, 590 + .alloc = pcx_dma_alloc, 591 + .free = pcx_dma_free, 592 + .map_page = pa11_dma_map_page, 593 + .unmap_page = pa11_dma_unmap_page, 608 594 .map_sg = pa11_dma_map_sg, 609 595 .unmap_sg = pa11_dma_unmap_sg, 610 - .dma_sync_single_for_cpu = pa11_dma_sync_single_for_cpu, 611 - .dma_sync_single_for_device = pa11_dma_sync_single_for_device, 612 - .dma_sync_sg_for_cpu = pa11_dma_sync_sg_for_cpu, 613 - .dma_sync_sg_for_device = pa11_dma_sync_sg_for_device, 596 + .sync_single_for_cpu = pa11_dma_sync_single_for_cpu, 597 + .sync_single_for_device = pa11_dma_sync_single_for_device, 598 + .sync_sg_for_cpu = pa11_dma_sync_sg_for_cpu, 599 + .sync_sg_for_device = pa11_dma_sync_sg_for_device, 614 600 };
+1 -1
arch/powerpc/Kconfig
··· 108 108 select HAVE_ARCH_TRACEHOOK 109 109 select HAVE_MEMBLOCK 110 110 select HAVE_MEMBLOCK_NODE_MAP 111 - select HAVE_DMA_ATTRS 112 111 select HAVE_DMA_API_DEBUG 113 112 select HAVE_OPROFILE 114 113 select HAVE_DEBUG_KMEMLEAK ··· 157 158 select ARCH_HAS_DMA_SET_COHERENT_MASK 158 159 select ARCH_HAS_DEVMEM_IS_ALLOWED 159 160 select HAVE_ARCH_SECCOMP_FILTER 161 + select ARCH_HAS_UBSAN_SANITIZE_ALL 160 162 161 163 config GENERIC_CSUM 162 164 def_bool CPU_LITTLE_ENDIAN
-2
arch/powerpc/include/asm/dma-mapping.h
··· 125 125 #define HAVE_ARCH_DMA_SET_MASK 1 126 126 extern int dma_set_mask(struct device *dev, u64 dma_mask); 127 127 128 - #include <asm-generic/dma-mapping-common.h> 129 - 130 128 extern int __dma_set_mask(struct device *dev, u64 dma_mask); 131 129 extern u64 __dma_get_required_mask(struct device *dev); 132 130
+1 -1
arch/powerpc/include/asm/fadump.h
··· 191 191 u64 elfcorehdr_addr; 192 192 u32 crashing_cpu; 193 193 struct pt_regs regs; 194 - struct cpumask cpu_online_mask; 194 + struct cpumask online_mask; 195 195 }; 196 196 197 197 /* Crash memory ranges */
+7 -1
arch/powerpc/kernel/Makefile
··· 136 136 obj-$(CONFIG_EPAPR_PARAVIRT) += epapr_paravirt.o epapr_hcalls.o 137 137 obj-$(CONFIG_KVM_GUEST) += kvm.o kvm_emul.o 138 138 139 - # Disable GCOV in odd or sensitive code 139 + # Disable GCOV & sanitizers in odd or sensitive code 140 140 GCOV_PROFILE_prom_init.o := n 141 + UBSAN_SANITIZE_prom_init.o := n 141 142 GCOV_PROFILE_ftrace.o := n 143 + UBSAN_SANITIZE_ftrace.o := n 142 144 GCOV_PROFILE_machine_kexec_64.o := n 145 + UBSAN_SANITIZE_machine_kexec_64.o := n 143 146 GCOV_PROFILE_machine_kexec_32.o := n 147 + UBSAN_SANITIZE_machine_kexec_32.o := n 144 148 GCOV_PROFILE_kprobes.o := n 149 + UBSAN_SANITIZE_kprobes.o := n 150 + UBSAN_SANITIZE_vdso.o := n 145 151 146 152 extra-$(CONFIG_PPC_FPU) += fpu.o 147 153 extra-$(CONFIG_ALTIVEC) += vector.o
+2 -2
arch/powerpc/kernel/fadump.c
··· 415 415 else 416 416 ppc_save_regs(&fdh->regs); 417 417 418 - fdh->cpu_online_mask = *cpu_online_mask; 418 + fdh->online_mask = *cpu_online_mask; 419 419 420 420 /* Call ibm,os-term rtas call to trigger firmware assisted dump */ 421 421 rtas_os_term((char *)str); ··· 646 646 } 647 647 /* Lower 4 bytes of reg_value contains logical cpu id */ 648 648 cpu = be64_to_cpu(reg_entry->reg_value) & FADUMP_CPU_ID_MASK; 649 - if (fdh && !cpumask_test_cpu(cpu, &fdh->cpu_online_mask)) { 649 + if (fdh && !cpumask_test_cpu(cpu, &fdh->online_mask)) { 650 650 SKIP_TO_NEXT_CPU(reg_entry); 651 651 continue; 652 652 }
+1
arch/powerpc/kernel/vdso32/Makefile
··· 15 15 obj-vdso32 := $(addprefix $(obj)/, $(obj-vdso32)) 16 16 17 17 GCOV_PROFILE := n 18 + UBSAN_SANITIZE := n 18 19 19 20 ccflags-y := -shared -fno-common -fno-builtin 20 21 ccflags-y += -nostdlib -Wl,-soname=linux-vdso32.so.1 \
+1
arch/powerpc/kernel/vdso64/Makefile
··· 8 8 obj-vdso64 := $(addprefix $(obj)/, $(obj-vdso64)) 9 9 10 10 GCOV_PROFILE := n 11 + UBSAN_SANITIZE := n 11 12 12 13 ccflags-y := -shared -fno-common -fno-builtin 13 14 ccflags-y += -nostdlib -Wl,-soname=linux-vdso64.so.1 \
+1
arch/powerpc/xmon/Makefile
··· 3 3 subdir-ccflags-$(CONFIG_PPC_WERROR) := -Werror 4 4 5 5 GCOV_PROFILE := n 6 + UBSAN_SANITIZE := n 6 7 7 8 ccflags-$(CONFIG_PPC64) := $(NO_MINIMAL_TOC) 8 9
-1
arch/s390/Kconfig
··· 579 579 580 580 menuconfig PCI 581 581 bool "PCI support" 582 - select HAVE_DMA_ATTRS 583 582 select PCI_MSI 584 583 select IOMMU_SUPPORT 585 584 help
-2
arch/s390/include/asm/dma-mapping.h
··· 23 23 { 24 24 } 25 25 26 - #include <asm-generic/dma-mapping-common.h> 27 - 28 26 static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size) 29 27 { 30 28 if (!dev->dma_mask)
-1
arch/sh/Kconfig
··· 11 11 select HAVE_GENERIC_DMA_COHERENT 12 12 select HAVE_ARCH_TRACEHOOK 13 13 select HAVE_DMA_API_DEBUG 14 - select HAVE_DMA_ATTRS 15 14 select HAVE_PERF_EVENTS 16 15 select HAVE_DEBUG_BUGVERBOSE 17 16 select ARCH_HAVE_CUSTOM_GPIO_H
-2
arch/sh/include/asm/dma-mapping.h
··· 11 11 12 12 #define DMA_ERROR_CODE 0 13 13 14 - #include <asm-generic/dma-mapping-common.h> 15 - 16 14 void dma_cache_sync(struct device *dev, void *vaddr, size_t size, 17 15 enum dma_data_direction dir); 18 16
-1
arch/sparc/Kconfig
··· 26 26 select RTC_CLASS 27 27 select RTC_DRV_M48T59 28 28 select RTC_SYSTOHC 29 - select HAVE_DMA_ATTRS 30 29 select HAVE_DMA_API_DEBUG 31 30 select HAVE_ARCH_JUMP_LABEL if SPARC64 32 31 select GENERIC_IRQ_SHOW
-17
arch/sparc/include/asm/dma-mapping.h
··· 37 37 return dma_ops; 38 38 } 39 39 40 - #define HAVE_ARCH_DMA_SET_MASK 1 41 - 42 - static inline int dma_set_mask(struct device *dev, u64 mask) 43 - { 44 - #ifdef CONFIG_PCI 45 - if (dev->bus == &pci_bus_type) { 46 - if (!dev->dma_mask || !dma_supported(dev, mask)) 47 - return -EINVAL; 48 - *dev->dma_mask = mask; 49 - return 0; 50 - } 51 - #endif 52 - return -EINVAL; 53 - } 54 - 55 - #include <asm-generic/dma-mapping-common.h> 56 - 57 40 #endif
-1
arch/tile/Kconfig
··· 5 5 def_bool y 6 6 select HAVE_PERF_EVENTS 7 7 select USE_PMC if PERF_EVENTS 8 - select HAVE_DMA_ATTRS 9 8 select HAVE_DMA_API_DEBUG 10 9 select HAVE_KVM if !TILEGX 11 10 select GENERIC_FIND_FIRST_BIT
+1 -31
arch/tile/include/asm/dma-mapping.h
··· 73 73 } 74 74 75 75 #define HAVE_ARCH_DMA_SET_MASK 1 76 - 77 - #include <asm-generic/dma-mapping-common.h> 78 - 79 - static inline int 80 - dma_set_mask(struct device *dev, u64 mask) 81 - { 82 - struct dma_map_ops *dma_ops = get_dma_ops(dev); 83 - 84 - /* 85 - * For PCI devices with 64-bit DMA addressing capability, promote 86 - * the dma_ops to hybrid, with the consistent memory DMA space limited 87 - * to 32-bit. For 32-bit capable devices, limit the streaming DMA 88 - * address range to max_direct_dma_addr. 89 - */ 90 - if (dma_ops == gx_pci_dma_map_ops || 91 - dma_ops == gx_hybrid_pci_dma_map_ops || 92 - dma_ops == gx_legacy_pci_dma_map_ops) { 93 - if (mask == DMA_BIT_MASK(64) && 94 - dma_ops == gx_legacy_pci_dma_map_ops) 95 - set_dma_ops(dev, gx_hybrid_pci_dma_map_ops); 96 - else if (mask > dev->archdata.max_direct_dma_addr) 97 - mask = dev->archdata.max_direct_dma_addr; 98 - } 99 - 100 - if (!dev->dma_mask || !dma_supported(dev, mask)) 101 - return -EIO; 102 - 103 - *dev->dma_mask = mask; 104 - 105 - return 0; 106 - } 76 + int dma_set_mask(struct device *dev, u64 mask); 107 77 108 78 /* 109 79 * dma_alloc_noncoherent() is #defined to return coherent memory,
+29
arch/tile/kernel/pci-dma.c
··· 583 583 EXPORT_SYMBOL(gx_legacy_pci_dma_map_ops); 584 584 EXPORT_SYMBOL(gx_hybrid_pci_dma_map_ops); 585 585 586 + int dma_set_mask(struct device *dev, u64 mask) 587 + { 588 + struct dma_map_ops *dma_ops = get_dma_ops(dev); 589 + 590 + /* 591 + * For PCI devices with 64-bit DMA addressing capability, promote 592 + * the dma_ops to hybrid, with the consistent memory DMA space limited 593 + * to 32-bit. For 32-bit capable devices, limit the streaming DMA 594 + * address range to max_direct_dma_addr. 595 + */ 596 + if (dma_ops == gx_pci_dma_map_ops || 597 + dma_ops == gx_hybrid_pci_dma_map_ops || 598 + dma_ops == gx_legacy_pci_dma_map_ops) { 599 + if (mask == DMA_BIT_MASK(64) && 600 + dma_ops == gx_legacy_pci_dma_map_ops) 601 + set_dma_ops(dev, gx_hybrid_pci_dma_map_ops); 602 + else if (mask > dev->archdata.max_direct_dma_addr) 603 + mask = dev->archdata.max_direct_dma_addr; 604 + } 605 + 606 + if (!dev->dma_mask || !dma_supported(dev, mask)) 607 + return -EIO; 608 + 609 + *dev->dma_mask = mask; 610 + 611 + return 0; 612 + } 613 + EXPORT_SYMBOL(dma_set_mask); 614 + 586 615 #ifdef CONFIG_ARCH_HAS_DMA_SET_COHERENT_MASK 587 616 int dma_set_coherent_mask(struct device *dev, u64 mask) 588 617 {
-1
arch/unicore32/Kconfig
··· 5 5 select ARCH_MIGHT_HAVE_PC_SERIO 6 6 select HAVE_MEMBLOCK 7 7 select HAVE_GENERIC_DMA_COHERENT 8 - select HAVE_DMA_ATTRS 9 8 select HAVE_KERNEL_GZIP 10 9 select HAVE_KERNEL_BZIP2 11 10 select GENERIC_ATOMIC64
-2
arch/unicore32/include/asm/dma-mapping.h
··· 28 28 return &swiotlb_dma_map_ops; 29 29 } 30 30 31 - #include <asm-generic/dma-mapping-common.h> 32 - 33 31 static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size) 34 32 { 35 33 if (dev && dev->dma_mask)
+1 -1
arch/x86/Kconfig
··· 31 31 select ARCH_HAS_PMEM_API if X86_64 32 32 select ARCH_HAS_MMIO_FLUSH 33 33 select ARCH_HAS_SG_CHAIN 34 + select ARCH_HAS_UBSAN_SANITIZE_ALL 34 35 select ARCH_HAVE_NMI_SAFE_CMPXCHG 35 36 select ARCH_MIGHT_HAVE_ACPI_PDC if ACPI 36 37 select ARCH_MIGHT_HAVE_PC_PARPORT ··· 100 99 select HAVE_DEBUG_KMEMLEAK 101 100 select HAVE_DEBUG_STACKOVERFLOW 102 101 select HAVE_DMA_API_DEBUG 103 - select HAVE_DMA_ATTRS 104 102 select HAVE_DMA_CONTIGUOUS 105 103 select HAVE_DYNAMIC_FTRACE 106 104 select HAVE_DYNAMIC_FTRACE_WITH_REGS
+1
arch/x86/boot/Makefile
··· 60 60 KBUILD_CFLAGS := $(USERINCLUDE) $(REALMODE_CFLAGS) -D_SETUP 61 61 KBUILD_AFLAGS := $(KBUILD_CFLAGS) -D__ASSEMBLY__ 62 62 GCOV_PROFILE := n 63 + UBSAN_SANITIZE := n 63 64 64 65 $(obj)/bzImage: asflags-y := $(SVGA_MODE) 65 66
+1
arch/x86/boot/compressed/Makefile
··· 33 33 
34 34 KBUILD_AFLAGS := $(KBUILD_CFLAGS) -D__ASSEMBLY__ 
35 35 GCOV_PROFILE := n 
36 + UBSAN_SANITIZE := n 
36 37 
37 38 LDFLAGS := -m elf_$(UTS_MACHINE) 
38 39 LDFLAGS_vmlinux := -T
+1
arch/x86/entry/vdso/Makefile
··· 4 4 5 5 KBUILD_CFLAGS += $(DISABLE_LTO) 6 6 KASAN_SANITIZE := n 7 + UBSAN_SANITIZE := n 7 8 8 9 VDSO64-$(CONFIG_X86_64) := y 9 10 VDSOX32-$(CONFIG_X86_X32_ABI) := y
-2
arch/x86/include/asm/dma-mapping.h
··· 46 46 #define HAVE_ARCH_DMA_SUPPORTED 1 47 47 extern int dma_supported(struct device *hwdev, u64 mask); 48 48 49 - #include <asm-generic/dma-mapping-common.h> 50 - 51 49 extern void *dma_generic_alloc_coherent(struct device *dev, size_t size, 52 50 dma_addr_t *dma_addr, gfp_t flag, 53 51 struct dma_attrs *attrs);
+2
arch/x86/kernel/machine_kexec_64.c
··· 385 385 return image->fops->cleanup(image->image_loader_data); 386 386 } 387 387 388 + #ifdef CONFIG_KEXEC_VERIFY_SIG 388 389 int arch_kexec_kernel_verify_sig(struct kimage *image, void *kernel, 389 390 unsigned long kernel_len) 390 391 { ··· 396 395 397 396 return image->fops->verify_sig(kernel, kernel_len); 398 397 } 398 + #endif 399 399 400 400 /* 401 401 * Apply purgatory relocations.
+1
arch/x86/realmode/rm/Makefile
··· 70 70 -I$(srctree)/arch/x86/boot 71 71 KBUILD_AFLAGS := $(KBUILD_CFLAGS) -D__ASSEMBLY__ 72 72 GCOV_PROFILE := n 73 + UBSAN_SANITIZE := n
-1
arch/xtensa/Kconfig
··· 15 15 select GENERIC_PCI_IOMAP 16 16 select GENERIC_SCHED_CLOCK 17 17 select HAVE_DMA_API_DEBUG 18 - select HAVE_DMA_ATTRS 19 18 select HAVE_FUNCTION_TRACER 20 19 select HAVE_FUTEX_CMPXCHG if !MMU 21 20 select HAVE_IRQ_TIME_ACCOUNTING
-4
arch/xtensa/include/asm/dma-mapping.h
··· 13 13 #include <asm/cache.h> 14 14 #include <asm/io.h> 15 15 16 - #include <asm-generic/dma-coherent.h> 17 - 18 16 #include <linux/mm.h> 19 17 #include <linux/scatterlist.h> 20 18 ··· 27 29 else 28 30 return &xtensa_dma_map_ops; 29 31 } 30 - 31 - #include <asm-generic/dma-mapping-common.h> 32 32 33 33 void dma_cache_sync(struct device *dev, void *vaddr, size_t size, 34 34 enum dma_data_direction direction);
-1
arch/xtensa/include/uapi/asm/mman.h
··· 86 86 #define MADV_SEQUENTIAL 2 /* expect sequential page references */ 87 87 #define MADV_WILLNEED 3 /* will need these pages */ 88 88 #define MADV_DONTNEED 4 /* don't need these pages */ 89 - #define MADV_FREE 5 /* free pages only if memory pressure */ 90 89 91 90 /* common parameters: try to keep these consistent across architectures */ 92 91 #define MADV_FREE 8 /* free pages only if memory pressure */
+5 -5
drivers/base/cpu.c
··· 200 200 201 201 struct cpu_attr { 202 202 struct device_attribute attr; 203 - const struct cpumask *const * const map; 203 + const struct cpumask *const map; 204 204 }; 205 205 206 206 static ssize_t show_cpus_attr(struct device *dev, ··· 209 209 { 210 210 struct cpu_attr *ca = container_of(attr, struct cpu_attr, attr); 211 211 212 - return cpumap_print_to_pagebuf(true, buf, *ca->map); 212 + return cpumap_print_to_pagebuf(true, buf, ca->map); 213 213 } 214 214 215 215 #define _CPU_ATTR(name, map) \ ··· 217 217 218 218 /* Keep in sync with cpu_subsys_attrs */ 219 219 static struct cpu_attr cpu_attrs[] = { 220 - _CPU_ATTR(online, &cpu_online_mask), 221 - _CPU_ATTR(possible, &cpu_possible_mask), 222 - _CPU_ATTR(present, &cpu_present_mask), 220 + _CPU_ATTR(online, &__cpu_online_mask), 221 + _CPU_ATTR(possible, &__cpu_possible_mask), 222 + _CPU_ATTR(present, &__cpu_present_mask), 223 223 }; 224 224 225 225 /*
+3 -4
drivers/base/dma-mapping.c
··· 12 12 #include <linux/gfp.h> 13 13 #include <linux/slab.h> 14 14 #include <linux/vmalloc.h> 15 - #include <asm-generic/dma-coherent.h> 16 15 17 16 /* 18 17 * Managed DMA API ··· 166 167 } 167 168 EXPORT_SYMBOL(dmam_free_noncoherent); 168 169 169 - #ifdef ARCH_HAS_DMA_DECLARE_COHERENT_MEMORY 170 + #ifdef CONFIG_HAVE_GENERIC_DMA_COHERENT 170 171 171 172 static void dmam_coherent_decl_release(struct device *dev, void *res) 172 173 { ··· 246 247 void *cpu_addr, dma_addr_t dma_addr, size_t size) 247 248 { 248 249 int ret = -ENXIO; 249 - #ifdef CONFIG_MMU 250 + #if defined(CONFIG_MMU) && !defined(CONFIG_ARCH_NO_COHERENT_DMA_MMAP) 250 251 unsigned long user_count = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT; 251 252 unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT; 252 253 unsigned long pfn = page_to_pfn(virt_to_page(cpu_addr)); ··· 263 264 user_count << PAGE_SHIFT, 264 265 vma->vm_page_prot); 265 266 } 266 - #endif /* CONFIG_MMU */ 267 + #endif /* CONFIG_MMU && !CONFIG_ARCH_NO_COHERENT_DMA_MMAP */ 267 268 268 269 return ret; 269 270 }
+3 -8
drivers/firmware/broadcom/bcm47xx_nvram.c
··· 56 56 static int nvram_find_and_copy(void __iomem *iobase, u32 lim) 57 57 { 58 58 struct nvram_header __iomem *header; 59 - int i; 60 59 u32 off; 61 - u32 *src, *dst; 62 60 u32 size; 63 61 64 62 if (nvram_len) { ··· 93 95 return -ENXIO; 94 96 95 97 found: 96 - src = (u32 *)header; 97 - dst = (u32 *)nvram_buf; 98 - for (i = 0; i < sizeof(struct nvram_header); i += 4) 99 - *dst++ = __raw_readl(src++); 98 + __ioread32_copy(nvram_buf, header, sizeof(*header) / 4); 100 99 header = (struct nvram_header *)nvram_buf; 101 100 nvram_len = header->len; 102 101 if (nvram_len > size) { ··· 106 111 nvram_len = NVRAM_SPACE - 1; 107 112 } 108 113 /* proceed reading data after header */ 109 - for (; i < nvram_len; i += 4) 110 - *dst++ = readl(src++); 114 + __ioread32_copy(nvram_buf + sizeof(*header), header + 1, 115 + DIV_ROUND_UP(nvram_len, 4)); 111 116 nvram_buf[NVRAM_SPACE - 1] = '\0'; 112 117 113 118 return 0;
+1
drivers/firmware/efi/libstub/Makefile
··· 22 22 23 23 GCOV_PROFILE := n 24 24 KASAN_SANITIZE := n 25 + UBSAN_SANITIZE := n 25 26 26 27 lib-y := efi-stub-helper.o 27 28
+2 -2
drivers/gpu/drm/Kconfig
··· 82 82 83 83 config DRM_GEM_CMA_HELPER 84 84 bool 85 - depends on DRM && HAVE_DMA_ATTRS 85 + depends on DRM 86 86 help 87 87 Choose this if you need the GEM CMA helper functions 88 88 89 89 config DRM_KMS_CMA_HELPER 90 90 bool 91 - depends on DRM && HAVE_DMA_ATTRS 91 + depends on DRM 92 92 select DRM_GEM_CMA_HELPER 93 93 select DRM_KMS_FB_HELPER 94 94 select FB_SYS_FILLRECT
+1 -1
drivers/gpu/drm/imx/Kconfig
··· 5 5 select VIDEOMODE_HELPERS 6 6 select DRM_GEM_CMA_HELPER 7 7 select DRM_KMS_CMA_HELPER 8 - depends on DRM && (ARCH_MXC || ARCH_MULTIPLATFORM) && HAVE_DMA_ATTRS 8 + depends on DRM && (ARCH_MXC || ARCH_MULTIPLATFORM) 9 9 depends on IMX_IPUV3_CORE 10 10 help 11 11 enable i.MX graphics support
+1 -1
drivers/gpu/drm/rcar-du/Kconfig
··· 1 1 config DRM_RCAR_DU 2 2 tristate "DRM Support for R-Car Display Unit" 3 - depends on DRM && ARM && HAVE_DMA_ATTRS && OF 3 + depends on DRM && ARM && OF 4 4 depends on ARCH_SHMOBILE || COMPILE_TEST 5 5 select DRM_KMS_HELPER 6 6 select DRM_KMS_CMA_HELPER
+1 -1
drivers/gpu/drm/shmobile/Kconfig
··· 1 1 config DRM_SHMOBILE 2 2 tristate "DRM Support for SH Mobile" 3 - depends on DRM && ARM && HAVE_DMA_ATTRS 3 + depends on DRM && ARM 4 4 depends on ARCH_SHMOBILE || COMPILE_TEST 5 5 depends on FB_SH_MOBILE_MERAM || !FB_SH_MOBILE_MERAM 6 6 select BACKLIGHT_CLASS_DEVICE
+1 -1
drivers/gpu/drm/sti/Kconfig
··· 1 1 config DRM_STI 2 2 tristate "DRM Support for STMicroelectronics SoC stiH41x Series" 3 - depends on DRM && (SOC_STIH415 || SOC_STIH416 || ARCH_MULTIPLATFORM) && HAVE_DMA_ATTRS 3 + depends on DRM && (SOC_STIH415 || SOC_STIH416 || ARCH_MULTIPLATFORM) 4 4 select RESET_CONTROLLER 5 5 select DRM_KMS_HELPER 6 6 select DRM_GEM_CMA_HELPER
+1 -1
drivers/gpu/drm/tilcdc/Kconfig
··· 1 1 config DRM_TILCDC 2 2 tristate "DRM Support for TI LCDC Display Controller" 3 - depends on DRM && OF && ARM && HAVE_DMA_ATTRS 3 + depends on DRM && OF && ARM 4 4 select DRM_KMS_HELPER 5 5 select DRM_KMS_FB_HELPER 6 6 select DRM_KMS_CMA_HELPER
+1 -1
drivers/gpu/drm/vc4/Kconfig
··· 1 1 config DRM_VC4 2 2 tristate "Broadcom VC4 Graphics" 3 3 depends on ARCH_BCM2835 || COMPILE_TEST 4 - depends on DRM && HAVE_DMA_ATTRS 4 + depends on DRM 5 5 select DRM_KMS_HELPER 6 6 select DRM_KMS_CMA_HELPER 7 7 select DRM_GEM_CMA_HELPER
+1 -3
drivers/iio/industrialio-sw-trigger.c
··· 167 167 configfs_register_default_group(&iio_configfs_subsys.su_group, 168 168 "triggers", 169 169 &iio_triggers_group_type); 170 - if (IS_ERR(iio_triggers_group)) 171 - return PTR_ERR(iio_triggers_group); 172 - return 0; 170 + return PTR_ERR_OR_ZERO(iio_triggers_group); 173 171 } 174 172 module_init(iio_sw_trigger_init); 175 173
-1
drivers/media/platform/Kconfig
··· 216 216 tristate "STMicroelectronics BDISP 2D blitter driver" 217 217 depends on VIDEO_DEV && VIDEO_V4L2 218 218 depends on ARCH_STI || COMPILE_TEST 219 - depends on HAVE_DMA_ATTRS 220 219 select VIDEOBUF2_DMA_CONTIG 221 220 select V4L2_MEM2MEM_DEV 222 221 help
+1 -1
drivers/memstick/core/ms_block.c
··· 1909 1909 lba = blk_rq_pos(msb->req); 1910 1910 1911 1911 sector_div(lba, msb->page_size / 512); 1912 - page = do_div(lba, msb->pages_in_block); 1912 + page = sector_div(lba, msb->pages_in_block); 1913 1913 1914 1914 if (rq_data_dir(msb->req) == READ) 1915 1915 error = msb_do_read_request(msb, lba, page, sg,
+1
drivers/misc/Kconfig
··· 95 95 config IBM_ASM 96 96 tristate "Device driver for IBM RSA service processor" 97 97 depends on X86 && PCI && INPUT 98 + depends on SERIAL_8250 || SERIAL_8250=n 98 99 ---help--- 99 100 This option enables device driver support for in-band access to the 100 101 IBM RSA (Condor) service processor in eServer xSeries systems.
+31 -26
drivers/parisc/ccio-dma.c
··· 786 786 return CCIO_IOVA(iovp, offset); 787 787 } 788 788 789 + 790 + static dma_addr_t 791 + ccio_map_page(struct device *dev, struct page *page, unsigned long offset, 792 + size_t size, enum dma_data_direction direction, 793 + struct dma_attrs *attrs) 794 + { 795 + return ccio_map_single(dev, page_address(page) + offset, size, 796 + direction); 797 + } 798 + 799 + 789 800 /** 790 - * ccio_unmap_single - Unmap an address range from the IOMMU. 801 + * ccio_unmap_page - Unmap an address range from the IOMMU. 791 802 * @dev: The PCI device. 792 803 * @addr: The start address of the DMA region. 793 804 * @size: The length of the DMA region. 794 805 * @direction: The direction of the DMA transaction (to/from device). 795 - * 796 - * This function implements the pci_unmap_single function. 797 806 */ 798 807 static void 799 - ccio_unmap_single(struct device *dev, dma_addr_t iova, size_t size, 800 - enum dma_data_direction direction) 808 + ccio_unmap_page(struct device *dev, dma_addr_t iova, size_t size, 809 + enum dma_data_direction direction, struct dma_attrs *attrs) 801 810 { 802 811 struct ioc *ioc; 803 812 unsigned long flags; ··· 835 826 } 836 827 837 828 /** 838 - * ccio_alloc_consistent - Allocate a consistent DMA mapping. 829 + * ccio_alloc - Allocate a consistent DMA mapping. 839 830 * @dev: The PCI device. 840 831 * @size: The length of the DMA region. 841 832 * @dma_handle: The DMA address handed back to the device (not the cpu). ··· 843 834 * This function implements the pci_alloc_consistent function. 844 835 */ 845 836 static void * 846 - ccio_alloc_consistent(struct device *dev, size_t size, dma_addr_t *dma_handle, gfp_t flag) 837 + ccio_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle, gfp_t flag, 838 + struct dma_attrs *attrs) 847 839 { 848 840 void *ret; 849 841 #if 0 ··· 868 858 } 869 859 870 860 /** 871 - * ccio_free_consistent - Free a consistent DMA mapping. 861 + * ccio_free - Free a consistent DMA mapping. 
872 862 * @dev: The PCI device. 873 863 * @size: The length of the DMA region. 874 864 * @cpu_addr: The cpu address returned from the ccio_alloc_consistent. ··· 877 867 * This function implements the pci_free_consistent function. 878 868 */ 879 869 static void 880 - ccio_free_consistent(struct device *dev, size_t size, void *cpu_addr, 881 - dma_addr_t dma_handle) 870 + ccio_free(struct device *dev, size_t size, void *cpu_addr, 871 + dma_addr_t dma_handle, struct dma_attrs *attrs) 882 872 { 883 - ccio_unmap_single(dev, dma_handle, size, 0); 873 + ccio_unmap_page(dev, dma_handle, size, 0, NULL); 884 874 free_pages((unsigned long)cpu_addr, get_order(size)); 885 875 } 886 876 ··· 907 897 */ 908 898 static int 909 899 ccio_map_sg(struct device *dev, struct scatterlist *sglist, int nents, 910 - enum dma_data_direction direction) 900 + enum dma_data_direction direction, struct dma_attrs *attrs) 911 901 { 912 902 struct ioc *ioc; 913 903 int coalesced, filled = 0; ··· 984 974 */ 985 975 static void 986 976 ccio_unmap_sg(struct device *dev, struct scatterlist *sglist, int nents, 987 - enum dma_data_direction direction) 977 + enum dma_data_direction direction, struct dma_attrs *attrs) 988 978 { 989 979 struct ioc *ioc; 990 980 ··· 1003 993 #ifdef CCIO_COLLECT_STATS 1004 994 ioc->usg_pages += sg_dma_len(sglist) >> PAGE_SHIFT; 1005 995 #endif 1006 - ccio_unmap_single(dev, sg_dma_address(sglist), 1007 - sg_dma_len(sglist), direction); 996 + ccio_unmap_page(dev, sg_dma_address(sglist), 997 + sg_dma_len(sglist), direction, NULL); 1008 998 ++sglist; 1009 999 } 1010 1000 1011 1001 DBG_RUN_SG("%s() DONE (nents %d)\n", __func__, nents); 1012 1002 } 1013 1003 1014 - static struct hppa_dma_ops ccio_ops = { 1004 + static struct dma_map_ops ccio_ops = { 1015 1005 .dma_supported = ccio_dma_supported, 1016 - .alloc_consistent = ccio_alloc_consistent, 1017 - .alloc_noncoherent = ccio_alloc_consistent, 1018 - .free_consistent = ccio_free_consistent, 1019 - .map_single = ccio_map_single, 1020 
- .unmap_single = ccio_unmap_single, 1006 + .alloc = ccio_alloc, 1007 + .free = ccio_free, 1008 + .map_page = ccio_map_page, 1009 + .unmap_page = ccio_unmap_page, 1021 1010 .map_sg = ccio_map_sg, 1022 1011 .unmap_sg = ccio_unmap_sg, 1023 - .dma_sync_single_for_cpu = NULL, /* NOP for U2/Uturn */ 1024 - .dma_sync_single_for_device = NULL, /* NOP for U2/Uturn */ 1025 - .dma_sync_sg_for_cpu = NULL, /* ditto */ 1026 - .dma_sync_sg_for_device = NULL, /* ditto */ 1027 1012 }; 1028 1013 1029 1014 #ifdef CONFIG_PROC_FS ··· 1067 1062 ioc->msingle_calls, ioc->msingle_pages, 1068 1063 (int)((ioc->msingle_pages * 1000)/ioc->msingle_calls)); 1069 1064 1070 - /* KLUGE - unmap_sg calls unmap_single for each mapped page */ 1065 + /* KLUGE - unmap_sg calls unmap_page for each mapped page */ 1071 1066 min = ioc->usingle_calls - ioc->usg_calls; 1072 1067 max = ioc->usingle_pages - ioc->usg_pages; 1073 1068 seq_printf(m, "pci_unmap_single: %8ld calls %8ld pages (avg %d/1000)\n",
+29 -23
drivers/parisc/sba_iommu.c
··· 780 780 } 781 781 782 782 783 + static dma_addr_t 784 + sba_map_page(struct device *dev, struct page *page, unsigned long offset, 785 + size_t size, enum dma_data_direction direction, 786 + struct dma_attrs *attrs) 787 + { 788 + return sba_map_single(dev, page_address(page) + offset, size, 789 + direction); 790 + } 791 + 792 + 783 793 /** 784 - * sba_unmap_single - unmap one IOVA and free resources 794 + * sba_unmap_page - unmap one IOVA and free resources 785 795 * @dev: instance of PCI owned by the driver that's asking. 786 796 * @iova: IOVA of driver buffer previously mapped. 787 797 * @size: number of bytes mapped in driver buffer. ··· 800 790 * See Documentation/DMA-API-HOWTO.txt 801 791 */ 802 792 static void 803 - sba_unmap_single(struct device *dev, dma_addr_t iova, size_t size, 804 - enum dma_data_direction direction) 793 + sba_unmap_page(struct device *dev, dma_addr_t iova, size_t size, 794 + enum dma_data_direction direction, struct dma_attrs *attrs) 805 795 { 806 796 struct ioc *ioc; 807 797 #if DELAYED_RESOURCE_CNT > 0 ··· 868 858 869 859 870 860 /** 871 - * sba_alloc_consistent - allocate/map shared mem for DMA 861 + * sba_alloc - allocate/map shared mem for DMA 872 862 * @hwdev: instance of PCI owned by the driver that's asking. 873 863 * @size: number of bytes mapped in driver buffer. 874 864 * @dma_handle: IOVA of new buffer. 875 865 * 876 866 * See Documentation/DMA-API-HOWTO.txt 877 867 */ 878 - static void *sba_alloc_consistent(struct device *hwdev, size_t size, 879 - dma_addr_t *dma_handle, gfp_t gfp) 868 + static void *sba_alloc(struct device *hwdev, size_t size, dma_addr_t *dma_handle, 869 + gfp_t gfp, struct dma_attrs *attrs) 880 870 { 881 871 void *ret; 882 872 ··· 898 888 899 889 900 890 /** 901 - * sba_free_consistent - free/unmap shared mem for DMA 891 + * sba_free - free/unmap shared mem for DMA 902 892 * @hwdev: instance of PCI owned by the driver that's asking. 903 893 * @size: number of bytes mapped in driver buffer. 
904 894 * @vaddr: virtual address IOVA of "consistent" buffer. ··· 907 897 * See Documentation/DMA-API-HOWTO.txt 908 898 */ 909 899 static void 910 - sba_free_consistent(struct device *hwdev, size_t size, void *vaddr, 911 - dma_addr_t dma_handle) 900 + sba_free(struct device *hwdev, size_t size, void *vaddr, 901 + dma_addr_t dma_handle, struct dma_attrs *attrs) 912 902 { 913 - sba_unmap_single(hwdev, dma_handle, size, 0); 903 + sba_unmap_page(hwdev, dma_handle, size, 0, NULL); 914 904 free_pages((unsigned long) vaddr, get_order(size)); 915 905 } 916 906 ··· 943 933 */ 944 934 static int 945 935 sba_map_sg(struct device *dev, struct scatterlist *sglist, int nents, 946 - enum dma_data_direction direction) 936 + enum dma_data_direction direction, struct dma_attrs *attrs) 947 937 { 948 938 struct ioc *ioc; 949 939 int coalesced, filled = 0; ··· 1026 1016 */ 1027 1017 static void 1028 1018 sba_unmap_sg(struct device *dev, struct scatterlist *sglist, int nents, 1029 - enum dma_data_direction direction) 1019 + enum dma_data_direction direction, struct dma_attrs *attrs) 1030 1020 { 1031 1021 struct ioc *ioc; 1032 1022 #ifdef ASSERT_PDIR_SANITY ··· 1050 1040 1051 1041 while (sg_dma_len(sglist) && nents--) { 1052 1042 1053 - sba_unmap_single(dev, sg_dma_address(sglist), sg_dma_len(sglist), direction); 1043 + sba_unmap_page(dev, sg_dma_address(sglist), sg_dma_len(sglist), 1044 + direction, NULL); 1054 1045 #ifdef SBA_COLLECT_STATS 1055 1046 ioc->usg_pages += ((sg_dma_address(sglist) & ~IOVP_MASK) + sg_dma_len(sglist) + IOVP_SIZE - 1) >> PAGE_SHIFT; 1056 1047 ioc->usingle_calls--; /* kluge since call is unmap_sg() */ ··· 1069 1058 1070 1059 } 1071 1060 1072 - static struct hppa_dma_ops sba_ops = { 1061 + static struct dma_map_ops sba_ops = { 1073 1062 .dma_supported = sba_dma_supported, 1074 - .alloc_consistent = sba_alloc_consistent, 1075 - .alloc_noncoherent = sba_alloc_consistent, 1076 - .free_consistent = sba_free_consistent, 1077 - .map_single = sba_map_single, 1078 - 
.unmap_single = sba_unmap_single, 1063 + .alloc = sba_alloc, 1064 + .free = sba_free, 1065 + .map_page = sba_map_page, 1066 + .unmap_page = sba_unmap_page, 1079 1067 .map_sg = sba_map_sg, 1080 1068 .unmap_sg = sba_unmap_sg, 1081 - .dma_sync_single_for_cpu = NULL, 1082 - .dma_sync_single_for_device = NULL, 1083 - .dma_sync_sg_for_cpu = NULL, 1084 - .dma_sync_sg_for_device = NULL, 1085 1069 }; 1086 1070 1087 1071
+2 -4
drivers/rapidio/rio-sysfs.c
··· 125 125 struct bin_attribute *bin_attr, 126 126 char *buf, loff_t off, size_t count) 127 127 { 128 - struct rio_dev *dev = 129 - to_rio_dev(container_of(kobj, struct device, kobj)); 128 + struct rio_dev *dev = to_rio_dev(kobj_to_dev(kobj)); 130 129 unsigned int size = 0x100; 131 130 loff_t init_off = off; 132 131 u8 *data = (u8 *) buf; ··· 196 197 struct bin_attribute *bin_attr, 197 198 char *buf, loff_t off, size_t count) 198 199 { 199 - struct rio_dev *dev = 200 - to_rio_dev(container_of(kobj, struct device, kobj)); 200 + struct rio_dev *dev = to_rio_dev(kobj_to_dev(kobj)); 201 201 unsigned int size = count; 202 202 loff_t init_off = off; 203 203 u8 *data = (u8 *) buf;
+4 -9
drivers/soc/qcom/smd.c
··· 434 434 /* 435 435 * Copy count bytes of data using 32bit accesses, if that is required. 436 436 */ 437 - static void smd_copy_from_fifo(void *_dst, 438 - const void __iomem *_src, 437 + static void smd_copy_from_fifo(void *dst, 438 + const void __iomem *src, 439 439 size_t count, 440 440 bool word_aligned) 441 441 { 442 - u32 *dst = (u32 *)_dst; 443 - u32 *src = (u32 *)_src; 444 - 445 442 if (word_aligned) { 446 - count /= sizeof(u32); 447 - while (count--) 448 - *dst++ = __raw_readl(src++); 443 + __ioread32_copy(dst, src, count / sizeof(u32)); 449 444 } else { 450 - memcpy_fromio(_dst, _src, count); 445 + memcpy_fromio(dst, src, count); 451 446 } 452 447 } 453 448
+14 -14
fs/adfs/adfs.h
··· 44 44 */ 45 45 struct adfs_sb_info { 46 46 union { struct { 47 - struct adfs_discmap *s_map; /* bh list containing map */ 48 - const struct adfs_dir_ops *s_dir; /* directory operations */ 47 + struct adfs_discmap *s_map; /* bh list containing map */ 48 + const struct adfs_dir_ops *s_dir; /* directory operations */ 49 49 }; 50 - struct rcu_head rcu; /* used only at shutdown time */ 50 + struct rcu_head rcu; /* used only at shutdown time */ 51 51 }; 52 - kuid_t s_uid; /* owner uid */ 53 - kgid_t s_gid; /* owner gid */ 54 - umode_t s_owner_mask; /* ADFS owner perm -> unix perm */ 55 - umode_t s_other_mask; /* ADFS other perm -> unix perm */ 52 + kuid_t s_uid; /* owner uid */ 53 + kgid_t s_gid; /* owner gid */ 54 + umode_t s_owner_mask; /* ADFS owner perm -> unix perm */ 55 + umode_t s_other_mask; /* ADFS other perm -> unix perm */ 56 56 int s_ftsuffix; /* ,xyz hex filetype suffix option */ 57 57 58 - __u32 s_ids_per_zone; /* max. no ids in one zone */ 59 - __u32 s_idlen; /* length of ID in map */ 60 - __u32 s_map_size; /* sector size of a map */ 61 - unsigned long s_size; /* total size (in blocks) of this fs */ 62 - signed int s_map2blk; /* shift left by this for map->sector */ 63 - unsigned int s_log2sharesize;/* log2 share size */ 64 - __le32 s_version; /* disc format version */ 58 + __u32 s_ids_per_zone; /* max. no ids in one zone */ 59 + __u32 s_idlen; /* length of ID in map */ 60 + __u32 s_map_size; /* sector size of a map */ 61 + unsigned long s_size; /* total size (in blocks) of this fs */ 62 + signed int s_map2blk; /* shift left by this for map->sector*/ 63 + unsigned int s_log2sharesize;/* log2 share size */ 64 + __le32 s_version; /* disc format version */ 65 65 unsigned int s_namelen; /* maximum number of characters in name */ 66 66 }; 67 67
+20
fs/coredump.c
··· 118 118 ret = cn_vprintf(cn, fmt, arg); 119 119 va_end(arg); 120 120 121 + if (ret == 0) { 122 + /* 123 + * Ensure that this coredump name component can't cause the 124 + * resulting corefile path to consist of a ".." or ".". 125 + */ 126 + if ((cn->used - cur == 1 && cn->corename[cur] == '.') || 127 + (cn->used - cur == 2 && cn->corename[cur] == '.' 128 + && cn->corename[cur+1] == '.')) 129 + cn->corename[cur] = '!'; 130 + 131 + /* 132 + * Empty names are fishy and could be used to create a "//" in a 133 + * corefile name, causing the coredump to happen one directory 134 + * level too high. Enforce that all components of the core 135 + * pattern are at least one character long. 136 + */ 137 + if (cn->used == cur) 138 + ret = cn_printf(cn, "!"); 139 + } 140 + 121 141 for (; cur < cn->used; ++cur) { 122 142 if (cn->corename[cur] == '/') 123 143 cn->corename[cur] = '!';
+21 -3
fs/eventpoll.c
··· 92 92 */ 
93 93 
94 94 /* Epoll private bits inside the event mask */ 
95 - #define EP_PRIVATE_BITS (EPOLLWAKEUP | EPOLLONESHOT | EPOLLET) 
95 + #define EP_PRIVATE_BITS (EPOLLWAKEUP | EPOLLONESHOT | EPOLLET | EPOLLEXCLUSIVE) 
96 96 
97 97 /* Maximum number of nesting allowed inside epoll sets */ 
98 98 #define EP_MAX_NESTS 4 
··· 1002 1002 unsigned long flags; 
1003 1003 struct epitem *epi = ep_item_from_wait(wait); 
1004 1004 struct eventpoll *ep = epi->ep; 
1005 + int ewake = 0; 
1005 1006 
1006 1007 if ((unsigned long)key & POLLFREE) { 
1007 1008 ep_pwq_from_wait(wait)->whead = NULL; 
··· 1067 1066 * Wake up ( if active ) both the eventpoll wait list and the ->poll() 
1068 1067 * wait list. 
1069 1068 */ 
1070 - if (waitqueue_active(&ep->wq)) 
1069 + if (waitqueue_active(&ep->wq)) { 
1070 + ewake = 1; 
1071 1071 wake_up_locked(&ep->wq); 
1072 + } 
1072 1073 if (waitqueue_active(&ep->poll_wait)) 
1073 1074 pwake++; 
1074 1075 
··· 1080 1077 /* We have to call this outside the lock */ 
1081 1078 if (pwake) 
1082 1079 ep_poll_safewake(&ep->poll_wait); 
1080 + 
1081 + if (epi->event.events & EPOLLEXCLUSIVE) 
1082 + return ewake; 
1083 1083 
1084 1084 return 1; 
1085 1085 } 
··· 1101 1095 init_waitqueue_func_entry(&pwq->wait, ep_poll_callback); 
1102 1096 pwq->whead = whead; 
1103 1097 pwq->base = epi; 
1104 - add_wait_queue(whead, &pwq->wait); 
1098 + if (epi->event.events & EPOLLEXCLUSIVE) 
1099 + add_wait_queue_exclusive(whead, &pwq->wait); 
1100 + else 
1101 + add_wait_queue(whead, &pwq->wait); 
1105 1102 list_add_tail(&pwq->llink, &epi->pwqlist); 
1106 1103 epi->nwait++; 
1107 1104 } else { 
··· 1868 1859 */ 
1869 1860 error = -EINVAL; 
1870 1861 if (f.file == tf.file || !is_file_epoll(f.file)) 
1862 + goto error_tgt_fput; 
1863 + 
1864 + /* 
1865 + * epoll adds to the wakeup queue at EPOLL_CTL_ADD time only, 
1866 + * so EPOLLEXCLUSIVE is not allowed for an EPOLL_CTL_MOD operation. 
1867 + * Also, we do not currently support nested exclusive wakeups. 
1868 + */ 1869 + if ((epds.events & EPOLLEXCLUSIVE) && (op == EPOLL_CTL_MOD || 1870 + (op == EPOLL_CTL_ADD && is_file_epoll(tf.file)))) 1871 1871 goto error_tgt_fput; 1872 1872 1873 1873 /*
+54 -25
fs/fat/cache.c
··· 301 301 return dclus; 302 302 } 303 303 304 - int fat_bmap(struct inode *inode, sector_t sector, sector_t *phys, 305 - unsigned long *mapped_blocks, int create) 304 + int fat_get_mapped_cluster(struct inode *inode, sector_t sector, 305 + sector_t last_block, 306 + unsigned long *mapped_blocks, sector_t *bmap) 306 307 { 307 308 struct super_block *sb = inode->i_sb; 308 309 struct msdos_sb_info *sbi = MSDOS_SB(sb); 310 + int cluster, offset; 311 + 312 + cluster = sector >> (sbi->cluster_bits - sb->s_blocksize_bits); 313 + offset = sector & (sbi->sec_per_clus - 1); 314 + cluster = fat_bmap_cluster(inode, cluster); 315 + if (cluster < 0) 316 + return cluster; 317 + else if (cluster) { 318 + *bmap = fat_clus_to_blknr(sbi, cluster) + offset; 319 + *mapped_blocks = sbi->sec_per_clus - offset; 320 + if (*mapped_blocks > last_block - sector) 321 + *mapped_blocks = last_block - sector; 322 + } 323 + 324 + return 0; 325 + } 326 + 327 + static int is_exceed_eof(struct inode *inode, sector_t sector, 328 + sector_t *last_block, int create) 329 + { 330 + struct super_block *sb = inode->i_sb; 309 331 const unsigned long blocksize = sb->s_blocksize; 310 332 const unsigned char blocksize_bits = sb->s_blocksize_bits; 333 + 334 + *last_block = (i_size_read(inode) + (blocksize - 1)) >> blocksize_bits; 335 + if (sector >= *last_block) { 336 + if (!create) 337 + return 1; 338 + 339 + /* 340 + * ->mmu_private can access on only allocation path. 
341 + * (caller must hold ->i_mutex) 342 + */ 343 + *last_block = (MSDOS_I(inode)->mmu_private + (blocksize - 1)) 344 + >> blocksize_bits; 345 + if (sector >= *last_block) 346 + return 1; 347 + } 348 + 349 + return 0; 350 + } 351 + 352 + int fat_bmap(struct inode *inode, sector_t sector, sector_t *phys, 353 + unsigned long *mapped_blocks, int create, bool from_bmap) 354 + { 355 + struct msdos_sb_info *sbi = MSDOS_SB(inode->i_sb); 311 356 sector_t last_block; 312 - int cluster, offset; 313 357 314 358 *phys = 0; 315 359 *mapped_blocks = 0; ··· 365 321 return 0; 366 322 } 367 323 368 - last_block = (i_size_read(inode) + (blocksize - 1)) >> blocksize_bits; 369 - if (sector >= last_block) { 370 - if (!create) 324 + if (!from_bmap) { 325 + if (is_exceed_eof(inode, sector, &last_block, create)) 371 326 return 0; 372 - 373 - /* 374 - * ->mmu_private can access on only allocation path. 375 - * (caller must hold ->i_mutex) 376 - */ 377 - last_block = (MSDOS_I(inode)->mmu_private + (blocksize - 1)) 378 - >> blocksize_bits; 327 + } else { 328 + last_block = inode->i_blocks >> 329 + (inode->i_sb->s_blocksize_bits - 9); 379 330 if (sector >= last_block) 380 331 return 0; 381 332 } 382 333 383 - cluster = sector >> (sbi->cluster_bits - sb->s_blocksize_bits); 384 - offset = sector & (sbi->sec_per_clus - 1); 385 - cluster = fat_bmap_cluster(inode, cluster); 386 - if (cluster < 0) 387 - return cluster; 388 - else if (cluster) { 389 - *phys = fat_clus_to_blknr(sbi, cluster) + offset; 390 - *mapped_blocks = sbi->sec_per_clus - offset; 391 - if (*mapped_blocks > last_block - sector) 392 - *mapped_blocks = last_block - sector; 393 - } 394 - return 0; 334 + return fat_get_mapped_cluster(inode, sector, last_block, mapped_blocks, 335 + phys); 395 336 }
+1 -1
fs/fat/dir.c
··· 91 91 92 92 *bh = NULL; 93 93 iblock = *pos >> sb->s_blocksize_bits; 94 - err = fat_bmap(dir, iblock, &phys, &mapped_blocks, 0); 94 + err = fat_bmap(dir, iblock, &phys, &mapped_blocks, 0, false); 95 95 if (err || !phys) 96 96 return -1; /* beyond EOF or error */ 97 97
+6 -2
fs/fat/fat.h
··· 87 87 unsigned int vol_id; /*volume ID*/ 88 88 89 89 int fatent_shift; 90 - struct fatent_operations *fatent_ops; 90 + const struct fatent_operations *fatent_ops; 91 91 struct inode *fat_inode; 92 92 struct inode *fsinfo_inode; 93 93 ··· 285 285 extern void fat_cache_inval_inode(struct inode *inode); 286 286 extern int fat_get_cluster(struct inode *inode, int cluster, 287 287 int *fclus, int *dclus); 288 + extern int fat_get_mapped_cluster(struct inode *inode, sector_t sector, 289 + sector_t last_block, 290 + unsigned long *mapped_blocks, sector_t *bmap); 288 291 extern int fat_bmap(struct inode *inode, sector_t sector, sector_t *phys, 289 - unsigned long *mapped_blocks, int create); 292 + unsigned long *mapped_blocks, int create, bool from_bmap); 290 293 291 294 /* fat/dir.c */ 292 295 extern const struct file_operations fat_dir_operations; ··· 387 384 { 388 385 return hash_32(logstart, FAT_HASH_BITS); 389 386 } 387 + extern int fat_add_cluster(struct inode *inode); 390 388 391 389 /* fat/misc.c */ 392 390 extern __printf(3, 4) __cold
+12 -12
fs/fat/fatent.c
··· 99 99 static int fat_ent_bread(struct super_block *sb, struct fat_entry *fatent, 100 100 int offset, sector_t blocknr) 101 101 { 102 - struct fatent_operations *ops = MSDOS_SB(sb)->fatent_ops; 102 + const struct fatent_operations *ops = MSDOS_SB(sb)->fatent_ops; 103 103 104 104 WARN_ON(blocknr < MSDOS_SB(sb)->fat_start); 105 105 fatent->fat_inode = MSDOS_SB(sb)->fat_inode; ··· 246 246 return 0; 247 247 } 248 248 249 - static struct fatent_operations fat12_ops = { 249 + static const struct fatent_operations fat12_ops = { 250 250 .ent_blocknr = fat12_ent_blocknr, 251 251 .ent_set_ptr = fat12_ent_set_ptr, 252 252 .ent_bread = fat12_ent_bread, ··· 255 255 .ent_next = fat12_ent_next, 256 256 }; 257 257 258 - static struct fatent_operations fat16_ops = { 258 + static const struct fatent_operations fat16_ops = { 259 259 .ent_blocknr = fat_ent_blocknr, 260 260 .ent_set_ptr = fat16_ent_set_ptr, 261 261 .ent_bread = fat_ent_bread, ··· 264 264 .ent_next = fat16_ent_next, 265 265 }; 266 266 267 - static struct fatent_operations fat32_ops = { 267 + static const struct fatent_operations fat32_ops = { 268 268 .ent_blocknr = fat_ent_blocknr, 269 269 .ent_set_ptr = fat32_ent_set_ptr, 270 270 .ent_bread = fat_ent_bread, ··· 320 320 int offset, sector_t blocknr) 321 321 { 322 322 struct msdos_sb_info *sbi = MSDOS_SB(sb); 323 - struct fatent_operations *ops = sbi->fatent_ops; 323 + const struct fatent_operations *ops = sbi->fatent_ops; 324 324 struct buffer_head **bhs = fatent->bhs; 325 325 326 326 /* Is this fatent's blocks including this entry? 
*/ ··· 349 349 { 350 350 struct super_block *sb = inode->i_sb; 351 351 struct msdos_sb_info *sbi = MSDOS_SB(inode->i_sb); 352 - struct fatent_operations *ops = sbi->fatent_ops; 352 + const struct fatent_operations *ops = sbi->fatent_ops; 353 353 int err, offset; 354 354 sector_t blocknr; 355 355 ··· 407 407 int new, int wait) 408 408 { 409 409 struct super_block *sb = inode->i_sb; 410 - struct fatent_operations *ops = MSDOS_SB(sb)->fatent_ops; 410 + const struct fatent_operations *ops = MSDOS_SB(sb)->fatent_ops; 411 411 int err; 412 412 413 413 ops->ent_put(fatent, new); ··· 432 432 static inline int fat_ent_read_block(struct super_block *sb, 433 433 struct fat_entry *fatent) 434 434 { 435 - struct fatent_operations *ops = MSDOS_SB(sb)->fatent_ops; 435 + const struct fatent_operations *ops = MSDOS_SB(sb)->fatent_ops; 436 436 sector_t blocknr; 437 437 int offset; 438 438 ··· 463 463 { 464 464 struct super_block *sb = inode->i_sb; 465 465 struct msdos_sb_info *sbi = MSDOS_SB(sb); 466 - struct fatent_operations *ops = sbi->fatent_ops; 466 + const struct fatent_operations *ops = sbi->fatent_ops; 467 467 struct fat_entry fatent, prev_ent; 468 468 struct buffer_head *bhs[MAX_BUF_PER_PAGE]; 469 469 int i, count, err, nr_bhs, idx_clus; ··· 551 551 { 552 552 struct super_block *sb = inode->i_sb; 553 553 struct msdos_sb_info *sbi = MSDOS_SB(sb); 554 - struct fatent_operations *ops = sbi->fatent_ops; 554 + const struct fatent_operations *ops = sbi->fatent_ops; 555 555 struct fat_entry fatent; 556 556 struct buffer_head *bhs[MAX_BUF_PER_PAGE]; 557 557 int i, err, nr_bhs; ··· 636 636 static void fat_ent_reada(struct super_block *sb, struct fat_entry *fatent, 637 637 unsigned long reada_blocks) 638 638 { 639 - struct fatent_operations *ops = MSDOS_SB(sb)->fatent_ops; 639 + const struct fatent_operations *ops = MSDOS_SB(sb)->fatent_ops; 640 640 sector_t blocknr; 641 641 int i, offset; 642 642 ··· 649 649 int fat_count_free_clusters(struct super_block *sb) 650 650 { 651 651 struct 
msdos_sb_info *sbi = MSDOS_SB(sb); 652 - struct fatent_operations *ops = sbi->fatent_ops; 652 + const struct fatent_operations *ops = sbi->fatent_ops; 653 653 struct fat_entry fatent; 654 654 unsigned long reada_blocks, reada_mask, cur_block; 655 655 int err = 0, free;
+61
fs/fat/file.c
··· 14 14 #include <linux/backing-dev.h> 15 15 #include <linux/fsnotify.h> 16 16 #include <linux/security.h> 17 + #include <linux/falloc.h> 17 18 #include "fat.h" 19 + 20 + static long fat_fallocate(struct file *file, int mode, 21 + loff_t offset, loff_t len); 18 22 19 23 static int fat_ioctl_get_attributes(struct inode *inode, u32 __user *user_attr) 20 24 { ··· 181 177 #endif 182 178 .fsync = fat_file_fsync, 183 179 .splice_read = generic_file_splice_read, 180 + .fallocate = fat_fallocate, 184 181 }; 185 182 186 183 static int fat_cont_expand(struct inode *inode, loff_t size) ··· 217 212 } 218 213 } 219 214 out: 215 + return err; 216 + } 217 + 218 + /* 219 + * Preallocate space for a file. This implements fat's fallocate file 220 + * operation, which gets called from sys_fallocate system call. User 221 + * space requests len bytes at offset. If FALLOC_FL_KEEP_SIZE is set 222 + * we just allocate clusters without zeroing them out. Otherwise we 223 + * allocate and zero out clusters via an expanding truncate. 224 + */ 225 + static long fat_fallocate(struct file *file, int mode, 226 + loff_t offset, loff_t len) 227 + { 228 + int nr_cluster; /* Number of clusters to be allocated */ 229 + loff_t mm_bytes; /* Number of bytes to be allocated for file */ 230 + loff_t ondisksize; /* block aligned on-disk size in bytes*/ 231 + struct inode *inode = file->f_mapping->host; 232 + struct super_block *sb = inode->i_sb; 233 + struct msdos_sb_info *sbi = MSDOS_SB(sb); 234 + int err = 0; 235 + 236 + /* No support for hole punch or other fallocate flags. 
*/ 237 + if (mode & ~FALLOC_FL_KEEP_SIZE) 238 + return -EOPNOTSUPP; 239 + 240 + /* No support for dir */ 241 + if (!S_ISREG(inode->i_mode)) 242 + return -EOPNOTSUPP; 243 + 244 + mutex_lock(&inode->i_mutex); 245 + if (mode & FALLOC_FL_KEEP_SIZE) { 246 + ondisksize = inode->i_blocks << 9; 247 + if ((offset + len) <= ondisksize) 248 + goto error; 249 + 250 + /* First compute the number of clusters to be allocated */ 251 + mm_bytes = offset + len - ondisksize; 252 + nr_cluster = (mm_bytes + (sbi->cluster_size - 1)) >> 253 + sbi->cluster_bits; 254 + 255 + /* Start the allocation.We are not zeroing out the clusters */ 256 + while (nr_cluster-- > 0) { 257 + err = fat_add_cluster(inode); 258 + if (err) 259 + goto error; 260 + } 261 + } else { 262 + if ((offset + len) <= i_size_read(inode)) 263 + goto error; 264 + 265 + /* This is just an expanding truncate */ 266 + err = fat_cont_expand(inode, (offset + len)); 267 + } 268 + 269 + error: 270 + mutex_unlock(&inode->i_mutex); 220 271 return err; 221 272 } 222 273
+96 -8
fs/fat/inode.c
··· 93 93 }, 94 94 }; 95 95 96 - static int fat_add_cluster(struct inode *inode) 96 + int fat_add_cluster(struct inode *inode) 97 97 { 98 98 int err, cluster; 99 99 ··· 115 115 struct super_block *sb = inode->i_sb; 116 116 struct msdos_sb_info *sbi = MSDOS_SB(sb); 117 117 unsigned long mapped_blocks; 118 - sector_t phys; 118 + sector_t phys, last_block; 119 119 int err, offset; 120 120 121 - err = fat_bmap(inode, iblock, &phys, &mapped_blocks, create); 121 + err = fat_bmap(inode, iblock, &phys, &mapped_blocks, create, false); 122 122 if (err) 123 123 return err; 124 124 if (phys) { ··· 135 135 return -EIO; 136 136 } 137 137 138 + last_block = inode->i_blocks >> (sb->s_blocksize_bits - 9); 138 139 offset = (unsigned long)iblock & (sbi->sec_per_clus - 1); 139 - if (!offset) { 140 + /* 141 + * allocate a cluster according to the following. 142 + * 1) no more available blocks 143 + * 2) not part of fallocate region 144 + */ 145 + if (!offset && !(iblock < last_block)) { 140 146 /* TODO: multiple cluster allocation would be desirable. 
*/ 141 147 err = fat_add_cluster(inode); 142 148 if (err) ··· 154 148 *max_blocks = min(mapped_blocks, *max_blocks); 155 149 MSDOS_I(inode)->mmu_private += *max_blocks << sb->s_blocksize_bits; 156 150 157 - err = fat_bmap(inode, iblock, &phys, &mapped_blocks, create); 151 + err = fat_bmap(inode, iblock, &phys, &mapped_blocks, create, false); 158 152 if (err) 159 153 return err; 160 154 ··· 279 273 return ret; 280 274 } 281 275 276 + static int fat_get_block_bmap(struct inode *inode, sector_t iblock, 277 + struct buffer_head *bh_result, int create) 278 + { 279 + struct super_block *sb = inode->i_sb; 280 + unsigned long max_blocks = bh_result->b_size >> inode->i_blkbits; 281 + int err; 282 + sector_t bmap; 283 + unsigned long mapped_blocks; 284 + 285 + BUG_ON(create != 0); 286 + 287 + err = fat_bmap(inode, iblock, &bmap, &mapped_blocks, create, true); 288 + if (err) 289 + return err; 290 + 291 + if (bmap) { 292 + map_bh(bh_result, sb, bmap); 293 + max_blocks = min(mapped_blocks, max_blocks); 294 + } 295 + 296 + bh_result->b_size = max_blocks << sb->s_blocksize_bits; 297 + 298 + return 0; 299 + } 300 + 282 301 static sector_t _fat_bmap(struct address_space *mapping, sector_t block) 283 302 { 284 303 sector_t blocknr; 285 304 286 305 /* fat_get_cluster() assumes the requested blocknr isn't truncated. */ 287 306 down_read(&MSDOS_I(mapping->host)->truncate_lock); 288 - blocknr = generic_block_bmap(mapping, block, fat_get_block); 307 + blocknr = generic_block_bmap(mapping, block, fat_get_block_bmap); 289 308 up_read(&MSDOS_I(mapping->host)->truncate_lock); 290 309 291 310 return blocknr; ··· 480 449 return 0; 481 450 } 482 451 452 + static int fat_validate_dir(struct inode *dir) 453 + { 454 + struct super_block *sb = dir->i_sb; 455 + 456 + if (dir->i_nlink < 2) { 457 + /* Directory should have "."/".." entries at least. 
*/ 458 + fat_fs_error(sb, "corrupted directory (invalid entries)"); 459 + return -EIO; 460 + } 461 + if (MSDOS_I(dir)->i_start == 0 || 462 + MSDOS_I(dir)->i_start == MSDOS_SB(sb)->root_cluster) { 463 + /* Directory should point valid cluster. */ 464 + fat_fs_error(sb, "corrupted directory (invalid i_start)"); 465 + return -EIO; 466 + } 467 + return 0; 468 + } 469 + 483 470 /* doesn't deal with root inode */ 484 471 int fat_fill_inode(struct inode *inode, struct msdos_dir_entry *de) 485 472 { ··· 524 475 MSDOS_I(inode)->mmu_private = inode->i_size; 525 476 526 477 set_nlink(inode, fat_subdirs(inode)); 478 + 479 + error = fat_validate_dir(inode); 480 + if (error < 0) 481 + return error; 527 482 } else { /* not a directory */ 528 483 inode->i_generation |= 1; 529 484 inode->i_mode = fat_make_mode(sbi, de->attr, ··· 606 553 607 554 EXPORT_SYMBOL_GPL(fat_build_inode); 608 555 556 + static int __fat_write_inode(struct inode *inode, int wait); 557 + 558 + static void fat_free_eofblocks(struct inode *inode) 559 + { 560 + /* Release unwritten fallocated blocks on inode eviction. */ 561 + if ((inode->i_blocks << 9) > 562 + round_up(MSDOS_I(inode)->mmu_private, 563 + MSDOS_SB(inode->i_sb)->cluster_size)) { 564 + int err; 565 + 566 + fat_truncate_blocks(inode, MSDOS_I(inode)->mmu_private); 567 + /* Fallocate results in updating the i_start/iogstart 568 + * for the zero byte file. So, make it return to 569 + * original state during evict and commit it to avoid 570 + * any corruption on the next access to the cluster 571 + * chain for the file. 572 + */ 573 + err = __fat_write_inode(inode, inode_needs_sync(inode)); 574 + if (err) { 575 + fat_msg(inode->i_sb, KERN_WARNING, "Failed to " 576 + "update on disk inode for unused " 577 + "fallocated blocks, inode could be " 578 + "corrupted. 
Please run fsck"); 579 + } 580 + 581 + } 582 + } 583 + 609 584 static void fat_evict_inode(struct inode *inode) 610 585 { 611 586 truncate_inode_pages_final(&inode->i_data); 612 587 if (!inode->i_nlink) { 613 588 inode->i_size = 0; 614 589 fat_truncate_blocks(inode, 0); 615 - } 590 + } else 591 + fat_free_eofblocks(inode); 592 + 616 593 invalidate_inode_buffers(inode); 617 594 clear_inode(inode); 618 595 fat_cache_inval_inode(inode); ··· 1229 1146 case Opt_time_offset: 1230 1147 if (match_int(&args[0], &option)) 1231 1148 return -EINVAL; 1232 - if (option < -12 * 60 || option > 12 * 60) 1149 + /* 1150 + * GMT+-12 zones may have DST corrections so at least 1151 + * 13 hours difference is needed. Make the limit 24 1152 + * just in case someone invents something unusual. 1153 + */ 1154 + if (option < -24 * 60 || option > 24 * 60) 1233 1155 return -EINVAL; 1234 1156 opts->tz_set = 1; 1235 1157 opts->time_offset = option;
+2 -4
fs/hfs/catalog.c
··· 214 214 { 215 215 struct super_block *sb; 216 216 struct hfs_find_data fd; 217 - struct list_head *pos; 217 + struct hfs_readdir_data *rd; 218 218 int res, type; 219 219 220 220 hfs_dbg(CAT_MOD, "delete_cat: %s,%u\n", str ? str->name : NULL, cnid); ··· 240 240 } 241 241 } 242 242 243 - list_for_each(pos, &HFS_I(dir)->open_dir_list) { 244 - struct hfs_readdir_data *rd = 245 - list_entry(pos, struct hfs_readdir_data, list); 243 + list_for_each_entry(rd, &HFS_I(dir)->open_dir_list, list) { 246 244 if (fd.tree->keycmp(fd.search_key, (void *)&rd->key) < 0) 247 245 rd->file->f_pos--; 248 246 }
+1
fs/overlayfs/super.c
··· 16 16 #include <linux/slab.h> 17 17 #include <linux/parser.h> 18 18 #include <linux/module.h> 19 + #include <linux/pagemap.h> 19 20 #include <linux/sched.h> 20 21 #include <linux/statfs.h> 21 22 #include <linux/seq_file.h>
+1 -1
fs/proc/array.c
··· 395 395 396 396 state = *get_task_state(task); 397 397 vsize = eip = esp = 0; 398 - permitted = ptrace_may_access(task, PTRACE_MODE_READ | PTRACE_MODE_NOAUDIT); 398 + permitted = ptrace_may_access(task, PTRACE_MODE_READ_FSCREDS | PTRACE_MODE_NOAUDIT); 399 399 mm = get_task_mm(task); 400 400 if (mm) { 401 401 vsize = task_vsize(mm);
+21 -13
fs/proc/base.c
··· 403 403 static int proc_pid_auxv(struct seq_file *m, struct pid_namespace *ns, 404 404 struct pid *pid, struct task_struct *task) 405 405 { 406 - struct mm_struct *mm = mm_access(task, PTRACE_MODE_READ); 406 + struct mm_struct *mm = mm_access(task, PTRACE_MODE_READ_FSCREDS); 407 407 if (mm && !IS_ERR(mm)) { 408 408 unsigned int nwords = 0; 409 409 do { ··· 430 430 431 431 wchan = get_wchan(task); 432 432 433 - if (wchan && ptrace_may_access(task, PTRACE_MODE_READ) && !lookup_symbol_name(wchan, symname)) 433 + if (wchan && ptrace_may_access(task, PTRACE_MODE_READ_FSCREDS) 434 + && !lookup_symbol_name(wchan, symname)) 434 435 seq_printf(m, "%s", symname); 435 436 else 436 437 seq_putc(m, '0'); ··· 445 444 int err = mutex_lock_killable(&task->signal->cred_guard_mutex); 446 445 if (err) 447 446 return err; 448 - if (!ptrace_may_access(task, PTRACE_MODE_ATTACH)) { 447 + if (!ptrace_may_access(task, PTRACE_MODE_ATTACH_FSCREDS)) { 449 448 mutex_unlock(&task->signal->cred_guard_mutex); 450 449 return -EPERM; 451 450 } ··· 698 697 */ 699 698 task = get_proc_task(inode); 700 699 if (task) { 701 - allowed = ptrace_may_access(task, PTRACE_MODE_READ); 700 + allowed = ptrace_may_access(task, PTRACE_MODE_READ_FSCREDS); 702 701 put_task_struct(task); 703 702 } 704 703 return allowed; ··· 733 732 return true; 734 733 if (in_group_p(pid->pid_gid)) 735 734 return true; 736 - return ptrace_may_access(task, PTRACE_MODE_READ); 735 + return ptrace_may_access(task, PTRACE_MODE_READ_FSCREDS); 737 736 } 738 737 739 738 ··· 810 809 struct mm_struct *mm = ERR_PTR(-ESRCH); 811 810 812 811 if (task) { 813 - mm = mm_access(task, mode); 812 + mm = mm_access(task, mode | PTRACE_MODE_FSCREDS); 814 813 put_task_struct(task); 815 814 816 815 if (!IS_ERR_OR_NULL(mm)) { ··· 953 952 unsigned long src = *ppos; 954 953 int ret = 0; 955 954 struct mm_struct *mm = file->private_data; 955 + unsigned long env_start, env_end; 956 956 957 957 if (!mm) 958 958 return 0; ··· 965 963 ret = 0; 966 964 if 
(!atomic_inc_not_zero(&mm->mm_users)) 967 965 goto free; 966 + 967 + down_read(&mm->mmap_sem); 968 + env_start = mm->env_start; 969 + env_end = mm->env_end; 970 + up_read(&mm->mmap_sem); 971 + 968 972 while (count > 0) { 969 973 size_t this_len, max_len; 970 974 int retval; 971 975 972 - if (src >= (mm->env_end - mm->env_start)) 976 + if (src >= (env_end - env_start)) 973 977 break; 974 978 975 - this_len = mm->env_end - (mm->env_start + src); 979 + this_len = env_end - (env_start + src); 976 980 977 981 max_len = min_t(size_t, PAGE_SIZE, count); 978 982 this_len = min(max_len, this_len); 979 983 980 - retval = access_remote_vm(mm, (mm->env_start + src), 984 + retval = access_remote_vm(mm, (env_start + src), 981 985 page, this_len, 0); 982 986 983 987 if (retval <= 0) { ··· 1868 1860 if (!task) 1869 1861 goto out_notask; 1870 1862 1871 - mm = mm_access(task, PTRACE_MODE_READ); 1863 + mm = mm_access(task, PTRACE_MODE_READ_FSCREDS); 1872 1864 if (IS_ERR_OR_NULL(mm)) 1873 1865 goto out; 1874 1866 ··· 2021 2013 goto out; 2022 2014 2023 2015 result = -EACCES; 2024 - if (!ptrace_may_access(task, PTRACE_MODE_READ)) 2016 + if (!ptrace_may_access(task, PTRACE_MODE_READ_FSCREDS)) 2025 2017 goto out_put_task; 2026 2018 2027 2019 result = -ENOENT; ··· 2074 2066 goto out; 2075 2067 2076 2068 ret = -EACCES; 2077 - if (!ptrace_may_access(task, PTRACE_MODE_READ)) 2069 + if (!ptrace_may_access(task, PTRACE_MODE_READ_FSCREDS)) 2078 2070 goto out_put_task; 2079 2071 2080 2072 ret = 0; ··· 2541 2533 if (result) 2542 2534 return result; 2543 2535 2544 - if (!ptrace_may_access(task, PTRACE_MODE_READ)) { 2536 + if (!ptrace_may_access(task, PTRACE_MODE_READ_FSCREDS)) { 2545 2537 result = -EACCES; 2546 2538 goto out_unlock; 2547 2539 }
+2 -2
fs/proc/namespaces.c
··· 46 46 if (!task) 47 47 return error; 48 48 49 - if (ptrace_may_access(task, PTRACE_MODE_READ)) { 49 + if (ptrace_may_access(task, PTRACE_MODE_READ_FSCREDS)) { 50 50 error = ns_get_path(&ns_path, task, ns_ops); 51 51 if (!error) 52 52 nd_jump_link(&ns_path); ··· 67 67 if (!task) 68 68 return res; 69 69 70 - if (ptrace_may_access(task, PTRACE_MODE_READ)) { 70 + if (ptrace_may_access(task, PTRACE_MODE_READ_FSCREDS)) { 71 71 res = ns_get_name(name, sizeof(name), task, ns_ops); 72 72 if (res >= 0) 73 73 res = readlink_copy(buffer, buflen, name);
+1 -1
fs/proc/task_mmu.c
··· 468 468 static void smaps_account(struct mem_size_stats *mss, struct page *page, 469 469 bool compound, bool young, bool dirty) 470 470 { 471 - int i, nr = compound ? HPAGE_PMD_NR : 1; 471 + int i, nr = compound ? 1 << compound_order(page) : 1; 472 472 unsigned long size = nr * PAGE_SIZE; 473 473 474 474 if (PageAnon(page))
-32
include/asm-generic/dma-coherent.h
··· 1 - #ifndef DMA_COHERENT_H 2 - #define DMA_COHERENT_H 3 - 4 - #ifdef CONFIG_HAVE_GENERIC_DMA_COHERENT 5 - /* 6 - * These three functions are only for dma allocator. 7 - * Don't use them in device drivers. 8 - */ 9 - int dma_alloc_from_coherent(struct device *dev, ssize_t size, 10 - dma_addr_t *dma_handle, void **ret); 11 - int dma_release_from_coherent(struct device *dev, int order, void *vaddr); 12 - 13 - int dma_mmap_from_coherent(struct device *dev, struct vm_area_struct *vma, 14 - void *cpu_addr, size_t size, int *ret); 15 - /* 16 - * Standard interface 17 - */ 18 - #define ARCH_HAS_DMA_DECLARE_COHERENT_MEMORY 19 - int dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr, 20 - dma_addr_t device_addr, size_t size, int flags); 21 - 22 - void dma_release_declared_memory(struct device *dev); 23 - 24 - void *dma_mark_declared_memory_occupied(struct device *dev, 25 - dma_addr_t device_addr, size_t size); 26 - #else 27 - #define dma_alloc_from_coherent(dev, size, handle, ret) (0) 28 - #define dma_release_from_coherent(dev, order, vaddr) (0) 29 - #define dma_mmap_from_coherent(dev, vma, vaddr, order, ret) (0) 30 - #endif 31 - 32 - #endif
-95
include/asm-generic/dma-mapping-broken.h
··· 1 - #ifndef _ASM_GENERIC_DMA_MAPPING_H 2 - #define _ASM_GENERIC_DMA_MAPPING_H 3 - 4 - /* define the dma api to allow compilation but not linking of 5 - * dma dependent code. Code that depends on the dma-mapping 6 - * API needs to set 'depends on HAS_DMA' in its Kconfig 7 - */ 8 - 9 - struct scatterlist; 10 - 11 - extern void * 12 - dma_alloc_coherent(struct device *dev, size_t size, dma_addr_t *dma_handle, 13 - gfp_t flag); 14 - 15 - extern void 16 - dma_free_coherent(struct device *dev, size_t size, void *cpu_addr, 17 - dma_addr_t dma_handle); 18 - 19 - static inline void *dma_alloc_attrs(struct device *dev, size_t size, 20 - dma_addr_t *dma_handle, gfp_t flag, 21 - struct dma_attrs *attrs) 22 - { 23 - /* attrs is not supported and ignored */ 24 - return dma_alloc_coherent(dev, size, dma_handle, flag); 25 - } 26 - 27 - static inline void dma_free_attrs(struct device *dev, size_t size, 28 - void *cpu_addr, dma_addr_t dma_handle, 29 - struct dma_attrs *attrs) 30 - { 31 - /* attrs is not supported and ignored */ 32 - dma_free_coherent(dev, size, cpu_addr, dma_handle); 33 - } 34 - 35 - #define dma_alloc_noncoherent(d, s, h, f) dma_alloc_coherent(d, s, h, f) 36 - #define dma_free_noncoherent(d, s, v, h) dma_free_coherent(d, s, v, h) 37 - 38 - extern dma_addr_t 39 - dma_map_single(struct device *dev, void *ptr, size_t size, 40 - enum dma_data_direction direction); 41 - 42 - extern void 43 - dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size, 44 - enum dma_data_direction direction); 45 - 46 - extern int 47 - dma_map_sg(struct device *dev, struct scatterlist *sg, int nents, 48 - enum dma_data_direction direction); 49 - 50 - extern void 51 - dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nhwentries, 52 - enum dma_data_direction direction); 53 - 54 - extern dma_addr_t 55 - dma_map_page(struct device *dev, struct page *page, unsigned long offset, 56 - size_t size, enum dma_data_direction direction); 57 - 58 - extern void 59 - 
dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size, 60 - enum dma_data_direction direction); 61 - 62 - extern void 63 - dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle, size_t size, 64 - enum dma_data_direction direction); 65 - 66 - extern void 67 - dma_sync_single_range_for_cpu(struct device *dev, dma_addr_t dma_handle, 68 - unsigned long offset, size_t size, 69 - enum dma_data_direction direction); 70 - 71 - extern void 72 - dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, int nelems, 73 - enum dma_data_direction direction); 74 - 75 - #define dma_sync_single_for_device dma_sync_single_for_cpu 76 - #define dma_sync_single_range_for_device dma_sync_single_range_for_cpu 77 - #define dma_sync_sg_for_device dma_sync_sg_for_cpu 78 - 79 - extern int 80 - dma_mapping_error(struct device *dev, dma_addr_t dma_addr); 81 - 82 - extern int 83 - dma_supported(struct device *dev, u64 mask); 84 - 85 - extern int 86 - dma_set_mask(struct device *dev, u64 mask); 87 - 88 - extern int 89 - dma_get_cache_alignment(void); 90 - 91 - extern void 92 - dma_cache_sync(struct device *dev, void *vaddr, size_t size, 93 - enum dma_data_direction direction); 94 - 95 - #endif /* _ASM_GENERIC_DMA_MAPPING_H */
-358
include/asm-generic/dma-mapping-common.h
··· 1 - #ifndef _ASM_GENERIC_DMA_MAPPING_H 2 - #define _ASM_GENERIC_DMA_MAPPING_H 3 - 4 - #include <linux/kmemcheck.h> 5 - #include <linux/bug.h> 6 - #include <linux/scatterlist.h> 7 - #include <linux/dma-debug.h> 8 - #include <linux/dma-attrs.h> 9 - #include <asm-generic/dma-coherent.h> 10 - 11 - static inline dma_addr_t dma_map_single_attrs(struct device *dev, void *ptr, 12 - size_t size, 13 - enum dma_data_direction dir, 14 - struct dma_attrs *attrs) 15 - { 16 - struct dma_map_ops *ops = get_dma_ops(dev); 17 - dma_addr_t addr; 18 - 19 - kmemcheck_mark_initialized(ptr, size); 20 - BUG_ON(!valid_dma_direction(dir)); 21 - addr = ops->map_page(dev, virt_to_page(ptr), 22 - (unsigned long)ptr & ~PAGE_MASK, size, 23 - dir, attrs); 24 - debug_dma_map_page(dev, virt_to_page(ptr), 25 - (unsigned long)ptr & ~PAGE_MASK, size, 26 - dir, addr, true); 27 - return addr; 28 - } 29 - 30 - static inline void dma_unmap_single_attrs(struct device *dev, dma_addr_t addr, 31 - size_t size, 32 - enum dma_data_direction dir, 33 - struct dma_attrs *attrs) 34 - { 35 - struct dma_map_ops *ops = get_dma_ops(dev); 36 - 37 - BUG_ON(!valid_dma_direction(dir)); 38 - if (ops->unmap_page) 39 - ops->unmap_page(dev, addr, size, dir, attrs); 40 - debug_dma_unmap_page(dev, addr, size, dir, true); 41 - } 42 - 43 - /* 44 - * dma_maps_sg_attrs returns 0 on error and > 0 on success. 45 - * It should never return a value < 0. 
46 - */ 47 - static inline int dma_map_sg_attrs(struct device *dev, struct scatterlist *sg, 48 - int nents, enum dma_data_direction dir, 49 - struct dma_attrs *attrs) 50 - { 51 - struct dma_map_ops *ops = get_dma_ops(dev); 52 - int i, ents; 53 - struct scatterlist *s; 54 - 55 - for_each_sg(sg, s, nents, i) 56 - kmemcheck_mark_initialized(sg_virt(s), s->length); 57 - BUG_ON(!valid_dma_direction(dir)); 58 - ents = ops->map_sg(dev, sg, nents, dir, attrs); 59 - BUG_ON(ents < 0); 60 - debug_dma_map_sg(dev, sg, nents, ents, dir); 61 - 62 - return ents; 63 - } 64 - 65 - static inline void dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sg, 66 - int nents, enum dma_data_direction dir, 67 - struct dma_attrs *attrs) 68 - { 69 - struct dma_map_ops *ops = get_dma_ops(dev); 70 - 71 - BUG_ON(!valid_dma_direction(dir)); 72 - debug_dma_unmap_sg(dev, sg, nents, dir); 73 - if (ops->unmap_sg) 74 - ops->unmap_sg(dev, sg, nents, dir, attrs); 75 - } 76 - 77 - static inline dma_addr_t dma_map_page(struct device *dev, struct page *page, 78 - size_t offset, size_t size, 79 - enum dma_data_direction dir) 80 - { 81 - struct dma_map_ops *ops = get_dma_ops(dev); 82 - dma_addr_t addr; 83 - 84 - kmemcheck_mark_initialized(page_address(page) + offset, size); 85 - BUG_ON(!valid_dma_direction(dir)); 86 - addr = ops->map_page(dev, page, offset, size, dir, NULL); 87 - debug_dma_map_page(dev, page, offset, size, dir, addr, false); 88 - 89 - return addr; 90 - } 91 - 92 - static inline void dma_unmap_page(struct device *dev, dma_addr_t addr, 93 - size_t size, enum dma_data_direction dir) 94 - { 95 - struct dma_map_ops *ops = get_dma_ops(dev); 96 - 97 - BUG_ON(!valid_dma_direction(dir)); 98 - if (ops->unmap_page) 99 - ops->unmap_page(dev, addr, size, dir, NULL); 100 - debug_dma_unmap_page(dev, addr, size, dir, false); 101 - } 102 - 103 - static inline void dma_sync_single_for_cpu(struct device *dev, dma_addr_t addr, 104 - size_t size, 105 - enum dma_data_direction dir) 106 - { 107 - struct 
dma_map_ops *ops = get_dma_ops(dev); 108 - 109 - BUG_ON(!valid_dma_direction(dir)); 110 - if (ops->sync_single_for_cpu) 111 - ops->sync_single_for_cpu(dev, addr, size, dir); 112 - debug_dma_sync_single_for_cpu(dev, addr, size, dir); 113 - } 114 - 115 - static inline void dma_sync_single_for_device(struct device *dev, 116 - dma_addr_t addr, size_t size, 117 - enum dma_data_direction dir) 118 - { 119 - struct dma_map_ops *ops = get_dma_ops(dev); 120 - 121 - BUG_ON(!valid_dma_direction(dir)); 122 - if (ops->sync_single_for_device) 123 - ops->sync_single_for_device(dev, addr, size, dir); 124 - debug_dma_sync_single_for_device(dev, addr, size, dir); 125 - } 126 - 127 - static inline void dma_sync_single_range_for_cpu(struct device *dev, 128 - dma_addr_t addr, 129 - unsigned long offset, 130 - size_t size, 131 - enum dma_data_direction dir) 132 - { 133 - const struct dma_map_ops *ops = get_dma_ops(dev); 134 - 135 - BUG_ON(!valid_dma_direction(dir)); 136 - if (ops->sync_single_for_cpu) 137 - ops->sync_single_for_cpu(dev, addr + offset, size, dir); 138 - debug_dma_sync_single_range_for_cpu(dev, addr, offset, size, dir); 139 - } 140 - 141 - static inline void dma_sync_single_range_for_device(struct device *dev, 142 - dma_addr_t addr, 143 - unsigned long offset, 144 - size_t size, 145 - enum dma_data_direction dir) 146 - { 147 - const struct dma_map_ops *ops = get_dma_ops(dev); 148 - 149 - BUG_ON(!valid_dma_direction(dir)); 150 - if (ops->sync_single_for_device) 151 - ops->sync_single_for_device(dev, addr + offset, size, dir); 152 - debug_dma_sync_single_range_for_device(dev, addr, offset, size, dir); 153 - } 154 - 155 - static inline void 156 - dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, 157 - int nelems, enum dma_data_direction dir) 158 - { 159 - struct dma_map_ops *ops = get_dma_ops(dev); 160 - 161 - BUG_ON(!valid_dma_direction(dir)); 162 - if (ops->sync_sg_for_cpu) 163 - ops->sync_sg_for_cpu(dev, sg, nelems, dir); 164 - debug_dma_sync_sg_for_cpu(dev, 
sg, nelems, dir); 165 - } 166 - 167 - static inline void 168 - dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg, 169 - int nelems, enum dma_data_direction dir) 170 - { 171 - struct dma_map_ops *ops = get_dma_ops(dev); 172 - 173 - BUG_ON(!valid_dma_direction(dir)); 174 - if (ops->sync_sg_for_device) 175 - ops->sync_sg_for_device(dev, sg, nelems, dir); 176 - debug_dma_sync_sg_for_device(dev, sg, nelems, dir); 177 - 178 - } 179 - 180 - #define dma_map_single(d, a, s, r) dma_map_single_attrs(d, a, s, r, NULL) 181 - #define dma_unmap_single(d, a, s, r) dma_unmap_single_attrs(d, a, s, r, NULL) 182 - #define dma_map_sg(d, s, n, r) dma_map_sg_attrs(d, s, n, r, NULL) 183 - #define dma_unmap_sg(d, s, n, r) dma_unmap_sg_attrs(d, s, n, r, NULL) 184 - 185 - extern int dma_common_mmap(struct device *dev, struct vm_area_struct *vma, 186 - void *cpu_addr, dma_addr_t dma_addr, size_t size); 187 - 188 - void *dma_common_contiguous_remap(struct page *page, size_t size, 189 - unsigned long vm_flags, 190 - pgprot_t prot, const void *caller); 191 - 192 - void *dma_common_pages_remap(struct page **pages, size_t size, 193 - unsigned long vm_flags, pgprot_t prot, 194 - const void *caller); 195 - void dma_common_free_remap(void *cpu_addr, size_t size, unsigned long vm_flags); 196 - 197 - /** 198 - * dma_mmap_attrs - map a coherent DMA allocation into user space 199 - * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices 200 - * @vma: vm_area_struct describing requested user mapping 201 - * @cpu_addr: kernel CPU-view address returned from dma_alloc_attrs 202 - * @handle: device-view address returned from dma_alloc_attrs 203 - * @size: size of memory originally requested in dma_alloc_attrs 204 - * @attrs: attributes of mapping properties requested in dma_alloc_attrs 205 - * 206 - * Map a coherent DMA buffer previously allocated by dma_alloc_attrs 207 - * into user space. 
The coherent DMA buffer must not be freed by the 208 - * driver until the user space mapping has been released. 209 - */ 210 - static inline int 211 - dma_mmap_attrs(struct device *dev, struct vm_area_struct *vma, void *cpu_addr, 212 - dma_addr_t dma_addr, size_t size, struct dma_attrs *attrs) 213 - { 214 - struct dma_map_ops *ops = get_dma_ops(dev); 215 - BUG_ON(!ops); 216 - if (ops->mmap) 217 - return ops->mmap(dev, vma, cpu_addr, dma_addr, size, attrs); 218 - return dma_common_mmap(dev, vma, cpu_addr, dma_addr, size); 219 - } 220 - 221 - #define dma_mmap_coherent(d, v, c, h, s) dma_mmap_attrs(d, v, c, h, s, NULL) 222 - 223 - int 224 - dma_common_get_sgtable(struct device *dev, struct sg_table *sgt, 225 - void *cpu_addr, dma_addr_t dma_addr, size_t size); 226 - 227 - static inline int 228 - dma_get_sgtable_attrs(struct device *dev, struct sg_table *sgt, void *cpu_addr, 229 - dma_addr_t dma_addr, size_t size, struct dma_attrs *attrs) 230 - { 231 - struct dma_map_ops *ops = get_dma_ops(dev); 232 - BUG_ON(!ops); 233 - if (ops->get_sgtable) 234 - return ops->get_sgtable(dev, sgt, cpu_addr, dma_addr, size, 235 - attrs); 236 - return dma_common_get_sgtable(dev, sgt, cpu_addr, dma_addr, size); 237 - } 238 - 239 - #define dma_get_sgtable(d, t, v, h, s) dma_get_sgtable_attrs(d, t, v, h, s, NULL) 240 - 241 - #ifndef arch_dma_alloc_attrs 242 - #define arch_dma_alloc_attrs(dev, flag) (true) 243 - #endif 244 - 245 - static inline void *dma_alloc_attrs(struct device *dev, size_t size, 246 - dma_addr_t *dma_handle, gfp_t flag, 247 - struct dma_attrs *attrs) 248 - { 249 - struct dma_map_ops *ops = get_dma_ops(dev); 250 - void *cpu_addr; 251 - 252 - BUG_ON(!ops); 253 - 254 - if (dma_alloc_from_coherent(dev, size, dma_handle, &cpu_addr)) 255 - return cpu_addr; 256 - 257 - if (!arch_dma_alloc_attrs(&dev, &flag)) 258 - return NULL; 259 - if (!ops->alloc) 260 - return NULL; 261 - 262 - cpu_addr = ops->alloc(dev, size, dma_handle, flag, attrs); 263 - debug_dma_alloc_coherent(dev, 
size, *dma_handle, cpu_addr); 264 - return cpu_addr; 265 - } 266 - 267 - static inline void dma_free_attrs(struct device *dev, size_t size, 268 - void *cpu_addr, dma_addr_t dma_handle, 269 - struct dma_attrs *attrs) 270 - { 271 - struct dma_map_ops *ops = get_dma_ops(dev); 272 - 273 - BUG_ON(!ops); 274 - WARN_ON(irqs_disabled()); 275 - 276 - if (dma_release_from_coherent(dev, get_order(size), cpu_addr)) 277 - return; 278 - 279 - if (!ops->free) 280 - return; 281 - 282 - debug_dma_free_coherent(dev, size, cpu_addr, dma_handle); 283 - ops->free(dev, size, cpu_addr, dma_handle, attrs); 284 - } 285 - 286 - static inline void *dma_alloc_coherent(struct device *dev, size_t size, 287 - dma_addr_t *dma_handle, gfp_t flag) 288 - { 289 - return dma_alloc_attrs(dev, size, dma_handle, flag, NULL); 290 - } 291 - 292 - static inline void dma_free_coherent(struct device *dev, size_t size, 293 - void *cpu_addr, dma_addr_t dma_handle) 294 - { 295 - return dma_free_attrs(dev, size, cpu_addr, dma_handle, NULL); 296 - } 297 - 298 - static inline void *dma_alloc_noncoherent(struct device *dev, size_t size, 299 - dma_addr_t *dma_handle, gfp_t gfp) 300 - { 301 - DEFINE_DMA_ATTRS(attrs); 302 - 303 - dma_set_attr(DMA_ATTR_NON_CONSISTENT, &attrs); 304 - return dma_alloc_attrs(dev, size, dma_handle, gfp, &attrs); 305 - } 306 - 307 - static inline void dma_free_noncoherent(struct device *dev, size_t size, 308 - void *cpu_addr, dma_addr_t dma_handle) 309 - { 310 - DEFINE_DMA_ATTRS(attrs); 311 - 312 - dma_set_attr(DMA_ATTR_NON_CONSISTENT, &attrs); 313 - dma_free_attrs(dev, size, cpu_addr, dma_handle, &attrs); 314 - } 315 - 316 - static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr) 317 - { 318 - debug_dma_mapping_error(dev, dma_addr); 319 - 320 - if (get_dma_ops(dev)->mapping_error) 321 - return get_dma_ops(dev)->mapping_error(dev, dma_addr); 322 - 323 - #ifdef DMA_ERROR_CODE 324 - return dma_addr == DMA_ERROR_CODE; 325 - #else 326 - return 0; 327 - #endif 328 - } 329 - 
330 - #ifndef HAVE_ARCH_DMA_SUPPORTED 331 - static inline int dma_supported(struct device *dev, u64 mask) 332 - { 333 - struct dma_map_ops *ops = get_dma_ops(dev); 334 - 335 - if (!ops) 336 - return 0; 337 - if (!ops->dma_supported) 338 - return 1; 339 - return ops->dma_supported(dev, mask); 340 - } 341 - #endif 342 - 343 - #ifndef HAVE_ARCH_DMA_SET_MASK 344 - static inline int dma_set_mask(struct device *dev, u64 mask) 345 - { 346 - struct dma_map_ops *ops = get_dma_ops(dev); 347 - 348 - if (ops->set_dma_mask) 349 - return ops->set_dma_mask(dev, mask); 350 - 351 - if (!dev->dma_mask || !dma_supported(dev, mask)) 352 - return -EIO; 353 - *dev->dma_mask = mask; 354 - return 0; 355 - } 356 - #endif 357 - 358 - #endif
+47 -8
include/linux/cpumask.h
··· 85 85 * only one CPU. 86 86 */ 87 87 88 - extern const struct cpumask *const cpu_possible_mask; 89 - extern const struct cpumask *const cpu_online_mask; 90 - extern const struct cpumask *const cpu_present_mask; 91 - extern const struct cpumask *const cpu_active_mask; 88 + extern struct cpumask __cpu_possible_mask; 89 + extern struct cpumask __cpu_online_mask; 90 + extern struct cpumask __cpu_present_mask; 91 + extern struct cpumask __cpu_active_mask; 92 + #define cpu_possible_mask ((const struct cpumask *)&__cpu_possible_mask) 93 + #define cpu_online_mask ((const struct cpumask *)&__cpu_online_mask) 94 + #define cpu_present_mask ((const struct cpumask *)&__cpu_present_mask) 95 + #define cpu_active_mask ((const struct cpumask *)&__cpu_active_mask) 92 96 93 97 #if NR_CPUS > 1 94 98 #define num_online_cpus() cpumask_weight(cpu_online_mask) ··· 720 716 #define for_each_present_cpu(cpu) for_each_cpu((cpu), cpu_present_mask) 721 717 722 718 /* Wrappers for arch boot code to manipulate normally-constant masks */ 723 - void set_cpu_possible(unsigned int cpu, bool possible); 724 - void set_cpu_present(unsigned int cpu, bool present); 725 - void set_cpu_online(unsigned int cpu, bool online); 726 - void set_cpu_active(unsigned int cpu, bool active); 727 719 void init_cpu_present(const struct cpumask *src); 728 720 void init_cpu_possible(const struct cpumask *src); 729 721 void init_cpu_online(const struct cpumask *src); 722 + 723 + static inline void 724 + set_cpu_possible(unsigned int cpu, bool possible) 725 + { 726 + if (possible) 727 + cpumask_set_cpu(cpu, &__cpu_possible_mask); 728 + else 729 + cpumask_clear_cpu(cpu, &__cpu_possible_mask); 730 + } 731 + 732 + static inline void 733 + set_cpu_present(unsigned int cpu, bool present) 734 + { 735 + if (present) 736 + cpumask_set_cpu(cpu, &__cpu_present_mask); 737 + else 738 + cpumask_clear_cpu(cpu, &__cpu_present_mask); 739 + } 740 + 741 + static inline void 742 + set_cpu_online(unsigned int cpu, bool online) 743 + { 744 
+ if (online) { 745 + cpumask_set_cpu(cpu, &__cpu_online_mask); 746 + cpumask_set_cpu(cpu, &__cpu_active_mask); 747 + } else { 748 + cpumask_clear_cpu(cpu, &__cpu_online_mask); 749 + } 750 + } 751 + 752 + static inline void 753 + set_cpu_active(unsigned int cpu, bool active) 754 + { 755 + if (active) 756 + cpumask_set_cpu(cpu, &__cpu_active_mask); 757 + else 758 + cpumask_clear_cpu(cpu, &__cpu_active_mask); 759 + } 760 + 730 761 731 762 /** 732 763 * to_cpumask - convert an NR_CPUS bitmap to a struct cpumask *
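The cpumask.h hunk above replaces the `const struct cpumask *const` pointers with writable `__cpu_*_mask` storage, hands consumers a const-qualified view through macros, and turns the `set_cpu_*()` manipulators into inline functions that write the underlying bitmaps directly. A minimal userspace sketch of that pattern (all names here are hypothetical stand-ins, not the kernel API):

```c
#include <stdbool.h>

/* Writable storage, analogous to __cpu_online_mask (illustrative only). */
struct demo_mask { unsigned long bits; };
static struct demo_mask __demo_online_mask;

/* Read-only view for consumers, mirroring the cpu_online_mask macro. */
#define demo_online_mask ((const struct demo_mask *)&__demo_online_mask)

/* Boot-code style setter, mirroring the new inline set_cpu_online(). */
static inline void demo_set_online(unsigned int cpu, bool online)
{
	if (online)
		__demo_online_mask.bits |= 1UL << cpu;
	else
		__demo_online_mask.bits &= ~(1UL << cpu);
}

/* Consumers only ever go through the const view. */
static inline bool demo_is_online(unsigned int cpu)
{
	return (demo_online_mask->bits >> cpu) & 1UL;
}
```

The const cast in the macro keeps ordinary readers from writing the mask by accident, while boot code mutates the same storage through the setters, which is the point of the kernel change.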
-10
include/linux/dma-attrs.h
··· 41 41 bitmap_zero(attrs->flags, __DMA_ATTRS_LONGS); 42 42 } 43 43 44 - #ifdef CONFIG_HAVE_DMA_ATTRS 45 44 /** 46 45 * dma_set_attr - set a specific attribute 47 46 * @attr: attribute to set ··· 66 67 BUG_ON(attr >= DMA_ATTR_MAX); 67 68 return test_bit(attr, attrs->flags); 68 69 } 69 - #else /* !CONFIG_HAVE_DMA_ATTRS */ 70 - static inline void dma_set_attr(enum dma_attr attr, struct dma_attrs *attrs) 71 - { 72 - } 73 70 74 - static inline int dma_get_attr(enum dma_attr attr, struct dma_attrs *attrs) 75 - { 76 - return 0; 77 - } 78 - #endif /* CONFIG_HAVE_DMA_ATTRS */ 79 71 #endif /* _DMA_ATTR_H */
+388 -23
include/linux/dma-mapping.h
··· 6 6 #include <linux/device.h> 7 7 #include <linux/err.h> 8 8 #include <linux/dma-attrs.h> 9 + #include <linux/dma-debug.h> 9 10 #include <linux/dma-direction.h> 10 11 #include <linux/scatterlist.h> 12 + #include <linux/kmemcheck.h> 13 + #include <linux/bug.h> 11 14 12 15 /* 13 16 * A dma_addr_t can hold any valid DMA or bus address for the platform. ··· 86 83 return dev->dma_mask != NULL && *dev->dma_mask != DMA_MASK_NONE; 87 84 } 88 85 86 + #ifdef CONFIG_HAVE_GENERIC_DMA_COHERENT 87 + /* 88 + * These three functions are only for dma allocator. 89 + * Don't use them in device drivers. 90 + */ 91 + int dma_alloc_from_coherent(struct device *dev, ssize_t size, 92 + dma_addr_t *dma_handle, void **ret); 93 + int dma_release_from_coherent(struct device *dev, int order, void *vaddr); 94 + 95 + int dma_mmap_from_coherent(struct device *dev, struct vm_area_struct *vma, 96 + void *cpu_addr, size_t size, int *ret); 97 + #else 98 + #define dma_alloc_from_coherent(dev, size, handle, ret) (0) 99 + #define dma_release_from_coherent(dev, order, vaddr) (0) 100 + #define dma_mmap_from_coherent(dev, vma, vaddr, order, ret) (0) 101 + #endif /* CONFIG_HAVE_GENERIC_DMA_COHERENT */ 102 + 89 103 #ifdef CONFIG_HAS_DMA 90 104 #include <asm/dma-mapping.h> 91 105 #else 92 - #include <asm-generic/dma-mapping-broken.h> 106 + /* 107 + * Define the dma api to allow compilation but not linking of 108 + * dma dependent code. 
Code that depends on the dma-mapping 109 + * API needs to set 'depends on HAS_DMA' in its Kconfig 110 + */ 111 + extern struct dma_map_ops bad_dma_ops; 112 + static inline struct dma_map_ops *get_dma_ops(struct device *dev) 113 + { 114 + return &bad_dma_ops; 115 + } 116 + #endif 117 + 118 + static inline dma_addr_t dma_map_single_attrs(struct device *dev, void *ptr, 119 + size_t size, 120 + enum dma_data_direction dir, 121 + struct dma_attrs *attrs) 122 + { 123 + struct dma_map_ops *ops = get_dma_ops(dev); 124 + dma_addr_t addr; 125 + 126 + kmemcheck_mark_initialized(ptr, size); 127 + BUG_ON(!valid_dma_direction(dir)); 128 + addr = ops->map_page(dev, virt_to_page(ptr), 129 + offset_in_page(ptr), size, 130 + dir, attrs); 131 + debug_dma_map_page(dev, virt_to_page(ptr), 132 + offset_in_page(ptr), size, 133 + dir, addr, true); 134 + return addr; 135 + } 136 + 137 + static inline void dma_unmap_single_attrs(struct device *dev, dma_addr_t addr, 138 + size_t size, 139 + enum dma_data_direction dir, 140 + struct dma_attrs *attrs) 141 + { 142 + struct dma_map_ops *ops = get_dma_ops(dev); 143 + 144 + BUG_ON(!valid_dma_direction(dir)); 145 + if (ops->unmap_page) 146 + ops->unmap_page(dev, addr, size, dir, attrs); 147 + debug_dma_unmap_page(dev, addr, size, dir, true); 148 + } 149 + 150 + /* 151 + * dma_maps_sg_attrs returns 0 on error and > 0 on success. 152 + * It should never return a value < 0. 
153 + */ 154 + static inline int dma_map_sg_attrs(struct device *dev, struct scatterlist *sg, 155 + int nents, enum dma_data_direction dir, 156 + struct dma_attrs *attrs) 157 + { 158 + struct dma_map_ops *ops = get_dma_ops(dev); 159 + int i, ents; 160 + struct scatterlist *s; 161 + 162 + for_each_sg(sg, s, nents, i) 163 + kmemcheck_mark_initialized(sg_virt(s), s->length); 164 + BUG_ON(!valid_dma_direction(dir)); 165 + ents = ops->map_sg(dev, sg, nents, dir, attrs); 166 + BUG_ON(ents < 0); 167 + debug_dma_map_sg(dev, sg, nents, ents, dir); 168 + 169 + return ents; 170 + } 171 + 172 + static inline void dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sg, 173 + int nents, enum dma_data_direction dir, 174 + struct dma_attrs *attrs) 175 + { 176 + struct dma_map_ops *ops = get_dma_ops(dev); 177 + 178 + BUG_ON(!valid_dma_direction(dir)); 179 + debug_dma_unmap_sg(dev, sg, nents, dir); 180 + if (ops->unmap_sg) 181 + ops->unmap_sg(dev, sg, nents, dir, attrs); 182 + } 183 + 184 + static inline dma_addr_t dma_map_page(struct device *dev, struct page *page, 185 + size_t offset, size_t size, 186 + enum dma_data_direction dir) 187 + { 188 + struct dma_map_ops *ops = get_dma_ops(dev); 189 + dma_addr_t addr; 190 + 191 + kmemcheck_mark_initialized(page_address(page) + offset, size); 192 + BUG_ON(!valid_dma_direction(dir)); 193 + addr = ops->map_page(dev, page, offset, size, dir, NULL); 194 + debug_dma_map_page(dev, page, offset, size, dir, addr, false); 195 + 196 + return addr; 197 + } 198 + 199 + static inline void dma_unmap_page(struct device *dev, dma_addr_t addr, 200 + size_t size, enum dma_data_direction dir) 201 + { 202 + struct dma_map_ops *ops = get_dma_ops(dev); 203 + 204 + BUG_ON(!valid_dma_direction(dir)); 205 + if (ops->unmap_page) 206 + ops->unmap_page(dev, addr, size, dir, NULL); 207 + debug_dma_unmap_page(dev, addr, size, dir, false); 208 + } 209 + 210 + static inline void dma_sync_single_for_cpu(struct device *dev, dma_addr_t addr, 211 + size_t size, 212 + 
enum dma_data_direction dir) 213 + { 214 + struct dma_map_ops *ops = get_dma_ops(dev); 215 + 216 + BUG_ON(!valid_dma_direction(dir)); 217 + if (ops->sync_single_for_cpu) 218 + ops->sync_single_for_cpu(dev, addr, size, dir); 219 + debug_dma_sync_single_for_cpu(dev, addr, size, dir); 220 + } 221 + 222 + static inline void dma_sync_single_for_device(struct device *dev, 223 + dma_addr_t addr, size_t size, 224 + enum dma_data_direction dir) 225 + { 226 + struct dma_map_ops *ops = get_dma_ops(dev); 227 + 228 + BUG_ON(!valid_dma_direction(dir)); 229 + if (ops->sync_single_for_device) 230 + ops->sync_single_for_device(dev, addr, size, dir); 231 + debug_dma_sync_single_for_device(dev, addr, size, dir); 232 + } 233 + 234 + static inline void dma_sync_single_range_for_cpu(struct device *dev, 235 + dma_addr_t addr, 236 + unsigned long offset, 237 + size_t size, 238 + enum dma_data_direction dir) 239 + { 240 + const struct dma_map_ops *ops = get_dma_ops(dev); 241 + 242 + BUG_ON(!valid_dma_direction(dir)); 243 + if (ops->sync_single_for_cpu) 244 + ops->sync_single_for_cpu(dev, addr + offset, size, dir); 245 + debug_dma_sync_single_range_for_cpu(dev, addr, offset, size, dir); 246 + } 247 + 248 + static inline void dma_sync_single_range_for_device(struct device *dev, 249 + dma_addr_t addr, 250 + unsigned long offset, 251 + size_t size, 252 + enum dma_data_direction dir) 253 + { 254 + const struct dma_map_ops *ops = get_dma_ops(dev); 255 + 256 + BUG_ON(!valid_dma_direction(dir)); 257 + if (ops->sync_single_for_device) 258 + ops->sync_single_for_device(dev, addr + offset, size, dir); 259 + debug_dma_sync_single_range_for_device(dev, addr, offset, size, dir); 260 + } 261 + 262 + static inline void 263 + dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, 264 + int nelems, enum dma_data_direction dir) 265 + { 266 + struct dma_map_ops *ops = get_dma_ops(dev); 267 + 268 + BUG_ON(!valid_dma_direction(dir)); 269 + if (ops->sync_sg_for_cpu) 270 + ops->sync_sg_for_cpu(dev, sg, 
nelems, dir); 271 + debug_dma_sync_sg_for_cpu(dev, sg, nelems, dir); 272 + } 273 + 274 + static inline void 275 + dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg, 276 + int nelems, enum dma_data_direction dir) 277 + { 278 + struct dma_map_ops *ops = get_dma_ops(dev); 279 + 280 + BUG_ON(!valid_dma_direction(dir)); 281 + if (ops->sync_sg_for_device) 282 + ops->sync_sg_for_device(dev, sg, nelems, dir); 283 + debug_dma_sync_sg_for_device(dev, sg, nelems, dir); 284 + 285 + } 286 + 287 + #define dma_map_single(d, a, s, r) dma_map_single_attrs(d, a, s, r, NULL) 288 + #define dma_unmap_single(d, a, s, r) dma_unmap_single_attrs(d, a, s, r, NULL) 289 + #define dma_map_sg(d, s, n, r) dma_map_sg_attrs(d, s, n, r, NULL) 290 + #define dma_unmap_sg(d, s, n, r) dma_unmap_sg_attrs(d, s, n, r, NULL) 291 + 292 + extern int dma_common_mmap(struct device *dev, struct vm_area_struct *vma, 293 + void *cpu_addr, dma_addr_t dma_addr, size_t size); 294 + 295 + void *dma_common_contiguous_remap(struct page *page, size_t size, 296 + unsigned long vm_flags, 297 + pgprot_t prot, const void *caller); 298 + 299 + void *dma_common_pages_remap(struct page **pages, size_t size, 300 + unsigned long vm_flags, pgprot_t prot, 301 + const void *caller); 302 + void dma_common_free_remap(void *cpu_addr, size_t size, unsigned long vm_flags); 303 + 304 + /** 305 + * dma_mmap_attrs - map a coherent DMA allocation into user space 306 + * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices 307 + * @vma: vm_area_struct describing requested user mapping 308 + * @cpu_addr: kernel CPU-view address returned from dma_alloc_attrs 309 + * @handle: device-view address returned from dma_alloc_attrs 310 + * @size: size of memory originally requested in dma_alloc_attrs 311 + * @attrs: attributes of mapping properties requested in dma_alloc_attrs 312 + * 313 + * Map a coherent DMA buffer previously allocated by dma_alloc_attrs 314 + * into user space. 
The coherent DMA buffer must not be freed by the 315 + * driver until the user space mapping has been released. 316 + */ 317 + static inline int 318 + dma_mmap_attrs(struct device *dev, struct vm_area_struct *vma, void *cpu_addr, 319 + dma_addr_t dma_addr, size_t size, struct dma_attrs *attrs) 320 + { 321 + struct dma_map_ops *ops = get_dma_ops(dev); 322 + BUG_ON(!ops); 323 + if (ops->mmap) 324 + return ops->mmap(dev, vma, cpu_addr, dma_addr, size, attrs); 325 + return dma_common_mmap(dev, vma, cpu_addr, dma_addr, size); 326 + } 327 + 328 + #define dma_mmap_coherent(d, v, c, h, s) dma_mmap_attrs(d, v, c, h, s, NULL) 329 + 330 + int 331 + dma_common_get_sgtable(struct device *dev, struct sg_table *sgt, 332 + void *cpu_addr, dma_addr_t dma_addr, size_t size); 333 + 334 + static inline int 335 + dma_get_sgtable_attrs(struct device *dev, struct sg_table *sgt, void *cpu_addr, 336 + dma_addr_t dma_addr, size_t size, struct dma_attrs *attrs) 337 + { 338 + struct dma_map_ops *ops = get_dma_ops(dev); 339 + BUG_ON(!ops); 340 + if (ops->get_sgtable) 341 + return ops->get_sgtable(dev, sgt, cpu_addr, dma_addr, size, 342 + attrs); 343 + return dma_common_get_sgtable(dev, sgt, cpu_addr, dma_addr, size); 344 + } 345 + 346 + #define dma_get_sgtable(d, t, v, h, s) dma_get_sgtable_attrs(d, t, v, h, s, NULL) 347 + 348 + #ifndef arch_dma_alloc_attrs 349 + #define arch_dma_alloc_attrs(dev, flag) (true) 350 + #endif 351 + 352 + static inline void *dma_alloc_attrs(struct device *dev, size_t size, 353 + dma_addr_t *dma_handle, gfp_t flag, 354 + struct dma_attrs *attrs) 355 + { 356 + struct dma_map_ops *ops = get_dma_ops(dev); 357 + void *cpu_addr; 358 + 359 + BUG_ON(!ops); 360 + 361 + if (dma_alloc_from_coherent(dev, size, dma_handle, &cpu_addr)) 362 + return cpu_addr; 363 + 364 + if (!arch_dma_alloc_attrs(&dev, &flag)) 365 + return NULL; 366 + if (!ops->alloc) 367 + return NULL; 368 + 369 + cpu_addr = ops->alloc(dev, size, dma_handle, flag, attrs); 370 + debug_dma_alloc_coherent(dev, 
size, *dma_handle, cpu_addr); 371 + return cpu_addr; 372 + } 373 + 374 + static inline void dma_free_attrs(struct device *dev, size_t size, 375 + void *cpu_addr, dma_addr_t dma_handle, 376 + struct dma_attrs *attrs) 377 + { 378 + struct dma_map_ops *ops = get_dma_ops(dev); 379 + 380 + BUG_ON(!ops); 381 + WARN_ON(irqs_disabled()); 382 + 383 + if (dma_release_from_coherent(dev, get_order(size), cpu_addr)) 384 + return; 385 + 386 + if (!ops->free) 387 + return; 388 + 389 + debug_dma_free_coherent(dev, size, cpu_addr, dma_handle); 390 + ops->free(dev, size, cpu_addr, dma_handle, attrs); 391 + } 392 + 393 + static inline void *dma_alloc_coherent(struct device *dev, size_t size, 394 + dma_addr_t *dma_handle, gfp_t flag) 395 + { 396 + return dma_alloc_attrs(dev, size, dma_handle, flag, NULL); 397 + } 398 + 399 + static inline void dma_free_coherent(struct device *dev, size_t size, 400 + void *cpu_addr, dma_addr_t dma_handle) 401 + { 402 + return dma_free_attrs(dev, size, cpu_addr, dma_handle, NULL); 403 + } 404 + 405 + static inline void *dma_alloc_noncoherent(struct device *dev, size_t size, 406 + dma_addr_t *dma_handle, gfp_t gfp) 407 + { 408 + DEFINE_DMA_ATTRS(attrs); 409 + 410 + dma_set_attr(DMA_ATTR_NON_CONSISTENT, &attrs); 411 + return dma_alloc_attrs(dev, size, dma_handle, gfp, &attrs); 412 + } 413 + 414 + static inline void dma_free_noncoherent(struct device *dev, size_t size, 415 + void *cpu_addr, dma_addr_t dma_handle) 416 + { 417 + DEFINE_DMA_ATTRS(attrs); 418 + 419 + dma_set_attr(DMA_ATTR_NON_CONSISTENT, &attrs); 420 + dma_free_attrs(dev, size, cpu_addr, dma_handle, &attrs); 421 + } 422 + 423 + static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr) 424 + { 425 + debug_dma_mapping_error(dev, dma_addr); 426 + 427 + if (get_dma_ops(dev)->mapping_error) 428 + return get_dma_ops(dev)->mapping_error(dev, dma_addr); 429 + 430 + #ifdef DMA_ERROR_CODE 431 + return dma_addr == DMA_ERROR_CODE; 432 + #else 433 + return 0; 434 + #endif 435 + } 436 + 
437 + #ifndef HAVE_ARCH_DMA_SUPPORTED 438 + static inline int dma_supported(struct device *dev, u64 mask) 439 + { 440 + struct dma_map_ops *ops = get_dma_ops(dev); 441 + 442 + if (!ops) 443 + return 0; 444 + if (!ops->dma_supported) 445 + return 1; 446 + return ops->dma_supported(dev, mask); 447 + } 448 + #endif 449 + 450 + #ifndef HAVE_ARCH_DMA_SET_MASK 451 + static inline int dma_set_mask(struct device *dev, u64 mask) 452 + { 453 + struct dma_map_ops *ops = get_dma_ops(dev); 454 + 455 + if (ops->set_dma_mask) 456 + return ops->set_dma_mask(dev, mask); 457 + 458 + if (!dev->dma_mask || !dma_supported(dev, mask)) 459 + return -EIO; 460 + *dev->dma_mask = mask; 461 + return 0; 462 + } 93 463 #endif 94 464 95 465 static inline u64 dma_get_mask(struct device *dev) ··· 584 208 #define DMA_MEMORY_INCLUDES_CHILDREN 0x04 585 209 #define DMA_MEMORY_EXCLUSIVE 0x08 586 210 587 - #ifndef ARCH_HAS_DMA_DECLARE_COHERENT_MEMORY 211 + #ifdef CONFIG_HAVE_GENERIC_DMA_COHERENT 212 + int dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr, 213 + dma_addr_t device_addr, size_t size, int flags); 214 + void dma_release_declared_memory(struct device *dev); 215 + void *dma_mark_declared_memory_occupied(struct device *dev, 216 + dma_addr_t device_addr, size_t size); 217 + #else 588 218 static inline int 589 219 dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr, 590 220 dma_addr_t device_addr, size_t size, int flags) ··· 609 227 { 610 228 return ERR_PTR(-EBUSY); 611 229 } 612 - #endif 230 + #endif /* CONFIG_HAVE_GENERIC_DMA_COHERENT */ 613 231 614 232 /* 615 233 * Managed DMA API ··· 622 240 dma_addr_t *dma_handle, gfp_t gfp); 623 241 extern void dmam_free_noncoherent(struct device *dev, size_t size, void *vaddr, 624 242 dma_addr_t dma_handle); 625 - #ifdef ARCH_HAS_DMA_DECLARE_COHERENT_MEMORY 243 + #ifdef CONFIG_HAVE_GENERIC_DMA_COHERENT 626 244 extern int dmam_declare_coherent_memory(struct device *dev, 627 245 phys_addr_t phys_addr, 628 246 dma_addr_t 
device_addr, size_t size, 629 247 int flags); 630 248 extern void dmam_release_declared_memory(struct device *dev); 631 - #else /* ARCH_HAS_DMA_DECLARE_COHERENT_MEMORY */ 249 + #else /* CONFIG_HAVE_GENERIC_DMA_COHERENT */ 632 250 static inline int dmam_declare_coherent_memory(struct device *dev, 633 251 phys_addr_t phys_addr, dma_addr_t device_addr, 634 252 size_t size, gfp_t gfp) ··· 639 257 static inline void dmam_release_declared_memory(struct device *dev) 640 258 { 641 259 } 642 - #endif /* ARCH_HAS_DMA_DECLARE_COHERENT_MEMORY */ 260 + #endif /* CONFIG_HAVE_GENERIC_DMA_COHERENT */ 643 261 644 - #ifndef CONFIG_HAVE_DMA_ATTRS 645 - struct dma_attrs; 646 - 647 - #define dma_map_single_attrs(dev, cpu_addr, size, dir, attrs) \ 648 - dma_map_single(dev, cpu_addr, size, dir) 649 - 650 - #define dma_unmap_single_attrs(dev, dma_addr, size, dir, attrs) \ 651 - dma_unmap_single(dev, dma_addr, size, dir) 652 - 653 - #define dma_map_sg_attrs(dev, sgl, nents, dir, attrs) \ 654 - dma_map_sg(dev, sgl, nents, dir) 655 - 656 - #define dma_unmap_sg_attrs(dev, sgl, nents, dir, attrs) \ 657 - dma_unmap_sg(dev, sgl, nents, dir) 658 - 659 - #else 660 262 static inline void *dma_alloc_writecombine(struct device *dev, size_t size, 661 263 dma_addr_t *dma_addr, gfp_t gfp) 662 264 { ··· 666 300 dma_set_attr(DMA_ATTR_WRITE_COMBINE, &attrs); 667 301 return dma_mmap_attrs(dev, vma, cpu_addr, dma_addr, size, &attrs); 668 302 } 669 - #endif /* CONFIG_HAVE_DMA_ATTRS */ 670 303 671 304 #ifdef CONFIG_NEED_DMA_MAP_STATE 672 305 #define DEFINE_DMA_UNMAP_ADDR(ADDR_NAME) dma_addr_t ADDR_NAME
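The `dma_alloc_attrs()` added above tries three stages in order: the per-device coherent pool (`dma_alloc_from_coherent`), an architecture veto hook (`arch_dma_alloc_attrs`), and finally the ops-table allocator (`ops->alloc`). A userspace sketch of that fallback ordering, with hypothetical stand-ins for each stage (this is not the kernel code path, just its shape):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

/* Stage 1 stand-in: a per-device pool that may or may not satisfy the
 * request; here it always declines, like a device with no declared
 * coherent memory. */
static bool demo_pool_alloc(size_t size, void **out)
{
	(void)size;
	*out = NULL;
	return false;
}

/* Stage 2 stand-in: the arch hook, which may veto the allocation. */
static bool demo_arch_hook_ok = true;

static void *demo_alloc(size_t size)
{
	void *cpu_addr;

	/* 1. Per-device coherent pool first. */
	if (demo_pool_alloc(size, &cpu_addr))
		return cpu_addr;
	/* 2. Give the architecture a chance to refuse. */
	if (!demo_arch_hook_ok)
		return NULL;
	/* 3. Fall through to the generic allocator, like ops->alloc(). */
	return malloc(size);
}
```

The ordering matters: declared coherent memory must win over the generic allocator, and the arch hook sits between them so it can only veto the generic path.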
+1
include/linux/io.h
··· 29 29 struct resource; 30 30 31 31 __visible void __iowrite32_copy(void __iomem *to, const void *from, size_t count); 32 + void __ioread32_copy(void *to, const void __iomem *from, size_t count); 32 33 void __iowrite64_copy(void __iomem *to, const void *from, size_t count); 33 34 34 35 #ifdef CONFIG_MMU
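The io.h hunk adds `__ioread32_copy()`, the read-direction counterpart of `__iowrite32_copy()`: it copies `count` 32-bit words out of an MMIO region one fixed-width access at a time, since `memcpy()` may issue byte or cache-line sized accesses that MMIO registers cannot tolerate. A userspace sketch of the access pattern (plain pointers stand in for `__iomem` and the kernel's accessors):

```c
#include <stddef.h>
#include <stdint.h>

/* Copy 'count' 32-bit words from 'from' to 'to', one u32 load/store at
 * a time. The volatile qualifier models the requirement that each MMIO
 * access actually happens, in order, at its declared width. */
static void demo_ioread32_copy(void *to, const void *from, size_t count)
{
	uint32_t *dst = to;
	const volatile uint32_t *src = from;

	while (count--)
		*dst++ = *src++;
}
```

Note that `count` is in 32-bit units, not bytes, matching the `__iowrite32_copy()` convention.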
+25 -37
include/linux/kexec.h
··· 109 109 }; 110 110 #endif 111 111 112 - struct kexec_sha_region { 113 - unsigned long start; 114 - unsigned long len; 115 - }; 116 - 112 + #ifdef CONFIG_KEXEC_FILE 117 113 struct purgatory_info { 118 114 /* Pointer to elf header of read only purgatory */ 119 115 Elf_Ehdr *ehdr; ··· 125 129 /* Address where purgatory is finally loaded and is executed from */ 126 130 unsigned long purgatory_load_addr; 127 131 }; 132 + 133 + typedef int (kexec_probe_t)(const char *kernel_buf, unsigned long kernel_size); 134 + typedef void *(kexec_load_t)(struct kimage *image, char *kernel_buf, 135 + unsigned long kernel_len, char *initrd, 136 + unsigned long initrd_len, char *cmdline, 137 + unsigned long cmdline_len); 138 + typedef int (kexec_cleanup_t)(void *loader_data); 139 + 140 + #ifdef CONFIG_KEXEC_VERIFY_SIG 141 + typedef int (kexec_verify_sig_t)(const char *kernel_buf, 142 + unsigned long kernel_len); 143 + #endif 144 + 145 + struct kexec_file_ops { 146 + kexec_probe_t *probe; 147 + kexec_load_t *load; 148 + kexec_cleanup_t *cleanup; 149 + #ifdef CONFIG_KEXEC_VERIFY_SIG 150 + kexec_verify_sig_t *verify_sig; 151 + #endif 152 + }; 153 + #endif 128 154 129 155 struct kimage { 130 156 kimage_entry_t head; ··· 179 161 struct kimage_arch arch; 180 162 #endif 181 163 164 + #ifdef CONFIG_KEXEC_FILE 182 165 /* Additional fields for file based kexec syscall */ 183 166 void *kernel_buf; 184 167 unsigned long kernel_buf_len; ··· 198 179 199 180 /* Information for loading purgatory */ 200 181 struct purgatory_info purgatory_info; 201 - }; 202 - 203 - /* 204 - * Keeps track of buffer parameters as provided by caller for requesting 205 - * memory placement of buffer. 
206 - */ 207 - struct kexec_buf { 208 - struct kimage *image; 209 - char *buffer; 210 - unsigned long bufsz; 211 - unsigned long mem; 212 - unsigned long memsz; 213 - unsigned long buf_align; 214 - unsigned long buf_min; 215 - unsigned long buf_max; 216 - bool top_down; /* allocate from top of memory hole */ 217 - }; 218 - 219 - typedef int (kexec_probe_t)(const char *kernel_buf, unsigned long kernel_size); 220 - typedef void *(kexec_load_t)(struct kimage *image, char *kernel_buf, 221 - unsigned long kernel_len, char *initrd, 222 - unsigned long initrd_len, char *cmdline, 223 - unsigned long cmdline_len); 224 - typedef int (kexec_cleanup_t)(void *loader_data); 225 - typedef int (kexec_verify_sig_t)(const char *kernel_buf, 226 - unsigned long kernel_len); 227 - 228 - struct kexec_file_ops { 229 - kexec_probe_t *probe; 230 - kexec_load_t *load; 231 - kexec_cleanup_t *cleanup; 232 - kexec_verify_sig_t *verify_sig; 182 + #endif 233 183 }; 234 184 235 185 /* kexec interface functions */
+2 -2
include/linux/list_lru.h
··· 40 40 spinlock_t lock; 41 41 /* global list, used for the root cgroup in cgroup aware lrus */ 42 42 struct list_lru_one lru; 43 - #ifdef CONFIG_MEMCG_KMEM 43 + #if defined(CONFIG_MEMCG) && !defined(CONFIG_SLOB) 44 44 /* for cgroup aware lrus points to per cgroup lists, otherwise NULL */ 45 45 struct list_lru_memcg *memcg_lrus; 46 46 #endif ··· 48 48 49 49 struct list_lru { 50 50 struct list_lru_node *node; 51 - #ifdef CONFIG_MEMCG_KMEM 51 + #if defined(CONFIG_MEMCG) && !defined(CONFIG_SLOB) 52 52 struct list_head list; 53 53 #endif 54 54 };
+2 -2
include/linux/lz4.h
··· 9 9 * it under the terms of the GNU General Public License version 2 as 10 10 * published by the Free Software Foundation. 11 11 */ 12 - #define LZ4_MEM_COMPRESS (4096 * sizeof(unsigned char *)) 13 - #define LZ4HC_MEM_COMPRESS (65538 * sizeof(unsigned char *)) 12 + #define LZ4_MEM_COMPRESS (16384) 13 + #define LZ4HC_MEM_COMPRESS (262144 + (2 * sizeof(unsigned char *))) 14 14 15 15 /* 16 16 * lz4_compressbound()
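The lz4.h change replaces workspace sizes that scaled with pointer width by (mostly) fixed values. The commit's rationale isn't visible in this hunk, but the arithmetic shows the effect: the old `4096 * sizeof(unsigned char *)` came to 32 KiB on a 64-bit kernel versus 16 KiB on 32-bit, while the new definition is 16 KiB everywhere. These helpers just evaluate both forms for comparison:

```c
/* Old definitions: workspace scaled with sizeof(unsigned char *). */
static unsigned long old_lz4_mem(unsigned long ptr_size)
{
	return 4096 * ptr_size;
}

static unsigned long old_lz4hc_mem(unsigned long ptr_size)
{
	return 65538 * ptr_size;
}

/* New definitions from the hunk: fixed, with a small pointer-sized
 * tail for the HC variant. */
static unsigned long new_lz4_mem(void)
{
	return 16384;
}

static unsigned long new_lz4hc_mem(unsigned long ptr_size)
{
	return 262144 + 2 * ptr_size;
}
```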
+40 -46
include/linux/memcontrol.h
··· 50 50 MEM_CGROUP_STAT_WRITEBACK, /* # of pages under writeback */ 51 51 MEM_CGROUP_STAT_SWAP, /* # of pages, swapped out */ 52 52 MEM_CGROUP_STAT_NSTATS, 53 + /* default hierarchy stats */ 54 + MEMCG_SOCK, 55 + MEMCG_NR_STAT, 53 56 }; 54 57 55 58 struct mem_cgroup_reclaim_cookie { ··· 88 85 MEM_CGROUP_NTARGETS, 89 86 }; 90 87 91 - struct cg_proto { 92 - struct page_counter memory_allocated; /* Current allocated memory. */ 93 - int memory_pressure; 94 - bool active; 95 - }; 96 - 97 88 #ifdef CONFIG_MEMCG 98 89 struct mem_cgroup_stat_cpu { 99 - long count[MEM_CGROUP_STAT_NSTATS]; 90 + long count[MEMCG_NR_STAT]; 100 91 unsigned long events[MEMCG_NR_EVENTS]; 101 92 unsigned long nr_page_events; 102 93 unsigned long targets[MEM_CGROUP_NTARGETS]; ··· 149 152 struct mem_cgroup_threshold_ary *spare; 150 153 }; 151 154 155 + enum memcg_kmem_state { 156 + KMEM_NONE, 157 + KMEM_ALLOCATED, 158 + KMEM_ONLINE, 159 + }; 160 + 152 161 /* 153 162 * The memory controller data structure. The memory controller controls both 154 163 * page cache and RSS per cgroup. We would eventually like to provide ··· 166 163 167 164 /* Accounted resources */ 168 165 struct page_counter memory; 166 + struct page_counter swap; 167 + 168 + /* Legacy consumer-oriented counters */ 169 169 struct page_counter memsw; 170 170 struct page_counter kmem; 171 + struct page_counter tcpmem; 171 172 172 173 /* Normal memory consumption range */ 173 174 unsigned long low; ··· 184 177 185 178 /* vmpressure notifications */ 186 179 struct vmpressure vmpressure; 187 - 188 - /* css_online() has been completed */ 189 - int initialized; 190 180 191 181 /* 192 182 * Should the accounting and control be hierarchical, per subtree? 
··· 231 227 */ 232 228 struct mem_cgroup_stat_cpu __percpu *stat; 233 229 234 - #if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_INET) 235 - struct cg_proto tcp_mem; 236 - #endif 237 - #if defined(CONFIG_MEMCG_KMEM) 230 + unsigned long socket_pressure; 231 + 232 + /* Legacy tcp memory accounting */ 233 + bool tcpmem_active; 234 + int tcpmem_pressure; 235 + 236 + #ifndef CONFIG_SLOB 238 237 /* Index in the kmem_cache->memcg_params.memcg_caches array */ 239 238 int kmemcg_id; 240 - bool kmem_acct_activated; 241 - bool kmem_acct_active; 239 + enum memcg_kmem_state kmem_state; 242 240 #endif 243 241 244 242 int last_scanned_node; ··· 253 247 #ifdef CONFIG_CGROUP_WRITEBACK 254 248 struct list_head cgwb_list; 255 249 struct wb_domain cgwb_domain; 256 - #endif 257 - 258 - #ifdef CONFIG_INET 259 - unsigned long socket_pressure; 260 250 #endif 261 251 262 252 /* List of events which userspace want to receive */ ··· 358 356 return !cgroup_subsys_enabled(memory_cgrp_subsys); 359 357 } 360 358 359 + static inline bool mem_cgroup_online(struct mem_cgroup *memcg) 360 + { 361 + if (mem_cgroup_disabled()) 362 + return true; 363 + return !!(memcg->css.flags & CSS_ONLINE); 364 + } 365 + 361 366 /* 362 367 * For memory reclaim. 
363 368 */ ··· 372 363 373 364 void mem_cgroup_update_lru_size(struct lruvec *lruvec, enum lru_list lru, 374 365 int nr_pages); 375 - 376 - static inline bool mem_cgroup_lruvec_online(struct lruvec *lruvec) 377 - { 378 - struct mem_cgroup_per_zone *mz; 379 - struct mem_cgroup *memcg; 380 - 381 - if (mem_cgroup_disabled()) 382 - return true; 383 - 384 - mz = container_of(lruvec, struct mem_cgroup_per_zone, lruvec); 385 - memcg = mz->memcg; 386 - 387 - return !!(memcg->css.flags & CSS_ONLINE); 388 - } 389 366 390 367 static inline 391 368 unsigned long mem_cgroup_get_lru_size(struct lruvec *lruvec, enum lru_list lru) ··· 585 590 return true; 586 591 } 587 592 588 - static inline bool 589 - mem_cgroup_inactive_anon_is_low(struct lruvec *lruvec) 593 + static inline bool mem_cgroup_online(struct mem_cgroup *memcg) 590 594 { 591 595 return true; 592 596 } 593 597 594 - static inline bool mem_cgroup_lruvec_online(struct lruvec *lruvec) 598 + static inline bool 599 + mem_cgroup_inactive_anon_is_low(struct lruvec *lruvec) 595 600 { 596 601 return true; 597 602 } ··· 702 707 void sock_release_memcg(struct sock *sk); 703 708 bool mem_cgroup_charge_skmem(struct mem_cgroup *memcg, unsigned int nr_pages); 704 709 void mem_cgroup_uncharge_skmem(struct mem_cgroup *memcg, unsigned int nr_pages); 705 - #if defined(CONFIG_MEMCG) && defined(CONFIG_INET) 710 + #ifdef CONFIG_MEMCG 706 711 extern struct static_key_false memcg_sockets_enabled_key; 707 712 #define mem_cgroup_sockets_enabled static_branch_unlikely(&memcg_sockets_enabled_key) 708 713 static inline bool mem_cgroup_under_socket_pressure(struct mem_cgroup *memcg) 709 714 { 710 - #ifdef CONFIG_MEMCG_KMEM 711 - if (memcg->tcp_mem.memory_pressure) 715 + if (!cgroup_subsys_on_dfl(memory_cgrp_subsys) && memcg->tcpmem_pressure) 712 716 return true; 713 - #endif 714 717 do { 715 718 if (time_before(jiffies, memcg->socket_pressure)) 716 719 return true; ··· 723 730 } 724 731 #endif 725 732 726 - #ifdef CONFIG_MEMCG_KMEM 733 + #if 
defined(CONFIG_MEMCG) && !defined(CONFIG_SLOB) 727 734 extern struct static_key_false memcg_kmem_enabled_key; 728 735 729 736 extern int memcg_nr_cache_ids; ··· 743 750 return static_branch_unlikely(&memcg_kmem_enabled_key); 744 751 } 745 752 746 - static inline bool memcg_kmem_is_active(struct mem_cgroup *memcg) 753 + static inline bool memcg_kmem_online(struct mem_cgroup *memcg) 747 754 { 748 - return memcg->kmem_acct_active; 755 + return memcg->kmem_state == KMEM_ONLINE; 749 756 } 750 757 751 758 /* ··· 843 850 return false; 844 851 } 845 852 846 - static inline bool memcg_kmem_is_active(struct mem_cgroup *memcg) 853 + static inline bool memcg_kmem_online(struct mem_cgroup *memcg) 847 854 { 848 855 return false; 849 856 } ··· 879 886 static inline void memcg_kmem_put_cache(struct kmem_cache *cachep) 880 887 { 881 888 } 882 - #endif /* CONFIG_MEMCG_KMEM */ 889 + #endif /* CONFIG_MEMCG && !CONFIG_SLOB */ 890 + 883 891 #endif /* _LINUX_MEMCONTROL_H */
+23 -1
include/linux/ptrace.h
··· 57 57 #define PTRACE_MODE_READ 0x01 58 58 #define PTRACE_MODE_ATTACH 0x02 59 59 #define PTRACE_MODE_NOAUDIT 0x04 60 - /* Returns true on success, false on denial. */ 60 + #define PTRACE_MODE_FSCREDS 0x08 61 + #define PTRACE_MODE_REALCREDS 0x10 62 + 63 + /* shorthands for READ/ATTACH and FSCREDS/REALCREDS combinations */ 64 + #define PTRACE_MODE_READ_FSCREDS (PTRACE_MODE_READ | PTRACE_MODE_FSCREDS) 65 + #define PTRACE_MODE_READ_REALCREDS (PTRACE_MODE_READ | PTRACE_MODE_REALCREDS) 66 + #define PTRACE_MODE_ATTACH_FSCREDS (PTRACE_MODE_ATTACH | PTRACE_MODE_FSCREDS) 67 + #define PTRACE_MODE_ATTACH_REALCREDS (PTRACE_MODE_ATTACH | PTRACE_MODE_REALCREDS) 68 + 69 + /** 70 + * ptrace_may_access - check whether the caller is permitted to access 71 + * a target task. 72 + * @task: target task 73 + * @mode: selects type of access and caller credentials 74 + * 75 + * Returns true on success, false on denial. 76 + * 77 + * One of the flags PTRACE_MODE_FSCREDS and PTRACE_MODE_REALCREDS must 78 + * be set in @mode to specify whether the access was requested through 79 + * a filesystem syscall (should use effective capabilities and fsuid 80 + * of the caller) or through an explicit syscall such as 81 + * process_vm_writev or ptrace (and should use the real credentials). 82 + */ 61 83 extern bool ptrace_may_access(struct task_struct *task, unsigned int mode); 62 84 63 85 static inline int ptrace_reparented(struct task_struct *child)
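The new FSCREDS/REALCREDS bits are ordinary bitmask flags OR-ed with the access type, and the ptrace core now insists that callers pick exactly one of them. A minimal userspace sketch of that decomposition and validity check (constants copied from the diff; `mode_is_valid` is an illustrative stand-in for the `WARN` check added to `__ptrace_may_access()`):

```c
#include <assert.h>

/* Flag values as introduced in include/linux/ptrace.h. */
#define PTRACE_MODE_READ      0x01
#define PTRACE_MODE_ATTACH    0x02
#define PTRACE_MODE_NOAUDIT   0x04
#define PTRACE_MODE_FSCREDS   0x08
#define PTRACE_MODE_REALCREDS 0x10

#define PTRACE_MODE_READ_FSCREDS     (PTRACE_MODE_READ | PTRACE_MODE_FSCREDS)
#define PTRACE_MODE_READ_REALCREDS   (PTRACE_MODE_READ | PTRACE_MODE_REALCREDS)
#define PTRACE_MODE_ATTACH_REALCREDS (PTRACE_MODE_ATTACH | PTRACE_MODE_REALCREDS)

/* Model of the sanity check in __ptrace_may_access(): a mode is only
 * acceptable when exactly one of FSCREDS/REALCREDS is set. */
static int mode_is_valid(unsigned int mode)
{
    return !(mode & PTRACE_MODE_FSCREDS) != !(mode & PTRACE_MODE_REALCREDS);
}
```

The kernel check is written as the negation (`==` means "deny"); the helper above returns 1 for every shorthand defined by the patch and 0 for a bare `PTRACE_MODE_READ`, which is why all in-tree callers had to be converted in the same series.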
+1 -1
include/linux/radix-tree.h
··· 154 154 * radix_tree_gang_lookup_tag_slot 155 155 * radix_tree_tagged 156 156 * 157 - * The first 7 functions are able to be called locklessly, using RCU. The 157 + * The first 8 functions are able to be called locklessly, using RCU. The 158 158 * caller must ensure calls to these functions are made within rcu_read_lock() 159 159 * regions. Other readers (lock-free or otherwise) and modifications may be 160 160 * running concurrently.
+1 -1
include/linux/rbtree.h
··· 50 50 #define RB_ROOT (struct rb_root) { NULL, } 51 51 #define rb_entry(ptr, type, member) container_of(ptr, type, member) 52 52 53 - #define RB_EMPTY_ROOT(root) ((root)->rb_node == NULL) 53 + #define RB_EMPTY_ROOT(root) (READ_ONCE((root)->rb_node) == NULL) 54 54 55 55 /* 'empty' nodes are nodes that are known not to be inserted in an rbtree */ 56 56 #define RB_EMPTY_NODE(node) \
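Wrapping the root-pointer load in `READ_ONCE()` guarantees a single, untorn load, so a lockless reader cannot have the compiler cache or re-read `rb_node` across the emptiness check. A userspace sketch of the volatile-cast idiom (the kernel's real `READ_ONCE()` is more elaborate; this is the classic core of it):

```c
#include <assert.h>
#include <stddef.h>

struct rb_node { struct rb_node *rb_left, *rb_right; }; /* simplified */
struct rb_root { struct rb_node *rb_node; };

/* Minimal stand-in for the kernel's READ_ONCE(): a volatile access
 * forbids the compiler from merging, repeating or re-ordering the load. */
#define READ_ONCE(x) (*(volatile __typeof__(x) *)&(x))

#define RB_EMPTY_ROOT(root) (READ_ONCE((root)->rb_node) == NULL)

/* Exercise the macro: non-empty root, then cleared root. */
static int empty_after_clear(void)
{
    struct rb_node n = { NULL, NULL };
    struct rb_root r = { &n };
    int nonempty_seen = !RB_EMPTY_ROOT(&r);

    r.rb_node = NULL;
    return nonempty_seen && RB_EMPTY_ROOT(&r);
}
```

Functionally nothing changes for locked callers; the annotation only matters for readers running concurrently with insert/erase.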
+5 -2
include/linux/sched.h
··· 1476 1476 unsigned in_iowait:1; 1477 1477 #ifdef CONFIG_MEMCG 1478 1478 unsigned memcg_may_oom:1; 1479 - #endif 1480 - #ifdef CONFIG_MEMCG_KMEM 1479 + #ifndef CONFIG_SLOB 1481 1480 unsigned memcg_kmem_skip_account:1; 1481 + #endif 1482 1482 #endif 1483 1483 #ifdef CONFIG_COMPAT_BRK 1484 1484 unsigned brk_randomized:1; ··· 1642 1642 unsigned int lockdep_recursion; 1643 1643 struct held_lock held_locks[MAX_LOCK_DEPTH]; 1644 1644 gfp_t lockdep_reclaim_gfp; 1645 + #endif 1646 + #ifdef CONFIG_UBSAN 1647 + unsigned int in_ubsan; 1645 1648 #endif 1646 1649 1647 1650 /* journalling filesystem info */
+3 -3
include/linux/shm.h
··· 52 52 53 53 long do_shmat(int shmid, char __user *shmaddr, int shmflg, unsigned long *addr, 54 54 unsigned long shmlba); 55 - int is_file_shm_hugepages(struct file *file); 55 + bool is_file_shm_hugepages(struct file *file); 56 56 void exit_shm(struct task_struct *task); 57 57 #define shm_init_task(task) INIT_LIST_HEAD(&(task)->sysvshm.shm_clist) 58 58 #else ··· 66 66 { 67 67 return -ENOSYS; 68 68 } 69 - static inline int is_file_shm_hugepages(struct file *file) 69 + static inline bool is_file_shm_hugepages(struct file *file) 70 70 { 71 - return 0; 71 + return false; 72 72 } 73 73 static inline void exit_shm(struct task_struct *task) 74 74 {
+1 -1
include/linux/slab.h
··· 86 86 #else 87 87 # define SLAB_FAILSLAB 0x00000000UL 88 88 #endif 89 - #ifdef CONFIG_MEMCG_KMEM 89 + #if defined(CONFIG_MEMCG) && !defined(CONFIG_SLOB) 90 90 # define SLAB_ACCOUNT 0x04000000UL /* Account to memcg */ 91 91 #else 92 92 # define SLAB_ACCOUNT 0x00000000UL
+2 -1
include/linux/slab_def.h
··· 69 69 */ 70 70 int obj_offset; 71 71 #endif /* CONFIG_DEBUG_SLAB */ 72 - #ifdef CONFIG_MEMCG_KMEM 72 + 73 + #ifdef CONFIG_MEMCG 73 74 struct memcg_cache_params memcg_params; 74 75 #endif 75 76
+1 -1
include/linux/slub_def.h
··· 84 84 #ifdef CONFIG_SYSFS 85 85 struct kobject kobj; /* For sysfs */ 86 86 #endif 87 - #ifdef CONFIG_MEMCG_KMEM 87 + #ifdef CONFIG_MEMCG 88 88 struct memcg_cache_params memcg_params; 89 89 int max_attr_size; /* for propagation, maximum size of a stored attr */ 90 90 #ifdef CONFIG_SYSFS
+50 -26
include/linux/swap.h
··· 350 350 351 351 extern int kswapd_run(int nid); 352 352 extern void kswapd_stop(int nid); 353 - #ifdef CONFIG_MEMCG 354 - static inline int mem_cgroup_swappiness(struct mem_cgroup *memcg) 355 - { 356 - /* root ? */ 357 - if (mem_cgroup_disabled() || !memcg->css.parent) 358 - return vm_swappiness; 359 353 360 - return memcg->swappiness; 361 - } 362 - 363 - #else 364 - static inline int mem_cgroup_swappiness(struct mem_cgroup *mem) 365 - { 366 - return vm_swappiness; 367 - } 368 - #endif 369 - #ifdef CONFIG_MEMCG_SWAP 370 - extern void mem_cgroup_swapout(struct page *page, swp_entry_t entry); 371 - extern void mem_cgroup_uncharge_swap(swp_entry_t entry); 372 - #else 373 - static inline void mem_cgroup_swapout(struct page *page, swp_entry_t entry) 374 - { 375 - } 376 - static inline void mem_cgroup_uncharge_swap(swp_entry_t entry) 377 - { 378 - } 379 - #endif 380 354 #ifdef CONFIG_SWAP 381 355 /* linux/mm/page_io.c */ 382 356 extern int swap_readpage(struct page *); ··· 529 555 } 530 556 531 557 #endif /* CONFIG_SWAP */ 558 + 559 + #ifdef CONFIG_MEMCG 560 + static inline int mem_cgroup_swappiness(struct mem_cgroup *memcg) 561 + { 562 + /* root ? 
*/ 563 + if (mem_cgroup_disabled() || !memcg->css.parent) 564 + return vm_swappiness; 565 + 566 + return memcg->swappiness; 567 + } 568 + 569 + #else 570 + static inline int mem_cgroup_swappiness(struct mem_cgroup *mem) 571 + { 572 + return vm_swappiness; 573 + } 574 + #endif 575 + 576 + #ifdef CONFIG_MEMCG_SWAP 577 + extern void mem_cgroup_swapout(struct page *page, swp_entry_t entry); 578 + extern int mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry); 579 + extern void mem_cgroup_uncharge_swap(swp_entry_t entry); 580 + extern long mem_cgroup_get_nr_swap_pages(struct mem_cgroup *memcg); 581 + extern bool mem_cgroup_swap_full(struct page *page); 582 + #else 583 + static inline void mem_cgroup_swapout(struct page *page, swp_entry_t entry) 584 + { 585 + } 586 + 587 + static inline int mem_cgroup_try_charge_swap(struct page *page, 588 + swp_entry_t entry) 589 + { 590 + return 0; 591 + } 592 + 593 + static inline void mem_cgroup_uncharge_swap(swp_entry_t entry) 594 + { 595 + } 596 + 597 + static inline long mem_cgroup_get_nr_swap_pages(struct mem_cgroup *memcg) 598 + { 599 + return get_nr_swap_pages(); 600 + } 601 + 602 + static inline bool mem_cgroup_swap_full(struct page *page) 603 + { 604 + return vm_swap_full(); 605 + } 606 + #endif 607 + 532 608 #endif /* __KERNEL__*/ 533 609 #endif /* _LINUX_SWAP_H */
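The relocated `mem_cgroup_swappiness()` keeps the same fallback rule: the root group (no parent) and a disabled controller both fall back to the global `vm_swappiness`. A userspace model with hypothetical field names standing in for the real `struct mem_cgroup`:

```c
#include <assert.h>
#include <stddef.h>

static int vm_swappiness = 60;            /* global default, as in mm */
static int mem_cgroup_disabled_flag = 0;  /* models mem_cgroup_disabled() */

struct mem_cgroup_model {                 /* hypothetical miniature */
    struct mem_cgroup_model *parent;      /* NULL for the root group */
    int swappiness;
};

static int model_swappiness(const struct mem_cgroup_model *memcg)
{
    /* root ? */
    if (mem_cgroup_disabled_flag || !memcg->parent)
        return vm_swappiness;
    return memcg->swappiness;
}

static struct mem_cgroup_model model_root  = { NULL, 10 };
static struct mem_cgroup_model model_child = { &model_root, 25 };
```

Note that the root group's own `swappiness` field is deliberately ignored; only non-root groups carry a per-group setting.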
-9
include/net/tcp_memcontrol.h
··· 1 - #ifndef _TCP_MEMCG_H 2 - #define _TCP_MEMCG_H 3 - 4 - struct cgroup_subsys; 5 - struct mem_cgroup; 6 - 7 - int tcp_init_cgroup(struct mem_cgroup *memcg, struct cgroup_subsys *ss); 8 - void tcp_destroy_cgroup(struct mem_cgroup *memcg); 9 - #endif /* _TCP_MEMCG_H */
+3
include/uapi/linux/eventpoll.h
··· 26 26 #define EPOLL_CTL_DEL 2 27 27 #define EPOLL_CTL_MOD 3 28 28 29 + /* Set exclusive wakeup mode for the target file descriptor */ 30 + #define EPOLLEXCLUSIVE (1 << 28) 31 + 29 32 /* 30 33 * Request the handling of system wakeup events so as to prevent system suspends 31 34 * from happening while those events are being processed.
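`EPOLLEXCLUSIVE` is set at `EPOLL_CTL_ADD` time and asks the kernel to wake only a subset (at least one) of the epoll instances attached to the same file, rather than all of them, mitigating thundering-herd wakeups. A minimal sketch, assuming a Linux kernel new enough to honor the flag; the value is defined locally in case installed headers predate this patch:

```c
#include <assert.h>
#include <unistd.h>
#include <sys/epoll.h>
#include <sys/eventfd.h>

#ifndef EPOLLEXCLUSIVE
#define EPOLLEXCLUSIVE (1u << 28)   /* value from the patch */
#endif

/* Arm an eventfd with EPOLLEXCLUSIVE, make it readable, and report how
 * many events one epoll_wait() call sees. Returns -1 on syscall error. */
static int exclusive_wakeup_demo(void)
{
    int efd = eventfd(0, 0);
    int epfd = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN | EPOLLEXCLUSIVE,
                              .data.fd = efd };
    int n;

    if (efd < 0 || epfd < 0)
        return -1;
    if (epoll_ctl(epfd, EPOLL_CTL_ADD, efd, &ev) < 0)
        return -1;
    if (eventfd_write(efd, 1) < 0)      /* fire the event */
        return -1;
    n = epoll_wait(epfd, &ev, 8, 0);
    close(efd);
    close(epfd);
    return n;
}
```

With a single epoll instance the behavior is indistinguishable from a plain `EPOLLIN` add; the flag only changes which of several waiting instances get woken.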
+8 -15
init/Kconfig
··· 964 964 For those who want to have the feature enabled by default should 965 965 select this option (if, for some reason, they need to disable it 966 966 then swapaccount=0 does the trick). 967 - config MEMCG_KMEM 968 - bool "Memory Resource Controller Kernel Memory accounting" 969 - depends on MEMCG 970 - depends on SLUB || SLAB 971 - help 972 - The Kernel Memory extension for Memory Resource Controller can limit 973 - the amount of memory used by kernel objects in the system. Those are 974 - fundamentally different from the entities handled by the standard 975 - Memory Controller, which are page-based, and can be swapped. Users of 976 - the kmem extension can use it to guarantee that no group of processes 977 - will ever exhaust kernel resources alone. 978 967 979 968 config BLK_CGROUP 980 969 bool "IO controller" ··· 1059 1070 help 1060 1071 Provides a way to freeze and unfreeze all tasks in a 1061 1072 cgroup. 1073 + 1074 + This option affects the ORIGINAL cgroup interface. The cgroup2 memory 1075 + controller includes important in-kernel memory consumers per default. 1076 + 1077 + If you're using cgroup2, say N. 1062 1078 1063 1079 config CGROUP_HUGETLB 1064 1080 bool "HugeTLB controller" ··· 1176 1182 to provide different user info for different servers. 1177 1183 1178 1184 When user namespaces are enabled in the kernel it is 1179 - recommended that the MEMCG and MEMCG_KMEM options also be 1180 - enabled and that user-space use the memory control groups to 1181 - limit the amount of memory a memory unprivileged users can 1182 - use. 1185 + recommended that the MEMCG option also be enabled and that 1186 + user-space use the memory control groups to limit the amount 1187 + of memory a memory unprivileged users can use. 1183 1188 1184 1189 If unsure, say N. 1185 1190
+2 -2
init/do_mounts.h
··· 57 57 58 58 #ifdef CONFIG_BLK_DEV_INITRD 59 59 60 - int __init initrd_load(void); 60 + bool __init initrd_load(void); 61 61 62 62 #else 63 63 64 - static inline int initrd_load(void) { return 0; } 64 + static inline bool initrd_load(void) { return false; } 65 65 66 66 #endif 67 67
+3 -3
init/do_mounts_initrd.c
··· 116 116 } 117 117 } 118 118 119 - int __init initrd_load(void) 119 + bool __init initrd_load(void) 120 120 { 121 121 if (mount_initrd) { 122 122 create_dev("/dev/ram", Root_RAM0); ··· 129 129 if (rd_load_image("/initrd.image") && ROOT_DEV != Root_RAM0) { 130 130 sys_unlink("/initrd.image"); 131 131 handle_initrd(); 132 - return 1; 132 + return true; 133 133 } 134 134 } 135 135 sys_unlink("/initrd.image"); 136 - return 0; 136 + return false; 137 137 }
+5 -5
init/main.c
··· 164 164 165 165 extern const struct obs_kernel_param __setup_start[], __setup_end[]; 166 166 167 - static int __init obsolete_checksetup(char *line) 167 + static bool __init obsolete_checksetup(char *line) 168 168 { 169 169 const struct obs_kernel_param *p; 170 - int had_early_param = 0; 170 + bool had_early_param = false; 171 171 172 172 p = __setup_start; 173 173 do { ··· 179 179 * Keep iterating, as we can have early 180 180 * params and __setups of same names 8( */ 181 181 if (line[n] == '\0' || line[n] == '=') 182 - had_early_param = 1; 182 + had_early_param = true; 183 183 } else if (!p->setup_func) { 184 184 pr_warn("Parameter %s is obsolete, ignored\n", 185 185 p->str); 186 - return 1; 186 + return true; 187 187 } else if (p->setup_func(line + n)) 188 - return 1; 188 + return true; 189 189 } 190 190 p++; 191 191 } while (p < __setup_end);
+1 -1
ipc/shm.c
··· 459 459 .fallocate = shm_fallocate, 460 460 }; 461 461 462 - int is_file_shm_hugepages(struct file *file) 462 + bool is_file_shm_hugepages(struct file *file) 463 463 { 464 464 return file->f_op == &shm_file_operations_huge; 465 465 }
+13 -51
kernel/cpu.c
··· 759 759 EXPORT_SYMBOL(cpu_all_bits); 760 760 761 761 #ifdef CONFIG_INIT_ALL_POSSIBLE 762 - static DECLARE_BITMAP(cpu_possible_bits, CONFIG_NR_CPUS) __read_mostly 763 - = CPU_BITS_ALL; 762 + struct cpumask __cpu_possible_mask __read_mostly 763 + = {CPU_BITS_ALL}; 764 764 #else 765 - static DECLARE_BITMAP(cpu_possible_bits, CONFIG_NR_CPUS) __read_mostly; 765 + struct cpumask __cpu_possible_mask __read_mostly; 766 766 #endif 767 - const struct cpumask *const cpu_possible_mask = to_cpumask(cpu_possible_bits); 768 - EXPORT_SYMBOL(cpu_possible_mask); 767 + EXPORT_SYMBOL(__cpu_possible_mask); 769 768 770 - static DECLARE_BITMAP(cpu_online_bits, CONFIG_NR_CPUS) __read_mostly; 771 - const struct cpumask *const cpu_online_mask = to_cpumask(cpu_online_bits); 772 - EXPORT_SYMBOL(cpu_online_mask); 769 + struct cpumask __cpu_online_mask __read_mostly; 770 + EXPORT_SYMBOL(__cpu_online_mask); 773 771 774 - static DECLARE_BITMAP(cpu_present_bits, CONFIG_NR_CPUS) __read_mostly; 775 - const struct cpumask *const cpu_present_mask = to_cpumask(cpu_present_bits); 776 - EXPORT_SYMBOL(cpu_present_mask); 772 + struct cpumask __cpu_present_mask __read_mostly; 773 + EXPORT_SYMBOL(__cpu_present_mask); 777 774 778 - static DECLARE_BITMAP(cpu_active_bits, CONFIG_NR_CPUS) __read_mostly; 779 - const struct cpumask *const cpu_active_mask = to_cpumask(cpu_active_bits); 780 - EXPORT_SYMBOL(cpu_active_mask); 781 - 782 - void set_cpu_possible(unsigned int cpu, bool possible) 783 - { 784 - if (possible) 785 - cpumask_set_cpu(cpu, to_cpumask(cpu_possible_bits)); 786 - else 787 - cpumask_clear_cpu(cpu, to_cpumask(cpu_possible_bits)); 788 - } 789 - 790 - void set_cpu_present(unsigned int cpu, bool present) 791 - { 792 - if (present) 793 - cpumask_set_cpu(cpu, to_cpumask(cpu_present_bits)); 794 - else 795 - cpumask_clear_cpu(cpu, to_cpumask(cpu_present_bits)); 796 - } 797 - 798 - void set_cpu_online(unsigned int cpu, bool online) 799 - { 800 - if (online) { 801 - cpumask_set_cpu(cpu, 
to_cpumask(cpu_online_bits)); 802 - cpumask_set_cpu(cpu, to_cpumask(cpu_active_bits)); 803 - } else { 804 - cpumask_clear_cpu(cpu, to_cpumask(cpu_online_bits)); 805 - } 806 - } 807 - 808 - void set_cpu_active(unsigned int cpu, bool active) 809 - { 810 - if (active) 811 - cpumask_set_cpu(cpu, to_cpumask(cpu_active_bits)); 812 - else 813 - cpumask_clear_cpu(cpu, to_cpumask(cpu_active_bits)); 814 - } 775 + struct cpumask __cpu_active_mask __read_mostly; 776 + EXPORT_SYMBOL(__cpu_active_mask); 815 777 816 778 void init_cpu_present(const struct cpumask *src) 817 779 { 818 - cpumask_copy(to_cpumask(cpu_present_bits), src); 780 + cpumask_copy(&__cpu_present_mask, src); 819 781 } 820 782 821 783 void init_cpu_possible(const struct cpumask *src) 822 784 { 823 - cpumask_copy(to_cpumask(cpu_possible_bits), src); 785 + cpumask_copy(&__cpu_possible_mask, src); 824 786 } 825 787 826 788 void init_cpu_online(const struct cpumask *src) 827 789 { 828 - cpumask_copy(to_cpumask(cpu_online_bits), src); 790 + cpumask_copy(&__cpu_online_mask, src); 829 791 }
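The cpu.c conversion replaces private bitmaps plus exported `const` pointers with directly exported `struct cpumask` objects, which lets `set_cpu_possible()` and friends become trivial accessors elsewhere (hence their removal here). A userspace model of the resulting shape, with a bitmap size chosen purely for illustration:

```c
#include <assert.h>

#define MODEL_NR_CPUS 64
#define BITS_PER_LONG (8 * (int)sizeof(unsigned long))
#define MODEL_LONGS ((MODEL_NR_CPUS + BITS_PER_LONG - 1) / BITS_PER_LONG)

struct cpumask_model { unsigned long bits[MODEL_LONGS]; };

/* After the patch: the mask is a plain global written through helpers. */
static struct cpumask_model cpu_online_mask_model;

static void model_set_cpu_online(int cpu, int online)
{
    unsigned long bit = 1UL << (cpu % BITS_PER_LONG);

    if (online)
        cpu_online_mask_model.bits[cpu / BITS_PER_LONG] |= bit;
    else
        cpu_online_mask_model.bits[cpu / BITS_PER_LONG] &= ~bit;
}

static int model_cpu_online(int cpu)
{
    return !!(cpu_online_mask_model.bits[cpu / BITS_PER_LONG] &
              (1UL << (cpu % BITS_PER_LONG)));
}

/* Set, test, clear, test again; returns 1 if all observations match. */
static int model_roundtrip(void)
{
    int seen;

    model_set_cpu_online(3, 1);
    seen = model_cpu_online(3);
    model_set_cpu_online(3, 0);
    return seen && !model_cpu_online(3);
}
```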
+1 -1
kernel/events/core.c
··· 3376 3376 3377 3377 /* Reuse ptrace permission checks for now. */ 3378 3378 err = -EACCES; 3379 - if (!ptrace_may_access(task, PTRACE_MODE_READ)) 3379 + if (!ptrace_may_access(task, PTRACE_MODE_READ_REALCREDS)) 3380 3380 goto errout; 3381 3381 3382 3382 return task;
+1 -4
kernel/exit.c
··· 59 59 #include <asm/pgtable.h> 60 60 #include <asm/mmu_context.h> 61 61 62 - static void exit_mm(struct task_struct *tsk); 63 - 64 62 static void __unhash_process(struct task_struct *p, bool group_dead) 65 63 { 66 64 nr_threads--; ··· 1118 1120 static int *task_stopped_code(struct task_struct *p, bool ptrace) 1119 1121 { 1120 1122 if (ptrace) { 1121 - if (task_is_stopped_or_traced(p) && 1122 - !(p->jobctl & JOBCTL_LISTENING)) 1123 + if (task_is_traced(p) && !(p->jobctl & JOBCTL_LISTENING)) 1123 1124 return &p->exit_code; 1124 1125 } else { 1125 1126 if (p->signal->flags & SIGNAL_STOP_STOPPED)
+1 -1
kernel/futex.c
··· 2884 2884 } 2885 2885 2886 2886 ret = -EPERM; 2887 - if (!ptrace_may_access(p, PTRACE_MODE_READ)) 2887 + if (!ptrace_may_access(p, PTRACE_MODE_READ_REALCREDS)) 2888 2888 goto err_unlock; 2889 2889 2890 2890 head = p->robust_list;
+1 -1
kernel/futex_compat.c
··· 155 155 } 156 156 157 157 ret = -EPERM; 158 - if (!ptrace_may_access(p, PTRACE_MODE_READ)) 158 + if (!ptrace_may_access(p, PTRACE_MODE_READ_REALCREDS)) 159 159 goto err_unlock; 160 160 161 161 head = p->compat_robust_list;
+2 -2
kernel/kcmp.c
··· 122 122 &task2->signal->cred_guard_mutex); 123 123 if (ret) 124 124 goto err; 125 - if (!ptrace_may_access(task1, PTRACE_MODE_READ) || 126 - !ptrace_may_access(task2, PTRACE_MODE_READ)) { 125 + if (!ptrace_may_access(task1, PTRACE_MODE_READ_REALCREDS) || 126 + !ptrace_may_access(task2, PTRACE_MODE_READ_REALCREDS)) { 127 127 ret = -EPERM; 128 128 goto err_unlock; 129 129 }
+5 -5
kernel/kexec.c
··· 63 63 if (ret) 64 64 goto out_free_image; 65 65 66 - ret = sanity_check_segment_list(image); 67 - if (ret) 68 - goto out_free_image; 69 - 70 - /* Enable the special crash kernel control page allocation policy. */ 71 66 if (kexec_on_panic) { 67 + /* Enable special crash kernel control page alloc policy. */ 72 68 image->control_page = crashk_res.start; 73 69 image->type = KEXEC_TYPE_CRASH; 74 70 } 71 + 72 + ret = sanity_check_segment_list(image); 73 + if (ret) 74 + goto out_free_image; 75 75 76 76 /* 77 77 * Find a location for the control code buffer, and add it
+2 -5
kernel/kexec_core.c
··· 310 310 311 311 void kimage_free_page_list(struct list_head *list) 312 312 { 313 - struct list_head *pos, *next; 313 + struct page *page, *next; 314 314 315 - list_for_each_safe(pos, next, list) { 316 - struct page *page; 317 - 318 - page = list_entry(pos, struct page, lru); 315 + list_for_each_entry_safe(page, next, list, lru) { 319 316 list_del(&page->lru); 320 317 kimage_free_pages(page); 321 318 }
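`list_for_each_entry_safe()` folds the `list_entry()` lookup into the iterator and caches the next element up front, so the current node can be unlinked and freed mid-walk, exactly what `kimage_free_page_list()` needs. A self-contained userspace rendition with the `container_of` machinery spelled out:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

struct list_head { struct list_head *next, *prev; };

#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

#define list_for_each_entry_safe(pos, n, head, member)                   \
    for (pos = container_of((head)->next, __typeof__(*pos), member),     \
         n = container_of(pos->member.next, __typeof__(*pos), member);   \
         &pos->member != (head);                                         \
         pos = n, n = container_of(n->member.next, __typeof__(*n), member))

struct page_model { int id; struct list_head lru; };

static void list_add_tail_model(struct list_head *new, struct list_head *head)
{
    new->prev = head->prev; new->next = head;
    head->prev->next = new; head->prev = new;
}

/* Build a list of n nodes, then free every one during iteration,
 * as kimage_free_page_list() now does; returns how many were freed. */
static int free_all(int n)
{
    struct list_head head = { &head, &head };
    struct page_model *pos, *nxt;
    int i, freed = 0;

    for (i = 0; i < n; i++) {
        struct page_model *p = malloc(sizeof(*p));
        p->id = i;
        list_add_tail_model(&p->lru, &head);
    }
    list_for_each_entry_safe(pos, nxt, &head, lru) {
        /* unlink, then free: safe because nxt was saved beforehand */
        pos->lru.prev->next = pos->lru.next;
        pos->lru.next->prev = pos->lru.prev;
        free(pos);
        freed++;
    }
    return freed;
}
```

The plain `list_for_each_entry()` would read `pos->member.next` from freed memory on the step after `free(pos)`; caching `nxt` before the body runs is the whole point of the `_safe` variant.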
+2
kernel/kexec_file.c
··· 109 109 return -EINVAL; 110 110 } 111 111 112 + #ifdef CONFIG_KEXEC_VERIFY_SIG 112 113 int __weak arch_kexec_kernel_verify_sig(struct kimage *image, void *buf, 113 114 unsigned long buf_len) 114 115 { 115 116 return -EKEYREJECTED; 116 117 } 118 + #endif 117 119 118 120 /* Apply relocations of type RELA */ 119 121 int __weak
+21
kernel/kexec_internal.h
··· 15 15 extern struct mutex kexec_mutex; 16 16 17 17 #ifdef CONFIG_KEXEC_FILE 18 + struct kexec_sha_region { 19 + unsigned long start; 20 + unsigned long len; 21 + }; 22 + 23 + /* 24 + * Keeps track of buffer parameters as provided by caller for requesting 25 + * memory placement of buffer. 26 + */ 27 + struct kexec_buf { 28 + struct kimage *image; 29 + char *buffer; 30 + unsigned long bufsz; 31 + unsigned long mem; 32 + unsigned long memsz; 33 + unsigned long buf_align; 34 + unsigned long buf_min; 35 + unsigned long buf_max; 36 + bool top_down; /* allocate from top of memory hole */ 37 + }; 38 + 18 39 void kimage_file_post_load_cleanup(struct kimage *image); 19 40 #else /* CONFIG_KEXEC_FILE */ 20 41 static inline void kimage_file_post_load_cleanup(struct kimage *image) { }
+5 -5
kernel/printk/printk.c
··· 233 233 u8 facility; /* syslog facility */ 234 234 u8 flags:5; /* internal record flags */ 235 235 u8 level:3; /* syslog level */ 236 - }; 236 + } 237 + #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS 238 + __packed __aligned(4) 239 + #endif 240 + ; 237 241 238 242 /* 239 243 * The logbuf_lock protects kmsg buffer, indices, counters. This can be taken ··· 278 274 #define LOG_FACILITY(v) ((v) >> 3 & 0xff) 279 275 280 276 /* record buffer */ 281 - #if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) 282 - #define LOG_ALIGN 4 283 - #else 284 277 #define LOG_ALIGN __alignof__(struct printk_log) 285 - #endif 286 278 #define __LOG_BUF_LEN (1 << CONFIG_LOG_BUF_SHIFT) 287 279 static char __log_buf[__LOG_BUF_LEN] __aligned(LOG_ALIGN); 288 280 static char *log_buf = __log_buf;
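Instead of keeping a separate `LOG_ALIGN` conditional, the record struct itself is now marked `__packed __aligned(4)` on architectures with efficient unaligned access, so `__alignof__(struct printk_log)` yields the intended value everywhere. The effect of the attribute pair, with stand-in fixed-width types mirroring the field layout from the diff (alignment results assume a typical GCC/Clang target):

```c
#include <assert.h>
#include <stdint.h>

struct log_natural {            /* default layout */
    uint64_t ts_nsec;
    uint16_t len;
    uint16_t text_len;
    uint16_t dict_len;
    uint8_t  facility;
    uint8_t  flags:5;
    uint8_t  level:3;
};

struct log_packed {             /* as on HAVE_EFFICIENT_UNALIGNED_ACCESS */
    uint64_t ts_nsec;
    uint16_t len;
    uint16_t text_len;
    uint16_t dict_len;
    uint8_t  facility;
    uint8_t  flags:5;
    uint8_t  level:3;
} __attribute__((packed, aligned(4)));

static unsigned natural_align(void) { return _Alignof(struct log_natural); }
static unsigned packed_align(void)  { return _Alignof(struct log_packed); }
```

`packed` drops the alignment to 1 and `aligned(4)` raises it back to exactly 4, which is what the old `#define LOG_ALIGN 4` encoded by hand; the field layout happens to have no internal padding either way, so the record size is unchanged.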
+39 -10
kernel/ptrace.c
··· 219 219 static int __ptrace_may_access(struct task_struct *task, unsigned int mode) 220 220 { 221 221 const struct cred *cred = current_cred(), *tcred; 222 + int dumpable = 0; 223 + kuid_t caller_uid; 224 + kgid_t caller_gid; 225 + 226 + if (!(mode & PTRACE_MODE_FSCREDS) == !(mode & PTRACE_MODE_REALCREDS)) { 227 + WARN(1, "denying ptrace access check without PTRACE_MODE_*CREDS\n"); 228 + return -EPERM; 229 + } 222 230 223 231 /* May we inspect the given task? 224 232 * This check is used both for attaching with ptrace ··· 236 228 * because setting up the necessary parent/child relationship 237 229 * or halting the specified task is impossible. 238 230 */ 239 - int dumpable = 0; 231 + 240 232 /* Don't let security modules deny introspection */ 241 233 if (same_thread_group(task, current)) 242 234 return 0; 243 235 rcu_read_lock(); 236 + if (mode & PTRACE_MODE_FSCREDS) { 237 + caller_uid = cred->fsuid; 238 + caller_gid = cred->fsgid; 239 + } else { 240 + /* 241 + * Using the euid would make more sense here, but something 242 + * in userland might rely on the old behavior, and this 243 + * shouldn't be a security problem since 244 + * PTRACE_MODE_REALCREDS implies that the caller explicitly 245 + * used a syscall that requests access to another process 246 + * (and not a filesystem syscall to procfs). 
247 + */ 248 + caller_uid = cred->uid; 249 + caller_gid = cred->gid; 250 + } 244 251 tcred = __task_cred(task); 245 - if (uid_eq(cred->uid, tcred->euid) && 246 - uid_eq(cred->uid, tcred->suid) && 247 - uid_eq(cred->uid, tcred->uid) && 248 - gid_eq(cred->gid, tcred->egid) && 249 - gid_eq(cred->gid, tcred->sgid) && 250 - gid_eq(cred->gid, tcred->gid)) 252 + if (uid_eq(caller_uid, tcred->euid) && 253 + uid_eq(caller_uid, tcred->suid) && 254 + uid_eq(caller_uid, tcred->uid) && 255 + gid_eq(caller_gid, tcred->egid) && 256 + gid_eq(caller_gid, tcred->sgid) && 257 + gid_eq(caller_gid, tcred->gid)) 251 258 goto ok; 252 259 if (ptrace_has_cap(tcred->user_ns, mode)) 253 260 goto ok; ··· 329 306 goto out; 330 307 331 308 task_lock(task); 332 - retval = __ptrace_may_access(task, PTRACE_MODE_ATTACH); 309 + retval = __ptrace_may_access(task, PTRACE_MODE_ATTACH_REALCREDS); 333 310 task_unlock(task); 334 311 if (retval) 335 312 goto unlock_creds; ··· 387 364 mutex_unlock(&task->signal->cred_guard_mutex); 388 365 out: 389 366 if (!retval) { 390 - wait_on_bit(&task->jobctl, JOBCTL_TRAPPING_BIT, 391 - TASK_UNINTERRUPTIBLE); 367 + /* 368 + * We do not bother to change retval or clear JOBCTL_TRAPPING 369 + * if wait_on_bit() was interrupted by SIGKILL. The tracer will 370 + * not return to user-mode, it will exit and clear this bit in 371 + * __ptrace_unlink() if it wasn't already cleared by the tracee; 372 + * and until then nobody can ptrace this task. 373 + */ 374 + wait_on_bit(&task->jobctl, JOBCTL_TRAPPING_BIT, TASK_KILLABLE); 392 375 proc_ptrace_connector(task, PTRACE_ATTACH); 393 376 } 394 377
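The heart of the ptrace.c change is which caller identity gets compared against the target's credentials: filesystem ids (fsuid/fsgid) for procfs-style access, real uid/gid for explicit syscalls such as ptrace(2) or process_vm_writev(2). A userspace model of just that selection, using a hypothetical miniature of `struct cred`:

```c
#include <assert.h>

#define MODE_FSCREDS   0x08
#define MODE_REALCREDS 0x10

struct cred_model {            /* hypothetical miniature of struct cred */
    unsigned uid, gid;         /* real ids */
    unsigned fsuid, fsgid;     /* filesystem ids */
};

/* Mirrors the branch added to __ptrace_may_access(): pick which of the
 * caller's identities will be matched against the target's e/s/real ids. */
static unsigned caller_uid_for(const struct cred_model *cred, unsigned mode)
{
    return (mode & MODE_FSCREDS) ? cred->fsuid : cred->uid;
}

static unsigned caller_gid_for(const struct cred_model *cred, unsigned mode)
{
    return (mode & MODE_FSCREDS) ? cred->fsgid : cred->gid;
}

/* Example: a process with real uid 1000 that has set fsuid to 0. */
static const struct cred_model demo_cred = {
    .uid = 1000, .gid = 1000, .fsuid = 0, .fsgid = 0,
};
```

As the patch's comment notes, REALCREDS deliberately keeps the historical real-uid behavior rather than switching to the euid, to avoid breaking userspace that relies on it.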
+10 -10
kernel/sys.c
··· 1853 1853 user_auxv[AT_VECTOR_SIZE - 1] = AT_NULL; 1854 1854 } 1855 1855 1856 - if (prctl_map.exe_fd != (u32)-1) 1856 + if (prctl_map.exe_fd != (u32)-1) { 1857 1857 error = prctl_set_mm_exe_file(mm, prctl_map.exe_fd); 1858 - down_read(&mm->mmap_sem); 1859 - if (error) 1860 - goto out; 1858 + if (error) 1859 + return error; 1860 + } 1861 + 1862 + down_write(&mm->mmap_sem); 1861 1863 1862 1864 /* 1863 1865 * We don't validate if these members are pointing to ··· 1896 1894 if (prctl_map.auxv_size) 1897 1895 memcpy(mm->saved_auxv, user_auxv, sizeof(user_auxv)); 1898 1896 1899 - error = 0; 1900 - out: 1901 - up_read(&mm->mmap_sem); 1902 - return error; 1897 + up_write(&mm->mmap_sem); 1898 + return 0; 1903 1899 } 1904 1900 #endif /* CONFIG_CHECKPOINT_RESTORE */ 1905 1901 ··· 1963 1963 1964 1964 error = -EINVAL; 1965 1965 1966 - down_read(&mm->mmap_sem); 1966 + down_write(&mm->mmap_sem); 1967 1967 vma = find_vma(mm, addr); 1968 1968 1969 1969 prctl_map.start_code = mm->start_code; ··· 2056 2056 2057 2057 error = 0; 2058 2058 out: 2059 - up_read(&mm->mmap_sem); 2059 + up_write(&mm->mmap_sem); 2060 2060 return error; 2061 2061 } 2062 2062
+1 -1
kernel/sysctl.c
··· 173 173 #define SYSCTL_WRITES_WARN 0 174 174 #define SYSCTL_WRITES_STRICT 1 175 175 176 - static int sysctl_writes_strict = SYSCTL_WRITES_WARN; 176 + static int sysctl_writes_strict = SYSCTL_WRITES_STRICT; 177 177 178 178 static int proc_do_cad_pid(struct ctl_table *table, int write, 179 179 void __user *buffer, size_t *lenp, loff_t *ppos);
+2
lib/Kconfig.debug
··· 1893 1893 1894 1894 source "lib/Kconfig.kgdb" 1895 1895 1896 + source "lib/Kconfig.ubsan" 1897 + 1896 1898 config ARCH_HAS_DEVMEM_IS_ALLOWED 1897 1899 bool 1898 1900
+29
lib/Kconfig.ubsan
··· 1 + config ARCH_HAS_UBSAN_SANITIZE_ALL 2 + bool 3 + 4 + config UBSAN 5 + bool "Undefined behaviour sanity checker" 6 + help 7 + This option enables undefined behaviour sanity checker 8 + Compile-time instrumentation is used to detect various undefined 9 + behaviours in runtime. Various types of checks may be enabled 10 + via boot parameter ubsan_handle (see: Documentation/ubsan.txt). 11 + 12 + config UBSAN_SANITIZE_ALL 13 + bool "Enable instrumentation for the entire kernel" 14 + depends on UBSAN 15 + depends on ARCH_HAS_UBSAN_SANITIZE_ALL 16 + default y 17 + help 18 + This option activates instrumentation for the entire kernel. 19 + If you don't enable this option, you have to explicitly specify 20 + UBSAN_SANITIZE := y for the files/directories you want to check for UB. 21 + 22 + config UBSAN_ALIGNMENT 23 + bool "Enable checking of pointers alignment" 24 + depends on UBSAN 25 + default y if !HAVE_EFFICIENT_UNALIGNED_ACCESS 26 + help 27 + This option enables detection of unaligned memory accesses. 28 + Enabling this option on architectures that support unalligned 29 + accesses may produce a lot of false positives.
+5 -2
lib/Makefile
··· 31 31 obj-y += string_helpers.o 32 32 obj-$(CONFIG_TEST_STRING_HELPERS) += test-string_helpers.o 33 33 obj-y += hexdump.o 34 - obj-$(CONFIG_TEST_HEXDUMP) += test-hexdump.o 34 + obj-$(CONFIG_TEST_HEXDUMP) += test_hexdump.o 35 35 obj-y += kstrtox.o 36 36 obj-$(CONFIG_TEST_BPF) += test_bpf.o 37 37 obj-$(CONFIG_TEST_FIRMWARE) += test_firmware.o ··· 154 154 obj-$(CONFIG_MPILIB) += mpi/ 155 155 obj-$(CONFIG_SIGNATURE) += digsig.o 156 156 157 - obj-$(CONFIG_CLZ_TAB) += clz_tab.o 157 + lib-$(CONFIG_CLZ_TAB) += clz_tab.o 158 158 159 159 obj-$(CONFIG_DDR) += jedec_ddr_data.o 160 160 ··· 209 209 clean-files += oid_registry_data.c 210 210 211 211 obj-$(CONFIG_UCS2_STRING) += ucs2_string.o 212 + obj-$(CONFIG_UBSAN) += ubsan.o 213 + 214 + UBSAN_SANITIZE_ubsan.o := n
+21
lib/iomap_copy.c
··· 42 42 EXPORT_SYMBOL_GPL(__iowrite32_copy); 43 43 44 44 /** 45 + * __ioread32_copy - copy data from MMIO space, in 32-bit units 46 + * @to: destination (must be 32-bit aligned) 47 + * @from: source, in MMIO space (must be 32-bit aligned) 48 + * @count: number of 32-bit quantities to copy 49 + * 50 + * Copy data from MMIO space to kernel space, in units of 32 bits at a 51 + * time. Order of access is not guaranteed, nor is a memory barrier 52 + * performed afterwards. 53 + */ 54 + void __ioread32_copy(void *to, const void __iomem *from, size_t count) 55 + { 56 + u32 *dst = to; 57 + const u32 __iomem *src = from; 58 + const u32 __iomem *end = src + count; 59 + 60 + while (src < end) 61 + *dst++ = __raw_readl(src++); 62 + } 63 + EXPORT_SYMBOL_GPL(__ioread32_copy); 64 + 65 + /** 45 66 * __iowrite64_copy - copy data to MMIO space, in 64-bit or 32-bit units 46 67 * @to: destination, in MMIO space (must be 64-bit aligned) 47 68 * @from: source (must be 64-bit aligned)
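`__ioread32_copy()` is the read-side twin of `__iowrite32_copy()`: strictly word-at-a-time, with no ordering or barrier guarantees. Outside MMIO the same shape is just a u32 copy loop; a sketch with `__raw_readl()` modeled as a plain dereference:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Userspace model: in the kernel `from` is __iomem and each load is
 * __raw_readl(); here it is ordinary memory, same 32-bit granularity. */
static void ioread32_copy_model(void *to, const void *from, size_t count)
{
    uint32_t *dst = to;
    const uint32_t *src = from;
    const uint32_t *end = src + count;

    while (src < end)
        *dst++ = *src++;
}

static int copy_roundtrip(void)
{
    uint32_t src[4] = { 0x11111111u, 0x22222222u, 0x33333333u, 0x44444444u };
    uint32_t dst[4] = { 0, 0, 0, 0 };

    ioread32_copy_model(dst, src, 4);
    return dst[0] == src[0] && dst[1] == src[1] &&
           dst[2] == src[2] && dst[3] == src[3];
}
```

The fixed 32-bit access width is the point: some device memory faults or misbehaves on byte-wise `memcpy`, so drivers need a copy helper whose access size is guaranteed.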
+1
lib/libcrc32c.c
··· 36 36 #include <linux/init.h> 37 37 #include <linux/kernel.h> 38 38 #include <linux/module.h> 39 + #include <linux/crc32c.h> 39 40 40 41 static struct crypto_shash *tfm; 41 42
+46 -23
lib/string_helpers.c
··· 43 43 [STRING_UNITS_10] = 1000, 44 44 [STRING_UNITS_2] = 1024, 45 45 }; 46 - int i, j; 47 - u32 remainder = 0, sf_cap, exp; 46 + static const unsigned int rounding[] = { 500, 50, 5 }; 47 + int i = 0, j; 48 + u32 remainder = 0, sf_cap; 48 49 char tmp[8]; 49 50 const char *unit; 50 51 51 52 tmp[0] = '\0'; 52 - i = 0; 53 - if (!size) 53 + 54 + if (blk_size == 0) 55 + size = 0; 56 + if (size == 0) 54 57 goto out; 55 58 56 - while (blk_size >= divisor[units]) { 57 - remainder = do_div(blk_size, divisor[units]); 58 - i++; 59 - } 60 - 61 - exp = divisor[units] / (u32)blk_size; 62 - /* 63 - * size must be strictly greater than exp here to ensure that remainder 64 - * is greater than divisor[units] coming out of the if below. 59 + /* This is Napier's algorithm. Reduce the original block size to 60 + * 61 + * coefficient * divisor[units]^i 62 + * 63 + * we do the reduction so both coefficients are just under 32 bits so 64 + * that multiplying them together won't overflow 64 bits and we keep 65 + * as much precision as possible in the numbers. 66 + * 67 + * Note: it's safe to throw away the remainders here because all the 68 + * precision is in the coefficients. 
65 69 */ 66 - if (size > exp) { 67 - remainder = do_div(size, divisor[units]); 68 - remainder *= blk_size; 70 + while (blk_size >> 32) { 71 + do_div(blk_size, divisor[units]); 69 72 i++; 70 - } else { 71 - remainder *= size; 72 73 } 73 74 74 - size *= blk_size; 75 - size += remainder / divisor[units]; 76 - remainder %= divisor[units]; 75 + while (size >> 32) { 76 + do_div(size, divisor[units]); 77 + i++; 78 + } 77 79 80 + /* now perform the actual multiplication keeping i as the sum of the 81 + * two logarithms */ 82 + size *= blk_size; 83 + 84 + /* and logarithmically reduce it until it's just under the divisor */ 78 85 while (size >= divisor[units]) { 79 86 remainder = do_div(size, divisor[units]); 80 87 i++; 81 88 } 82 89 90 + /* work out in j how many digits of precision we need from the 91 + * remainder */ 83 92 sf_cap = size; 84 93 for (j = 0; sf_cap*10 < 1000; j++) 85 94 sf_cap *= 10; 86 95 87 - if (j) { 96 + if (units == STRING_UNITS_2) { 97 + /* express the remainder as a decimal. It's currently the 98 + * numerator of a fraction whose denominator is 99 + * divisor[units], which is 1 << 10 for STRING_UNITS_2 */ 88 100 remainder *= 1000; 89 - remainder /= divisor[units]; 101 + remainder >>= 10; 102 + } 103 + 104 + /* add a 5 to the digit below what will be printed to ensure 105 + * an arithmetical round up and carry it through to size */ 106 + remainder += rounding[j]; 107 + if (remainder >= 1000) { 108 + remainder -= 1000; 109 + size += 1; 110 + } 111 + 112 + if (j) { 90 113 snprintf(tmp, sizeof(tmp), ".%03u", remainder); 91 114 tmp[j+1] = '\0'; 92 115 }
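The string_helpers rewrite's core trick: reduce each 64-bit factor below 2^32 first (counting divisor steps in `i`), so `size * blk_size` cannot overflow 64 bits, then keep dividing until the coefficient drops under the divisor. A simplified integer-only sketch of that reduction, leaving out the patch's decimal-remainder and rounding handling:

```c
#include <assert.h>
#include <stdint.h>

/* Reduce size * blk_size to coeff * base^i with coeff < base, avoiding
 * 64-bit overflow by pre-reducing each factor below 2^32 (remainders are
 * safe to drop: the precision lives in the coefficients).
 * Returns the exponent i; *coeff receives the coefficient. */
static unsigned reduce(uint64_t size, uint64_t blk_size, uint32_t base,
                       uint64_t *coeff)
{
    unsigned i = 0;

    while (blk_size >> 32) { blk_size /= base; i++; }
    while (size >> 32)     { size /= base;     i++; }

    size *= blk_size;              /* both factors < 2^32: cannot overflow */

    while (size >= base) { size /= base; i++; }

    *coeff = size;
    return i;
}

static int check(uint64_t size, uint64_t blk, uint32_t base,
                 uint64_t want_coeff, unsigned want_exp)
{
    uint64_t c;

    return reduce(size, blk, base, &c) == want_exp && c == want_coeff;
}
```

For example, 2048 blocks of 512 bytes reduce to 1 * 1024^2 (one MiB), and two 2^40 factors reduce to 1 * 1024^8 without ever touching a 128-bit intermediate.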
+101 -43
lib/test-hexdump.c lib/test_hexdump.c
··· 42 42 "e9ac0f9cad319ca6", "0cafb1439919d14c", 43 43 }; 44 44 45 - static void __init test_hexdump(size_t len, int rowsize, int groupsize, 46 - bool ascii) 45 + #define FILL_CHAR '#' 46 + 47 + static unsigned total_tests __initdata; 48 + static unsigned failed_tests __initdata; 49 + 50 + static void __init test_hexdump_prepare_test(size_t len, int rowsize, 51 + int groupsize, char *test, 52 + size_t testlen, bool ascii) 47 53 { 48 - char test[32 * 3 + 2 + 32 + 1]; 49 - char real[32 * 3 + 2 + 32 + 1]; 50 54 char *p; 51 55 const char * const *result; 52 56 size_t l = len; 53 57 int gs = groupsize, rs = rowsize; 54 58 unsigned int i; 55 - 56 - hex_dump_to_buffer(data_b, l, rs, gs, real, sizeof(real), ascii); 57 59 58 60 if (rs != 16 && rs != 32) 59 61 rs = 16; ··· 75 73 else 76 74 result = test_data_1_le; 77 75 78 - memset(test, ' ', sizeof(test)); 79 - 80 76 /* hex dump */ 81 77 p = test; 82 78 for (i = 0; i < l / gs; i++) { ··· 82 82 size_t amount = strlen(q); 83 83 84 84 strncpy(p, q, amount); 85 - p += amount + 1; 85 + p += amount; 86 + 87 + *p++ = ' '; 86 88 } 87 89 if (i) 88 90 p--; 89 91 90 92 /* ASCII part */ 91 93 if (ascii) { 92 - p = test + rs * 2 + rs / gs + 1; 94 + do { 95 + *p++ = ' '; 96 + } while (p < test + rs * 2 + rs / gs + 1); 97 + 93 98 strncpy(p, data_a, l); 94 99 p += l; 95 100 } 96 101 97 102 *p = '\0'; 103 + } 98 104 99 - if (strcmp(test, real)) { 105 + #define TEST_HEXDUMP_BUF_SIZE (32 * 3 + 2 + 32 + 1) 106 + 107 + static void __init test_hexdump(size_t len, int rowsize, int groupsize, 108 + bool ascii) 109 + { 110 + char test[TEST_HEXDUMP_BUF_SIZE]; 111 + char real[TEST_HEXDUMP_BUF_SIZE]; 112 + 113 + total_tests++; 114 + 115 + memset(real, FILL_CHAR, sizeof(real)); 116 + hex_dump_to_buffer(data_b, len, rowsize, groupsize, real, sizeof(real), 117 + ascii); 118 + 119 + memset(test, FILL_CHAR, sizeof(test)); 120 + test_hexdump_prepare_test(len, rowsize, groupsize, test, sizeof(test), 121 + ascii); 122 + 123 + if (memcmp(test, real, 
TEST_HEXDUMP_BUF_SIZE)) { 100 124 pr_err("Len: %zu row: %d group: %d\n", len, rowsize, groupsize); 101 125 pr_err("Result: '%s'\n", real); 102 126 pr_err("Expect: '%s'\n", test); 127 + failed_tests++; 103 128 } 104 129 } 105 130 ··· 139 114 test_hexdump(len, rowsize, 1, ascii); 140 115 } 141 116 142 - static void __init test_hexdump_overflow(bool ascii) 117 + static void __init test_hexdump_overflow(size_t buflen, size_t len, 118 + int rowsize, int groupsize, 119 + bool ascii) 143 120 { 144 - char buf[56]; 145 - const char *t = test_data_1_le[0]; 146 - size_t l = get_random_int() % sizeof(buf); 121 + char test[TEST_HEXDUMP_BUF_SIZE]; 122 + char buf[TEST_HEXDUMP_BUF_SIZE]; 123 + int rs = rowsize, gs = groupsize; 124 + int ae, he, e, f, r; 147 125 bool a; 148 - int e, r; 149 126 150 - memset(buf, ' ', sizeof(buf)); 127 + total_tests++; 151 128 152 - r = hex_dump_to_buffer(data_b, 1, 16, 1, buf, l, ascii); 129 + memset(buf, FILL_CHAR, sizeof(buf)); 130 + 131 + r = hex_dump_to_buffer(data_b, len, rs, gs, buf, buflen, ascii); 132 + 133 + /* 134 + * Caller must provide the data length multiple of groupsize. The 135 + * calculations below are made with that assumption in mind. 
136 + */ 137 + ae = rs * 2 /* hex */ + rs / gs /* spaces */ + 1 /* space */ + len /* ascii */; 138 + he = (gs * 2 /* hex */ + 1 /* space */) * len / gs - 1 /* no trailing space */; 153 139 154 140 if (ascii) 155 - e = 50; 141 + e = ae; 156 142 else 157 - e = 2; 158 - buf[e + 2] = '\0'; 143 + e = he; 159 144 160 - if (!l) { 161 - a = r == e && buf[0] == ' '; 162 - } else if (l < 3) { 163 - a = r == e && buf[0] == '\0'; 164 - } else if (l < 4) { 165 - a = r == e && !strcmp(buf, t); 166 - } else if (ascii) { 167 - if (l < 51) 168 - a = r == e && buf[l - 1] == '\0' && buf[l - 2] == ' '; 169 - else 170 - a = r == e && buf[50] == '\0' && buf[49] == '.'; 171 - } else { 172 - a = r == e && buf[e] == '\0'; 145 + f = min_t(int, e + 1, buflen); 146 + if (buflen) { 147 + test_hexdump_prepare_test(len, rs, gs, test, sizeof(test), ascii); 148 + test[f - 1] = '\0'; 173 149 } 150 + memset(test + f, FILL_CHAR, sizeof(test) - f); 151 + 152 + a = r == e && !memcmp(test, buf, TEST_HEXDUMP_BUF_SIZE); 153 + 154 + buf[sizeof(buf) - 1] = '\0'; 174 155 175 156 if (!a) { 176 - pr_err("Len: %zu rc: %u strlen: %zu\n", l, r, strlen(buf)); 177 - pr_err("Result: '%s'\n", buf); 157 + pr_err("Len: %zu buflen: %zu strlen: %zu\n", 158 + len, buflen, strnlen(buf, sizeof(buf))); 159 + pr_err("Result: %d '%s'\n", r, buf); 160 + pr_err("Expect: %d '%s'\n", e, test); 161 + failed_tests++; 178 162 } 163 + } 164 + 165 + static void __init test_hexdump_overflow_set(size_t buflen, bool ascii) 166 + { 167 + unsigned int i = 0; 168 + int rs = (get_random_int() % 2 + 1) * 16; 169 + 170 + do { 171 + int gs = 1 << i; 172 + size_t len = get_random_int() % rs + gs; 173 + 174 + test_hexdump_overflow(buflen, rounddown(len, gs), rs, gs, ascii); 175 + } while (i++ < 3); 179 176 } 180 177 181 178 static int __init test_hexdump_init(void) 182 179 { 183 180 unsigned int i; 184 181 int rowsize; 185 - 186 - pr_info("Running tests...\n"); 187 182 188 183 rowsize = (get_random_int() % 2 + 1) * 16; 189 184 for (i = 0; i < 16; 
i++) ··· 213 168 for (i = 0; i < 16; i++) 214 169 test_hexdump_set(rowsize, true); 215 170 216 - for (i = 0; i < 16; i++) 217 - test_hexdump_overflow(false); 171 + for (i = 0; i <= TEST_HEXDUMP_BUF_SIZE; i++) 172 + test_hexdump_overflow_set(i, false); 218 173 219 - for (i = 0; i < 16; i++) 220 - test_hexdump_overflow(true); 174 + for (i = 0; i <= TEST_HEXDUMP_BUF_SIZE; i++) 175 + test_hexdump_overflow_set(i, true); 221 176 222 - return -EINVAL; 177 + if (failed_tests == 0) 178 + pr_info("all %u tests passed\n", total_tests); 179 + else 180 + pr_err("failed %u out of %u tests\n", failed_tests, total_tests); 181 + 182 + return failed_tests ? -EINVAL : 0; 223 183 } 224 184 module_init(test_hexdump_init); 185 + 186 + static void __exit test_hexdump_exit(void) 187 + { 188 + /* do nothing */ 189 + } 190 + module_exit(test_hexdump_exit); 191 + 192 + MODULE_AUTHOR("Andy Shevchenko <andriy.shevchenko@linux.intel.com>"); 225 193 MODULE_LICENSE("Dual BSD/GPL");
+456
lib/ubsan.c
··· 1 + /* 2 + * UBSAN error reporting functions 3 + * 4 + * Copyright (c) 2014 Samsung Electronics Co., Ltd. 5 + * Author: Andrey Ryabinin <ryabinin.a.a@gmail.com> 6 + * 7 + * This program is free software; you can redistribute it and/or modify 8 + * it under the terms of the GNU General Public License version 2 as 9 + * published by the Free Software Foundation. 10 + * 11 + */ 12 + 13 + #include <linux/bitops.h> 14 + #include <linux/bug.h> 15 + #include <linux/ctype.h> 16 + #include <linux/init.h> 17 + #include <linux/kernel.h> 18 + #include <linux/types.h> 19 + #include <linux/sched.h> 20 + 21 + #include "ubsan.h" 22 + 23 + const char *type_check_kinds[] = { 24 + "load of", 25 + "store to", 26 + "reference binding to", 27 + "member access within", 28 + "member call on", 29 + "constructor call on", 30 + "downcast of", 31 + "downcast of" 32 + }; 33 + 34 + #define REPORTED_BIT 31 35 + 36 + #if (BITS_PER_LONG == 64) && defined(__BIG_ENDIAN) 37 + #define COLUMN_MASK (~(1U << REPORTED_BIT)) 38 + #define LINE_MASK (~0U) 39 + #else 40 + #define COLUMN_MASK (~0U) 41 + #define LINE_MASK (~(1U << REPORTED_BIT)) 42 + #endif 43 + 44 + #define VALUE_LENGTH 40 45 + 46 + static bool was_reported(struct source_location *location) 47 + { 48 + return test_and_set_bit(REPORTED_BIT, &location->reported); 49 + } 50 + 51 + static void print_source_location(const char *prefix, 52 + struct source_location *loc) 53 + { 54 + pr_err("%s %s:%d:%d\n", prefix, loc->file_name, 55 + loc->line & LINE_MASK, loc->column & COLUMN_MASK); 56 + } 57 + 58 + static bool suppress_report(struct source_location *loc) 59 + { 60 + return current->in_ubsan || was_reported(loc); 61 + } 62 + 63 + static bool type_is_int(struct type_descriptor *type) 64 + { 65 + return type->type_kind == type_kind_int; 66 + } 67 + 68 + static bool type_is_signed(struct type_descriptor *type) 69 + { 70 + WARN_ON(!type_is_int(type)); 71 + return type->type_info & 1; 72 + } 73 + 74 + static unsigned type_bit_width(struct 
type_descriptor *type) 75 + { 76 + return 1 << (type->type_info >> 1); 77 + } 78 + 79 + static bool is_inline_int(struct type_descriptor *type) 80 + { 81 + unsigned inline_bits = sizeof(unsigned long)*8; 82 + unsigned bits = type_bit_width(type); 83 + 84 + WARN_ON(!type_is_int(type)); 85 + 86 + return bits <= inline_bits; 87 + } 88 + 89 + static s_max get_signed_val(struct type_descriptor *type, unsigned long val) 90 + { 91 + if (is_inline_int(type)) { 92 + unsigned extra_bits = sizeof(s_max)*8 - type_bit_width(type); 93 + return ((s_max)val) << extra_bits >> extra_bits; 94 + } 95 + 96 + if (type_bit_width(type) == 64) 97 + return *(s64 *)val; 98 + 99 + return *(s_max *)val; 100 + } 101 + 102 + static bool val_is_negative(struct type_descriptor *type, unsigned long val) 103 + { 104 + return type_is_signed(type) && get_signed_val(type, val) < 0; 105 + } 106 + 107 + static u_max get_unsigned_val(struct type_descriptor *type, unsigned long val) 108 + { 109 + if (is_inline_int(type)) 110 + return val; 111 + 112 + if (type_bit_width(type) == 64) 113 + return *(u64 *)val; 114 + 115 + return *(u_max *)val; 116 + } 117 + 118 + static void val_to_string(char *str, size_t size, struct type_descriptor *type, 119 + unsigned long value) 120 + { 121 + if (type_is_int(type)) { 122 + if (type_bit_width(type) == 128) { 123 + #if defined(CONFIG_ARCH_SUPPORTS_INT128) && defined(__SIZEOF_INT128__) 124 + u_max val = get_unsigned_val(type, value); 125 + 126 + scnprintf(str, size, "0x%08x%08x%08x%08x", 127 + (u32)(val >> 96), 128 + (u32)(val >> 64), 129 + (u32)(val >> 32), 130 + (u32)(val)); 131 + #else 132 + WARN_ON(1); 133 + #endif 134 + } else if (type_is_signed(type)) { 135 + scnprintf(str, size, "%lld", 136 + (s64)get_signed_val(type, value)); 137 + } else { 138 + scnprintf(str, size, "%llu", 139 + (u64)get_unsigned_val(type, value)); 140 + } 141 + } 142 + } 143 + 144 + static bool location_is_valid(struct source_location *loc) 145 + { 146 + return loc->file_name != NULL; 147 + } 
148 + 149 + static DEFINE_SPINLOCK(report_lock); 150 + 151 + static void ubsan_prologue(struct source_location *location, 152 + unsigned long *flags) 153 + { 154 + current->in_ubsan++; 155 + spin_lock_irqsave(&report_lock, *flags); 156 + 157 + pr_err("========================================" 158 + "========================================\n"); 159 + print_source_location("UBSAN: Undefined behaviour in", location); 160 + } 161 + 162 + static void ubsan_epilogue(unsigned long *flags) 163 + { 164 + dump_stack(); 165 + pr_err("========================================" 166 + "========================================\n"); 167 + spin_unlock_irqrestore(&report_lock, *flags); 168 + current->in_ubsan--; 169 + } 170 + 171 + static void handle_overflow(struct overflow_data *data, unsigned long lhs, 172 + unsigned long rhs, char op) 173 + { 174 + 175 + struct type_descriptor *type = data->type; 176 + unsigned long flags; 177 + char lhs_val_str[VALUE_LENGTH]; 178 + char rhs_val_str[VALUE_LENGTH]; 179 + 180 + if (suppress_report(&data->location)) 181 + return; 182 + 183 + ubsan_prologue(&data->location, &flags); 184 + 185 + val_to_string(lhs_val_str, sizeof(lhs_val_str), type, lhs); 186 + val_to_string(rhs_val_str, sizeof(rhs_val_str), type, rhs); 187 + pr_err("%s integer overflow:\n", 188 + type_is_signed(type) ? 
"signed" : "unsigned"); 189 + pr_err("%s %c %s cannot be represented in type %s\n", 190 + lhs_val_str, 191 + op, 192 + rhs_val_str, 193 + type->type_name); 194 + 195 + ubsan_epilogue(&flags); 196 + } 197 + 198 + void __ubsan_handle_add_overflow(struct overflow_data *data, 199 + unsigned long lhs, 200 + unsigned long rhs) 201 + { 202 + 203 + handle_overflow(data, lhs, rhs, '+'); 204 + } 205 + EXPORT_SYMBOL(__ubsan_handle_add_overflow); 206 + 207 + void __ubsan_handle_sub_overflow(struct overflow_data *data, 208 + unsigned long lhs, 209 + unsigned long rhs) 210 + { 211 + handle_overflow(data, lhs, rhs, '-'); 212 + } 213 + EXPORT_SYMBOL(__ubsan_handle_sub_overflow); 214 + 215 + void __ubsan_handle_mul_overflow(struct overflow_data *data, 216 + unsigned long lhs, 217 + unsigned long rhs) 218 + { 219 + handle_overflow(data, lhs, rhs, '*'); 220 + } 221 + EXPORT_SYMBOL(__ubsan_handle_mul_overflow); 222 + 223 + void __ubsan_handle_negate_overflow(struct overflow_data *data, 224 + unsigned long old_val) 225 + { 226 + unsigned long flags; 227 + char old_val_str[VALUE_LENGTH]; 228 + 229 + if (suppress_report(&data->location)) 230 + return; 231 + 232 + ubsan_prologue(&data->location, &flags); 233 + 234 + val_to_string(old_val_str, sizeof(old_val_str), data->type, old_val); 235 + 236 + pr_err("negation of %s cannot be represented in type %s:\n", 237 + old_val_str, data->type->type_name); 238 + 239 + ubsan_epilogue(&flags); 240 + } 241 + EXPORT_SYMBOL(__ubsan_handle_negate_overflow); 242 + 243 + 244 + void __ubsan_handle_divrem_overflow(struct overflow_data *data, 245 + unsigned long lhs, 246 + unsigned long rhs) 247 + { 248 + unsigned long flags; 249 + char rhs_val_str[VALUE_LENGTH]; 250 + 251 + if (suppress_report(&data->location)) 252 + return; 253 + 254 + ubsan_prologue(&data->location, &flags); 255 + 256 + val_to_string(rhs_val_str, sizeof(rhs_val_str), data->type, rhs); 257 + 258 + if (type_is_signed(data->type) && get_signed_val(data->type, rhs) == -1) 259 + 
pr_err("division of %s by -1 cannot be represented in type %s\n", 260 + rhs_val_str, data->type->type_name); 261 + else 262 + pr_err("division by zero\n"); 263 + 264 + ubsan_epilogue(&flags); 265 + } 266 + EXPORT_SYMBOL(__ubsan_handle_divrem_overflow); 267 + 268 + static void handle_null_ptr_deref(struct type_mismatch_data *data) 269 + { 270 + unsigned long flags; 271 + 272 + if (suppress_report(&data->location)) 273 + return; 274 + 275 + ubsan_prologue(&data->location, &flags); 276 + 277 + pr_err("%s null pointer of type %s\n", 278 + type_check_kinds[data->type_check_kind], 279 + data->type->type_name); 280 + 281 + ubsan_epilogue(&flags); 282 + } 283 + 284 + static void handle_missaligned_access(struct type_mismatch_data *data, 285 + unsigned long ptr) 286 + { 287 + unsigned long flags; 288 + 289 + if (suppress_report(&data->location)) 290 + return; 291 + 292 + ubsan_prologue(&data->location, &flags); 293 + 294 + pr_err("%s misaligned address %p for type %s\n", 295 + type_check_kinds[data->type_check_kind], 296 + (void *)ptr, data->type->type_name); 297 + pr_err("which requires %ld byte alignment\n", data->alignment); 298 + 299 + ubsan_epilogue(&flags); 300 + } 301 + 302 + static void handle_object_size_mismatch(struct type_mismatch_data *data, 303 + unsigned long ptr) 304 + { 305 + unsigned long flags; 306 + 307 + if (suppress_report(&data->location)) 308 + return; 309 + 310 + ubsan_prologue(&data->location, &flags); 311 + pr_err("%s address %pk with insufficient space\n", 312 + type_check_kinds[data->type_check_kind], 313 + (void *) ptr); 314 + pr_err("for an object of type %s\n", data->type->type_name); 315 + ubsan_epilogue(&flags); 316 + } 317 + 318 + void __ubsan_handle_type_mismatch(struct type_mismatch_data *data, 319 + unsigned long ptr) 320 + { 321 + 322 + if (!ptr) 323 + handle_null_ptr_deref(data); 324 + else if (data->alignment && !IS_ALIGNED(ptr, data->alignment)) 325 + handle_missaligned_access(data, ptr); 326 + else 327 + 
handle_object_size_mismatch(data, ptr); 328 + } 329 + EXPORT_SYMBOL(__ubsan_handle_type_mismatch); 330 + 331 + void __ubsan_handle_nonnull_return(struct nonnull_return_data *data) 332 + { 333 + unsigned long flags; 334 + 335 + if (suppress_report(&data->location)) 336 + return; 337 + 338 + ubsan_prologue(&data->location, &flags); 339 + 340 + pr_err("null pointer returned from function declared to never return null\n"); 341 + 342 + if (location_is_valid(&data->attr_location)) 343 + print_source_location("returns_nonnull attribute specified in", 344 + &data->attr_location); 345 + 346 + ubsan_epilogue(&flags); 347 + } 348 + EXPORT_SYMBOL(__ubsan_handle_nonnull_return); 349 + 350 + void __ubsan_handle_vla_bound_not_positive(struct vla_bound_data *data, 351 + unsigned long bound) 352 + { 353 + unsigned long flags; 354 + char bound_str[VALUE_LENGTH]; 355 + 356 + if (suppress_report(&data->location)) 357 + return; 358 + 359 + ubsan_prologue(&data->location, &flags); 360 + 361 + val_to_string(bound_str, sizeof(bound_str), data->type, bound); 362 + pr_err("variable length array bound value %s <= 0\n", bound_str); 363 + 364 + ubsan_epilogue(&flags); 365 + } 366 + EXPORT_SYMBOL(__ubsan_handle_vla_bound_not_positive); 367 + 368 + void __ubsan_handle_out_of_bounds(struct out_of_bounds_data *data, 369 + unsigned long index) 370 + { 371 + unsigned long flags; 372 + char index_str[VALUE_LENGTH]; 373 + 374 + if (suppress_report(&data->location)) 375 + return; 376 + 377 + ubsan_prologue(&data->location, &flags); 378 + 379 + val_to_string(index_str, sizeof(index_str), data->index_type, index); 380 + pr_err("index %s is out of range for type %s\n", index_str, 381 + data->array_type->type_name); 382 + ubsan_epilogue(&flags); 383 + } 384 + EXPORT_SYMBOL(__ubsan_handle_out_of_bounds); 385 + 386 + void __ubsan_handle_shift_out_of_bounds(struct shift_out_of_bounds_data *data, 387 + unsigned long lhs, unsigned long rhs) 388 + { 389 + unsigned long flags; 390 + struct type_descriptor 
*rhs_type = data->rhs_type; 391 + struct type_descriptor *lhs_type = data->lhs_type; 392 + char rhs_str[VALUE_LENGTH]; 393 + char lhs_str[VALUE_LENGTH]; 394 + 395 + if (suppress_report(&data->location)) 396 + return; 397 + 398 + ubsan_prologue(&data->location, &flags); 399 + 400 + val_to_string(rhs_str, sizeof(rhs_str), rhs_type, rhs); 401 + val_to_string(lhs_str, sizeof(lhs_str), lhs_type, lhs); 402 + 403 + if (val_is_negative(rhs_type, rhs)) 404 + pr_err("shift exponent %s is negative\n", rhs_str); 405 + 406 + else if (get_unsigned_val(rhs_type, rhs) >= 407 + type_bit_width(lhs_type)) 408 + pr_err("shift exponent %s is too large for %u-bit type %s\n", 409 + rhs_str, 410 + type_bit_width(lhs_type), 411 + lhs_type->type_name); 412 + else if (val_is_negative(lhs_type, lhs)) 413 + pr_err("left shift of negative value %s\n", 414 + lhs_str); 415 + else 416 + pr_err("left shift of %s by %s places cannot be" 417 + " represented in type %s\n", 418 + lhs_str, rhs_str, 419 + lhs_type->type_name); 420 + 421 + ubsan_epilogue(&flags); 422 + } 423 + EXPORT_SYMBOL(__ubsan_handle_shift_out_of_bounds); 424 + 425 + 426 + void __noreturn 427 + __ubsan_handle_builtin_unreachable(struct unreachable_data *data) 428 + { 429 + unsigned long flags; 430 + 431 + ubsan_prologue(&data->location, &flags); 432 + pr_err("calling __builtin_unreachable()\n"); 433 + ubsan_epilogue(&flags); 434 + panic("can't return from __builtin_unreachable()"); 435 + } 436 + EXPORT_SYMBOL(__ubsan_handle_builtin_unreachable); 437 + 438 + void __ubsan_handle_load_invalid_value(struct invalid_value_data *data, 439 + unsigned long val) 440 + { 441 + unsigned long flags; 442 + char val_str[VALUE_LENGTH]; 443 + 444 + if (suppress_report(&data->location)) 445 + return; 446 + 447 + ubsan_prologue(&data->location, &flags); 448 + 449 + val_to_string(val_str, sizeof(val_str), data->type, val); 450 + 451 + pr_err("load of value %s is not a valid value for type %s\n", 452 + val_str, data->type->type_name); 453 + 454 + 
ubsan_epilogue(&flags); 455 + } 456 + EXPORT_SYMBOL(__ubsan_handle_load_invalid_value);
+84
lib/ubsan.h
··· 1 + #ifndef _LIB_UBSAN_H 2 + #define _LIB_UBSAN_H 3 + 4 + enum { 5 + type_kind_int = 0, 6 + type_kind_float = 1, 7 + type_unknown = 0xffff 8 + }; 9 + 10 + struct type_descriptor { 11 + u16 type_kind; 12 + u16 type_info; 13 + char type_name[1]; 14 + }; 15 + 16 + struct source_location { 17 + const char *file_name; 18 + union { 19 + unsigned long reported; 20 + struct { 21 + u32 line; 22 + u32 column; 23 + }; 24 + }; 25 + }; 26 + 27 + struct overflow_data { 28 + struct source_location location; 29 + struct type_descriptor *type; 30 + }; 31 + 32 + struct type_mismatch_data { 33 + struct source_location location; 34 + struct type_descriptor *type; 35 + unsigned long alignment; 36 + unsigned char type_check_kind; 37 + }; 38 + 39 + struct nonnull_arg_data { 40 + struct source_location location; 41 + struct source_location attr_location; 42 + int arg_index; 43 + }; 44 + 45 + struct nonnull_return_data { 46 + struct source_location location; 47 + struct source_location attr_location; 48 + }; 49 + 50 + struct vla_bound_data { 51 + struct source_location location; 52 + struct type_descriptor *type; 53 + }; 54 + 55 + struct out_of_bounds_data { 56 + struct source_location location; 57 + struct type_descriptor *array_type; 58 + struct type_descriptor *index_type; 59 + }; 60 + 61 + struct shift_out_of_bounds_data { 62 + struct source_location location; 63 + struct type_descriptor *lhs_type; 64 + struct type_descriptor *rhs_type; 65 + }; 66 + 67 + struct unreachable_data { 68 + struct source_location location; 69 + }; 70 + 71 + struct invalid_value_data { 72 + struct source_location location; 73 + struct type_descriptor *type; 74 + }; 75 + 76 + #if defined(CONFIG_ARCH_SUPPORTS_INT128) && defined(__SIZEOF_INT128__) 77 + typedef __int128 s_max; 78 + typedef unsigned __int128 u_max; 79 + #else 80 + typedef s64 s_max; 81 + typedef u64 u_max; 82 + #endif 83 + 84 + #endif
+5 -4
mm/huge_memory.c
··· 3357 3357 struct anon_vma *anon_vma; 3358 3358 int count, mapcount, ret; 3359 3359 bool mlocked; 3360 + unsigned long flags; 3360 3361 3361 3362 VM_BUG_ON_PAGE(is_huge_zero_page(page), page); 3362 3363 VM_BUG_ON_PAGE(!PageAnon(page), page); ··· 3397 3396 lru_add_drain(); 3398 3397 3399 3398 /* Prevent deferred_split_scan() touching ->_count */ 3400 - spin_lock(&split_queue_lock); 3399 + spin_lock_irqsave(&split_queue_lock, flags); 3401 3400 count = page_count(head); 3402 3401 mapcount = total_mapcount(head); 3403 3402 if (!mapcount && count == 1) { ··· 3405 3404 split_queue_len--; 3406 3405 list_del(page_deferred_list(head)); 3407 3406 } 3408 - spin_unlock(&split_queue_lock); 3407 + spin_unlock_irqrestore(&split_queue_lock, flags); 3409 3408 __split_huge_page(page, list); 3410 3409 ret = 0; 3411 3410 } else if (IS_ENABLED(CONFIG_DEBUG_VM) && mapcount) { 3412 - spin_unlock(&split_queue_lock); 3411 + spin_unlock_irqrestore(&split_queue_lock, flags); 3413 3412 pr_alert("total_mapcount: %u, page_count(): %u\n", 3414 3413 mapcount, count); 3415 3414 if (PageTail(page)) ··· 3417 3416 dump_page(page, "total_mapcount(head) > 0"); 3418 3417 BUG(); 3419 3418 } else { 3420 - spin_unlock(&split_queue_lock); 3419 + spin_unlock_irqrestore(&split_queue_lock, flags); 3421 3420 unfreeze_page(anon_vma, head); 3422 3421 ret = -EBUSY; 3423 3422 }
+1
mm/kasan/Makefile
··· 1 1 KASAN_SANITIZE := n 2 + UBSAN_SANITIZE_kasan.o := n 2 3 3 4 CFLAGS_REMOVE_kasan.o = -pg 4 5 # Function splitter causes unnecessary splits in __asan_load1/__asan_store1
+6 -6
mm/list_lru.c
··· 12 12 #include <linux/mutex.h> 13 13 #include <linux/memcontrol.h> 14 14 15 - #ifdef CONFIG_MEMCG_KMEM 15 + #if defined(CONFIG_MEMCG) && !defined(CONFIG_SLOB) 16 16 static LIST_HEAD(list_lrus); 17 17 static DEFINE_MUTEX(list_lrus_mutex); 18 18 ··· 37 37 static void list_lru_unregister(struct list_lru *lru) 38 38 { 39 39 } 40 - #endif /* CONFIG_MEMCG_KMEM */ 40 + #endif /* CONFIG_MEMCG && !CONFIG_SLOB */ 41 41 42 - #ifdef CONFIG_MEMCG_KMEM 42 + #if defined(CONFIG_MEMCG) && !defined(CONFIG_SLOB) 43 43 static inline bool list_lru_memcg_aware(struct list_lru *lru) 44 44 { 45 45 /* ··· 104 104 { 105 105 return &nlru->lru; 106 106 } 107 - #endif /* CONFIG_MEMCG_KMEM */ 107 + #endif /* CONFIG_MEMCG && !CONFIG_SLOB */ 108 108 109 109 bool list_lru_add(struct list_lru *lru, struct list_head *item) 110 110 { ··· 292 292 l->nr_items = 0; 293 293 } 294 294 295 - #ifdef CONFIG_MEMCG_KMEM 295 + #if defined(CONFIG_MEMCG) && !defined(CONFIG_SLOB) 296 296 static void __memcg_destroy_list_lru_node(struct list_lru_memcg *memcg_lrus, 297 297 int begin, int end) 298 298 { ··· 529 529 static void memcg_destroy_list_lru(struct list_lru *lru) 530 530 { 531 531 } 532 - #endif /* CONFIG_MEMCG_KMEM */ 532 + #endif /* CONFIG_MEMCG && !CONFIG_SLOB */ 533 533 534 534 int __list_lru_init(struct list_lru *lru, bool memcg_aware, 535 535 struct lock_class_key *key)
+518 -354
mm/memcontrol.c
··· 66 66 #include "internal.h" 67 67 #include <net/sock.h> 68 68 #include <net/ip.h> 69 - #include <net/tcp_memcontrol.h> 70 69 #include "slab.h" 71 70 72 71 #include <asm/uaccess.h> ··· 81 82 82 83 /* Socket memory accounting disabled? */ 83 84 static bool cgroup_memory_nosocket; 85 + 86 + /* Kernel memory accounting disabled? */ 87 + static bool cgroup_memory_nokmem; 84 88 85 89 /* Whether the swap controller is active */ 86 90 #ifdef CONFIG_MEMCG_SWAP ··· 241 239 _MEMSWAP, 242 240 _OOM_TYPE, 243 241 _KMEM, 242 + _TCP, 244 243 }; 245 244 246 245 #define MEMFILE_PRIVATE(x, val) ((x) << 16 | (val)) ··· 249 246 #define MEMFILE_ATTR(val) ((val) & 0xffff) 250 247 /* Used for OOM nofiier */ 251 248 #define OOM_CONTROL (0) 252 - 253 - /* 254 - * The memcg_create_mutex will be held whenever a new cgroup is created. 255 - * As a consequence, any change that needs to protect against new child cgroups 256 - * appearing has to hold it as well. 257 - */ 258 - static DEFINE_MUTEX(memcg_create_mutex); 259 249 260 250 /* Some nice accessors for the vmpressure. */ 261 251 struct vmpressure *memcg_to_vmpressure(struct mem_cgroup *memcg) ··· 293 297 return mem_cgroup_from_css(css); 294 298 } 295 299 296 - #ifdef CONFIG_MEMCG_KMEM 300 + #ifndef CONFIG_SLOB 297 301 /* 298 302 * This will be the memcg's index in each cache's ->memcg_params.memcg_caches. 299 303 * The main reason for not using cgroup id for this: ··· 345 349 DEFINE_STATIC_KEY_FALSE(memcg_kmem_enabled_key); 346 350 EXPORT_SYMBOL(memcg_kmem_enabled_key); 347 351 348 - #endif /* CONFIG_MEMCG_KMEM */ 352 + #endif /* !CONFIG_SLOB */ 349 353 350 354 static struct mem_cgroup_per_zone * 351 355 mem_cgroup_zone_zoneinfo(struct mem_cgroup *memcg, struct zone *zone) ··· 366 370 * 367 371 * If memcg is bound to a traditional hierarchy, the css of root_mem_cgroup 368 372 * is returned. 
369 - * 370 - * XXX: The above description of behavior on the default hierarchy isn't 371 - * strictly true yet as replace_page_cache_page() can modify the 372 - * association before @page is released even on the default hierarchy; 373 - * however, the current and planned usages don't mix the the two functions 374 - * and replace_page_cache_page() will soon be updated to make the invariant 375 - * actually true. 376 373 */ 377 374 struct cgroup_subsys_state *mem_cgroup_css_from_page(struct page *page) 378 375 { ··· 885 896 if (css == &root->css) 886 897 break; 887 898 888 - if (css_tryget(css)) { 889 - /* 890 - * Make sure the memcg is initialized: 891 - * mem_cgroup_css_online() orders the the 892 - * initialization against setting the flag. 893 - */ 894 - if (smp_load_acquire(&memcg->initialized)) 895 - break; 896 - 897 - css_put(css); 898 - } 899 + if (css_tryget(css)) 900 + break; 899 901 900 902 memcg = NULL; 901 903 } ··· 1213 1233 pr_cont(":"); 1214 1234 1215 1235 for (i = 0; i < MEM_CGROUP_STAT_NSTATS; i++) { 1216 - if (i == MEM_CGROUP_STAT_SWAP && !do_memsw_account()) 1236 + if (i == MEM_CGROUP_STAT_SWAP && !do_swap_account) 1217 1237 continue; 1218 1238 pr_cont(" %s:%luKB", mem_cgroup_stat_names[i], 1219 1239 K(mem_cgroup_read_stat(iter, i))); ··· 1252 1272 limit = memcg->memory.limit; 1253 1273 if (mem_cgroup_swappiness(memcg)) { 1254 1274 unsigned long memsw_limit; 1275 + unsigned long swap_limit; 1255 1276 1256 1277 memsw_limit = memcg->memsw.limit; 1257 - limit = min(limit + total_swap_pages, memsw_limit); 1278 + swap_limit = memcg->swap.limit; 1279 + swap_limit = min(swap_limit, (unsigned long)total_swap_pages); 1280 + limit = min(limit + swap_limit, memsw_limit); 1258 1281 } 1259 1282 return limit; 1260 1283 } ··· 2186 2203 unlock_page_lru(page, isolated); 2187 2204 } 2188 2205 2189 - #ifdef CONFIG_MEMCG_KMEM 2206 + #ifndef CONFIG_SLOB 2190 2207 static int memcg_alloc_cache_id(void) 2191 2208 { 2192 2209 int id, size; ··· 2361 2378 struct 
page_counter *counter; 2362 2379 int ret; 2363 2380 2364 - if (!memcg_kmem_is_active(memcg)) 2381 + if (!memcg_kmem_online(memcg)) 2365 2382 return 0; 2366 2383 2367 - if (!page_counter_try_charge(&memcg->kmem, nr_pages, &counter)) 2368 - return -ENOMEM; 2369 - 2370 2384 ret = try_charge(memcg, gfp, nr_pages); 2371 - if (ret) { 2372 - page_counter_uncharge(&memcg->kmem, nr_pages); 2385 + if (ret) 2373 2386 return ret; 2387 + 2388 + if (!cgroup_subsys_on_dfl(memory_cgrp_subsys) && 2389 + !page_counter_try_charge(&memcg->kmem, nr_pages, &counter)) { 2390 + cancel_charge(memcg, nr_pages); 2391 + return -ENOMEM; 2374 2392 } 2375 2393 2376 2394 page->mem_cgroup = memcg; ··· 2400 2416 2401 2417 VM_BUG_ON_PAGE(mem_cgroup_is_root(memcg), page); 2402 2418 2403 - page_counter_uncharge(&memcg->kmem, nr_pages); 2419 + if (!cgroup_subsys_on_dfl(memory_cgrp_subsys)) 2420 + page_counter_uncharge(&memcg->kmem, nr_pages); 2421 + 2404 2422 page_counter_uncharge(&memcg->memory, nr_pages); 2405 2423 if (do_memsw_account()) 2406 2424 page_counter_uncharge(&memcg->memsw, nr_pages); ··· 2410 2424 page->mem_cgroup = NULL; 2411 2425 css_put_many(&memcg->css, nr_pages); 2412 2426 } 2413 - #endif /* CONFIG_MEMCG_KMEM */ 2427 + #endif /* !CONFIG_SLOB */ 2414 2428 2415 2429 #ifdef CONFIG_TRANSPARENT_HUGEPAGE 2416 2430 ··· 2670 2684 { 2671 2685 bool ret; 2672 2686 2673 - /* 2674 - * The lock does not prevent addition or deletion of children, but 2675 - * it prevents a new child from being initialized based on this 2676 - * parent in css_online(), so it's enough to decide whether 2677 - * hierarchically inherited attributes can still be changed or not. 
2678 - */ 2679 - lockdep_assert_held(&memcg_create_mutex); 2680 - 2681 2687 rcu_read_lock(); 2682 2688 ret = css_next_child(NULL, &memcg->css); 2683 2689 rcu_read_unlock(); ··· 2732 2754 struct mem_cgroup *memcg = mem_cgroup_from_css(css); 2733 2755 struct mem_cgroup *parent_memcg = mem_cgroup_from_css(memcg->css.parent); 2734 2756 2735 - mutex_lock(&memcg_create_mutex); 2736 - 2737 2757 if (memcg->use_hierarchy == val) 2738 - goto out; 2758 + return 0; 2739 2759 2740 2760 /* 2741 2761 * If parent's use_hierarchy is set, we can't make any modifications ··· 2752 2776 } else 2753 2777 retval = -EINVAL; 2754 2778 2755 - out: 2756 - mutex_unlock(&memcg_create_mutex); 2757 - 2758 2779 return retval; 2759 2780 } 2760 2781 ··· 2763 2790 2764 2791 for_each_mem_cgroup_tree(iter, memcg) 2765 2792 val += mem_cgroup_read_stat(iter, idx); 2793 + 2794 + return val; 2795 + } 2796 + 2797 + static unsigned long tree_events(struct mem_cgroup *memcg, 2798 + enum mem_cgroup_events_index idx) 2799 + { 2800 + struct mem_cgroup *iter; 2801 + unsigned long val = 0; 2802 + 2803 + for_each_mem_cgroup_tree(iter, memcg) 2804 + val += mem_cgroup_read_events(iter, idx); 2766 2805 2767 2806 return val; 2768 2807 } ··· 2821 2836 case _KMEM: 2822 2837 counter = &memcg->kmem; 2823 2838 break; 2839 + case _TCP: 2840 + counter = &memcg->tcpmem; 2841 + break; 2824 2842 default: 2825 2843 BUG(); 2826 2844 } ··· 2848 2860 } 2849 2861 } 2850 2862 2851 - #ifdef CONFIG_MEMCG_KMEM 2852 - static int memcg_activate_kmem(struct mem_cgroup *memcg, 2853 - unsigned long nr_pages) 2863 + #ifndef CONFIG_SLOB 2864 + static int memcg_online_kmem(struct mem_cgroup *memcg) 2854 2865 { 2855 - int err = 0; 2856 2866 int memcg_id; 2857 2867 2858 2868 BUG_ON(memcg->kmemcg_id >= 0); 2859 - BUG_ON(memcg->kmem_acct_activated); 2860 - BUG_ON(memcg->kmem_acct_active); 2861 - 2862 - /* 2863 - * For simplicity, we won't allow this to be disabled. 
It also can't 2864 - * be changed if the cgroup has children already, or if tasks had 2865 - * already joined. 2866 - * 2867 - * If tasks join before we set the limit, a person looking at 2868 - * kmem.usage_in_bytes will have no way to determine when it took 2869 - * place, which makes the value quite meaningless. 2870 - * 2871 - * After it first became limited, changes in the value of the limit are 2872 - * of course permitted. 2873 - */ 2874 - mutex_lock(&memcg_create_mutex); 2875 - if (cgroup_is_populated(memcg->css.cgroup) || 2876 - (memcg->use_hierarchy && memcg_has_children(memcg))) 2877 - err = -EBUSY; 2878 - mutex_unlock(&memcg_create_mutex); 2879 - if (err) 2880 - goto out; 2869 + BUG_ON(memcg->kmem_state); 2881 2870 2882 2871 memcg_id = memcg_alloc_cache_id(); 2883 - if (memcg_id < 0) { 2884 - err = memcg_id; 2885 - goto out; 2886 - } 2887 - 2888 - /* 2889 - * We couldn't have accounted to this cgroup, because it hasn't got 2890 - * activated yet, so this should succeed. 2891 - */ 2892 - err = page_counter_limit(&memcg->kmem, nr_pages); 2893 - VM_BUG_ON(err); 2872 + if (memcg_id < 0) 2873 + return memcg_id; 2894 2874 2895 2875 static_branch_inc(&memcg_kmem_enabled_key); 2896 2876 /* 2897 - * A memory cgroup is considered kmem-active as soon as it gets 2877 + * A memory cgroup is considered kmem-online as soon as it gets 2898 2878 * kmemcg_id. Setting the id after enabling static branching will 2899 2879 * guarantee no one starts accounting before all call sites are 2900 2880 * patched. 
2901 2881 */ 2902 2882 memcg->kmemcg_id = memcg_id; 2903 - memcg->kmem_acct_activated = true; 2904 - memcg->kmem_acct_active = true; 2905 - out: 2906 - return err; 2883 + memcg->kmem_state = KMEM_ONLINE; 2884 + 2885 + return 0; 2907 2886 } 2887 + 2888 + static int memcg_propagate_kmem(struct mem_cgroup *parent, 2889 + struct mem_cgroup *memcg) 2890 + { 2891 + int ret = 0; 2892 + 2893 + mutex_lock(&memcg_limit_mutex); 2894 + /* 2895 + * If the parent cgroup is not kmem-online now, it cannot be 2896 + * onlined after this point, because it has at least one child 2897 + * already. 2898 + */ 2899 + if (memcg_kmem_online(parent) || 2900 + (cgroup_subsys_on_dfl(memory_cgrp_subsys) && !cgroup_memory_nokmem)) 2901 + ret = memcg_online_kmem(memcg); 2902 + mutex_unlock(&memcg_limit_mutex); 2903 + return ret; 2904 + } 2905 + 2906 + static void memcg_offline_kmem(struct mem_cgroup *memcg) 2907 + { 2908 + struct cgroup_subsys_state *css; 2909 + struct mem_cgroup *parent, *child; 2910 + int kmemcg_id; 2911 + 2912 + if (memcg->kmem_state != KMEM_ONLINE) 2913 + return; 2914 + /* 2915 + * Clear the online state before clearing memcg_caches array 2916 + * entries. The slab_mutex in memcg_deactivate_kmem_caches() 2917 + * guarantees that no cache will be created for this cgroup 2918 + * after we are done (see memcg_create_kmem_cache()). 2919 + */ 2920 + memcg->kmem_state = KMEM_ALLOCATED; 2921 + 2922 + memcg_deactivate_kmem_caches(memcg); 2923 + 2924 + kmemcg_id = memcg->kmemcg_id; 2925 + BUG_ON(kmemcg_id < 0); 2926 + 2927 + parent = parent_mem_cgroup(memcg); 2928 + if (!parent) 2929 + parent = root_mem_cgroup; 2930 + 2931 + /* 2932 + * Change kmemcg_id of this cgroup and all its descendants to the 2933 + * parent's id, and then move all entries from this cgroup's list_lrus 2934 + * to ones of the parent. After we have finished, all list_lrus 2935 + * corresponding to this cgroup are guaranteed to remain empty. 
The 2936 + * ordering is imposed by list_lru_node->lock taken by 2937 + * memcg_drain_all_list_lrus(). 2938 + */ 2939 + css_for_each_descendant_pre(css, &memcg->css) { 2940 + child = mem_cgroup_from_css(css); 2941 + BUG_ON(child->kmemcg_id != kmemcg_id); 2942 + child->kmemcg_id = parent->kmemcg_id; 2943 + if (!memcg->use_hierarchy) 2944 + break; 2945 + } 2946 + memcg_drain_all_list_lrus(kmemcg_id, parent->kmemcg_id); 2947 + 2948 + memcg_free_cache_id(kmemcg_id); 2949 + } 2950 + 2951 + static void memcg_free_kmem(struct mem_cgroup *memcg) 2952 + { 2953 + /* css_alloc() failed, offlining didn't happen */ 2954 + if (unlikely(memcg->kmem_state == KMEM_ONLINE)) 2955 + memcg_offline_kmem(memcg); 2956 + 2957 + if (memcg->kmem_state == KMEM_ALLOCATED) { 2958 + memcg_destroy_kmem_caches(memcg); 2959 + static_branch_dec(&memcg_kmem_enabled_key); 2960 + WARN_ON(page_counter_read(&memcg->kmem)); 2961 + } 2962 + } 2963 + #else 2964 + static int memcg_propagate_kmem(struct mem_cgroup *parent, struct mem_cgroup *memcg) 2965 + { 2966 + return 0; 2967 + } 2968 + static int memcg_online_kmem(struct mem_cgroup *memcg) 2969 + { 2970 + return 0; 2971 + } 2972 + static void memcg_offline_kmem(struct mem_cgroup *memcg) 2973 + { 2974 + } 2975 + static void memcg_free_kmem(struct mem_cgroup *memcg) 2976 + { 2977 + } 2978 + #endif /* !CONFIG_SLOB */ 2908 2979 2909 2980 static int memcg_update_kmem_limit(struct mem_cgroup *memcg, 2910 2981 unsigned long limit) 2982 + { 2983 + int ret = 0; 2984 + 2985 + mutex_lock(&memcg_limit_mutex); 2986 + /* Top-level cgroup doesn't propagate from root */ 2987 + if (!memcg_kmem_online(memcg)) { 2988 + if (cgroup_is_populated(memcg->css.cgroup) || 2989 + (memcg->use_hierarchy && memcg_has_children(memcg))) 2990 + ret = -EBUSY; 2991 + if (ret) 2992 + goto out; 2993 + ret = memcg_online_kmem(memcg); 2994 + if (ret) 2995 + goto out; 2996 + } 2997 + ret = page_counter_limit(&memcg->kmem, limit); 2998 + out: 2999 + mutex_unlock(&memcg_limit_mutex); 3000 + return 
ret; 3001 + } 3002 + 3003 + static int memcg_update_tcp_limit(struct mem_cgroup *memcg, unsigned long limit) 2911 3004 { 2912 3005 int ret; 2913 3006 2914 3007 mutex_lock(&memcg_limit_mutex); 2915 - if (!memcg_kmem_is_active(memcg)) 2916 - ret = memcg_activate_kmem(memcg, limit); 2917 - else 2918 - ret = page_counter_limit(&memcg->kmem, limit); 3008 + 3009 + ret = page_counter_limit(&memcg->tcpmem, limit); 3010 + if (ret) 3011 + goto out; 3012 + 3013 + if (!memcg->tcpmem_active) { 3014 + /* 3015 + * The active flag needs to be written after the static_key 3016 + * update. This is what guarantees that the socket activation 3017 + * function is the last one to run. See sock_update_memcg() for 3018 + * details, and note that we don't mark any socket as belonging 3019 + * to this memcg until that flag is up. 3020 + * 3021 + * We need to do this, because static_keys will span multiple 3022 + * sites, but we can't control their order. If we mark a socket 3023 + * as accounted, but the accounting functions are not patched in 3024 + * yet, we'll lose accounting. 3025 + * 3026 + * We never race with the readers in sock_update_memcg(), 3027 + * because when this value change, the code to process it is not 3028 + * patched in yet. 3029 + */ 3030 + static_branch_inc(&memcg_sockets_enabled_key); 3031 + memcg->tcpmem_active = true; 3032 + } 3033 + out: 2919 3034 mutex_unlock(&memcg_limit_mutex); 2920 3035 return ret; 2921 3036 } 2922 - 2923 - static int memcg_propagate_kmem(struct mem_cgroup *memcg) 2924 - { 2925 - int ret = 0; 2926 - struct mem_cgroup *parent = parent_mem_cgroup(memcg); 2927 - 2928 - if (!parent) 2929 - return 0; 2930 - 2931 - mutex_lock(&memcg_limit_mutex); 2932 - /* 2933 - * If the parent cgroup is not kmem-active now, it cannot be activated 2934 - * after this point, because it has at least one child already. 
2935 - */ 2936 - if (memcg_kmem_is_active(parent)) 2937 - ret = memcg_activate_kmem(memcg, PAGE_COUNTER_MAX); 2938 - mutex_unlock(&memcg_limit_mutex); 2939 - return ret; 2940 - } 2941 - #else 2942 - static int memcg_update_kmem_limit(struct mem_cgroup *memcg, 2943 - unsigned long limit) 2944 - { 2945 - return -EINVAL; 2946 - } 2947 - #endif /* CONFIG_MEMCG_KMEM */ 2948 3037 2949 3038 /* 2950 3039 * The user of this function is... ··· 3055 2990 case _KMEM: 3056 2991 ret = memcg_update_kmem_limit(memcg, nr_pages); 3057 2992 break; 2993 + case _TCP: 2994 + ret = memcg_update_tcp_limit(memcg, nr_pages); 2995 + break; 3058 2996 } 3059 2997 break; 3060 2998 case RES_SOFT_LIMIT: ··· 3083 3015 break; 3084 3016 case _KMEM: 3085 3017 counter = &memcg->kmem; 3018 + break; 3019 + case _TCP: 3020 + counter = &memcg->tcpmem; 3086 3021 break; 3087 3022 default: 3088 3023 BUG(); ··· 3653 3582 return 0; 3654 3583 } 3655 3584 3656 - #ifdef CONFIG_MEMCG_KMEM 3657 - static int memcg_init_kmem(struct mem_cgroup *memcg, struct cgroup_subsys *ss) 3658 - { 3659 - int ret; 3660 - 3661 - ret = memcg_propagate_kmem(memcg); 3662 - if (ret) 3663 - return ret; 3664 - 3665 - return tcp_init_cgroup(memcg, ss); 3666 - } 3667 - 3668 - static void memcg_deactivate_kmem(struct mem_cgroup *memcg) 3669 - { 3670 - struct cgroup_subsys_state *css; 3671 - struct mem_cgroup *parent, *child; 3672 - int kmemcg_id; 3673 - 3674 - if (!memcg->kmem_acct_active) 3675 - return; 3676 - 3677 - /* 3678 - * Clear the 'active' flag before clearing memcg_caches arrays entries. 3679 - * Since we take the slab_mutex in memcg_deactivate_kmem_caches(), it 3680 - * guarantees no cache will be created for this cgroup after we are 3681 - * done (see memcg_create_kmem_cache()). 
3682 - */ 3683 - memcg->kmem_acct_active = false; 3684 - 3685 - memcg_deactivate_kmem_caches(memcg); 3686 - 3687 - kmemcg_id = memcg->kmemcg_id; 3688 - BUG_ON(kmemcg_id < 0); 3689 - 3690 - parent = parent_mem_cgroup(memcg); 3691 - if (!parent) 3692 - parent = root_mem_cgroup; 3693 - 3694 - /* 3695 - * Change kmemcg_id of this cgroup and all its descendants to the 3696 - * parent's id, and then move all entries from this cgroup's list_lrus 3697 - * to ones of the parent. After we have finished, all list_lrus 3698 - * corresponding to this cgroup are guaranteed to remain empty. The 3699 - * ordering is imposed by list_lru_node->lock taken by 3700 - * memcg_drain_all_list_lrus(). 3701 - */ 3702 - css_for_each_descendant_pre(css, &memcg->css) { 3703 - child = mem_cgroup_from_css(css); 3704 - BUG_ON(child->kmemcg_id != kmemcg_id); 3705 - child->kmemcg_id = parent->kmemcg_id; 3706 - if (!memcg->use_hierarchy) 3707 - break; 3708 - } 3709 - memcg_drain_all_list_lrus(kmemcg_id, parent->kmemcg_id); 3710 - 3711 - memcg_free_cache_id(kmemcg_id); 3712 - } 3713 - 3714 - static void memcg_destroy_kmem(struct mem_cgroup *memcg) 3715 - { 3716 - if (memcg->kmem_acct_activated) { 3717 - memcg_destroy_kmem_caches(memcg); 3718 - static_branch_dec(&memcg_kmem_enabled_key); 3719 - WARN_ON(page_counter_read(&memcg->kmem)); 3720 - } 3721 - tcp_destroy_cgroup(memcg); 3722 - } 3723 - #else 3724 - static int memcg_init_kmem(struct mem_cgroup *memcg, struct cgroup_subsys *ss) 3725 - { 3726 - return 0; 3727 - } 3728 - 3729 - static void memcg_deactivate_kmem(struct mem_cgroup *memcg) 3730 - { 3731 - } 3732 - 3733 - static void memcg_destroy_kmem(struct mem_cgroup *memcg) 3734 - { 3735 - } 3736 - #endif 3737 - 3738 3585 #ifdef CONFIG_CGROUP_WRITEBACK 3739 3586 3740 3587 struct list_head *mem_cgroup_cgwb_list(struct mem_cgroup *memcg) ··· 4040 4051 .seq_show = memcg_numa_stat_show, 4041 4052 }, 4042 4053 #endif 4043 - #ifdef CONFIG_MEMCG_KMEM 4044 4054 { 4045 4055 .name = "kmem.limit_in_bytes", 
4046 4056 .private = MEMFILE_PRIVATE(_KMEM, RES_LIMIT), ··· 4072 4084 .seq_show = memcg_slab_show, 4073 4085 }, 4074 4086 #endif 4075 - #endif 4087 + { 4088 + .name = "kmem.tcp.limit_in_bytes", 4089 + .private = MEMFILE_PRIVATE(_TCP, RES_LIMIT), 4090 + .write = mem_cgroup_write, 4091 + .read_u64 = mem_cgroup_read_u64, 4092 + }, 4093 + { 4094 + .name = "kmem.tcp.usage_in_bytes", 4095 + .private = MEMFILE_PRIVATE(_TCP, RES_USAGE), 4096 + .read_u64 = mem_cgroup_read_u64, 4097 + }, 4098 + { 4099 + .name = "kmem.tcp.failcnt", 4100 + .private = MEMFILE_PRIVATE(_TCP, RES_FAILCNT), 4101 + .write = mem_cgroup_reset, 4102 + .read_u64 = mem_cgroup_read_u64, 4103 + }, 4104 + { 4105 + .name = "kmem.tcp.max_usage_in_bytes", 4106 + .private = MEMFILE_PRIVATE(_TCP, RES_MAX_USAGE), 4107 + .write = mem_cgroup_reset, 4108 + .read_u64 = mem_cgroup_read_u64, 4109 + }, 4076 4110 { }, /* terminate */ 4077 4111 }; 4078 4112 ··· 4133 4123 kfree(memcg->nodeinfo[node]); 4134 4124 } 4135 4125 4126 + static void mem_cgroup_free(struct mem_cgroup *memcg) 4127 + { 4128 + int node; 4129 + 4130 + memcg_wb_domain_exit(memcg); 4131 + for_each_node(node) 4132 + free_mem_cgroup_per_zone_info(memcg, node); 4133 + free_percpu(memcg->stat); 4134 + kfree(memcg); 4135 + } 4136 + 4136 4137 static struct mem_cgroup *mem_cgroup_alloc(void) 4137 4138 { 4138 4139 struct mem_cgroup *memcg; 4139 4140 size_t size; 4141 + int node; 4140 4142 4141 4143 size = sizeof(struct mem_cgroup); 4142 4144 size += nr_node_ids * sizeof(struct mem_cgroup_per_node *); ··· 4159 4137 4160 4138 memcg->stat = alloc_percpu(struct mem_cgroup_stat_cpu); 4161 4139 if (!memcg->stat) 4162 - goto out_free; 4163 - 4164 - if (memcg_wb_domain_init(memcg, GFP_KERNEL)) 4165 - goto out_free_stat; 4166 - 4167 - return memcg; 4168 - 4169 - out_free_stat: 4170 - free_percpu(memcg->stat); 4171 - out_free: 4172 - kfree(memcg); 4173 - return NULL; 4174 - } 4175 - 4176 - /* 4177 - * At destroying mem_cgroup, references from swap_cgroup can remain. 
4178 - * (scanning all at force_empty is too costly...) 4179 - * 4180 - * Instead of clearing all references at force_empty, we remember 4181 - * the number of reference from swap_cgroup and free mem_cgroup when 4182 - * it goes down to 0. 4183 - * 4184 - * Removal of cgroup itself succeeds regardless of refs from swap. 4185 - */ 4186 - 4187 - static void __mem_cgroup_free(struct mem_cgroup *memcg) 4188 - { 4189 - int node; 4190 - 4191 - cancel_work_sync(&memcg->high_work); 4192 - 4193 - mem_cgroup_remove_from_trees(memcg); 4194 - 4195 - for_each_node(node) 4196 - free_mem_cgroup_per_zone_info(memcg, node); 4197 - 4198 - free_percpu(memcg->stat); 4199 - memcg_wb_domain_exit(memcg); 4200 - kfree(memcg); 4201 - } 4202 - 4203 - static struct cgroup_subsys_state * __ref 4204 - mem_cgroup_css_alloc(struct cgroup_subsys_state *parent_css) 4205 - { 4206 - struct mem_cgroup *memcg; 4207 - long error = -ENOMEM; 4208 - int node; 4209 - 4210 - memcg = mem_cgroup_alloc(); 4211 - if (!memcg) 4212 - return ERR_PTR(error); 4140 + goto fail; 4213 4141 4214 4142 for_each_node(node) 4215 4143 if (alloc_mem_cgroup_per_zone_info(memcg, node)) 4216 - goto free_out; 4144 + goto fail; 4217 4145 4218 - /* root ? 
*/ 4219 - if (parent_css == NULL) { 4220 - root_mem_cgroup = memcg; 4221 - page_counter_init(&memcg->memory, NULL); 4222 - memcg->high = PAGE_COUNTER_MAX; 4223 - memcg->soft_limit = PAGE_COUNTER_MAX; 4224 - page_counter_init(&memcg->memsw, NULL); 4225 - page_counter_init(&memcg->kmem, NULL); 4226 - } 4146 + if (memcg_wb_domain_init(memcg, GFP_KERNEL)) 4147 + goto fail; 4227 4148 4228 4149 INIT_WORK(&memcg->high_work, high_work_func); 4229 4150 memcg->last_scanned_node = MAX_NUMNODES; 4230 4151 INIT_LIST_HEAD(&memcg->oom_notify); 4231 - memcg->move_charge_at_immigrate = 0; 4232 4152 mutex_init(&memcg->thresholds_lock); 4233 4153 spin_lock_init(&memcg->move_lock); 4234 4154 vmpressure_init(&memcg->vmpressure); 4235 4155 INIT_LIST_HEAD(&memcg->event_list); 4236 4156 spin_lock_init(&memcg->event_list_lock); 4237 - #ifdef CONFIG_MEMCG_KMEM 4157 + memcg->socket_pressure = jiffies; 4158 + #ifndef CONFIG_SLOB 4238 4159 memcg->kmemcg_id = -1; 4239 4160 #endif 4240 4161 #ifdef CONFIG_CGROUP_WRITEBACK 4241 4162 INIT_LIST_HEAD(&memcg->cgwb_list); 4242 4163 #endif 4243 - #ifdef CONFIG_INET 4244 - memcg->socket_pressure = jiffies; 4245 - #endif 4246 - return &memcg->css; 4247 - 4248 - free_out: 4249 - __mem_cgroup_free(memcg); 4250 - return ERR_PTR(error); 4164 + return memcg; 4165 + fail: 4166 + mem_cgroup_free(memcg); 4167 + return NULL; 4251 4168 } 4252 4169 4253 - static int 4254 - mem_cgroup_css_online(struct cgroup_subsys_state *css) 4170 + static struct cgroup_subsys_state * __ref 4171 + mem_cgroup_css_alloc(struct cgroup_subsys_state *parent_css) 4255 4172 { 4256 - struct mem_cgroup *memcg = mem_cgroup_from_css(css); 4257 - struct mem_cgroup *parent = mem_cgroup_from_css(css->parent); 4258 - int ret; 4173 + struct mem_cgroup *parent = mem_cgroup_from_css(parent_css); 4174 + struct mem_cgroup *memcg; 4175 + long error = -ENOMEM; 4259 4176 4260 - if (css->id > MEM_CGROUP_ID_MAX) 4261 - return -ENOSPC; 4177 + memcg = mem_cgroup_alloc(); 4178 + if (!memcg) 4179 + return 
ERR_PTR(error); 4262 4180 4263 - if (!parent) 4264 - return 0; 4265 - 4266 - mutex_lock(&memcg_create_mutex); 4267 - 4268 - memcg->use_hierarchy = parent->use_hierarchy; 4269 - memcg->oom_kill_disable = parent->oom_kill_disable; 4270 - memcg->swappiness = mem_cgroup_swappiness(parent); 4271 - 4272 - if (parent->use_hierarchy) { 4181 + memcg->high = PAGE_COUNTER_MAX; 4182 + memcg->soft_limit = PAGE_COUNTER_MAX; 4183 + if (parent) { 4184 + memcg->swappiness = mem_cgroup_swappiness(parent); 4185 + memcg->oom_kill_disable = parent->oom_kill_disable; 4186 + } 4187 + if (parent && parent->use_hierarchy) { 4188 + memcg->use_hierarchy = true; 4273 4189 page_counter_init(&memcg->memory, &parent->memory); 4274 - memcg->high = PAGE_COUNTER_MAX; 4275 - memcg->soft_limit = PAGE_COUNTER_MAX; 4190 + page_counter_init(&memcg->swap, &parent->swap); 4276 4191 page_counter_init(&memcg->memsw, &parent->memsw); 4277 4192 page_counter_init(&memcg->kmem, &parent->kmem); 4278 - 4279 - /* 4280 - * No need to take a reference to the parent because cgroup 4281 - * core guarantees its existence. 
4282 - */ 4193 + page_counter_init(&memcg->tcpmem, &parent->tcpmem); 4283 4194 } else { 4284 4195 page_counter_init(&memcg->memory, NULL); 4285 - memcg->high = PAGE_COUNTER_MAX; 4286 - memcg->soft_limit = PAGE_COUNTER_MAX; 4196 + page_counter_init(&memcg->swap, NULL); 4287 4197 page_counter_init(&memcg->memsw, NULL); 4288 4198 page_counter_init(&memcg->kmem, NULL); 4199 + page_counter_init(&memcg->tcpmem, NULL); 4289 4200 /* 4290 4201 * Deeper hierachy with use_hierarchy == false doesn't make 4291 4202 * much sense so let cgroup subsystem know about this ··· 4227 4272 if (parent != root_mem_cgroup) 4228 4273 memory_cgrp_subsys.broken_hierarchy = true; 4229 4274 } 4230 - mutex_unlock(&memcg_create_mutex); 4231 4275 4232 - ret = memcg_init_kmem(memcg, &memory_cgrp_subsys); 4233 - if (ret) 4234 - return ret; 4276 + /* The following stuff does not apply to the root */ 4277 + if (!parent) { 4278 + root_mem_cgroup = memcg; 4279 + return &memcg->css; 4280 + } 4235 4281 4236 - #ifdef CONFIG_INET 4282 + error = memcg_propagate_kmem(parent, memcg); 4283 + if (error) 4284 + goto fail; 4285 + 4237 4286 if (cgroup_subsys_on_dfl(memory_cgrp_subsys) && !cgroup_memory_nosocket) 4238 4287 static_branch_inc(&memcg_sockets_enabled_key); 4239 - #endif 4240 4288 4241 - /* 4242 - * Make sure the memcg is initialized: mem_cgroup_iter() 4243 - * orders reading memcg->initialized against its callers 4244 - * reading the memcg members. 
4245 - */ 4246 - smp_store_release(&memcg->initialized, 1); 4289 + return &memcg->css; 4290 + fail: 4291 + mem_cgroup_free(memcg); 4292 + return NULL; 4293 + } 4294 + 4295 + static int 4296 + mem_cgroup_css_online(struct cgroup_subsys_state *css) 4297 + { 4298 + if (css->id > MEM_CGROUP_ID_MAX) 4299 + return -ENOSPC; 4247 4300 4248 4301 return 0; 4249 4302 } ··· 4273 4310 } 4274 4311 spin_unlock(&memcg->event_list_lock); 4275 4312 4276 - vmpressure_cleanup(&memcg->vmpressure); 4277 - 4278 - memcg_deactivate_kmem(memcg); 4279 - 4313 + memcg_offline_kmem(memcg); 4280 4314 wb_memcg_offline(memcg); 4281 4315 } 4282 4316 ··· 4288 4328 { 4289 4329 struct mem_cgroup *memcg = mem_cgroup_from_css(css); 4290 4330 4291 - memcg_destroy_kmem(memcg); 4292 - #ifdef CONFIG_INET 4293 4331 if (cgroup_subsys_on_dfl(memory_cgrp_subsys) && !cgroup_memory_nosocket) 4294 4332 static_branch_dec(&memcg_sockets_enabled_key); 4295 - #endif 4296 - __mem_cgroup_free(memcg); 4333 + 4334 + if (!cgroup_subsys_on_dfl(memory_cgrp_subsys) && memcg->tcpmem_active) 4335 + static_branch_dec(&memcg_sockets_enabled_key); 4336 + 4337 + vmpressure_cleanup(&memcg->vmpressure); 4338 + cancel_work_sync(&memcg->high_work); 4339 + mem_cgroup_remove_from_trees(memcg); 4340 + memcg_free_kmem(memcg); 4341 + mem_cgroup_free(memcg); 4297 4342 } 4298 4343 4299 4344 /** ··· 5108 5143 return 0; 5109 5144 } 5110 5145 5146 + static int memory_stat_show(struct seq_file *m, void *v) 5147 + { 5148 + struct mem_cgroup *memcg = mem_cgroup_from_css(seq_css(m)); 5149 + int i; 5150 + 5151 + /* 5152 + * Provide statistics on the state of the memory subsystem as 5153 + * well as cumulative event counters that show past behavior. 
5154 + * 5155 + * This list is ordered following a combination of these gradients: 5156 + * 1) generic big picture -> specifics and details 5157 + * 2) reflecting userspace activity -> reflecting kernel heuristics 5158 + * 5159 + * Current memory state: 5160 + */ 5161 + 5162 + seq_printf(m, "anon %llu\n", 5163 + (u64)tree_stat(memcg, MEM_CGROUP_STAT_RSS) * PAGE_SIZE); 5164 + seq_printf(m, "file %llu\n", 5165 + (u64)tree_stat(memcg, MEM_CGROUP_STAT_CACHE) * PAGE_SIZE); 5166 + seq_printf(m, "sock %llu\n", 5167 + (u64)tree_stat(memcg, MEMCG_SOCK) * PAGE_SIZE); 5168 + 5169 + seq_printf(m, "file_mapped %llu\n", 5170 + (u64)tree_stat(memcg, MEM_CGROUP_STAT_FILE_MAPPED) * 5171 + PAGE_SIZE); 5172 + seq_printf(m, "file_dirty %llu\n", 5173 + (u64)tree_stat(memcg, MEM_CGROUP_STAT_DIRTY) * 5174 + PAGE_SIZE); 5175 + seq_printf(m, "file_writeback %llu\n", 5176 + (u64)tree_stat(memcg, MEM_CGROUP_STAT_WRITEBACK) * 5177 + PAGE_SIZE); 5178 + 5179 + for (i = 0; i < NR_LRU_LISTS; i++) { 5180 + struct mem_cgroup *mi; 5181 + unsigned long val = 0; 5182 + 5183 + for_each_mem_cgroup_tree(mi, memcg) 5184 + val += mem_cgroup_nr_lru_pages(mi, BIT(i)); 5185 + seq_printf(m, "%s %llu\n", 5186 + mem_cgroup_lru_names[i], (u64)val * PAGE_SIZE); 5187 + } 5188 + 5189 + /* Accumulated memory events */ 5190 + 5191 + seq_printf(m, "pgfault %lu\n", 5192 + tree_events(memcg, MEM_CGROUP_EVENTS_PGFAULT)); 5193 + seq_printf(m, "pgmajfault %lu\n", 5194 + tree_events(memcg, MEM_CGROUP_EVENTS_PGMAJFAULT)); 5195 + 5196 + return 0; 5197 + } 5198 + 5111 5199 static struct cftype memory_files[] = { 5112 5200 { 5113 5201 .name = "current", ··· 5190 5172 .flags = CFTYPE_NOT_ON_ROOT, 5191 5173 .file_offset = offsetof(struct mem_cgroup, events_file), 5192 5174 .seq_show = memory_events_show, 5175 + }, 5176 + { 5177 + .name = "stat", 5178 + .flags = CFTYPE_NOT_ON_ROOT, 5179 + .seq_show = memory_stat_show, 5193 5180 }, 5194 5181 { } /* terminate */ 5195 5182 }; ··· 5292 5269 if (page->mem_cgroup) 5293 5270 goto out; 
5294 5271 5295 - if (do_memsw_account()) { 5272 + if (do_swap_account) { 5296 5273 swp_entry_t ent = { .val = page_private(page), }; 5297 5274 unsigned short id = lookup_swap_cgroup_id(ent); 5298 5275 ··· 5527 5504 void mem_cgroup_replace_page(struct page *oldpage, struct page *newpage) 5528 5505 { 5529 5506 struct mem_cgroup *memcg; 5530 - int isolated; 5507 + unsigned int nr_pages; 5508 + bool compound; 5531 5509 5532 5510 VM_BUG_ON_PAGE(!PageLocked(oldpage), oldpage); 5533 5511 VM_BUG_ON_PAGE(!PageLocked(newpage), newpage); ··· 5548 5524 if (!memcg) 5549 5525 return; 5550 5526 5551 - lock_page_lru(oldpage, &isolated); 5552 - oldpage->mem_cgroup = NULL; 5553 - unlock_page_lru(oldpage, isolated); 5527 + /* Force-charge the new page. The old one will be freed soon */ 5528 + compound = PageTransHuge(newpage); 5529 + nr_pages = compound ? hpage_nr_pages(newpage) : 1; 5530 + 5531 + page_counter_charge(&memcg->memory, nr_pages); 5532 + if (do_memsw_account()) 5533 + page_counter_charge(&memcg->memsw, nr_pages); 5534 + css_get_many(&memcg->css, nr_pages); 5554 5535 5555 5536 commit_charge(newpage, memcg, true); 5556 - } 5557 5537 5558 - #ifdef CONFIG_INET 5538 + local_irq_disable(); 5539 + mem_cgroup_charge_statistics(memcg, newpage, compound, nr_pages); 5540 + memcg_check_events(memcg, newpage); 5541 + local_irq_enable(); 5542 + } 5559 5543 5560 5544 DEFINE_STATIC_KEY_FALSE(memcg_sockets_enabled_key); 5561 5545 EXPORT_SYMBOL(memcg_sockets_enabled_key); ··· 5590 5558 memcg = mem_cgroup_from_task(current); 5591 5559 if (memcg == root_mem_cgroup) 5592 5560 goto out; 5593 - #ifdef CONFIG_MEMCG_KMEM 5594 - if (!cgroup_subsys_on_dfl(memory_cgrp_subsys) && !memcg->tcp_mem.active) 5561 + if (!cgroup_subsys_on_dfl(memory_cgrp_subsys) && !memcg->tcpmem_active) 5595 5562 goto out; 5596 - #endif 5597 5563 if (css_tryget_online(&memcg->css)) 5598 5564 sk->sk_memcg = memcg; 5599 5565 out: ··· 5617 5587 { 5618 5588 gfp_t gfp_mask = GFP_KERNEL; 5619 5589 5620 - #ifdef 
CONFIG_MEMCG_KMEM 5621 5590 if (!cgroup_subsys_on_dfl(memory_cgrp_subsys)) { 5622 - struct page_counter *counter; 5591 + struct page_counter *fail; 5623 5592 5624 - if (page_counter_try_charge(&memcg->tcp_mem.memory_allocated, 5625 - nr_pages, &counter)) { 5626 - memcg->tcp_mem.memory_pressure = 0; 5593 + if (page_counter_try_charge(&memcg->tcpmem, nr_pages, &fail)) { 5594 + memcg->tcpmem_pressure = 0; 5627 5595 return true; 5628 5596 } 5629 - page_counter_charge(&memcg->tcp_mem.memory_allocated, nr_pages); 5630 - memcg->tcp_mem.memory_pressure = 1; 5597 + page_counter_charge(&memcg->tcpmem, nr_pages); 5598 + memcg->tcpmem_pressure = 1; 5631 5599 return false; 5632 5600 } 5633 - #endif 5601 + 5634 5602 /* Don't block in the packet receive path */ 5635 5603 if (in_softirq()) 5636 5604 gfp_mask = GFP_NOWAIT; 5605 + 5606 + this_cpu_add(memcg->stat->count[MEMCG_SOCK], nr_pages); 5637 5607 5638 5608 if (try_charge(memcg, gfp_mask, nr_pages) == 0) 5639 5609 return true; ··· 5649 5619 */ 5650 5620 void mem_cgroup_uncharge_skmem(struct mem_cgroup *memcg, unsigned int nr_pages) 5651 5621 { 5652 - #ifdef CONFIG_MEMCG_KMEM 5653 5622 if (!cgroup_subsys_on_dfl(memory_cgrp_subsys)) { 5654 - page_counter_uncharge(&memcg->tcp_mem.memory_allocated, 5655 - nr_pages); 5623 + page_counter_uncharge(&memcg->tcpmem, nr_pages); 5656 5624 return; 5657 5625 } 5658 - #endif 5626 + 5627 + this_cpu_sub(memcg->stat->count[MEMCG_SOCK], nr_pages); 5628 + 5659 5629 page_counter_uncharge(&memcg->memory, nr_pages); 5660 5630 css_put_many(&memcg->css, nr_pages); 5661 5631 } 5662 - 5663 - #endif /* CONFIG_INET */ 5664 5632 5665 5633 static int __init cgroup_memory(char *s) 5666 5634 { ··· 5669 5641 continue; 5670 5642 if (!strcmp(token, "nosocket")) 5671 5643 cgroup_memory_nosocket = true; 5644 + if (!strcmp(token, "nokmem")) 5645 + cgroup_memory_nokmem = true; 5672 5646 } 5673 5647 return 0; 5674 5648 } ··· 5760 5730 memcg_check_events(memcg, page); 5761 5731 } 5762 5732 5733 + /* 5734 + * 
mem_cgroup_try_charge_swap - try charging a swap entry 5735 + * @page: page being added to swap 5736 + * @entry: swap entry to charge 5737 + * 5738 + * Try to charge @entry to the memcg that @page belongs to. 5739 + * 5740 + * Returns 0 on success, -ENOMEM on failure. 5741 + */ 5742 + int mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry) 5743 + { 5744 + struct mem_cgroup *memcg; 5745 + struct page_counter *counter; 5746 + unsigned short oldid; 5747 + 5748 + if (!cgroup_subsys_on_dfl(memory_cgrp_subsys) || !do_swap_account) 5749 + return 0; 5750 + 5751 + memcg = page->mem_cgroup; 5752 + 5753 + /* Readahead page, never charged */ 5754 + if (!memcg) 5755 + return 0; 5756 + 5757 + if (!mem_cgroup_is_root(memcg) && 5758 + !page_counter_try_charge(&memcg->swap, 1, &counter)) 5759 + return -ENOMEM; 5760 + 5761 + oldid = swap_cgroup_record(entry, mem_cgroup_id(memcg)); 5762 + VM_BUG_ON_PAGE(oldid, page); 5763 + mem_cgroup_swap_statistics(memcg, true); 5764 + 5765 + css_get(&memcg->css); 5766 + return 0; 5767 + } 5768 + 5763 5769 /** 5764 5770 * mem_cgroup_uncharge_swap - uncharge a swap entry 5765 5771 * @entry: swap entry to uncharge 5766 5772 * 5767 - * Drop the memsw charge associated with @entry. 5773 + * Drop the swap charge associated with @entry. 
5768 5774 */ 5769 5775 void mem_cgroup_uncharge_swap(swp_entry_t entry) 5770 5776 { 5771 5777 struct mem_cgroup *memcg; 5772 5778 unsigned short id; 5773 5779 5774 - if (!do_memsw_account()) 5780 + if (!do_swap_account) 5775 5781 return; 5776 5782 5777 5783 id = swap_cgroup_record(entry, 0); 5778 5784 rcu_read_lock(); 5779 5785 memcg = mem_cgroup_from_id(id); 5780 5786 if (memcg) { 5781 - if (!mem_cgroup_is_root(memcg)) 5782 - page_counter_uncharge(&memcg->memsw, 1); 5787 + if (!mem_cgroup_is_root(memcg)) { 5788 + if (cgroup_subsys_on_dfl(memory_cgrp_subsys)) 5789 + page_counter_uncharge(&memcg->swap, 1); 5790 + else 5791 + page_counter_uncharge(&memcg->memsw, 1); 5792 + } 5783 5793 mem_cgroup_swap_statistics(memcg, false); 5784 5794 css_put(&memcg->css); 5785 5795 } 5786 5796 rcu_read_unlock(); 5797 + } 5798 + 5799 + long mem_cgroup_get_nr_swap_pages(struct mem_cgroup *memcg) 5800 + { 5801 + long nr_swap_pages = get_nr_swap_pages(); 5802 + 5803 + if (!do_swap_account || !cgroup_subsys_on_dfl(memory_cgrp_subsys)) 5804 + return nr_swap_pages; 5805 + for (; memcg != root_mem_cgroup; memcg = parent_mem_cgroup(memcg)) 5806 + nr_swap_pages = min_t(long, nr_swap_pages, 5807 + READ_ONCE(memcg->swap.limit) - 5808 + page_counter_read(&memcg->swap)); 5809 + return nr_swap_pages; 5810 + } 5811 + 5812 + bool mem_cgroup_swap_full(struct page *page) 5813 + { 5814 + struct mem_cgroup *memcg; 5815 + 5816 + VM_BUG_ON_PAGE(!PageLocked(page), page); 5817 + 5818 + if (vm_swap_full()) 5819 + return true; 5820 + if (!do_swap_account || !cgroup_subsys_on_dfl(memory_cgrp_subsys)) 5821 + return false; 5822 + 5823 + memcg = page->mem_cgroup; 5824 + if (!memcg) 5825 + return false; 5826 + 5827 + for (; memcg != root_mem_cgroup; memcg = parent_mem_cgroup(memcg)) 5828 + if (page_counter_read(&memcg->swap) * 2 >= memcg->swap.limit) 5829 + return true; 5830 + 5831 + return false; 5787 5832 } 5788 5833 5789 5834 /* for remember boot option*/ ··· 5877 5772 return 1; 5878 5773 } 5879 5774 
__setup("swapaccount=", enable_swap_account); 5775 + 5776 + static u64 swap_current_read(struct cgroup_subsys_state *css, 5777 + struct cftype *cft) 5778 + { 5779 + struct mem_cgroup *memcg = mem_cgroup_from_css(css); 5780 + 5781 + return (u64)page_counter_read(&memcg->swap) * PAGE_SIZE; 5782 + } 5783 + 5784 + static int swap_max_show(struct seq_file *m, void *v) 5785 + { 5786 + struct mem_cgroup *memcg = mem_cgroup_from_css(seq_css(m)); 5787 + unsigned long max = READ_ONCE(memcg->swap.limit); 5788 + 5789 + if (max == PAGE_COUNTER_MAX) 5790 + seq_puts(m, "max\n"); 5791 + else 5792 + seq_printf(m, "%llu\n", (u64)max * PAGE_SIZE); 5793 + 5794 + return 0; 5795 + } 5796 + 5797 + static ssize_t swap_max_write(struct kernfs_open_file *of, 5798 + char *buf, size_t nbytes, loff_t off) 5799 + { 5800 + struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of)); 5801 + unsigned long max; 5802 + int err; 5803 + 5804 + buf = strstrip(buf); 5805 + err = page_counter_memparse(buf, "max", &max); 5806 + if (err) 5807 + return err; 5808 + 5809 + mutex_lock(&memcg_limit_mutex); 5810 + err = page_counter_limit(&memcg->swap, max); 5811 + mutex_unlock(&memcg_limit_mutex); 5812 + if (err) 5813 + return err; 5814 + 5815 + return nbytes; 5816 + } 5817 + 5818 + static struct cftype swap_files[] = { 5819 + { 5820 + .name = "swap.current", 5821 + .flags = CFTYPE_NOT_ON_ROOT, 5822 + .read_u64 = swap_current_read, 5823 + }, 5824 + { 5825 + .name = "swap.max", 5826 + .flags = CFTYPE_NOT_ON_ROOT, 5827 + .seq_show = swap_max_show, 5828 + .write = swap_max_write, 5829 + }, 5830 + { } /* terminate */ 5831 + }; 5880 5832 5881 5833 static struct cftype memsw_cgroup_files[] = { 5882 5834 { ··· 5966 5804 { 5967 5805 if (!mem_cgroup_disabled() && really_do_swap_account) { 5968 5806 do_swap_account = 1; 5807 + WARN_ON(cgroup_add_dfl_cftypes(&memory_cgrp_subsys, 5808 + swap_files)); 5969 5809 WARN_ON(cgroup_add_legacy_cftypes(&memory_cgrp_subsys, 5970 5810 memsw_cgroup_files)); 5971 5811 }
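The memcontrol.c hunks above add `mem_cgroup_get_nr_swap_pages()`, which clamps the globally free swap by the remaining swap headroom at the cgroup and each of its ancestors. A minimal userspace sketch of that walk (struct and field names here are illustrative, not the kernel's):

```c
#include <limits.h>

/*
 * Sketch of the hierarchical swap-headroom walk: the swap visible to a
 * cgroup is the global free swap, reduced to (limit - usage) at every
 * level below the root. All names are illustrative.
 */
struct cg {
	struct cg *parent;	/* NULL for the root cgroup */
	long swap_limit;	/* in pages; "max" modeled as LONG_MAX */
	long swap_usage;	/* pages currently charged */
};

static long nr_swap_pages(const struct cg *cg, long global_free)
{
	long n = global_free;

	/* The root imposes no limit of its own, so stop below it. */
	for (; cg && cg->parent; cg = cg->parent) {
		long headroom = cg->swap_limit - cg->swap_usage;

		if (headroom < n)
			n = headroom;
	}
	return n;
}
```

As in the kernel version, the tightest ancestor limit wins: a child with a huge limit still sees only what its parent's `swap.max` leaves available.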
+2 -1
mm/memory.c
··· 2582 2582 	}
2583 2583 
2584 2584 	swap_free(entry);
2585 - 	if (vm_swap_full() || (vma->vm_flags & VM_LOCKED) || PageMlocked(page))
2585 + 	if (mem_cgroup_swap_full(page) ||
2586 + 		(vma->vm_flags & VM_LOCKED) || PageMlocked(page))
2586 2587 		try_to_free_swap(page);
2587 2588 	unlock_page(page);
2588 2589 	if (page != swapcache) {
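The hunk above swaps the global `vm_swap_full()` check for the memcg-aware `mem_cgroup_swap_full(page)`, whose definition in the memcontrol.c changes treats swap as scarce once `usage * 2 >= limit` at the page's cgroup or any ancestor. A toy userspace model of that half-of-limit heuristic (names are illustrative, not kernel API):

```c
/*
 * Sketch of the "swap is getting full" heuristic: full once usage
 * crosses half the configured limit anywhere up the hierarchy.
 */
struct memcg_sketch {
	struct memcg_sketch *parent;	/* NULL for root */
	unsigned long swap_usage;	/* pages */
	unsigned long swap_limit;	/* pages */
};

static int swap_full(const struct memcg_sketch *cg)
{
	/* The root has no limit of its own, so stop below it. */
	for (; cg && cg->parent; cg = cg->parent)
		if (cg->swap_usage * 2 >= cg->swap_limit)
			return 1;
	return 0;
}
```

The effect in do_swap_page() is that swap slots start being reclaimed aggressively well before the cgroup actually hits its swap limit.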
+1 -1
mm/process_vm_access.c
··· 194 194 		goto free_proc_pages;
195 195 	}
196 196 
197 - 	mm = mm_access(task, PTRACE_MODE_ATTACH);
197 + 	mm = mm_access(task, PTRACE_MODE_ATTACH_REALCREDS);
198 198 	if (!mm || IS_ERR(mm)) {
199 199 		rc = IS_ERR(mm) ? PTR_ERR(mm) : -ESRCH;
200 200 		/*
+4
mm/shmem.c
··· 912 912 	if (!swap.val)
913 913 		goto redirty;
914 914 
915 + 	if (mem_cgroup_try_charge_swap(page, swap))
916 + 		goto free_swap;
917 + 
915 918 	/*
916 919 	 * Add inode to shmem_unuse()'s list of swapped-out inodes,
917 920 	 * if it's not already there. Do it now before the page is
···
943 940 	}
944 941 
945 942 	mutex_unlock(&shmem_swaplist_mutex);
943 + free_swap:
946 944 	swapcache_free(swap);
947 945 redirty:
948 946 	set_page_dirty(page);
+3 -3
mm/slab.h
··· 173 173 void __kmem_cache_free_bulk(struct kmem_cache *, size_t, void **);
174 174 int __kmem_cache_alloc_bulk(struct kmem_cache *, gfp_t, size_t, void **);
175 175 
176 - #ifdef CONFIG_MEMCG_KMEM
176 + #if defined(CONFIG_MEMCG) && !defined(CONFIG_SLOB)
177 177 /*
178 178  * Iterate over all memcg caches of the given root cache. The caller must hold
179 179  * slab_mutex.
···
251 251 
252 252 extern void slab_init_memcg_params(struct kmem_cache *);
253 253 
254 - #else /* !CONFIG_MEMCG_KMEM */
254 + #else /* CONFIG_MEMCG && !CONFIG_SLOB */
255 255 
256 256 #define for_each_memcg_cache(iter, root) \
257 257 	for ((void)(iter), (void)(root); 0; )
···
292 292 static inline void slab_init_memcg_params(struct kmem_cache *s)
293 293 {
294 294 }
295 - #endif /* CONFIG_MEMCG_KMEM */
295 + #endif /* CONFIG_MEMCG && !CONFIG_SLOB */
296 296 
297 297 static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
298 298 {
+7 -7
mm/slab_common.c
··· 128 128 	return i;
129 129 }
130 130 
131 - #ifdef CONFIG_MEMCG_KMEM
131 + #if defined(CONFIG_MEMCG) && !defined(CONFIG_SLOB)
132 132 void slab_init_memcg_params(struct kmem_cache *s)
133 133 {
134 134 	s->memcg_params.is_root_cache = true;
···
221 221 static inline void destroy_memcg_params(struct kmem_cache *s)
222 222 {
223 223 }
224 - #endif /* CONFIG_MEMCG_KMEM */
224 + #endif /* CONFIG_MEMCG && !CONFIG_SLOB */
225 225 
226 226 /*
227 227  * Find a mergeable slab cache
···
477 477 	}
478 478 }
479 479 
480 - #ifdef CONFIG_MEMCG_KMEM
480 + #if defined(CONFIG_MEMCG) && !defined(CONFIG_SLOB)
481 481 /*
482 482  * memcg_create_kmem_cache - Create a cache for a memory cgroup.
483 483  * @memcg: The memory cgroup the new cache is for.
···
503 503 	mutex_lock(&slab_mutex);
504 504 
505 505 	/*
506 - 	 * The memory cgroup could have been deactivated while the cache
506 + 	 * The memory cgroup could have been offlined while the cache
507 507 	 * creation work was pending.
508 508 	 */
509 - 	if (!memcg_kmem_is_active(memcg))
509 + 	if (!memcg_kmem_online(memcg))
510 510 		goto out_unlock;
511 511 
512 512 	idx = memcg_cache_id(memcg);
···
689 689 {
690 690 	return 0;
691 691 }
692 - #endif /* CONFIG_MEMCG_KMEM */
692 + #endif /* CONFIG_MEMCG && !CONFIG_SLOB */
693 693 
694 694 void slab_kmem_cache_release(struct kmem_cache *s)
695 695 {
···
1123 1123 	return 0;
1124 1124 }
1125 1125 
1126 - #ifdef CONFIG_MEMCG_KMEM
1126 + #if defined(CONFIG_MEMCG) && !defined(CONFIG_SLOB)
1127 1127 int memcg_slab_show(struct seq_file *m, void *p)
1128 1128 {
1129 1129 	struct kmem_cache *s = list_entry(p, struct kmem_cache, list);
+5 -5
mm/slub.c
··· 5207 5207 		return -EIO;
5208 5208 
5209 5209 	err = attribute->store(s, buf, len);
5210 - #ifdef CONFIG_MEMCG_KMEM
5210 + #ifdef CONFIG_MEMCG
5211 5211 	if (slab_state >= FULL && err >= 0 && is_root_cache(s)) {
5212 5212 		struct kmem_cache *c;
5213 5213 
···
5242 5242 
5243 5243 static void memcg_propagate_slab_attrs(struct kmem_cache *s)
5244 5244 {
5245 - #ifdef CONFIG_MEMCG_KMEM
5245 + #ifdef CONFIG_MEMCG
5246 5246 	int i;
5247 5247 	char *buffer = NULL;
5248 5248 	struct kmem_cache *root_cache;
···
5328 5328 
5329 5329 static inline struct kset *cache_kset(struct kmem_cache *s)
5330 5330 {
5331 - #ifdef CONFIG_MEMCG_KMEM
5331 + #ifdef CONFIG_MEMCG
5332 5332 	if (!is_root_cache(s))
5333 5333 		return s->memcg_params.root_cache->memcg_kset;
5334 5334 #endif
···
5405 5405 	if (err)
5406 5406 		goto out_del_kobj;
5407 5407 
5408 - #ifdef CONFIG_MEMCG_KMEM
5408 + #ifdef CONFIG_MEMCG
5409 5409 	if (is_root_cache(s)) {
5410 5410 		s->memcg_kset = kset_create_and_add("cgroup", NULL, &s->kobj);
5411 5411 		if (!s->memcg_kset) {
···
5438 5438 	 */
5439 5439 		return;
5440 5440 
5441 - #ifdef CONFIG_MEMCG_KMEM
5441 + #ifdef CONFIG_MEMCG
5442 5442 	kset_unregister(s->memcg_kset);
5443 5443 #endif
5444 5444 	kobject_uevent(&s->kobj, KOBJ_REMOVE);
+5
mm/swap_state.c
··· 170 170 if (!entry.val) 171 171 return 0; 172 172 173 + if (mem_cgroup_try_charge_swap(page, entry)) { 174 + swapcache_free(entry); 175 + return 0; 176 + } 177 + 173 178 if (unlikely(PageTransHuge(page))) 174 179 if (unlikely(split_huge_page_to_list(page, list))) { 175 180 swapcache_free(entry);
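The new swap_state.c hunk charges the swap entry to the memcg up front and gives the entry back with `swapcache_free()` if the charge is refused, so no later path ever sees a charged-but-unused entry. A hedged sketch of that charge-or-unwind ordering (helpers and the limit value are hypothetical):

```c
#include <assert.h>
#include <stdbool.h>

static int charged;            /* entries currently charged */
static int charge_limit = 2;   /* pretend memcg swap limit  */

static bool try_charge(void)
{
        if (charged >= charge_limit)
                return false;   /* over limit: charge refused */
        charged++;
        return true;
}

static void uncharge(void) { charged--; }

/* Returns 1 on success, 0 if the charge was refused or had to be undone. */
static int add_to_swap(bool later_step_fails)
{
        if (!try_charge())
                return 0;       /* failed early: nothing to unwind */
        if (later_step_fails) { /* e.g. splitting a huge page fails */
                uncharge();     /* undo the charge before bailing */
                return 0;
        }
        return 1;
}
```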
+2 -4
mm/swapfile.c
··· 785 785 count--; 786 786 } 787 787 788 - if (!count) 789 - mem_cgroup_uncharge_swap(entry); 790 - 791 788 usage = count | has_cache; 792 789 p->swap_map[offset] = usage; 793 790 794 791 /* free if no reference */ 795 792 if (!usage) { 793 + mem_cgroup_uncharge_swap(entry); 796 794 dec_cluster_info_page(p, p->cluster_info, offset); 797 795 if (offset < p->lowest_bit) 798 796 p->lowest_bit = offset; ··· 1006 1008 * Also recheck PageSwapCache now page is locked (above). 1007 1009 */ 1008 1010 if (PageSwapCache(page) && !PageWriteback(page) && 1009 - (!page_mapped(page) || vm_swap_full())) { 1011 + (!page_mapped(page) || mem_cgroup_swap_full(page))) { 1010 1012 delete_from_swap_cache(page); 1011 1013 SetPageDirty(page); 1012 1014 }
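The first swapfile.c hunk moves `mem_cgroup_uncharge_swap()` inside the `if (!usage)` block, so the charge drops only when the last use of the entry (map count *and* swap-cache flag) is gone, not merely when the map count hits zero. Release-on-last-reference, sketched with hypothetical types:

```c
#include <assert.h>

struct entry {
        int count;      /* map references */
        int has_cache;  /* swap-cache still holds the entry */
};

/* Returns 1 only when this put dropped the very last use. */
static int put_entry(struct entry *e)
{
        if (e->count)
                e->count--;
        /* Uncharge belongs here, after *all* uses are gone. */
        return !e->count && !e->has_cache;
}
```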
+12 -4
mm/util.c
··· 476 476 int res = 0; 477 477 unsigned int len; 478 478 struct mm_struct *mm = get_task_mm(task); 479 + unsigned long arg_start, arg_end, env_start, env_end; 479 480 if (!mm) 480 481 goto out; 481 482 if (!mm->arg_end) 482 483 goto out_mm; /* Shh! No looking before we're done */ 483 484 484 - len = mm->arg_end - mm->arg_start; 485 + down_read(&mm->mmap_sem); 486 + arg_start = mm->arg_start; 487 + arg_end = mm->arg_end; 488 + env_start = mm->env_start; 489 + env_end = mm->env_end; 490 + up_read(&mm->mmap_sem); 491 + 492 + len = arg_end - arg_start; 485 493 486 494 if (len > buflen) 487 495 len = buflen; 488 496 489 - res = access_process_vm(task, mm->arg_start, buffer, len, 0); 497 + res = access_process_vm(task, arg_start, buffer, len, 0); 490 498 491 499 /* 492 500 * If the nul at the end of args has been overwritten, then ··· 505 497 if (len < res) { 506 498 res = len; 507 499 } else { 508 - len = mm->env_end - mm->env_start; 500 + len = env_end - env_start; 509 501 if (len > buflen - res) 510 502 len = buflen - res; 511 - res += access_process_vm(task, mm->env_start, 503 + res += access_process_vm(task, env_start, 512 504 buffer+res, len, 0); 513 505 res = strnlen(buffer, res); 514 506 }
+12 -16
mm/vmscan.c
··· 411 411 struct shrinker *shrinker; 412 412 unsigned long freed = 0; 413 413 414 - if (memcg && !memcg_kmem_is_active(memcg)) 414 + if (memcg && !memcg_kmem_online(memcg)) 415 415 return 0; 416 416 417 417 if (nr_scanned == 0) ··· 1214 1214 1215 1215 activate_locked: 1216 1216 /* Not a candidate for swapping, so reclaim swap space. */ 1217 - if (PageSwapCache(page) && vm_swap_full()) 1217 + if (PageSwapCache(page) && mem_cgroup_swap_full(page)) 1218 1218 try_to_free_swap(page); 1219 1219 VM_BUG_ON_PAGE(PageActive(page), page); 1220 1220 SetPageActive(page); ··· 1966 1966 * nr[0] = anon inactive pages to scan; nr[1] = anon active pages to scan 1967 1967 * nr[2] = file inactive pages to scan; nr[3] = file active pages to scan 1968 1968 */ 1969 - static void get_scan_count(struct lruvec *lruvec, int swappiness, 1969 + static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg, 1970 1970 struct scan_control *sc, unsigned long *nr, 1971 1971 unsigned long *lru_pages) 1972 1972 { 1973 + int swappiness = mem_cgroup_swappiness(memcg); 1973 1974 struct zone_reclaim_stat *reclaim_stat = &lruvec->reclaim_stat; 1974 1975 u64 fraction[2]; 1975 1976 u64 denominator = 0; /* gcc */ ··· 1997 1996 if (current_is_kswapd()) { 1998 1997 if (!zone_reclaimable(zone)) 1999 1998 force_scan = true; 2000 - if (!mem_cgroup_lruvec_online(lruvec)) 1999 + if (!mem_cgroup_online(memcg)) 2001 2000 force_scan = true; 2002 2001 } 2003 2002 if (!global_reclaim(sc)) 2004 2003 force_scan = true; 2005 2004 2006 2005 /* If we have no swap space, do not bother scanning anon pages. */ 2007 - if (!sc->may_swap || (get_nr_swap_pages() <= 0)) { 2006 + if (!sc->may_swap || mem_cgroup_get_nr_swap_pages(memcg) <= 0) { 2008 2007 scan_balance = SCAN_FILE; 2009 2008 goto out; 2010 2009 } ··· 2194 2193 /* 2195 2194 * This is a basic per-zone page freer. Used by both kswapd and direct reclaim. 
2196 2195 */ 2197 - static void shrink_lruvec(struct lruvec *lruvec, int swappiness, 2198 - struct scan_control *sc, unsigned long *lru_pages) 2196 + static void shrink_zone_memcg(struct zone *zone, struct mem_cgroup *memcg, 2197 + struct scan_control *sc, unsigned long *lru_pages) 2199 2198 { 2199 + struct lruvec *lruvec = mem_cgroup_zone_lruvec(zone, memcg); 2200 2200 unsigned long nr[NR_LRU_LISTS]; 2201 2201 unsigned long targets[NR_LRU_LISTS]; 2202 2202 unsigned long nr_to_scan; ··· 2207 2205 struct blk_plug plug; 2208 2206 bool scan_adjusted; 2209 2207 2210 - get_scan_count(lruvec, swappiness, sc, nr, lru_pages); 2208 + get_scan_count(lruvec, memcg, sc, nr, lru_pages); 2211 2209 2212 2210 /* Record the original scan target for proportional adjustments later */ 2213 2211 memcpy(targets, nr, sizeof(nr)); ··· 2411 2409 unsigned long lru_pages; 2412 2410 unsigned long reclaimed; 2413 2411 unsigned long scanned; 2414 - struct lruvec *lruvec; 2415 - int swappiness; 2416 2412 2417 2413 if (mem_cgroup_low(root, memcg)) { 2418 2414 if (!sc->may_thrash) ··· 2418 2418 mem_cgroup_events(memcg, MEMCG_LOW, 1); 2419 2419 } 2420 2420 2421 - lruvec = mem_cgroup_zone_lruvec(zone, memcg); 2422 - swappiness = mem_cgroup_swappiness(memcg); 2423 2421 reclaimed = sc->nr_reclaimed; 2424 2422 scanned = sc->nr_scanned; 2425 2423 2426 - shrink_lruvec(lruvec, swappiness, sc, &lru_pages); 2424 + shrink_zone_memcg(zone, memcg, sc, &lru_pages); 2427 2425 zone_lru_pages += lru_pages; 2428 2426 2429 2427 if (memcg && is_classzone) ··· 2891 2893 .may_unmap = 1, 2892 2894 .may_swap = !noswap, 2893 2895 }; 2894 - struct lruvec *lruvec = mem_cgroup_zone_lruvec(zone, memcg); 2895 - int swappiness = mem_cgroup_swappiness(memcg); 2896 2896 unsigned long lru_pages; 2897 2897 2898 2898 sc.gfp_mask = (gfp_mask & GFP_RECLAIM_MASK) | ··· 2907 2911 * will pick up pages from other mem cgroup's as well. We hack 2908 2912 * the priority and make it zero. 
2909 2913 */ 2910 - shrink_lruvec(lruvec, swappiness, &sc, &lru_pages); 2914 + shrink_zone_memcg(zone, memcg, &sc, &lru_pages); 2911 2915 2912 2916 trace_mm_vmscan_memcg_softlimit_reclaim_end(sc.nr_reclaimed); 2913 2917
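Across the vmscan.c hunks, shrink_lruvec() becomes shrink_zone_memcg(): callers pass the memcg itself and the function derives the lruvec and swappiness internally, so the two call sites stop duplicating those lookups. The interface narrowing, sketched with hypothetical types:

```c
#include <assert.h>

struct memcg { int swappiness; int lruvec_id; };

static int memcg_swappiness(const struct memcg *m) { return m->swappiness; }
static int memcg_lruvec(const struct memcg *m)     { return m->lruvec_id; }

/* After: one high-level argument; derived values live where they're used. */
static int shrink_zone_memcg(const struct memcg *m)
{
        int swappiness = memcg_swappiness(m);
        int lruvec = memcg_lruvec(m);

        return lruvec + swappiness;   /* stand-in for the real scan */
}
```

Passing the higher-level handle also lets get_scan_count() consult the memcg directly (for `mem_cgroup_online()` and the per-memcg swap pages), which the old lruvec-only signature could not express.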
+13 -1
mm/zsmalloc.c
··· 309 309 310 310 static void record_obj(unsigned long handle, unsigned long obj) 311 311 { 312 - *(unsigned long *)handle = obj; 312 + /* 313 + * lsb of @obj represents handle lock while other bits 314 + * represent object value the handle is pointing so 315 + * updating shouldn't do store tearing. 316 + */ 317 + WRITE_ONCE(*(unsigned long *)handle, obj); 313 318 } 314 319 315 320 /* zpool driver */ ··· 1640 1635 free_obj = obj_malloc(d_page, class, handle); 1641 1636 zs_object_copy(free_obj, used_obj, class); 1642 1637 index++; 1638 + /* 1639 + * record_obj updates handle's value to free_obj and it will 1640 + * invalidate lock bit(ie, HANDLE_PIN_BIT) of handle, which 1641 + * breaks synchronization using pin_tag(e,g, zs_free) so 1642 + * let's keep the lock bit. 1643 + */ 1644 + free_obj |= BIT(HANDLE_PIN_BIT); 1643 1645 record_obj(handle, free_obj); 1644 1646 unpin_tag(handle); 1645 1647 obj_free(pool, class, used_obj);
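In the zsmalloc.c hunks, record_obj() now publishes the handle value with WRITE_ONCE() so the store cannot tear, and the migration path ORs `BIT(HANDLE_PIN_BIT)` back in so zs_free() still sees the handle as pinned. A userspace sketch of carrying a lock bit in the low bit across an update (a volatile store stands in for WRITE_ONCE; bit layout hypothetical):

```c
#include <assert.h>

#define PIN_BIT 0UL
#define BIT(n)  (1UL << (n))

/* Kernel uses WRITE_ONCE(); a volatile store is the userspace analogue. */
static void record_obj(unsigned long *handle, unsigned long obj)
{
        *(volatile unsigned long *)handle = obj;
}

/* Build the new value with the caller-held pin bit preserved. */
static unsigned long repin(unsigned long old, unsigned long new_obj)
{
        return new_obj | (old & BIT(PIN_BIT));
}
```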
-1
net/ipv4/Makefile
··· 56 56 obj-$(CONFIG_TCP_CONG_LP) += tcp_lp.o 57 57 obj-$(CONFIG_TCP_CONG_YEAH) += tcp_yeah.o 58 58 obj-$(CONFIG_TCP_CONG_ILLINOIS) += tcp_illinois.o 59 - obj-$(CONFIG_MEMCG_KMEM) += tcp_memcontrol.o 60 59 obj-$(CONFIG_NETLABEL) += cipso_ipv4.o 61 60 62 61 obj-$(CONFIG_XFRM) += xfrm4_policy.o xfrm4_state.o xfrm4_input.o \
-1
net/ipv4/sysctl_net_ipv4.c
··· 24 24 #include <net/cipso_ipv4.h> 25 25 #include <net/inet_frag.h> 26 26 #include <net/ping.h> 27 - #include <net/tcp_memcontrol.h> 28 27 29 28 static int zero; 30 29 static int one = 1;
-1
net/ipv4/tcp_ipv4.c
··· 73 73 #include <net/timewait_sock.h> 74 74 #include <net/xfrm.h> 75 75 #include <net/secure_seq.h> 76 - #include <net/tcp_memcontrol.h> 77 76 #include <net/busy_poll.h> 78 77 79 78 #include <linux/inet.h>
-200
net/ipv4/tcp_memcontrol.c
··· 1 - #include <net/tcp.h> 2 - #include <net/tcp_memcontrol.h> 3 - #include <net/sock.h> 4 - #include <net/ip.h> 5 - #include <linux/nsproxy.h> 6 - #include <linux/memcontrol.h> 7 - #include <linux/module.h> 8 - 9 - int tcp_init_cgroup(struct mem_cgroup *memcg, struct cgroup_subsys *ss) 10 - { 11 - struct mem_cgroup *parent = parent_mem_cgroup(memcg); 12 - struct page_counter *counter_parent = NULL; 13 - /* 14 - * The root cgroup does not use page_counters, but rather, 15 - * rely on the data already collected by the network 16 - * subsystem 17 - */ 18 - if (memcg == root_mem_cgroup) 19 - return 0; 20 - 21 - memcg->tcp_mem.memory_pressure = 0; 22 - 23 - if (parent) 24 - counter_parent = &parent->tcp_mem.memory_allocated; 25 - 26 - page_counter_init(&memcg->tcp_mem.memory_allocated, counter_parent); 27 - 28 - return 0; 29 - } 30 - 31 - void tcp_destroy_cgroup(struct mem_cgroup *memcg) 32 - { 33 - if (memcg == root_mem_cgroup) 34 - return; 35 - 36 - if (memcg->tcp_mem.active) 37 - static_branch_dec(&memcg_sockets_enabled_key); 38 - } 39 - 40 - static int tcp_update_limit(struct mem_cgroup *memcg, unsigned long nr_pages) 41 - { 42 - int ret; 43 - 44 - if (memcg == root_mem_cgroup) 45 - return -EINVAL; 46 - 47 - ret = page_counter_limit(&memcg->tcp_mem.memory_allocated, nr_pages); 48 - if (ret) 49 - return ret; 50 - 51 - if (!memcg->tcp_mem.active) { 52 - /* 53 - * The active flag needs to be written after the static_key 54 - * update. This is what guarantees that the socket activation 55 - * function is the last one to run. See sock_update_memcg() for 56 - * details, and note that we don't mark any socket as belonging 57 - * to this memcg until that flag is up. 58 - * 59 - * We need to do this, because static_keys will span multiple 60 - * sites, but we can't control their order. If we mark a socket 61 - * as accounted, but the accounting functions are not patched in 62 - * yet, we'll lose accounting. 
63 - * 64 - * We never race with the readers in sock_update_memcg(), 65 - * because when this value change, the code to process it is not 66 - * patched in yet. 67 - */ 68 - static_branch_inc(&memcg_sockets_enabled_key); 69 - memcg->tcp_mem.active = true; 70 - } 71 - 72 - return 0; 73 - } 74 - 75 - enum { 76 - RES_USAGE, 77 - RES_LIMIT, 78 - RES_MAX_USAGE, 79 - RES_FAILCNT, 80 - }; 81 - 82 - static DEFINE_MUTEX(tcp_limit_mutex); 83 - 84 - static ssize_t tcp_cgroup_write(struct kernfs_open_file *of, 85 - char *buf, size_t nbytes, loff_t off) 86 - { 87 - struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of)); 88 - unsigned long nr_pages; 89 - int ret = 0; 90 - 91 - buf = strstrip(buf); 92 - 93 - switch (of_cft(of)->private) { 94 - case RES_LIMIT: 95 - /* see memcontrol.c */ 96 - ret = page_counter_memparse(buf, "-1", &nr_pages); 97 - if (ret) 98 - break; 99 - mutex_lock(&tcp_limit_mutex); 100 - ret = tcp_update_limit(memcg, nr_pages); 101 - mutex_unlock(&tcp_limit_mutex); 102 - break; 103 - default: 104 - ret = -EINVAL; 105 - break; 106 - } 107 - return ret ?: nbytes; 108 - } 109 - 110 - static u64 tcp_cgroup_read(struct cgroup_subsys_state *css, struct cftype *cft) 111 - { 112 - struct mem_cgroup *memcg = mem_cgroup_from_css(css); 113 - u64 val; 114 - 115 - switch (cft->private) { 116 - case RES_LIMIT: 117 - if (memcg == root_mem_cgroup) 118 - val = PAGE_COUNTER_MAX; 119 - else 120 - val = memcg->tcp_mem.memory_allocated.limit; 121 - val *= PAGE_SIZE; 122 - break; 123 - case RES_USAGE: 124 - if (memcg == root_mem_cgroup) 125 - val = atomic_long_read(&tcp_memory_allocated); 126 - else 127 - val = page_counter_read(&memcg->tcp_mem.memory_allocated); 128 - val *= PAGE_SIZE; 129 - break; 130 - case RES_FAILCNT: 131 - if (memcg == root_mem_cgroup) 132 - return 0; 133 - val = memcg->tcp_mem.memory_allocated.failcnt; 134 - break; 135 - case RES_MAX_USAGE: 136 - if (memcg == root_mem_cgroup) 137 - return 0; 138 - val = memcg->tcp_mem.memory_allocated.watermark; 139 - 
val *= PAGE_SIZE; 140 - break; 141 - default: 142 - BUG(); 143 - } 144 - return val; 145 - } 146 - 147 - static ssize_t tcp_cgroup_reset(struct kernfs_open_file *of, 148 - char *buf, size_t nbytes, loff_t off) 149 - { 150 - struct mem_cgroup *memcg; 151 - 152 - memcg = mem_cgroup_from_css(of_css(of)); 153 - if (memcg == root_mem_cgroup) 154 - return nbytes; 155 - 156 - switch (of_cft(of)->private) { 157 - case RES_MAX_USAGE: 158 - page_counter_reset_watermark(&memcg->tcp_mem.memory_allocated); 159 - break; 160 - case RES_FAILCNT: 161 - memcg->tcp_mem.memory_allocated.failcnt = 0; 162 - break; 163 - } 164 - 165 - return nbytes; 166 - } 167 - 168 - static struct cftype tcp_files[] = { 169 - { 170 - .name = "kmem.tcp.limit_in_bytes", 171 - .write = tcp_cgroup_write, 172 - .read_u64 = tcp_cgroup_read, 173 - .private = RES_LIMIT, 174 - }, 175 - { 176 - .name = "kmem.tcp.usage_in_bytes", 177 - .read_u64 = tcp_cgroup_read, 178 - .private = RES_USAGE, 179 - }, 180 - { 181 - .name = "kmem.tcp.failcnt", 182 - .private = RES_FAILCNT, 183 - .write = tcp_cgroup_reset, 184 - .read_u64 = tcp_cgroup_read, 185 - }, 186 - { 187 - .name = "kmem.tcp.max_usage_in_bytes", 188 - .private = RES_MAX_USAGE, 189 - .write = tcp_cgroup_reset, 190 - .read_u64 = tcp_cgroup_read, 191 - }, 192 - { } /* terminate */ 193 - }; 194 - 195 - static int __init tcp_memcontrol_init(void) 196 - { 197 - WARN_ON(cgroup_add_legacy_cftypes(&memory_cgrp_subsys, tcp_files)); 198 - return 0; 199 - } 200 - __initcall(tcp_memcontrol_init);
-1
net/ipv6/tcp_ipv6.c
··· 61 61 #include <net/timewait_sock.h> 62 62 #include <net/inet_common.h> 63 63 #include <net/secure_seq.h> 64 - #include <net/tcp_memcontrol.h> 65 64 #include <net/busy_poll.h> 66 65 67 66 #include <linux/proc_fs.h>
+2 -5
net/mac80211/debugfs.c
··· 91 91 }; 92 92 #endif 93 93 94 - static const char *hw_flag_names[NUM_IEEE80211_HW_FLAGS + 1] = { 94 + static const char *hw_flag_names[] = { 95 95 #define FLAG(F) [IEEE80211_HW_##F] = #F 96 96 FLAG(HAS_RATE_CONTROL), 97 97 FLAG(RX_INCLUDES_FCS), ··· 126 126 FLAG(SUPPORTS_AMSDU_IN_AMPDU), 127 127 FLAG(BEACON_TX_STATUS), 128 128 FLAG(NEEDS_UNIQUE_STA_ADDR), 129 - 130 - /* keep last for the build bug below */ 131 - (void *)0x1 132 129 #undef FLAG 133 130 }; 134 131 ··· 145 148 /* fail compilation if somebody adds or removes 146 149 * a flag without updating the name array above 147 150 */ 148 - BUILD_BUG_ON(hw_flag_names[NUM_IEEE80211_HW_FLAGS] != (void *)0x1); 151 + BUILD_BUG_ON(ARRAY_SIZE(hw_flag_names) != NUM_IEEE80211_HW_FLAGS); 149 152 150 153 for (i = 0; i < NUM_IEEE80211_HW_FLAGS; i++) { 151 154 if (test_bit(i, local->hw.flags))
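The mac80211 hunk drops the `(void *)0x1` sentinel and instead sizes `hw_flag_names[]` implicitly, verifying `ARRAY_SIZE(hw_flag_names) != NUM_IEEE80211_HW_FLAGS` with BUILD_BUG_ON so a flag added without a name breaks the build. The same idiom in userspace C11, with `_Static_assert` in place of BUILD_BUG_ON (enum and names hypothetical):

```c
#include <assert.h>
#include <string.h>

enum hw_flag { FLAG_A, FLAG_B, FLAG_C, NUM_HW_FLAGS };

static const char *flag_names[] = {
        [FLAG_A] = "A",
        [FLAG_B] = "B",
        [FLAG_C] = "C",
};

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

/* Build breaks here if the enum and the table ever drift apart. */
_Static_assert(ARRAY_SIZE(flag_names) == NUM_HW_FLAGS,
               "flag_names out of sync with enum hw_flag");

static const char *flag_name(enum hw_flag f) { return flag_names[f]; }
```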
+6
scripts/Makefile.lib
··· 130 130 $(CFLAGS_KASAN)) 131 131 endif 132 132 133 + ifeq ($(CONFIG_UBSAN),y) 134 + _c_flags += $(if $(patsubst n%,, \ 135 + $(UBSAN_SANITIZE_$(basetarget).o)$(UBSAN_SANITIZE)$(CONFIG_UBSAN_SANITIZE_ALL)), \ 136 + $(CFLAGS_UBSAN)) 137 + endif 138 + 133 139 # If building the kernel in a separate objtree expand all occurrences 134 140 # of -Idir to -I$(srctree)/dir except for absolute paths (starting with '/'). 135 141
+17
scripts/Makefile.ubsan
··· 1 + ifdef CONFIG_UBSAN 2 + CFLAGS_UBSAN += $(call cc-option, -fsanitize=shift) 3 + CFLAGS_UBSAN += $(call cc-option, -fsanitize=integer-divide-by-zero) 4 + CFLAGS_UBSAN += $(call cc-option, -fsanitize=unreachable) 5 + CFLAGS_UBSAN += $(call cc-option, -fsanitize=vla-bound) 6 + CFLAGS_UBSAN += $(call cc-option, -fsanitize=null) 7 + CFLAGS_UBSAN += $(call cc-option, -fsanitize=signed-integer-overflow) 8 + CFLAGS_UBSAN += $(call cc-option, -fsanitize=bounds) 9 + CFLAGS_UBSAN += $(call cc-option, -fsanitize=object-size) 10 + CFLAGS_UBSAN += $(call cc-option, -fsanitize=returns-nonnull-attribute) 11 + CFLAGS_UBSAN += $(call cc-option, -fsanitize=bool) 12 + CFLAGS_UBSAN += $(call cc-option, -fsanitize=enum) 13 + 14 + ifdef CONFIG_UBSAN_ALIGNMENT 15 + CFLAGS_UBSAN += $(call cc-option, -fsanitize=alignment) 16 + endif 17 + endif
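Makefile.ubsan probes each `-fsanitize=` flag with `cc-option` so unsupported options are silently dropped on older compilers. The overflow class those flags trap at runtime can also be detected explicitly with a GCC/Clang builtin; a hedged userspace sketch, not part of the patch:

```c
#include <limits.h>
#include <stdbool.h>

/*
 * Detect what -fsanitize=signed-integer-overflow would trap, without
 * invoking undefined behavior: the builtin reports overflow instead.
 */
static bool add_overflows(int a, int b, int *sum)
{
        return __builtin_add_overflow(a, b, sum);
}
```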
+46 -3
scripts/checkpatch.pl
··· 433 433 qr{${Ident}_handler_fn}, 434 434 @typeListMisordered, 435 435 ); 436 + 437 + our $C90_int_types = qr{(?x: 438 + long\s+long\s+int\s+(?:un)?signed| 439 + long\s+long\s+(?:un)?signed\s+int| 440 + long\s+long\s+(?:un)?signed| 441 + (?:(?:un)?signed\s+)?long\s+long\s+int| 442 + (?:(?:un)?signed\s+)?long\s+long| 443 + int\s+long\s+long\s+(?:un)?signed| 444 + int\s+(?:(?:un)?signed\s+)?long\s+long| 445 + 446 + long\s+int\s+(?:un)?signed| 447 + long\s+(?:un)?signed\s+int| 448 + long\s+(?:un)?signed| 449 + (?:(?:un)?signed\s+)?long\s+int| 450 + (?:(?:un)?signed\s+)?long| 451 + int\s+long\s+(?:un)?signed| 452 + int\s+(?:(?:un)?signed\s+)?long| 453 + 454 + int\s+(?:un)?signed| 455 + (?:(?:un)?signed\s+)?int 456 + )}; 457 + 436 458 our @typeListFile = (); 437 459 our @typeListWithAttr = ( 438 460 @typeList, ··· 4539 4517 #print "LINE<$lines[$ln-1]> len<" . length($lines[$ln-1]) . "\n"; 4540 4518 4541 4519 $has_flow_statement = 1 if ($ctx =~ /\b(goto|return)\b/); 4542 - $has_arg_concat = 1 if ($ctx =~ /\#\#/); 4520 + $has_arg_concat = 1 if ($ctx =~ /\#\#/ && $ctx !~ /\#\#\s*(?:__VA_ARGS__|args)\b/); 4543 4521 4544 4522 $dstat =~ s/^.\s*\#\s*define\s+$Ident(?:\([^\)]*\))?\s*//; 4545 4523 $dstat =~ s/$;//g; ··· 4550 4528 # Flatten any parentheses and braces 4551 4529 while ($dstat =~ s/\([^\(\)]*\)/1/ || 4552 4530 $dstat =~ s/\{[^\{\}]*\}/1/ || 4553 - $dstat =~ s/\[[^\[\]]*\]/1/) 4531 + $dstat =~ s/.\[[^\[\]]*\]/1/) 4554 4532 { 4555 4533 } 4556 4534 ··· 4570 4548 union| 4571 4549 struct| 4572 4550 \.$Ident\s*=\s*| 4573 - ^\"|\"$ 4551 + ^\"|\"$| 4552 + ^\[ 4574 4553 }x; 4575 4554 #print "REST<$rest> dstat<$dstat> ctx<$ctx>\n"; 4576 4555 if ($dstat ne '' && ··· 5292 5269 $fix) { 5293 5270 $fixed[$fixlinenr] =~ s/\b$type\b/$kernel_type/; 5294 5271 } 5272 + } 5273 + } 5274 + 5275 + # check for cast of C90 native int or longer types constants 5276 + if ($line =~ /(\(\s*$C90_int_types\s*\)\s*)($Constant)\b/) { 5277 + my $cast = $1; 5278 + my $const = $2; 5279 + if 
(WARN("TYPECAST_INT_CONSTANT", 5280 + "Unnecessary typecast of c90 int constant\n" . $herecurr) && 5281 + $fix) { 5282 + my $suffix = ""; 5283 + my $newconst = $const; 5284 + $newconst =~ s/${Int_type}$//; 5285 + $suffix .= 'U' if ($cast =~ /\bunsigned\b/); 5286 + if ($cast =~ /\blong\s+long\b/) { 5287 + $suffix .= 'LL'; 5288 + } elsif ($cast =~ /\blong\b/) { 5289 + $suffix .= 'L'; 5290 + } 5291 + $fixed[$fixlinenr] =~ s/\Q$cast\E$const\b/$newconst$suffix/; 5295 5292 } 5296 5293 } 5297 5294
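The new TYPECAST_INT_CONSTANT check in checkpatch rewrites casts of integer constants into suffixed constants, e.g. `(unsigned long)1` into `1UL`, appending `U`/`L`/`LL` as the cast dictates. The two spellings are equivalent in both type and value, as a small C check shows:

```c
/* A cast constant and the suffixed spelling behave identically. */
static unsigned long via_cast(void)   { return (unsigned long)1; }
static unsigned long via_suffix(void) { return 1UL; }
```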
+4
scripts/get_maintainer.pl
··· 16 16 my $V = '0.26'; 17 17 18 18 use Getopt::Long qw(:config no_auto_abbrev); 19 + use Cwd; 19 20 21 + my $cur_path = fastgetcwd() . '/'; 20 22 my $lk_path = "./"; 21 23 my $email = 1; 22 24 my $email_usename = 1; ··· 431 429 } 432 430 } 433 431 if ($from_filename) { 432 + $file =~ s/^\Q${cur_path}\E//; #strip any absolute path 433 + $file =~ s/^\Q${lk_path}\E//; #or the path to the lk tree 434 434 push(@files, $file); 435 435 if ($file ne "MAINTAINERS" && -f $file && ($keywords || $file_emails)) { 436 436 open(my $f, '<', $file)
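get_maintainer.pl now strips the current working directory (or the `./` tree path) prefix from file arguments, so absolute paths still match the relative file patterns in MAINTAINERS. The prefix strip itself, sketched in C rather than the script's Perl (paths hypothetical):

```c
#include <string.h>

/* Return path with a known directory prefix removed, if present. */
static const char *strip_prefix(const char *path, const char *prefix)
{
        size_t n = strlen(prefix);

        return strncmp(path, prefix, n) == 0 ? path + n : path;
}
```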
+6 -1
security/commoncap.c
··· 137 137 { 138 138 int ret = 0; 139 139 const struct cred *cred, *child_cred; 140 + const kernel_cap_t *caller_caps; 140 141 141 142 rcu_read_lock(); 142 143 cred = current_cred(); 143 144 child_cred = __task_cred(child); 145 + if (mode & PTRACE_MODE_FSCREDS) 146 + caller_caps = &cred->cap_effective; 147 + else 148 + caller_caps = &cred->cap_permitted; 144 149 if (cred->user_ns == child_cred->user_ns && 145 - cap_issubset(child_cred->cap_permitted, cred->cap_permitted)) 150 + cap_issubset(child_cred->cap_permitted, *caller_caps)) 146 151 goto out; 147 152 if (ns_capable(child_cred->user_ns, CAP_SYS_PTRACE)) 148 153 goto out;
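The commoncap.c change makes the capability check mode-aware: with PTRACE_MODE_FSCREDS the child's permitted set is compared against the caller's *effective* capabilities, otherwise against the caller's permitted set as before. Selecting the comparison set from the mode bits, sketched with plain bitmasks (flag value and types hypothetical):

```c
#include <stdbool.h>

#define MODE_FSCREDS 0x1

struct cred { unsigned permitted, effective; };

static bool subset(unsigned a, unsigned b) { return (a & ~b) == 0; }

/* Pick which of the tracer's cap sets must cover the child's caps. */
static bool may_ptrace(const struct cred *tracer, const struct cred *child,
                       unsigned mode)
{
        unsigned caller = (mode & MODE_FSCREDS) ? tracer->effective
                                                : tracer->permitted;

        return subset(child->permitted, caller);
}
```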
+3 -5
security/smack/smack_lsm.c
··· 398 398 */ 399 399 static inline unsigned int smk_ptrace_mode(unsigned int mode) 400 400 { 401 - switch (mode) { 402 - case PTRACE_MODE_READ: 403 - return MAY_READ; 404 - case PTRACE_MODE_ATTACH: 401 + if (mode & PTRACE_MODE_ATTACH) 405 402 return MAY_READWRITE; 406 - } 403 + if (mode & PTRACE_MODE_READ) 404 + return MAY_READ; 407 405 408 406 return 0; 409 407 }
+2 -2
security/yama/yama_lsm.c
··· 281 281 int rc = 0; 282 282 283 283 /* require ptrace target be a child of ptracer on attach */ 284 - if (mode == PTRACE_MODE_ATTACH) { 284 + if (mode & PTRACE_MODE_ATTACH) { 285 285 switch (ptrace_scope) { 286 286 case YAMA_SCOPE_DISABLED: 287 287 /* No additional restrictions. */ ··· 307 307 } 308 308 } 309 309 310 - if (rc) { 310 + if (rc && (mode & PTRACE_MODE_NOAUDIT) == 0) { 311 311 printk_ratelimited(KERN_NOTICE 312 312 "ptrace of pid %d was attempted by: %s (pid %d)\n", 313 313 child->pid, current->comm, current->pid);
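The Yama (and Smack) hunks switch from `mode ==` to `mode &`, since PTRACE_MODE_ATTACH now arrives OR'ed with credential flags, and Yama additionally suppresses its denial printk when PTRACE_MODE_NOAUDIT is set. Both checks together, in a hedged sketch (flag values hypothetical, a counter standing in for printk_ratelimited):

```c
#include <stdbool.h>

#define MODE_ATTACH  0x2
#define MODE_NOAUDIT 0x8

static int denials_logged;

static bool access_check(unsigned mode, bool scope_blocks)
{
        bool denied = false;

        /* '== MODE_ATTACH' would miss ATTACH once extra bits are OR'ed in. */
        if ((mode & MODE_ATTACH) && scope_blocks)
                denied = true;

        if (denied && !(mode & MODE_NOAUDIT))
                denials_logged++;   /* stand-in for printk_ratelimited() */

        return !denied;
}
```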