Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'akpm' (patches from Andrew)

Merge fourth set of updates from Andrew Morton:

- the rest of lib/

- checkpatch updates

- a few misc things

- kasan: kernel address sanitizer

- the rtc tree

* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (108 commits)
ARM: mvebu: enable Armada 38x RTC driver in mvebu_v7_defconfig
ARM: mvebu: add Device Tree description of RTC on Armada 38x
MAINTAINERS: add the RTC driver for the Armada38x
drivers/rtc/rtc-armada38x: add a new RTC driver for recent mvebu SoCs
rtc: armada38x: add the device tree binding documentation
rtc: rtc-ab-b5ze-s3: add sub-minute alarm support
rtc: add support for Abracon AB-RTCMC-32.768kHz-B5ZE-S3 I2C RTC chip
of: add vendor prefix for Abracon Corporation
drivers/rtc/rtc-rk808.c: fix rtc time reading issue
drivers/rtc/rtc-isl12057.c: constify struct regmap_config
drivers/rtc/rtc-at91sam9.c: constify struct regmap_config
drivers/rtc/rtc-imxdi.c: add more known register bits
drivers/rtc/rtc-imxdi.c: trivial clean up code
ARM: mvebu: ISL12057 rtc chip can now wake up RN102, RN104 and RN2120
rtc: rtc-isl12057: add isil,irq2-can-wakeup-machine property for in-tree users
drivers/rtc/rtc-isl12057.c: add alarm support to Intersil ISL12057 RTC driver
drivers/rtc/rtc-pcf2123.c: add support for devicetree
kprobes: makes kprobes/enabled works correctly for optimized kprobes.
kprobes: set kprobes_all_disarmed earlier to enable re-optimization.
init: remove CONFIG_INIT_FALLBACK
...

+4476 -797
+1
Documentation/devicetree/bindings/i2c/trivial-devices.txt
··· 9 9 10 10 Compatible Vendor / Chip 11 11 ========== ============= 12 + abracon,abb5zes3 AB-RTCMC-32.768kHz-B5ZE-S3: Real Time Clock/Calendar Module with I2C Interface 12 13 ad,ad7414 SMBus/I2C Digital Temperature Sensor in 6-Pin SOT with SMBus Alert and Over Temperature Pin 13 14 ad,adm9240 ADM9240: Complete System Hardware Monitor for uProcessor-Based Systems 14 15 adi,adt7461 +/-1C TDM Extended Temp Range I.C
+22
Documentation/devicetree/bindings/rtc/armada-380-rtc.txt
··· 1 + * Real Time Clock of the Armada 38x SoCs 2 + 3 + RTC controller for the Armada 38x SoCs 4 + 5 + Required properties: 6 + - compatible : Should be "marvell,armada-380-rtc" 7 + - reg: a list of base address and size pairs, one for each entry in 8 + reg-names 9 + - reg-names: should contain: 10 + * "rtc" for the RTC registers 11 + * "rtc-soc" for the SoC-related registers, among them the one 12 + related to the interrupt. 13 + - interrupts: IRQ line for the RTC. 14 + 15 + Example: 16 + 17 + rtc@a3800 { 18 + compatible = "marvell,armada-380-rtc"; 19 + reg = <0xa3800 0x20>, <0x184a0 0x0c>; 20 + reg-names = "rtc", "rtc-soc"; 21 + interrupts = <GIC_SPI 21 IRQ_TYPE_LEVEL_HIGH>; 22 + };
+78
Documentation/devicetree/bindings/rtc/isil,isl12057.txt
··· 1 + Intersil ISL12057 I2C RTC/Alarm chip 2 + 3 + ISL12057 is a trivial I2C device (it has simple device tree bindings, 4 + consisting of a compatible field, an address and possibly an interrupt 5 + line). 6 + 7 + Nonetheless, it also supports an optional boolean property 8 + ("isil,irq2-can-wakeup-machine") to handle the specific use-case found 9 + on at least three in-tree users of the chip (NETGEAR ReadyNAS 102, 104 10 + and 2120 ARM-based NAS); on those devices, the IRQ#2 pin of the chip 11 + (associated with the alarm supported by the driver) is not connected 12 + to the SoC but to a PMIC. This allows the device to be powered up when 13 + the RTC alarm rings. In order to mark the device as a wakeup source and 14 + get access to the 'wakealarm' sysfs entry, this specific property can 15 + be set when the IRQ#2 pin of the chip is not connected to the SoC but 16 + can wake up the device. 17 + 18 + Required properties supported by the device: 19 + 20 + - "compatible": must be "isil,isl12057" 21 + - "reg": I2C bus address of the device 22 + 23 + Optional properties: 24 + 25 + - "isil,irq2-can-wakeup-machine": mark the chip as a wakeup source, 26 + independently of the availability of an IRQ line connected to the 27 + SoC. 28 + 29 + - "interrupt-parent", "interrupts": for passing the interrupt line 30 + of the SoC connected to IRQ#2 of the RTC chip. 31 + 32 + 33 + Example isl12057 node without IRQ#2 pin connected (no alarm support): 34 + 35 + isl12057: isl12057@68 { 36 + compatible = "isil,isl12057"; 37 + reg = <0x68>; 38 + }; 39 + 40 + 41 + Example isl12057 node with IRQ#2 pin connected to the main SoC via MPP6 (note 42 + that the pinctrl-related properties below are given for completeness and 43 + may not be required, or may differ, depending on your system, SoC, 44 + and the main function of the MPP used as the IRQ line; 45 + "interrupt-parent" and "interrupts" are usually sufficient): 46 + 47 + pinctrl { 48 + ... 
49 + 50 + rtc_alarm_pin: rtc_alarm_pin { 51 + marvell,pins = "mpp6"; 52 + marvell,function = "gpio"; 53 + }; 54 + 55 + ... 56 + 57 + }; 58 + 59 + ... 60 + 61 + isl12057: isl12057@68 { 62 + compatible = "isil,isl12057"; 63 + reg = <0x68>; 64 + pinctrl-0 = <&rtc_alarm_pin>; 65 + pinctrl-names = "default"; 66 + interrupt-parent = <&gpio0>; 67 + interrupts = <6 IRQ_TYPE_EDGE_FALLING>; 68 + }; 69 + 70 + 71 + Example isl12057 node with the IRQ#2 pin connected not to the SoC but to a 72 + PMIC, allowing the device to be started based on a configured alarm: 73 + 74 + isl12057: isl12057@68 { 75 + compatible = "isil,isl12057"; 76 + reg = <0x68>; 77 + isil,irq2-can-wakeup-machine; 78 + };
+16
Documentation/devicetree/bindings/rtc/nxp,rtc-2123.txt
··· 1 + NXP PCF2123 SPI Real Time Clock 2 + 3 + Required properties: 4 + - compatible: should be "nxp,rtc-pcf2123" 5 + - reg: should be the SPI slave chipselect address 6 + 7 + Optional properties: 8 + - spi-cs-high: PCF2123 needs chipselect high 9 + 10 + Example: 11 + 12 + rtc: nxp,rtc-pcf2123@3 { 13 + compatible = "nxp,rtc-pcf2123"; 14 + reg = <3>; 15 + spi-cs-high; 16 + };
+1
Documentation/devicetree/bindings/vendor-prefixes.txt
··· 4 4 using them to avoid name-space collisions. 5 5 6 6 abilis Abilis Systems 7 + abracon Abracon Corporation 7 8 active-semi Active-Semi International Inc 8 9 ad Avionic Design GmbH 9 10 adapteva Adapteva, Inc.
+170
Documentation/kasan.txt
··· 1 + Kernel address sanitizer 2 + ================ 3 + 4 + 0. Overview 5 + =========== 6 + 7 + Kernel Address sanitizer (KASan) is a dynamic memory error detector. It provides 8 + a fast and comprehensive solution for finding use-after-free and out-of-bounds 9 + bugs. 10 + 11 + KASan uses compile-time instrumentation for checking every memory access, 12 + therefore you will need GCC version 4.9.2 or later. 13 + 14 + Currently KASan is supported only for the x86_64 architecture and requires that 15 + the kernel be built with the SLUB allocator. 16 + 17 + 1. Usage 18 + ========= 19 + 20 + To enable KASAN, configure the kernel with: 21 + 22 + CONFIG_KASAN = y 23 + 24 + and choose between CONFIG_KASAN_OUTLINE and CONFIG_KASAN_INLINE. Outline and 25 + inline are compiler instrumentation types. The former produces a smaller binary, 26 + while the latter is 1.1 - 2 times faster. Inline instrumentation requires GCC 5.0 or 27 + later. 28 + 29 + Currently KASAN works only with the SLUB memory allocator. 30 + For better bug detection and nicer reports, enable CONFIG_STACKTRACE and put 31 + at least 'slub_debug=U' in the boot cmdline. 32 + 33 + To disable instrumentation for specific files or directories, add a line 34 + similar to the following to the respective kernel Makefile: 35 + 36 + For a single file (e.g. 
main.o): 37 + KASAN_SANITIZE_main.o := n 38 + 39 + For all files in one directory: 40 + KASAN_SANITIZE := n 41 + 42 + 1.1 Error reports 43 + ========== 44 + 45 + A typical out of bounds access report looks like this: 46 + 47 + ================================================================== 48 + BUG: AddressSanitizer: out of bounds access in kmalloc_oob_right+0x65/0x75 [test_kasan] at addr ffff8800693bc5d3 49 + Write of size 1 by task modprobe/1689 50 + ============================================================================= 51 + BUG kmalloc-128 (Not tainted): kasan error 52 + ----------------------------------------------------------------------------- 53 + 54 + Disabling lock debugging due to kernel taint 55 + INFO: Allocated in kmalloc_oob_right+0x3d/0x75 [test_kasan] age=0 cpu=0 pid=1689 56 + __slab_alloc+0x4b4/0x4f0 57 + kmem_cache_alloc_trace+0x10b/0x190 58 + kmalloc_oob_right+0x3d/0x75 [test_kasan] 59 + init_module+0x9/0x47 [test_kasan] 60 + do_one_initcall+0x99/0x200 61 + load_module+0x2cb3/0x3b20 62 + SyS_finit_module+0x76/0x80 63 + system_call_fastpath+0x12/0x17 64 + INFO: Slab 0xffffea0001a4ef00 objects=17 used=7 fp=0xffff8800693bd728 flags=0x100000000004080 65 + INFO: Object 0xffff8800693bc558 @offset=1368 fp=0xffff8800693bc720 66 + 67 + Bytes b4 ffff8800693bc548: 00 00 00 00 00 00 00 00 5a 5a 5a 5a 5a 5a 5a 5a ........ZZZZZZZZ 68 + Object ffff8800693bc558: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk 69 + Object ffff8800693bc568: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk 70 + Object ffff8800693bc578: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk 71 + Object ffff8800693bc588: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk 72 + Object ffff8800693bc598: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk 73 + Object ffff8800693bc5a8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk 74 + Object ffff8800693bc5b8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 
6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk 75 + Object ffff8800693bc5c8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b a5 kkkkkkkkkkkkkkk. 76 + Redzone ffff8800693bc5d8: cc cc cc cc cc cc cc cc ........ 77 + Padding ffff8800693bc718: 5a 5a 5a 5a 5a 5a 5a 5a ZZZZZZZZ 78 + CPU: 0 PID: 1689 Comm: modprobe Tainted: G B 3.18.0-rc1-mm1+ #98 79 + Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.7.5-0-ge51488c-20140602_164612-nilsson.home.kraxel.org 04/01/2014 80 + ffff8800693bc000 0000000000000000 ffff8800693bc558 ffff88006923bb78 81 + ffffffff81cc68ae 00000000000000f3 ffff88006d407600 ffff88006923bba8 82 + ffffffff811fd848 ffff88006d407600 ffffea0001a4ef00 ffff8800693bc558 83 + Call Trace: 84 + [<ffffffff81cc68ae>] dump_stack+0x46/0x58 85 + [<ffffffff811fd848>] print_trailer+0xf8/0x160 86 + [<ffffffffa00026a7>] ? kmem_cache_oob+0xc3/0xc3 [test_kasan] 87 + [<ffffffff811ff0f5>] object_err+0x35/0x40 88 + [<ffffffffa0002065>] ? kmalloc_oob_right+0x65/0x75 [test_kasan] 89 + [<ffffffff8120b9fa>] kasan_report_error+0x38a/0x3f0 90 + [<ffffffff8120a79f>] ? kasan_poison_shadow+0x2f/0x40 91 + [<ffffffff8120b344>] ? kasan_unpoison_shadow+0x14/0x40 92 + [<ffffffff8120a79f>] ? kasan_poison_shadow+0x2f/0x40 93 + [<ffffffffa00026a7>] ? kmem_cache_oob+0xc3/0xc3 [test_kasan] 94 + [<ffffffff8120a995>] __asan_store1+0x75/0xb0 95 + [<ffffffffa0002601>] ? kmem_cache_oob+0x1d/0xc3 [test_kasan] 96 + [<ffffffffa0002065>] ? kmalloc_oob_right+0x65/0x75 [test_kasan] 97 + [<ffffffffa0002065>] kmalloc_oob_right+0x65/0x75 [test_kasan] 98 + [<ffffffffa00026b0>] init_module+0x9/0x47 [test_kasan] 99 + [<ffffffff810002d9>] do_one_initcall+0x99/0x200 100 + [<ffffffff811e4e5c>] ? __vunmap+0xec/0x160 101 + [<ffffffff81114f63>] load_module+0x2cb3/0x3b20 102 + [<ffffffff8110fd70>] ? 
m_show+0x240/0x240 103 + [<ffffffff81115f06>] SyS_finit_module+0x76/0x80 104 + [<ffffffff81cd3129>] system_call_fastpath+0x12/0x17 105 + Memory state around the buggy address: 106 + ffff8800693bc300: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc 107 + ffff8800693bc380: fc fc 00 00 00 00 00 00 00 00 00 00 00 00 00 fc 108 + ffff8800693bc400: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc 109 + ffff8800693bc480: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc 110 + ffff8800693bc500: fc fc fc fc fc fc fc fc fc fc fc 00 00 00 00 00 111 + >ffff8800693bc580: 00 00 00 00 00 00 00 00 00 00 03 fc fc fc fc fc 112 + ^ 113 + ffff8800693bc600: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc 114 + ffff8800693bc680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc 115 + ffff8800693bc700: fc fc fc fc fb fb fb fb fb fb fb fb fb fb fb fb 116 + ffff8800693bc780: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb 117 + ffff8800693bc800: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb 118 + ================================================================== 119 + 120 + The first sections describe the slub object where the bad access happened. 121 + See the 'SLUB Debug output' section in Documentation/vm/slub.txt for details. 122 + 123 + In the last section the report shows the memory state around the accessed address. 124 + Reading this part requires some more understanding of how KASAN works. 125 + 126 + Each 8 bytes of memory are encoded in one shadow byte as accessible, 127 + partially accessible, freed, or part of a redzone. 128 + We use the following encoding for each shadow byte: 0 means that all 8 bytes 129 + of the corresponding memory region are accessible; number N (1 <= N <= 7) means 130 + that the first N bytes are accessible, and the other (8 - N) bytes are not; 131 + any negative value indicates that the entire 8-byte word is inaccessible. 
132 + We use different negative values to distinguish between different kinds of 133 + inaccessible memory like redzones or freed memory (see mm/kasan/kasan.h). 134 + 135 + In the report above the arrow points to the shadow byte 03, which means that 136 + the accessed address is partially accessible. 137 + 138 + 139 + 2. Implementation details 140 + ======================== 141 + 142 + From a high level, our approach to memory error detection is similar to that 143 + of kmemcheck: use shadow memory to record whether each byte of memory is safe 144 + to access, and use compile-time instrumentation to check shadow memory on each 145 + memory access. 146 + 147 + AddressSanitizer dedicates 1/8 of kernel memory to its shadow memory 148 + (e.g. 16TB to cover 128TB on x86_64) and uses direct mapping with a scale and 149 + offset to translate a memory address to its corresponding shadow address. 150 + 151 + Here is the function which translates an address to its corresponding shadow 152 + address: 153 + 154 + static inline void *kasan_mem_to_shadow(const void *addr) 155 + { 156 + return (void *)(((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT) 157 + + KASAN_SHADOW_OFFSET); 158 + } 159 + 160 + where KASAN_SHADOW_SCALE_SHIFT = 3. 161 + 162 + Compile-time instrumentation is used to check memory accesses. The compiler inserts 163 + function calls (__asan_load*(addr), __asan_store*(addr)) before each memory 164 + access of size 1, 2, 4, 8 or 16. These functions check whether the memory access is 165 + valid by checking the corresponding shadow memory. 166 + 167 + GCC 5.0 can perform inline instrumentation. Instead of making 168 + function calls, GCC directly inserts the code to check the shadow memory. 169 + This option significantly enlarges the kernel, but it gives a 1.1x-2x performance 170 + boost over an outline-instrumented kernel.
+1 -1
Documentation/video4linux/v4l2-pci-skeleton.c
··· 42 42 MODULE_DESCRIPTION("V4L2 PCI Skeleton Driver"); 43 43 MODULE_AUTHOR("Hans Verkuil"); 44 44 MODULE_LICENSE("GPL v2"); 45 - MODULE_DEVICE_TABLE(pci, skeleton_pci_tbl); 46 45 47 46 /** 48 47 * struct skeleton - All internal data for one instance of device ··· 94 95 /* { PCI_DEVICE(PCI_VENDOR_ID_, PCI_DEVICE_ID_) }, */ 95 96 { 0, } 96 97 }; 98 + MODULE_DEVICE_TABLE(pci, skeleton_pci_tbl); 97 99 98 100 /* 99 101 * HDTV: this structure has the capabilities of the HDTV receiver.
+2
Documentation/x86/x86_64/mm.txt
··· 12 12 ffffe90000000000 - ffffe9ffffffffff (=40 bits) hole 13 13 ffffea0000000000 - ffffeaffffffffff (=40 bits) virtual memory map (1TB) 14 14 ... unused hole ... 15 + ffffec0000000000 - fffffc0000000000 (=44 bits) kasan shadow memory (16TB) 16 + ... unused hole ... 15 17 ffffff0000000000 - ffffff7fffffffff (=39 bits) %esp fixup stacks 16 18 ... unused hole ... 17 19 ffffffff80000000 - ffffffffa0000000 (=512 MB) kernel text mapping, from phys 0
+1
MAINTAINERS
··· 1173 1173 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 1174 1174 S: Maintained 1175 1175 F: arch/arm/mach-mvebu/ 1176 + F: drivers/rtc/rtc-armada38x.c 1176 1177 1177 1178 ARM/Marvell Berlin SoC support 1178 1179 M: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
+2 -1
Makefile
··· 423 423 export HOSTCXX HOSTCXXFLAGS LDFLAGS_MODULE CHECK CHECKFLAGS 424 424 425 425 export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS LDFLAGS 426 - export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV 426 + export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV CFLAGS_KASAN 427 427 export KBUILD_AFLAGS AFLAGS_KERNEL AFLAGS_MODULE 428 428 export KBUILD_AFLAGS_MODULE KBUILD_CFLAGS_MODULE KBUILD_LDFLAGS_MODULE 429 429 export KBUILD_AFLAGS_KERNEL KBUILD_CFLAGS_KERNEL ··· 781 781 KBUILD_CFLAGS += -DCC_HAVE_ASM_GOTO 782 782 endif 783 783 784 + include $(srctree)/scripts/Makefile.kasan 784 785 include $(srctree)/scripts/Makefile.extrawarn 785 786 786 787 # Add user supplied CPPFLAGS, AFLAGS and CFLAGS as the last assignments
+1
arch/arm/boot/dts/armada-370-netgear-rn102.dts
··· 87 87 isl12057: isl12057@68 { 88 88 compatible = "isil,isl12057"; 89 89 reg = <0x68>; 90 + isil,irq2-can-wakeup-machine; 90 91 }; 91 92 92 93 g762: g762@3e {
+1
arch/arm/boot/dts/armada-370-netgear-rn104.dts
··· 93 93 isl12057: isl12057@68 { 94 94 compatible = "isil,isl12057"; 95 95 reg = <0x68>; 96 + isil,irq2-can-wakeup-machine; 96 97 }; 97 98 98 99 g762: g762@3e {
+7
arch/arm/boot/dts/armada-38x.dtsi
··· 381 381 clocks = <&gateclk 4>; 382 382 }; 383 383 384 + rtc@a3800 { 385 + compatible = "marvell,armada-380-rtc"; 386 + reg = <0xa3800 0x20>, <0x184a0 0x0c>; 387 + reg-names = "rtc", "rtc-soc"; 388 + interrupts = <GIC_SPI 21 IRQ_TYPE_LEVEL_HIGH>; 389 + }; 390 + 384 391 sata@a8000 { 385 392 compatible = "marvell,armada-380-ahci"; 386 393 reg = <0xa8000 0x2000>;
+1
arch/arm/boot/dts/armada-xp-netgear-rn2120.dts
··· 100 100 isl12057: isl12057@68 { 101 101 compatible = "isil,isl12057"; 102 102 reg = <0x68>; 103 + isil,irq2-can-wakeup-machine; 103 104 }; 104 105 105 106 /* Controller for rear fan #1 of 3 (Protechnic
+1
arch/arm/configs/mvebu_v7_defconfig
··· 112 112 CONFIG_RTC_CLASS=y 113 113 CONFIG_RTC_DRV_S35390A=y 114 114 CONFIG_RTC_DRV_MV=y 115 + CONFIG_RTC_DRV_ARMADA38X=y 115 116 CONFIG_DMADEVICES=y 116 117 CONFIG_MV_XOR=y 117 118 # CONFIG_IOMMU_SUPPORT is not set
+1 -1
arch/arm/kernel/module.c
··· 41 41 void *module_alloc(unsigned long size) 42 42 { 43 43 return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END, 44 - GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE, 44 + GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE, 45 45 __builtin_return_address(0)); 46 46 } 47 47 #endif
+2 -2
arch/arm64/kernel/module.c
··· 35 35 void *module_alloc(unsigned long size) 36 36 { 37 37 return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END, 38 - GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE, 39 - __builtin_return_address(0)); 38 + GFP_KERNEL, PAGE_KERNEL_EXEC, 0, 39 + NUMA_NO_NODE, __builtin_return_address(0)); 40 40 } 41 41 42 42 enum aarch64_reloc_op {
+2 -4
arch/ia64/kernel/topology.c
··· 217 217 218 218 static ssize_t show_shared_cpu_map(struct cache_info *this_leaf, char *buf) 219 219 { 220 - ssize_t len; 221 220 cpumask_t shared_cpu_map; 222 221 223 222 cpumask_and(&shared_cpu_map, 224 223 &this_leaf->shared_cpu_map, cpu_online_mask); 225 - len = cpumask_scnprintf(buf, NR_CPUS+1, &shared_cpu_map); 226 - len += sprintf(buf+len, "\n"); 227 - return len; 224 + return scnprintf(buf, PAGE_SIZE, "%*pb\n", 225 + cpumask_pr_args(&shared_cpu_map)); 228 226 } 229 227 230 228 static ssize_t show_type(struct cache_info *this_leaf, char *buf)
+1 -1
arch/mips/kernel/module.c
··· 47 47 void *module_alloc(unsigned long size) 48 48 { 49 49 return __vmalloc_node_range(size, 1, MODULE_START, MODULE_END, 50 - GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE, 50 + GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE, 51 51 __builtin_return_address(0)); 52 52 } 53 53 #endif
+5 -8
arch/mips/netlogic/common/smp.c
··· 162 162 unsigned int boot_cpu; 163 163 int num_cpus, i, ncore, node; 164 164 volatile u32 *cpu_ready = nlm_get_boot_data(BOOT_CPU_READY); 165 - char buf[64]; 166 165 167 166 boot_cpu = hard_smp_processor_id(); 168 167 cpumask_clear(&phys_cpu_present_mask); ··· 188 189 } 189 190 } 190 191 191 - cpumask_scnprintf(buf, ARRAY_SIZE(buf), &phys_cpu_present_mask); 192 - pr_info("Physical CPU mask: %s\n", buf); 193 - cpumask_scnprintf(buf, ARRAY_SIZE(buf), cpu_possible_mask); 194 - pr_info("Possible CPU mask: %s\n", buf); 192 + pr_info("Physical CPU mask: %*pb\n", 193 + cpumask_pr_args(&phys_cpu_present_mask)); 194 + pr_info("Possible CPU mask: %*pb\n", 195 + cpumask_pr_args(cpu_possible_mask)); 195 196 196 197 /* check with the cores we have woken up */ 197 198 for (ncore = 0, i = 0; i < NLM_NR_NODES; i++) ··· 208 209 { 209 210 uint32_t core0_thr_mask, core_thr_mask; 210 211 int threadmode, i, j; 211 - char buf[64]; 212 212 213 213 core0_thr_mask = 0; 214 214 for (i = 0; i < NLM_THREADS_PER_CORE; i++) ··· 242 244 return threadmode; 243 245 244 246 unsupp: 245 - cpumask_scnprintf(buf, ARRAY_SIZE(buf), wakeup_mask); 246 - panic("Unsupported CPU mask %s", buf); 247 + panic("Unsupported CPU mask %*pb", cpumask_pr_args(wakeup_mask)); 247 248 return 0; 248 249 } 249 250
+1 -1
arch/parisc/kernel/module.c
··· 219 219 * init_data correctly */ 220 220 return __vmalloc_node_range(size, 1, VMALLOC_START, VMALLOC_END, 221 221 GFP_KERNEL | __GFP_HIGHMEM, 222 - PAGE_KERNEL_RWX, NUMA_NO_NODE, 222 + PAGE_KERNEL_RWX, 0, NUMA_NO_NODE, 223 223 __builtin_return_address(0)); 224 224 } 225 225
+6 -9
arch/powerpc/kernel/cacheinfo.c
··· 607 607 { 608 608 struct cache_index_dir *index; 609 609 struct cache *cache; 610 - int len; 611 - int n = 0; 610 + int ret; 612 611 613 612 index = kobj_to_cache_index_dir(k); 614 613 cache = index->cache; 615 - len = PAGE_SIZE - 2; 616 614 617 - if (len > 1) { 618 - n = cpumask_scnprintf(buf, len, &cache->shared_cpu_map); 619 - buf[n++] = '\n'; 620 - buf[n] = '\0'; 621 - } 622 - return n; 615 + ret = scnprintf(buf, PAGE_SIZE - 1, "%*pb\n", 616 + cpumask_pr_args(&cache->shared_cpu_map)); 617 + buf[ret++] = '\n'; 618 + buf[ret] = '\0'; 619 + return ret; 623 620 } 624 621 625 622 static struct kobj_attribute cache_shared_cpu_map_attr =
+2 -4
arch/powerpc/sysdev/xics/ics-opal.c
··· 131 131 132 132 wanted_server = xics_get_irq_server(d->irq, cpumask, 1); 133 133 if (wanted_server < 0) { 134 - char cpulist[128]; 135 - cpumask_scnprintf(cpulist, sizeof(cpulist), cpumask); 136 - pr_warning("%s: No online cpus in the mask %s for irq %d\n", 137 - __func__, cpulist, d->irq); 134 + pr_warning("%s: No online cpus in the mask %*pb for irq %d\n", 135 + __func__, cpumask_pr_args(cpumask), d->irq); 138 136 return -1; 139 137 } 140 138 server = ics_opal_mangle_server(wanted_server);
+2 -5
arch/powerpc/sysdev/xics/ics-rtas.c
··· 140 140 141 141 irq_server = xics_get_irq_server(d->irq, cpumask, 1); 142 142 if (irq_server == -1) { 143 - char cpulist[128]; 144 - cpumask_scnprintf(cpulist, sizeof(cpulist), cpumask); 145 - printk(KERN_WARNING 146 - "%s: No online cpus in the mask %s for irq %d\n", 147 - __func__, cpulist, d->irq); 143 + pr_warning("%s: No online cpus in the mask %*pb for irq %d\n", 144 + __func__, cpumask_pr_args(cpumask), d->irq); 148 145 return -1; 149 146 } 150 147
+1 -1
arch/s390/kernel/module.c
··· 50 50 if (PAGE_ALIGN(size) > MODULES_LEN) 51 51 return NULL; 52 52 return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END, 53 - GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE, 53 + GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE, 54 54 __builtin_return_address(0)); 55 55 } 56 56 #endif
+1 -1
arch/sparc/kernel/module.c
··· 29 29 if (PAGE_ALIGN(size) > MODULES_LEN) 30 30 return NULL; 31 31 return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END, 32 - GFP_KERNEL, PAGE_KERNEL, NUMA_NO_NODE, 32 + GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE, 33 33 __builtin_return_address(0)); 34 34 } 35 35 #else
+1 -4
arch/tile/kernel/hardwall.c
··· 909 909 static int hardwall_proc_show(struct seq_file *sf, void *v) 910 910 { 911 911 struct hardwall_info *info = sf->private; 912 - char buf[256]; 913 912 914 - int rc = cpulist_scnprintf(buf, sizeof(buf), &info->cpumask); 915 - buf[rc++] = '\n'; 916 - seq_write(sf, buf, rc); 913 + seq_printf(sf, "%*pbl\n", cpumask_pr_args(&info->cpumask)); 917 914 return 0; 918 915 } 919 916
+2 -3
arch/tile/kernel/proc.c
··· 45 45 int n = ptr_to_cpu(v); 46 46 47 47 if (n == 0) { 48 - char buf[NR_CPUS*5]; 49 - cpulist_scnprintf(buf, sizeof(buf), cpu_online_mask); 50 48 seq_printf(m, "cpu count\t: %d\n", num_online_cpus()); 51 - seq_printf(m, "cpu list\t: %s\n", buf); 49 + seq_printf(m, "cpu list\t: %*pbl\n", 50 + cpumask_pr_args(cpu_online_mask)); 52 51 seq_printf(m, "model name\t: %s\n", chip_model); 53 52 seq_printf(m, "flags\t\t:\n"); /* nothing for now */ 54 53 seq_printf(m, "cpu MHz\t\t: %llu.%06llu\n",
+5 -8
arch/tile/kernel/setup.c
··· 215 215 216 216 static int __init setup_isolnodes(char *str) 217 217 { 218 - char buf[MAX_NUMNODES * 5]; 219 218 if (str == NULL || nodelist_parse(str, isolnodes) != 0) 220 219 return -EINVAL; 221 220 222 - nodelist_scnprintf(buf, sizeof(buf), isolnodes); 223 - pr_info("Set isolnodes value to '%s'\n", buf); 221 + pr_info("Set isolnodes value to '%*pbl'\n", 222 + nodemask_pr_args(&isolnodes)); 224 223 return 0; 225 224 } 226 225 early_param("isolnodes", setup_isolnodes); ··· 1314 1315 1315 1316 void __init print_disabled_cpus(void) 1316 1317 { 1317 - if (!cpumask_empty(&disabled_map)) { 1318 - char buf[100]; 1319 - cpulist_scnprintf(buf, sizeof(buf), &disabled_map); 1320 - pr_info("CPUs not available for Linux: %s\n", buf); 1321 - } 1318 + if (!cpumask_empty(&disabled_map)) 1319 + pr_info("CPUs not available for Linux: %*pbl\n", 1320 + cpumask_pr_args(&disabled_map)); 1322 1321 } 1323 1322 1324 1323 static void __init setup_cpu_maps(void)
+5 -7
arch/tile/mm/homecache.c
··· 115 115 struct cpumask cache_cpumask_copy, tlb_cpumask_copy; 116 116 struct cpumask *cache_cpumask, *tlb_cpumask; 117 117 HV_PhysAddr cache_pa; 118 - char cache_buf[NR_CPUS*5], tlb_buf[NR_CPUS*5]; 119 118 120 119 mb(); /* provided just to simplify "magic hypervisor" mode */ 121 120 ··· 148 149 asids, asidcount); 149 150 if (rc == 0) 150 151 return; 151 - cpumask_scnprintf(cache_buf, sizeof(cache_buf), &cache_cpumask_copy); 152 - cpumask_scnprintf(tlb_buf, sizeof(tlb_buf), &tlb_cpumask_copy); 153 152 154 - pr_err("hv_flush_remote(%#llx, %#lx, %p [%s], %#lx, %#lx, %#lx, %p [%s], %p, %d) = %d\n", 155 - cache_pa, cache_control, cache_cpumask, cache_buf, 156 - (unsigned long)tlb_va, tlb_length, tlb_pgsize, 157 - tlb_cpumask, tlb_buf, asids, asidcount, rc); 153 + pr_err("hv_flush_remote(%#llx, %#lx, %p [%*pb], %#lx, %#lx, %#lx, %p [%*pb], %p, %d) = %d\n", 154 + cache_pa, cache_control, cache_cpumask, 155 + cpumask_pr_args(&cache_cpumask_copy), 156 + (unsigned long)tlb_va, tlb_length, tlb_pgsize, tlb_cpumask, 157 + cpumask_pr_args(&tlb_cpumask_copy), asids, asidcount, rc); 158 158 panic("Unsafe to continue."); 159 159 } 160 160
+7 -11
arch/tile/mm/init.c
··· 353 353 354 354 /* Neighborhood ktext pages on specified mask */ 355 355 else if (cpulist_parse(str, &ktext_mask) == 0) { 356 - char buf[NR_CPUS * 5]; 357 - cpulist_scnprintf(buf, sizeof(buf), &ktext_mask); 358 356 if (cpumask_weight(&ktext_mask) > 1) { 359 357 ktext_small = 1; 360 - pr_info("ktext: using caching neighborhood %s with small pages\n", 361 - buf); 358 + pr_info("ktext: using caching neighborhood %*pbl with small pages\n", 359 + cpumask_pr_args(&ktext_mask)); 362 360 } else { 363 - pr_info("ktext: caching on cpu %s with one huge page\n", 364 - buf); 361 + pr_info("ktext: caching on cpu %*pbl with one huge page\n", 362 + cpumask_pr_args(&ktext_mask)); 365 363 } 366 364 } 367 365 ··· 490 492 struct cpumask bad; 491 493 cpumask_andnot(&bad, &ktext_mask, cpu_possible_mask); 492 494 cpumask_and(&ktext_mask, &ktext_mask, cpu_possible_mask); 493 - if (!cpumask_empty(&bad)) { 494 - char buf[NR_CPUS * 5]; 495 - cpulist_scnprintf(buf, sizeof(buf), &bad); 496 - pr_info("ktext: not using unavailable cpus %s\n", buf); 497 - } 495 + if (!cpumask_empty(&bad)) 496 + pr_info("ktext: not using unavailable cpus %*pbl\n", 497 + cpumask_pr_args(&bad)); 498 498 if (cpumask_empty(&ktext_mask)) { 499 499 pr_warn("ktext: no valid cpus; caching on %d\n", 500 500 smp_processor_id());
+1 -1
arch/unicore32/kernel/module.c
··· 25 25 void *module_alloc(unsigned long size) 26 26 { 27 27 return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END, 28 - GFP_KERNEL, PAGE_KERNEL_EXEC, NUMA_NO_NODE, 28 + GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE, 29 29 __builtin_return_address(0)); 30 30 } 31 31
+1
arch/x86/Kconfig
··· 85 85 select HAVE_CMPXCHG_LOCAL 86 86 select HAVE_CMPXCHG_DOUBLE 87 87 select HAVE_ARCH_KMEMCHECK 88 + select HAVE_ARCH_KASAN if X86_64 && SPARSEMEM_VMEMMAP 88 89 select HAVE_USER_RETURN_NOTIFIER 89 90 select ARCH_BINFMT_ELF_RANDOMIZE_PIE 90 91 select HAVE_ARCH_JUMP_LABEL
+2
arch/x86/boot/Makefile
··· 14 14 # Set it to -DSVGA_MODE=NORMAL_VGA if you just want the EGA/VGA mode. 15 15 # The number is the same as you would ordinarily press at bootup. 16 16 17 + KASAN_SANITIZE := n 18 + 17 19 SVGA_MODE := -DSVGA_MODE=NORMAL_VGA 18 20 19 21 targets := vmlinux.bin setup.bin setup.elf bzImage
+2
arch/x86/boot/compressed/Makefile
··· 16 16 # (see scripts/Makefile.lib size_append) 17 17 # compressed vmlinux.bin.all + u32 size of vmlinux.bin.all 18 18 19 + KASAN_SANITIZE := n 20 + 19 21 targets := vmlinux vmlinux.bin vmlinux.bin.gz vmlinux.bin.bz2 vmlinux.bin.lzma \ 20 22 vmlinux.bin.xz vmlinux.bin.lzo vmlinux.bin.lz4 21 23
+1 -2
arch/x86/boot/compressed/eboot.c
··· 13 13 #include <asm/setup.h> 14 14 #include <asm/desc.h> 15 15 16 - #undef memcpy /* Use memcpy from misc.c */ 17 - 16 + #include "../string.h" 18 17 #include "eboot.h" 19 18 20 19 static efi_system_table_t *sys_table;
+1
arch/x86/boot/compressed/misc.h
··· 7 7 * we just keep it from happening 8 8 */ 9 9 #undef CONFIG_PARAVIRT 10 + #undef CONFIG_KASAN 10 11 #ifdef CONFIG_X86_32 11 12 #define _ASM_X86_DESC_H 1 12 13 #endif
+31
arch/x86/include/asm/kasan.h
··· 1 + #ifndef _ASM_X86_KASAN_H 2 + #define _ASM_X86_KASAN_H 3 + 4 + /* 5 + * Compiler uses shadow offset assuming that addresses start 6 + * from 0. Kernel addresses don't start from 0, so shadow 7 + * for kernel really starts from compiler's shadow offset + 8 + * 'kernel address space start' >> KASAN_SHADOW_SCALE_SHIFT 9 + */ 10 + #define KASAN_SHADOW_START (KASAN_SHADOW_OFFSET + \ 11 + (0xffff800000000000ULL >> 3)) 12 + /* 47 bits for kernel address -> (47 - 3) bits for shadow */ 13 + #define KASAN_SHADOW_END (KASAN_SHADOW_START + (1ULL << (47 - 3))) 14 + 15 + #ifndef __ASSEMBLY__ 16 + 17 + extern pte_t kasan_zero_pte[]; 18 + extern pte_t kasan_zero_pmd[]; 19 + extern pte_t kasan_zero_pud[]; 20 + 21 + #ifdef CONFIG_KASAN 22 + void __init kasan_map_early_shadow(pgd_t *pgd); 23 + void __init kasan_init(void); 24 + #else 25 + static inline void kasan_map_early_shadow(pgd_t *pgd) { } 26 + static inline void kasan_init(void) { } 27 + #endif 28 + 29 + #endif 30 + 31 + #endif
+9 -3
arch/x86/include/asm/page_64_types.h
··· 1 1 #ifndef _ASM_X86_PAGE_64_DEFS_H 2 2 #define _ASM_X86_PAGE_64_DEFS_H 3 3 4 - #define THREAD_SIZE_ORDER 2 4 + #ifdef CONFIG_KASAN 5 + #define KASAN_STACK_ORDER 1 6 + #else 7 + #define KASAN_STACK_ORDER 0 8 + #endif 9 + 10 + #define THREAD_SIZE_ORDER (2 + KASAN_STACK_ORDER) 5 11 #define THREAD_SIZE (PAGE_SIZE << THREAD_SIZE_ORDER) 6 12 #define CURRENT_MASK (~(THREAD_SIZE - 1)) 7 13 8 - #define EXCEPTION_STACK_ORDER 0 14 + #define EXCEPTION_STACK_ORDER (0 + KASAN_STACK_ORDER) 9 15 #define EXCEPTION_STKSZ (PAGE_SIZE << EXCEPTION_STACK_ORDER) 10 16 11 17 #define DEBUG_STACK_ORDER (EXCEPTION_STACK_ORDER + 1) 12 18 #define DEBUG_STKSZ (PAGE_SIZE << DEBUG_STACK_ORDER) 13 19 14 - #define IRQ_STACK_ORDER 2 20 + #define IRQ_STACK_ORDER (2 + KASAN_STACK_ORDER) 15 21 #define IRQ_STACK_SIZE (PAGE_SIZE << IRQ_STACK_ORDER) 16 22 17 23 #define DOUBLEFAULT_STACK 1
+17 -1
arch/x86/include/asm/string_64.h
··· 27 27 function. */ 28 28 29 29 #define __HAVE_ARCH_MEMCPY 1 30 + extern void *__memcpy(void *to, const void *from, size_t len); 31 + 30 32 #ifndef CONFIG_KMEMCHECK 31 33 #if (__GNUC__ == 4 && __GNUC_MINOR__ >= 3) || __GNUC__ > 4 32 34 extern void *memcpy(void *to, const void *from, size_t len); 33 35 #else 34 - extern void *__memcpy(void *to, const void *from, size_t len); 35 36 #define memcpy(dst, src, len) \ 36 37 ({ \ 37 38 size_t __len = (len); \ ··· 54 53 55 54 #define __HAVE_ARCH_MEMSET 56 55 void *memset(void *s, int c, size_t n); 56 + void *__memset(void *s, int c, size_t n); 57 57 58 58 #define __HAVE_ARCH_MEMMOVE 59 59 void *memmove(void *dest, const void *src, size_t count); 60 + void *__memmove(void *dest, const void *src, size_t count); 60 61 61 62 int memcmp(const void *cs, const void *ct, size_t count); 62 63 size_t strlen(const char *s); 63 64 char *strcpy(char *dest, const char *src); 64 65 char *strcat(char *dest, const char *src); 65 66 int strcmp(const char *cs, const char *ct); 67 + 68 + #if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__) 69 + 70 + /* 71 + * For files that are not instrumented (e.g. mm/slub.c) we 72 + * should use the non-instrumented versions of the mem* functions. 73 + */ 74 + 75 + #undef memcpy 76 + #define memcpy(dst, src, len) __memcpy(dst, src, len) 77 + #define memmove(dst, src, len) __memmove(dst, src, len) 78 + #define memset(s, c, n) __memset(s, c, n) 79 + #endif 66 80 67 81 #endif /* __KERNEL__ */ 68 82
+4
arch/x86/kernel/Makefile
··· 16 16 CFLAGS_REMOVE_early_printk.o = -pg 17 17 endif 18 18 19 + KASAN_SANITIZE_head$(BITS).o := n 20 + KASAN_SANITIZE_dumpstack.o := n 21 + KASAN_SANITIZE_dumpstack_$(BITS).o := n 22 + 19 23 CFLAGS_irq.o := -I$(src)/../include/asm/trace 20 24 21 25 obj-y := process_$(BITS).o signal.o entry_$(BITS).o
+11 -13
arch/x86/kernel/cpu/intel_cacheinfo.c
··· 952 952 static ssize_t show_shared_cpu_map_func(struct _cpuid4_info *this_leaf, 953 953 int type, char *buf) 954 954 { 955 - ptrdiff_t len = PTR_ALIGN(buf + PAGE_SIZE - 1, PAGE_SIZE) - buf; 956 - int n = 0; 955 + const struct cpumask *mask = to_cpumask(this_leaf->shared_cpu_map); 956 + int ret; 957 957 958 - if (len > 1) { 959 - const struct cpumask *mask; 960 - 961 - mask = to_cpumask(this_leaf->shared_cpu_map); 962 - n = type ? 963 - cpulist_scnprintf(buf, len-2, mask) : 964 - cpumask_scnprintf(buf, len-2, mask); 965 - buf[n++] = '\n'; 966 - buf[n] = '\0'; 967 - } 968 - return n; 958 + if (type) 959 + ret = scnprintf(buf, PAGE_SIZE - 1, "%*pbl", 960 + cpumask_pr_args(mask)); 961 + else 962 + ret = scnprintf(buf, PAGE_SIZE - 1, "%*pb", 963 + cpumask_pr_args(mask)); 964 + buf[ret++] = '\n'; 965 + buf[ret] = '\0'; 966 + return ret; 969 967 } 970 968 971 969 static inline ssize_t show_shared_cpu_map(struct _cpuid4_info *leaf, char *buf,
+4 -1
arch/x86/kernel/dumpstack.c
··· 265 265 printk("SMP "); 266 266 #endif 267 267 #ifdef CONFIG_DEBUG_PAGEALLOC 268 - printk("DEBUG_PAGEALLOC"); 268 + printk("DEBUG_PAGEALLOC "); 269 + #endif 270 + #ifdef CONFIG_KASAN 271 + printk("KASAN"); 269 272 #endif 270 273 printk("\n"); 271 274 if (notify_die(DIE_OOPS, str, regs, err,
+7 -2
arch/x86/kernel/head64.c
··· 27 27 #include <asm/bios_ebda.h> 28 28 #include <asm/bootparam_utils.h> 29 29 #include <asm/microcode.h> 30 + #include <asm/kasan.h> 30 31 31 32 /* 32 33 * Manage page tables very early on. ··· 47 46 48 47 next_early_pgt = 0; 49 48 50 - write_cr3(__pa(early_level4_pgt)); 49 + write_cr3(__pa_nodebug(early_level4_pgt)); 51 50 } 52 51 53 52 /* Create a new PMD entry */ ··· 60 59 pmdval_t pmd, *pmd_p; 61 60 62 61 /* Invalid address or early pgt is done ? */ 63 - if (physaddr >= MAXMEM || read_cr3() != __pa(early_level4_pgt)) 62 + if (physaddr >= MAXMEM || read_cr3() != __pa_nodebug(early_level4_pgt)) 64 63 return -1; 65 64 66 65 again: ··· 159 158 /* Kill off the identity-map trampoline */ 160 159 reset_early_page_tables(); 161 160 161 + kasan_map_early_shadow(early_level4_pgt); 162 + 162 163 /* clear bss before set_intr_gate with early_idt_handler */ 163 164 clear_bss(); 164 165 ··· 181 178 clear_page(init_level4_pgt); 182 179 /* set init_level4_pgt kernel high mapping*/ 183 180 init_level4_pgt[511] = early_level4_pgt[511]; 181 + 182 + kasan_map_early_shadow(init_level4_pgt); 184 183 185 184 x86_64_start_reservations(real_mode_data); 186 185 }
+30
arch/x86/kernel/head_64.S
··· 514 514 /* This must match the first entry in level2_kernel_pgt */ 515 515 .quad 0x0000000000000000 516 516 517 + #ifdef CONFIG_KASAN 518 + #define FILL(VAL, COUNT) \ 519 + .rept (COUNT) ; \ 520 + .quad (VAL) ; \ 521 + .endr 522 + 523 + NEXT_PAGE(kasan_zero_pte) 524 + FILL(kasan_zero_page - __START_KERNEL_map + _KERNPG_TABLE, 512) 525 + NEXT_PAGE(kasan_zero_pmd) 526 + FILL(kasan_zero_pte - __START_KERNEL_map + _KERNPG_TABLE, 512) 527 + NEXT_PAGE(kasan_zero_pud) 528 + FILL(kasan_zero_pmd - __START_KERNEL_map + _KERNPG_TABLE, 512) 529 + 530 + #undef FILL 531 + #endif 532 + 533 + 517 534 #include "../../x86/xen/xen-head.S" 518 535 519 536 __PAGE_ALIGNED_BSS 520 537 NEXT_PAGE(empty_zero_page) 521 538 .skip PAGE_SIZE 539 + 540 + #ifdef CONFIG_KASAN 541 + /* 542 + * This page is used as the early shadow. We don't use empty_zero_page 543 + * at early stages: stack instrumentation could write some garbage 544 + * to this page. 545 + * Later we reuse it as the zero shadow for large ranges of memory 546 + * that are allowed to be accessed but are not instrumented by kasan 547 + * (vmalloc/vmemmap ...). 548 + */ 549 + NEXT_PAGE(kasan_zero_page) 550 + .skip PAGE_SIZE 551 + #endif
+12 -2
arch/x86/kernel/module.c
··· 24 24 #include <linux/fs.h> 25 25 #include <linux/string.h> 26 26 #include <linux/kernel.h> 27 + #include <linux/kasan.h> 27 28 #include <linux/bug.h> 28 29 #include <linux/mm.h> 29 30 #include <linux/gfp.h> ··· 84 83 85 84 void *module_alloc(unsigned long size) 86 85 { 86 + void *p; 87 + 87 88 if (PAGE_ALIGN(size) > MODULES_LEN) 88 89 return NULL; 89 - return __vmalloc_node_range(size, 1, 90 + 91 + p = __vmalloc_node_range(size, MODULE_ALIGN, 90 92 MODULES_VADDR + get_module_load_offset(), 91 93 MODULES_END, GFP_KERNEL | __GFP_HIGHMEM, 92 - PAGE_KERNEL_EXEC, NUMA_NO_NODE, 94 + PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE, 93 95 __builtin_return_address(0)); 96 + if (p && (kasan_module_alloc(p, size) < 0)) { 97 + vfree(p); 98 + return NULL; 99 + } 100 + 101 + return p; 94 102 } 95 103 96 104 #ifdef CONFIG_X86_32
+3
arch/x86/kernel/setup.c
··· 89 89 #include <asm/cacheflush.h> 90 90 #include <asm/processor.h> 91 91 #include <asm/bugs.h> 92 + #include <asm/kasan.h> 92 93 93 94 #include <asm/vsyscall.h> 94 95 #include <asm/cpu.h> ··· 1174 1173 #endif 1175 1174 1176 1175 x86_init.paging.pagetable_init(); 1176 + 1177 + kasan_init(); 1177 1178 1178 1179 if (boot_cpu_data.cpuid_level >= 0) { 1179 1180 /* A CPU has %cr4 if and only if it has CPUID */
+8 -2
arch/x86/kernel/x8664_ksyms_64.c
··· 50 50 #undef memset 51 51 #undef memmove 52 52 53 + extern void *__memset(void *, int, __kernel_size_t); 54 + extern void *__memcpy(void *, const void *, __kernel_size_t); 55 + extern void *__memmove(void *, const void *, __kernel_size_t); 53 56 extern void *memset(void *, int, __kernel_size_t); 54 57 extern void *memcpy(void *, const void *, __kernel_size_t); 55 - extern void *__memcpy(void *, const void *, __kernel_size_t); 58 + extern void *memmove(void *, const void *, __kernel_size_t); 59 + 60 + EXPORT_SYMBOL(__memset); 61 + EXPORT_SYMBOL(__memcpy); 62 + EXPORT_SYMBOL(__memmove); 56 63 57 64 EXPORT_SYMBOL(memset); 58 65 EXPORT_SYMBOL(memcpy); 59 - EXPORT_SYMBOL(__memcpy); 60 66 EXPORT_SYMBOL(memmove); 61 67 62 68 #ifndef CONFIG_DEBUG_VIRTUAL
+4 -2
arch/x86/lib/memcpy_64.S
··· 53 53 .Lmemcpy_e_e: 54 54 .previous 55 55 56 + .weak memcpy 57 + 56 58 ENTRY(__memcpy) 57 59 ENTRY(memcpy) 58 60 CFI_STARTPROC ··· 201 199 * only outcome... 202 200 */ 203 201 .section .altinstructions, "a" 204 - altinstruction_entry memcpy,.Lmemcpy_c,X86_FEATURE_REP_GOOD,\ 202 + altinstruction_entry __memcpy,.Lmemcpy_c,X86_FEATURE_REP_GOOD,\ 205 203 .Lmemcpy_e-.Lmemcpy_c,.Lmemcpy_e-.Lmemcpy_c 206 - altinstruction_entry memcpy,.Lmemcpy_c_e,X86_FEATURE_ERMS, \ 204 + altinstruction_entry __memcpy,.Lmemcpy_c_e,X86_FEATURE_ERMS, \ 207 205 .Lmemcpy_e_e-.Lmemcpy_c_e,.Lmemcpy_e_e-.Lmemcpy_c_e 208 206 .previous
+4
arch/x86/lib/memmove_64.S
··· 24 24 * Output: 25 25 * rax: dest 26 26 */ 27 + .weak memmove 28 + 27 29 ENTRY(memmove) 30 + ENTRY(__memmove) 28 31 CFI_STARTPROC 29 32 30 33 /* Handle more 32 bytes in loop */ ··· 223 220 .Lmemmove_end_forward-.Lmemmove_begin_forward, \ 224 221 .Lmemmove_end_forward_efs-.Lmemmove_begin_forward_efs 225 222 .previous 223 + ENDPROC(__memmove) 226 224 ENDPROC(memmove)
+6 -4
arch/x86/lib/memset_64.S
··· 56 56 .Lmemset_e_e: 57 57 .previous 58 58 59 + .weak memset 60 + 59 61 ENTRY(memset) 60 62 ENTRY(__memset) 61 63 CFI_STARTPROC ··· 149 147 * feature to implement the right patch order. 150 148 */ 151 149 .section .altinstructions,"a" 152 - altinstruction_entry memset,.Lmemset_c,X86_FEATURE_REP_GOOD,\ 153 - .Lfinal-memset,.Lmemset_e-.Lmemset_c 154 - altinstruction_entry memset,.Lmemset_c_e,X86_FEATURE_ERMS, \ 155 - .Lfinal-memset,.Lmemset_e_e-.Lmemset_c_e 150 + altinstruction_entry __memset,.Lmemset_c,X86_FEATURE_REP_GOOD,\ 151 + .Lfinal-__memset,.Lmemset_e-.Lmemset_c 152 + altinstruction_entry __memset,.Lmemset_c_e,X86_FEATURE_ERMS, \ 153 + .Lfinal-__memset,.Lmemset_e_e-.Lmemset_c_e 156 154 .previous
+3
arch/x86/mm/Makefile
··· 20 20 21 21 obj-$(CONFIG_KMEMCHECK) += kmemcheck/ 22 22 23 + KASAN_SANITIZE_kasan_init_$(BITS).o := n 24 + obj-$(CONFIG_KASAN) += kasan_init_$(BITS).o 25 + 23 26 obj-$(CONFIG_MMIOTRACE) += mmiotrace.o 24 27 mmiotrace-y := kmmio.o pf_in.o mmio-mod.o 25 28 obj-$(CONFIG_MMIOTRACE_TEST) += testmmiotrace.o
+206
arch/x86/mm/kasan_init_64.c
··· 1 + #include <linux/bootmem.h> 2 + #include <linux/kasan.h> 3 + #include <linux/kdebug.h> 4 + #include <linux/mm.h> 5 + #include <linux/sched.h> 6 + #include <linux/vmalloc.h> 7 + 8 + #include <asm/tlbflush.h> 9 + #include <asm/sections.h> 10 + 11 + extern pgd_t early_level4_pgt[PTRS_PER_PGD]; 12 + extern struct range pfn_mapped[E820_X_MAX]; 13 + 14 + extern unsigned char kasan_zero_page[PAGE_SIZE]; 15 + 16 + static int __init map_range(struct range *range) 17 + { 18 + unsigned long start; 19 + unsigned long end; 20 + 21 + start = (unsigned long)kasan_mem_to_shadow(pfn_to_kaddr(range->start)); 22 + end = (unsigned long)kasan_mem_to_shadow(pfn_to_kaddr(range->end)); 23 + 24 + /* 25 + * end + 1 here is intentional. We check several shadow bytes in advance 26 + * to slightly speed up fastpath. In some rare cases we could cross 27 + * boundary of mapped shadow, so we just map some more here. 28 + */ 29 + return vmemmap_populate(start, end + 1, NUMA_NO_NODE); 30 + } 31 + 32 + static void __init clear_pgds(unsigned long start, 33 + unsigned long end) 34 + { 35 + for (; start < end; start += PGDIR_SIZE) 36 + pgd_clear(pgd_offset_k(start)); 37 + } 38 + 39 + void __init kasan_map_early_shadow(pgd_t *pgd) 40 + { 41 + int i; 42 + unsigned long start = KASAN_SHADOW_START; 43 + unsigned long end = KASAN_SHADOW_END; 44 + 45 + for (i = pgd_index(start); start < end; i++) { 46 + pgd[i] = __pgd(__pa_nodebug(kasan_zero_pud) 47 + | _KERNPG_TABLE); 48 + start += PGDIR_SIZE; 49 + } 50 + } 51 + 52 + static int __init zero_pte_populate(pmd_t *pmd, unsigned long addr, 53 + unsigned long end) 54 + { 55 + pte_t *pte = pte_offset_kernel(pmd, addr); 56 + 57 + while (addr + PAGE_SIZE <= end) { 58 + WARN_ON(!pte_none(*pte)); 59 + set_pte(pte, __pte(__pa_nodebug(kasan_zero_page) 60 + | __PAGE_KERNEL_RO)); 61 + addr += PAGE_SIZE; 62 + pte = pte_offset_kernel(pmd, addr); 63 + } 64 + return 0; 65 + } 66 + 67 + static int __init zero_pmd_populate(pud_t *pud, unsigned long addr, 68 + unsigned 
long end) 69 + { 70 + int ret = 0; 71 + pmd_t *pmd = pmd_offset(pud, addr); 72 + 73 + while (IS_ALIGNED(addr, PMD_SIZE) && addr + PMD_SIZE <= end) { 74 + WARN_ON(!pmd_none(*pmd)); 75 + set_pmd(pmd, __pmd(__pa_nodebug(kasan_zero_pte) 76 + | __PAGE_KERNEL_RO)); 77 + addr += PMD_SIZE; 78 + pmd = pmd_offset(pud, addr); 79 + } 80 + if (addr < end) { 81 + if (pmd_none(*pmd)) { 82 + void *p = vmemmap_alloc_block(PAGE_SIZE, NUMA_NO_NODE); 83 + if (!p) 84 + return -ENOMEM; 85 + set_pmd(pmd, __pmd(__pa_nodebug(p) | _KERNPG_TABLE)); 86 + } 87 + ret = zero_pte_populate(pmd, addr, end); 88 + } 89 + return ret; 90 + } 91 + 92 + 93 + static int __init zero_pud_populate(pgd_t *pgd, unsigned long addr, 94 + unsigned long end) 95 + { 96 + int ret = 0; 97 + pud_t *pud = pud_offset(pgd, addr); 98 + 99 + while (IS_ALIGNED(addr, PUD_SIZE) && addr + PUD_SIZE <= end) { 100 + WARN_ON(!pud_none(*pud)); 101 + set_pud(pud, __pud(__pa_nodebug(kasan_zero_pmd) 102 + | __PAGE_KERNEL_RO)); 103 + addr += PUD_SIZE; 104 + pud = pud_offset(pgd, addr); 105 + } 106 + 107 + if (addr < end) { 108 + if (pud_none(*pud)) { 109 + void *p = vmemmap_alloc_block(PAGE_SIZE, NUMA_NO_NODE); 110 + if (!p) 111 + return -ENOMEM; 112 + set_pud(pud, __pud(__pa_nodebug(p) | _KERNPG_TABLE)); 113 + } 114 + ret = zero_pmd_populate(pud, addr, end); 115 + } 116 + return ret; 117 + } 118 + 119 + static int __init zero_pgd_populate(unsigned long addr, unsigned long end) 120 + { 121 + int ret = 0; 122 + pgd_t *pgd = pgd_offset_k(addr); 123 + 124 + while (IS_ALIGNED(addr, PGDIR_SIZE) && addr + PGDIR_SIZE <= end) { 125 + WARN_ON(!pgd_none(*pgd)); 126 + set_pgd(pgd, __pgd(__pa_nodebug(kasan_zero_pud) 127 + | __PAGE_KERNEL_RO)); 128 + addr += PGDIR_SIZE; 129 + pgd = pgd_offset_k(addr); 130 + } 131 + 132 + if (addr < end) { 133 + if (pgd_none(*pgd)) { 134 + void *p = vmemmap_alloc_block(PAGE_SIZE, NUMA_NO_NODE); 135 + if (!p) 136 + return -ENOMEM; 137 + set_pgd(pgd, __pgd(__pa_nodebug(p) | _KERNPG_TABLE)); 138 + } 139 + ret = 
zero_pud_populate(pgd, addr, end); 140 + } 141 + return ret; 142 + } 143 + 144 + 145 + static void __init populate_zero_shadow(const void *start, const void *end) 146 + { 147 + if (zero_pgd_populate((unsigned long)start, (unsigned long)end)) 148 + panic("kasan: unable to map zero shadow!"); 149 + } 150 + 151 + 152 + #ifdef CONFIG_KASAN_INLINE 153 + static int kasan_die_handler(struct notifier_block *self, 154 + unsigned long val, 155 + void *data) 156 + { 157 + if (val == DIE_GPF) { 158 + pr_emerg("CONFIG_KASAN_INLINE enabled"); 159 + pr_emerg("GPF could be caused by NULL-ptr deref or user memory access"); 160 + } 161 + return NOTIFY_OK; 162 + } 163 + 164 + static struct notifier_block kasan_die_notifier = { 165 + .notifier_call = kasan_die_handler, 166 + }; 167 + #endif 168 + 169 + void __init kasan_init(void) 170 + { 171 + int i; 172 + 173 + #ifdef CONFIG_KASAN_INLINE 174 + register_die_notifier(&kasan_die_notifier); 175 + #endif 176 + 177 + memcpy(early_level4_pgt, init_level4_pgt, sizeof(early_level4_pgt)); 178 + load_cr3(early_level4_pgt); 179 + 180 + clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END); 181 + 182 + populate_zero_shadow((void *)KASAN_SHADOW_START, 183 + kasan_mem_to_shadow((void *)PAGE_OFFSET)); 184 + 185 + for (i = 0; i < E820_X_MAX; i++) { 186 + if (pfn_mapped[i].end == 0) 187 + break; 188 + 189 + if (map_range(&pfn_mapped[i])) 190 + panic("kasan: unable to allocate shadow!"); 191 + } 192 + populate_zero_shadow(kasan_mem_to_shadow((void *)PAGE_OFFSET + MAXMEM), 193 + kasan_mem_to_shadow((void *)__START_KERNEL_map)); 194 + 195 + vmemmap_populate((unsigned long)kasan_mem_to_shadow(_stext), 196 + (unsigned long)kasan_mem_to_shadow(_end), 197 + NUMA_NO_NODE); 198 + 199 + populate_zero_shadow(kasan_mem_to_shadow((void *)MODULES_END), 200 + (void *)KASAN_SHADOW_END); 201 + 202 + memset(kasan_zero_page, 0, PAGE_SIZE); 203 + 204 + load_cr3(init_level4_pgt); 205 + init_task.kasan_depth = 0; 206 + }
+2 -4
arch/x86/mm/numa.c
··· 794 794 void debug_cpumask_set_cpu(int cpu, int node, bool enable) 795 795 { 796 796 struct cpumask *mask; 797 - char buf[64]; 798 797 799 798 if (node == NUMA_NO_NODE) { 800 799 /* early_cpu_to_node() already emits a warning and trace */ ··· 811 812 else 812 813 cpumask_clear_cpu(cpu, mask); 813 814 814 - cpulist_scnprintf(buf, sizeof(buf), mask); 815 - printk(KERN_DEBUG "%s cpu %d node %d: mask now %s\n", 815 + printk(KERN_DEBUG "%s cpu %d node %d: mask now %*pbl\n", 816 816 enable ? "numa_add_cpu" : "numa_remove_cpu", 817 - cpu, node, buf); 817 + cpu, node, cpumask_pr_args(mask)); 818 818 return; 819 819 } 820 820
+7 -18
arch/x86/platform/uv/uv_nmi.c
··· 273 273 } 274 274 } 275 275 276 - /* Print non-responding cpus */ 277 - static void uv_nmi_nr_cpus_pr(char *fmt) 278 - { 279 - static char cpu_list[1024]; 280 - int len = sizeof(cpu_list); 281 - int c = cpumask_weight(uv_nmi_cpu_mask); 282 - int n = cpulist_scnprintf(cpu_list, len, uv_nmi_cpu_mask); 283 - 284 - if (n >= len-1) 285 - strcpy(&cpu_list[len - 6], "...\n"); 286 - 287 - printk(fmt, c, cpu_list); 288 - } 289 - 290 276 /* Ping non-responding cpus attemping to force them into the NMI handler */ 291 277 static void uv_nmi_nr_cpus_ping(void) 292 278 { ··· 357 371 break; 358 372 359 373 /* if not all made it in, send IPI NMI to them */ 360 - uv_nmi_nr_cpus_pr(KERN_ALERT 361 - "UV: Sending NMI IPI to %d non-responding CPUs: %s\n"); 374 + pr_alert("UV: Sending NMI IPI to %d non-responding CPUs: %*pbl\n", 375 + cpumask_weight(uv_nmi_cpu_mask), 376 + cpumask_pr_args(uv_nmi_cpu_mask)); 377 + 362 378 uv_nmi_nr_cpus_ping(); 363 379 364 380 /* if all cpus are in, then done */ 365 381 if (!uv_nmi_wait_cpus(0)) 366 382 break; 367 383 368 - uv_nmi_nr_cpus_pr(KERN_ALERT 369 - "UV: %d CPUs not in NMI loop: %s\n"); 384 + pr_alert("UV: %d CPUs not in NMI loop: %*pbl\n", 385 + cpumask_weight(uv_nmi_cpu_mask), 386 + cpumask_pr_args(uv_nmi_cpu_mask)); 370 387 } while (0); 371 388 372 389 pr_alert("UV: %d of %d CPUs in NMI\n",
+1 -1
arch/x86/realmode/Makefile
··· 6 6 # for more details. 7 7 # 8 8 # 9 - 9 + KASAN_SANITIZE := n 10 10 subdir- := rm 11 11 12 12 obj-y += init.o
+1
arch/x86/realmode/rm/Makefile
··· 6 6 # for more details. 7 7 # 8 8 # 9 + KASAN_SANITIZE := n 9 10 10 11 always := realmode.bin realmode.relocs 11 12
+1
arch/x86/vdso/Makefile
··· 3 3 # 4 4 5 5 KBUILD_CFLAGS += $(DISABLE_LTO) 6 + KASAN_SANITIZE := n 6 7 7 8 VDSO64-$(CONFIG_X86_64) := y 8 9 VDSOX32-$(CONFIG_X86_X32_ABI) := y
+2 -5
arch/xtensa/kernel/setup.c
··· 574 574 static int 575 575 c_show(struct seq_file *f, void *slot) 576 576 { 577 - char buf[NR_CPUS * 5]; 578 - 579 - cpulist_scnprintf(buf, sizeof(buf), cpu_online_mask); 580 577 /* high-level stuff */ 581 578 seq_printf(f, "CPU count\t: %u\n" 582 - "CPU list\t: %s\n" 579 + "CPU list\t: %*pbl\n" 583 580 "vendor_id\t: Tensilica\n" 584 581 "model\t\t: Xtensa " XCHAL_HW_VERSION_NAME "\n" 585 582 "core ID\t\t: " XCHAL_CORE_ID "\n" ··· 585 588 "cpu MHz\t\t: %lu.%02lu\n" 586 589 "bogomips\t: %lu.%02lu\n", 587 590 num_online_cpus(), 588 - buf, 591 + cpumask_pr_args(cpu_online_mask), 589 592 XCHAL_BUILD_UNIQUE_ID, 590 593 XCHAL_HAVE_BE ? "big" : "little", 591 594 ccount_freq/1000000,
+1 -1
drivers/base/cpu.c
··· 245 245 if (!alloc_cpumask_var(&offline, GFP_KERNEL)) 246 246 return -ENOMEM; 247 247 cpumask_andnot(offline, cpu_possible_mask, cpu_online_mask); 248 - n = cpulist_scnprintf(buf, len, offline); 248 + n = scnprintf(buf, len, "%*pbl", cpumask_pr_args(offline)); 249 249 free_cpumask_var(offline); 250 250 251 251 /* display offline cpus >= nr_cpu_ids */
+2 -1
drivers/base/node.c
··· 605 605 { 606 606 int n; 607 607 608 - n = nodelist_scnprintf(buf, PAGE_SIZE-2, node_states[state]); 608 + n = scnprintf(buf, PAGE_SIZE - 1, "%*pbl", 609 + nodemask_pr_args(&node_states[state])); 609 610 buf[n++] = '\n'; 610 611 buf[n] = '\0'; 611 612 return n;
+2 -2
drivers/bus/arm-cci.c
··· 806 806 static ssize_t pmu_attr_cpumask_show(struct device *dev, 807 807 struct device_attribute *attr, char *buf) 808 808 { 809 - int n = cpulist_scnprintf(buf, PAGE_SIZE - 2, &pmu->cpus); 810 - 809 + int n = scnprintf(buf, PAGE_SIZE - 1, "%*pbl", 810 + cpumask_pr_args(&pmu->cpus)); 811 811 buf[n++] = '\n'; 812 812 buf[n] = '\0'; 813 813 return n;
+6 -6
drivers/clk/clk.c
··· 2048 2048 goto fail_out; 2049 2049 } 2050 2050 2051 - clk->name = kstrdup(hw->init->name, GFP_KERNEL); 2051 + clk->name = kstrdup_const(hw->init->name, GFP_KERNEL); 2052 2052 if (!clk->name) { 2053 2053 pr_err("%s: could not allocate clk->name\n", __func__); 2054 2054 ret = -ENOMEM; ··· 2075 2075 2076 2076 /* copy each string name in case parent_names is __initdata */ 2077 2077 for (i = 0; i < clk->num_parents; i++) { 2078 - clk->parent_names[i] = kstrdup(hw->init->parent_names[i], 2078 + clk->parent_names[i] = kstrdup_const(hw->init->parent_names[i], 2079 2079 GFP_KERNEL); 2080 2080 if (!clk->parent_names[i]) { 2081 2081 pr_err("%s: could not copy parent_names\n", __func__); ··· 2090 2090 2091 2091 fail_parent_names_copy: 2092 2092 while (--i >= 0) 2093 - kfree(clk->parent_names[i]); 2093 + kfree_const(clk->parent_names[i]); 2094 2094 kfree(clk->parent_names); 2095 2095 fail_parent_names: 2096 - kfree(clk->name); 2096 + kfree_const(clk->name); 2097 2097 fail_name: 2098 2098 kfree(clk); 2099 2099 fail_out: ··· 2112 2112 2113 2113 kfree(clk->parents); 2114 2114 while (--i >= 0) 2115 - kfree(clk->parent_names[i]); 2115 + kfree_const(clk->parent_names[i]); 2116 2116 2117 2117 kfree(clk->parent_names); 2118 - kfree(clk->name); 2118 + kfree_const(clk->name); 2119 2119 kfree(clk); 2120 2120 } 2121 2121
+1
drivers/firmware/efi/libstub/Makefile
··· 19 19 $(call cc-option,-fno-stack-protector) 20 20 21 21 GCOV_PROFILE := n 22 + KASAN_SANITIZE := n 22 23 23 24 lib-y := efi-stub-helper.o 24 25 lib-$(CONFIG_EFI_ARMSTUB) += arm-stub.o fdt.o
+4
drivers/firmware/efi/libstub/efistub.h
··· 5 5 /* error code which can't be mistaken for valid address */ 6 6 #define EFI_ERROR (~0UL) 7 7 8 + #undef memcpy 9 + #undef memset 10 + #undef memmove 11 + 8 12 void efi_char16_printk(efi_system_table_t *, efi_char16_t *); 9 13 10 14 efi_status_t efi_open_volume(efi_system_table_t *sys_table_arg, void *__image,
+2 -2
drivers/input/keyboard/atkbd.c
··· 1399 1399 1400 1400 static ssize_t atkbd_show_force_release(struct atkbd *atkbd, char *buf) 1401 1401 { 1402 - size_t len = bitmap_scnlistprintf(buf, PAGE_SIZE - 2, 1403 - atkbd->force_release_mask, ATKBD_KEYMAP_SIZE); 1402 + size_t len = scnprintf(buf, PAGE_SIZE - 1, "%*pbl", 1403 + ATKBD_KEYMAP_SIZE, atkbd->force_release_mask); 1404 1404 1405 1405 buf[len++] = '\n'; 1406 1406 buf[len] = '\0';
+1 -1
drivers/input/keyboard/gpio_keys.c
··· 190 190 __set_bit(bdata->button->code, bits); 191 191 } 192 192 193 - ret = bitmap_scnlistprintf(buf, PAGE_SIZE - 2, bits, n_events); 193 + ret = scnprintf(buf, PAGE_SIZE - 1, "%*pbl", n_events, bits); 194 194 buf[ret++] = '\n'; 195 195 buf[ret] = '\0'; 196 196
-1
drivers/net/ethernet/emulex/benet/be_main.c
··· 26 26 #include <net/vxlan.h> 27 27 28 28 MODULE_VERSION(DRV_VER); 29 - MODULE_DEVICE_TABLE(pci, be_dev_ids); 30 29 MODULE_DESCRIPTION(DRV_DESC " " DRV_VER); 31 30 MODULE_AUTHOR("Emulex Corporation"); 32 31 MODULE_LICENSE("GPL");
+2 -3
drivers/net/ethernet/tile/tilegx.c
··· 292 292 */ 293 293 static bool network_cpus_init(void) 294 294 { 295 - char buf[1024]; 296 295 int rc; 297 296 298 297 if (network_cpus_string == NULL) ··· 313 314 return false; 314 315 } 315 316 316 - cpulist_scnprintf(buf, sizeof(buf), &network_cpus_map); 317 - pr_info("Linux network CPUs: %s\n", buf); 317 + pr_info("Linux network CPUs: %*pbl\n", 318 + cpumask_pr_args(&network_cpus_map)); 318 319 return true; 319 320 } 320 321
+2 -3
drivers/net/ethernet/tile/tilepro.c
··· 2410 2410 if (cpumask_empty(&network_cpus_map)) { 2411 2411 pr_warn("Ignoring network_cpus='%s'\n", str); 2412 2412 } else { 2413 - char buf[1024]; 2414 - cpulist_scnprintf(buf, sizeof(buf), &network_cpus_map); 2415 - pr_info("Linux network CPUs: %s\n", buf); 2413 + pr_info("Linux network CPUs: %*pbl\n", 2414 + cpumask_pr_args(&network_cpus_map)); 2416 2415 network_cpus_used = true; 2417 2416 } 2418 2417 }
+6 -17
drivers/net/wireless/ath/ath9k/htc_drv_debug.c
··· 291 291 { 292 292 struct ath9k_htc_priv *priv = file->private_data; 293 293 char buf[512]; 294 - unsigned int len = 0; 294 + unsigned int len; 295 295 296 296 spin_lock_bh(&priv->tx.tx_lock); 297 - 298 - len += scnprintf(buf + len, sizeof(buf) - len, "TX slot bitmap : "); 299 - 300 - len += bitmap_scnprintf(buf + len, sizeof(buf) - len, 301 - priv->tx.tx_slot, MAX_TX_BUF_NUM); 302 - 303 - len += scnprintf(buf + len, sizeof(buf) - len, "\n"); 304 - 305 - len += scnprintf(buf + len, sizeof(buf) - len, 306 - "Used slots : %d\n", 307 - bitmap_weight(priv->tx.tx_slot, MAX_TX_BUF_NUM)); 308 - 297 + len = scnprintf(buf, sizeof(buf), 298 + "TX slot bitmap : %*pb\n" 299 + "Used slots : %d\n", 300 + MAX_TX_BUF_NUM, priv->tx.tx_slot, 301 + bitmap_weight(priv->tx.tx_slot, MAX_TX_BUF_NUM)); 309 302 spin_unlock_bh(&priv->tx.tx_lock); 310 - 311 - if (len > sizeof(buf)) 312 - len = sizeof(buf); 313 - 314 303 return simple_read_from_buffer(user_buf, count, ppos, buf, len); 315 304 } 316 305
+6 -18
drivers/net/wireless/ath/carl9170/debug.c
··· 214 214 static char *carl9170_debugfs_mem_usage_read(struct ar9170 *ar, char *buf, 215 215 size_t bufsize, ssize_t *len) 216 216 { 217 - ADD(buf, *len, bufsize, "jar: ["); 218 - 219 217 spin_lock_bh(&ar->mem_lock); 220 218 221 - *len += bitmap_scnprintf(&buf[*len], bufsize - *len, 222 - ar->mem_bitmap, ar->fw.mem_blocks); 223 - 224 - ADD(buf, *len, bufsize, "]\n"); 219 + ADD(buf, *len, bufsize, "jar: [%*pb]\n", 220 + ar->fw.mem_blocks, ar->mem_bitmap); 225 221 226 222 ADD(buf, *len, bufsize, "cookies: used:%3d / total:%3d, allocs:%d\n", 227 223 bitmap_weight(ar->mem_bitmap, ar->fw.mem_blocks), ··· 312 316 cnt, iter->tid, iter->bsn, iter->snx, iter->hsn, 313 317 iter->max, iter->state, iter->counter); 314 318 315 - ADD(buf, *len, bufsize, "\tWindow: ["); 316 - 317 - *len += bitmap_scnprintf(&buf[*len], bufsize - *len, 318 - iter->bitmap, CARL9170_BAW_BITS); 319 + ADD(buf, *len, bufsize, "\tWindow: [%*pb,W]\n", 320 + CARL9170_BAW_BITS, iter->bitmap); 319 321 320 322 #define BM_STR_OFF(offset) \ 321 323 ((CARL9170_BAW_BITS - (offset) - 1) / 4 + \ 322 324 (CARL9170_BAW_BITS - (offset) - 1) / 32 + 1) 323 - 324 - ADD(buf, *len, bufsize, ",W]\n"); 325 325 326 326 offset = BM_STR_OFF(0); 327 327 ADD(buf, *len, bufsize, "\tBase Seq: %*s\n", offset, "T"); ··· 440 448 ADD(buf, *len, bufsize, "registered VIFs:%d \\ %d\n", 441 449 ar->vifs, ar->fw.vif_num); 442 450 443 - ADD(buf, *len, bufsize, "VIF bitmap: ["); 444 - 445 - *len += bitmap_scnprintf(&buf[*len], bufsize - *len, 446 - &ar->vif_bitmap, ar->fw.vif_num); 447 - 448 - ADD(buf, *len, bufsize, "]\n"); 451 + ADD(buf, *len, bufsize, "VIF bitmap: [%*pb]\n", 452 + ar->fw.vif_num, &ar->vif_bitmap); 449 453 450 454 rcu_read_lock(); 451 455 list_for_each_entry_rcu(iter, &ar->vif_list, list) {
+21
drivers/rtc/Kconfig
··· 153 153 This driver can also be built as a module. If so, the module 154 154 will be called rtc-88pm80x. 155 155 156 + config RTC_DRV_ABB5ZES3 157 + depends on I2C 158 + select REGMAP_I2C 159 + tristate "Abracon AB-RTCMC-32.768kHz-B5ZE-S3" 160 + help 161 + If you say yes here you get support for the Abracon 162 + AB-RTCMC-32.768kHz-B5ZE-S3 I2C RTC chip. 163 + 164 + This driver can also be built as a module. If so, the module 165 + will be called rtc-ab-b5ze-s3. 166 + 156 167 config RTC_DRV_AS3722 157 168 tristate "ams AS3722 RTC driver" 158 169 depends on MFD_AS3722 ··· 1279 1268 1280 1269 This driver can also be built as a module. If so, the module 1281 1270 will be called rtc-mv. 1271 + 1272 + config RTC_DRV_ARMADA38X 1273 + tristate "Armada 38x Marvell SoC RTC" 1274 + depends on ARCH_MVEBU 1275 + help 1276 + If you say yes here you will get support for the in-chip RTC 1277 + that can be found in the Armada 38x Marvell's SoC device 1278 + 1279 + This driver can also be built as a module. If so, the module 1280 + will be called armada38x-rtc. 1282 1281 1283 1282 config RTC_DRV_PS3 1284 1283 tristate "PS3 RTC"
+2
drivers/rtc/Makefile
··· 24 24 obj-$(CONFIG_RTC_DRV_88PM80X) += rtc-88pm80x.o 25 25 obj-$(CONFIG_RTC_DRV_AB3100) += rtc-ab3100.o 26 26 obj-$(CONFIG_RTC_DRV_AB8500) += rtc-ab8500.o 27 + obj-$(CONFIG_RTC_DRV_ABB5ZES3) += rtc-ab-b5ze-s3.o 28 + obj-$(CONFIG_RTC_DRV_ARMADA38X) += rtc-armada38x.o 27 29 obj-$(CONFIG_RTC_DRV_AS3722) += rtc-as3722.o 28 30 obj-$(CONFIG_RTC_DRV_AT32AP700X)+= rtc-at32ap700x.o 29 31 obj-$(CONFIG_RTC_DRV_AT91RM9200)+= rtc-at91rm9200.o
+1035
drivers/rtc/rtc-ab-b5ze-s3.c
··· 1 + /* 2 + * rtc-ab-b5ze-s3 - Driver for Abracon AB-RTCMC-32.768Khz-B5ZE-S3 3 + * I2C RTC / Alarm chip 4 + * 5 + * Copyright (C) 2014, Arnaud EBALARD <arno@natisbad.org> 6 + * 7 + * Detailed datasheet of the chip is available here: 8 + * 9 + * http://www.abracon.com/realtimeclock/AB-RTCMC-32.768kHz-B5ZE-S3-Application-Manual.pdf 10 + * 11 + * This work is based on ISL12057 driver (drivers/rtc/rtc-isl12057.c). 12 + * 13 + * This program is free software; you can redistribute it and/or modify 14 + * it under the terms of the GNU General Public License as published by 15 + * the Free Software Foundation; either version 2 of the License, or 16 + * (at your option) any later version. 17 + * 18 + * This program is distributed in the hope that it will be useful, 19 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 20 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 21 + * GNU General Public License for more details. 22 + */ 23 + 24 + #include <linux/module.h> 25 + #include <linux/mutex.h> 26 + #include <linux/rtc.h> 27 + #include <linux/i2c.h> 28 + #include <linux/bcd.h> 29 + #include <linux/of.h> 30 + #include <linux/regmap.h> 31 + #include <linux/interrupt.h> 32 + 33 + #define DRV_NAME "rtc-ab-b5ze-s3" 34 + 35 + /* Control section */ 36 + #define ABB5ZES3_REG_CTRL1 0x00 /* Control 1 register */ 37 + #define ABB5ZES3_REG_CTRL1_CIE BIT(0) /* Pulse interrupt enable */ 38 + #define ABB5ZES3_REG_CTRL1_AIE BIT(1) /* Alarm interrupt enable */ 39 + #define ABB5ZES3_REG_CTRL1_SIE BIT(2) /* Second interrupt enable */ 40 + #define ABB5ZES3_REG_CTRL1_PM BIT(3) /* 24h/12h mode */ 41 + #define ABB5ZES3_REG_CTRL1_SR BIT(4) /* Software reset */ 42 + #define ABB5ZES3_REG_CTRL1_STOP BIT(5) /* RTC circuit enable */ 43 + #define ABB5ZES3_REG_CTRL1_CAP BIT(7) 44 + 45 + #define ABB5ZES3_REG_CTRL2 0x01 /* Control 2 register */ 46 + #define ABB5ZES3_REG_CTRL2_CTBIE BIT(0) /* Countdown timer B int. 
                                                           enable */
#define ABB5ZES3_REG_CTRL2_CTAIE  BIT(1) /* Countdown timer A int. enable */
#define ABB5ZES3_REG_CTRL2_WTAIE  BIT(2) /* Watchdog timer A int. enable */
#define ABB5ZES3_REG_CTRL2_AF     BIT(3) /* Alarm interrupt status */
#define ABB5ZES3_REG_CTRL2_SF     BIT(4) /* Second interrupt status */
#define ABB5ZES3_REG_CTRL2_CTBF   BIT(5) /* Countdown timer B int. status */
#define ABB5ZES3_REG_CTRL2_CTAF   BIT(6) /* Countdown timer A int. status */
#define ABB5ZES3_REG_CTRL2_WTAF   BIT(7) /* Watchdog timer A int. status */

#define ABB5ZES3_REG_CTRL3        0x02   /* Control 3 register */
#define ABB5ZES3_REG_CTRL3_PM2    BIT(7) /* Power Management bit 2 */
#define ABB5ZES3_REG_CTRL3_PM1    BIT(6) /* Power Management bit 1 */
#define ABB5ZES3_REG_CTRL3_PM0    BIT(5) /* Power Management bit 0 */
#define ABB5ZES3_REG_CTRL3_BSF    BIT(3) /* Battery switchover int. status */
#define ABB5ZES3_REG_CTRL3_BLF    BIT(2) /* Battery low int. status */
#define ABB5ZES3_REG_CTRL3_BSIE   BIT(1) /* Battery switchover int. enable */
#define ABB5ZES3_REG_CTRL3_BLIE   BIT(0) /* Battery low int. enable */

#define ABB5ZES3_CTRL_SEC_LEN     3

/* RTC section */
#define ABB5ZES3_REG_RTC_SC       0x03   /* RTC Seconds register */
#define ABB5ZES3_REG_RTC_SC_OSC   BIT(7) /* Clock integrity status */
#define ABB5ZES3_REG_RTC_MN       0x04   /* RTC Minutes register */
#define ABB5ZES3_REG_RTC_HR       0x05   /* RTC Hours register */
#define ABB5ZES3_REG_RTC_HR_PM    BIT(5) /* RTC Hours PM bit */
#define ABB5ZES3_REG_RTC_DT       0x06   /* RTC Date register */
#define ABB5ZES3_REG_RTC_DW       0x07   /* RTC Day of the week register */
#define ABB5ZES3_REG_RTC_MO       0x08   /* RTC Month register */
#define ABB5ZES3_REG_RTC_YR       0x09   /* RTC Year register */

#define ABB5ZES3_RTC_SEC_LEN      7

/* Alarm section (enable bits are all active low) */
#define ABB5ZES3_REG_ALRM_MN      0x0A   /* Alarm - minute register */
#define ABB5ZES3_REG_ALRM_MN_AE   BIT(7) /* Minute enable */
#define ABB5ZES3_REG_ALRM_HR      0x0B   /* Alarm - hours register */
#define ABB5ZES3_REG_ALRM_HR_AE   BIT(7) /* Hour enable */
#define ABB5ZES3_REG_ALRM_DT      0x0C   /* Alarm - date register */
#define ABB5ZES3_REG_ALRM_DT_AE   BIT(7) /* Date (day of the month) enable */
#define ABB5ZES3_REG_ALRM_DW      0x0D   /* Alarm - day of the week reg. */
#define ABB5ZES3_REG_ALRM_DW_AE   BIT(7) /* Day of the week enable */

#define ABB5ZES3_ALRM_SEC_LEN     4

/* Frequency offset section */
#define ABB5ZES3_REG_FREQ_OF      0x0E   /* Frequency offset register */
#define ABB5ZES3_REG_FREQ_OF_MODE 0x0E   /* Offset mode: 2 hours / minute */

/* CLOCKOUT section */
#define ABB5ZES3_REG_TIM_CLK      0x0F   /* Timer & Clockout register */
#define ABB5ZES3_REG_TIM_CLK_TAM  BIT(7) /* Permanent/pulsed timer A/int. 2 */
#define ABB5ZES3_REG_TIM_CLK_TBM  BIT(6) /* Permanent/pulsed timer B */
#define ABB5ZES3_REG_TIM_CLK_COF2 BIT(5) /* Clkout Freq bit 2 */
#define ABB5ZES3_REG_TIM_CLK_COF1 BIT(4) /* Clkout Freq bit 1 */
#define ABB5ZES3_REG_TIM_CLK_COF0 BIT(3) /* Clkout Freq bit 0 */
#define ABB5ZES3_REG_TIM_CLK_TAC1 BIT(2) /* Timer A: - 01 : countdown */
#define ABB5ZES3_REG_TIM_CLK_TAC0 BIT(1) /*          - 10 : timer */
#define ABB5ZES3_REG_TIM_CLK_TBC  BIT(0) /* Timer B enable */

/* Timer A Section */
#define ABB5ZES3_REG_TIMA_CLK      0x10   /* Timer A clock register */
#define ABB5ZES3_REG_TIMA_CLK_TAQ2 BIT(2) /* Freq bit 2 */
#define ABB5ZES3_REG_TIMA_CLK_TAQ1 BIT(1) /* Freq bit 1 */
#define ABB5ZES3_REG_TIMA_CLK_TAQ0 BIT(0) /* Freq bit 0 */
#define ABB5ZES3_REG_TIMA          0x11   /* Timer A register */

#define ABB5ZES3_TIMA_SEC_LEN      2

/* Timer B Section */
#define ABB5ZES3_REG_TIMB_CLK      0x12   /* Timer B clock register */
#define ABB5ZES3_REG_TIMB_CLK_TBW2 BIT(6)
#define ABB5ZES3_REG_TIMB_CLK_TBW1 BIT(5)
#define ABB5ZES3_REG_TIMB_CLK_TBW0 BIT(4)
#define ABB5ZES3_REG_TIMB_CLK_TAQ2 BIT(2)
#define ABB5ZES3_REG_TIMB_CLK_TAQ1 BIT(1)
#define ABB5ZES3_REG_TIMB_CLK_TAQ0 BIT(0)
#define ABB5ZES3_REG_TIMB          0x13   /* Timer B register */
#define ABB5ZES3_TIMB_SEC_LEN      2

#define ABB5ZES3_MEM_MAP_LEN       0x14

struct abb5zes3_rtc_data {
        struct rtc_device *rtc;
        struct regmap *regmap;
        struct mutex lock;

        int irq;

        bool battery_low;
        bool timer_alarm; /* current alarm is via timer A */
};

/*
 * Try and match register bits w/ fixed null values to see whether we
 * are dealing with an ABB5ZES3. Note: this function is called early
 * during init and hence does not need mutex protection.
 */
static int abb5zes3_i2c_validate_chip(struct regmap *regmap)
{
        u8 regs[ABB5ZES3_MEM_MAP_LEN];
        static const u8 mask[ABB5ZES3_MEM_MAP_LEN] = { 0x00, 0x00, 0x10, 0x00,
                                                       0x80, 0xc0, 0xc0, 0xf8,
                                                       0xe0, 0x00, 0x00, 0x40,
                                                       0x40, 0x78, 0x00, 0x00,
                                                       0xf8, 0x00, 0x88, 0x00 };
        int ret, i;

        ret = regmap_bulk_read(regmap, 0, regs, ABB5ZES3_MEM_MAP_LEN);
        if (ret)
                return ret;

        for (i = 0; i < ABB5ZES3_MEM_MAP_LEN; ++i) {
                if (regs[i] & mask[i]) /* check if bits are cleared */
                        return -ENODEV;
        }

        return 0;
}

/* Clear alarm status bit. */
static int _abb5zes3_rtc_clear_alarm(struct device *dev)
{
        struct abb5zes3_rtc_data *data = dev_get_drvdata(dev);
        int ret;

        ret = regmap_update_bits(data->regmap, ABB5ZES3_REG_CTRL2,
                                 ABB5ZES3_REG_CTRL2_AF, 0);
        if (ret)
                dev_err(dev, "%s: clearing alarm failed (%d)\n", __func__, ret);

        return ret;
}

/* Enable or disable alarm (i.e. alarm interrupt generation) */
static int _abb5zes3_rtc_update_alarm(struct device *dev, bool enable)
{
        struct abb5zes3_rtc_data *data = dev_get_drvdata(dev);
        int ret;

        ret = regmap_update_bits(data->regmap, ABB5ZES3_REG_CTRL1,
                                 ABB5ZES3_REG_CTRL1_AIE,
                                 enable ? ABB5ZES3_REG_CTRL1_AIE : 0);
        if (ret)
                dev_err(dev, "%s: writing alarm INT failed (%d)\n",
                        __func__, ret);

        return ret;
}

/* Enable or disable timer (watchdog timer A interrupt generation) */
static int _abb5zes3_rtc_update_timer(struct device *dev, bool enable)
{
        struct abb5zes3_rtc_data *data = dev_get_drvdata(dev);
        int ret;

        ret = regmap_update_bits(data->regmap, ABB5ZES3_REG_CTRL2,
                                 ABB5ZES3_REG_CTRL2_WTAIE,
                                 enable ? ABB5ZES3_REG_CTRL2_WTAIE : 0);
        if (ret)
                dev_err(dev, "%s: writing timer INT failed (%d)\n",
                        __func__, ret);

        return ret;
}

/*
 * Note: we only read, so regmap inner lock protection is sufficient, i.e.
 * we do not need driver's main lock protection.
 */
static int _abb5zes3_rtc_read_time(struct device *dev, struct rtc_time *tm)
{
        struct abb5zes3_rtc_data *data = dev_get_drvdata(dev);
        u8 regs[ABB5ZES3_REG_RTC_SC + ABB5ZES3_RTC_SEC_LEN];
        int ret;

        /*
         * As we need to read CTRL1 register anyway to access 24/12h
         * mode bit, we do a single bulk read of both control and RTC
         * sections (they are consecutive). This also eases indexing
         * of register values after bulk read.
         */
        ret = regmap_bulk_read(data->regmap, ABB5ZES3_REG_CTRL1, regs,
                               sizeof(regs));
        if (ret) {
                dev_err(dev, "%s: reading RTC time failed (%d)\n",
                        __func__, ret);
                goto err;
        }

        /* If clock integrity is not guaranteed, do not return a time value */
        if (regs[ABB5ZES3_REG_RTC_SC] & ABB5ZES3_REG_RTC_SC_OSC) {
                ret = -ENODATA;
                goto err;
        }

        tm->tm_sec = bcd2bin(regs[ABB5ZES3_REG_RTC_SC] & 0x7F);
        tm->tm_min = bcd2bin(regs[ABB5ZES3_REG_RTC_MN]);

        if (regs[ABB5ZES3_REG_CTRL1] & ABB5ZES3_REG_CTRL1_PM) { /* 12hr mode */
                tm->tm_hour = bcd2bin(regs[ABB5ZES3_REG_RTC_HR] & 0x1f);
                if (regs[ABB5ZES3_REG_RTC_HR] & ABB5ZES3_REG_RTC_HR_PM) /* PM */
                        tm->tm_hour += 12;
        } else { /* 24hr mode */
                tm->tm_hour = bcd2bin(regs[ABB5ZES3_REG_RTC_HR]);
        }

        tm->tm_mday = bcd2bin(regs[ABB5ZES3_REG_RTC_DT]);
        tm->tm_wday = bcd2bin(regs[ABB5ZES3_REG_RTC_DW]);
        tm->tm_mon  = bcd2bin(regs[ABB5ZES3_REG_RTC_MO]) - 1; /* starts at 1 */
        tm->tm_year = bcd2bin(regs[ABB5ZES3_REG_RTC_YR]) + 100;

        ret = rtc_valid_tm(tm);

err:
        return ret;
}

static int abb5zes3_rtc_set_time(struct device *dev, struct rtc_time *tm)
{
        struct abb5zes3_rtc_data *data = dev_get_drvdata(dev);
        u8 regs[ABB5ZES3_REG_RTC_SC + ABB5ZES3_RTC_SEC_LEN];
        int ret;

        /*
         * Year register is 8-bit wide and bcd-coded, i.e. records values
         * between 0 and 99. tm_year is an offset from 1900 and we are
         * interested in the 2000-2099 range, so any value less than 100
         * is invalid.
         */
        if (tm->tm_year < 100)
                return -EINVAL;

        regs[ABB5ZES3_REG_RTC_SC] = bin2bcd(tm->tm_sec); /* MSB=0 clears OSC */
        regs[ABB5ZES3_REG_RTC_MN] = bin2bcd(tm->tm_min);
        regs[ABB5ZES3_REG_RTC_HR] = bin2bcd(tm->tm_hour); /* 24-hour format */
        regs[ABB5ZES3_REG_RTC_DT] = bin2bcd(tm->tm_mday);
        regs[ABB5ZES3_REG_RTC_DW] = bin2bcd(tm->tm_wday);
        regs[ABB5ZES3_REG_RTC_MO] = bin2bcd(tm->tm_mon + 1);
        regs[ABB5ZES3_REG_RTC_YR] = bin2bcd(tm->tm_year - 100);

        mutex_lock(&data->lock);
        ret = regmap_bulk_write(data->regmap, ABB5ZES3_REG_RTC_SC,
                                regs + ABB5ZES3_REG_RTC_SC,
                                ABB5ZES3_RTC_SEC_LEN);
        mutex_unlock(&data->lock);

        return ret;
}

/*
 * Set provided TAQ and Timer A registers (TIMA_CLK and TIMA) based on
 * given number of seconds.
 */
static inline void sec_to_timer_a(u8 secs, u8 *taq, u8 *timer_a)
{
        *taq = ABB5ZES3_REG_TIMA_CLK_TAQ1; /* 1Hz */
        *timer_a = secs;
}

/*
 * Return current number of seconds in Timer A. As we only use
 * timer A with a 1Hz freq, this is what we expect to have.
 */
static inline int sec_from_timer_a(u8 *secs, u8 taq, u8 timer_a)
{
        if (taq != ABB5ZES3_REG_TIMA_CLK_TAQ1) /* 1Hz */
                return -EINVAL;

        *secs = timer_a;

        return 0;
}

/*
 * Read alarm currently configured via a watchdog timer using timer A. This
 * is done by reading current RTC time and adding remaining timer time.
 */
static int _abb5zes3_rtc_read_timer(struct device *dev,
                                    struct rtc_wkalrm *alarm)
{
        struct abb5zes3_rtc_data *data = dev_get_drvdata(dev);
        struct rtc_time rtc_tm, *alarm_tm = &alarm->time;
        u8 regs[ABB5ZES3_TIMA_SEC_LEN + 1];
        unsigned long rtc_secs;
        unsigned int reg;
        u8 timer_secs;
        int ret;

        /*
         * Instead of doing two separate calls, because they are consecutive,
         * we grab both clockout register and Timer A section. The latter is
         * used to decide if timer A is enabled (as a watchdog timer).
         */
        ret = regmap_bulk_read(data->regmap, ABB5ZES3_REG_TIM_CLK, regs,
                               ABB5ZES3_TIMA_SEC_LEN + 1);
        if (ret) {
                dev_err(dev, "%s: reading Timer A section failed (%d)\n",
                        __func__, ret);
                goto err;
        }

        /* get current time ... */
        ret = _abb5zes3_rtc_read_time(dev, &rtc_tm);
        if (ret)
                goto err;

        /* ... convert to seconds ... */
        ret = rtc_tm_to_time(&rtc_tm, &rtc_secs);
        if (ret)
                goto err;

        /* ... add remaining timer A time ... */
        ret = sec_from_timer_a(&timer_secs, regs[1], regs[2]);
        if (ret)
                goto err;

        /* ... and convert back. */
        rtc_time_to_tm(rtc_secs + timer_secs, alarm_tm);

        ret = regmap_read(data->regmap, ABB5ZES3_REG_CTRL2, &reg);
        if (ret) {
                dev_err(dev, "%s: reading ctrl reg failed (%d)\n",
                        __func__, ret);
                goto err;
        }

        alarm->enabled = !!(reg & ABB5ZES3_REG_CTRL2_WTAIE);

err:
        return ret;
}

/* Read alarm currently configured via RTC alarm registers. */
static int _abb5zes3_rtc_read_alarm(struct device *dev,
                                    struct rtc_wkalrm *alarm)
{
        struct abb5zes3_rtc_data *data = dev_get_drvdata(dev);
        struct rtc_time rtc_tm, *alarm_tm = &alarm->time;
        unsigned long rtc_secs, alarm_secs;
        u8 regs[ABB5ZES3_ALRM_SEC_LEN];
        unsigned int reg;
        int ret;

        ret = regmap_bulk_read(data->regmap, ABB5ZES3_REG_ALRM_MN, regs,
                               ABB5ZES3_ALRM_SEC_LEN);
        if (ret) {
                dev_err(dev, "%s: reading alarm section failed (%d)\n",
                        __func__, ret);
                goto err;
        }

        alarm_tm->tm_sec  = 0;
        alarm_tm->tm_min  = bcd2bin(regs[0] & 0x7f);
        alarm_tm->tm_hour = bcd2bin(regs[1] & 0x3f);
        alarm_tm->tm_mday = bcd2bin(regs[2] & 0x3f);
        alarm_tm->tm_wday = -1;

        /*
         * The alarm section does not store year/month. We use the ones in rtc
         * section as a basis and increment month and then year if needed to get
         * alarm after current time.
         */
        ret = _abb5zes3_rtc_read_time(dev, &rtc_tm);
        if (ret)
                goto err;

        alarm_tm->tm_year = rtc_tm.tm_year;
        alarm_tm->tm_mon = rtc_tm.tm_mon;

        ret = rtc_tm_to_time(&rtc_tm, &rtc_secs);
        if (ret)
                goto err;

        ret = rtc_tm_to_time(alarm_tm, &alarm_secs);
        if (ret)
                goto err;

        if (alarm_secs < rtc_secs) {
                if (alarm_tm->tm_mon == 11) {
                        alarm_tm->tm_mon = 0;
                        alarm_tm->tm_year += 1;
                } else {
                        alarm_tm->tm_mon += 1;
                }
        }

        ret = regmap_read(data->regmap, ABB5ZES3_REG_CTRL1, &reg);
        if (ret) {
                dev_err(dev, "%s: reading ctrl reg failed (%d)\n",
                        __func__, ret);
                goto err;
        }

        alarm->enabled = !!(reg & ABB5ZES3_REG_CTRL1_AIE);

err:
        return ret;
}

/*
 * As the alarm mechanism supported by the chip is only accurate to the
 * minute, we use the watchdog timer mechanism provided by timer A
 * (up to 256 seconds w/ a second accuracy) for low alarm values (below
 * 4 minutes). Otherwise, we use the common alarm mechanism provided
 * by the chip. In order for that to work, we keep track of currently
 * configured timer type via 'timer_alarm' flag in our private data
 * structure.
 */
static int abb5zes3_rtc_read_alarm(struct device *dev, struct rtc_wkalrm *alarm)
{
        struct abb5zes3_rtc_data *data = dev_get_drvdata(dev);
        int ret;

        mutex_lock(&data->lock);
        if (data->timer_alarm)
                ret = _abb5zes3_rtc_read_timer(dev, alarm);
        else
                ret = _abb5zes3_rtc_read_alarm(dev, alarm);
        mutex_unlock(&data->lock);

        return ret;
}

/*
 * Set alarm using chip alarm mechanism. It is only accurate to the
 * minute (not the second). The function expects alarm interrupt to
 * be disabled.
 */
static int _abb5zes3_rtc_set_alarm(struct device *dev, struct rtc_wkalrm *alarm)
{
        struct abb5zes3_rtc_data *data = dev_get_drvdata(dev);
        struct rtc_time *alarm_tm = &alarm->time;
        unsigned long rtc_secs, alarm_secs;
        u8 regs[ABB5ZES3_ALRM_SEC_LEN];
        struct rtc_time rtc_tm;
        int ret, enable = 1;

        ret = _abb5zes3_rtc_read_time(dev, &rtc_tm);
        if (ret)
                goto err;

        ret = rtc_tm_to_time(&rtc_tm, &rtc_secs);
        if (ret)
                goto err;

        ret = rtc_tm_to_time(alarm_tm, &alarm_secs);
        if (ret)
                goto err;

        /* If alarm time is before current time, disable the alarm */
        if (!alarm->enabled || alarm_secs <= rtc_secs) {
                enable = 0;
        } else {
                /*
                 * The chip only supports alarms up to one month in the future.
                 * Let's return an error if we get something after that limit.
                 * Comparison is done by incrementing rtc_tm month field by one
                 * and checking alarm value is still below.
                 */
                if (rtc_tm.tm_mon == 11) { /* handle year wrapping */
                        rtc_tm.tm_mon = 0;
                        rtc_tm.tm_year += 1;
                } else {
                        rtc_tm.tm_mon += 1;
                }

                ret = rtc_tm_to_time(&rtc_tm, &rtc_secs);
                if (ret)
                        goto err;

                if (alarm_secs > rtc_secs) {
                        dev_err(dev, "%s: alarm maximum is one month in the "
                                "future (%d)\n", __func__, ret);
                        ret = -EINVAL;
                        goto err;
                }
        }

        /*
         * Program all alarm registers but DW one. For each register, setting
         * MSB to 0 enables associated alarm.
         */
        regs[0] = bin2bcd(alarm_tm->tm_min) & 0x7f;
        regs[1] = bin2bcd(alarm_tm->tm_hour) & 0x3f;
        regs[2] = bin2bcd(alarm_tm->tm_mday) & 0x3f;
        regs[3] = ABB5ZES3_REG_ALRM_DW_AE; /* do not match day of the week */

        ret = regmap_bulk_write(data->regmap, ABB5ZES3_REG_ALRM_MN, regs,
                                ABB5ZES3_ALRM_SEC_LEN);
        if (ret < 0) {
                dev_err(dev, "%s: writing ALARM section failed (%d)\n",
                        __func__, ret);
                goto err;
        }

        /* Record currently configured alarm is not a timer */
        data->timer_alarm = 0;

        /* Enable or disable alarm interrupt generation */
        ret = _abb5zes3_rtc_update_alarm(dev, enable);

err:
        return ret;
}

/*
 * Set alarm using timer watchdog (via timer A) mechanism. The function expects
 * timer A interrupt to be disabled.
 */
static int _abb5zes3_rtc_set_timer(struct device *dev, struct rtc_wkalrm *alarm,
                                   u8 secs)
{
        struct abb5zes3_rtc_data *data = dev_get_drvdata(dev);
        u8 regs[ABB5ZES3_TIMA_SEC_LEN];
        u8 mask = ABB5ZES3_REG_TIM_CLK_TAC0 | ABB5ZES3_REG_TIM_CLK_TAC1;
        int ret = 0;

        /* Program given number of seconds to Timer A registers */
        sec_to_timer_a(secs, &regs[0], &regs[1]);
        ret = regmap_bulk_write(data->regmap, ABB5ZES3_REG_TIMA_CLK, regs,
                                ABB5ZES3_TIMA_SEC_LEN);
        if (ret < 0) {
                dev_err(dev, "%s: writing timer section failed\n", __func__);
                goto err;
        }

        /* Configure Timer A as a watchdog timer */
        ret = regmap_update_bits(data->regmap, ABB5ZES3_REG_TIM_CLK,
                                 mask, ABB5ZES3_REG_TIM_CLK_TAC1);
        if (ret)
                dev_err(dev, "%s: failed to update timer\n", __func__);

        /* Record currently configured alarm is a timer */
        data->timer_alarm = 1;

        /* Enable or disable timer interrupt generation */
        ret = _abb5zes3_rtc_update_timer(dev, alarm->enabled);

err:
        return ret;
}

/*
 * The chip has an alarm which is only accurate to the minute. In order to
 * handle alarms below that limit, we use the watchdog timer function of
 * timer A. More precisely, the timer method is used for alarms below 240
 * seconds.
 */
static int abb5zes3_rtc_set_alarm(struct device *dev, struct rtc_wkalrm *alarm)
{
        struct abb5zes3_rtc_data *data = dev_get_drvdata(dev);
        struct rtc_time *alarm_tm = &alarm->time;
        unsigned long rtc_secs, alarm_secs;
        struct rtc_time rtc_tm;
        int ret;

        mutex_lock(&data->lock);
        ret = _abb5zes3_rtc_read_time(dev, &rtc_tm);
        if (ret)
                goto err;

        ret = rtc_tm_to_time(&rtc_tm, &rtc_secs);
        if (ret)
                goto err;

        ret = rtc_tm_to_time(alarm_tm, &alarm_secs);
        if (ret)
                goto err;

        /* Let's first disable both the alarm and the timer interrupts */
        ret = _abb5zes3_rtc_update_alarm(dev, false);
        if (ret < 0) {
                dev_err(dev, "%s: unable to disable alarm (%d)\n", __func__,
                        ret);
                goto err;
        }
        ret = _abb5zes3_rtc_update_timer(dev, false);
        if (ret < 0) {
                dev_err(dev, "%s: unable to disable timer (%d)\n", __func__,
                        ret);
                goto err;
        }

        data->timer_alarm = 0;

        /*
         * Let's now configure the alarm; if we are expected to ring in
         * more than 240s, then we setup an alarm. Otherwise, a timer.
         */
        if ((alarm_secs > rtc_secs) && ((alarm_secs - rtc_secs) <= 240))
                ret = _abb5zes3_rtc_set_timer(dev, alarm,
                                              alarm_secs - rtc_secs);
        else
                ret = _abb5zes3_rtc_set_alarm(dev, alarm);

err:
        mutex_unlock(&data->lock);

        if (ret)
                dev_err(dev, "%s: unable to configure alarm (%d)\n", __func__,
                        ret);

        return ret;
}

/* Enable or disable battery low irq generation */
static inline int _abb5zes3_rtc_battery_low_irq_enable(struct regmap *regmap,
                                                       bool enable)
{
        return regmap_update_bits(regmap, ABB5ZES3_REG_CTRL3,
                                  ABB5ZES3_REG_CTRL3_BLIE,
                                  enable ? ABB5ZES3_REG_CTRL3_BLIE : 0);
}

/*
 * Check current RTC status and enable/disable what needs to be. Return 0 if
 * everything went ok and a negative value upon error. Note: this function
 * is called early during init and hence does not need mutex protection.
 */
static int abb5zes3_rtc_check_setup(struct device *dev)
{
        struct abb5zes3_rtc_data *data = dev_get_drvdata(dev);
        struct regmap *regmap = data->regmap;
        unsigned int reg;
        int ret;
        u8 mask;

        /*
         * By default, the device generates a 32.768kHz signal on IRQ#1 pin. It
         * is disabled here to prevent polluting the interrupt line and
         * uselessly triggering the IRQ handler we install for alarm and battery
         * low events. Note: this is done before clearing int. status below
         * in this function.
         * We also disable all timers and set timer interrupt to permanent (not
         * pulsed).
         */
        mask = (ABB5ZES3_REG_TIM_CLK_TBC | ABB5ZES3_REG_TIM_CLK_TAC0 |
                ABB5ZES3_REG_TIM_CLK_TAC1 | ABB5ZES3_REG_TIM_CLK_COF0 |
                ABB5ZES3_REG_TIM_CLK_COF1 | ABB5ZES3_REG_TIM_CLK_COF2 |
                ABB5ZES3_REG_TIM_CLK_TBM | ABB5ZES3_REG_TIM_CLK_TAM);
        ret = regmap_update_bits(regmap, ABB5ZES3_REG_TIM_CLK, mask,
                                 ABB5ZES3_REG_TIM_CLK_COF0 |
                                 ABB5ZES3_REG_TIM_CLK_COF1 |
                                 ABB5ZES3_REG_TIM_CLK_COF2);
        if (ret < 0) {
                dev_err(dev, "%s: unable to initialize clkout register (%d)\n",
                        __func__, ret);
                return ret;
        }

        /*
         * Each component of the alarm (MN, HR, DT, DW) can be enabled/disabled
         * individually by clearing/setting MSB of each associated register. So,
         * we set all alarm enable bits to disable current alarm setting.
         */
        mask = (ABB5ZES3_REG_ALRM_MN_AE | ABB5ZES3_REG_ALRM_HR_AE |
                ABB5ZES3_REG_ALRM_DT_AE | ABB5ZES3_REG_ALRM_DW_AE);
        ret = regmap_update_bits(regmap, ABB5ZES3_REG_CTRL2, mask, mask);
        if (ret < 0) {
                dev_err(dev, "%s: unable to disable alarm setting (%d)\n",
                        __func__, ret);
                return ret;
        }

        /* Set Control 1 register (RTC enabled, 24hr mode, all int. disabled) */
        mask = (ABB5ZES3_REG_CTRL1_CIE | ABB5ZES3_REG_CTRL1_AIE |
                ABB5ZES3_REG_CTRL1_SIE | ABB5ZES3_REG_CTRL1_PM |
                ABB5ZES3_REG_CTRL1_CAP | ABB5ZES3_REG_CTRL1_STOP);
        ret = regmap_update_bits(regmap, ABB5ZES3_REG_CTRL1, mask, 0);
        if (ret < 0) {
                dev_err(dev, "%s: unable to initialize CTRL1 register (%d)\n",
                        __func__, ret);
                return ret;
        }

        /*
         * Set Control 2 register (timer int. disabled, alarm status cleared).
         * WTAF is read-only and cleared automatically by reading the register.
         */
        mask = (ABB5ZES3_REG_CTRL2_CTBIE | ABB5ZES3_REG_CTRL2_CTAIE |
                ABB5ZES3_REG_CTRL2_WTAIE | ABB5ZES3_REG_CTRL2_AF |
                ABB5ZES3_REG_CTRL2_SF | ABB5ZES3_REG_CTRL2_CTBF |
                ABB5ZES3_REG_CTRL2_CTAF);
        ret = regmap_update_bits(regmap, ABB5ZES3_REG_CTRL2, mask, 0);
        if (ret < 0) {
                dev_err(dev, "%s: unable to initialize CTRL2 register (%d)\n",
                        __func__, ret);
                return ret;
        }

        /*
         * Enable battery low detection function and battery switchover function
         * (standard mode). Disable associated interrupts. Clear battery
         * switchover flag but not battery low flag. The latter is checked
         * later below.
         */
        mask = (ABB5ZES3_REG_CTRL3_PM0 | ABB5ZES3_REG_CTRL3_PM1 |
                ABB5ZES3_REG_CTRL3_PM2 | ABB5ZES3_REG_CTRL3_BLIE |
                ABB5ZES3_REG_CTRL3_BSIE | ABB5ZES3_REG_CTRL3_BSF);
        ret = regmap_update_bits(regmap, ABB5ZES3_REG_CTRL3, mask, 0);
        if (ret < 0) {
                dev_err(dev, "%s: unable to initialize CTRL3 register (%d)\n",
                        __func__, ret);
                return ret;
        }

        /* Check oscillator integrity flag */
        ret = regmap_read(regmap, ABB5ZES3_REG_RTC_SC, &reg);
        if (ret < 0) {
                dev_err(dev, "%s: unable to read osc. integrity flag (%d)\n",
                        __func__, ret);
                return ret;
        }

        if (reg & ABB5ZES3_REG_RTC_SC_OSC) {
                dev_err(dev, "clock integrity not guaranteed. Osc. has stopped "
                        "or has been interrupted.\n");
                dev_err(dev, "change battery (if not already done) and "
                        "then set time to reset osc. failure flag.\n");
        }

        /*
         * Check battery low flag at startup: this allows reporting battery
         * is low at startup when IRQ line is not connected. Note: we record
         * current status to avoid reenabling this interrupt later in probe
         * function if battery is low.
         */
        ret = regmap_read(regmap, ABB5ZES3_REG_CTRL3, &reg);
        if (ret < 0) {
                dev_err(dev, "%s: unable to read battery low flag (%d)\n",
                        __func__, ret);
                return ret;
        }

        data->battery_low = reg & ABB5ZES3_REG_CTRL3_BLF;
        if (data->battery_low) {
                dev_err(dev, "RTC battery is low; please, consider "
                        "changing it!\n");

                ret = _abb5zes3_rtc_battery_low_irq_enable(regmap, false);
                if (ret)
                        dev_err(dev, "%s: disabling battery low interrupt "
                                "generation failed (%d)\n", __func__, ret);
        }

        return ret;
}

static int abb5zes3_rtc_alarm_irq_enable(struct device *dev,
                                         unsigned int enable)
{
        struct abb5zes3_rtc_data *rtc_data = dev_get_drvdata(dev);
        int ret = 0;

        if (rtc_data->irq) {
                mutex_lock(&rtc_data->lock);
                if (rtc_data->timer_alarm)
                        ret = _abb5zes3_rtc_update_timer(dev, enable);
                else
                        ret = _abb5zes3_rtc_update_alarm(dev, enable);
                mutex_unlock(&rtc_data->lock);
        }

        return ret;
}

static irqreturn_t _abb5zes3_rtc_interrupt(int irq, void *data)
{
        struct i2c_client *client = data;
        struct device *dev = &client->dev;
        struct abb5zes3_rtc_data *rtc_data = dev_get_drvdata(dev);
        struct rtc_device *rtc = rtc_data->rtc;
        u8 regs[ABB5ZES3_CTRL_SEC_LEN];
        int ret, handled = IRQ_NONE;

        ret = regmap_bulk_read(rtc_data->regmap, 0, regs,
                               ABB5ZES3_CTRL_SEC_LEN);
        if (ret) {
                dev_err(dev, "%s: unable to read control section (%d)!\n",
                        __func__, ret);
                return handled;
        }

        /*
         * Check battery low detection flag and disable battery low interrupt
         * generation if flag is set (interrupt can only be cleared when
         * battery is replaced).
         */
        if (regs[ABB5ZES3_REG_CTRL3] & ABB5ZES3_REG_CTRL3_BLF) {
                dev_err(dev, "RTC battery is low; please change it!\n");

                _abb5zes3_rtc_battery_low_irq_enable(rtc_data->regmap, false);

                handled = IRQ_HANDLED;
        }

        /* Check alarm flag */
        if (regs[ABB5ZES3_REG_CTRL2] & ABB5ZES3_REG_CTRL2_AF) {
                dev_dbg(dev, "RTC alarm!\n");

                rtc_update_irq(rtc, 1, RTC_IRQF | RTC_AF);

                /* Acknowledge and disable the alarm */
                _abb5zes3_rtc_clear_alarm(dev);
                _abb5zes3_rtc_update_alarm(dev, 0);

                handled = IRQ_HANDLED;
        }

        /* Check watchdog Timer A flag */
        if (regs[ABB5ZES3_REG_CTRL2] & ABB5ZES3_REG_CTRL2_WTAF) {
                dev_dbg(dev, "RTC timer!\n");

                rtc_update_irq(rtc, 1, RTC_IRQF | RTC_AF);

                /*
                 * Acknowledge and disable the alarm. Note: WTAF
                 * flag had been cleared when reading CTRL2
                 */
                _abb5zes3_rtc_update_timer(dev, 0);

                rtc_data->timer_alarm = 0;

                handled = IRQ_HANDLED;
        }

        return handled;
}

static const struct rtc_class_ops rtc_ops = {
        .read_time = _abb5zes3_rtc_read_time,
        .set_time = abb5zes3_rtc_set_time,
        .read_alarm = abb5zes3_rtc_read_alarm,
        .set_alarm = abb5zes3_rtc_set_alarm,
        .alarm_irq_enable = abb5zes3_rtc_alarm_irq_enable,
};

static struct regmap_config abb5zes3_rtc_regmap_config = {
        .reg_bits = 8,
        .val_bits = 8,
};

static int abb5zes3_probe(struct i2c_client *client,
                          const struct i2c_device_id *id)
{
        struct abb5zes3_rtc_data *data = NULL;
        struct device *dev = &client->dev;
        struct regmap *regmap;
        int ret;

        if (!i2c_check_functionality(client->adapter, I2C_FUNC_I2C |
                                     I2C_FUNC_SMBUS_BYTE_DATA |
                                     I2C_FUNC_SMBUS_I2C_BLOCK)) {
                ret = -ENODEV;
                goto err;
        }

        regmap = devm_regmap_init_i2c(client, &abb5zes3_rtc_regmap_config);
        if (IS_ERR(regmap)) {
                ret = PTR_ERR(regmap);
                dev_err(dev, "%s: regmap allocation failed: %d\n",
                        __func__, ret);
                goto err;
        }

        ret = abb5zes3_i2c_validate_chip(regmap);
        if (ret)
                goto err;

        data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL);
        if (!data) {
                ret = -ENOMEM;
                goto err;
        }

        mutex_init(&data->lock);
        data->regmap = regmap;
        dev_set_drvdata(dev, data);

        ret = abb5zes3_rtc_check_setup(dev);
        if (ret)
                goto err;

        if (client->irq > 0) {
                ret = devm_request_threaded_irq(dev, client->irq, NULL,
                                                _abb5zes3_rtc_interrupt,
                                                IRQF_SHARED|IRQF_ONESHOT,
                                                DRV_NAME, client);
                if (!ret) {
                        device_init_wakeup(dev, true);
                        data->irq = client->irq;
                        dev_dbg(dev, "%s: irq %d used by RTC\n", __func__,
                                client->irq);
                } else {
                        dev_err(dev, "%s: irq %d unavailable (%d)\n",
                                __func__, client->irq, ret);
                        goto err;
                }
        }

        data->rtc = devm_rtc_device_register(dev, DRV_NAME, &rtc_ops,
                                             THIS_MODULE);
        ret = PTR_ERR_OR_ZERO(data->rtc);
        if (ret) {
                dev_err(dev, "%s: unable to register RTC device (%d)\n",
                        __func__, ret);
                goto err;
        }

        /* Enable battery low detection interrupt if battery not already low */
        if (!data->battery_low && data->irq) {
                ret = _abb5zes3_rtc_battery_low_irq_enable(regmap, true);
                if (ret) {
                        dev_err(dev, "%s: enabling battery low interrupt "
                                "generation failed (%d)\n", __func__, ret);
                        goto err;
                }
        }

err:
        if (ret && data && data->irq)
                device_init_wakeup(dev, false);
        return ret;
}

static int abb5zes3_remove(struct i2c_client *client)
{
        struct abb5zes3_rtc_data *rtc_data = dev_get_drvdata(&client->dev);

        if (rtc_data->irq > 0)
                device_init_wakeup(&client->dev, false);

        return 0;
}

#ifdef CONFIG_PM_SLEEP
static int abb5zes3_rtc_suspend(struct device *dev)
{
        struct abb5zes3_rtc_data *rtc_data = dev_get_drvdata(dev);

        if (device_may_wakeup(dev))
                return enable_irq_wake(rtc_data->irq);

        return 0;
}

static int abb5zes3_rtc_resume(struct device *dev)
{
        struct abb5zes3_rtc_data *rtc_data = dev_get_drvdata(dev);

        if (device_may_wakeup(dev))
                return disable_irq_wake(rtc_data->irq);

        return 0;
}
#endif

static SIMPLE_DEV_PM_OPS(abb5zes3_rtc_pm_ops, abb5zes3_rtc_suspend,
                         abb5zes3_rtc_resume);

#ifdef CONFIG_OF
static const struct of_device_id abb5zes3_dt_match[] = {
        { .compatible = "abracon,abb5zes3" },
        { },
};
#endif

static const struct i2c_device_id abb5zes3_id[] = {
        { "abb5zes3", 0 },
        { }
};
MODULE_DEVICE_TABLE(i2c, abb5zes3_id);

static struct i2c_driver abb5zes3_driver = {
        .driver = {
                .name = DRV_NAME,
                .owner = THIS_MODULE,
                .pm = &abb5zes3_rtc_pm_ops,
                .of_match_table = of_match_ptr(abb5zes3_dt_match),
        },
        .probe = abb5zes3_probe,
        .remove = abb5zes3_remove,
        .id_table = abb5zes3_id,
};
module_i2c_driver(abb5zes3_driver);

MODULE_AUTHOR("Arnaud EBALARD <arno@natisbad.org>");
MODULE_DESCRIPTION("Abracon AB-RTCMC-32.768kHz-B5ZE-S3 RTC/Alarm driver");
MODULE_LICENSE("GPL");
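The driver's split between the two alarm mechanisms can be modeled in plain C. This is a userspace sketch, not driver code: `pick_alarm_mechanism` is a hypothetical helper that mirrors the decision made in `abb5zes3_rtc_set_alarm` (timer A for alarms due within 240 seconds, the minute-accurate chip alarm otherwise).

```c
#include <assert.h>

/* Hypothetical model of abb5zes3_rtc_set_alarm's dispatch: alarms that
 * fire within 240 seconds use watchdog timer A (1 Hz clock, 8-bit
 * counter, second accuracy); anything later falls back to the chip's
 * minute-accurate alarm registers. An alarm in the past also takes the
 * RTC-alarm path, where the driver then disables it. */
enum alarm_mech { MECH_TIMER_A, MECH_RTC_ALARM };

static enum alarm_mech pick_alarm_mechanism(unsigned long rtc_secs,
                                            unsigned long alarm_secs)
{
        if (alarm_secs > rtc_secs && (alarm_secs - rtc_secs) <= 240)
                return MECH_TIMER_A;    /* sub-4-minute: timer A */

        return MECH_RTC_ALARM;          /* otherwise: alarm registers */
}
```

Note the boundary: exactly 240 seconds out still fits timer A's 8-bit, 1 Hz counter, while 241 seconds falls through to the chip alarm and loses second accuracy.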
+320
drivers/rtc/rtc-armada38x.c
/*
 * RTC driver for the Armada 38x Marvell SoCs
 *
 * Copyright (C) 2015 Marvell
 *
 * Gregory Clement <gregory.clement@free-electrons.com>
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License as
 * published by the Free Software Foundation; either version 2 of the
 * License, or (at your option) any later version.
 *
 */

#include <linux/delay.h>
#include <linux/io.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>
#include <linux/rtc.h>

#define RTC_STATUS          0x0
#define RTC_STATUS_ALARM1   BIT(0)
#define RTC_STATUS_ALARM2   BIT(1)
#define RTC_IRQ1_CONF       0x4
#define RTC_IRQ1_AL_EN      BIT(0)
#define RTC_IRQ1_FREQ_EN    BIT(1)
#define RTC_IRQ1_FREQ_1HZ   BIT(2)
#define RTC_TIME            0xC
#define RTC_ALARM1          0x10

#define SOC_RTC_INTERRUPT   0x8
#define SOC_RTC_ALARM1      BIT(0)
#define SOC_RTC_ALARM2      BIT(1)
#define SOC_RTC_ALARM1_MASK BIT(2)
#define SOC_RTC_ALARM2_MASK BIT(3)

struct armada38x_rtc {
        struct rtc_device *rtc_dev;
        void __iomem *regs;
        void __iomem *regs_soc;
        spinlock_t lock;
        int irq;
};

/*
 * According to the datasheet, the OS should wait 5us after every
 * register write to the RTC hard macro so that the required update
 * can occur without holding off the system bus
 */
static void rtc_delayed_write(u32 val, struct armada38x_rtc *rtc, int offset)
{
        writel(val, rtc->regs + offset);
        udelay(5);
}

static int armada38x_rtc_read_time(struct device *dev, struct rtc_time *tm)
{
        struct armada38x_rtc *rtc = dev_get_drvdata(dev);
        unsigned long time, time_check, flags;

        spin_lock_irqsave(&rtc->lock, flags);

        time = readl(rtc->regs + RTC_TIME);
        /*
         * Workaround for failing time reads. As stated in the HW
         * errata, if more than one second is detected between two
         * time reads, read once again.
         */
        time_check = readl(rtc->regs + RTC_TIME);
        if ((time_check - time) > 1)
                time_check = readl(rtc->regs + RTC_TIME);

        spin_unlock_irqrestore(&rtc->lock, flags);

        rtc_time_to_tm(time_check, tm);

        return 0;
}

static int armada38x_rtc_set_time(struct device *dev, struct rtc_time *tm)
{
        struct armada38x_rtc *rtc = dev_get_drvdata(dev);
        int ret = 0;
        unsigned long time, flags;

        ret = rtc_tm_to_time(tm, &time);

        if (ret)
                goto out;
        /*
         * Setting the RTC time does not always succeed. According to the
         * errata we need to first write on the status register and
         * then wait for 100ms before writing to the time register to be
         * sure that the data will be taken into account.
         */
        spin_lock_irqsave(&rtc->lock, flags);

        rtc_delayed_write(0, rtc, RTC_STATUS);

        spin_unlock_irqrestore(&rtc->lock, flags);

        msleep(100);

        spin_lock_irqsave(&rtc->lock, flags);

        rtc_delayed_write(time, rtc, RTC_TIME);

        spin_unlock_irqrestore(&rtc->lock, flags);
out:
        return ret;
}

static int armada38x_rtc_read_alarm(struct device *dev, struct rtc_wkalrm *alrm)
{
        struct armada38x_rtc *rtc = dev_get_drvdata(dev);
        unsigned long time, flags;
        u32 val;

        spin_lock_irqsave(&rtc->lock, flags);

        time = readl(rtc->regs + RTC_ALARM1);
        val = readl(rtc->regs + RTC_IRQ1_CONF) & RTC_IRQ1_AL_EN;

        spin_unlock_irqrestore(&rtc->lock, flags);

        alrm->enabled = val ? 1 : 0;
        rtc_time_to_tm(time, &alrm->time);

        return 0;
}

static int armada38x_rtc_set_alarm(struct device *dev, struct rtc_wkalrm *alrm)
{
        struct armada38x_rtc *rtc = dev_get_drvdata(dev);
        unsigned long time, flags;
        int ret = 0;
        u32 val;

        ret = rtc_tm_to_time(&alrm->time, &time);

        if (ret)
                goto out;

        spin_lock_irqsave(&rtc->lock, flags);

        rtc_delayed_write(time, rtc, RTC_ALARM1);

        if (alrm->enabled) {
                rtc_delayed_write(RTC_IRQ1_AL_EN, rtc, RTC_IRQ1_CONF);
                val = readl(rtc->regs_soc + SOC_RTC_INTERRUPT);
                writel(val | SOC_RTC_ALARM1_MASK,
                       rtc->regs_soc + SOC_RTC_INTERRUPT);
        }

        spin_unlock_irqrestore(&rtc->lock, flags);

out:
        return ret;
}

static int armada38x_rtc_alarm_irq_enable(struct device *dev,
                                          unsigned int enabled)
{
        struct armada38x_rtc *rtc = dev_get_drvdata(dev);
        unsigned long flags;

        spin_lock_irqsave(&rtc->lock, flags);

        if (enabled)
                rtc_delayed_write(RTC_IRQ1_AL_EN, rtc, RTC_IRQ1_CONF);
        else
                rtc_delayed_write(0, rtc, RTC_IRQ1_CONF);

        spin_unlock_irqrestore(&rtc->lock, flags);

        return 0;
}

static irqreturn_t armada38x_rtc_alarm_irq(int irq, void *data)
{
        struct armada38x_rtc *rtc = data;
        u32 val;
        int event = RTC_IRQF | RTC_AF;

        dev_dbg(&rtc->rtc_dev->dev, "%s:irq(%d)\n", __func__, irq);

        spin_lock(&rtc->lock);

        val = readl(rtc->regs_soc + SOC_RTC_INTERRUPT);

        writel(val & ~SOC_RTC_ALARM1, rtc->regs_soc + SOC_RTC_INTERRUPT);
        val = readl(rtc->regs + RTC_IRQ1_CONF);
        /* disable all the interrupts for alarm 1 */
        rtc_delayed_write(0, rtc, RTC_IRQ1_CONF);
        /* Ack the event */
        rtc_delayed_write(RTC_STATUS_ALARM1, rtc, RTC_STATUS);

        spin_unlock(&rtc->lock);
+ 201 + if (val & RTC_IRQ1_FREQ_EN) { 202 + if (val & RTC_IRQ1_FREQ_1HZ) 203 + event |= RTC_UF; 204 + else 205 + event |= RTC_PF; 206 + } 207 + 208 + rtc_update_irq(rtc->rtc_dev, 1, event); 209 + 210 + return IRQ_HANDLED; 211 + } 212 + 213 + static struct rtc_class_ops armada38x_rtc_ops = { 214 + .read_time = armada38x_rtc_read_time, 215 + .set_time = armada38x_rtc_set_time, 216 + .read_alarm = armada38x_rtc_read_alarm, 217 + .set_alarm = armada38x_rtc_set_alarm, 218 + .alarm_irq_enable = armada38x_rtc_alarm_irq_enable, 219 + }; 220 + 221 + static __init int armada38x_rtc_probe(struct platform_device *pdev) 222 + { 223 + struct resource *res; 224 + struct armada38x_rtc *rtc; 225 + int ret; 226 + 227 + rtc = devm_kzalloc(&pdev->dev, sizeof(struct armada38x_rtc), 228 + GFP_KERNEL); 229 + if (!rtc) 230 + return -ENOMEM; 231 + 232 + spin_lock_init(&rtc->lock); 233 + 234 + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "rtc"); 235 + rtc->regs = devm_ioremap_resource(&pdev->dev, res); 236 + if (IS_ERR(rtc->regs)) 237 + return PTR_ERR(rtc->regs); 238 + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "rtc-soc"); 239 + rtc->regs_soc = devm_ioremap_resource(&pdev->dev, res); 240 + if (IS_ERR(rtc->regs_soc)) 241 + return PTR_ERR(rtc->regs_soc); 242 + 243 + rtc->irq = platform_get_irq(pdev, 0); 244 + 245 + if (rtc->irq < 0) { 246 + dev_err(&pdev->dev, "no irq\n"); 247 + return rtc->irq; 248 + } 249 + if (devm_request_irq(&pdev->dev, rtc->irq, armada38x_rtc_alarm_irq, 250 + 0, pdev->name, rtc) < 0) { 251 + dev_warn(&pdev->dev, "Interrupt not available.\n"); 252 + rtc->irq = -1; 253 + /* 254 + * If there is no interrupt available then we can't 255 + * use the alarm 256 + */ 257 + armada38x_rtc_ops.set_alarm = NULL; 258 + armada38x_rtc_ops.alarm_irq_enable = NULL; 259 + } 260 + platform_set_drvdata(pdev, rtc); 261 + if (rtc->irq != -1) 262 + device_init_wakeup(&pdev->dev, 1); 263 + 264 + rtc->rtc_dev = devm_rtc_device_register(&pdev->dev, pdev->name, 265 + 
&armada38x_rtc_ops, THIS_MODULE); 266 + if (IS_ERR(rtc->rtc_dev)) { 267 + ret = PTR_ERR(rtc->rtc_dev); 268 + dev_err(&pdev->dev, "Failed to register RTC device: %d\n", ret); 269 + return ret; 270 + } 271 + return 0; 272 + } 273 + 274 + #ifdef CONFIG_PM_SLEEP 275 + static int armada38x_rtc_suspend(struct device *dev) 276 + { 277 + if (device_may_wakeup(dev)) { 278 + struct armada38x_rtc *rtc = dev_get_drvdata(dev); 279 + 280 + return enable_irq_wake(rtc->irq); 281 + } 282 + 283 + return 0; 284 + } 285 + 286 + static int armada38x_rtc_resume(struct device *dev) 287 + { 288 + if (device_may_wakeup(dev)) { 289 + struct armada38x_rtc *rtc = dev_get_drvdata(dev); 290 + 291 + return disable_irq_wake(rtc->irq); 292 + } 293 + 294 + return 0; 295 + } 296 + #endif 297 + 298 + static SIMPLE_DEV_PM_OPS(armada38x_rtc_pm_ops, 299 + armada38x_rtc_suspend, armada38x_rtc_resume); 300 + 301 + #ifdef CONFIG_OF 302 + static const struct of_device_id armada38x_rtc_of_match_table[] = { 303 + { .compatible = "marvell,armada-380-rtc", }, 304 + {} 305 + }; 306 + #endif 307 + 308 + static struct platform_driver armada38x_rtc_driver = { 309 + .driver = { 310 + .name = "armada38x-rtc", 311 + .pm = &armada38x_rtc_pm_ops, 312 + .of_match_table = of_match_ptr(armada38x_rtc_of_match_table), 313 + }, 314 + }; 315 + 316 + module_platform_driver_probe(armada38x_rtc_driver, armada38x_rtc_probe); 317 + 318 + MODULE_DESCRIPTION("Marvell Armada 38x RTC driver"); 319 + MODULE_AUTHOR("Gregory CLEMENT <gregory.clement@free-electrons.com>"); 320 + MODULE_LICENSE("GPL");
+1 -1
drivers/rtc/rtc-at91sam9.c
··· 313 313 .alarm_irq_enable = at91_rtc_alarm_irq_enable, 314 314 }; 315 315 316 - static struct regmap_config gpbr_regmap_config = { 316 + static const struct regmap_config gpbr_regmap_config = { 317 317 .reg_bits = 32, 318 318 .val_bits = 32, 319 319 .reg_stride = 4,
+43 -7
drivers/rtc/rtc-imxdi.c
··· 50 50 #define DCAMR_UNSET 0xFFFFFFFF /* doomsday - 1 sec */ 51 51 52 52 #define DCR 0x10 /* Control Reg */ 53 + #define DCR_TDCHL (1 << 30) /* Tamper-detect configuration hard lock */ 54 + #define DCR_TDCSL (1 << 29) /* Tamper-detect configuration soft lock */ 55 + #define DCR_KSSL (1 << 27) /* Key-select soft lock */ 56 + #define DCR_MCHL (1 << 20) /* Monotonic-counter hard lock */ 57 + #define DCR_MCSL (1 << 19) /* Monotonic-counter soft lock */ 58 + #define DCR_TCHL (1 << 18) /* Timer-counter hard lock */ 59 + #define DCR_TCSL (1 << 17) /* Timer-counter soft lock */ 60 + #define DCR_FSHL (1 << 16) /* Failure state hard lock */ 53 61 #define DCR_TCE (1 << 3) /* Time Counter Enable */ 62 + #define DCR_MCE (1 << 2) /* Monotonic Counter Enable */ 54 63 55 64 #define DSR 0x14 /* Status Reg */ 56 - #define DSR_WBF (1 << 10) /* Write Busy Flag */ 57 - #define DSR_WNF (1 << 9) /* Write Next Flag */ 58 - #define DSR_WCF (1 << 8) /* Write Complete Flag */ 65 + #define DSR_WTD (1 << 23) /* Wire-mesh tamper detected */ 66 + #define DSR_ETBD (1 << 22) /* External tamper B detected */ 67 + #define DSR_ETAD (1 << 21) /* External tamper A detected */ 68 + #define DSR_EBD (1 << 20) /* External boot detected */ 69 + #define DSR_SAD (1 << 19) /* SCC alarm detected */ 70 + #define DSR_TTD (1 << 18) /* Temperature tamper detected */ 71 + #define DSR_CTD (1 << 17) /* Clock tamper detected */ 72 + #define DSR_VTD (1 << 16) /* Voltage tamper detected */ 73 + #define DSR_WBF (1 << 10) /* Write Busy Flag (synchronous) */ 74 + #define DSR_WNF (1 << 9) /* Write Next Flag (synchronous) */ 75 + #define DSR_WCF (1 << 8) /* Write Complete Flag (synchronous) */ 59 76 #define DSR_WEF (1 << 7) /* Write Error Flag */ 60 77 #define DSR_CAF (1 << 4) /* Clock Alarm Flag */ 78 + #define DSR_MCO (1 << 3) /* monotonic counter overflow */ 79 + #define DSR_TCO (1 << 2) /* time counter overflow */ 61 80 #define DSR_NVF (1 << 1) /* Non-Valid Flag */ 62 81 #define DSR_SVF (1 << 0) /* Security Violation
Flag */ 63 82 64 - #define DIER 0x18 /* Interrupt Enable Reg */ 83 + #define DIER 0x18 /* Interrupt Enable Reg (synchronous) */ 65 84 #define DIER_WNIE (1 << 9) /* Write Next Interrupt Enable */ 66 85 #define DIER_WCIE (1 << 8) /* Write Complete Interrupt Enable */ 67 86 #define DIER_WEIE (1 << 7) /* Write Error Interrupt Enable */ 68 87 #define DIER_CAIE (1 << 4) /* Clock Alarm Interrupt Enable */ 88 + #define DIER_SVIE (1 << 0) /* Security-violation Interrupt Enable */ 89 + 90 + #define DMCR 0x1c /* DryIce Monotonic Counter Reg */ 91 + 92 + #define DTCR 0x28 /* DryIce Tamper Configuration Reg */ 93 + #define DTCR_MOE (1 << 9) /* monotonic overflow enabled */ 94 + #define DTCR_TOE (1 << 8) /* time overflow enabled */ 95 + #define DTCR_WTE (1 << 7) /* wire-mesh tamper enabled */ 96 + #define DTCR_ETBE (1 << 6) /* external B tamper enabled */ 97 + #define DTCR_ETAE (1 << 5) /* external A tamper enabled */ 98 + #define DTCR_EBE (1 << 4) /* external boot tamper enabled */ 99 + #define DTCR_SAIE (1 << 3) /* SCC enabled */ 100 + #define DTCR_TTE (1 << 2) /* temperature tamper enabled */ 101 + #define DTCR_CTE (1 << 1) /* clock tamper enabled */ 102 + #define DTCR_VTE (1 << 0) /* voltage tamper enabled */ 103 + 104 + #define DGPR 0x3c /* DryIce General Purpose Reg */ 69 105 70 106 /** 71 107 * struct imxdi_dev - private imxdi rtc data ··· 349 313 dier = __raw_readl(imxdi->ioaddr + DIER); 350 314 351 315 /* handle write complete and write error cases */ 352 - if ((dier & DIER_WCIE)) { 316 + if (dier & DIER_WCIE) { 353 317 /*If the write wait queue is empty then there is no pending 354 318 operations. It means the interrupt is for DryIce -Security. 
355 319 IRQ must be returned as none.*/ ··· 358 322 359 323 /* DSR_WCF clears itself on DSR read */ 360 324 dsr = __raw_readl(imxdi->ioaddr + DSR); 361 - if ((dsr & (DSR_WCF | DSR_WEF))) { 325 + if (dsr & (DSR_WCF | DSR_WEF)) { 362 326 /* mask the interrupt */ 363 327 di_int_disable(imxdi, DIER_WCIE); 364 328 ··· 371 335 } 372 336 373 337 /* handle the alarm case */ 374 - if ((dier & DIER_CAIE)) { 338 + if (dier & DIER_CAIE) { 375 339 /* DSR_WCF clears itself on DSR read */ 376 340 dsr = __raw_readl(imxdi->ioaddr + DSR); 377 341 if (dsr & DSR_CAF) {
+339 -9
drivers/rtc/rtc-isl12057.c
··· 79 79 #define ISL12057_MEM_MAP_LEN 0x10 80 80 81 81 struct isl12057_rtc_data { 82 + struct rtc_device *rtc; 82 83 struct regmap *regmap; 83 84 struct mutex lock; 85 + int irq; 84 86 }; 85 87 86 88 static void isl12057_rtc_regs_to_tm(struct rtc_time *tm, u8 *regs) ··· 162 160 return 0; 163 161 } 164 162 165 - static int isl12057_rtc_read_time(struct device *dev, struct rtc_time *tm) 163 + static int _isl12057_rtc_clear_alarm(struct device *dev) 164 + { 165 + struct isl12057_rtc_data *data = dev_get_drvdata(dev); 166 + int ret; 167 + 168 + ret = regmap_update_bits(data->regmap, ISL12057_REG_SR, 169 + ISL12057_REG_SR_A1F, 0); 170 + if (ret) 171 + dev_err(dev, "%s: clearing alarm failed (%d)\n", __func__, ret); 172 + 173 + return ret; 174 + } 175 + 176 + static int _isl12057_rtc_update_alarm(struct device *dev, int enable) 177 + { 178 + struct isl12057_rtc_data *data = dev_get_drvdata(dev); 179 + int ret; 180 + 181 + ret = regmap_update_bits(data->regmap, ISL12057_REG_INT, 182 + ISL12057_REG_INT_A1IE, 183 + enable ? ISL12057_REG_INT_A1IE : 0); 184 + if (ret) 185 + dev_err(dev, "%s: changing alarm interrupt flag failed (%d)\n", 186 + __func__, ret); 187 + 188 + return ret; 189 + } 190 + 191 + /* 192 + * Note: as we only read from device and do not perform any update, there is 193 + * no need for an equivalent function which would try and get driver's main 194 + * lock. Here, it is safe for everyone if we just use regmap internal lock 195 + * on the device when reading. 
196 + */ 197 + static int _isl12057_rtc_read_time(struct device *dev, struct rtc_time *tm) 166 198 { 167 199 struct isl12057_rtc_data *data = dev_get_drvdata(dev); 168 200 u8 regs[ISL12057_RTC_SEC_LEN]; 169 201 unsigned int sr; 170 202 int ret; 171 203 172 - mutex_lock(&data->lock); 173 204 ret = regmap_read(data->regmap, ISL12057_REG_SR, &sr); 174 205 if (ret) { 175 206 dev_err(dev, "%s: unable to read oscillator status flag (%d)\n", ··· 222 187 __func__, ret); 223 188 224 189 out: 225 - mutex_unlock(&data->lock); 226 - 227 190 if (ret) 228 191 return ret; 229 192 230 193 isl12057_rtc_regs_to_tm(tm, regs); 231 194 232 195 return rtc_valid_tm(tm); 196 + } 197 + 198 + static int isl12057_rtc_update_alarm(struct device *dev, int enable) 199 + { 200 + struct isl12057_rtc_data *data = dev_get_drvdata(dev); 201 + int ret; 202 + 203 + mutex_lock(&data->lock); 204 + ret = _isl12057_rtc_update_alarm(dev, enable); 205 + mutex_unlock(&data->lock); 206 + 207 + return ret; 208 + } 209 + 210 + static int isl12057_rtc_read_alarm(struct device *dev, struct rtc_wkalrm *alarm) 211 + { 212 + struct isl12057_rtc_data *data = dev_get_drvdata(dev); 213 + struct rtc_time rtc_tm, *alarm_tm = &alarm->time; 214 + unsigned long rtc_secs, alarm_secs; 215 + u8 regs[ISL12057_A1_SEC_LEN]; 216 + unsigned int ir; 217 + int ret; 218 + 219 + mutex_lock(&data->lock); 220 + ret = regmap_bulk_read(data->regmap, ISL12057_REG_A1_SC, regs, 221 + ISL12057_A1_SEC_LEN); 222 + if (ret) { 223 + dev_err(dev, "%s: reading alarm section failed (%d)\n", 224 + __func__, ret); 225 + goto err_unlock; 226 + } 227 + 228 + alarm_tm->tm_sec = bcd2bin(regs[0] & 0x7f); 229 + alarm_tm->tm_min = bcd2bin(regs[1] & 0x7f); 230 + alarm_tm->tm_hour = bcd2bin(regs[2] & 0x3f); 231 + alarm_tm->tm_mday = bcd2bin(regs[3] & 0x3f); 232 + alarm_tm->tm_wday = -1; 233 + 234 + /* 235 + * The alarm section does not store year/month. 
We use the ones in rtc 236 + * section as a basis and increment month and then year if needed to get 237 + * alarm after current time. 238 + */ 239 + ret = _isl12057_rtc_read_time(dev, &rtc_tm); 240 + if (ret) 241 + goto err_unlock; 242 + 243 + alarm_tm->tm_year = rtc_tm.tm_year; 244 + alarm_tm->tm_mon = rtc_tm.tm_mon; 245 + 246 + ret = rtc_tm_to_time(&rtc_tm, &rtc_secs); 247 + if (ret) 248 + goto err_unlock; 249 + 250 + ret = rtc_tm_to_time(alarm_tm, &alarm_secs); 251 + if (ret) 252 + goto err_unlock; 253 + 254 + if (alarm_secs < rtc_secs) { 255 + if (alarm_tm->tm_mon == 11) { 256 + alarm_tm->tm_mon = 0; 257 + alarm_tm->tm_year += 1; 258 + } else { 259 + alarm_tm->tm_mon += 1; 260 + } 261 + } 262 + 263 + ret = regmap_read(data->regmap, ISL12057_REG_INT, &ir); 264 + if (ret) { 265 + dev_err(dev, "%s: reading alarm interrupt flag failed (%d)\n", 266 + __func__, ret); 267 + goto err_unlock; 268 + } 269 + 270 + alarm->enabled = !!(ir & ISL12057_REG_INT_A1IE); 271 + 272 + err_unlock: 273 + mutex_unlock(&data->lock); 274 + 275 + return ret; 276 + } 277 + 278 + static int isl12057_rtc_set_alarm(struct device *dev, struct rtc_wkalrm *alarm) 279 + { 280 + struct isl12057_rtc_data *data = dev_get_drvdata(dev); 281 + struct rtc_time *alarm_tm = &alarm->time; 282 + unsigned long rtc_secs, alarm_secs; 283 + u8 regs[ISL12057_A1_SEC_LEN]; 284 + struct rtc_time rtc_tm; 285 + int ret, enable = 1; 286 + 287 + mutex_lock(&data->lock); 288 + ret = _isl12057_rtc_read_time(dev, &rtc_tm); 289 + if (ret) 290 + goto err_unlock; 291 + 292 + ret = rtc_tm_to_time(&rtc_tm, &rtc_secs); 293 + if (ret) 294 + goto err_unlock; 295 + 296 + ret = rtc_tm_to_time(alarm_tm, &alarm_secs); 297 + if (ret) 298 + goto err_unlock; 299 + 300 + /* If alarm time is before current time, disable the alarm */ 301 + if (!alarm->enabled || alarm_secs <= rtc_secs) { 302 + enable = 0; 303 + } else { 304 + /* 305 + * Chip only support alarms up to one month in the future. 
Let's 306 + * return an error if we get something after that limit. 307 + * Comparison is done by incrementing rtc_tm month field by one 308 + * and checking alarm value is still below. 309 + */ 310 + if (rtc_tm.tm_mon == 11) { /* handle year wrapping */ 311 + rtc_tm.tm_mon = 0; 312 + rtc_tm.tm_year += 1; 313 + } else { 314 + rtc_tm.tm_mon += 1; 315 + } 316 + 317 + ret = rtc_tm_to_time(&rtc_tm, &rtc_secs); 318 + if (ret) 319 + goto err_unlock; 320 + 321 + if (alarm_secs > rtc_secs) { 322 + dev_err(dev, "%s: max for alarm is one month (%d)\n", 323 + __func__, ret); 324 + ret = -EINVAL; 325 + goto err_unlock; 326 + } 327 + } 328 + 329 + /* Disable the alarm before modifying it */ 330 + ret = _isl12057_rtc_update_alarm(dev, 0); 331 + if (ret < 0) { 332 + dev_err(dev, "%s: unable to disable the alarm (%d)\n", 333 + __func__, ret); 334 + goto err_unlock; 335 + } 336 + 337 + /* Program alarm registers */ 338 + regs[0] = bin2bcd(alarm_tm->tm_sec) & 0x7f; 339 + regs[1] = bin2bcd(alarm_tm->tm_min) & 0x7f; 340 + regs[2] = bin2bcd(alarm_tm->tm_hour) & 0x3f; 341 + regs[3] = bin2bcd(alarm_tm->tm_mday) & 0x3f; 342 + 343 + ret = regmap_bulk_write(data->regmap, ISL12057_REG_A1_SC, regs, 344 + ISL12057_A1_SEC_LEN); 345 + if (ret < 0) { 346 + dev_err(dev, "%s: writing alarm section failed (%d)\n", 347 + __func__, ret); 348 + goto err_unlock; 349 + } 350 + 351 + /* Enable or disable alarm */ 352 + ret = _isl12057_rtc_update_alarm(dev, enable); 353 + 354 + err_unlock: 355 + mutex_unlock(&data->lock); 356 + 357 + return ret; 233 358 } 234 359 235 360 static int isl12057_rtc_set_time(struct device *dev, struct rtc_time *tm) ··· 457 262 return 0; 458 263 } 459 264 265 + #ifdef CONFIG_OF 266 + /* 267 + * One would expect the device to be marked as a wakeup source only 268 + * when an IRQ pin of the RTC is routed to an interrupt line of the 269 + * CPU. In practice, such an IRQ pin can be connected to a PMIC and 270 + * this allows the device to be powered up when RTC alarm rings. 
This 271 + * is for instance the case on ReadyNAS 102, 104 and 2120. On those 272 + * devices with no IRQ driectly connected to the SoC, the RTC chip 273 + * can be forced as a wakeup source by stating that explicitly in 274 + * the device's .dts file using the "isil,irq2-can-wakeup-machine" 275 + * boolean property. This will guarantee 'wakealarm' sysfs entry is 276 + * available on the device. 277 + * 278 + * The function below returns 1, i.e. the capability of the chip to 279 + * wakeup the device, based on IRQ availability or if the boolean 280 + * property has been set in the .dts file. Otherwise, it returns 0. 281 + */ 282 + 283 + static bool isl12057_can_wakeup_machine(struct device *dev) 284 + { 285 + struct isl12057_rtc_data *data = dev_get_drvdata(dev); 286 + 287 + return (data->irq || of_property_read_bool(dev->of_node, 288 + "isil,irq2-can-wakeup-machine")); 289 + } 290 + #else 291 + static bool isl12057_can_wakeup_machine(struct device *dev) 292 + { 293 + struct isl12057_rtc_data *data = dev_get_drvdata(dev); 294 + 295 + return !!data->irq; 296 + } 297 + #endif 298 + 299 + static int isl12057_rtc_alarm_irq_enable(struct device *dev, 300 + unsigned int enable) 301 + { 302 + struct isl12057_rtc_data *rtc_data = dev_get_drvdata(dev); 303 + int ret = -ENOTTY; 304 + 305 + if (rtc_data->irq) 306 + ret = isl12057_rtc_update_alarm(dev, enable); 307 + 308 + return ret; 309 + } 310 + 311 + static irqreturn_t isl12057_rtc_interrupt(int irq, void *data) 312 + { 313 + struct i2c_client *client = data; 314 + struct isl12057_rtc_data *rtc_data = dev_get_drvdata(&client->dev); 315 + struct rtc_device *rtc = rtc_data->rtc; 316 + int ret, handled = IRQ_NONE; 317 + unsigned int sr; 318 + 319 + ret = regmap_read(rtc_data->regmap, ISL12057_REG_SR, &sr); 320 + if (!ret && (sr & ISL12057_REG_SR_A1F)) { 321 + dev_dbg(&client->dev, "RTC alarm!\n"); 322 + 323 + rtc_update_irq(rtc, 1, RTC_IRQF | RTC_AF); 324 + 325 + /* Acknowledge and disable the alarm */ 326 + 
_isl12057_rtc_clear_alarm(&client->dev); 327 + _isl12057_rtc_update_alarm(&client->dev, 0); 328 + 329 + handled = IRQ_HANDLED; 330 + } 331 + 332 + return handled; 333 + } 334 + 460 335 static const struct rtc_class_ops rtc_ops = { 461 - .read_time = isl12057_rtc_read_time, 336 + .read_time = _isl12057_rtc_read_time, 462 337 .set_time = isl12057_rtc_set_time, 338 + .read_alarm = isl12057_rtc_read_alarm, 339 + .set_alarm = isl12057_rtc_set_alarm, 340 + .alarm_irq_enable = isl12057_rtc_alarm_irq_enable, 463 341 }; 464 342 465 - static struct regmap_config isl12057_rtc_regmap_config = { 343 + static const struct regmap_config isl12057_rtc_regmap_config = { 466 344 .reg_bits = 8, 467 345 .val_bits = 8, 468 346 }; ··· 545 277 { 546 278 struct device *dev = &client->dev; 547 279 struct isl12057_rtc_data *data; 548 - struct rtc_device *rtc; 549 280 struct regmap *regmap; 550 281 int ret; 551 282 ··· 577 310 data->regmap = regmap; 578 311 dev_set_drvdata(dev, data); 579 312 580 - rtc = devm_rtc_device_register(dev, DRV_NAME, &rtc_ops, THIS_MODULE); 581 - return PTR_ERR_OR_ZERO(rtc); 313 + if (client->irq > 0) { 314 + ret = devm_request_threaded_irq(dev, client->irq, NULL, 315 + isl12057_rtc_interrupt, 316 + IRQF_SHARED|IRQF_ONESHOT, 317 + DRV_NAME, client); 318 + if (!ret) 319 + data->irq = client->irq; 320 + else 321 + dev_err(dev, "%s: irq %d unavailable (%d)\n", __func__, 322 + client->irq, ret); 323 + } 324 + 325 + if (isl12057_can_wakeup_machine(dev)) 326 + device_init_wakeup(dev, true); 327 + 328 + data->rtc = devm_rtc_device_register(dev, DRV_NAME, &rtc_ops, 329 + THIS_MODULE); 330 + ret = PTR_ERR_OR_ZERO(data->rtc); 331 + if (ret) { 332 + dev_err(dev, "%s: unable to register RTC device (%d)\n", 333 + __func__, ret); 334 + goto err; 335 + } 336 + 337 + /* We cannot support UIE mode if we do not have an IRQ line */ 338 + if (!data->irq) 339 + data->rtc->uie_unsupported = 1; 340 + 341 + err: 342 + return ret; 582 343 } 344 + 345 + static int isl12057_remove(struct 
i2c_client *client) 346 + { 347 + if (isl12057_can_wakeup_machine(&client->dev)) 348 + device_init_wakeup(&client->dev, false); 349 + 350 + return 0; 351 + } 352 + 353 + #ifdef CONFIG_PM_SLEEP 354 + static int isl12057_rtc_suspend(struct device *dev) 355 + { 356 + struct isl12057_rtc_data *rtc_data = dev_get_drvdata(dev); 357 + 358 + if (rtc_data->irq && device_may_wakeup(dev)) 359 + return enable_irq_wake(rtc_data->irq); 360 + 361 + return 0; 362 + } 363 + 364 + static int isl12057_rtc_resume(struct device *dev) 365 + { 366 + struct isl12057_rtc_data *rtc_data = dev_get_drvdata(dev); 367 + 368 + if (rtc_data->irq && device_may_wakeup(dev)) 369 + return disable_irq_wake(rtc_data->irq); 370 + 371 + return 0; 372 + } 373 + #endif 374 + 375 + static SIMPLE_DEV_PM_OPS(isl12057_rtc_pm_ops, isl12057_rtc_suspend, 376 + isl12057_rtc_resume); 583 377 584 378 #ifdef CONFIG_OF 585 379 static const struct of_device_id isl12057_dt_match[] = { ··· 659 331 .driver = { 660 332 .name = DRV_NAME, 661 333 .owner = THIS_MODULE, 334 + .pm = &isl12057_rtc_pm_ops, 662 335 .of_match_table = of_match_ptr(isl12057_dt_match), 663 336 }, 664 337 .probe = isl12057_probe, 338 + .remove = isl12057_remove, 665 339 .id_table = isl12057_id, 666 340 }; 667 341 module_i2c_driver(isl12057_driver);
+10
drivers/rtc/rtc-pcf2123.c
··· 38 38 #include <linux/errno.h> 39 39 #include <linux/init.h> 40 40 #include <linux/kernel.h> 41 + #include <linux/of.h> 41 42 #include <linux/string.h> 42 43 #include <linux/slab.h> 43 44 #include <linux/rtc.h> ··· 341 340 return 0; 342 341 } 343 342 343 + #ifdef CONFIG_OF 344 + static const struct of_device_id pcf2123_dt_ids[] = { 345 + { .compatible = "nxp,rtc-pcf2123", }, 346 + { /* sentinel */ } 347 + }; 348 + MODULE_DEVICE_TABLE(of, pcf2123_dt_ids); 349 + #endif 350 + 344 351 static struct spi_driver pcf2123_driver = { 345 352 .driver = { 346 353 .name = "rtc-pcf2123", 347 354 .owner = THIS_MODULE, 355 + .of_match_table = of_match_ptr(pcf2123_dt_ids), 348 356 }, 349 357 .probe = pcf2123_probe, 350 358 .remove = pcf2123_remove,
+8 -2
drivers/rtc/rtc-rk808.c
··· 67 67 /* Force an update of the shadowed registers right now */ 68 68 ret = regmap_update_bits(rk808->regmap, RK808_RTC_CTRL_REG, 69 69 BIT_RTC_CTRL_REG_RTC_GET_TIME, 70 - 0); 70 + BIT_RTC_CTRL_REG_RTC_GET_TIME); 71 71 if (ret) { 72 72 dev_err(dev, "Failed to update bits rtc_ctrl: %d\n", ret); 73 73 return ret; 74 74 } 75 75 76 + /* 77 + * After we set the GET_TIME bit, the rtc time can't be read 78 + * immediately. So we should wait up to 31.25 us, about one cycle of 79 + * 32khz. If we clear the GET_TIME bit here, the time of i2c transfer 80 + * certainly more than 31.25us: 16 * 2.5us at 400kHz bus frequency. 81 + */ 76 82 ret = regmap_update_bits(rk808->regmap, RK808_RTC_CTRL_REG, 77 83 BIT_RTC_CTRL_REG_RTC_GET_TIME, 78 - BIT_RTC_CTRL_REG_RTC_GET_TIME); 84 + 0); 79 85 if (ret) { 80 86 dev_err(dev, "Failed to update bits rtc_ctrl: %d\n", ret); 81 87 return ret;
-1
drivers/scsi/be2iscsi/be_main.c
··· 48 48 static unsigned int be_max_phys_size = 64; 49 49 static unsigned int enable_msix = 1; 50 50 51 - MODULE_DEVICE_TABLE(pci, beiscsi_pci_id_table); 52 51 MODULE_DESCRIPTION(DRV_DESC " " BUILD_STR); 53 52 MODULE_VERSION(BUILD_STR); 54 53 MODULE_AUTHOR("Emulex Corporation");
+3 -3
drivers/scsi/scsi_debug.c
··· 4658 4658 return scnprintf(buf, PAGE_SIZE, "0-%u\n", 4659 4659 sdebug_store_sectors); 4660 4660 4661 - count = bitmap_scnlistprintf(buf, PAGE_SIZE, map_storep, map_size); 4662 - 4661 + count = scnprintf(buf, PAGE_SIZE - 1, "%*pbl", 4662 + (int)map_size, map_storep); 4663 4663 buf[count++] = '\n'; 4664 - buf[count++] = 0; 4664 + buf[count] = '\0'; 4665 4665 4666 4666 return count; 4667 4667 }
+2 -5
drivers/usb/host/whci/debug.c
··· 86 86 static int di_print(struct seq_file *s, void *p) 87 87 { 88 88 struct whc *whc = s->private; 89 - char buf[72]; 90 89 int d; 91 90 92 91 for (d = 0; d < whc->n_devices; d++) { 93 92 struct di_buf_entry *di = &whc->di_buf[d]; 94 93 95 - bitmap_scnprintf(buf, sizeof(buf), 96 - (unsigned long *)di->availability_info, UWB_NUM_MAS); 97 - 98 94 seq_printf(s, "DI[%d]\n", d); 99 - seq_printf(s, " availability: %s\n", buf); 95 + seq_printf(s, " availability: %*pb\n", 96 + UWB_NUM_MAS, (unsigned long *)di->availability_info); 100 97 seq_printf(s, " %c%c key idx: %d dev addr: %d\n", 101 98 (di->addr_sec_info & WHC_DI_SECURE) ? 'S' : ' ', 102 99 (di->addr_sec_info & WHC_DI_DISABLE) ? 'D' : ' ',
+2 -3
drivers/usb/wusbcore/reservation.c
··· 49 49 struct wusbhc *wusbhc = rsv->pal_priv; 50 50 struct device *dev = wusbhc->dev; 51 51 struct uwb_mas_bm mas; 52 - char buf[72]; 53 52 54 53 dev_dbg(dev, "%s: state = %d\n", __func__, rsv->state); 55 54 switch (rsv->state) { 56 55 case UWB_RSV_STATE_O_ESTABLISHED: 57 56 uwb_rsv_get_usable_mas(rsv, &mas); 58 - bitmap_scnprintf(buf, sizeof(buf), mas.bm, UWB_NUM_MAS); 59 - dev_dbg(dev, "established reservation: %s\n", buf); 57 + dev_dbg(dev, "established reservation: %*pb\n", 58 + UWB_NUM_MAS, mas.bm); 60 59 wusbhc_bwa_set(wusbhc, rsv->stream, &mas); 61 60 break; 62 61 case UWB_RSV_STATE_NONE:
+2 -3
drivers/usb/wusbcore/wa-rpipe.c
··· 496 496 struct device *dev = &wa->usb_iface->dev; 497 497 498 498 if (!bitmap_empty(wa->rpipe_bm, wa->rpipes)) { 499 - char buf[256]; 500 499 WARN_ON(1); 501 - bitmap_scnprintf(buf, sizeof(buf), wa->rpipe_bm, wa->rpipes); 502 - dev_err(dev, "BUG: pipes not released on exit: %s\n", buf); 500 + dev_err(dev, "BUG: pipes not released on exit: %*pb\n", 501 + wa->rpipes, wa->rpipe_bm); 503 502 } 504 503 kfree(wa->rpipe_bm); 505 504 }
+2 -5
drivers/usb/wusbcore/wusbhc.c
··· 496 496 { 497 497 clear_bit(0, wusb_cluster_id_table); 498 498 if (!bitmap_empty(wusb_cluster_id_table, CLUSTER_IDS)) { 499 - char buf[256]; 500 - bitmap_scnprintf(buf, sizeof(buf), wusb_cluster_id_table, 501 - CLUSTER_IDS); 502 - printk(KERN_ERR "BUG: WUSB Cluster IDs not released " 503 - "on exit: %s\n", buf); 499 + printk(KERN_ERR "BUG: WUSB Cluster IDs not released on exit: %*pb\n", 500 + CLUSTER_IDS, wusb_cluster_id_table); 504 501 WARN_ON(1); 505 502 } 506 503 usb_unregister_notify(&wusb_usb_notifier);
-2
drivers/uwb/drp.c
··· 619 619 struct device *dev = &rc->uwb_dev.dev; 620 620 struct uwb_mas_bm mas; 621 621 struct uwb_cnflt_alien *cnflt; 622 - char buf[72]; 623 622 unsigned long delay_us = UWB_MAS_LENGTH_US * UWB_MAS_PER_ZONE; 624 623 625 624 uwb_drp_ie_to_bm(&mas, drp_ie); 626 - bitmap_scnprintf(buf, sizeof(buf), mas.bm, UWB_NUM_MAS); 627 625 628 626 list_for_each_entry(cnflt, &rc->cnflt_alien_list, rc_node) { 629 627 if (bitmap_equal(cnflt->mas.bm, mas.bm, UWB_NUM_MAS)) {
+4 -10
drivers/uwb/uwb-debug.c
··· 217 217 struct uwb_dev_addr devaddr; 218 218 char owner[UWB_ADDR_STRSIZE], target[UWB_ADDR_STRSIZE]; 219 219 bool is_owner; 220 - char buf[72]; 221 220 222 221 uwb_dev_addr_print(owner, sizeof(owner), &rsv->owner->dev_addr); 223 222 if (rsv->target.type == UWB_RSV_TARGET_DEV) { ··· 233 234 owner, target, uwb_rsv_state_str(rsv->state)); 234 235 seq_printf(s, " stream: %d type: %s\n", 235 236 rsv->stream, uwb_rsv_type_str(rsv->type)); 236 - bitmap_scnprintf(buf, sizeof(buf), rsv->mas.bm, UWB_NUM_MAS); 237 - seq_printf(s, " %s\n", buf); 237 + seq_printf(s, " %*pb\n", UWB_NUM_MAS, rsv->mas.bm); 238 238 } 239 239 240 240 mutex_unlock(&rc->rsvs_mutex); ··· 257 259 static int drp_avail_print(struct seq_file *s, void *p) 258 260 { 259 261 struct uwb_rc *rc = s->private; 260 - char buf[72]; 261 262 262 - bitmap_scnprintf(buf, sizeof(buf), rc->drp_avail.global, UWB_NUM_MAS); 263 - seq_printf(s, "global: %s\n", buf); 264 - bitmap_scnprintf(buf, sizeof(buf), rc->drp_avail.local, UWB_NUM_MAS); 265 - seq_printf(s, "local: %s\n", buf); 266 - bitmap_scnprintf(buf, sizeof(buf), rc->drp_avail.pending, UWB_NUM_MAS); 267 - seq_printf(s, "pending: %s\n", buf); 263 + seq_printf(s, "global: %*pb\n", UWB_NUM_MAS, rc->drp_avail.global); 264 + seq_printf(s, "local: %*pb\n", UWB_NUM_MAS, rc->drp_avail.local); 265 + seq_printf(s, "pending: %*pb\n", UWB_NUM_MAS, rc->drp_avail.pending); 268 266 269 267 return 0; 270 268 }
+5
fs/dcache.c
··· 38 38 #include <linux/prefetch.h> 39 39 #include <linux/ratelimit.h> 40 40 #include <linux/list_lru.h> 41 + #include <linux/kasan.h> 42 + 41 43 #include "internal.h" 42 44 #include "mount.h" 43 45 ··· 1431 1429 } 1432 1430 atomic_set(&p->u.count, 1); 1433 1431 dname = p->name; 1432 + if (IS_ENABLED(CONFIG_DCACHE_WORD_ACCESS)) 1433 + kasan_unpoison_shadow(dname, 1434 + round_up(name->len + 1, sizeof(unsigned long))); 1434 1435 } else { 1435 1436 dname = dentry->d_iname; 1436 1437 }
+2 -2
fs/eventpoll.c
··· 1639 1639 1640 1640 spin_lock_irqsave(&ep->lock, flags); 1641 1641 } 1642 - __remove_wait_queue(&ep->wq, &wait); 1643 1642 1644 - set_current_state(TASK_RUNNING); 1643 + __remove_wait_queue(&ep->wq, &wait); 1644 + __set_current_state(TASK_RUNNING); 1645 1645 } 1646 1646 check_events: 1647 1647 /* Is it worth to try to dig for events ? */
+10 -14
fs/kernfs/dir.c
··· 411 411 412 412 if (kernfs_type(kn) == KERNFS_LINK) 413 413 kernfs_put(kn->symlink.target_kn); 414 - if (!(kn->flags & KERNFS_STATIC_NAME)) 415 - kfree(kn->name); 414 + 415 + kfree_const(kn->name); 416 + 416 417 if (kn->iattr) { 417 418 if (kn->iattr->ia_secdata) 418 419 security_release_secctx(kn->iattr->ia_secdata, ··· 507 506 const char *name, umode_t mode, 508 507 unsigned flags) 509 508 { 510 - char *dup_name = NULL; 511 509 struct kernfs_node *kn; 512 510 int ret; 513 511 514 - if (!(flags & KERNFS_STATIC_NAME)) { 515 - name = dup_name = kstrdup(name, GFP_KERNEL); 516 - if (!name) 517 - return NULL; 518 - } 512 + name = kstrdup_const(name, GFP_KERNEL); 513 + if (!name) 514 + return NULL; 519 515 520 516 kn = kmem_cache_zalloc(kernfs_node_cache, GFP_KERNEL); 521 517 if (!kn) ··· 536 538 err_out2: 537 539 kmem_cache_free(kernfs_node_cache, kn); 538 540 err_out1: 539 - kfree(dup_name); 541 + kfree_const(name); 540 542 return NULL; 541 543 } 542 544 ··· 1262 1264 /* rename kernfs_node */ 1263 1265 if (strcmp(kn->name, new_name) != 0) { 1264 1266 error = -ENOMEM; 1265 - new_name = kstrdup(new_name, GFP_KERNEL); 1267 + new_name = kstrdup_const(new_name, GFP_KERNEL); 1266 1268 if (!new_name) 1267 1269 goto out; 1268 1270 } else { ··· 1283 1285 1284 1286 kn->ns = new_ns; 1285 1287 if (new_name) { 1286 - if (!(kn->flags & KERNFS_STATIC_NAME)) 1287 - old_name = kn->name; 1288 - kn->flags &= ~KERNFS_STATIC_NAME; 1288 + old_name = kn->name; 1289 1289 kn->name = new_name; 1290 1290 } 1291 1291 ··· 1293 1297 kernfs_link_sibling(kn); 1294 1298 1295 1299 kernfs_put(old_parent); 1296 - kfree(old_name); 1300 + kfree_const(old_name); 1297 1301 1298 1302 error = 0; 1299 1303 out:
-4
fs/kernfs/file.c
··· 901 901 * @ops: kernfs operations for the file 902 902 * @priv: private data for the file 903 903 * @ns: optional namespace tag of the file 904 - * @name_is_static: don't copy file name 905 904 * @key: lockdep key for the file's active_ref, %NULL to disable lockdep 906 905 * 907 906 * Returns the created node on success, ERR_PTR() value on error. ··· 910 911 umode_t mode, loff_t size, 911 912 const struct kernfs_ops *ops, 912 913 void *priv, const void *ns, 913 - bool name_is_static, 914 914 struct lock_class_key *key) 915 915 { 916 916 struct kernfs_node *kn; ··· 917 919 int rc; 918 920 919 921 flags = KERNFS_FILE; 920 - if (name_is_static) 921 - flags |= KERNFS_STATIC_NAME; 922 922 923 923 kn = kernfs_new_node(parent, name, (mode & S_IALLUGO) | S_IFREG, flags); 924 924 if (!kn)
+3 -3
fs/namespace.c
··· 201 201 goto out_free_cache; 202 202 203 203 if (name) { 204 - mnt->mnt_devname = kstrdup(name, GFP_KERNEL); 204 + mnt->mnt_devname = kstrdup_const(name, GFP_KERNEL); 205 205 if (!mnt->mnt_devname) 206 206 goto out_free_id; 207 207 } ··· 234 234 235 235 #ifdef CONFIG_SMP 236 236 out_free_devname: 237 - kfree(mnt->mnt_devname); 237 + kfree_const(mnt->mnt_devname); 238 238 #endif 239 239 out_free_id: 240 240 mnt_free_id(mnt); ··· 568 568 569 569 static void free_vfsmnt(struct mount *mnt) 570 570 { 571 - kfree(mnt->mnt_devname); 571 + kfree_const(mnt->mnt_devname); 572 572 #ifdef CONFIG_SMP 573 573 free_percpu(mnt->mnt_pcp); 574 574 #endif
+4 -6
fs/proc/array.c
··· 316 316 317 317 static void task_cpus_allowed(struct seq_file *m, struct task_struct *task) 318 318 { 319 - seq_puts(m, "Cpus_allowed:\t"); 320 - seq_cpumask(m, &task->cpus_allowed); 321 - seq_putc(m, '\n'); 322 - seq_puts(m, "Cpus_allowed_list:\t"); 323 - seq_cpumask_list(m, &task->cpus_allowed); 324 - seq_putc(m, '\n'); 319 + seq_printf(m, "Cpus_allowed:\t%*pb\n", 320 + cpumask_pr_args(&task->cpus_allowed)); 321 + seq_printf(m, "Cpus_allowed_list:\t%*pbl\n", 322 + cpumask_pr_args(&task->cpus_allowed)); 325 323 } 326 324 327 325 int proc_pid_status(struct seq_file *m, struct pid_namespace *ns,
-32
fs/seq_file.c
··· 539 539 return res; 540 540 } 541 541 542 - int seq_bitmap(struct seq_file *m, const unsigned long *bits, 543 - unsigned int nr_bits) 544 - { 545 - if (m->count < m->size) { 546 - int len = bitmap_scnprintf(m->buf + m->count, 547 - m->size - m->count, bits, nr_bits); 548 - if (m->count + len < m->size) { 549 - m->count += len; 550 - return 0; 551 - } 552 - } 553 - seq_set_overflow(m); 554 - return -1; 555 - } 556 - EXPORT_SYMBOL(seq_bitmap); 557 - 558 - int seq_bitmap_list(struct seq_file *m, const unsigned long *bits, 559 - unsigned int nr_bits) 560 - { 561 - if (m->count < m->size) { 562 - int len = bitmap_scnlistprintf(m->buf + m->count, 563 - m->size - m->count, bits, nr_bits); 564 - if (m->count + len < m->size) { 565 - m->count += len; 566 - return 0; 567 - } 568 - } 569 - seq_set_overflow(m); 570 - return -1; 571 - } 572 - EXPORT_SYMBOL(seq_bitmap_list); 573 - 574 542 static void *single_start(struct seq_file *p, loff_t *pos) 575 543 { 576 544 return NULL + (*pos == 0);
+1 -1
fs/sysfs/file.c
··· 295 295 key = attr->key ?: (struct lock_class_key *)&attr->skey; 296 296 #endif 297 297 kn = __kernfs_create_file(parent, attr->name, mode & 0777, size, ops, 298 - (void *)attr, ns, true, key); 298 + (void *)attr, ns, key); 299 299 if (IS_ERR(kn)) { 300 300 if (PTR_ERR(kn) == -EEXIST) 301 301 sysfs_warn_dup(parent, attr->name);
+1
include/asm-generic/vmlinux.lds.h
··· 478 478 #define KERNEL_CTORS() . = ALIGN(8); \ 479 479 VMLINUX_SYMBOL(__ctors_start) = .; \ 480 480 *(.ctors) \ 481 + *(SORT(.init_array.*)) \ 481 482 *(.init_array) \ 482 483 VMLINUX_SYMBOL(__ctors_end) = .; 483 484 #else
+17 -20
include/linux/bitmap.h
··· 52 52 * bitmap_bitremap(oldbit, old, new, nbits) newbit = map(old, new)(oldbit) 53 53 * bitmap_onto(dst, orig, relmap, nbits) *dst = orig relative to relmap 54 54 * bitmap_fold(dst, orig, sz, nbits) dst bits = orig bits mod sz 55 - * bitmap_scnprintf(buf, len, src, nbits) Print bitmap src to buf 56 55 * bitmap_parse(buf, buflen, dst, nbits) Parse bitmap dst from kernel buf 57 56 * bitmap_parse_user(ubuf, ulen, dst, nbits) Parse bitmap dst from user buf 58 - * bitmap_scnlistprintf(buf, len, src, nbits) Print bitmap src as list to buf 59 57 * bitmap_parselist(buf, dst, nbits) Parse bitmap dst from kernel buf 60 58 * bitmap_parselist_user(buf, dst, nbits) Parse bitmap dst from user buf 61 59 * bitmap_find_free_region(bitmap, bits, order) Find and allocate bit region 62 60 * bitmap_release_region(bitmap, pos, order) Free specified bit region 63 61 * bitmap_allocate_region(bitmap, pos, order) Allocate specified bit region 64 - * bitmap_print_to_pagebuf(list, buf, mask, nbits) Print bitmap src as list/hex 65 62 */ 66 63 67 64 /* ··· 93 96 const unsigned long *bitmap2, unsigned int nbits); 94 97 extern void __bitmap_complement(unsigned long *dst, const unsigned long *src, 95 98 unsigned int nbits); 96 - extern void __bitmap_shift_right(unsigned long *dst, 97 - const unsigned long *src, int shift, int bits); 98 - extern void __bitmap_shift_left(unsigned long *dst, 99 - const unsigned long *src, int shift, int bits); 99 + extern void __bitmap_shift_right(unsigned long *dst, const unsigned long *src, 100 + unsigned int shift, unsigned int nbits); 101 + extern void __bitmap_shift_left(unsigned long *dst, const unsigned long *src, 102 + unsigned int shift, unsigned int nbits); 100 103 extern int __bitmap_and(unsigned long *dst, const unsigned long *bitmap1, 101 104 const unsigned long *bitmap2, unsigned int nbits); 102 105 extern void __bitmap_or(unsigned long *dst, const unsigned long *bitmap1, ··· 144 147 align_mask, 0); 145 148 } 146 149 147 - extern int bitmap_scnprintf(char *buf, unsigned int len, 148 - const unsigned long *src, int nbits); 149 150 extern int __bitmap_parse(const char *buf, unsigned int buflen, int is_user, 150 151 unsigned long *dst, int nbits); 151 152 extern int bitmap_parse_user(const char __user *ubuf, unsigned int ulen, 152 153 unsigned long *dst, int nbits); 153 - extern int bitmap_scnlistprintf(char *buf, unsigned int len, 154 - const unsigned long *src, int nbits); 155 154 extern int bitmap_parselist(const char *buf, unsigned long *maskp, 156 155 int nmaskbits); 157 156 extern int bitmap_parselist_user(const char __user *ubuf, unsigned int ulen, ··· 163 170 extern int bitmap_find_free_region(unsigned long *bitmap, unsigned int bits, int order); 164 171 extern void bitmap_release_region(unsigned long *bitmap, unsigned int pos, int order); 165 172 extern int bitmap_allocate_region(unsigned long *bitmap, unsigned int pos, int order); 166 - extern void bitmap_copy_le(void *dst, const unsigned long *src, int nbits); 173 + #ifdef __BIG_ENDIAN 174 + extern void bitmap_copy_le(unsigned long *dst, const unsigned long *src, unsigned int nbits); 175 + #else 176 + #define bitmap_copy_le bitmap_copy 177 + #endif 167 178 extern unsigned int bitmap_ord_to_pos(const unsigned long *bitmap, unsigned int ord, unsigned int nbits); 168 179 extern int bitmap_print_to_pagebuf(bool list, char *buf, 169 180 const unsigned long *maskp, int nmaskbits); ··· 306 309 return __bitmap_weight(src, nbits); 307 310 } 308 311 309 - static inline void bitmap_shift_right(unsigned long *dst, 310 - const unsigned long *src, int n, int nbits) 312 + static inline void bitmap_shift_right(unsigned long *dst, const unsigned long *src, 313 + unsigned int shift, int nbits) 311 314 { 312 315 if (small_const_nbits(nbits)) 313 316 *dst = (*src & BITMAP_LAST_WORD_MASK(nbits)) >> shift; 314 317 else 315 318 __bitmap_shift_right(dst, src, shift, nbits); 316 319 } 317 320 318 - static inline void bitmap_shift_left(unsigned long *dst, 319 - const unsigned long *src, int n, int nbits) 321 + static inline void bitmap_shift_left(unsigned long *dst, const unsigned long *src, 322 + unsigned int shift, unsigned int nbits) 320 323 { 321 324 if (small_const_nbits(nbits)) 322 325 *dst = (*src << shift) & BITMAP_LAST_WORD_MASK(nbits); 323 326 else 324 327 __bitmap_shift_left(dst, src, shift, nbits); 325 328 } 326 329 327 330 static inline int bitmap_parse(const char *buf, unsigned int buflen,
+1
include/linux/compiler-gcc.h
··· 66 66 #define __deprecated __attribute__((deprecated)) 67 67 #define __packed __attribute__((packed)) 68 68 #define __weak __attribute__((weak)) 69 + #define __alias(symbol) __attribute__((alias(#symbol))) 69 70 70 71 /* 71 72 * it doesn't make sense on ARM (currently the only user of __naked) to trace
+4
include/linux/compiler-gcc4.h
··· 85 85 #define __HAVE_BUILTIN_BSWAP16__ 86 86 #endif 87 87 #endif /* CONFIG_ARCH_USE_BUILTIN_BSWAP */ 88 + 89 + #if GCC_VERSION >= 40902 90 + #define KASAN_ABI_VERSION 3 91 + #endif
+2
include/linux/compiler-gcc5.h
··· 63 63 #define __HAVE_BUILTIN_BSWAP64__ 64 64 #define __HAVE_BUILTIN_BSWAP16__ 65 65 #endif /* CONFIG_ARCH_USE_BUILTIN_BSWAP */ 66 + 67 + #define KASAN_ABI_VERSION 4
+13 -36
include/linux/cpumask.h
··· 22 22 */ 23 23 #define cpumask_bits(maskp) ((maskp)->bits) 24 24 25 + /** 26 + * cpumask_pr_args - printf args to output a cpumask 27 + * @maskp: cpumask to be printed 28 + * 29 + * Can be used to provide arguments for '%*pb[l]' when printing a cpumask. 30 + */ 31 + #define cpumask_pr_args(maskp) nr_cpu_ids, cpumask_bits(maskp) 32 + 25 33 #if NR_CPUS == 1 26 34 #define nr_cpu_ids 1 27 35 #else ··· 547 539 #define cpumask_of(cpu) (get_cpu_mask(cpu)) 548 540 549 541 /** 550 - * cpumask_scnprintf - print a cpumask into a string as comma-separated hex 551 - * @buf: the buffer to sprintf into 552 - * @len: the length of the buffer 553 - * @srcp: the cpumask to print 554 - * 555 - * If len is zero, returns zero. Otherwise returns the length of the 556 - * (nul-terminated) @buf string. 557 - */ 558 - static inline int cpumask_scnprintf(char *buf, int len, 559 - const struct cpumask *srcp) 560 - { 561 - return bitmap_scnprintf(buf, len, cpumask_bits(srcp), nr_cpumask_bits); 562 - } 563 - 564 - /** 565 542 * cpumask_parse_user - extract a cpumask from a user string 566 543 * @buf: the buffer to extract from 567 544 * @len: the length of the buffer ··· 557 564 static inline int cpumask_parse_user(const char __user *buf, int len, 558 565 struct cpumask *dstp) 559 566 { 560 - return bitmap_parse_user(buf, len, cpumask_bits(dstp), nr_cpumask_bits); 567 + return bitmap_parse_user(buf, len, cpumask_bits(dstp), nr_cpu_ids); 561 568 } 562 569 563 570 /** ··· 572 579 struct cpumask *dstp) 573 580 { 574 581 return bitmap_parselist_user(buf, len, cpumask_bits(dstp), 575 - nr_cpumask_bits); 576 - } 577 - 578 - /** 579 - * cpulist_scnprintf - print a cpumask into a string as comma-separated list 580 - * @buf: the buffer to sprintf into 581 - * @len: the length of the buffer 582 - * @srcp: the cpumask to print 583 - * 584 - * If len is zero, returns zero. Otherwise returns the length of the 585 - * (nul-terminated) @buf string. 
586 - */ 587 - static inline int cpulist_scnprintf(char *buf, int len, 588 - const struct cpumask *srcp) 589 - { 590 - return bitmap_scnlistprintf(buf, len, cpumask_bits(srcp), 591 - nr_cpumask_bits); 582 + nr_cpu_ids); 592 583 } 593 584 594 585 /** ··· 587 610 char *nl = strchr(buf, '\n'); 588 611 unsigned int len = nl ? (unsigned int)(nl - buf) : strlen(buf); 589 612 590 - return bitmap_parse(buf, len, cpumask_bits(dstp), nr_cpumask_bits); 613 + return bitmap_parse(buf, len, cpumask_bits(dstp), nr_cpu_ids); 591 614 } 592 615 593 616 /** ··· 599 622 */ 600 623 static inline int cpulist_parse(const char *buf, struct cpumask *dstp) 601 624 { 602 - return bitmap_parselist(buf, cpumask_bits(dstp), nr_cpumask_bits); 625 + return bitmap_parselist(buf, cpumask_bits(dstp), nr_cpu_ids); 603 626 } 604 627 605 628 /** ··· 794 817 cpumap_print_to_pagebuf(bool list, char *buf, const struct cpumask *mask) 795 818 { 796 819 return bitmap_print_to_pagebuf(list, buf, cpumask_bits(mask), 797 - nr_cpumask_bits); 820 + nr_cpu_ids); 798 821 } 799 822 800 823 /*
+8
include/linux/init_task.h
··· 175 175 # define INIT_NUMA_BALANCING(tsk) 176 176 #endif 177 177 178 + #ifdef CONFIG_KASAN 179 + # define INIT_KASAN(tsk) \ 180 + .kasan_depth = 1, 181 + #else 182 + # define INIT_KASAN(tsk) 183 + #endif 184 + 178 185 /* 179 186 * INIT_TASK is used to set up the first task table, touch at 180 187 * your own risk!. Base=0, limit=0x1fffff (=2MB) ··· 257 250 INIT_RT_MUTEXES(tsk) \ 258 251 INIT_VTIME(tsk) \ 259 252 INIT_NUMA_BALANCING(tsk) \ 253 + INIT_KASAN(tsk) \ 260 254 } 261 255 262 256
+89
include/linux/kasan.h
··· 1 + #ifndef _LINUX_KASAN_H 2 + #define _LINUX_KASAN_H 3 + 4 + #include <linux/types.h> 5 + 6 + struct kmem_cache; 7 + struct page; 8 + 9 + #ifdef CONFIG_KASAN 10 + 11 + #define KASAN_SHADOW_SCALE_SHIFT 3 12 + #define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL) 13 + 14 + #include <asm/kasan.h> 15 + #include <linux/sched.h> 16 + 17 + static inline void *kasan_mem_to_shadow(const void *addr) 18 + { 19 + return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT) 20 + + KASAN_SHADOW_OFFSET; 21 + } 22 + 23 + /* Enable reporting bugs after kasan_disable_current() */ 24 + static inline void kasan_enable_current(void) 25 + { 26 + current->kasan_depth++; 27 + } 28 + 29 + /* Disable reporting bugs for current task */ 30 + static inline void kasan_disable_current(void) 31 + { 32 + current->kasan_depth--; 33 + } 34 + 35 + void kasan_unpoison_shadow(const void *address, size_t size); 36 + 37 + void kasan_alloc_pages(struct page *page, unsigned int order); 38 + void kasan_free_pages(struct page *page, unsigned int order); 39 + 40 + void kasan_poison_slab(struct page *page); 41 + void kasan_unpoison_object_data(struct kmem_cache *cache, void *object); 42 + void kasan_poison_object_data(struct kmem_cache *cache, void *object); 43 + 44 + void kasan_kmalloc_large(const void *ptr, size_t size); 45 + void kasan_kfree_large(const void *ptr); 46 + void kasan_kmalloc(struct kmem_cache *s, const void *object, size_t size); 47 + void kasan_krealloc(const void *object, size_t new_size); 48 + 49 + void kasan_slab_alloc(struct kmem_cache *s, void *object); 50 + void kasan_slab_free(struct kmem_cache *s, void *object); 51 + 52 + #define MODULE_ALIGN (PAGE_SIZE << KASAN_SHADOW_SCALE_SHIFT) 53 + 54 + int kasan_module_alloc(void *addr, size_t size); 55 + void kasan_module_free(void *addr); 56 + 57 + #else /* CONFIG_KASAN */ 58 + 59 + #define MODULE_ALIGN 1 60 + 61 + static inline void kasan_unpoison_shadow(const void *address, size_t size) {} 62 + 63 + static inline void kasan_enable_current(void) {} 64 + static inline void kasan_disable_current(void) {} 65 + 66 + static inline void kasan_alloc_pages(struct page *page, unsigned int order) {} 67 + static inline void kasan_free_pages(struct page *page, unsigned int order) {} 68 + 69 + static inline void kasan_poison_slab(struct page *page) {} 70 + static inline void kasan_unpoison_object_data(struct kmem_cache *cache, 71 + void *object) {} 72 + static inline void kasan_poison_object_data(struct kmem_cache *cache, 73 + void *object) {} 74 + 75 + static inline void kasan_kmalloc_large(void *ptr, size_t size) {} 76 + static inline void kasan_kfree_large(const void *ptr) {} 77 + static inline void kasan_kmalloc(struct kmem_cache *s, const void *object, 78 + size_t size) {} 79 + static inline void kasan_krealloc(const void *object, size_t new_size) {} 80 + 81 + static inline void kasan_slab_alloc(struct kmem_cache *s, void *object) {} 82 + static inline void kasan_slab_free(struct kmem_cache *s, void *object) {} 83 + 84 + static inline int kasan_module_alloc(void *addr, size_t size) { return 0; } 85 + static inline void kasan_module_free(void *addr) {} 86 + 87 + #endif /* CONFIG_KASAN */ 88 + 89 + #endif /* LINUX_KASAN_H */
+2 -5
include/linux/kernfs.h
··· 43 43 KERNFS_HAS_SEQ_SHOW = 0x0040, 44 44 KERNFS_HAS_MMAP = 0x0080, 45 45 KERNFS_LOCKDEP = 0x0100, 46 - KERNFS_STATIC_NAME = 0x0200, 47 46 KERNFS_SUICIDAL = 0x0400, 48 47 KERNFS_SUICIDED = 0x0800, 49 48 }; ··· 290 291 umode_t mode, loff_t size, 291 292 const struct kernfs_ops *ops, 292 293 void *priv, const void *ns, 293 - bool name_is_static, 294 294 struct lock_class_key *key); 295 295 struct kernfs_node *kernfs_create_link(struct kernfs_node *parent, 296 296 const char *name, ··· 367 369 static inline struct kernfs_node * 368 370 __kernfs_create_file(struct kernfs_node *parent, const char *name, 369 371 umode_t mode, loff_t size, const struct kernfs_ops *ops, 370 - void *priv, const void *ns, bool name_is_static, 371 - struct lock_class_key *key) 372 + void *priv, const void *ns, struct lock_class_key *key) 372 373 { return ERR_PTR(-ENOSYS); } 373 374 374 375 static inline struct kernfs_node * ··· 436 439 key = (struct lock_class_key *)&ops->lockdep_key; 437 440 #endif 438 441 return __kernfs_create_file(parent, name, mode, size, ops, priv, ns, 439 - false, key); 442 + key); 440 443 } 441 444 442 445 static inline struct kernfs_node *
+1 -1
include/linux/module.h
··· 135 135 #ifdef MODULE 136 136 /* Creates an alias so file2alias.c can find device table. */ 137 137 #define MODULE_DEVICE_TABLE(type, name) \ 138 - extern const struct type##_device_id __mod_##type##__##name##_device_table \ 138 + extern const typeof(name) __mod_##type##__##name##_device_table \ 139 139 __attribute__ ((unused, alias(__stringify(name)))) 140 140 #else /* !MODULE */ 141 141 #define MODULE_DEVICE_TABLE(type, name)
+15 -26
include/linux/nodemask.h
··· 8 8 * See detailed comments in the file linux/bitmap.h describing the 9 9 * data type on which these nodemasks are based. 10 10 * 11 - * For details of nodemask_scnprintf() and nodemask_parse_user(), 12 - * see bitmap_scnprintf() and bitmap_parse_user() in lib/bitmap.c. 13 - * For details of nodelist_scnprintf() and nodelist_parse(), see 14 - * bitmap_scnlistprintf() and bitmap_parselist(), also in bitmap.c. 15 - * For details of node_remap(), see bitmap_bitremap in lib/bitmap.c. 16 - * For details of nodes_remap(), see bitmap_remap in lib/bitmap.c. 17 - * For details of nodes_onto(), see bitmap_onto in lib/bitmap.c. 18 - * For details of nodes_fold(), see bitmap_fold in lib/bitmap.c. 11 + * For details of nodemask_parse_user(), see bitmap_parse_user() in 12 + * lib/bitmap.c. For details of nodelist_parse(), see bitmap_parselist(), 13 + * also in bitmap.c. For details of node_remap(), see bitmap_bitremap in 14 + * lib/bitmap.c. For details of nodes_remap(), see bitmap_remap in 15 + * lib/bitmap.c. For details of nodes_onto(), see bitmap_onto in 16 + * lib/bitmap.c. For details of nodes_fold(), see bitmap_fold in 17 + * lib/bitmap.c. 
19 18 * 20 19 * The available nodemask operations are: ··· 51 52 * NODE_MASK_NONE Initializer - no bits set 52 53 * unsigned long *nodes_addr(mask) Array of unsigned long's in mask 53 54 * 54 - * int nodemask_scnprintf(buf, len, mask) Format nodemask for printing 55 55 * int nodemask_parse_user(ubuf, ulen, mask) Parse ascii string as nodemask 56 - * int nodelist_scnprintf(buf, len, mask) Format nodemask as list for printing 57 56 * int nodelist_parse(buf, map) Parse ascii string as nodelist 58 57 * int node_remap(oldbit, old, new) newbit = map(old, new)(oldbit) 59 58 * void nodes_remap(dst, src, old, new) *dst = map(old, new)(src) ··· 94 97 95 98 typedef struct { DECLARE_BITMAP(bits, MAX_NUMNODES); } nodemask_t; 96 99 extern nodemask_t _unused_nodemask_arg_; 100 + 101 + /** 102 + * nodemask_pr_args - printf args to output a nodemask 103 + * @maskp: nodemask to be printed 104 + * 105 + * Can be used to provide arguments for '%*pb[l]' when printing a nodemask. 106 + */ 107 + #define nodemask_pr_args(maskp) MAX_NUMNODES, (maskp)->bits 97 108 98 109 /* 99 110 * The inline keyword gives the compiler room to decide to inline, or ··· 309 304 310 305 #define nodes_addr(src) ((src).bits) 311 306 312 - #define nodemask_scnprintf(buf, len, src) \ 313 - __nodemask_scnprintf((buf), (len), &(src), MAX_NUMNODES) 314 - static inline int __nodemask_scnprintf(char *buf, int len, 315 - const nodemask_t *srcp, int nbits) 316 - { 317 - return bitmap_scnprintf(buf, len, srcp->bits, nbits); 318 - } 319 - 320 307 #define nodemask_parse_user(ubuf, ulen, dst) \ 321 308 __nodemask_parse_user((ubuf), (ulen), &(dst), MAX_NUMNODES) 322 309 static inline int __nodemask_parse_user(const char __user *buf, int len, 323 310 nodemask_t *dstp, int nbits) 324 311 { 325 312 return bitmap_parse_user(buf, len, dstp->bits, nbits); 326 - } 327 - 328 - #define nodelist_scnprintf(buf, len, src) \ 329 - __nodelist_scnprintf((buf), (len), &(src), MAX_NUMNODES) 330 - static inline int __nodelist_scnprintf(char *buf, int len, 331 - const nodemask_t *srcp, int nbits) 332 - { 333 - return bitmap_scnlistprintf(buf, len, srcp->bits, nbits); 334 313 } 335 314 336 315 #define nodelist_parse(buf, dst) __nodelist_parse((buf), &(dst), MAX_NUMNODES)
+3
include/linux/sched.h
··· 1664 1664 unsigned long timer_slack_ns; 1665 1665 unsigned long default_timer_slack_ns; 1666 1666 1667 + #ifdef CONFIG_KASAN 1668 + unsigned int kasan_depth; 1669 + #endif 1667 1670 #ifdef CONFIG_FUNCTION_GRAPH_TRACER 1668 1671 /* Index of current stored address in ret_stack */ 1669 1672 int curr_ret_stack;
-3
include/linux/seq_buf.h
··· 125 125 unsigned int len); 126 126 extern int seq_buf_path(struct seq_buf *s, const struct path *path, const char *esc); 127 127 128 - extern int seq_buf_bitmask(struct seq_buf *s, const unsigned long *maskp, 129 - int nmaskbits); 130 - 131 128 #ifdef CONFIG_BINARY_PRINTF 132 129 extern int 133 130 seq_buf_bprintf(struct seq_buf *s, const char *fmt, const u32 *binary);
-25
include/linux/seq_file.h
··· 126 126 int seq_dentry(struct seq_file *, struct dentry *, const char *); 127 127 int seq_path_root(struct seq_file *m, const struct path *path, 128 128 const struct path *root, const char *esc); 129 - int seq_bitmap(struct seq_file *m, const unsigned long *bits, 130 - unsigned int nr_bits); 131 - static inline int seq_cpumask(struct seq_file *m, const struct cpumask *mask) 132 - { 133 - return seq_bitmap(m, cpumask_bits(mask), nr_cpu_ids); 134 - } 135 - 136 - static inline int seq_nodemask(struct seq_file *m, nodemask_t *mask) 137 - { 138 - return seq_bitmap(m, mask->bits, MAX_NUMNODES); 139 - } 140 - 141 - int seq_bitmap_list(struct seq_file *m, const unsigned long *bits, 142 - unsigned int nr_bits); 143 - 144 - static inline int seq_cpumask_list(struct seq_file *m, 145 - const struct cpumask *mask) 146 - { 147 - return seq_bitmap_list(m, cpumask_bits(mask), nr_cpu_ids); 148 - } 149 - 150 - static inline int seq_nodemask_list(struct seq_file *m, nodemask_t *mask) 151 - { 152 - return seq_bitmap_list(m, mask->bits, MAX_NUMNODES); 153 - } 154 129 155 130 int single_open(struct file *, int (*)(struct seq_file *, void *), void *); 156 131 int single_open_size(struct file *, int (*)(struct seq_file *, void *), void *, size_t);
+9 -2
include/linux/slab.h
··· 104 104 (unsigned long)ZERO_SIZE_PTR) 105 105 106 106 #include <linux/kmemleak.h> 107 + #include <linux/kasan.h> 107 108 108 109 struct mem_cgroup; 109 110 /* ··· 326 325 static __always_inline void *kmem_cache_alloc_trace(struct kmem_cache *s, 327 326 gfp_t flags, size_t size) 328 327 { 329 - return kmem_cache_alloc(s, flags); 328 + void *ret = kmem_cache_alloc(s, flags); 329 + 330 + kasan_kmalloc(s, ret, size); 331 + return ret; 330 332 } 331 333 332 334 static __always_inline void * ··· 337 333 gfp_t gfpflags, 338 334 int node, size_t size) 339 335 { 340 - return kmem_cache_alloc_node(s, gfpflags, node); 336 + void *ret = kmem_cache_alloc_node(s, gfpflags, node); 337 + 338 + kasan_kmalloc(s, ret, size); 339 + return ret; 341 340 } 342 341 #endif /* CONFIG_TRACING */ 343 342
+19
include/linux/slub_def.h
··· 110 110 } 111 111 #endif 112 112 113 + 114 + /** 115 + * virt_to_obj - returns address of the beginning of object. 116 + * @s: object's kmem_cache 117 + * @slab_page: address of slab page 118 + * @x: address within object memory range 119 + * 120 + * Returns address of the beginning of object 121 + */ 122 + static inline void *virt_to_obj(struct kmem_cache *s, 123 + const void *slab_page, 124 + const void *x) 125 + { 126 + return (void *)x - ((x - slab_page) % s->size); 127 + } 128 + 129 + void object_err(struct kmem_cache *s, struct page *page, 130 + u8 *object, char *reason); 131 + 113 132 #endif /* _LINUX_SLUB_DEF_H */
+3
include/linux/string.h
··· 112 112 #endif 113 113 void *memchr_inv(const void *s, int c, size_t n); 114 114 115 + extern void kfree_const(const void *x); 116 + 115 117 extern char *kstrdup(const char *s, gfp_t gfp); 118 + extern const char *kstrdup_const(const char *s, gfp_t gfp); 116 119 extern char *kstrndup(const char *s, size_t len, gfp_t gfp); 117 120 extern void *kmemdup(const void *src, size_t len, gfp_t gfp); 118 121
+10 -3
include/linux/vmalloc.h
··· 16 16 #define VM_USERMAP 0x00000008 /* suitable for remap_vmalloc_range */ 17 17 #define VM_VPAGES 0x00000010 /* buffer for pages was vmalloc'ed */ 18 18 #define VM_UNINITIALIZED 0x00000020 /* vm_struct is not fully initialized */ 19 + #define VM_NO_GUARD 0x00000040 /* don't add guard page */ 19 20 /* bits [20..32] reserved for arch specific ioremap internals */ 20 21 21 22 /* ··· 76 75 extern void *__vmalloc(unsigned long size, gfp_t gfp_mask, pgprot_t prot); 77 76 extern void *__vmalloc_node_range(unsigned long size, unsigned long align, 78 77 unsigned long start, unsigned long end, gfp_t gfp_mask, 79 - pgprot_t prot, int node, const void *caller); 78 + pgprot_t prot, unsigned long vm_flags, int node, 79 + const void *caller); 80 + 80 81 extern void vfree(const void *addr); 81 82 82 83 extern void *vmap(struct page **pages, unsigned int count, ··· 99 96 100 97 static inline size_t get_vm_area_size(const struct vm_struct *area) 101 98 { 102 - /* return actual size without guard page */ 103 - return area->size - PAGE_SIZE; 99 + if (!(area->flags & VM_NO_GUARD)) 100 + /* return actual size without guard page */ 101 + return area->size - PAGE_SIZE; 102 + else 103 + return area->size; 104 + 104 105 } 105 106 106 107 extern struct vm_struct *get_vm_area(unsigned long size, unsigned long flags);
-16
init/Kconfig
··· 1287 1287 1288 1288 endif 1289 1289 1290 - config INIT_FALLBACK 1291 - bool "Fall back to defaults if init= parameter is bad" 1292 - default y 1293 - help 1294 - If enabled, the kernel will try the default init binaries if an 1295 - explicit request from the init= parameter fails. 1296 - 1297 - This can have unexpected effects. For example, booting 1298 - with init=/sbin/kiosk_app will run /sbin/init or even /bin/sh 1299 - if /sbin/kiosk_app cannot be executed. 1300 - 1301 - The default value of Y is consistent with historical behavior. 1302 - Selecting N is likely to be more appropriate for most uses, 1303 - especially on kiosks and on kernels that are intended to be 1304 - run under the control of a script. 1305 - 1306 1290 config CC_OPTIMIZE_FOR_SIZE 1307 1291 bool "Optimize for size" 1308 1292 help
-5
init/main.c
··· 953 953 ret = run_init_process(execute_command); 954 954 if (!ret) 955 955 return 0; 956 - #ifndef CONFIG_INIT_FALLBACK 957 956 panic("Requested init %s failed (error %d).", 958 957 execute_command, ret); 959 - #else 960 - pr_err("Failed to execute %s (error %d). Attempting defaults...\n", 961 - execute_command, ret); 962 - #endif 963 958 } 964 959 if (!try_to_run_init_process("/sbin/init") || 965 960 !try_to_run_init_process("/etc/init") ||
+1 -1
kernel/cgroup.c
··· 3077 3077 #endif 3078 3078 kn = __kernfs_create_file(cgrp->kn, cgroup_file_name(cgrp, cft, name), 3079 3079 cgroup_file_mode(cft), 0, cft->kf_ops, cft, 3080 - NULL, false, key); 3080 + NULL, key); 3081 3081 if (IS_ERR(kn)) 3082 3082 return PTR_ERR(kn); 3083 3083
+9 -33
kernel/cpuset.c
··· 1707 1707 { 1708 1708 struct cpuset *cs = css_cs(seq_css(sf)); 1709 1709 cpuset_filetype_t type = seq_cft(sf)->private; 1710 - ssize_t count; 1711 - char *buf, *s; 1712 1710 int ret = 0; 1713 - 1714 - count = seq_get_buf(sf, &buf); 1715 - s = buf; 1716 1711 1717 1712 spin_lock_irq(&callback_lock); 1718 1713 1719 1714 switch (type) { 1720 1715 case FILE_CPULIST: 1721 - s += cpulist_scnprintf(s, count, cs->cpus_allowed); 1716 + seq_printf(sf, "%*pbl\n", cpumask_pr_args(cs->cpus_allowed)); 1722 1717 break; 1723 1718 case FILE_MEMLIST: 1724 - s += nodelist_scnprintf(s, count, cs->mems_allowed); 1719 + seq_printf(sf, "%*pbl\n", nodemask_pr_args(&cs->mems_allowed)); 1725 1720 break; 1726 1721 case FILE_EFFECTIVE_CPULIST: 1727 - s += cpulist_scnprintf(s, count, cs->effective_cpus); 1722 + seq_printf(sf, "%*pbl\n", cpumask_pr_args(cs->effective_cpus)); 1728 1723 break; 1729 1724 case FILE_EFFECTIVE_MEMLIST: 1730 - s += nodelist_scnprintf(s, count, cs->effective_mems); 1725 + seq_printf(sf, "%*pbl\n", nodemask_pr_args(&cs->effective_mems)); 1731 1726 break; 1732 1727 default: 1733 1728 ret = -EINVAL; 1734 - goto out_unlock; 1735 1729 } 1736 1730 1737 - if (s < buf + count - 1) { 1738 - *s++ = '\n'; 1739 - seq_commit(sf, s - buf); 1740 - } else { 1741 - seq_commit(sf, -1); 1742 - } 1743 - out_unlock: 1744 1731 spin_unlock_irq(&callback_lock); 1745 1732 return ret; 1746 1733 } ··· 2597 2610 return nodes_intersects(tsk1->mems_allowed, tsk2->mems_allowed); 2598 2611 } 2599 2612 2600 - #define CPUSET_NODELIST_LEN (256) 2601 - 2602 2613 /** 2603 2614 * cpuset_print_task_mems_allowed - prints task's cpuset and mems_allowed 2604 2615 * @tsk: pointer to task_struct of some task. ··· 2606 2621 */ 2607 2622 void cpuset_print_task_mems_allowed(struct task_struct *tsk) 2608 2623 { 2609 - /* Statically allocated to prevent using excess stack. */ 2610 - static char cpuset_nodelist[CPUSET_NODELIST_LEN]; 2611 - static DEFINE_SPINLOCK(cpuset_buffer_lock); 2612 2624 struct cgroup *cgrp; 2613 2625 2614 - spin_lock(&cpuset_buffer_lock); 2615 2626 rcu_read_lock(); 2616 2627 2617 2628 cgrp = task_cs(tsk)->css.cgroup; 2618 - nodelist_scnprintf(cpuset_nodelist, CPUSET_NODELIST_LEN, 2619 - tsk->mems_allowed); 2620 2629 pr_info("%s cpuset=", tsk->comm); 2621 2630 pr_cont_cgroup_name(cgrp); 2622 - pr_cont(" mems_allowed=%s\n", cpuset_nodelist); 2631 + pr_cont(" mems_allowed=%*pbl\n", nodemask_pr_args(&tsk->mems_allowed)); 2623 2632 2624 2633 rcu_read_unlock(); 2625 - spin_unlock(&cpuset_buffer_lock); 2626 2634 } 2627 2635 2628 2636 /* ··· 2693 2715 /* Display task mems_allowed in /proc/<pid>/status file. */ 2694 2716 void cpuset_task_status_allowed(struct seq_file *m, struct task_struct *task) 2695 2717 { 2696 - seq_puts(m, "Mems_allowed:\t"); 2697 - seq_nodemask(m, &task->mems_allowed); 2698 - seq_puts(m, "\n"); 2699 - seq_puts(m, "Mems_allowed_list:\t"); 2700 - seq_nodemask_list(m, &task->mems_allowed); 2701 - seq_puts(m, "\n"); 2718 + seq_printf(m, "Mems_allowed:\t%*pb\n", 2719 + nodemask_pr_args(&task->mems_allowed)); 2720 + seq_printf(m, "Mems_allowed_list:\t%*pbl\n", 2721 + nodemask_pr_args(&task->mems_allowed)); 2702 2722 }
+4 -7
kernel/irq/proc.c
··· 46 46 mask = desc->pending_mask; 47 47 #endif 48 48 if (type) 49 - seq_cpumask_list(m, mask); 49 + seq_printf(m, "%*pbl\n", cpumask_pr_args(mask)); 50 50 else 51 - seq_cpumask(m, mask); 52 - seq_putc(m, '\n'); 51 + seq_printf(m, "%*pb\n", cpumask_pr_args(mask)); 53 52 return 0; 54 53 } 55 54 ··· 66 67 cpumask_copy(mask, desc->affinity_hint); 67 68 raw_spin_unlock_irqrestore(&desc->lock, flags); 68 69 69 - seq_cpumask(m, mask); 70 - seq_putc(m, '\n'); 70 + seq_printf(m, "%*pb\n", cpumask_pr_args(mask)); 71 71 free_cpumask_var(mask); 72 72 73 73 return 0; ··· 184 186 185 187 static int default_affinity_show(struct seq_file *m, void *v) 186 188 { 187 - seq_cpumask(m, irq_default_affinity); 188 - seq_putc(m, '\n'); 189 + seq_printf(m, "%*pb\n", cpumask_pr_args(irq_default_affinity)); 189 190 return 0; 190 191 } 191 192
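The converted show functions all rely on the new "%*pb" specifier. As a rough userspace illustration (not kernel code; `bitmap_hex` is a hypothetical helper limited to one 64-bit word), the hex layout it emits is comma-separated 32-bit chunks, most significant chunk first, with the leading chunk trimmed to the bits that remain:

```c
#include <stdio.h>

/* Userspace sketch of the "%*pb" hex layout these /proc files now emit:
 * comma-separated 32-bit chunks, most significant first, with the leading
 * chunk trimmed to the remaining bits. Handles at most 64 bits here. */
static int bitmap_hex(char *buf, size_t size, unsigned long long bits,
		      int nbits)
{
	int chunksz = (nbits & 31) ? (nbits & 31) : 32;
	int i = ((nbits + 31) / 32) * 32 - 32;	/* bit offset of top chunk */
	int len = 0;

	for (; i >= 0; i -= 32) {
		unsigned int val = (bits >> i) & ((1ULL << chunksz) - 1);

		/* one hex digit per 4 bits of the chunk, zero-padded */
		len += snprintf(buf + len, size - len, "%s%0*x",
				len ? "," : "", (chunksz + 3) / 4, val);
		chunksz = 32;
	}
	return len;
}
```

So a 40-bit mask 0xff_deadbeef prints as "ff,deadbeef", matching the format the old seq_cpumask() produced.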
+15 -3
kernel/kprobes.c
··· 869 869 { 870 870 struct kprobe *_p; 871 871 872 - unoptimize_kprobe(p, false); /* Try to unoptimize */ 872 + /* Try to unoptimize */ 873 + unoptimize_kprobe(p, kprobes_all_disarmed); 873 874 874 875 if (!kprobe_queued(p)) { 875 876 arch_disarm_kprobe(p); ··· 1572 1571 1573 1572 /* Try to disarm and disable this/parent probe */ 1574 1573 if (p == orig_p || aggr_kprobe_disabled(orig_p)) { 1575 - disarm_kprobe(orig_p, true); 1574 + /* 1575 + * If kprobes_all_disarmed is set, orig_p 1576 + * should have already been disarmed, so 1577 + * skip unneed disarming process. 1578 + */ 1579 + if (!kprobes_all_disarmed) 1580 + disarm_kprobe(orig_p, true); 1576 1581 orig_p->flags |= KPROBE_FLAG_DISABLED; 1577 1582 } 1578 1583 } ··· 2327 2320 if (!kprobes_all_disarmed) 2328 2321 goto already_enabled; 2329 2322 2323 + /* 2324 + * optimize_kprobe() called by arm_kprobe() checks 2325 + * kprobes_all_disarmed, so set kprobes_all_disarmed before 2326 + * arm_kprobe. 2327 + */ 2328 + kprobes_all_disarmed = false; 2330 2329 /* Arming kprobes doesn't optimize kprobe itself */ 2331 2330 for (i = 0; i < KPROBE_TABLE_SIZE; i++) { 2332 2331 head = &kprobe_table[i]; ··· 2341 2328 arm_kprobe(p); 2342 2329 } 2343 2330 2344 - kprobes_all_disarmed = false; 2345 2331 printk(KERN_INFO "Kprobes globally enabled\n"); 2346 2332 2347 2333 already_enabled:
+2
kernel/module.c
··· 56 56 #include <linux/async.h> 57 57 #include <linux/percpu.h> 58 58 #include <linux/kmemleak.h> 59 + #include <linux/kasan.h> 59 60 #include <linux/jump_label.h> 60 61 #include <linux/pfn.h> 61 62 #include <linux/bsearch.h> ··· 1814 1813 void __weak module_memfree(void *module_region) 1815 1814 { 1816 1815 vfree(module_region); 1816 + kasan_module_free(module_region); 1817 1817 } 1818 1818 1819 1819 void __weak module_arch_cleanup(struct module *mod)
+3 -8
kernel/padata.c
··· 917 917 else 918 918 cpumask = pinst->cpumask.pcpu; 919 919 920 - len = bitmap_scnprintf(buf, PAGE_SIZE, cpumask_bits(cpumask), 921 - nr_cpu_ids); 922 - if (PAGE_SIZE - len < 2) 923 - len = -EINVAL; 924 - else 925 - len += sprintf(buf + len, "\n"); 926 - 920 + len = snprintf(buf, PAGE_SIZE, "%*pb\n", 921 + nr_cpu_ids, cpumask_bits(cpumask)); 927 922 mutex_unlock(&pinst->lock); 928 - return len; 923 + return len < PAGE_SIZE ? len : -EINVAL; 929 924 } 930 925 931 926 static ssize_t store_cpumask(struct padata_instance *pinst,
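The rewritten show_cpumask() leans on C99 snprintf() semantics: the return value is the length the string *would* have had, so `len >= buffer size` signals truncation and is turned into -EINVAL. A minimal userspace illustration of that check (`format_checked` is a hypothetical helper, returning -1 where the kernel code returns -EINVAL):

```c
#include <stdio.h>

/* snprintf() returns the would-be length, not the bytes written, so a
 * return value >= the buffer size means the output was truncated --
 * exactly the check the padata (and tracing) patches rely on. */
static int format_checked(char *buf, size_t size, int value)
{
	int len = snprintf(buf, size, "%d\n", value);

	/* mirror the patch: report an error instead of a truncated string */
	return (len < 0 || (size_t)len >= size) ? -1 : len;
}
```

Note the contrast with scnprintf(), which returns the number of bytes actually written and therefore cannot detect truncation this way.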
+1 -2
kernel/profile.c
··· 422 422 423 423 static int prof_cpu_mask_proc_show(struct seq_file *m, void *v) 424 424 { 425 - seq_cpumask(m, prof_cpu_mask); 426 - seq_putc(m, '\n'); 425 + seq_printf(m, "%*pb\n", cpumask_pr_args(prof_cpu_mask)); 427 426 return 0; 428 427 } 429 428
+2 -3
kernel/rcu/tree_plugin.h
··· 49 49 static cpumask_var_t rcu_nocb_mask; /* CPUs to have callbacks offloaded. */ 50 50 static bool have_rcu_nocb_mask; /* Was rcu_nocb_mask allocated? */ 51 51 static bool __read_mostly rcu_nocb_poll; /* Offload kthread are to poll. */ 52 - static char __initdata nocb_buf[NR_CPUS * 5]; 53 52 #endif /* #ifdef CONFIG_RCU_NOCB_CPU */ 54 53 55 54 /* ··· 2385 2386 cpumask_and(rcu_nocb_mask, cpu_possible_mask, 2386 2387 rcu_nocb_mask); 2387 2388 } 2388 - cpulist_scnprintf(nocb_buf, sizeof(nocb_buf), rcu_nocb_mask); 2389 - pr_info("\tOffload RCU callbacks from CPUs: %s.\n", nocb_buf); 2389 + pr_info("\tOffload RCU callbacks from CPUs: %*pbl.\n", 2390 + cpumask_pr_args(rcu_nocb_mask)); 2390 2391 if (rcu_nocb_poll) 2391 2392 pr_info("\tPoll for callbacks from no-CBs CPUs.\n"); 2392 2393
+4 -6
kernel/sched/core.c
··· 5462 5462 struct cpumask *groupmask) 5463 5463 { 5464 5464 struct sched_group *group = sd->groups; 5465 - char str[256]; 5466 5465 5467 - cpulist_scnprintf(str, sizeof(str), sched_domain_span(sd)); 5468 5466 cpumask_clear(groupmask); 5469 5467 5470 5468 printk(KERN_DEBUG "%*s domain %d: ", level, "", level); ··· 5475 5477 return -1; 5476 5478 } 5477 5479 5478 - printk(KERN_CONT "span %s level %s\n", str, sd->name); 5480 + printk(KERN_CONT "span %*pbl level %s\n", 5481 + cpumask_pr_args(sched_domain_span(sd)), sd->name); 5479 5482 5480 5483 if (!cpumask_test_cpu(cpu, sched_domain_span(sd))) { 5481 5484 printk(KERN_ERR "ERROR: domain->span does not contain " ··· 5521 5522 5522 5523 cpumask_or(groupmask, groupmask, sched_group_cpus(group)); 5523 5524 5524 - cpulist_scnprintf(str, sizeof(str), sched_group_cpus(group)); 5525 - 5526 - printk(KERN_CONT " %s", str); 5525 + printk(KERN_CONT " %*pbl", 5526 + cpumask_pr_args(sched_group_cpus(group))); 5527 5527 if (group->sgc->capacity != SCHED_CAPACITY_SCALE) { 5528 5528 printk(KERN_CONT " (cpu_capacity = %d)", 5529 5529 group->sgc->capacity);
+2 -9
kernel/sched/stats.c
··· 15 15 static int show_schedstat(struct seq_file *seq, void *v) 16 16 { 17 17 int cpu; 18 - int mask_len = DIV_ROUND_UP(NR_CPUS, 32) * 9; 19 - char *mask_str = kmalloc(mask_len, GFP_KERNEL); 20 - 21 - if (mask_str == NULL) 22 - return -ENOMEM; 23 18 24 19 if (v == (void *)1) { 25 20 seq_printf(seq, "version %d\n", SCHEDSTAT_VERSION); ··· 45 50 for_each_domain(cpu, sd) { 46 51 enum cpu_idle_type itype; 47 52 48 - cpumask_scnprintf(mask_str, mask_len, 49 - sched_domain_span(sd)); 50 - seq_printf(seq, "domain%d %s", dcount++, mask_str); 53 + seq_printf(seq, "domain%d %*pb", dcount++, 54 + cpumask_pr_args(sched_domain_span(sd))); 51 55 for (itype = CPU_IDLE; itype < CPU_MAX_IDLE_TYPES; 52 56 itype++) { 53 57 seq_printf(seq, " %u %u %u %u %u %u %u %u", ··· 70 76 rcu_read_unlock(); 71 77 #endif 72 78 } 73 - kfree(mask_str); 74 79 return 0; 75 80 } 76 81
+2 -9
kernel/time/tick-sched.c
··· 326 326 return NOTIFY_OK; 327 327 } 328 328 329 - /* 330 - * Worst case string length in chunks of CPU range seems 2 steps 331 - * separations: 0,2,4,6,... 332 - * This is NR_CPUS + sizeof('\0') 333 - */ 334 - static char __initdata nohz_full_buf[NR_CPUS + 1]; 335 - 336 329 static int tick_nohz_init_all(void) 337 330 { 338 331 int err = -1; ··· 386 393 context_tracking_cpu_set(cpu); 387 394 388 395 cpu_notifier(tick_nohz_cpu_down_callback, 0); 389 - cpulist_scnprintf(nohz_full_buf, sizeof(nohz_full_buf), tick_nohz_full_mask); 390 - pr_info("NO_HZ: Full dynticks CPUs: %s.\n", nohz_full_buf); 396 + pr_info("NO_HZ: Full dynticks CPUs: %*pbl.\n", 397 + cpumask_pr_args(tick_nohz_full_mask)); 391 398 } 392 399 #endif 393 400
+3 -3
kernel/trace/trace.c
··· 3353 3353 3354 3354 mutex_lock(&tracing_cpumask_update_lock); 3355 3355 3356 - len = cpumask_scnprintf(mask_str, count, tr->tracing_cpumask); 3357 - if (count - len < 2) { 3356 + len = snprintf(mask_str, count, "%*pb\n", 3357 + cpumask_pr_args(tr->tracing_cpumask)); 3358 + if (len >= count) { 3358 3359 count = -EINVAL; 3359 3360 goto out_err; 3360 3361 } 3361 - len += sprintf(mask_str + len, "\n"); 3362 3362 count = simple_read_from_buffer(ubuf, count, ppos, mask_str, NR_CPUS+1); 3363 3363 3364 3364 out_err:
+1 -1
kernel/trace/trace_seq.c
··· 120 120 121 121 __trace_seq_init(s); 122 122 123 - seq_buf_bitmask(&s->seq, maskp, nmaskbits); 123 + seq_buf_printf(&s->seq, "%*pb", nmaskbits, maskp); 124 124 125 125 if (unlikely(seq_buf_has_overflowed(&s->seq))) { 126 126 s->seq.len = save_len;
+2 -3
kernel/workqueue.c
··· 3083 3083 int written; 3084 3084 3085 3085 mutex_lock(&wq->mutex); 3086 - written = cpumask_scnprintf(buf, PAGE_SIZE, wq->unbound_attrs->cpumask); 3086 + written = scnprintf(buf, PAGE_SIZE, "%*pb\n", 3087 + cpumask_pr_args(wq->unbound_attrs->cpumask)); 3087 3088 mutex_unlock(&wq->mutex); 3088 - 3089 - written += scnprintf(buf + written, PAGE_SIZE - written, "\n"); 3090 3089 return written; 3091 3090 } 3092 3091
+2
lib/Kconfig.debug
··· 651 651 652 652 source "lib/Kconfig.kmemcheck" 653 653 654 + source "lib/Kconfig.kasan" 655 + 654 656 endmenu # "Memory Debugging" 655 657 656 658 config DEBUG_SHIRQ
+54
lib/Kconfig.kasan
··· 1 + config HAVE_ARCH_KASAN 2 + bool 3 + 4 + if HAVE_ARCH_KASAN 5 + 6 + config KASAN 7 + bool "KASan: runtime memory debugger" 8 + depends on SLUB_DEBUG 9 + select CONSTRUCTORS 10 + help 11 + Enables kernel address sanitizer - runtime memory debugger, 12 + designed to find out-of-bounds accesses and use-after-free bugs. 13 + This is strictly debugging feature. It consumes about 1/8 14 + of available memory and brings about ~x3 performance slowdown. 15 + For better error detection enable CONFIG_STACKTRACE, 16 + and add slub_debug=U to boot cmdline. 17 + 18 + config KASAN_SHADOW_OFFSET 19 + hex 20 + default 0xdffffc0000000000 if X86_64 21 + 22 + choice 23 + prompt "Instrumentation type" 24 + depends on KASAN 25 + default KASAN_OUTLINE 26 + 27 + config KASAN_OUTLINE 28 + bool "Outline instrumentation" 29 + help 30 + Before every memory access compiler insert function call 31 + __asan_load*/__asan_store*. These functions performs check 32 + of shadow memory. This is slower than inline instrumentation, 33 + however it doesn't bloat size of kernel's .text section so 34 + much as inline does. 35 + 36 + config KASAN_INLINE 37 + bool "Inline instrumentation" 38 + help 39 + Compiler directly inserts code checking shadow memory before 40 + memory accesses. This is faster than outline (in some workloads 41 + it gives about x2 boost over outline instrumentation), but 42 + make kernel's .text size much bigger. 43 + 44 + endchoice 45 + 46 + config TEST_KASAN 47 + tristate "Module for testing kasan for bug detection" 48 + depends on m && KASAN 49 + help 50 + This is a test module doing various nasty things like 51 + out of bounds accesses, use after free. It is useful for testing 52 + kernel debugging features like kernel address sanitizer. 53 + 54 + endif
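Following the help text above, a debug build that exercises KASan might use a .config fragment along these lines (a sketch; the options are only offered on architectures that select HAVE_ARCH_KASAN, x86_64 at this point):

```
CONFIG_SLUB=y
CONFIG_SLUB_DEBUG=y
CONFIG_KASAN=y
CONFIG_KASAN_OUTLINE=y
CONFIG_TEST_KASAN=m
CONFIG_STACKTRACE=y
```

Booting with `slub_debug=U` and loading the test_kasan module then produces one sample report per bug class, as the help text suggests.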
+4 -3
lib/Makefile
··· 32 32 obj-y += hexdump.o 33 33 obj-$(CONFIG_TEST_HEXDUMP) += test-hexdump.o 34 34 obj-y += kstrtox.o 35 - obj-$(CONFIG_TEST_KSTRTOX) += test-kstrtox.o 36 - obj-$(CONFIG_TEST_LKM) += test_module.o 37 - obj-$(CONFIG_TEST_USER_COPY) += test_user_copy.o 38 35 obj-$(CONFIG_TEST_BPF) += test_bpf.o 39 36 obj-$(CONFIG_TEST_FIRMWARE) += test_firmware.o 37 + obj-$(CONFIG_TEST_KASAN) += test_kasan.o 38 + obj-$(CONFIG_TEST_KSTRTOX) += test-kstrtox.o 39 + obj-$(CONFIG_TEST_LKM) += test_module.o 40 40 obj-$(CONFIG_TEST_RHASHTABLE) += test_rhashtable.o 41 + obj-$(CONFIG_TEST_USER_COPY) += test_user_copy.o 41 42 42 43 ifeq ($(CONFIG_DEBUG_KOBJECT),y) 43 44 CFLAGS_kobject.o += -DDEBUG
+28 -132
lib/bitmap.c
··· 104 104 * @dst : destination bitmap 105 105 * @src : source bitmap 106 106 * @shift : shift by this many bits 107 - * @bits : bitmap size, in bits 107 + * @nbits : bitmap size, in bits 108 108 * 109 109 * Shifting right (dividing) means moving bits in the MS -> LS bit 110 110 * direction. Zeros are fed into the vacated MS positions and the 111 111 * LS bits shifted off the bottom are lost. 112 112 */ 113 - void __bitmap_shift_right(unsigned long *dst, 114 - const unsigned long *src, int shift, int bits) 113 + void __bitmap_shift_right(unsigned long *dst, const unsigned long *src, 114 + unsigned shift, unsigned nbits) 115 115 { 116 - int k, lim = BITS_TO_LONGS(bits), left = bits % BITS_PER_LONG; 117 - int off = shift/BITS_PER_LONG, rem = shift % BITS_PER_LONG; 118 - unsigned long mask = (1UL << left) - 1; 116 + unsigned k, lim = BITS_TO_LONGS(nbits); 117 + unsigned off = shift/BITS_PER_LONG, rem = shift % BITS_PER_LONG; 118 + unsigned long mask = BITMAP_LAST_WORD_MASK(nbits); 119 119 for (k = 0; off + k < lim; ++k) { 120 120 unsigned long upper, lower; 121 121 ··· 127 127 upper = 0; 128 128 else { 129 129 upper = src[off + k + 1]; 130 - if (off + k + 1 == lim - 1 && left) 130 + if (off + k + 1 == lim - 1) 131 131 upper &= mask; 132 + upper <<= (BITS_PER_LONG - rem); 132 133 } 133 134 lower = src[off + k]; 134 - if (left && off + k == lim - 1) 135 + if (off + k == lim - 1) 135 136 lower &= mask; 136 - dst[k] = lower >> rem; 137 - if (rem) 138 - dst[k] |= upper << (BITS_PER_LONG - rem); 139 - if (left && k == lim - 1) 140 - dst[k] &= mask; 137 + lower >>= rem; 138 + dst[k] = lower | upper; 141 139 } 142 140 if (off) 143 141 memset(&dst[lim - off], 0, off*sizeof(unsigned long)); ··· 148 150 * @dst : destination bitmap 149 151 * @src : source bitmap 150 152 * @shift : shift by this many bits 151 - * @bits : bitmap size, in bits 153 + * @nbits : bitmap size, in bits 152 154 * 153 155 * Shifting left (multiplying) means moving bits in the LS -> MS 154 156 * direction. Zeros are fed into the vacated LS bit positions 155 157 * and those MS bits shifted off the top are lost. 156 158 */ 157 159 158 160 - void __bitmap_shift_left(unsigned long *dst, 159 - const unsigned long *src, int shift, int bits) 160 + void __bitmap_shift_left(unsigned long *dst, const unsigned long *src, 161 + unsigned int shift, unsigned int nbits) 160 162 { 161 - int k, lim = BITS_TO_LONGS(bits), left = bits % BITS_PER_LONG; 162 - int off = shift/BITS_PER_LONG, rem = shift % BITS_PER_LONG; 163 + int k; 164 + unsigned int lim = BITS_TO_LONGS(nbits); 165 + unsigned int off = shift/BITS_PER_LONG, rem = shift % BITS_PER_LONG; 163 166 for (k = lim - off - 1; k >= 0; --k) { 164 167 unsigned long upper, lower; 165 168 ··· 169 170 * word below and make them the bottom rem bits of result. 170 171 */ 171 172 if (rem && k > 0) 172 - lower = src[k - 1]; 173 + lower = src[k - 1] >> (BITS_PER_LONG - rem); 173 174 else 174 175 lower = 0; 175 - upper = src[k]; 176 - if (left && k == lim - 1) 177 - upper &= (1UL << left) - 1; 178 - dst[k + off] = upper << rem; 179 - if (rem) 180 - dst[k + off] |= lower >> (BITS_PER_LONG - rem); 181 - if (left && k + off == lim - 1) 182 - dst[k + off] &= (1UL << left) - 1; 176 + upper = src[k] << rem; 177 + dst[k + off] = lower | upper; 183 178 } 184 179 if (off) 185 180 memset(dst, 0, off*sizeof(unsigned long)); ··· 370 377 #define BASEDEC 10 /* fancier cpuset lists input in decimal */ 371 378 372 379 /** 373 - * bitmap_scnprintf - convert bitmap to an ASCII hex string. 374 - * @buf: byte buffer into which string is placed 375 - * @buflen: reserved size of @buf, in bytes 376 - * @maskp: pointer to bitmap to convert 377 - * @nmaskbits: size of bitmap, in bits 378 - * 379 - * Exactly @nmaskbits bits are displayed. Hex digits are grouped into 380 - * comma-separated sets of eight digits per set. Returns the number of 381 - * characters which were written to *buf, excluding the trailing \0. 382 - */ 383 - int bitmap_scnprintf(char *buf, unsigned int buflen, 384 - const unsigned long *maskp, int nmaskbits) 385 - { 386 - int i, word, bit, len = 0; 387 - unsigned long val; 388 - const char *sep = ""; 389 - int chunksz; 390 - u32 chunkmask; 391 - 392 - chunksz = nmaskbits & (CHUNKSZ - 1); 393 - if (chunksz == 0) 394 - chunksz = CHUNKSZ; 395 - 396 - i = ALIGN(nmaskbits, CHUNKSZ) - CHUNKSZ; 397 - for (; i >= 0; i -= CHUNKSZ) { 398 - chunkmask = ((1ULL << chunksz) - 1); 399 - word = i / BITS_PER_LONG; 400 - bit = i % BITS_PER_LONG; 401 - val = (maskp[word] >> bit) & chunkmask; 402 - len += scnprintf(buf+len, buflen-len, "%s%0*lx", sep, 403 - (chunksz+3)/4, val); 404 - chunksz = CHUNKSZ; 405 - sep = ","; 406 - } 407 - return len; 408 - } 409 - EXPORT_SYMBOL(bitmap_scnprintf); 410 - 411 - /** 412 380 * __bitmap_parse - convert an ASCII hex string into a bitmap. 413 381 * @buf: pointer to buffer containing string. 414 382 * @buflen: buffer size in bytes. If string is smaller than this ··· 482 528 } 483 529 EXPORT_SYMBOL(bitmap_parse_user); 484 530 485 - /* 486 - * bscnl_emit(buf, buflen, rbot, rtop, bp) 487 - * 488 - * Helper routine for bitmap_scnlistprintf(). Write decimal number 489 - * or range to buf, suppressing output past buf+buflen, with optional 490 - * comma-prefix. Return len of what was written to *buf, excluding the 491 - * trailing \0. 492 - */ 493 - static inline int bscnl_emit(char *buf, int buflen, int rbot, int rtop, int len) 494 - { 495 - if (len > 0) 496 - len += scnprintf(buf + len, buflen - len, ","); 497 - if (rbot == rtop) 498 - len += scnprintf(buf + len, buflen - len, "%d", rbot); 499 - else 500 - len += scnprintf(buf + len, buflen - len, "%d-%d", rbot, rtop); 501 - return len; 502 - } 503 - 504 - /** 505 - * bitmap_scnlistprintf - convert bitmap to list format ASCII string 506 - * @buf: byte buffer into which string is placed 507 - * @buflen: reserved size of @buf, in bytes 508 - * @maskp: pointer to bitmap to convert 509 - * @nmaskbits: size of bitmap, in bits 510 - * 511 - * Output format is a comma-separated list of decimal numbers and 512 - * ranges. Consecutively set bits are shown as two hyphen-separated 513 - * decimal numbers, the smallest and largest bit numbers set in 514 - * the range. Output format is compatible with the format 515 - * accepted as input by bitmap_parselist(). 516 - * 517 - * The return value is the number of characters which were written to *buf 518 - * excluding the trailing '\0', as per ISO C99's scnprintf. 519 - */ 520 - int bitmap_scnlistprintf(char *buf, unsigned int buflen, 521 - const unsigned long *maskp, int nmaskbits) 522 - { 523 - int len = 0; 524 - /* current bit is 'cur', most recently seen range is [rbot, rtop] */ 525 - int cur, rbot, rtop; 526 - 527 - if (buflen == 0) 528 - return 0; 529 - buf[0] = 0; 530 - 531 - rbot = cur = find_first_bit(maskp, nmaskbits); 532 - while (cur < nmaskbits) { 533 - rtop = cur; 534 - cur = find_next_bit(maskp, nmaskbits, cur+1); 535 - if (cur >= nmaskbits || cur > rtop + 1) { 536 - len = bscnl_emit(buf, buflen, rbot, rtop, len); 537 - rbot = cur; 538 - } 539 - } 540 - return len; 541 - } 542 - EXPORT_SYMBOL(bitmap_scnlistprintf); 543 - 544 531 /** 545 532 * bitmap_print_to_pagebuf - convert bitmap to list or hex format ASCII string 546 533 * @list: indicates whether the bitmap must be list ··· 500 605 int n = 0; 501 606 502 607 if (len > 1) { 503 - n = list ? bitmap_scnlistprintf(buf, len, maskp, nmaskbits) : 504 - bitmap_scnprintf(buf, len, maskp, nmaskbits); 608 + n = list ? scnprintf(buf, len, "%*pbl", nmaskbits, maskp) : 609 + scnprintf(buf, len, "%*pb", nmaskbits, maskp); 505 610 buf[n++] = '\n'; 506 611 buf[n] = '\0'; 507 612 } ··· 1086 1191 * 1087 1192 * Require nbits % BITS_PER_LONG == 0. 1088 1193 */ 1089 - void bitmap_copy_le(void *dst, const unsigned long *src, int nbits) 1194 + #ifdef __BIG_ENDIAN 1195 + void bitmap_copy_le(unsigned long *dst, const unsigned long *src, unsigned int nbits) 1090 1196 { 1091 - unsigned long *d = dst; 1092 - int i; 1197 + unsigned int i; 1093 1198 1094 1199 for (i = 0; i < nbits/BITS_PER_LONG; i++) { 1095 1200 if (BITS_PER_LONG == 64) 1096 - d[i] = cpu_to_le64(src[i]); 1201 + dst[i] = cpu_to_le64(src[i]); 1097 1202 else 1098 - d[i] = cpu_to_le32(src[i]); 1203 + dst[i] = cpu_to_le32(src[i]); 1099 1204 } 1100 1205 } 1101 1206 EXPORT_SYMBOL(bitmap_copy_le); 1207 + #endif
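The rewritten __bitmap_shift_right() has the same contract as before: bits move toward bit 0, zeros fill the vacated high positions, and bits at or above `nbits` (now masked with BITMAP_LAST_WORD_MASK) never leak into the result. A one-word userspace sketch of that contract (`shift_right_1word` is a hypothetical helper, assuming 0 < nbits <= the word width):

```c
/* One-word model of __bitmap_shift_right(): mask off bits past 'nbits'
 * first, then shift toward bit 0; the high positions fill with zeros. */
static unsigned long shift_right_1word(unsigned long src, unsigned int shift,
				       unsigned int nbits)
{
	/* same role as BITMAP_LAST_WORD_MASK(nbits) in the kernel code */
	unsigned long mask = ~0UL >> (sizeof(unsigned long) * 8 - nbits);

	return (src & mask) >> shift;
}
```

For example, with src = 0xB4 (1011 0100), nbits = 6 keeps only 11 0100 before the shift, so shifting by 2 yields 0xD rather than 0x2D.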
+3 -3
lib/gen_crc32table.c
··· 109 109 110 110 if (CRC_LE_BITS > 1) { 111 111 crc32init_le(); 112 - printf("static u32 __cacheline_aligned " 112 + printf("static const u32 ____cacheline_aligned " 113 113 "crc32table_le[%d][%d] = {", 114 114 LE_TABLE_ROWS, LE_TABLE_SIZE); 115 115 output_table(crc32table_le, LE_TABLE_ROWS, ··· 119 119 120 120 if (CRC_BE_BITS > 1) { 121 121 crc32init_be(); 122 - printf("static u32 __cacheline_aligned " 122 + printf("static const u32 ____cacheline_aligned " 123 123 "crc32table_be[%d][%d] = {", 124 124 BE_TABLE_ROWS, BE_TABLE_SIZE); 125 125 output_table(crc32table_be, LE_TABLE_ROWS, ··· 128 128 } 129 129 if (CRC_LE_BITS > 1) { 130 130 crc32cinit_le(); 131 - printf("static u32 __cacheline_aligned " 131 + printf("static const u32 ____cacheline_aligned " 132 132 "crc32ctable_le[%d][%d] = {", 133 133 LE_TABLE_ROWS, LE_TABLE_SIZE); 134 134 output_table(crc32ctable_le, LE_TABLE_ROWS,
+2
lib/genalloc.c
··· 586 586 struct gen_pool **ptr, *pool; 587 587 588 588 ptr = devres_alloc(devm_gen_pool_release, sizeof(*ptr), GFP_KERNEL); 589 + if (!ptr) 590 + return NULL; 589 591 590 592 pool = gen_pool_create(min_alloc_order, nid); 591 593 if (pool) {
-36
lib/seq_buf.c
··· 91 91 return ret; 92 92 } 93 93 94 - /** 95 - * seq_buf_bitmask - write a bitmask array in its ASCII representation 96 - * @s: seq_buf descriptor 97 - * @maskp: points to an array of unsigned longs that represent a bitmask 98 - * @nmaskbits: The number of bits that are valid in @maskp 99 - * 100 - * Writes a ASCII representation of a bitmask string into @s. 101 - * 102 - * Returns zero on success, -1 on overflow. 103 - */ 104 - int seq_buf_bitmask(struct seq_buf *s, const unsigned long *maskp, 105 - int nmaskbits) 106 - { 107 - unsigned int len = seq_buf_buffer_left(s); 108 - int ret; 109 - 110 - WARN_ON(s->size == 0); 111 - 112 - /* 113 - * Note, because bitmap_scnprintf() only returns the number of bytes 114 - * written and not the number that would be written, we use the last 115 - * byte of the buffer to let us know if we overflowed. There's a small 116 - * chance that the bitmap could have fit exactly inside the buffer, but 117 - * it's not that critical if that does happen. 118 - */ 119 - if (len > 1) { 120 - ret = bitmap_scnprintf(s->buffer + s->len, len, maskp, nmaskbits); 121 - if (ret < len) { 122 - s->len += ret; 123 - return 0; 124 - } 125 - } 126 - seq_buf_set_overflow(s); 127 - return -1; 128 - } 129 - 130 94 #ifdef CONFIG_BINARY_PRINTF 131 95 /** 132 96 * seq_buf_bprintf - Write the printf string from binary arguments
+6 -6
lib/string.c
··· 313 313 */ 314 314 char *strrchr(const char *s, int c) 315 315 { 316 - const char *p = s + strlen(s); 317 - do { 318 - if (*p == (char)c) 319 - return (char *)p; 320 - } while (--p >= s); 321 - return NULL; 316 + const char *last = NULL; 317 + do { 318 + if (*s == (char)c) 319 + last = s; 320 + } while (*s++); 321 + return (char *)last; 322 322 } 323 323 EXPORT_SYMBOL(strrchr); 324 324 #endif
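The new strrchr() walks the string forward exactly once, remembering the last match, and because the loop body runs before the terminator test, searching for '\0' correctly returns a pointer to the terminating NUL. A userspace copy of the patched function (renamed `my_strrchr` here to avoid shadowing libc):

```c
#include <stddef.h>

/* Forward-scanning strrchr, as in the patched lib/string.c: remember the
 * last match while visiting every byte up to and including the NUL, so
 * a search for '\0' yields a pointer to the terminator. */
static char *my_strrchr(const char *s, int c)
{
	const char *last = NULL;

	do {
		if (*s == (char)c)
			last = s;
	} while (*s++);
	return (char *)last;
}
```

Compared with the old version, this avoids the extra strlen() pass and the backward scan.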
+277
lib/test_kasan.c
··· 1 + /* 2 + * 3 + * Copyright (c) 2014 Samsung Electronics Co., Ltd. 4 + * Author: Andrey Ryabinin <a.ryabinin@samsung.com> 5 + * 6 + * This program is free software; you can redistribute it and/or modify 7 + * it under the terms of the GNU General Public License version 2 as 8 + * published by the Free Software Foundation. 9 + * 10 + */ 11 + 12 + #define pr_fmt(fmt) "kasan test: %s " fmt, __func__ 13 + 14 + #include <linux/kernel.h> 15 + #include <linux/printk.h> 16 + #include <linux/slab.h> 17 + #include <linux/string.h> 18 + #include <linux/module.h> 19 + 20 + static noinline void __init kmalloc_oob_right(void) 21 + { 22 + char *ptr; 23 + size_t size = 123; 24 + 25 + pr_info("out-of-bounds to right\n"); 26 + ptr = kmalloc(size, GFP_KERNEL); 27 + if (!ptr) { 28 + pr_err("Allocation failed\n"); 29 + return; 30 + } 31 + 32 + ptr[size] = 'x'; 33 + kfree(ptr); 34 + } 35 + 36 + static noinline void __init kmalloc_oob_left(void) 37 + { 38 + char *ptr; 39 + size_t size = 15; 40 + 41 + pr_info("out-of-bounds to left\n"); 42 + ptr = kmalloc(size, GFP_KERNEL); 43 + if (!ptr) { 44 + pr_err("Allocation failed\n"); 45 + return; 46 + } 47 + 48 + *ptr = *(ptr - 1); 49 + kfree(ptr); 50 + } 51 + 52 + static noinline void __init kmalloc_node_oob_right(void) 53 + { 54 + char *ptr; 55 + size_t size = 4096; 56 + 57 + pr_info("kmalloc_node(): out-of-bounds to right\n"); 58 + ptr = kmalloc_node(size, GFP_KERNEL, 0); 59 + if (!ptr) { 60 + pr_err("Allocation failed\n"); 61 + return; 62 + } 63 + 64 + ptr[size] = 0; 65 + kfree(ptr); 66 + } 67 + 68 + static noinline void __init kmalloc_large_oob_rigth(void) 69 + { 70 + char *ptr; 71 + size_t size = KMALLOC_MAX_CACHE_SIZE + 10; 72 + 73 + pr_info("kmalloc large allocation: out-of-bounds to right\n"); 74 + ptr = kmalloc(size, GFP_KERNEL); 75 + if (!ptr) { 76 + pr_err("Allocation failed\n"); 77 + return; 78 + } 79 + 80 + ptr[size] = 0; 81 + kfree(ptr); 82 + } 83 + 84 + static noinline void __init kmalloc_oob_krealloc_more(void) 85 + { 86 + char *ptr1, *ptr2; 87 + size_t size1 = 17; 88 + size_t size2 = 19; 89 + 90 + pr_info("out-of-bounds after krealloc more\n"); 91 + ptr1 = kmalloc(size1, GFP_KERNEL); 92 + ptr2 = krealloc(ptr1, size2, GFP_KERNEL); 93 + if (!ptr1 || !ptr2) { 94 + pr_err("Allocation failed\n"); 95 + kfree(ptr1); 96 + return; 97 + } 98 + 99 + ptr2[size2] = 'x'; 100 + kfree(ptr2); 101 + } 102 + 103 + static noinline void __init kmalloc_oob_krealloc_less(void) 104 + { 105 + char *ptr1, *ptr2; 106 + size_t size1 = 17; 107 + size_t size2 = 15; 108 + 109 + pr_info("out-of-bounds after krealloc less\n"); 110 + ptr1 = kmalloc(size1, GFP_KERNEL); 111 + ptr2 = krealloc(ptr1, size2, GFP_KERNEL); 112 + if (!ptr1 || !ptr2) { 113 + pr_err("Allocation failed\n"); 114 + kfree(ptr1); 115 + return; 116 + } 117 + ptr2[size1] = 'x'; 118 + kfree(ptr2); 119 + } 120 + 121 + static noinline void __init kmalloc_oob_16(void) 122 + { 123 + struct { 124 + u64 words[2]; 125 + } *ptr1, *ptr2; 126 + 127 + pr_info("kmalloc out-of-bounds for 16-bytes access\n"); 128 + ptr1 = kmalloc(sizeof(*ptr1) - 3, GFP_KERNEL); 129 + ptr2 = kmalloc(sizeof(*ptr2), GFP_KERNEL); 130 + if (!ptr1 || !ptr2) { 131 + pr_err("Allocation failed\n"); 132 + kfree(ptr1); 133 + kfree(ptr2); 134 + return; 135 + } 136 + *ptr1 = *ptr2; 137 + kfree(ptr1); 138 + kfree(ptr2); 139 + } 140 + 141 + static noinline void __init kmalloc_oob_in_memset(void) 142 + { 143 + char *ptr; 144 + size_t size = 666; 145 + 146 + pr_info("out-of-bounds in memset\n"); 147 + ptr = kmalloc(size, GFP_KERNEL); 148 + if (!ptr) { 149 + pr_err("Allocation failed\n"); 150 + return; 151 + } 152 + 153 + memset(ptr, 0, size+5); 154 + kfree(ptr); 155 + } 156 + 157 + static noinline void __init kmalloc_uaf(void) 158 + { 159 + char *ptr; 160 + size_t size = 10; 161 + 162 + pr_info("use-after-free\n"); 163 + ptr = kmalloc(size, GFP_KERNEL); 164 + if (!ptr) { 165 + pr_err("Allocation failed\n"); 166 + return; 167 + } 168 + 169 + kfree(ptr); 170 + *(ptr + 8) = 'x'; 171 + } 172 + 173 + static noinline void __init kmalloc_uaf_memset(void) 174 + { 175 + char *ptr; 176 + size_t size = 33; 177 + 178 + pr_info("use-after-free in memset\n"); 179 + ptr = kmalloc(size, GFP_KERNEL); 180 + if (!ptr) { 181 + pr_err("Allocation failed\n"); 182 + return; 183 + } 184 + 185 + kfree(ptr); 186 + memset(ptr, 0, size); 187 + } 188 + 189 + static noinline void __init kmalloc_uaf2(void) 190 + { 191 + char *ptr1, *ptr2; 192 + size_t size = 43; 193 + 194 + pr_info("use-after-free after another kmalloc\n"); 195 + ptr1 = kmalloc(size, GFP_KERNEL); 196 + if (!ptr1) { 197 + pr_err("Allocation failed\n"); 198 + return; 199 + } 200 + 201 + kfree(ptr1); 202 + ptr2 = kmalloc(size, GFP_KERNEL); 203 + if (!ptr2) { 204 + pr_err("Allocation failed\n"); 205 + return; 206 + } 207 + 208 + ptr1[40] = 'x'; 209 + kfree(ptr2); 210 + } 211 + 212 + static noinline void __init kmem_cache_oob(void) 213 + { 214 + char *p; 215 + size_t size = 200; 216 + struct kmem_cache *cache = kmem_cache_create("test_cache", 217 + size, 0, 218 + 0, NULL); 219 + if (!cache) { 220 + pr_err("Cache allocation failed\n"); 221 + return; 222 + } 223 + pr_info("out-of-bounds in kmem_cache_alloc\n"); 224 + p = kmem_cache_alloc(cache, GFP_KERNEL); 225 + if (!p) { 226 + pr_err("Allocation failed\n"); 227 + kmem_cache_destroy(cache); 228 + return; 229 + } 230 + 231 + *p = p[size]; 232 + kmem_cache_free(cache, p); 233 + kmem_cache_destroy(cache); 234 + } 235 + 236 + static char global_array[10]; 237 + 238 + static noinline void __init kasan_global_oob(void) 239 + { 240 + volatile int i = 3; 241 + char *p = &global_array[ARRAY_SIZE(global_array) + i]; 242 + 243 + pr_info("out-of-bounds global variable\n"); 244 + *(volatile char *)p; 245 + } 246 + 247 + static noinline void __init kasan_stack_oob(void) 248 + { 249 + char stack_array[10]; 250 + volatile int i = 0; 251 + char *p = &stack_array[ARRAY_SIZE(stack_array) + i]; 252 + 253 + pr_info("out-of-bounds on stack\n"); 254 + *(volatile char *)p; 255 + } 256 + 257 + static int __init kmalloc_tests_init(void) 258 + { 259 + kmalloc_oob_right(); 260 + kmalloc_oob_left(); 261 + kmalloc_node_oob_right(); 262 + kmalloc_large_oob_rigth(); 263 + kmalloc_oob_krealloc_more(); 264 + kmalloc_oob_krealloc_less(); 265 + kmalloc_oob_16(); 266 + kmalloc_oob_in_memset(); 267 + kmalloc_uaf(); 268 + kmalloc_uaf_memset(); 269 + kmalloc_uaf2(); 270 + kmem_cache_oob(); 271 + kasan_stack_oob(); 272 + kasan_global_oob(); 273 + return -EAGAIN; 274 + } 275 + 276 + module_init(kmalloc_tests_init); 277 + MODULE_LICENSE("GPL");
+94
lib/vsprintf.c
··· 794 794 } 795 795 796 796 static noinline_for_stack 797 + char *bitmap_string(char *buf, char *end, unsigned long *bitmap, 798 + struct printf_spec spec, const char *fmt) 799 + { 800 + const int CHUNKSZ = 32; 801 + int nr_bits = max_t(int, spec.field_width, 0); 802 + int i, chunksz; 803 + bool first = true; 804 + 805 + /* reused to print numbers */ 806 + spec = (struct printf_spec){ .flags = SMALL | ZEROPAD, .base = 16 }; 807 + 808 + chunksz = nr_bits & (CHUNKSZ - 1); 809 + if (chunksz == 0) 810 + chunksz = CHUNKSZ; 811 + 812 + i = ALIGN(nr_bits, CHUNKSZ) - CHUNKSZ; 813 + for (; i >= 0; i -= CHUNKSZ) { 814 + u32 chunkmask, val; 815 + int word, bit; 816 + 817 + chunkmask = ((1ULL << chunksz) - 1); 818 + word = i / BITS_PER_LONG; 819 + bit = i % BITS_PER_LONG; 820 + val = (bitmap[word] >> bit) & chunkmask; 821 + 822 + if (!first) { 823 + if (buf < end) 824 + *buf = ','; 825 + buf++; 826 + } 827 + first = false; 828 + 829 + spec.field_width = DIV_ROUND_UP(chunksz, 4); 830 + buf = number(buf, end, val, spec); 831 + 832 + chunksz = CHUNKSZ; 833 + } 834 + return buf; 835 + } 836 + 837 + static noinline_for_stack 838 + char *bitmap_list_string(char *buf, char *end, unsigned long *bitmap, 839 + struct printf_spec spec, const char *fmt) 840 + { 841 + int nr_bits = max_t(int, spec.field_width, 0); 842 + /* current bit is 'cur', most recently seen range is [rbot, rtop] */ 843 + int cur, rbot, rtop; 844 + bool first = true; 845 + 846 + /* reused to print numbers */ 847 + spec = (struct printf_spec){ .base = 10 }; 848 + 849 + rbot = cur = find_first_bit(bitmap, nr_bits); 850 + while (cur < nr_bits) { 851 + rtop = cur; 852 + cur = find_next_bit(bitmap, nr_bits, cur + 1); 853 + if (cur < nr_bits && cur <= rtop + 1) 854 + continue; 855 + 856 + if (!first) { 857 + if (buf < end) 858 + *buf = ','; 859 + buf++; 860 + } 861 + first = false; 862 + 863 + buf = number(buf, end, rbot, spec); 864 + if (rbot < rtop) { 865 + if (buf < end) 866 + *buf = '-'; 867 + buf++; 868 + 869 + buf = number(buf, end, rtop, spec); 870 + } 871 + 872 + rbot = cur; 873 + } 874 + return buf; 875 + } 876 + 877 + static noinline_for_stack 797 878 char *mac_address_string(char *buf, char *end, u8 *addr, 798 879 struct printf_spec spec, const char *fmt) 799 880 { ··· 1339 1258 * - 'B' For backtraced symbolic direct pointers with offset 1340 1259 * - 'R' For decoded struct resource, e.g., [mem 0x0-0x1f 64bit pref] 1341 1260 * - 'r' For raw struct resource, e.g., [mem 0x0-0x1f flags 0x201] 1261 + * - 'b[l]' For a bitmap, the number of bits is determined by the field 1262 + * width which must be explicitly specified either as part of the 1263 + * format string '%32b[l]' or through '%*b[l]', [l] selects 1264 + * range-list format instead of hex format 1342 1265 * - 'M' For a 6-byte MAC address, it prints the address in the 1343 1266 * usual colon-separated hex notation 1344 1267 * - 'm' For a 6-byte MAC address, it prints the hex address without colons ··· 1439 1354 return resource_string(buf, end, ptr, spec, fmt); 1440 1355 case 'h': 1441 1356 return hex_string(buf, end, ptr, spec, fmt); 1357 + case 'b': 1358 + switch (fmt[1]) { 1359 + case 'l': 1360 + return bitmap_list_string(buf, end, ptr, spec, fmt); 1361 + default: 1362 + return bitmap_string(buf, end, ptr, spec, fmt); 1363 + } 1442 1364 case 'M': /* Colon separated: 00:01:02:03:04:05 */ 1443 1365 case 'm': /* Contiguous: 000102030405 */ 1444 1366 /* [mM]F (FDDI) */ ··· 1781 1689 * %pB output the name of a backtrace symbol with its offset 1782 1690 * %pR output the address range in a struct resource with decoded flags 1783 1691 * %pr output the address range in a struct resource with raw flags 1692 + * %pb output the bitmap with field width as the number of bits 1693 + * %pbl output the bitmap as range list with field width as the number of bits 1784 1694 * %pM output a 6-byte MAC address with colons 1785 1695 * %pMR output a 6-byte MAC address with colons in reversed order 1786 1696 * %pMF output a 6-byte MAC address with dashes
+4
mm/Makefile
··· 2 2 # Makefile for the linux memory manager. 3 3 # 4 4 5 + KASAN_SANITIZE_slab_common.o := n 6 + KASAN_SANITIZE_slub.o := n 7 + 5 8 mmu-y := nommu.o 6 9 mmu-$(CONFIG_MMU) := gup.o highmem.o memory.o mincore.o \ 7 10 mlock.o mmap.o mprotect.o mremap.o msync.o rmap.o \ ··· 52 49 obj-$(CONFIG_SLAB) += slab.o 53 50 obj-$(CONFIG_SLUB) += slub.o 54 51 obj-$(CONFIG_KMEMCHECK) += kmemcheck.o 52 + obj-$(CONFIG_KASAN) += kasan/ 55 53 obj-$(CONFIG_FAILSLAB) += failslab.o 56 54 obj-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o 57 55 obj-$(CONFIG_FS_XIP) += filemap_xip.o
+2
mm/compaction.c
··· 16 16 #include <linux/sysfs.h> 17 17 #include <linux/balloon_compaction.h> 18 18 #include <linux/page-isolation.h> 19 + #include <linux/kasan.h> 19 20 #include "internal.h" 20 21 21 22 #ifdef CONFIG_COMPACTION ··· 73 72 list_for_each_entry(page, list, lru) { 74 73 arch_alloc_page(page, 0); 75 74 kernel_map_pages(page, 1, 1); 75 + kasan_alloc_pages(page, 0); 76 76 } 77 77 } 78 78
+8
mm/kasan/Makefile
··· 1 + KASAN_SANITIZE := n 2 + 3 + CFLAGS_REMOVE_kasan.o = -pg 4 + # Function splitter causes unnecessary splits in __asan_load1/__asan_store1 5 + # see: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63533 6 + CFLAGS_kasan.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector) 7 + 8 + obj-y := kasan.o report.o
+516
mm/kasan/kasan.c
··· 1 + /* 2 + * This file contains shadow memory manipulation code. 3 + * 4 + * Copyright (c) 2014 Samsung Electronics Co., Ltd. 5 + * Author: Andrey Ryabinin <a.ryabinin@samsung.com> 6 + * 7 + * Some of code borrowed from https://github.com/xairy/linux by 8 + * Andrey Konovalov <adech.fo@gmail.com> 9 + * 10 + * This program is free software; you can redistribute it and/or modify 11 + * it under the terms of the GNU General Public License version 2 as 12 + * published by the Free Software Foundation. 13 + * 14 + */ 15 + 16 + #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 17 + #define DISABLE_BRANCH_PROFILING 18 + 19 + #include <linux/export.h> 20 + #include <linux/init.h> 21 + #include <linux/kernel.h> 22 + #include <linux/memblock.h> 23 + #include <linux/memory.h> 24 + #include <linux/mm.h> 25 + #include <linux/module.h> 26 + #include <linux/printk.h> 27 + #include <linux/sched.h> 28 + #include <linux/slab.h> 29 + #include <linux/stacktrace.h> 30 + #include <linux/string.h> 31 + #include <linux/types.h> 32 + #include <linux/kasan.h> 33 + 34 + #include "kasan.h" 35 + #include "../slab.h" 36 + 37 + /* 38 + * Poisons the shadow memory for 'size' bytes starting from 'addr'. 39 + * Memory addresses should be aligned to KASAN_SHADOW_SCALE_SIZE. 
40 + */ 41 + static void kasan_poison_shadow(const void *address, size_t size, u8 value) 42 + { 43 + void *shadow_start, *shadow_end; 44 + 45 + shadow_start = kasan_mem_to_shadow(address); 46 + shadow_end = kasan_mem_to_shadow(address + size); 47 + 48 + memset(shadow_start, value, shadow_end - shadow_start); 49 + } 50 + 51 + void kasan_unpoison_shadow(const void *address, size_t size) 52 + { 53 + kasan_poison_shadow(address, size, 0); 54 + 55 + if (size & KASAN_SHADOW_MASK) { 56 + u8 *shadow = (u8 *)kasan_mem_to_shadow(address + size); 57 + *shadow = size & KASAN_SHADOW_MASK; 58 + } 59 + } 60 + 61 + 62 + /* 63 + * All functions below are always inlined so the compiler can 64 + * perform better optimizations in each of __asan_loadX/__asan_storeX 65 + * depending on memory access size X. 66 + */ 67 + 68 + static __always_inline bool memory_is_poisoned_1(unsigned long addr) 69 + { 70 + s8 shadow_value = *(s8 *)kasan_mem_to_shadow((void *)addr); 71 + 72 + if (unlikely(shadow_value)) { 73 + s8 last_accessible_byte = addr & KASAN_SHADOW_MASK; 74 + return unlikely(last_accessible_byte >= shadow_value); 75 + } 76 + 77 + return false; 78 + } 79 + 80 + static __always_inline bool memory_is_poisoned_2(unsigned long addr) 81 + { 82 + u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr); 83 + 84 + if (unlikely(*shadow_addr)) { 85 + if (memory_is_poisoned_1(addr + 1)) 86 + return true; 87 + 88 + if (likely(((addr + 1) & KASAN_SHADOW_MASK) != 0)) 89 + return false; 90 + 91 + return unlikely(*(u8 *)shadow_addr); 92 + } 93 + 94 + return false; 95 + } 96 + 97 + static __always_inline bool memory_is_poisoned_4(unsigned long addr) 98 + { 99 + u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr); 100 + 101 + if (unlikely(*shadow_addr)) { 102 + if (memory_is_poisoned_1(addr + 3)) 103 + return true; 104 + 105 + if (likely(((addr + 3) & KASAN_SHADOW_MASK) >= 3)) 106 + return false; 107 + 108 + return unlikely(*(u8 *)shadow_addr); 109 + } 110 + 111 + return false; 112 + } 113 + 114
+ static __always_inline bool memory_is_poisoned_8(unsigned long addr) 115 + { 116 + u16 *shadow_addr = (u16 *)kasan_mem_to_shadow((void *)addr); 117 + 118 + if (unlikely(*shadow_addr)) { 119 + if (memory_is_poisoned_1(addr + 7)) 120 + return true; 121 + 122 + if (likely(((addr + 7) & KASAN_SHADOW_MASK) >= 7)) 123 + return false; 124 + 125 + return unlikely(*(u8 *)shadow_addr); 126 + } 127 + 128 + return false; 129 + } 130 + 131 + static __always_inline bool memory_is_poisoned_16(unsigned long addr) 132 + { 133 + u32 *shadow_addr = (u32 *)kasan_mem_to_shadow((void *)addr); 134 + 135 + if (unlikely(*shadow_addr)) { 136 + u16 shadow_first_bytes = *(u16 *)shadow_addr; 137 + s8 last_byte = (addr + 15) & KASAN_SHADOW_MASK; 138 + 139 + if (unlikely(shadow_first_bytes)) 140 + return true; 141 + 142 + if (likely(!last_byte)) 143 + return false; 144 + 145 + return memory_is_poisoned_1(addr + 15); 146 + } 147 + 148 + return false; 149 + } 150 + 151 + static __always_inline unsigned long bytes_is_zero(const u8 *start, 152 + size_t size) 153 + { 154 + while (size) { 155 + if (unlikely(*start)) 156 + return (unsigned long)start; 157 + start++; 158 + size--; 159 + } 160 + 161 + return 0; 162 + } 163 + 164 + static __always_inline unsigned long memory_is_zero(const void *start, 165 + const void *end) 166 + { 167 + unsigned int words; 168 + unsigned long ret; 169 + unsigned int prefix = (unsigned long)start % 8; 170 + 171 + if (end - start <= 16) 172 + return bytes_is_zero(start, end - start); 173 + 174 + if (prefix) { 175 + prefix = 8 - prefix; 176 + ret = bytes_is_zero(start, prefix); 177 + if (unlikely(ret)) 178 + return ret; 179 + start += prefix; 180 + } 181 + 182 + words = (end - start) / 8; 183 + while (words) { 184 + if (unlikely(*(u64 *)start)) 185 + return bytes_is_zero(start, 8); 186 + start += 8; 187 + words--; 188 + } 189 + 190 + return bytes_is_zero(start, (end - start) % 8); 191 + } 192 + 193 + static __always_inline bool memory_is_poisoned_n(unsigned long addr, 194 
+ size_t size) 195 + { 196 + unsigned long ret; 197 + 198 + ret = memory_is_zero(kasan_mem_to_shadow((void *)addr), 199 + kasan_mem_to_shadow((void *)addr + size - 1) + 1); 200 + 201 + if (unlikely(ret)) { 202 + unsigned long last_byte = addr + size - 1; 203 + s8 *last_shadow = (s8 *)kasan_mem_to_shadow((void *)last_byte); 204 + 205 + if (unlikely(ret != (unsigned long)last_shadow || 206 + ((last_byte & KASAN_SHADOW_MASK) >= *last_shadow))) 207 + return true; 208 + } 209 + return false; 210 + } 211 + 212 + static __always_inline bool memory_is_poisoned(unsigned long addr, size_t size) 213 + { 214 + if (__builtin_constant_p(size)) { 215 + switch (size) { 216 + case 1: 217 + return memory_is_poisoned_1(addr); 218 + case 2: 219 + return memory_is_poisoned_2(addr); 220 + case 4: 221 + return memory_is_poisoned_4(addr); 222 + case 8: 223 + return memory_is_poisoned_8(addr); 224 + case 16: 225 + return memory_is_poisoned_16(addr); 226 + default: 227 + BUILD_BUG(); 228 + } 229 + } 230 + 231 + return memory_is_poisoned_n(addr, size); 232 + } 233 + 234 + 235 + static __always_inline void check_memory_region(unsigned long addr, 236 + size_t size, bool write) 237 + { 238 + struct kasan_access_info info; 239 + 240 + if (unlikely(size == 0)) 241 + return; 242 + 243 + if (unlikely((void *)addr < 244 + kasan_shadow_to_mem((void *)KASAN_SHADOW_START))) { 245 + info.access_addr = (void *)addr; 246 + info.access_size = size; 247 + info.is_write = write; 248 + info.ip = _RET_IP_; 249 + kasan_report_user_access(&info); 250 + return; 251 + } 252 + 253 + if (likely(!memory_is_poisoned(addr, size))) 254 + return; 255 + 256 + kasan_report(addr, size, write, _RET_IP_); 257 + } 258 + 259 + void __asan_loadN(unsigned long addr, size_t size); 260 + void __asan_storeN(unsigned long addr, size_t size); 261 + 262 + #undef memset 263 + void *memset(void *addr, int c, size_t len) 264 + { 265 + __asan_storeN((unsigned long)addr, len); 266 + 267 + return __memset(addr, c, len); 268 + } 269 + 270 + 
#undef memmove 271 + void *memmove(void *dest, const void *src, size_t len) 272 + { 273 + __asan_loadN((unsigned long)src, len); 274 + __asan_storeN((unsigned long)dest, len); 275 + 276 + return __memmove(dest, src, len); 277 + } 278 + 279 + #undef memcpy 280 + void *memcpy(void *dest, const void *src, size_t len) 281 + { 282 + __asan_loadN((unsigned long)src, len); 283 + __asan_storeN((unsigned long)dest, len); 284 + 285 + return __memcpy(dest, src, len); 286 + } 287 + 288 + void kasan_alloc_pages(struct page *page, unsigned int order) 289 + { 290 + if (likely(!PageHighMem(page))) 291 + kasan_unpoison_shadow(page_address(page), PAGE_SIZE << order); 292 + } 293 + 294 + void kasan_free_pages(struct page *page, unsigned int order) 295 + { 296 + if (likely(!PageHighMem(page))) 297 + kasan_poison_shadow(page_address(page), 298 + PAGE_SIZE << order, 299 + KASAN_FREE_PAGE); 300 + } 301 + 302 + void kasan_poison_slab(struct page *page) 303 + { 304 + kasan_poison_shadow(page_address(page), 305 + PAGE_SIZE << compound_order(page), 306 + KASAN_KMALLOC_REDZONE); 307 + } 308 + 309 + void kasan_unpoison_object_data(struct kmem_cache *cache, void *object) 310 + { 311 + kasan_unpoison_shadow(object, cache->object_size); 312 + } 313 + 314 + void kasan_poison_object_data(struct kmem_cache *cache, void *object) 315 + { 316 + kasan_poison_shadow(object, 317 + round_up(cache->object_size, KASAN_SHADOW_SCALE_SIZE), 318 + KASAN_KMALLOC_REDZONE); 319 + } 320 + 321 + void kasan_slab_alloc(struct kmem_cache *cache, void *object) 322 + { 323 + kasan_kmalloc(cache, object, cache->object_size); 324 + } 325 + 326 + void kasan_slab_free(struct kmem_cache *cache, void *object) 327 + { 328 + unsigned long size = cache->object_size; 329 + unsigned long rounded_up_size = round_up(size, KASAN_SHADOW_SCALE_SIZE); 330 + 331 + /* RCU slabs could be legally used after free within the RCU period */ 332 + if (unlikely(cache->flags & SLAB_DESTROY_BY_RCU)) 333 + return; 334 + 335 + 
kasan_poison_shadow(object, rounded_up_size, KASAN_KMALLOC_FREE); 336 + } 337 + 338 + void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size) 339 + { 340 + unsigned long redzone_start; 341 + unsigned long redzone_end; 342 + 343 + if (unlikely(object == NULL)) 344 + return; 345 + 346 + redzone_start = round_up((unsigned long)(object + size), 347 + KASAN_SHADOW_SCALE_SIZE); 348 + redzone_end = round_up((unsigned long)object + cache->object_size, 349 + KASAN_SHADOW_SCALE_SIZE); 350 + 351 + kasan_unpoison_shadow(object, size); 352 + kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start, 353 + KASAN_KMALLOC_REDZONE); 354 + } 355 + EXPORT_SYMBOL(kasan_kmalloc); 356 + 357 + void kasan_kmalloc_large(const void *ptr, size_t size) 358 + { 359 + struct page *page; 360 + unsigned long redzone_start; 361 + unsigned long redzone_end; 362 + 363 + if (unlikely(ptr == NULL)) 364 + return; 365 + 366 + page = virt_to_page(ptr); 367 + redzone_start = round_up((unsigned long)(ptr + size), 368 + KASAN_SHADOW_SCALE_SIZE); 369 + redzone_end = (unsigned long)ptr + (PAGE_SIZE << compound_order(page)); 370 + 371 + kasan_unpoison_shadow(ptr, size); 372 + kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start, 373 + KASAN_PAGE_REDZONE); 374 + } 375 + 376 + void kasan_krealloc(const void *object, size_t size) 377 + { 378 + struct page *page; 379 + 380 + if (unlikely(object == ZERO_SIZE_PTR)) 381 + return; 382 + 383 + page = virt_to_head_page(object); 384 + 385 + if (unlikely(!PageSlab(page))) 386 + kasan_kmalloc_large(object, size); 387 + else 388 + kasan_kmalloc(page->slab_cache, object, size); 389 + } 390 + 391 + void kasan_kfree_large(const void *ptr) 392 + { 393 + struct page *page = virt_to_page(ptr); 394 + 395 + kasan_poison_shadow(ptr, PAGE_SIZE << compound_order(page), 396 + KASAN_FREE_PAGE); 397 + } 398 + 399 + int kasan_module_alloc(void *addr, size_t size) 400 + { 401 + void *ret; 402 + size_t shadow_size; 403 + unsigned long 
shadow_start; 404 + 405 + shadow_start = (unsigned long)kasan_mem_to_shadow(addr); 406 + shadow_size = round_up(size >> KASAN_SHADOW_SCALE_SHIFT, 407 + PAGE_SIZE); 408 + 409 + if (WARN_ON(!PAGE_ALIGNED(shadow_start))) 410 + return -EINVAL; 411 + 412 + ret = __vmalloc_node_range(shadow_size, 1, shadow_start, 413 + shadow_start + shadow_size, 414 + GFP_KERNEL | __GFP_HIGHMEM | __GFP_ZERO, 415 + PAGE_KERNEL, VM_NO_GUARD, NUMA_NO_NODE, 416 + __builtin_return_address(0)); 417 + return ret ? 0 : -ENOMEM; 418 + } 419 + 420 + void kasan_module_free(void *addr) 421 + { 422 + vfree(kasan_mem_to_shadow(addr)); 423 + } 424 + 425 + static void register_global(struct kasan_global *global) 426 + { 427 + size_t aligned_size = round_up(global->size, KASAN_SHADOW_SCALE_SIZE); 428 + 429 + kasan_unpoison_shadow(global->beg, global->size); 430 + 431 + kasan_poison_shadow(global->beg + aligned_size, 432 + global->size_with_redzone - aligned_size, 433 + KASAN_GLOBAL_REDZONE); 434 + } 435 + 436 + void __asan_register_globals(struct kasan_global *globals, size_t size) 437 + { 438 + int i; 439 + 440 + for (i = 0; i < size; i++) 441 + register_global(&globals[i]); 442 + } 443 + EXPORT_SYMBOL(__asan_register_globals); 444 + 445 + void __asan_unregister_globals(struct kasan_global *globals, size_t size) 446 + { 447 + } 448 + EXPORT_SYMBOL(__asan_unregister_globals); 449 + 450 + #define DEFINE_ASAN_LOAD_STORE(size) \ 451 + void __asan_load##size(unsigned long addr) \ 452 + { \ 453 + check_memory_region(addr, size, false); \ 454 + } \ 455 + EXPORT_SYMBOL(__asan_load##size); \ 456 + __alias(__asan_load##size) \ 457 + void __asan_load##size##_noabort(unsigned long); \ 458 + EXPORT_SYMBOL(__asan_load##size##_noabort); \ 459 + void __asan_store##size(unsigned long addr) \ 460 + { \ 461 + check_memory_region(addr, size, true); \ 462 + } \ 463 + EXPORT_SYMBOL(__asan_store##size); \ 464 + __alias(__asan_store##size) \ 465 + void __asan_store##size##_noabort(unsigned long); \ 466 + 
EXPORT_SYMBOL(__asan_store##size##_noabort) 467 + 468 + DEFINE_ASAN_LOAD_STORE(1); 469 + DEFINE_ASAN_LOAD_STORE(2); 470 + DEFINE_ASAN_LOAD_STORE(4); 471 + DEFINE_ASAN_LOAD_STORE(8); 472 + DEFINE_ASAN_LOAD_STORE(16); 473 + 474 + void __asan_loadN(unsigned long addr, size_t size) 475 + { 476 + check_memory_region(addr, size, false); 477 + } 478 + EXPORT_SYMBOL(__asan_loadN); 479 + 480 + __alias(__asan_loadN) 481 + void __asan_loadN_noabort(unsigned long, size_t); 482 + EXPORT_SYMBOL(__asan_loadN_noabort); 483 + 484 + void __asan_storeN(unsigned long addr, size_t size) 485 + { 486 + check_memory_region(addr, size, true); 487 + } 488 + EXPORT_SYMBOL(__asan_storeN); 489 + 490 + __alias(__asan_storeN) 491 + void __asan_storeN_noabort(unsigned long, size_t); 492 + EXPORT_SYMBOL(__asan_storeN_noabort); 493 + 494 + /* to shut up compiler complaints */ 495 + void __asan_handle_no_return(void) {} 496 + EXPORT_SYMBOL(__asan_handle_no_return); 497 + 498 + #ifdef CONFIG_MEMORY_HOTPLUG 499 + static int kasan_mem_notifier(struct notifier_block *nb, 500 + unsigned long action, void *data) 501 + { 502 + return (action == MEM_GOING_ONLINE) ? NOTIFY_BAD : NOTIFY_OK; 503 + } 504 + 505 + static int __init kasan_memhotplug_init(void) 506 + { 507 + pr_err("WARNING: KASan doesn't support memory hot-add\n"); 508 + pr_err("Memory hot-add will be disabled\n"); 509 + 510 + hotplug_memory_notifier(kasan_mem_notifier, 0); 511 + 512 + return 0; 513 + } 514 + 515 + module_init(kasan_memhotplug_init); 516 + #endif
+75
mm/kasan/kasan.h
··· 1 + #ifndef __MM_KASAN_KASAN_H 2 + #define __MM_KASAN_KASAN_H 3 + 4 + #include <linux/kasan.h> 5 + 6 + #define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT) 7 + #define KASAN_SHADOW_MASK (KASAN_SHADOW_SCALE_SIZE - 1) 8 + 9 + #define KASAN_FREE_PAGE 0xFF /* page was freed */ 11 + #define KASAN_PAGE_REDZONE 0xFE /* redzone for kmalloc_large allocations */ 12 + #define KASAN_KMALLOC_REDZONE 0xFC /* redzone inside slub object */ 13 + #define KASAN_KMALLOC_FREE 0xFB /* object was freed (kmem_cache_free/kfree) */ 14 + #define KASAN_GLOBAL_REDZONE 0xFA /* redzone for global variable */ 15 + 16 + /* 17 + * Stack redzone shadow values 18 + * (These are part of the compiler's ABI, don't change them) 19 + */ 20 + #define KASAN_STACK_LEFT 0xF1 21 + #define KASAN_STACK_MID 0xF2 22 + #define KASAN_STACK_RIGHT 0xF3 23 + #define KASAN_STACK_PARTIAL 0xF4 24 + 25 + /* Don't break randconfig/all*config builds */ 26 + #ifndef KASAN_ABI_VERSION 27 + #define KASAN_ABI_VERSION 1 28 + #endif 29 + 30 + struct kasan_access_info { 31 + const void *access_addr; 32 + const void *first_bad_addr; 33 + size_t access_size; 34 + bool is_write; 35 + unsigned long ip; 36 + }; 37 + 38 + /* The layout of this struct is dictated by the compiler */ 39 + struct kasan_source_location { 40 + const char *filename; 41 + int line_no; 42 + int column_no; 43 + }; 44 + 45 + /* The layout of this struct is dictated by the compiler */ 46 + struct kasan_global { 47 + const void *beg; /* Address of the beginning of the global variable. */ 48 + size_t size; /* Size of the global variable. */ 49 + size_t size_with_redzone; /* Size of the variable + size of the red zone. 32-byte aligned */ 50 + const void *name; 51 + const void *module_name; /* Name of the module where the global variable is declared.
*/ 52 + unsigned long has_dynamic_init; /* This is needed for C++ */ 53 + #if KASAN_ABI_VERSION >= 4 54 + struct kasan_source_location *location; 55 + #endif 56 + }; 57 + 58 + void kasan_report_error(struct kasan_access_info *info); 59 + void kasan_report_user_access(struct kasan_access_info *info); 60 + 61 + static inline const void *kasan_shadow_to_mem(const void *shadow_addr) 62 + { 63 + return (void *)(((unsigned long)shadow_addr - KASAN_SHADOW_OFFSET) 64 + << KASAN_SHADOW_SCALE_SHIFT); 65 + } 66 + 67 + static inline bool kasan_enabled(void) 68 + { 69 + return !current->kasan_depth; 70 + } 71 + 72 + void kasan_report(unsigned long addr, size_t size, 73 + bool is_write, unsigned long ip); 74 + 75 + #endif
+269
mm/kasan/report.c
··· 1 + /* 2 + * This file contains error reporting code. 3 + * 4 + * Copyright (c) 2014 Samsung Electronics Co., Ltd. 5 + * Author: Andrey Ryabinin <a.ryabinin@samsung.com> 6 + * 7 + * Some of code borrowed from https://github.com/xairy/linux by 8 + * Andrey Konovalov <adech.fo@gmail.com> 9 + * 10 + * This program is free software; you can redistribute it and/or modify 11 + * it under the terms of the GNU General Public License version 2 as 12 + * published by the Free Software Foundation. 13 + * 14 + */ 15 + 16 + #include <linux/kernel.h> 17 + #include <linux/mm.h> 18 + #include <linux/printk.h> 19 + #include <linux/sched.h> 20 + #include <linux/slab.h> 21 + #include <linux/stacktrace.h> 22 + #include <linux/string.h> 23 + #include <linux/types.h> 24 + #include <linux/kasan.h> 25 + 26 + #include <asm/sections.h> 27 + 28 + #include "kasan.h" 29 + #include "../slab.h" 30 + 31 + /* Shadow layout customization. */ 32 + #define SHADOW_BYTES_PER_BLOCK 1 33 + #define SHADOW_BLOCKS_PER_ROW 16 34 + #define SHADOW_BYTES_PER_ROW (SHADOW_BLOCKS_PER_ROW * SHADOW_BYTES_PER_BLOCK) 35 + #define SHADOW_ROWS_AROUND_ADDR 2 36 + 37 + static const void *find_first_bad_addr(const void *addr, size_t size) 38 + { 39 + u8 shadow_val = *(u8 *)kasan_mem_to_shadow(addr); 40 + const void *first_bad_addr = addr; 41 + 42 + while (!shadow_val && first_bad_addr < addr + size) { 43 + first_bad_addr += KASAN_SHADOW_SCALE_SIZE; 44 + shadow_val = *(u8 *)kasan_mem_to_shadow(first_bad_addr); 45 + } 46 + return first_bad_addr; 47 + } 48 + 49 + static void print_error_description(struct kasan_access_info *info) 50 + { 51 + const char *bug_type = "unknown crash"; 52 + u8 shadow_val; 53 + 54 + info->first_bad_addr = find_first_bad_addr(info->access_addr, 55 + info->access_size); 56 + 57 + shadow_val = *(u8 *)kasan_mem_to_shadow(info->first_bad_addr); 58 + 59 + switch (shadow_val) { 60 + case KASAN_FREE_PAGE: 61 + case KASAN_KMALLOC_FREE: 62 + bug_type = "use after free"; 63 + break; 64 + case 
KASAN_PAGE_REDZONE: 65 + case KASAN_KMALLOC_REDZONE: 66 + case KASAN_GLOBAL_REDZONE: 67 + case 0 ... KASAN_SHADOW_SCALE_SIZE - 1: 68 + bug_type = "out of bounds access"; 69 + break; 70 + case KASAN_STACK_LEFT: 71 + case KASAN_STACK_MID: 72 + case KASAN_STACK_RIGHT: 73 + case KASAN_STACK_PARTIAL: 74 + bug_type = "out of bounds on stack"; 75 + break; 76 + } 77 + 78 + pr_err("BUG: KASan: %s in %pS at addr %p\n", 79 + bug_type, (void *)info->ip, 80 + info->access_addr); 81 + pr_err("%s of size %zu by task %s/%d\n", 82 + info->is_write ? "Write" : "Read", 83 + info->access_size, current->comm, task_pid_nr(current)); 84 + } 85 + 86 + static inline bool kernel_or_module_addr(const void *addr) 87 + { 88 + return (addr >= (void *)_stext && addr < (void *)_end) 89 + || (addr >= (void *)MODULES_VADDR 90 + && addr < (void *)MODULES_END); 91 + } 92 + 93 + static inline bool init_task_stack_addr(const void *addr) 94 + { 95 + return addr >= (void *)&init_thread_union.stack && 96 + (addr <= (void *)&init_thread_union.stack + 97 + sizeof(init_thread_union.stack)); 98 + } 99 + 100 + static void print_address_description(struct kasan_access_info *info) 101 + { 102 + const void *addr = info->access_addr; 103 + 104 + if ((addr >= (void *)PAGE_OFFSET) && 105 + (addr < high_memory)) { 106 + struct page *page = virt_to_head_page(addr); 107 + 108 + if (PageSlab(page)) { 109 + void *object; 110 + struct kmem_cache *cache = page->slab_cache; 111 + void *last_object; 112 + 113 + object = virt_to_obj(cache, page_address(page), addr); 114 + last_object = page_address(page) + 115 + page->objects * cache->size; 116 + 117 + if (unlikely(object > last_object)) 118 + object = last_object; /* we hit into padding */ 119 + 120 + object_err(cache, page, object, 121 + "kasan: bad access detected"); 122 + return; 123 + } 124 + dump_page(page, "kasan: bad access detected"); 125 + } 126 + 127 + if (kernel_or_module_addr(addr)) { 128 + if (!init_task_stack_addr(addr)) 129 + pr_err("Address belongs to 
variable %pS\n", addr); 130 + } 131 + 132 + dump_stack(); 133 + } 134 + 135 + static bool row_is_guilty(const void *row, const void *guilty) 136 + { 137 + return (row <= guilty) && (guilty < row + SHADOW_BYTES_PER_ROW); 138 + } 139 + 140 + static int shadow_pointer_offset(const void *row, const void *shadow) 141 + { 142 + /* The length of ">ff00ff00ff00ff00: " is 143 + * 3 + (BITS_PER_LONG/8)*2 chars. 144 + */ 145 + return 3 + (BITS_PER_LONG/8)*2 + (shadow - row)*2 + 146 + (shadow - row) / SHADOW_BYTES_PER_BLOCK + 1; 147 + } 148 + 149 + static void print_shadow_for_address(const void *addr) 150 + { 151 + int i; 152 + const void *shadow = kasan_mem_to_shadow(addr); 153 + const void *shadow_row; 154 + 155 + shadow_row = (void *)round_down((unsigned long)shadow, 156 + SHADOW_BYTES_PER_ROW) 157 + - SHADOW_ROWS_AROUND_ADDR * SHADOW_BYTES_PER_ROW; 158 + 159 + pr_err("Memory state around the buggy address:\n"); 160 + 161 + for (i = -SHADOW_ROWS_AROUND_ADDR; i <= SHADOW_ROWS_AROUND_ADDR; i++) { 162 + const void *kaddr = kasan_shadow_to_mem(shadow_row); 163 + char buffer[4 + (BITS_PER_LONG/8)*2]; 164 + 165 + snprintf(buffer, sizeof(buffer), 166 + (i == 0) ? 
">%p: " : " %p: ", kaddr); 167 + 168 + kasan_disable_current(); 169 + print_hex_dump(KERN_ERR, buffer, 170 + DUMP_PREFIX_NONE, SHADOW_BYTES_PER_ROW, 1, 171 + shadow_row, SHADOW_BYTES_PER_ROW, 0); 172 + kasan_enable_current(); 173 + 174 + if (row_is_guilty(shadow_row, shadow)) 175 + pr_err("%*c\n", 176 + shadow_pointer_offset(shadow_row, shadow), 177 + '^'); 178 + 179 + shadow_row += SHADOW_BYTES_PER_ROW; 180 + } 181 + } 182 + 183 + static DEFINE_SPINLOCK(report_lock); 184 + 185 + void kasan_report_error(struct kasan_access_info *info) 186 + { 187 + unsigned long flags; 188 + 189 + spin_lock_irqsave(&report_lock, flags); 190 + pr_err("=================================" 191 + "=================================\n"); 192 + print_error_description(info); 193 + print_address_description(info); 194 + print_shadow_for_address(info->first_bad_addr); 195 + pr_err("=================================" 196 + "=================================\n"); 197 + spin_unlock_irqrestore(&report_lock, flags); 198 + } 199 + 200 + void kasan_report_user_access(struct kasan_access_info *info) 201 + { 202 + unsigned long flags; 203 + 204 + spin_lock_irqsave(&report_lock, flags); 205 + pr_err("=================================" 206 + "=================================\n"); 207 + pr_err("BUG: KASan: user-memory-access on address %p\n", 208 + info->access_addr); 209 + pr_err("%s of size %zu by task %s/%d\n", 210 + info->is_write ? 
"Write" : "Read", 211 + info->access_size, current->comm, task_pid_nr(current)); 212 + dump_stack(); 213 + pr_err("=================================" 214 + "=================================\n"); 215 + spin_unlock_irqrestore(&report_lock, flags); 216 + } 217 + 218 + void kasan_report(unsigned long addr, size_t size, 219 + bool is_write, unsigned long ip) 220 + { 221 + struct kasan_access_info info; 222 + 223 + if (likely(!kasan_enabled())) 224 + return; 225 + 226 + info.access_addr = (void *)addr; 227 + info.access_size = size; 228 + info.is_write = is_write; 229 + info.ip = ip; 230 + kasan_report_error(&info); 231 + } 232 + 233 + 234 + #define DEFINE_ASAN_REPORT_LOAD(size) \ 235 + void __asan_report_load##size##_noabort(unsigned long addr) \ 236 + { \ 237 + kasan_report(addr, size, false, _RET_IP_); \ 238 + } \ 239 + EXPORT_SYMBOL(__asan_report_load##size##_noabort) 240 + 241 + #define DEFINE_ASAN_REPORT_STORE(size) \ 242 + void __asan_report_store##size##_noabort(unsigned long addr) \ 243 + { \ 244 + kasan_report(addr, size, true, _RET_IP_); \ 245 + } \ 246 + EXPORT_SYMBOL(__asan_report_store##size##_noabort) 247 + 248 + DEFINE_ASAN_REPORT_LOAD(1); 249 + DEFINE_ASAN_REPORT_LOAD(2); 250 + DEFINE_ASAN_REPORT_LOAD(4); 251 + DEFINE_ASAN_REPORT_LOAD(8); 252 + DEFINE_ASAN_REPORT_LOAD(16); 253 + DEFINE_ASAN_REPORT_STORE(1); 254 + DEFINE_ASAN_REPORT_STORE(2); 255 + DEFINE_ASAN_REPORT_STORE(4); 256 + DEFINE_ASAN_REPORT_STORE(8); 257 + DEFINE_ASAN_REPORT_STORE(16); 258 + 259 + void __asan_report_load_n_noabort(unsigned long addr, size_t size) 260 + { 261 + kasan_report(addr, size, false, _RET_IP_); 262 + } 263 + EXPORT_SYMBOL(__asan_report_load_n_noabort); 264 + 265 + void __asan_report_store_n_noabort(unsigned long addr, size_t size) 266 + { 267 + kasan_report(addr, size, true, _RET_IP_); 268 + } 269 + EXPORT_SYMBOL(__asan_report_store_n_noabort);
+6
mm/kmemleak.c
··· 98 98 #include <asm/processor.h> 99 99 #include <linux/atomic.h> 100 100 101 + #include <linux/kasan.h> 101 102 #include <linux/kmemcheck.h> 102 103 #include <linux/kmemleak.h> 103 104 #include <linux/memory_hotplug.h> ··· 1114 1113 if (!kmemcheck_is_obj_initialized(object->pointer, object->size)) 1115 1114 return false; 1116 1115 1116 + kasan_disable_current(); 1117 1117 object->checksum = crc32(0, (void *)object->pointer, object->size); 1118 + kasan_enable_current(); 1119 + 1118 1120 return object->checksum != old_csum; 1119 1121 } 1120 1122 ··· 1168 1164 BYTES_PER_POINTER)) 1169 1165 continue; 1170 1166 1167 + kasan_disable_current(); 1171 1168 pointer = *ptr; 1169 + kasan_enable_current(); 1172 1170 1173 1171 object = find_and_get_object(pointer, 1); 1174 1172 if (!object)
+3 -4
mm/mempolicy.c
··· 2817 2817 p += snprintf(p, buffer + maxlen - p, "relative"); 2818 2818 } 2819 2819 2820 - if (!nodes_empty(nodes)) { 2821 - p += snprintf(p, buffer + maxlen - p, ":"); 2822 - p += nodelist_scnprintf(p, buffer + maxlen - p, nodes); 2823 - } 2820 + if (!nodes_empty(nodes)) 2821 + p += scnprintf(p, buffer + maxlen - p, ":%*pbl", 2822 + nodemask_pr_args(&nodes)); 2824 2823 }
+3
mm/page_alloc.c
··· 25 25 #include <linux/compiler.h> 26 26 #include <linux/kernel.h> 27 27 #include <linux/kmemcheck.h> 28 + #include <linux/kasan.h> 28 29 #include <linux/module.h> 29 30 #include <linux/suspend.h> 30 31 #include <linux/pagevec.h> ··· 788 787 789 788 trace_mm_page_free(page, order); 790 789 kmemcheck_free_shadow(page, order); 790 + kasan_free_pages(page, order); 791 791 792 792 if (PageAnon(page)) 793 793 page->mapping = NULL; ··· 972 970 973 971 arch_alloc_page(page, order); 974 972 kernel_map_pages(page, 1 << order, 1); 973 + kasan_alloc_pages(page, order); 975 974 976 975 if (gfp_flags & __GFP_ZERO) 977 976 prep_zero_page(page, order, gfp_flags);
+2 -4
mm/percpu.c
··· 1528 1528 int __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai, 1529 1529 void *base_addr) 1530 1530 { 1531 - static char cpus_buf[4096] __initdata; 1532 1531 static int smap[PERCPU_DYNAMIC_EARLY_SLOTS] __initdata; 1533 1532 static int dmap[PERCPU_DYNAMIC_EARLY_SLOTS] __initdata; 1534 1533 size_t dyn_size = ai->dyn_size; ··· 1540 1541 int *unit_map; 1541 1542 int group, unit, i; 1542 1543 1543 - cpumask_scnprintf(cpus_buf, sizeof(cpus_buf), cpu_possible_mask); 1544 - 1545 1544 #define PCPU_SETUP_BUG_ON(cond) do { \ 1546 1545 if (unlikely(cond)) { \ 1547 1546 pr_emerg("PERCPU: failed to initialize, %s", #cond); \ 1548 - pr_emerg("PERCPU: cpu_possible_mask=%s\n", cpus_buf); \ 1547 + pr_emerg("PERCPU: cpu_possible_mask=%*pb\n", \ 1548 + cpumask_pr_args(cpu_possible_mask)); \ 1549 1549 pcpu_dump_alloc_info(KERN_EMERG, ai); \ 1550 1550 BUG(); \ 1551 1551 } \
+10 -7
mm/slab_common.c
··· 295 295 } 296 296 297 297 static struct kmem_cache * 298 - do_kmem_cache_create(char *name, size_t object_size, size_t size, size_t align, 299 - unsigned long flags, void (*ctor)(void *), 298 + do_kmem_cache_create(const char *name, size_t object_size, size_t size, 299 + size_t align, unsigned long flags, void (*ctor)(void *), 300 300 struct mem_cgroup *memcg, struct kmem_cache *root_cache) 301 301 { 302 302 struct kmem_cache *s; ··· 363 363 unsigned long flags, void (*ctor)(void *)) 364 364 { 365 365 struct kmem_cache *s; 366 - char *cache_name; 366 + const char *cache_name; 367 367 int err; 368 368 369 369 get_online_cpus(); ··· 390 390 if (s) 391 391 goto out_unlock; 392 392 393 - cache_name = kstrdup(name, GFP_KERNEL); 393 + cache_name = kstrdup_const(name, GFP_KERNEL); 394 394 if (!cache_name) { 395 395 err = -ENOMEM; 396 396 goto out_unlock; ··· 401 401 flags, ctor, NULL, NULL); 402 402 if (IS_ERR(s)) { 403 403 err = PTR_ERR(s); 404 - kfree(cache_name); 404 + kfree_const(cache_name); 405 405 } 406 406 407 407 out_unlock: ··· 607 607 void slab_kmem_cache_release(struct kmem_cache *s) 608 608 { 609 609 destroy_memcg_params(s); 610 - kfree(s->name); 610 + kfree_const(s->name); 611 611 kmem_cache_free(kmem_cache, s); 612 612 } 613 613 ··· 898 898 page = alloc_kmem_pages(flags, order); 899 899 ret = page ? page_address(page) : NULL; 900 900 kmemleak_alloc(ret, size, 1, flags); 901 + kasan_kmalloc_large(ret, size); 901 902 return ret; 902 903 } 903 904 EXPORT_SYMBOL(kmalloc_order); ··· 1078 1077 if (p) 1079 1078 ks = ksize(p); 1080 1079 1081 - if (ks >= new_size) 1080 + if (ks >= new_size) { 1081 + kasan_krealloc((void *)p, new_size); 1082 1082 return (void *)p; 1083 + } 1083 1084 1084 1085 ret = kmalloc_track_caller(new_size, flags); 1085 1086 if (ret && p)
+63 -15
mm/slub.c
··· 20 20 #include <linux/proc_fs.h> 21 21 #include <linux/notifier.h> 22 22 #include <linux/seq_file.h> 23 + #include <linux/kasan.h> 23 24 #include <linux/kmemcheck.h> 24 25 #include <linux/cpu.h> 25 26 #include <linux/cpuset.h> ··· 469 468 static int disable_higher_order_debug; 470 469 471 470 /* 471 + * slub is about to manipulate internal object metadata. This memory lies 472 + * outside the range of the allocated object, so accessing it would normally 473 + * be reported by kasan as a bounds error. metadata_access_enable() is used 474 + * to tell kasan that these accesses are OK. 475 + */ 476 + static inline void metadata_access_enable(void) 477 + { 478 + kasan_disable_current(); 479 + } 480 + 481 + static inline void metadata_access_disable(void) 482 + { 483 + kasan_enable_current(); 484 + } 485 + 486 + /* 472 487 * Object debugging 473 488 */ 474 489 static void print_section(char *text, u8 *addr, unsigned int length) 475 490 { 491 + metadata_access_enable(); 476 492 print_hex_dump(KERN_ERR, text, DUMP_PREFIX_ADDRESS, 16, 1, addr, 477 493 length, 1); 494 + metadata_access_disable(); 478 495 } 479 496 480 497 static struct track *get_track(struct kmem_cache *s, void *object, ··· 522 503 trace.max_entries = TRACK_ADDRS_COUNT; 523 504 trace.entries = p->addrs; 524 505 trace.skip = 3; 506 + metadata_access_enable(); 525 507 save_stack_trace(&trace); 508 + metadata_access_disable(); 526 509 527 510 /* See rant in lockdep.c */ 528 511 if (trace.nr_entries != 0 && ··· 650 629 dump_stack(); 651 630 } 652 631 653 - static void object_err(struct kmem_cache *s, struct page *page, 632 + void object_err(struct kmem_cache *s, struct page *page, 654 633 u8 *object, char *reason) 655 634 { 656 635 slab_bug(s, "%s", reason); ··· 698 677 u8 *fault; 699 678 u8 *end; 700 679 680 + metadata_access_enable(); 701 681 fault = memchr_inv(start, value, bytes); 682 + metadata_access_disable(); 702 683 if (!fault) 703 684 return 1; 704 685 ··· 793 770 if (!remainder) 794 771 return 1; 
795 772 773 + metadata_access_enable(); 796 774 fault = memchr_inv(end - remainder, POISON_INUSE, remainder); 775 + metadata_access_disable(); 797 776 if (!fault) 798 777 return 1; 799 778 while (end > fault && end[-1] == POISON_INUSE) ··· 1251 1226 static inline void kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags) 1252 1227 { 1253 1228 kmemleak_alloc(ptr, size, 1, flags); 1229 + kasan_kmalloc_large(ptr, size); 1254 1230 } 1255 1231 1256 1232 static inline void kfree_hook(const void *x) 1257 1233 { 1258 1234 kmemleak_free(x); 1235 + kasan_kfree_large(x); 1259 1236 } 1260 1237 1261 1238 static inline struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s, ··· 1280 1253 kmemcheck_slab_alloc(s, flags, object, slab_ksize(s)); 1281 1254 kmemleak_alloc_recursive(object, s->object_size, 1, s->flags, flags); 1282 1255 memcg_kmem_put_cache(s); 1256 + kasan_slab_alloc(s, object); 1283 1257 } 1284 1258 1285 1259 static inline void slab_free_hook(struct kmem_cache *s, void *x) ··· 1304 1276 #endif 1305 1277 if (!(s->flags & SLAB_DEBUG_OBJECTS)) 1306 1278 debug_check_no_obj_freed(x, s->object_size); 1279 + 1280 + kasan_slab_free(s, x); 1307 1281 } 1308 1282 1309 1283 /* ··· 1400 1370 void *object) 1401 1371 { 1402 1372 setup_object_debug(s, page, object); 1403 - if (unlikely(s->ctor)) 1373 + if (unlikely(s->ctor)) { 1374 + kasan_unpoison_object_data(s, object); 1404 1375 s->ctor(object); 1376 + kasan_poison_object_data(s, object); 1377 + } 1405 1378 } 1406 1379 1407 1380 static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node) ··· 1436 1403 1437 1404 if (unlikely(s->flags & SLAB_POISON)) 1438 1405 memset(start, POISON_INUSE, PAGE_SIZE << order); 1406 + 1407 + kasan_poison_slab(page); 1439 1408 1440 1409 for_each_object_idx(p, idx, s, start, page->objects) { 1441 1410 setup_object(s, page, p); ··· 2532 2497 { 2533 2498 void *ret = slab_alloc(s, gfpflags, _RET_IP_); 2534 2499 trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags); 2500 + 
kasan_kmalloc(s, ret, size); 2535 2501 return ret; 2536 2502 } 2537 2503 EXPORT_SYMBOL(kmem_cache_alloc_trace); ··· 2559 2523 2560 2524 trace_kmalloc_node(_RET_IP_, ret, 2561 2525 size, s->size, gfpflags, node); 2526 + 2527 + kasan_kmalloc(s, ret, size); 2562 2528 return ret; 2563 2529 } 2564 2530 EXPORT_SYMBOL(kmem_cache_alloc_node_trace); ··· 2946 2908 init_object(kmem_cache_node, n, SLUB_RED_ACTIVE); 2947 2909 init_tracking(kmem_cache_node, n); 2948 2910 #endif 2911 + kasan_kmalloc(kmem_cache_node, n, sizeof(struct kmem_cache_node)); 2949 2912 init_kmem_cache_node(n); 2950 2913 inc_slabs_node(kmem_cache_node, node, page->objects); 2951 2914 ··· 3319 3280 3320 3281 trace_kmalloc(_RET_IP_, ret, size, s->size, flags); 3321 3282 3283 + kasan_kmalloc(s, ret, size); 3284 + 3322 3285 return ret; 3323 3286 } 3324 3287 EXPORT_SYMBOL(__kmalloc); ··· 3364 3323 3365 3324 trace_kmalloc_node(_RET_IP_, ret, size, s->size, flags, node); 3366 3325 3326 + kasan_kmalloc(s, ret, size); 3327 + 3367 3328 return ret; 3368 3329 } 3369 3330 EXPORT_SYMBOL(__kmalloc_node); 3370 3331 #endif 3371 3332 3372 - size_t ksize(const void *object) 3333 + static size_t __ksize(const void *object) 3373 3334 { 3374 3335 struct page *page; 3375 3336 ··· 3386 3343 } 3387 3344 3388 3345 return slab_ksize(page->slab_cache); 3346 + } 3347 + 3348 + size_t ksize(const void *object) 3349 + { 3350 + size_t size = __ksize(object); 3351 + /* We assume that ksize callers could use the whole allocated area, 3352 + so we need to unpoison this area. 
*/ 3353 + kasan_krealloc(object, size); 3354 + return size; 3389 3355 } 3390 3356 EXPORT_SYMBOL(ksize); 3391 3357 ··· 4160 4108 4161 4109 if (num_online_cpus() > 1 && 4162 4110 !cpumask_empty(to_cpumask(l->cpus)) && 4163 - len < PAGE_SIZE - 60) { 4164 - len += sprintf(buf + len, " cpus="); 4165 - len += cpulist_scnprintf(buf + len, 4166 - PAGE_SIZE - len - 50, 4167 - to_cpumask(l->cpus)); 4168 - } 4111 + len < PAGE_SIZE - 60) 4112 + len += scnprintf(buf + len, PAGE_SIZE - len - 50, 4113 + " cpus=%*pbl", 4114 + cpumask_pr_args(to_cpumask(l->cpus))); 4169 4115 4170 4116 if (nr_online_nodes > 1 && !nodes_empty(l->nodes) && 4171 - len < PAGE_SIZE - 60) { 4172 - len += sprintf(buf + len, " nodes="); 4173 - len += nodelist_scnprintf(buf + len, 4174 - PAGE_SIZE - len - 50, 4175 - l->nodes); 4176 - } 4117 + len < PAGE_SIZE - 60) 4118 + len += scnprintf(buf + len, PAGE_SIZE - len - 50, 4119 + " nodes=%*pbl", 4120 + nodemask_pr_args(&l->nodes)); 4177 4121 4178 4122 len += sprintf(buf + len, "\n"); 4179 4123 }
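The metadata_access_enable()/metadata_access_disable() pair added to mm/slub.c above wraps kasan_disable_current()/kasan_enable_current(), which bump a per-task depth counter so KASAN stays quiet while slub inspects its own redzones, padding, and tracking data. A minimal userspace sketch of that nesting discipline, with all names hypothetical stand-ins rather than the kernel API:

```c
#include <assert.h>

/* Hypothetical stand-in for the per-task current->kasan_depth counter:
 * any non-zero depth suppresses reports, and enable/disable calls nest. */
static __thread int kasan_depth;

static void kasan_disable_current(void) { kasan_depth++; }
static void kasan_enable_current(void)  { kasan_depth--; }

/* slub is about to touch object metadata outside the allocated object. */
static void metadata_access_enable(void)  { kasan_disable_current(); }
static void metadata_access_disable(void) { kasan_enable_current(); }

/* Reports fire only when no caller has suppression active. */
static int kasan_reports_enabled(void) { return kasan_depth == 0; }
```

Because the counter nests, a metadata window opened inside another (e.g. print_section() called while check_bytes_and_report() already disabled reporting) does not prematurely re-enable reports.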
+38
mm/util.c
··· 12 12 #include <linux/hugetlb.h> 13 13 #include <linux/vmalloc.h> 14 14 15 + #include <asm/sections.h> 15 16 #include <asm/uaccess.h> 16 17 17 18 #include "internal.h" 19 + 20 + static inline int is_kernel_rodata(unsigned long addr) 21 + { 22 + return addr >= (unsigned long)__start_rodata && 23 + addr < (unsigned long)__end_rodata; 24 + } 25 + 26 + /** 27 + * kfree_const - conditionally free memory 28 + * @x: pointer to the memory 29 + * 30 + * Function calls kfree only if @x is not in .rodata section. 31 + */ 32 + void kfree_const(const void *x) 33 + { 34 + if (!is_kernel_rodata((unsigned long)x)) 35 + kfree(x); 36 + } 37 + EXPORT_SYMBOL(kfree_const); 18 38 19 39 /** 20 40 * kstrdup - allocate space for and copy an existing string ··· 56 36 return buf; 57 37 } 58 38 EXPORT_SYMBOL(kstrdup); 39 + 40 + /** 41 + * kstrdup_const - conditionally duplicate an existing const string 42 + * @s: the string to duplicate 43 + * @gfp: the GFP mask used in the kmalloc() call when allocating memory 44 + * 45 + * Function returns source string if it is in .rodata section otherwise it 46 + * fallbacks to kstrdup. 47 + * Strings allocated by kstrdup_const should be freed by kfree_const. 48 + */ 49 + const char *kstrdup_const(const char *s, gfp_t gfp) 50 + { 51 + if (is_kernel_rodata((unsigned long)s)) 52 + return s; 53 + 54 + return kstrdup(s, gfp); 55 + } 56 + EXPORT_SYMBOL(kstrdup_const); 59 57 60 58 /** 61 59 * kstrndup - allocate space for and copy an existing string
+8 -8
mm/vmalloc.c
··· 1324 1324 if (unlikely(!area)) 1325 1325 return NULL; 1326 1326 1327 - /* 1328 - * We always allocate a guard page. 1329 - */ 1330 - size += PAGE_SIZE; 1327 + if (!(flags & VM_NO_GUARD)) 1328 + size += PAGE_SIZE; 1331 1329 1332 1330 va = alloc_vmap_area(size, align, start, end, node, gfp_mask); 1333 1331 if (IS_ERR(va)) { ··· 1619 1621 * @end: vm area range end 1620 1622 * @gfp_mask: flags for the page level allocator 1621 1623 * @prot: protection mask for the allocated pages 1624 + * @vm_flags: additional vm area flags (e.g. %VM_NO_GUARD) 1622 1625 * @node: node to use for allocation or NUMA_NO_NODE 1623 1626 * @caller: caller's return address 1624 1627 * ··· 1629 1630 */ 1630 1631 void *__vmalloc_node_range(unsigned long size, unsigned long align, 1631 1632 unsigned long start, unsigned long end, gfp_t gfp_mask, 1632 - pgprot_t prot, int node, const void *caller) 1633 + pgprot_t prot, unsigned long vm_flags, int node, 1634 + const void *caller) 1633 1635 { 1634 1636 struct vm_struct *area; 1635 1637 void *addr; ··· 1640 1640 if (!size || (size >> PAGE_SHIFT) > totalram_pages) 1641 1641 goto fail; 1642 1642 1643 - area = __get_vm_area_node(size, align, VM_ALLOC | VM_UNINITIALIZED, 1644 - start, end, node, gfp_mask, caller); 1643 + area = __get_vm_area_node(size, align, VM_ALLOC | VM_UNINITIALIZED | 1644 + vm_flags, start, end, node, gfp_mask, caller); 1645 1645 if (!area) 1646 1646 goto fail; 1647 1647 ··· 1690 1690 int node, const void *caller) 1691 1691 { 1692 1692 return __vmalloc_node_range(size, align, VMALLOC_START, VMALLOC_END, 1693 - gfp_mask, prot, node, caller); 1693 + gfp_mask, prot, 0, node, caller); 1694 1694 } 1695 1695 1696 1696 void *__vmalloc(unsigned long size, gfp_t gfp_mask, pgprot_t prot)
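The mm/vmalloc.c change above makes the trailing guard page optional: __get_vm_area_node() now pads the request by one page only when VM_NO_GUARD is absent, which is what lets KASAN map its shadow region without the extra hole. The size accounting reduces to the following sketch (constants are illustrative, not the kernel's definitions):

```c
#include <assert.h>

#define SK_PAGE_SIZE   4096UL
#define SK_VM_NO_GUARD 0x40UL	/* illustrative flag bit */

/* Pad the mapping by one guard page unless the caller opted out. */
static unsigned long vm_area_size(unsigned long size, unsigned long flags)
{
	if (!(flags & SK_VM_NO_GUARD))
		size += SK_PAGE_SIZE;
	return size;
}
```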
+7 -21
net/core/net-sysfs.c
··· 614 614 { 615 615 struct rps_map *map; 616 616 cpumask_var_t mask; 617 - size_t len = 0; 618 - int i; 617 + int i, len; 619 618 620 619 if (!zalloc_cpumask_var(&mask, GFP_KERNEL)) 621 620 return -ENOMEM; ··· 625 626 for (i = 0; i < map->len; i++) 626 627 cpumask_set_cpu(map->cpus[i], mask); 627 628 628 - len += cpumask_scnprintf(buf + len, PAGE_SIZE, mask); 629 - if (PAGE_SIZE - len < 3) { 630 - rcu_read_unlock(); 631 - free_cpumask_var(mask); 632 - return -EINVAL; 633 - } 629 + len = snprintf(buf, PAGE_SIZE, "%*pb\n", cpumask_pr_args(mask)); 634 630 rcu_read_unlock(); 635 - 636 631 free_cpumask_var(mask); 637 - len += sprintf(buf + len, "\n"); 638 - return len; 632 + 633 + return len < PAGE_SIZE ? len : -EINVAL; 639 634 } 640 635 641 636 static ssize_t store_rps_map(struct netdev_rx_queue *queue, ··· 1083 1090 struct xps_dev_maps *dev_maps; 1084 1091 cpumask_var_t mask; 1085 1092 unsigned long index; 1086 - size_t len = 0; 1087 - int i; 1093 + int i, len; 1088 1094 1089 1095 if (!zalloc_cpumask_var(&mask, GFP_KERNEL)) 1090 1096 return -ENOMEM; ··· 1109 1117 } 1110 1118 rcu_read_unlock(); 1111 1119 1112 - len += cpumask_scnprintf(buf + len, PAGE_SIZE, mask); 1113 - if (PAGE_SIZE - len < 3) { 1114 - free_cpumask_var(mask); 1115 - return -EINVAL; 1116 - } 1117 - 1120 + len = snprintf(buf, PAGE_SIZE, "%*pb\n", cpumask_pr_args(mask)); 1118 1121 free_cpumask_var(mask); 1119 - len += sprintf(buf + len, "\n"); 1120 - return len; 1122 + return len < PAGE_SIZE ? len : -EINVAL; 1121 1123 } 1122 1124 1123 1125 static ssize_t store_xps_map(struct netdev_queue *queue,
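The rewritten show_rps_map()/show_xps_map() lean on two things: the kernel's %*pb/%*pbl printf extensions, which format a cpumask in one call, and snprintf()'s return value, which reports the length the untruncated output would have needed, so a single len < PAGE_SIZE check replaces the old incremental bookkeeping. The truncation idiom ports to userspace as follows (plain %s stands in for the kernel-only %*pb, and a small buffer for PAGE_SIZE):

```c
#include <assert.h>
#include <stdio.h>

#define SK_BUF_SIZE 16		/* stands in for PAGE_SIZE */
#define SK_EINVAL   22

/* Format a mask string plus newline; mirror the kernel's
 * "len < PAGE_SIZE ? len : -EINVAL" truncation check. */
static int show_mask(char *buf, const char *mask)
{
	int len = snprintf(buf, SK_BUF_SIZE, "%s\n", mask);

	return len < SK_BUF_SIZE ? len : -SK_EINVAL;
}
```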
+1 -1
net/core/sysctl_net_core.c
··· 155 155 rcu_read_unlock(); 156 156 157 157 len = min(sizeof(kbuf) - 1, *lenp); 158 - len = cpumask_scnprintf(kbuf, len, mask); 158 + len = scnprintf(kbuf, len, "%*pb", cpumask_pr_args(mask)); 159 159 if (!len) { 160 160 *lenp = 0; 161 161 goto done;
+25
scripts/Makefile.kasan
··· 1 + ifdef CONFIG_KASAN 2 + ifdef CONFIG_KASAN_INLINE 3 + call_threshold := 10000 4 + else 5 + call_threshold := 0 6 + endif 7 + 8 + CFLAGS_KASAN_MINIMAL := -fsanitize=kernel-address 9 + 10 + CFLAGS_KASAN := $(call cc-option, -fsanitize=kernel-address \ 11 + -fasan-shadow-offset=$(CONFIG_KASAN_SHADOW_OFFSET) \ 12 + --param asan-stack=1 --param asan-globals=1 \ 13 + --param asan-instrumentation-with-call-threshold=$(call_threshold)) 14 + 15 + ifeq ($(call cc-option, $(CFLAGS_KASAN_MINIMAL) -Werror),) 16 + $(warning Cannot use CONFIG_KASAN: \ 17 + -fsanitize=kernel-address is not supported by compiler) 18 + else 19 + ifeq ($(CFLAGS_KASAN),) 20 + $(warning CONFIG_KASAN: compiler does not support all options.\ 21 + Trying minimal configuration) 22 + CFLAGS_KASAN := $(CFLAGS_KASAN_MINIMAL) 23 + endif 24 + endif 25 + endif
+10
scripts/Makefile.lib
··· 119 119 $(CFLAGS_GCOV)) 120 120 endif 121 121 122 + # 123 + # Enable address sanitizer flags for kernel except some files or directories 124 + # we don't want to check (depends on variables KASAN_SANITIZE_obj.o, KASAN_SANITIZE) 125 + # 126 + ifeq ($(CONFIG_KASAN),y) 127 + _c_flags += $(if $(patsubst n%,, \ 128 + $(KASAN_SANITIZE_$(basetarget).o)$(KASAN_SANITIZE)y), \ 129 + $(CFLAGS_KASAN)) 130 + endif 131 + 122 132 # If building the kernel in a separate objtree expand all occurrences 123 133 # of -Idir to -I$(srctree)/dir except for absolute paths (starting with '/'). 124 134
+121 -26
scripts/checkpatch.pl
··· 278 278 __noreturn| 279 279 __used| 280 280 __cold| 281 + __pure| 281 282 __noclone| 282 283 __deprecated| 283 284 __read_mostly| ··· 299 298 our $Hex = qr{(?i)0x[0-9a-f]+$Int_type?}; 300 299 our $Int = qr{[0-9]+$Int_type?}; 301 300 our $Octal = qr{0[0-7]+$Int_type?}; 301 + our $String = qr{"[X\t]*"}; 302 302 our $Float_hex = qr{(?i)0x[0-9a-f]+p-?[0-9]+[fl]?}; 303 303 our $Float_dec = qr{(?i)(?:[0-9]+\.[0-9]*|[0-9]*\.[0-9]+)(?:e-?[0-9]+)?[fl]?}; 304 304 our $Float_int = qr{(?i)[0-9]+e-?[0-9]+[fl]?}; ··· 338 336 [\x09\x0A\x0D\x20-\x7E] # ASCII 339 337 | $NON_ASCII_UTF8 340 338 }x; 339 + 340 + our $typeOtherOSTypedefs = qr{(?x: 341 + u_(?:char|short|int|long) | # bsd 342 + u(?:nchar|short|int|long) # sysv 343 + )}; 341 344 342 345 our $typeTypedefs = qr{(?x: 343 346 (?:__)?(?:u|s|be|le)(?:8|16|32|64)| ··· 480 473 (?:$Modifier\s+|const\s+)* 481 474 (?: 482 475 (?:typeof|__typeof__)\s*\([^\)]*\)| 476 + (?:$typeOtherOSTypedefs\b)| 483 477 (?:$typeTypedefs\b)| 484 478 (?:${all}\b) 485 479 ) ··· 498 490 (?: 499 491 (?:typeof|__typeof__)\s*\([^\)]*\)| 500 492 (?:$typeTypedefs\b)| 493 + (?:$typeOtherOSTypedefs\b)| 501 494 (?:${allWithAttr}\b) 502 495 ) 503 496 (?:\s+$Modifier|\s+const)* ··· 526 517 527 518 our $balanced_parens = qr/(\((?:[^\(\)]++|(?-1))*\))/; 528 519 our $LvalOrFunc = qr{((?:[\&\*]\s*)?$Lval)\s*($balanced_parens{0,1})\s*}; 529 - our $FuncArg = qr{$Typecast{0,1}($LvalOrFunc|$Constant)}; 520 + our $FuncArg = qr{$Typecast{0,1}($LvalOrFunc|$Constant|$String)}; 530 521 531 522 our $declaration_macros = qr{(?x: 532 523 (?:$Storage\s+)?(?:[A-Z_][A-Z0-9]*_){0,2}(?:DEFINE|DECLARE)(?:_[A-Z0-9]+){1,2}\s*\(| ··· 640 631 my $output = `git log --no-color --format='%H %s' -1 $commit 2>&1`; 641 632 $output =~ s/^\s*//gm; 642 633 my @lines = split("\n", $output); 634 + 635 + return ($id, $desc) if ($#lines < 0); 643 636 644 637 if ($lines[0] =~ /^error: short SHA1 $commit is ambiguous\./) { 645 638 # Maybe one day convert this block of bash into something that returns 
··· 2170 2159 } 2171 2160 } 2172 2161 2162 + # Check email subject for common tools that don't need to be mentioned 2163 + if ($in_header_lines && 2164 + $line =~ /^Subject:.*\b(?:checkpatch|sparse|smatch)\b[^:]/i) { 2165 + WARN("EMAIL_SUBJECT", 2166 + "A patch subject line should describe the change not the tool that found it\n" . $herecurr); 2167 + } 2168 + 2173 2169 # Check for old stable address 2174 2170 if ($line =~ /^\s*cc:\s*.*<?\bstable\@kernel\.org\b>?.*$/i) { 2175 2171 ERROR("STABLE_ADDRESS", ··· 2189 2171 "Remove Gerrit Change-Id's before submitting upstream.\n" . $herecurr); 2190 2172 } 2191 2173 2192 - # Check for improperly formed commit descriptions 2193 - if ($in_commit_log && 2194 - $line =~ /\bcommit\s+[0-9a-f]{5,}/i && 2195 - !($line =~ /\b[Cc]ommit [0-9a-f]{12,40} \("/ || 2196 - ($line =~ /\b[Cc]ommit [0-9a-f]{12,40}\s*$/ && 2197 - defined $rawlines[$linenr] && 2198 - $rawlines[$linenr] =~ /^\s*\("/))) { 2199 - $line =~ /\b(c)ommit\s+([0-9a-f]{5,})/i; 2174 + # Check for git id commit length and improperly formed commit descriptions 2175 + if ($in_commit_log && $line =~ /\b(c)ommit\s+([0-9a-f]{5,})/i) { 2200 2176 my $init_char = $1; 2201 2177 my $orig_commit = lc($2); 2202 - my $id = '01234567890ab'; 2203 - my $desc = 'commit description'; 2204 - ($id, $desc) = git_commit_info($orig_commit, $id, $desc); 2205 - ERROR("GIT_COMMIT_ID", 2206 - "Please use 12 or more chars for the git commit ID like: '${init_char}ommit $id (\"$desc\")'\n" . 
$herecurr); 2178 + my $short = 1; 2179 + my $long = 0; 2180 + my $case = 1; 2181 + my $space = 1; 2182 + my $hasdesc = 0; 2183 + my $hasparens = 0; 2184 + my $id = '0123456789ab'; 2185 + my $orig_desc = "commit description"; 2186 + my $description = ""; 2187 + 2188 + $short = 0 if ($line =~ /\bcommit\s+[0-9a-f]{12,40}/i); 2189 + $long = 1 if ($line =~ /\bcommit\s+[0-9a-f]{41,}/i); 2190 + $space = 0 if ($line =~ /\bcommit [0-9a-f]/i); 2191 + $case = 0 if ($line =~ /\b[Cc]ommit\s+[0-9a-f]{5,40}[^A-F]/); 2192 + if ($line =~ /\bcommit\s+[0-9a-f]{5,}\s+\("([^"]+)"\)/i) { 2193 + $orig_desc = $1; 2194 + $hasparens = 1; 2195 + } elsif ($line =~ /\bcommit\s+[0-9a-f]{5,}\s*$/i && 2196 + defined $rawlines[$linenr] && 2197 + $rawlines[$linenr] =~ /^\s*\("([^"]+)"\)/) { 2198 + $orig_desc = $1; 2199 + $hasparens = 1; 2200 + } elsif ($line =~ /\bcommit\s+[0-9a-f]{5,}\s+\("[^"]+$/i && 2201 + defined $rawlines[$linenr] && 2202 + $rawlines[$linenr] =~ /^\s*[^"]+"\)/) { 2203 + $line =~ /\bcommit\s+[0-9a-f]{5,}\s+\("([^"]+)$/i; 2204 + $orig_desc = $1; 2205 + $rawlines[$linenr] =~ /^\s*([^"]+)"\)/; 2206 + $orig_desc .= " " . $1; 2207 + $hasparens = 1; 2208 + } 2209 + 2210 + ($id, $description) = git_commit_info($orig_commit, 2211 + $id, $orig_desc); 2212 + 2213 + if ($short || $long || $space || $case || ($orig_desc ne $description) || !$hasparens) { 2214 + ERROR("GIT_COMMIT_ID", 2215 + "Please use git commit description style 'commit <12+ chars of sha1> (\"<title line>\")' - ie: '${init_char}ommit $id (\"$description\")'\n" . $herecurr); 2216 + } 2207 2217 } 2208 2218 2209 2219 # Check for added, moved or deleted files ··· 2401 2355 "Use of CONFIG_EXPERIMENTAL is deprecated. 
For alternatives, see https://lkml.org/lkml/2012/10/23/580\n"); 2402 2356 } 2403 2357 2358 + # discourage the use of boolean for type definition attributes of Kconfig options 2359 + if ($realfile =~ /Kconfig/ && 2360 + $line =~ /^\+\s*\bboolean\b/) { 2361 + WARN("CONFIG_TYPE_BOOLEAN", 2362 + "Use of boolean is deprecated, please use bool instead.\n" . $herecurr); 2363 + } 2364 + 2404 2365 if (($realfile =~ /Makefile.*/ || $realfile =~ /Kbuild.*/) && 2405 2366 ($line =~ /\+(EXTRA_[A-Z]+FLAGS).*/)) { 2406 2367 my $flag = $1; ··· 2552 2499 } 2553 2500 } 2554 2501 2555 - if ($line =~ /^\+.*(\w+\s*)?\(\s*$Type\s*\)[ \t]+(?!$Assignment|$Arithmetic|[,;\({\[\<\>])/ && 2502 + if ($line =~ /^\+.*(\w+\s*)?\(\s*$Type\s*\)[ \t]+(?!$Assignment|$Arithmetic|[,;:\?\(\{\}\[\<\>]|&&|\|\||\\$)/ && 2556 2503 (!defined($1) || $1 !~ /sizeof\s*/)) { 2557 2504 if (CHK("SPACING", 2558 2505 "No space is necessary after a cast\n" . $herecurr) && ··· 3177 3124 $line !~ /\btypedef\s+$Type\s*\(\s*\*?$Ident\s*\)\s*\(/ && 3178 3125 $line !~ /\btypedef\s+$Type\s+$Ident\s*\(/ && 3179 3126 $line !~ /\b$typeTypedefs\b/ && 3127 + $line !~ /\b$typeOtherOSTypedefs\b/ && 3180 3128 $line !~ /\b__bitwise(?:__|)\b/) { 3181 3129 WARN("NEW_TYPEDEFS", 3182 3130 "do not add new typedefs\n" . $herecurr); ··· 3254 3200 # check for uses of printk_ratelimit 3255 3201 if ($line =~ /\bprintk_ratelimit\s*\(/) { 3256 3202 WARN("PRINTK_RATELIMITED", 3257 - "Prefer printk_ratelimited or pr_<level>_ratelimited to printk_ratelimit\n" . $herecurr); 3203 + "Prefer printk_ratelimited or pr_<level>_ratelimited to printk_ratelimit\n" . $herecurr); 3258 3204 } 3259 3205 3260 3206 # printk should use KERN_* levels. Note that follow on printk's on the ··· 3700 3646 $op eq '*' or $op eq '/' or 3701 3647 $op eq '%') 3702 3648 { 3703 - if ($ctx =~ /Wx[^WCE]|[^WCE]xW/) { 3649 + if ($check) { 3650 + if (defined $fix_elements[$n + 2] && $ctx !~ /[EW]x[EW]/) { 3651 + if (CHK("SPACING", 3652 + "spaces preferred around that '$op' $at\n" . 
$hereptr)) { 3653 + $good = rtrim($fix_elements[$n]) . " " . trim($fix_elements[$n + 1]) . " "; 3654 + $fix_elements[$n + 2] =~ s/^\s+//; 3655 + $line_fixed = 1; 3656 + } 3657 + } elsif (!defined $fix_elements[$n + 2] && $ctx !~ /Wx[OE]/) { 3658 + if (CHK("SPACING", 3659 + "space preferred before that '$op' $at\n" . $hereptr)) { 3660 + $good = rtrim($fix_elements[$n]) . " " . trim($fix_elements[$n + 1]); 3661 + $line_fixed = 1; 3662 + } 3663 + } 3664 + } elsif ($ctx =~ /Wx[^WCE]|[^WCE]xW/) { 3704 3665 if (ERROR("SPACING", 3705 3666 "need consistent spacing around '$op' $at\n" . $hereptr)) { 3706 3667 $good = rtrim($fix_elements[$n]) . " " . trim($fix_elements[$n + 1]) . " "; ··· 4320 4251 $ctx = $dstat; 4321 4252 4322 4253 $dstat =~ s/\\\n.//g; 4254 + $dstat =~ s/$;/ /g; 4323 4255 4324 4256 if ($dstat =~ /^\+\s*#\s*define\s+$Ident\s*${balanced_parens}\s*do\s*{(.*)\s*}\s*while\s*\(\s*0\s*\)\s*([;\s]*)\s*$/) { 4325 4257 my $stmts = $2; ··· 4487 4417 4488 4418 # check for unnecessary blank lines around braces 4489 4419 if (($line =~ /^.\s*}\s*$/ && $prevrawline =~ /^.\s*$/)) { 4490 - CHK("BRACES", 4491 - "Blank lines aren't necessary before a close brace '}'\n" . $hereprev); 4420 + if (CHK("BRACES", 4421 + "Blank lines aren't necessary before a close brace '}'\n" . $hereprev) && 4422 + $fix && $prevrawline =~ /^\+/) { 4423 + fix_delete_line($fixlinenr - 1, $prevrawline); 4424 + } 4492 4425 } 4493 4426 if (($rawline =~ /^.\s*$/ && $prevline =~ /^..*{\s*$/)) { 4494 - CHK("BRACES", 4495 - "Blank lines aren't necessary after an open brace '{'\n" . $hereprev); 4427 + if (CHK("BRACES", 4428 + "Blank lines aren't necessary after an open brace '{'\n" . 
$hereprev) && 4429 + $fix) { 4430 + fix_delete_line($fixlinenr, $rawline); 4431 + } 4496 4432 } 4497 4433 4498 4434 # no volatiles please ··· 4621 4545 } 4622 4546 4623 4547 # check for logging functions with KERN_<LEVEL> 4624 - if ($line !~ /printk\s*\(/ && 4548 + if ($line !~ /printk(?:_ratelimited|_once)?\s*\(/ && 4625 4549 $line =~ /\b$logFunctions\s*\(.*\b(KERN_[A-Z]+)\b/) { 4626 4550 my $level = $1; 4627 4551 if (WARN("UNNECESSARY_KERN_LEVEL", ··· 4880 4804 # check for seq_printf uses that could be seq_puts 4881 4805 if ($sline =~ /\bseq_printf\s*\(.*"\s*\)\s*;\s*$/) { 4882 4806 my $fmt = get_quoted_string($line, $rawline); 4883 - if ($fmt ne "" && $fmt !~ /[^\\]\%/) { 4807 + $fmt =~ s/%%//g; 4808 + if ($fmt !~ /%/) { 4884 4809 if (WARN("PREFER_SEQ_PUTS", 4885 4810 "Prefer seq_puts to seq_printf\n" . $herecurr) && 4886 4811 $fix) { ··· 5166 5089 } 5167 5090 } 5168 5091 5092 + # check for uses of __DATE__, __TIME__, __TIMESTAMP__ 5093 + while ($line =~ /\b(__(?:DATE|TIME|TIMESTAMP)__)\b/g) { 5094 + ERROR("DATE_TIME", 5095 + "Use of the '$1' macro makes the build non-deterministic\n" . $herecurr); 5096 + } 5097 + 5169 5098 # check for use of yield() 5170 5099 if ($line =~ /\byield\s*\(\s*\)/) { 5171 5100 WARN("YIELD", ··· 5223 5140 "please use device_initcall() or more appropriate function instead of __initcall() (see include/linux/init.h)\n" . $herecurr); 5224 5141 } 5225 5142 5226 - # check for various ops structs, ensure they are const. 
5227 - my $struct_ops = qr{acpi_dock_ops| 5143 + # check for various structs that are normally const (ops, kgdb, device_tree) 5144 + my $const_structs = qr{ 5145 + acpi_dock_ops| 5228 5146 address_space_operations| 5229 5147 backlight_ops| 5230 5148 block_device_operations| ··· 5248 5164 mtrr_ops| 5249 5165 neigh_ops| 5250 5166 nlmsvc_binding| 5167 + of_device_id| 5251 5168 pci_raw_ops| 5252 5169 pipe_buf_operations| 5253 5170 platform_hibernation_ops| ··· 5264 5179 usb_mon_operations| 5265 5180 wd_ops}x; 5266 5181 if ($line !~ /\bconst\b/ && 5267 - $line =~ /\bstruct\s+($struct_ops)\b/) { 5182 + $line =~ /\bstruct\s+($const_structs)\b/) { 5268 5183 WARN("CONST_STRUCT", 5269 5184 "struct $1 should normally be const\n" . 5270 5185 $herecurr); ··· 5287 5202 if ($line =~ /\+\s*#\s*define\s+((?:__)?ARCH_(?:HAS|HAVE)\w*)\b/) { 5288 5203 ERROR("DEFINE_ARCH_HAS", 5289 5204 "#define of '$1' is wrong - use Kconfig variables or standard guards instead\n" . $herecurr); 5205 + } 5206 + 5207 + # likely/unlikely comparisons similar to "(likely(foo) > 0)" 5208 + if ($^V && $^V ge 5.10.0 && 5209 + $line =~ /\b((?:un)?likely)\s*\(\s*$FuncArg\s*\)\s*$Compare/) { 5210 + WARN("LIKELY_MISUSE", 5211 + "Using $1 should generally have parentheses around the comparison\n" . $herecurr); 5290 5212 } 5291 5213 5292 5214 # whine mightly about in_atomic ··· 5347 5255 length($val) ne 4)) { 5348 5256 ERROR("NON_OCTAL_PERMISSIONS", 5349 5257 "Use 4 digit octal (0777) not decimal permissions\n" . $herecurr); 5258 + } elsif ($val =~ /^$Octal$/ && (oct($val) & 02)) { 5259 + ERROR("EXPORTED_WORLD_WRITABLE", 5260 + "Exporting writable files is usually an error. Consider more restrictive permissions.\n" . $herecurr); 5350 5261 } 5351 5262 } 5352 5263 }
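The new EXPORTED_WORLD_WRITABLE check at the end of the checkpatch hunk tests oct($val) & 02, i.e. whether the others-write bit is set in an octal mode literal. The same predicate rendered in C (function name is illustrative):

```c
#include <assert.h>
#include <stdlib.h>

/* Flag file modes whose world-writable bit (octal 02) is set,
 * matching checkpatch's "oct($val) & 02" test. */
static int world_writable(const char *mode)
{
	return (strtoul(mode, NULL, 8) & 02) != 0;
}
```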
+3
scripts/module-common.lds
··· 16 16 __kcrctab_unused 0 : { *(SORT(___kcrctab_unused+*)) } 17 17 __kcrctab_unused_gpl 0 : { *(SORT(___kcrctab_unused_gpl+*)) } 18 18 __kcrctab_gpl_future 0 : { *(SORT(___kcrctab_gpl_future+*)) } 19 + 20 + . = ALIGN(8); 21 + .init_array 0 : { *(SORT(.init_array.*)) *(.init_array) } 19 22 }