Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'for-linus' of git://ftp.arm.linux.org.uk/~rmk/linux-arm

Pull ARM updates from Russell King:
"Another mixture of changes this time around:

- Split XIP linker file from main linker file to make it more
maintainable, and various XIP fixes, and clean up a resulting
macro.

- Decompressor cleanups from Masahiro Yamada

- Avoid printing an error for a missing L2 cache

- Remove some duplicated symbols in System.map, and move
vectors/stubs back into kernel VMA

- Various low priority fixes from Arnd

- Updates to allow bus match functions to return negative errno
values, touching some drivers and the driver core. Greg has acked
these changes.

- Virtualisation platform updates from Jean-Philippe Brucker.

- Security enhancements from Kees Cook

- Rework some Kconfig dependencies and move PSCI idle management code
out of arch/arm into drivers/firmware/psci.c

- ARM DMA mapping updates, touching media, acked by Mauro.

- Fix places in ARM code which should be using virt_to_idmap() so
that Keystone2 can work.

- Fix Marvell Tauros2 to work again with non-DT boots.

- Provide a delay timer for ARM Orion platforms"

* 'for-linus' of git://ftp.arm.linux.org.uk/~rmk/linux-arm: (45 commits)
ARM: 8546/1: dma-mapping: refactor to fix coherent+cma+gfp=0
ARM: 8547/1: dma-mapping: store buffer information
ARM: 8543/1: decompressor: rename suffix_y to compress-y
ARM: 8542/1: decompressor: merge piggy.*.S and simplify Makefile
ARM: 8541/1: decompressor: drop redundant FORCE in Makefile
ARM: 8540/1: decompressor: use clean-files instead of extra-y to clean files
ARM: 8539/1: decompressor: drop more unneeded assignments to "targets"
ARM: 8538/1: decompressor: drop unneeded assignments to "targets"
ARM: 8532/1: uncompress: mark putc as inline
ARM: 8531/1: turn init_new_context into an inline function
ARM: 8530/1: remove VIRT_TO_BUS
ARM: 8537/1: drop unused DEBUG_RODATA from XIP_KERNEL
ARM: 8536/1: mm: hide __start_rodata_section_aligned for non-debug builds
ARM: 8535/1: mm: DEBUG_RODATA makes no sense with XIP_KERNEL
ARM: 8534/1: virt: fix hyp-stub build for pre-ARMv7 CPUs
ARM: make the physical-relative calculation more obvious
ARM: 8512/1: proc-v7.S: Adjust stack address when XIP_KERNEL
ARM: 8411/1: Add default SPARSEMEM settings
ARM: 8503/1: clk_register_clkdev: remove format string interface
ARM: 8529/1: remove 'i' and 'zi' targets
...

+967 -375
+26
Documentation/DMA-attributes.txt
···
100 100  be mapped as contiguous chunk into device dma address space. By
101 101  specifying this attribute the allocated buffer is forced to be contiguous
102 102  also in physical memory.
103 +
104 + DMA_ATTR_ALLOC_SINGLE_PAGES
105 + ---------------------------
106 +
107 + This is a hint to the DMA-mapping subsystem that it's probably not worth
108 + the time to try to allocate memory to in a way that gives better TLB
109 + efficiency (AKA it's not worth trying to build the mapping out of larger
110 + pages).  You might want to specify this if:
111 + - You know that the accesses to this memory won't thrash the TLB.
112 +   You might know that the accesses are likely to be sequential or
113 +   that they aren't sequential but it's unlikely you'll ping-pong
114 +   between many addresses that are likely to be in different physical
115 +   pages.
116 + - You know that the penalty of TLB misses while accessing the
117 +   memory will be small enough to be inconsequential.  If you are
118 +   doing a heavy operation like decryption or decompression this
119 +   might be the case.
120 + - You know that the DMA mapping is fairly transitory.  If you expect
121 +   the mapping to have a short lifetime then it may be worth it to
122 +   optimize allocation (avoid coming up with large pages) instead of
123 +   getting the slight performance win of larger pages.
124 + Setting this hint doesn't guarantee that you won't get huge pages, but it
125 + means that we won't try quite as hard to get them.
126 +
127 + NOTE: At the moment DMA_ATTR_ALLOC_SINGLE_PAGES is only implemented on ARM,
128 + though ARM64 patches will likely be posted soon.
+4 -2
Documentation/driver-model/porting.txt
···
340 340
341 341  	int (*match)(struct device * dev, struct device_driver * drv);
342 342
343     - match should return '1' if the driver supports the device, and '0'
344     - otherwise.
    343 + match should return positive value if the driver supports the device,
    344 + and zero otherwise. It may also return error code (for example
    345 + -EPROBE_DEFER) if determining that given driver supports the device is
    346 + not possible.
345 347
346 348  When a device is registered, the bus's list of drivers is iterated
347 349  over. bus->match() is called for each one until a match is found.
+2 -3
arch/arm/Kconfig
···
572 572  	select NEED_MACH_IO_H
573 573  	select NEED_MACH_MEMORY_H
574 574  	select NO_IOPORT_MAP
575     - 	select VIRT_TO_BUS
576 575  	help
577 576  	  On the Acorn Risc-PC, Linux can support the internal IDE disk and
578 577  	  CD-ROM interface, serial and parallel port, and the floppy drive.
···
1335 1336  config BL_SWITCHER
1336 1337  	bool "big.LITTLE switcher support"
1337 1338  	depends on BIG_LITTLE && MCPM && HOTPLUG_CPU && ARM_GIC
1338      - 	select ARM_CPU_SUSPEND
1339 1339  	select CPU_PM
1340 1340  	help
1341 1341  	  The big.LITTLE "switcher" provides the core functionality to
···
2108 2110  	def_bool y
2109 2111
2110 2112  config ARM_CPU_SUSPEND
2111      - 	def_bool PM_SLEEP
     2113 + 	def_bool PM_SLEEP || BL_SWITCHER || ARM_PSCI_FW
     2114 + 	depends on ARCH_SUSPEND_POSSIBLE
2112 2115
2113 2116  config ARCH_HIBERNATION_POSSIBLE
2114 2117  	bool
-1
arch/arm/Makefile
···
352 352
353 353  # My testing targets (bypasses dependencies)
354 354  bp:;	$(Q)$(MAKE) $(build)=$(boot) MACHINE=$(MACHINE) $(boot)/bootpImage
355     - i zi:;	$(Q)$(MAKE) $(build)=$(boot) MACHINE=$(MACHINE) $@
356 355
357 356
358 357  define archhelp
+1 -9
arch/arm/boot/Makefile
···
88 88  	$(call if_changed,objcopy)
89 89  	@$(kecho) '  Kernel: $@ is ready'
90 90
91    - PHONY += initrd FORCE
   91 + PHONY += initrd
92 92  initrd:
93 93  	@test "$(INITRD_PHYS)" != "" || \
94 94  	(echo This machine does not support INITRD; exit -1)
···
106 106  uinstall:
107 107  	$(CONFIG_SHELL) $(srctree)/$(src)/install.sh "$(KERNELRELEASE)" \
108 108  	$(obj)/uImage System.map "$(INSTALL_PATH)"
109     -
110     - zi:
111     - 	$(CONFIG_SHELL) $(srctree)/$(src)/install.sh "$(KERNELRELEASE)" \
112     - 	$(obj)/zImage System.map "$(INSTALL_PATH)"
113     -
114     - i:
115     - 	$(CONFIG_SHELL) $(srctree)/$(src)/install.sh "$(KERNELRELEASE)" \
116     - 	$(obj)/Image System.map "$(INSTALL_PATH)"
117 109
118 110  subdir-	    := bootp compressed dts
+1 -5
arch/arm/boot/compressed/.gitignore
···
3 3   font.c
4 4   lib1funcs.S
5 5   hyp-stub.S
6   - piggy.gzip
7   - piggy.lzo
8   - piggy.lzma
9   - piggy.xzkern
10  - piggy.lz4
  6 + piggy_data
11 7  vmlinux
12 8  vmlinux.lds
13 9
+14 -17
arch/arm/boot/compressed/Makefile
···
66 66
67 67  CPPFLAGS_vmlinux.lds := -DTEXT_START="$(ZTEXTADDR)" -DBSS_START="$(ZBSSADDR)"
68 68
69    - suffix_$(CONFIG_KERNEL_GZIP) = gzip
70    - suffix_$(CONFIG_KERNEL_LZO)  = lzo
71    - suffix_$(CONFIG_KERNEL_LZMA) = lzma
72    - suffix_$(CONFIG_KERNEL_XZ)   = xzkern
73    - suffix_$(CONFIG_KERNEL_LZ4)  = lz4
   69 + compress-$(CONFIG_KERNEL_GZIP) = gzip
   70 + compress-$(CONFIG_KERNEL_LZO)  = lzo
   71 + compress-$(CONFIG_KERNEL_LZMA) = lzma
   72 + compress-$(CONFIG_KERNEL_XZ)   = xzkern
   73 + compress-$(CONFIG_KERNEL_LZ4)  = lz4
74 74
75 75  # Borrowed libfdt files for the ATAG compatibility mode
76 76
···
89 89  OBJS	+= $(libfdt_objs) atags_to_fdt.o
90 90  endif
91 91
92    - targets       := vmlinux vmlinux.lds \
93    - 		 piggy.$(suffix_y) piggy.$(suffix_y).o \
94    - 		 lib1funcs.o lib1funcs.S ashldi3.o ashldi3.S bswapsdi2.o \
95    - 		 bswapsdi2.S font.o font.c head.o misc.o $(OBJS)
   92 + targets       := vmlinux vmlinux.lds piggy_data piggy.o \
   93 + 		 lib1funcs.o ashldi3.o bswapsdi2.o \
   94 + 		 head.o $(OBJS)
96 95
97    - # Make sure files are removed during clean
98    - extra-y       += piggy.gzip piggy.lzo piggy.lzma piggy.xzkern piggy.lz4 \
99    - 		 lib1funcs.S ashldi3.S bswapsdi2.S $(libfdt) $(libfdt_hdrs) \
100   - 		 hyp-stub.S
   96 + clean-files += piggy_data lib1funcs.S ashldi3.S bswapsdi2.S \
   97 + 		$(libfdt) $(libfdt_hdrs) hyp-stub.S
101 98
102 99  KBUILD_CFLAGS += -DDISABLE_BRANCH_PROFILING
103 100
···
175 178
176 179  efi-obj-$(CONFIG_EFI_STUB) := $(objtree)/drivers/firmware/efi/libstub/lib.a
177 180
178     - $(obj)/vmlinux: $(obj)/vmlinux.lds $(obj)/$(HEAD) $(obj)/piggy.$(suffix_y).o \
    181 + $(obj)/vmlinux: $(obj)/vmlinux.lds $(obj)/$(HEAD) $(obj)/piggy.o \
179 182  		$(addprefix $(obj)/, $(OBJS)) $(lib1funcs) $(ashldi3) \
180 183  		$(bswapsdi2) $(efi-obj-y) FORCE
181 184  	@$(check_for_multiple_zreladdr)
182 185  	$(call if_changed,ld)
183 186  	@$(check_for_bad_syms)
184 187
185     - $(obj)/piggy.$(suffix_y): $(obj)/../Image FORCE
186     - 	$(call if_changed,$(suffix_y))
    188 + $(obj)/piggy_data: $(obj)/../Image FORCE
    189 + 	$(call if_changed,$(compress-y))
187 190
188     - $(obj)/piggy.$(suffix_y).o: $(obj)/piggy.$(suffix_y) FORCE
    191 + $(obj)/piggy.o: $(obj)/piggy_data
189 192
190 193  CFLAGS_font.o := -Dstatic=
-6
arch/arm/boot/compressed/piggy.gzip.S
···
1 - 	.section .piggydata,#alloc
2 - 	.globl	input_data
3 - input_data:
4 - 	.incbin	"arch/arm/boot/compressed/piggy.gzip"
5 - 	.globl	input_data_end
6 - input_data_end:
+1 -1
arch/arm/boot/compressed/piggy.lz4.S arch/arm/boot/compressed/piggy.S
···
1 1  	.section .piggydata,#alloc
2 2  	.globl	input_data
3 3  input_data:
4   - 	.incbin	"arch/arm/boot/compressed/piggy.lz4"
  4 + 	.incbin	"arch/arm/boot/compressed/piggy_data"
5 5  	.globl	input_data_end
6 6  input_data_end:
-6
arch/arm/boot/compressed/piggy.lzma.S
···
1 - 	.section .piggydata,#alloc
2 - 	.globl	input_data
3 - input_data:
4 - 	.incbin	"arch/arm/boot/compressed/piggy.lzma"
5 - 	.globl	input_data_end
6 - input_data_end:
-6
arch/arm/boot/compressed/piggy.lzo.S
···
1 - 	.section .piggydata,#alloc
2 - 	.globl	input_data
3 - input_data:
4 - 	.incbin	"arch/arm/boot/compressed/piggy.lzo"
5 - 	.globl	input_data_end
6 - input_data_end:
-6
arch/arm/boot/compressed/piggy.xzkern.S
···
1 - 	.section .piggydata,#alloc
2 - 	.globl	input_data
3 - input_data:
4 - 	.incbin	"arch/arm/boot/compressed/piggy.xzkern"
5 - 	.globl	input_data_end
6 - input_data_end:
+1 -1
arch/arm/common/sa1111.c
···
1290 1290  	struct sa1111_dev *dev = SA1111_DEV(_dev);
1291 1291  	struct sa1111_driver *drv = SA1111_DRV(_drv);
1292 1292
1293      - 	return dev->devid & drv->devid;
     1293 + 	return !!(dev->devid & drv->devid);
1294 1294  }
1295 1295
1296 1296  static int sa1111_bus_suspend(struct device *dev, pm_message_t state)
-1
arch/arm/include/asm/Kbuild
···
23 23  generic-y += resource.h
24 24  generic-y += rwsem.h
25 25  generic-y += seccomp.h
26    - generic-y += sections.h
27 26  generic-y += segment.h
28 27  generic-y += sembuf.h
29 28  generic-y += serial.h
+7 -7
arch/arm/include/asm/div64.h
···
74 74  static inline uint64_t __arch_xprod_64(uint64_t m, uint64_t n, bool bias)
75 75  {
76 76  	unsigned long long res;
77    - 	unsigned int tmp = 0;
   77 + 	register unsigned int tmp asm("ip") = 0;
78 78
79 79  	if (!bias) {
80 80  		asm (	"umull	%Q0, %R0, %Q1, %Q2\n\t"
···
90 90  			: "r" (m), "r" (n)
91 91  			: "cc");
92 92  	} else {
93    - 		asm (	"umull	%Q0, %R0, %Q1, %Q2\n\t"
94    - 			"cmn	%Q0, %Q1\n\t"
95    - 			"adcs	%R0, %R0, %R1\n\t"
96    - 			"adc	%Q0, %3, #0"
97    - 			: "=&r" (res)
98    - 			: "r" (m), "r" (n), "r" (tmp)
   93 + 		asm (	"umull	%Q0, %R0, %Q2, %Q3\n\t"
   94 + 			"cmn	%Q0, %Q2\n\t"
   95 + 			"adcs	%R0, %R0, %R2\n\t"
   96 + 			"adc	%Q0, %1, #0"
   97 + 			: "=&r" (res), "+&r" (tmp)
   98 + 			: "r" (m), "r" (n)
99 99  			: "cc");
100 100  	}
101 101
+18 -17
arch/arm/include/asm/memory.h
···
134 134   */
135 135  #define PLAT_PHYS_OFFSET	UL(CONFIG_PHYS_OFFSET)
136 136
    137 + #ifdef CONFIG_XIP_KERNEL
    138 + /*
    139 +  * When referencing data in RAM from the XIP region in a relative manner
    140 +  * with the MMU off, we need the relative offset between the two physical
    141 +  * addresses. The macro below achieves this, which is:
    142 +  *    __pa(v_data) - __xip_pa(v_text)
    143 +  */
    144 + #define PHYS_RELATIVE(v_data, v_text) \
    145 + 	(((v_data) - PAGE_OFFSET + PLAT_PHYS_OFFSET) - \
    146 + 	 ((v_text) - XIP_VIRT_ADDR(CONFIG_XIP_PHYS_ADDR) + \
    147 + 	  CONFIG_XIP_PHYS_ADDR))
    148 + #else
    149 + #define PHYS_RELATIVE(v_data, v_text) ((v_data) - (v_text))
    150 + #endif
    151 +
137 152  #ifndef __ASSEMBLY__
138 153
139 154  /*
···
288 273  #define __va(x)			((void *)__phys_to_virt((phys_addr_t)(x)))
289 274  #define pfn_to_kaddr(pfn)	__va((phys_addr_t)(pfn) << PAGE_SHIFT)
290 275
291     - extern phys_addr_t (*arch_virt_to_idmap)(unsigned long x);
    276 + extern unsigned long (*arch_virt_to_idmap)(unsigned long x);
292 277
293 278  /*
294 279   * These are for systems that have a hardware interconnect supported alias of
295 280   * physical memory for idmap purposes. Most cases should leave these
296     -  * untouched.
    281 +  * untouched.  Note: this can only return addresses less than 4GiB.
297 282   */
298     - static inline phys_addr_t __virt_to_idmap(unsigned long x)
    283 + static inline unsigned long __virt_to_idmap(unsigned long x)
299 284  {
300 285  	if (IS_ENABLED(CONFIG_MMU) && arch_virt_to_idmap)
301 286  		return arch_virt_to_idmap(x);
···
316 301  #define __bus_to_virt	__phys_to_virt
317 302  #define __pfn_to_bus(x)	__pfn_to_phys(x)
318 303  #define __bus_to_pfn(x)	__phys_to_pfn(x)
319     - #endif
320     -
321     - #ifdef CONFIG_VIRT_TO_BUS
322     - #define virt_to_bus virt_to_bus
323     - static inline __deprecated unsigned long virt_to_bus(void *x)
324     - {
325     - 	return __virt_to_bus((unsigned long)x);
326     - }
327     -
328     - #define bus_to_virt bus_to_virt
329     - static inline __deprecated void *bus_to_virt(unsigned long x)
330     - {
331     - 	return (void *)__bus_to_virt(x);
332     - }
333 304  #endif
334 305
335 306  /*
+12 -2
arch/arm/include/asm/mmu_context.h
···
26 26  #ifdef CONFIG_CPU_HAS_ASID
27 27
28 28  void check_and_switch_context(struct mm_struct *mm, struct task_struct *tsk);
29    - #define init_new_context(tsk,mm)	({ atomic64_set(&(mm)->context.id, 0); 0; })
   29 + static inline int
   30 + init_new_context(struct task_struct *tsk, struct mm_struct *mm)
   31 + {
   32 + 	atomic64_set(&mm->context.id, 0);
   33 + 	return 0;
   34 + }
30 35
31 36  #ifdef CONFIG_ARM_ERRATA_798181
32 37  void a15_erratum_get_cpumask(int this_cpu, struct mm_struct *mm,
···
90 85
91 86  #endif	/* CONFIG_MMU */
92 87
93    - #define init_new_context(tsk,mm)	0
   88 + static inline int
   89 + init_new_context(struct task_struct *tsk, struct mm_struct *mm)
   90 + {
   91 + 	return 0;
   92 + }
   93 +
94 94
95 95  #endif	/* CONFIG_CPU_HAS_ASID */
96 96
+8
arch/arm/include/asm/sections.h
···
1 + #ifndef _ASM_ARM_SECTIONS_H
2 + #define _ASM_ARM_SECTIONS_H
3 +
4 + #include <asm-generic/sections.h>
5 +
6 + extern char _exiprom[];
7 +
8 + #endif	/* _ASM_ARM_SECTIONS_H */
+4 -3
arch/arm/include/asm/sparsemem.h
···
15 15   * Eg, if you have 2 banks of up to 64MB at 0x80000000, 0x84000000,
16 16   * then MAX_PHYSMEM_BITS is 32, SECTION_SIZE_BITS is 26.
17 17   *
18    -  * Define these in your mach/memory.h.
   18 +  * These can be overridden in your mach/memory.h.
19 19   */
20    - #if !defined(SECTION_SIZE_BITS) || !defined(MAX_PHYSMEM_BITS)
21    - #error Sparsemem is not supported on this platform
   20 + #if !defined(MAX_PHYSMEM_BITS) || !defined(SECTION_SIZE_BITS)
   21 + #define MAX_PHYSMEM_BITS	36
   22 + #define SECTION_SIZE_BITS	28
22 23  #endif
23 24
24 25  #endif
+3 -5
arch/arm/kernel/entry-armv.S
···
1064 1064  	.endm
1065 1065
1066 1066  	.section .stubs, "ax", %progbits
1067      - __stubs_start:
1068 1067  	@ This must be the first word
1069 1068  	.word	vector_swi
···
1201 1202  	.long	__fiq_svc			@  e
1202 1203  	.long	__fiq_svc			@  f
1203 1204
1204      - 	.globl	vector_fiq_offset
1205      - 	.equ	vector_fiq_offset, vector_fiq
     1205 + 	.globl	vector_fiq
1206 1206
1207 1207  	.section .vectors, "ax", %progbits
1208      - __vectors_start:
     1208 + .L__vectors_start:
1209 1209  	W(b)	vector_rst
1210 1210  	W(b)	vector_und
1211      - 	W(ldr)	pc, __vectors_start + 0x1000
     1211 + 	W(ldr)	pc, .L__vectors_start + 0x1000
1212 1212  	W(b)	vector_pabt
1213 1213  	W(b)	vector_dabt
1214 1214  	W(b)	vector_addrexcptn
+2 -2
arch/arm/kernel/hibernate.c
···
62 62
63 63  	ret = swsusp_save();
64 64  	if (ret == 0)
65    - 		_soft_restart(virt_to_phys(cpu_resume), false);
   65 + 		_soft_restart(virt_to_idmap(cpu_resume), false);
66 66  	return ret;
67 67  }
68 68
···
87 87  	for (pbe = restore_pblist; pbe; pbe = pbe->next)
88 88  		copy_page(pbe->orig_address, pbe->address);
89 89
90    - 	_soft_restart(virt_to_phys(cpu_resume), false);
   90 + 	_soft_restart(virt_to_idmap(cpu_resume), false);
91 91  }
92 92
93 93  static u64 resume_stack[PAGE_SIZE/2/sizeof(u64)] __nosavedata;
+24
arch/arm/kernel/hyp-stub.S
···
17 17   */
18 18
19 19  #include <linux/init.h>
   20 + #include <linux/irqchip/arm-gic-v3.h>
20 21  #include <linux/linkage.h>
21 22  #include <asm/assembler.h>
22 23  #include <asm/virt.h>
···
160 159  	bic	r7, #1			@ Clear ENABLE
161 160  	mcr	p15, 0, r7, c14, c3, 1	@ CNTV_CTL
162 161  1:
    162 + #endif
    163 +
    164 + #ifdef CONFIG_ARM_GIC_V3
    165 + 	@ Check whether GICv3 system registers are available
    166 + 	mrc	p15, 0, r7, c0, c1, 1	@ ID_PFR1
    167 + 	ubfx	r7, r7, #28, #4
    168 + 	cmp	r7, #1
    169 + 	bne	2f
    170 +
    171 + 	@ Enable system register accesses
    172 + 	mrc	p15, 4, r7, c12, c9, 5	@ ICC_HSRE
    173 + 	orr	r7, r7, #(ICC_SRE_EL2_ENABLE | ICC_SRE_EL2_SRE)
    174 + 	mcr	p15, 4, r7, c12, c9, 5	@ ICC_HSRE
    175 + 	isb
    176 +
    177 + 	@ SRE bit could be forced to 0 by firmware.
    178 + 	@ Check whether it sticks before accessing any other sysreg
    179 + 	mrc	p15, 4, r7, c12, c9, 5	@ ICC_HSRE
    180 + 	tst	r7, #ICC_SRE_EL2_SRE
    181 + 	beq	2f
    182 + 	mov	r7, #0
    183 + 	mcr	p15, 4, r7, c12, c11, 0	@ ICH_HCR
    184 + 2:
163 185  #endif
164 186
165 187  	bx	lr			@ The boot CPU mode is left in r4.
+1 -1
arch/arm/kernel/irq.c
···
95 95  		outer_cache.write_sec = machine_desc->l2c_write_sec;
96 96  		ret = l2x0_of_init(machine_desc->l2c_aux_val,
97 97  				   machine_desc->l2c_aux_mask);
98    - 		if (ret)
   98 + 		if (ret && ret != -ENODEV)
99 99  			pr_err("L2C: failed to init: %d\n", ret);
100 100  }
101 101
+6 -10
arch/arm/kernel/machine_kexec.c
···
143 143
144 144  void machine_kexec(struct kimage *image)
145 145  {
146     - 	unsigned long page_list;
147     - 	unsigned long reboot_code_buffer_phys;
148     - 	unsigned long reboot_entry = (unsigned long)relocate_new_kernel;
149     - 	unsigned long reboot_entry_phys;
    146 + 	unsigned long page_list, reboot_entry_phys;
    147 + 	void (*reboot_entry)(void);
150 148  	void *reboot_code_buffer;
151 149
152 150  	/*
···
157 159
158 160  	page_list = image->head & PAGE_MASK;
159 161
160     - 	/* we need both effective and real address here */
161     - 	reboot_code_buffer_phys =
162     - 	    page_to_pfn(image->control_code_page) << PAGE_SHIFT;
163 162  	reboot_code_buffer = page_address(image->control_code_page);
164 163
165 164  	/* Prepare parameters for reboot_code_buffer*/
···
169 174
170 175  	/* copy our kernel relocation code to the control code page */
171 176  	reboot_entry = fncpy(reboot_code_buffer,
172     - 			     reboot_entry,
    177 + 			     &relocate_new_kernel,
173 178  			     relocate_new_kernel_size);
174     - 	reboot_entry_phys = (unsigned long)reboot_entry +
175     - 		(reboot_code_buffer_phys - (unsigned long)reboot_code_buffer);
    179 +
    180 + 	/* get the identity mapping physical address for the reboot code */
    181 + 	reboot_entry_phys = virt_to_idmap(reboot_entry);
176 182
177 183  	pr_info("Bye!\n");
178 184
+1 -1
arch/arm/kernel/module.c
···
34 34   * recompiling the whole kernel when CONFIG_XIP_KERNEL is turned on/off.
35 35   */
36 36  #undef MODULES_VADDR
37    - #define MODULES_VADDR	(((unsigned long)_etext + ~PMD_MASK) & PMD_MASK)
   37 + #define MODULES_VADDR	(((unsigned long)_exiprom + ~PMD_MASK) & PMD_MASK)
38 38  #endif
39 39
40 40  #ifdef CONFIG_MMU
+1 -1
arch/arm/kernel/reboot.c
···
50 50  	flush_cache_all();
51 51
52 52  	/* Switch to the identity mapping. */
53    - 	phys_reset = (phys_reset_t)(unsigned long)virt_to_idmap(cpu_reset);
   53 + 	phys_reset = (phys_reset_t)virt_to_idmap(cpu_reset);
54 54  	phys_reset((unsigned long)addr);
55 55
56 56  	/* Should never get here. */
+1 -3
arch/arm/kernel/topology.c
···
40 40   * to run the rebalance_domains for all idle cores and the cpu_capacity can be
41 41   * updated during this sequence.
42 42   */
43    - static DEFINE_PER_CPU(unsigned long, cpu_scale);
   43 + static DEFINE_PER_CPU(unsigned long, cpu_scale) = SCHED_CAPACITY_SCALE;
44 44
45 45  unsigned long arch_scale_cpu_capacity(struct sched_domain *sd, int cpu)
46 46  {
···
306 306  		cpu_topo->socket_id = -1;
307 307  		cpumask_clear(&cpu_topo->core_sibling);
308 308  		cpumask_clear(&cpu_topo->thread_sibling);
309     -
310     - 		set_capacity_scale(cpu, SCHED_CAPACITY_SCALE);
311 309  	}
312 310
313 311  	smp_wmb();
+316
arch/arm/kernel/vmlinux-xip.lds.S
···
1 + /* ld script to make ARM Linux kernel
2 +  * taken from the i386 version by Russell King
3 +  * Written by Martin Mares <mj@atrey.karlin.mff.cuni.cz>
4 +  */
5 +
6 + #include <asm-generic/vmlinux.lds.h>
7 + #include <asm/cache.h>
8 + #include <asm/thread_info.h>
9 + #include <asm/memory.h>
10 + #include <asm/page.h>
11 +
12 + #define PROC_INFO							\
13 + 	. = ALIGN(4);							\
14 + 	VMLINUX_SYMBOL(__proc_info_begin) = .;				\
15 + 	*(.proc.info.init)						\
16 + 	VMLINUX_SYMBOL(__proc_info_end) = .;
17 +
18 + #define IDMAP_TEXT							\
19 + 	ALIGN_FUNCTION();						\
20 + 	VMLINUX_SYMBOL(__idmap_text_start) = .;				\
21 + 	*(.idmap.text)							\
22 + 	VMLINUX_SYMBOL(__idmap_text_end) = .;				\
23 + 	. = ALIGN(PAGE_SIZE);						\
24 + 	VMLINUX_SYMBOL(__hyp_idmap_text_start) = .;			\
25 + 	*(.hyp.idmap.text)						\
26 + 	VMLINUX_SYMBOL(__hyp_idmap_text_end) = .;
27 +
28 + #ifdef CONFIG_HOTPLUG_CPU
29 + #define ARM_CPU_DISCARD(x)
30 + #define ARM_CPU_KEEP(x)		x
31 + #else
32 + #define ARM_CPU_DISCARD(x)	x
33 + #define ARM_CPU_KEEP(x)
34 + #endif
35 +
36 + #if (defined(CONFIG_SMP_ON_UP) && !defined(CONFIG_DEBUG_SPINLOCK)) || \
37 + 	defined(CONFIG_GENERIC_BUG)
38 + #define ARM_EXIT_KEEP(x)	x
39 + #define ARM_EXIT_DISCARD(x)
40 + #else
41 + #define ARM_EXIT_KEEP(x)
42 + #define ARM_EXIT_DISCARD(x)	x
43 + #endif
44 +
45 + OUTPUT_ARCH(arm)
46 + ENTRY(stext)
47 +
48 + #ifndef __ARMEB__
49 + jiffies = jiffies_64;
50 + #else
51 + jiffies = jiffies_64 + 4;
52 + #endif
53 +
54 + SECTIONS
55 + {
56 + 	/*
57 + 	 * XXX: The linker does not define how output sections are
58 + 	 * assigned to input sections when there are multiple statements
59 + 	 * matching the same input section name.  There is no documented
60 + 	 * order of matching.
61 + 	 *
62 + 	 * unwind exit sections must be discarded before the rest of the
63 + 	 * unwind sections get included.
64 + 	 */
65 + 	/DISCARD/ : {
66 + 		*(.ARM.exidx.exit.text)
67 + 		*(.ARM.extab.exit.text)
68 + 		ARM_CPU_DISCARD(*(.ARM.exidx.cpuexit.text))
69 + 		ARM_CPU_DISCARD(*(.ARM.extab.cpuexit.text))
70 + 		ARM_EXIT_DISCARD(EXIT_TEXT)
71 + 		ARM_EXIT_DISCARD(EXIT_DATA)
72 + 		EXIT_CALL
73 + #ifndef CONFIG_MMU
74 + 		*(.text.fixup)
75 + 		*(__ex_table)
76 + #endif
77 + #ifndef CONFIG_SMP_ON_UP
78 + 		*(.alt.smp.init)
79 + #endif
80 + 		*(.discard)
81 + 		*(.discard.*)
82 + 	}
83 +
84 + 	. = XIP_VIRT_ADDR(CONFIG_XIP_PHYS_ADDR);
85 + 	_xiprom = .;			/* XIP ROM area to be mapped */
86 +
87 + 	.head.text : {
88 + 		_text = .;
89 + 		HEAD_TEXT
90 + 	}
91 +
92 + 	.text : {			/* Real text segment		*/
93 + 		_stext = .;		/* Text and read-only data	*/
94 + 			IDMAP_TEXT
95 + 			__exception_text_start = .;
96 + 			*(.exception.text)
97 + 			__exception_text_end = .;
98 + 			IRQENTRY_TEXT
99 + 			TEXT_TEXT
100 + 			SCHED_TEXT
101 + 			LOCK_TEXT
102 + 			KPROBES_TEXT
103 + 			*(.gnu.warning)
104 + 			*(.glue_7)
105 + 			*(.glue_7t)
106 + 		. = ALIGN(4);
107 + 		*(.got)			/* Global offset table		*/
108 + 			ARM_CPU_KEEP(PROC_INFO)
109 + 	}
110 +
111 + 	RO_DATA(PAGE_SIZE)
112 +
113 + 	. = ALIGN(4);
114 + 	__ex_table : AT(ADDR(__ex_table) - LOAD_OFFSET) {
115 + 		__start___ex_table = .;
116 + #ifdef CONFIG_MMU
117 + 		*(__ex_table)
118 + #endif
119 + 		__stop___ex_table = .;
120 + 	}
121 +
122 + #ifdef CONFIG_ARM_UNWIND
123 + 	/*
124 + 	 * Stack unwinding tables
125 + 	 */
126 + 	. = ALIGN(8);
127 + 	.ARM.unwind_idx : {
128 + 		__start_unwind_idx = .;
129 + 		*(.ARM.exidx*)
130 + 		__stop_unwind_idx = .;
131 + 	}
132 + 	.ARM.unwind_tab : {
133 + 		__start_unwind_tab = .;
134 + 		*(.ARM.extab*)
135 + 		__stop_unwind_tab = .;
136 + 	}
137 + #endif
138 +
139 + 	NOTES
140 +
141 + 	_etext = .;			/* End of text and rodata section */
142 +
143 + 	/*
144 + 	 * The vectors and stubs are relocatable code, and the
145 + 	 * only thing that matters is their relative offsets
146 + 	 */
147 + 	__vectors_start = .;
148 + 	.vectors 0xffff0000 : AT(__vectors_start) {
149 + 		*(.vectors)
150 + 	}
151 + 	. = __vectors_start + SIZEOF(.vectors);
152 + 	__vectors_end = .;
153 +
154 + 	__stubs_start = .;
155 + 	.stubs ADDR(.vectors) + 0x1000 : AT(__stubs_start) {
156 + 		*(.stubs)
157 + 	}
158 + 	. = __stubs_start + SIZEOF(.stubs);
159 + 	__stubs_end = .;
160 +
161 + 	PROVIDE(vector_fiq_offset = vector_fiq - ADDR(.vectors));
162 +
163 + 	INIT_TEXT_SECTION(8)
164 + 	.exit.text : {
165 + 		ARM_EXIT_KEEP(EXIT_TEXT)
166 + 	}
167 + 	.init.proc.info : {
168 + 		ARM_CPU_DISCARD(PROC_INFO)
169 + 	}
170 + 	.init.arch.info : {
171 + 		__arch_info_begin = .;
172 + 		*(.arch.info.init)
173 + 		__arch_info_end = .;
174 + 	}
175 + 	.init.tagtable : {
176 + 		__tagtable_begin = .;
177 + 		*(.taglist.init)
178 + 		__tagtable_end = .;
179 + 	}
180 + #ifdef CONFIG_SMP_ON_UP
181 + 	.init.smpalt : {
182 + 		__smpalt_begin = .;
183 + 		*(.alt.smp.init)
184 + 		__smpalt_end = .;
185 + 	}
186 + #endif
187 + 	.init.pv_table : {
188 + 		__pv_table_begin = .;
189 + 		*(.pv_table)
190 + 		__pv_table_end = .;
191 + 	}
192 + 	.init.data : {
193 + 		INIT_SETUP(16)
194 + 		INIT_CALLS
195 + 		CON_INITCALL
196 + 		SECURITY_INITCALL
197 + 		INIT_RAM_FS
198 + 	}
199 +
200 + #ifdef CONFIG_SMP
201 + 	PERCPU_SECTION(L1_CACHE_BYTES)
202 + #endif
203 +
204 + 	_exiprom = .;			/* End of XIP ROM area */
205 + 	__data_loc = ALIGN(4);		/* location in binary */
206 + 	. = PAGE_OFFSET + TEXT_OFFSET;
207 +
208 + 	.data : AT(__data_loc) {
209 + 		_data = .;		/* address in memory */
210 + 		_sdata = .;
211 +
212 + 		/*
213 + 		 * first, the init task union, aligned
214 + 		 * to an 8192 byte boundary.
215 + 		 */
216 + 		INIT_TASK_DATA(THREAD_SIZE)
217 +
218 + 		. = ALIGN(PAGE_SIZE);
219 + 		__init_begin = .;
220 + 		INIT_DATA
221 + 		ARM_EXIT_KEEP(EXIT_DATA)
222 + 		. = ALIGN(PAGE_SIZE);
223 + 		__init_end = .;
224 +
225 + 		NOSAVE_DATA
226 + 		CACHELINE_ALIGNED_DATA(L1_CACHE_BYTES)
227 + 		READ_MOSTLY_DATA(L1_CACHE_BYTES)
228 +
229 + 		/*
230 + 		 * and the usual data section
231 + 		 */
232 + 		DATA_DATA
233 + 		CONSTRUCTORS
234 +
235 + 		_edata = .;
236 + 	}
237 + 	_edata_loc = __data_loc + SIZEOF(.data);
238 +
239 + #ifdef CONFIG_HAVE_TCM
240 + 	/*
241 + 	 * We align everything to a page boundary so we can
242 + 	 * free it after init has commenced and TCM contents have
243 + 	 * been copied to its destination.
244 + 	 */
245 + 	.tcm_start : {
246 + 		. = ALIGN(PAGE_SIZE);
247 + 		__tcm_start = .;
248 + 		__itcm_start = .;
249 + 	}
250 +
251 + 	/*
252 + 	 * Link these to the ITCM RAM
253 + 	 * Put VMA to the TCM address and LMA to the common RAM
254 + 	 * and we'll upload the contents from RAM to TCM and free
255 + 	 * the used RAM after that.
256 + 	 */
257 + 	.text_itcm ITCM_OFFSET : AT(__itcm_start)
258 + 	{
259 + 		__sitcm_text = .;
260 + 		*(.tcm.text)
261 + 		*(.tcm.rodata)
262 + 		. = ALIGN(4);
263 + 		__eitcm_text = .;
264 + 	}
265 +
266 + 	/*
267 + 	 * Reset the dot pointer, this is needed to create the
268 + 	 * relative __dtcm_start below (to be used as extern in code).
269 + 	 */
270 + 	. = ADDR(.tcm_start) + SIZEOF(.tcm_start) + SIZEOF(.text_itcm);
271 +
272 + 	.dtcm_start : {
273 + 		__dtcm_start = .;
274 + 	}
275 +
276 + 	/* TODO: add remainder of ITCM as well, that can be used for data! */
277 + 	.data_dtcm DTCM_OFFSET : AT(__dtcm_start)
278 + 	{
279 + 		. = ALIGN(4);
280 + 		__sdtcm_data = .;
281 + 		*(.tcm.data)
282 + 		. = ALIGN(4);
283 + 		__edtcm_data = .;
284 + 	}
285 +
286 + 	/* Reset the dot pointer or the linker gets confused */
287 + 	. = ADDR(.dtcm_start) + SIZEOF(.data_dtcm);
288 +
289 + 	/* End marker for freeing TCM copy in linked object */
290 + 	.tcm_end : AT(ADDR(.dtcm_start) + SIZEOF(.data_dtcm)){
291 + 		. = ALIGN(PAGE_SIZE);
292 + 		__tcm_end = .;
293 + 	}
294 + #endif
295 +
296 + 	BSS_SECTION(0, 0, 0)
297 + 	_end = .;
298 +
299 + 	STABS_DEBUG
300 + }
301 +
302 + /*
303 +  * These must never be empty
304 +  * If you have to comment these two assert statements out, your
305 +  * binutils is too old (for other reasons as well)
306 +  */
307 + ASSERT((__proc_info_end - __proc_info_begin), "missing CPU support")
308 + ASSERT((__arch_info_end - __arch_info_begin), "no machine record defined")
309 +
310 + /*
311 +  * The HYP init code can't be more than a page long,
312 +  * and should not cross a page boundary.
313 +  * The above comment applies as well.
314 +  */
315 + ASSERT(__hyp_idmap_text_end - (__hyp_idmap_text_start & PAGE_MASK) <= PAGE_SIZE,
316 + 	"HYP init code too big or misaligned")
+26 -34
arch/arm/kernel/vmlinux.lds.S
···
3 3   * Written by Martin Mares <mj@atrey.karlin.mff.cuni.cz>
4 4   */
5 5
   6 + #ifdef CONFIG_XIP_KERNEL
   7 + #include "vmlinux-xip.lds.S"
   8 + #else
   9 +
6 10  #include <asm-generic/vmlinux.lds.h>
7 11  #include <asm/cache.h>
8 12  #include <asm/thread_info.h>
9 13  #include <asm/memory.h>
10 14  #include <asm/page.h>
11    - #ifdef CONFIG_ARM_KERNMEM_PERMS
12 15  #include <asm/pgtable.h>
13    - #endif
14 16
15 17  #define PROC_INFO							\
16 18  	. = ALIGN(4);							\
···
91 89  		*(.discard.*)
92 90  	}
93 91
94    - #ifdef CONFIG_XIP_KERNEL
95    - 	. = XIP_VIRT_ADDR(CONFIG_XIP_PHYS_ADDR);
96    - #else
97 92  	. = PAGE_OFFSET + TEXT_OFFSET;
98    - #endif
99 93  	.head.text : {
100 94  		_text = .;
101 95  		HEAD_TEXT
102 96  	}
103 97
104    - #ifdef CONFIG_ARM_KERNMEM_PERMS
   98 + #ifdef CONFIG_DEBUG_RODATA
105 99  	. = ALIGN(1<<SECTION_SHIFT);
106 100  #endif
107 101
···
121 123  		ARM_CPU_KEEP(PROC_INFO)
122 124  	}
123 125
124     - #ifdef CONFIG_DEBUG_RODATA
    126 + #ifdef CONFIG_DEBUG_ALIGN_RODATA
125 127  	. = ALIGN(1<<SECTION_SHIFT);
126 128  #endif
127 129  	RO_DATA(PAGE_SIZE)
···
156 158
157 159  	_etext = .;			/* End of text and rodata section */
158 160
159     - #ifndef CONFIG_XIP_KERNEL
160     - # ifdef CONFIG_ARM_KERNMEM_PERMS
    161 + #ifdef CONFIG_DEBUG_RODATA
161 162  	. = ALIGN(1<<SECTION_SHIFT);
162     - # else
    163 + #else
163 164  	. = ALIGN(PAGE_SIZE);
164     - # endif
165     - 	__init_begin = .;
166 165  #endif
    166 + 	__init_begin = .;
    167 +
167 168  	/*
168 169  	 * The vectors and stubs are relocatable code, and the
169 170  	 * only thing that matters is their relative offsets
170 171  	 */
171 172  	__vectors_start = .;
172     - 	.vectors 0 : AT(__vectors_start) {
    173 + 	.vectors 0xffff0000 : AT(__vectors_start) {
173 174  		*(.vectors)
174 175  	}
175 176  	. = __vectors_start + SIZEOF(.vectors);
176 177  	__vectors_end = .;
177 178
178 179  	__stubs_start = .;
179     - 	.stubs 0x1000 : AT(__stubs_start) {
    180 + 	.stubs ADDR(.vectors) + 0x1000 : AT(__stubs_start) {
180 181  		*(.stubs)
181 182  	}
182 183  	. = __stubs_start + SIZEOF(.stubs);
183 184  	__stubs_end = .;
    185 +
    186 + 	PROVIDE(vector_fiq_offset = vector_fiq - ADDR(.vectors));
184 187
185 188  	INIT_TEXT_SECTION(8)
186 189  	.exit.text : {
···
213 214  		__pv_table_end = .;
214 215  	}
215 216  	.init.data : {
216     - #ifndef CONFIG_XIP_KERNEL
217 217  		INIT_DATA
218     - #endif
219 218  		INIT_SETUP(16)
220 219  		INIT_CALLS
221 220  		CON_INITCALL
222 221  		SECURITY_INITCALL
223 222  		INIT_RAM_FS
224 223  	}
225     - #ifndef CONFIG_XIP_KERNEL
226 224  	.exit.data : {
227 225  		ARM_EXIT_KEEP(EXIT_DATA)
228 226  	}
229     - #endif
230 227
231 228  #ifdef CONFIG_SMP
232 229  	PERCPU_SECTION(L1_CACHE_BYTES)
233 230  #endif
234 231
235     - #ifdef CONFIG_XIP_KERNEL
236     - 	__data_loc = ALIGN(4);		/* location in binary */
237     - 	. = PAGE_OFFSET + TEXT_OFFSET;
238     - #else
239     - #ifdef CONFIG_ARM_KERNMEM_PERMS
    232 + #ifdef CONFIG_DEBUG_RODATA
240 233  	. = ALIGN(1<<SECTION_SHIFT);
241 234  #else
242 235  	. = ALIGN(THREAD_SIZE);
243 236  #endif
244 237  	__init_end = .;
245 238  	__data_loc = .;
246     - #endif
247 239
248 240  	.data : AT(__data_loc) {
249 241  		_data = .;		/* address in memory */
···
245 255  	 * to an 8192 byte boundary.
246 256  	 */
247 257  		INIT_TASK_DATA(THREAD_SIZE)
248     -
249     - #ifdef CONFIG_XIP_KERNEL
250     - 		. = ALIGN(PAGE_SIZE);
251     - 		__init_begin = .;
252     - 		INIT_DATA
253     - 		ARM_EXIT_KEEP(EXIT_DATA)
254     - 		. = ALIGN(PAGE_SIZE);
255     - 		__init_end = .;
256     - #endif
257 258
258 259  		NOSAVE_DATA
259 260  		CACHELINE_ALIGNED_DATA(L1_CACHE_BYTES)
···
323 342  	STABS_DEBUG
324 343  }
325 344
    345 + #ifdef CONFIG_DEBUG_RODATA
    346 + /*
    347 +  * Without CONFIG_DEBUG_ALIGN_RODATA, __start_rodata_section_aligned will
    348 +  * be the first section-aligned location after __start_rodata. Otherwise,
    349 +  * it will be equal to __start_rodata.
    350 +  */
    351 + __start_rodata_section_aligned = ALIGN(__start_rodata, 1 << SECTION_SHIFT);
    352 + #endif
    353 +
326 354  /*
327 355   * These must never be empty
328 356   * If you have to comment these two assert statements out, your
···
347 357   */
348 358  ASSERT(__hyp_idmap_text_end - (__hyp_idmap_text_start & PAGE_MASK) <= PAGE_SIZE,
349 359  	"HYP init code too big or misaligned")
    360 +
    361 + #endif /* CONFIG_XIP_KERNEL */
+1 -1
arch/arm/mach-davinci/include/mach/uncompress.h
··· 30 30 u32 *uart; 31 31 32 32 /* PORT_16C550A, in polled non-fifo mode */ 33 - static void putc(char c) 33 + static inline void putc(char c) 34 34 { 35 35 if (!uart) 36 36 return;
-1
arch/arm/mach-footbridge/Kconfig
··· 68 68 select ISA 69 69 select ISA_DMA 70 70 select PCI 71 - select VIRT_TO_BUS 72 71 help 73 72 Say Y here if you intend to run this kernel on the Rebel.COM 74 73 NetWinder. Information about this machine can be found at:
+1 -1
arch/arm/mach-keystone/keystone.c
··· 63 63 of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL); 64 64 } 65 65 66 - static phys_addr_t keystone_virt_to_idmap(unsigned long x) 66 + static unsigned long keystone_virt_to_idmap(unsigned long x) 67 67 { 68 68 return (phys_addr_t)(x) - CONFIG_PAGE_OFFSET + KEYSTONE_LOW_PHYS_START; 69 69 }
+1 -1
arch/arm/mach-ks8695/include/mach/uncompress.h
··· 17 17 #include <linux/io.h> 18 18 #include <mach/regs-uart.h> 19 19 20 - static void putc(char c) 20 + static inline void putc(char c) 21 21 { 22 22 while (!(__raw_readl((void __iomem*)KS8695_UART_PA + KS8695_URLS) & URLS_URTHRE)) 23 23 barrier();
+1 -1
arch/arm/mach-netx/include/mach/uncompress.h
··· 40 40 #define FR_BUSY (1<<3) 41 41 #define FR_TXFF (1<<5) 42 42 43 - static void putc(char c) 43 + static inline void putc(char c) 44 44 { 45 45 unsigned long base; 46 46
+1 -1
arch/arm/mach-omap1/include/mach/uncompress.h
··· 45 45 *uart_info = port; 46 46 } 47 47 48 - static void putc(int c) 48 + static inline void putc(int c) 49 49 { 50 50 if (!uart_base) 51 51 return;
+1 -1
arch/arm/mach-rpc/include/mach/uncompress.h
··· 76 76 /* 77 77 * This does not append a newline 78 78 */ 79 - static void putc(int c) 79 + static inline void putc(int c) 80 80 { 81 81 extern void ll_write_char(char *, char c, char white); 82 82 int x,y;
+1 -1
arch/arm/mach-sa1100/include/mach/uncompress.h
··· 19 19 20 20 #define UART(x) (*(volatile unsigned long *)(serial_port + (x))) 21 21 22 - static void putc(int c) 22 + static inline void putc(int c) 23 23 { 24 24 unsigned long serial_port; 25 25
+1 -1
arch/arm/mach-w90x900/include/mach/uncompress.h
··· 27 27 #define TX_DONE (UART_LSR_TEMT | UART_LSR_THRE) 28 28 static volatile u32 * const uart_base = (u32 *)UART0_PA; 29 29 30 - static void putc(int ch) 30 + static inline void putc(int ch) 31 31 { 32 32 /* Check THRE and TEMT bits before we transmit the character. 33 33 */
+19 -17
arch/arm/mm/Kconfig
··· 1037 1037 This option specifies the architecture can support big endian 1038 1038 operation. 1039 1039 1040 - config ARM_KERNMEM_PERMS 1041 - bool "Restrict kernel memory permissions" 1042 - depends on MMU 1043 - help 1044 - If this is set, kernel memory other than kernel text (and rodata) 1045 - will be made non-executable. The tradeoff is that each region is 1046 - padded to section-size (1MiB) boundaries (because their permissions 1047 - are different and splitting the 1M pages into 4K ones causes TLB 1048 - performance problems), wasting memory. 1049 - 1050 1040 config DEBUG_RODATA 1051 1041 bool "Make kernel text and rodata read-only" 1052 - depends on ARM_KERNMEM_PERMS 1042 + depends on MMU && !XIP_KERNEL 1043 + default y if CPU_V7 1044 + help 1045 + If this is set, kernel text and rodata memory will be made 1046 + read-only, and non-text kernel memory will be made non-executable. 1047 + The tradeoff is that each region is padded to section-size (1MiB) 1048 + boundaries (because their permissions are different and splitting 1049 + the 1M pages into 4K ones causes TLB performance problems), which 1050 + can waste memory. 1051 + 1052 + config DEBUG_ALIGN_RODATA 1053 + bool "Make rodata strictly non-executable" 1054 + depends on DEBUG_RODATA 1053 1055 default y 1054 1056 help 1055 - If this is set, kernel text and rodata will be made read-only. This 1056 - is to help catch accidental or malicious attempts to change the 1057 - kernel's executable code. Additionally splits rodata from kernel 1058 - text so it can be made explicitly non-executable. This creates 1059 - another section-size padded region, so it can waste more memory 1060 - space while gaining the read-only protections. 1057 + If this is set, rodata will be made explicitly non-executable. This 1058 + provides protection on the rare chance that attackers might find and 1059 + use ROP gadgets that exist in the rodata section. 
This adds an 1060 + additional section-aligned split of rodata from kernel text so it 1061 + can be made explicitly non-executable. This padding may waste memory 1062 + space to gain the additional protection.
+18 -14
arch/arm/mm/cache-tauros2.c
··· 22 22 #include <asm/cputype.h> 23 23 #include <asm/hardware/cache-tauros2.h> 24 24 25 + /* CP15 PJ4 Control configuration register */ 26 + #define CCR_L2C_PREFETCH_DISABLE BIT(24) 27 + #define CCR_L2C_ECC_ENABLE BIT(23) 28 + #define CCR_L2C_WAY7_4_DISABLE BIT(21) 29 + #define CCR_L2C_BURST8_ENABLE BIT(20) 25 30 26 31 /* 27 32 * When Tauros2 is used on a CPU that supports the v7 hierarchical ··· 187 182 u = read_extra_features(); 188 183 189 184 if (features & CACHE_TAUROS2_PREFETCH_ON) 190 - u &= ~0x01000000; 185 + u &= ~CCR_L2C_PREFETCH_DISABLE; 191 186 else 192 - u |= 0x01000000; 187 + u |= CCR_L2C_PREFETCH_DISABLE; 193 188 pr_info("Tauros2: %s L2 prefetch.\n", 194 189 (features & CACHE_TAUROS2_PREFETCH_ON) 195 190 ? "Enabling" : "Disabling"); 196 191 197 192 if (features & CACHE_TAUROS2_LINEFILL_BURST8) 198 - u |= 0x00100000; 193 + u |= CCR_L2C_BURST8_ENABLE; 199 194 else 200 - u &= ~0x00100000; 201 - pr_info("Tauros2: %s line fill burt8.\n", 195 + u &= ~CCR_L2C_BURST8_ENABLE; 196 + pr_info("Tauros2: %s burst8 line fill.\n", 202 197 (features & CACHE_TAUROS2_LINEFILL_BURST8) 203 198 ? "Enabling" : "Disabling"); 204 199 ··· 292 287 node = of_find_matching_node(NULL, tauros2_ids); 293 288 if (!node) { 294 289 pr_info("Not found marvell,tauros2-cache, disable it\n"); 295 - return; 290 + } else { 291 + ret = of_property_read_u32(node, "marvell,tauros2-cache-features", &f); 292 + if (ret) { 293 + pr_info("Not found marvell,tauros-cache-features property, " 294 + "disable extra features\n"); 295 + features = 0; 296 + } else 297 + features = f; 296 298 } 297 - 298 - ret = of_property_read_u32(node, "marvell,tauros2-cache-features", &f); 299 - if (ret) { 300 - pr_info("Not found marvell,tauros-cache-features property, " 301 - "disable extra features\n"); 302 - features = 0; 303 - } else 304 - features = f; 305 299 #endif 306 300 tauros2_internal_init(features); 307 301 }
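The Tauros2 hunk above replaces bare magic constants (0x01000000, 0x00100000) with named `BIT()` field macros. A minimal userspace sketch of the same idea, assuming a local `BIT()` definition; `set_prefetch()` is a hypothetical helper, not kernel code:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch only: the CCR_* names mirror the macros added in the patch
 * above; set_prefetch() is an illustrative helper showing how named
 * fields replace the old magic constants. */
#define BIT(n) (1u << (n))
#define CCR_L2C_PREFETCH_DISABLE BIT(24)
#define CCR_L2C_BURST8_ENABLE    BIT(20)

static uint32_t set_prefetch(uint32_t ccr, int enable)
{
    /* The hardware bit *disables* prefetch, hence the inverted sense. */
    return enable ? ccr & ~CCR_L2C_PREFETCH_DISABLE
                  : ccr |  CCR_L2C_PREFETCH_DISABLE;
}
```

Note that disabling prefetch on an all-zero register yields exactly the old literal 0x01000000.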
+196 -48
arch/arm/mm/dma-mapping.c
··· 42 42 #include "dma.h" 43 43 #include "mm.h" 44 44 45 + struct arm_dma_alloc_args { 46 + struct device *dev; 47 + size_t size; 48 + gfp_t gfp; 49 + pgprot_t prot; 50 + const void *caller; 51 + bool want_vaddr; 52 + }; 53 + 54 + struct arm_dma_free_args { 55 + struct device *dev; 56 + size_t size; 57 + void *cpu_addr; 58 + struct page *page; 59 + bool want_vaddr; 60 + }; 61 + 62 + struct arm_dma_allocator { 63 + void *(*alloc)(struct arm_dma_alloc_args *args, 64 + struct page **ret_page); 65 + void (*free)(struct arm_dma_free_args *args); 66 + }; 67 + 68 + struct arm_dma_buffer { 69 + struct list_head list; 70 + void *virt; 71 + struct arm_dma_allocator *allocator; 72 + }; 73 + 74 + static LIST_HEAD(arm_dma_bufs); 75 + static DEFINE_SPINLOCK(arm_dma_bufs_lock); 76 + 77 + static struct arm_dma_buffer *arm_dma_buffer_find(void *virt) 78 + { 79 + struct arm_dma_buffer *buf, *found = NULL; 80 + unsigned long flags; 81 + 82 + spin_lock_irqsave(&arm_dma_bufs_lock, flags); 83 + list_for_each_entry(buf, &arm_dma_bufs, list) { 84 + if (buf->virt == virt) { 85 + list_del(&buf->list); 86 + found = buf; 87 + break; 88 + } 89 + } 90 + spin_unlock_irqrestore(&arm_dma_bufs_lock, flags); 91 + return found; 92 + } 93 + 45 94 /* 46 95 * The DMA API is built upon the notion of "buffer ownership". 
A buffer 47 96 * is either exclusively owned by the CPU (and therefore may be accessed ··· 641 592 #define __alloc_remap_buffer(dev, size, gfp, prot, ret, c, wv) NULL 642 593 #define __alloc_from_pool(size, ret_page) NULL 643 594 #define __alloc_from_contiguous(dev, size, prot, ret, c, wv) NULL 644 - #define __free_from_pool(cpu_addr, size) 0 595 + #define __free_from_pool(cpu_addr, size) do { } while (0) 645 596 #define __free_from_contiguous(dev, page, cpu_addr, size, wv) do { } while (0) 646 597 #define __dma_free_remap(cpu_addr, size) do { } while (0) 647 598 ··· 659 610 return page_address(page); 660 611 } 661 612 613 + static void *simple_allocator_alloc(struct arm_dma_alloc_args *args, 614 + struct page **ret_page) 615 + { 616 + return __alloc_simple_buffer(args->dev, args->size, args->gfp, 617 + ret_page); 618 + } 662 619 620 + static void simple_allocator_free(struct arm_dma_free_args *args) 621 + { 622 + __dma_free_buffer(args->page, args->size); 623 + } 624 + 625 + static struct arm_dma_allocator simple_allocator = { 626 + .alloc = simple_allocator_alloc, 627 + .free = simple_allocator_free, 628 + }; 629 + 630 + static void *cma_allocator_alloc(struct arm_dma_alloc_args *args, 631 + struct page **ret_page) 632 + { 633 + return __alloc_from_contiguous(args->dev, args->size, args->prot, 634 + ret_page, args->caller, 635 + args->want_vaddr); 636 + } 637 + 638 + static void cma_allocator_free(struct arm_dma_free_args *args) 639 + { 640 + __free_from_contiguous(args->dev, args->page, args->cpu_addr, 641 + args->size, args->want_vaddr); 642 + } 643 + 644 + static struct arm_dma_allocator cma_allocator = { 645 + .alloc = cma_allocator_alloc, 646 + .free = cma_allocator_free, 647 + }; 648 + 649 + static void *pool_allocator_alloc(struct arm_dma_alloc_args *args, 650 + struct page **ret_page) 651 + { 652 + return __alloc_from_pool(args->size, ret_page); 653 + } 654 + 655 + static void pool_allocator_free(struct arm_dma_free_args *args) 656 + { 657 + 
__free_from_pool(args->cpu_addr, args->size); 658 + } 659 + 660 + static struct arm_dma_allocator pool_allocator = { 661 + .alloc = pool_allocator_alloc, 662 + .free = pool_allocator_free, 663 + }; 664 + 665 + static void *remap_allocator_alloc(struct arm_dma_alloc_args *args, 666 + struct page **ret_page) 667 + { 668 + return __alloc_remap_buffer(args->dev, args->size, args->gfp, 669 + args->prot, ret_page, args->caller, 670 + args->want_vaddr); 671 + } 672 + 673 + static void remap_allocator_free(struct arm_dma_free_args *args) 674 + { 675 + if (args->want_vaddr) 676 + __dma_free_remap(args->cpu_addr, args->size); 677 + 678 + __dma_free_buffer(args->page, args->size); 679 + } 680 + 681 + static struct arm_dma_allocator remap_allocator = { 682 + .alloc = remap_allocator_alloc, 683 + .free = remap_allocator_free, 684 + }; 663 685 664 686 static void *__dma_alloc(struct device *dev, size_t size, dma_addr_t *handle, 665 687 gfp_t gfp, pgprot_t prot, bool is_coherent, ··· 739 619 u64 mask = get_coherent_dma_mask(dev); 740 620 struct page *page = NULL; 741 621 void *addr; 742 - bool want_vaddr; 622 + bool allowblock, cma; 623 + struct arm_dma_buffer *buf; 624 + struct arm_dma_alloc_args args = { 625 + .dev = dev, 626 + .size = PAGE_ALIGN(size), 627 + .gfp = gfp, 628 + .prot = prot, 629 + .caller = caller, 630 + .want_vaddr = !dma_get_attr(DMA_ATTR_NO_KERNEL_MAPPING, attrs), 631 + }; 743 632 744 633 #ifdef CONFIG_DMA_API_DEBUG 745 634 u64 limit = (mask + 1) & ~mask; ··· 762 633 if (!mask) 763 634 return NULL; 764 635 636 + buf = kzalloc(sizeof(*buf), gfp); 637 + if (!buf) 638 + return NULL; 639 + 765 640 if (mask < 0xffffffffULL) 766 641 gfp |= GFP_DMA; 767 642 ··· 777 644 * platform; see CONFIG_HUGETLBFS. 
778 645 */ 779 646 gfp &= ~(__GFP_COMP); 647 + args.gfp = gfp; 780 648 781 649 *handle = DMA_ERROR_CODE; 782 - size = PAGE_ALIGN(size); 783 - want_vaddr = !dma_get_attr(DMA_ATTR_NO_KERNEL_MAPPING, attrs); 650 + allowblock = gfpflags_allow_blocking(gfp); 651 + cma = allowblock ? dev_get_cma_area(dev) : false; 784 652 785 - if (nommu()) 786 - addr = __alloc_simple_buffer(dev, size, gfp, &page); 787 - else if (dev_get_cma_area(dev) && (gfp & __GFP_DIRECT_RECLAIM)) 788 - addr = __alloc_from_contiguous(dev, size, prot, &page, 789 - caller, want_vaddr); 790 - else if (is_coherent) 791 - addr = __alloc_simple_buffer(dev, size, gfp, &page); 792 - else if (!gfpflags_allow_blocking(gfp)) 793 - addr = __alloc_from_pool(size, &page); 653 + if (cma) 654 + buf->allocator = &cma_allocator; 655 + else if (nommu() || is_coherent) 656 + buf->allocator = &simple_allocator; 657 + else if (allowblock) 658 + buf->allocator = &remap_allocator; 794 659 else 795 - addr = __alloc_remap_buffer(dev, size, gfp, prot, &page, 796 - caller, want_vaddr); 660 + buf->allocator = &pool_allocator; 797 661 798 - if (page) 662 + addr = buf->allocator->alloc(&args, &page); 663 + 664 + if (page) { 665 + unsigned long flags; 666 + 799 667 *handle = pfn_to_dma(dev, page_to_pfn(page)); 668 + buf->virt = args.want_vaddr ? addr : page; 800 669 801 - return want_vaddr ? addr : page; 670 + spin_lock_irqsave(&arm_dma_bufs_lock, flags); 671 + list_add(&buf->list, &arm_dma_bufs); 672 + spin_unlock_irqrestore(&arm_dma_bufs_lock, flags); 673 + } else { 674 + kfree(buf); 675 + } 676 + 677 + return args.want_vaddr ? 
addr : page; 802 678 } 803 679 804 680 /* ··· 883 741 bool is_coherent) 884 742 { 885 743 struct page *page = pfn_to_page(dma_to_pfn(dev, handle)); 886 - bool want_vaddr = !dma_get_attr(DMA_ATTR_NO_KERNEL_MAPPING, attrs); 744 + struct arm_dma_buffer *buf; 745 + struct arm_dma_free_args args = { 746 + .dev = dev, 747 + .size = PAGE_ALIGN(size), 748 + .cpu_addr = cpu_addr, 749 + .page = page, 750 + .want_vaddr = !dma_get_attr(DMA_ATTR_NO_KERNEL_MAPPING, attrs), 751 + }; 887 752 888 - size = PAGE_ALIGN(size); 889 - 890 - if (nommu()) { 891 - __dma_free_buffer(page, size); 892 - } else if (!is_coherent && __free_from_pool(cpu_addr, size)) { 753 + buf = arm_dma_buffer_find(cpu_addr); 754 + if (WARN(!buf, "Freeing invalid buffer %p\n", cpu_addr)) 893 755 return; 894 - } else if (!dev_get_cma_area(dev)) { 895 - if (want_vaddr && !is_coherent) 896 - __dma_free_remap(cpu_addr, size); 897 - __dma_free_buffer(page, size); 898 - } else { 899 - /* 900 - * Non-atomic allocations cannot be freed with IRQs disabled 901 - */ 902 - WARN_ON(irqs_disabled()); 903 - __free_from_contiguous(dev, page, cpu_addr, size, want_vaddr); 904 - } 756 + 757 + buf->allocator->free(&args); 758 + kfree(buf); 905 759 } 906 760 907 761 void arm_dma_free(struct device *dev, size_t size, void *cpu_addr, ··· 1260 1122 spin_unlock_irqrestore(&mapping->lock, flags); 1261 1123 } 1262 1124 1125 + /* We'll try 2M, 1M, 64K, and finally 4K; array must end with 0! 
*/ 1126 + static const int iommu_order_array[] = { 9, 8, 4, 0 }; 1127 + 1263 1128 static struct page **__iommu_alloc_buffer(struct device *dev, size_t size, 1264 1129 gfp_t gfp, struct dma_attrs *attrs) 1265 1130 { ··· 1270 1129 int count = size >> PAGE_SHIFT; 1271 1130 int array_size = count * sizeof(struct page *); 1272 1131 int i = 0; 1132 + int order_idx = 0; 1273 1133 1274 1134 if (array_size <= PAGE_SIZE) 1275 1135 pages = kzalloc(array_size, GFP_KERNEL); ··· 1296 1154 return pages; 1297 1155 } 1298 1156 1157 + /* Go straight to 4K chunks if caller says it's OK. */ 1158 + if (dma_get_attr(DMA_ATTR_ALLOC_SINGLE_PAGES, attrs)) 1159 + order_idx = ARRAY_SIZE(iommu_order_array) - 1; 1160 + 1299 1161 /* 1300 1162 * IOMMU can map any pages, so himem can also be used here 1301 1163 */ ··· 1308 1162 while (count) { 1309 1163 int j, order; 1310 1164 1311 - for (order = __fls(count); order > 0; --order) { 1312 - /* 1313 - * We do not want OOM killer to be invoked as long 1314 - * as we can fall back to single pages, so we force 1315 - * __GFP_NORETRY for orders higher than zero. 1316 - */ 1317 - pages[i] = alloc_pages(gfp | __GFP_NORETRY, order); 1318 - if (pages[i]) 1319 - break; 1165 + order = iommu_order_array[order_idx]; 1166 + 1167 + /* Drop down when we get small */ 1168 + if (__fls(count) < order) { 1169 + order_idx++; 1170 + continue; 1320 1171 } 1321 1172 1322 - if (!pages[i]) { 1323 - /* 1324 - * Fall back to single page allocation. 1325 - * Might invoke OOM killer as last resort. 1326 - */ 1173 + if (order) { 1174 + /* See if it's easy to allocate a high-order chunk */ 1175 + pages[i] = alloc_pages(gfp | __GFP_NORETRY, order); 1176 + 1177 + /* Go down a notch at first sign of pressure */ 1178 + if (!pages[i]) { 1179 + order_idx++; 1180 + continue; 1181 + } 1182 + } else { 1327 1183 pages[i] = alloc_pages(gfp, 0); 1328 1184 if (!pages[i]) 1329 1185 goto error;
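The dma-mapping refactor above ("refactor to fix coherent+cma+gfp=0") replaces an if/else chain in `__dma_alloc()` with a table of alloc/free strategies, recording per buffer which strategy was used so the free path can replay that decision instead of re-deriving it. A stripped-down userspace sketch of the dispatch pattern, under the assumption of two stand-in strategies (the real code has simple/cma/remap/pool) and plain `malloc()` backing:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* Illustrative sketch, not the kernel API: choose an allocator once at
 * allocation time and record it in the buffer descriptor. */
struct allocator {
    const char *name;
    void *(*alloc)(size_t size);
    void (*release)(void *mem);
};

struct buffer {
    void *mem;
    const struct allocator *owner;
};

static void *heap_alloc(size_t size) { return malloc(size); }
static void heap_release(void *mem) { free(mem); }

/* Two stand-in strategies standing in for simple/cma/remap/pool. */
static const struct allocator blocking_allocator = {
    "blocking", heap_alloc, heap_release
};
static const struct allocator atomic_allocator = {
    "atomic", heap_alloc, heap_release
};

static struct buffer *buffer_alloc(size_t size, bool can_block)
{
    struct buffer *buf = malloc(sizeof(*buf));
    if (!buf)
        return NULL;
    /* The decision is made exactly once, at allocation time. */
    buf->owner = can_block ? &blocking_allocator : &atomic_allocator;
    buf->mem = buf->owner->alloc(size);
    return buf;
}

static void buffer_free(struct buffer *buf)
{
    buf->owner->release(buf->mem); /* dispatch to the recorded strategy */
    free(buf);
}
```

The companion "store buffer information" patch is what makes this work: without the recorded `owner`, the free path would have to guess how the buffer was allocated, which is exactly the bug being fixed.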
+1 -1
arch/arm/mm/idmap.c
··· 15 15 * page tables. 16 16 */ 17 17 pgd_t *idmap_pgd; 18 - phys_addr_t (*arch_virt_to_idmap) (unsigned long x); 18 + unsigned long (*arch_virt_to_idmap)(unsigned long x); 19 19 20 20 #ifdef CONFIG_ARM_LPAE 21 21 static void idmap_add_pmd(pud_t *pud, unsigned long addr, unsigned long end,
+13 -11
arch/arm/mm/init.c
··· 572 572 } 573 573 } 574 574 575 - #ifdef CONFIG_ARM_KERNMEM_PERMS 575 + #ifdef CONFIG_DEBUG_RODATA 576 576 struct section_perm { 577 + const char *name; 577 578 unsigned long start; 578 579 unsigned long end; 579 580 pmdval_t mask; ··· 582 581 pmdval_t clear; 583 582 }; 584 583 584 + /* First section-aligned location at or after __start_rodata. */ 585 + extern char __start_rodata_section_aligned[]; 586 + 585 587 static struct section_perm nx_perms[] = { 586 588 /* Make pages tables, etc before _stext RW (set NX). */ 587 589 { 590 + .name = "pre-text NX", 588 591 .start = PAGE_OFFSET, 589 592 .end = (unsigned long)_stext, 590 593 .mask = ~PMD_SECT_XN, ··· 596 591 }, 597 592 /* Make init RW (set NX). */ 598 593 { 594 + .name = "init NX", 599 595 .start = (unsigned long)__init_begin, 600 596 .end = (unsigned long)_sdata, 601 597 .mask = ~PMD_SECT_XN, 602 598 .prot = PMD_SECT_XN, 603 599 }, 604 - #ifdef CONFIG_DEBUG_RODATA 605 600 /* Make rodata NX (set RO in ro_perms below). */ 606 601 { 607 - .start = (unsigned long)__start_rodata, 602 + .name = "rodata NX", 603 + .start = (unsigned long)__start_rodata_section_aligned, 608 604 .end = (unsigned long)__init_begin, 609 605 .mask = ~PMD_SECT_XN, 610 606 .prot = PMD_SECT_XN, 611 607 }, 612 - #endif 613 608 }; 614 609 615 - #ifdef CONFIG_DEBUG_RODATA 616 610 static struct section_perm ro_perms[] = { 617 611 /* Make kernel code and rodata RX (set RO). 
*/ 618 612 { 613 + .name = "text/rodata RO", 619 614 .start = (unsigned long)_stext, 620 615 .end = (unsigned long)__init_begin, 621 616 #ifdef CONFIG_ARM_LPAE ··· 628 623 #endif 629 624 }, 630 625 }; 631 - #endif 632 626 633 627 /* 634 628 * Updates section permissions only for the current mm (sections are ··· 674 670 for (i = 0; i < n; i++) { 675 671 if (!IS_ALIGNED(perms[i].start, SECTION_SIZE) || 676 672 !IS_ALIGNED(perms[i].end, SECTION_SIZE)) { 677 - pr_err("BUG: section %lx-%lx not aligned to %lx\n", 678 - perms[i].start, perms[i].end, 673 + pr_err("BUG: %s section %lx-%lx not aligned to %lx\n", 674 + perms[i].name, perms[i].start, perms[i].end, 679 675 SECTION_SIZE); 680 676 continue; 681 677 } ··· 716 712 stop_machine(__fix_kernmem_perms, NULL, NULL); 717 713 } 718 714 719 - #ifdef CONFIG_DEBUG_RODATA 720 715 int __mark_rodata_ro(void *unused) 721 716 { 722 717 update_sections_early(ro_perms, ARRAY_SIZE(ro_perms)); ··· 738 735 set_section_perms(ro_perms, ARRAY_SIZE(ro_perms), true, 739 736 current->active_mm); 740 737 } 741 - #endif /* CONFIG_DEBUG_RODATA */ 742 738 743 739 #else 744 740 static inline void fix_kernmem_perms(void) { } 745 - #endif /* CONFIG_ARM_KERNMEM_PERMS */ 741 + #endif /* CONFIG_DEBUG_RODATA */ 746 742 747 743 void free_tcmmem(void) 748 744 {
+6 -2
arch/arm/mm/mmu.c
··· 1253 1253 1254 1254 #ifdef CONFIG_XIP_KERNEL 1255 1255 /* The XIP kernel is mapped in the module area -- skip over it */ 1256 - addr = ((unsigned long)_etext + PMD_SIZE - 1) & PMD_MASK; 1256 + addr = ((unsigned long)_exiprom + PMD_SIZE - 1) & PMD_MASK; 1257 1257 #endif 1258 1258 for ( ; addr < PAGE_OFFSET; addr += PMD_SIZE) 1259 1259 pmd_clear(pmd_off_k(addr)); ··· 1335 1335 #ifdef CONFIG_XIP_KERNEL 1336 1336 map.pfn = __phys_to_pfn(CONFIG_XIP_PHYS_ADDR & SECTION_MASK); 1337 1337 map.virtual = MODULES_VADDR; 1338 - map.length = ((unsigned long)_etext - map.virtual + ~SECTION_MASK) & SECTION_MASK; 1338 + map.length = ((unsigned long)_exiprom - map.virtual + ~SECTION_MASK) & SECTION_MASK; 1339 1339 map.type = MT_ROM; 1340 1340 create_mapping(&map); 1341 1341 #endif ··· 1426 1426 static void __init map_lowmem(void) 1427 1427 { 1428 1428 struct memblock_region *reg; 1429 + #ifdef CONFIG_XIP_KERNEL 1430 + phys_addr_t kernel_x_start = round_down(__pa(_sdata), SECTION_SIZE); 1431 + #else 1429 1432 phys_addr_t kernel_x_start = round_down(__pa(_stext), SECTION_SIZE); 1433 + #endif 1430 1434 phys_addr_t kernel_x_end = round_up(__pa(__init_end), SECTION_SIZE); 1431 1435 1432 1436 /* Map all the lowmem memory banks. */
+1 -1
arch/arm/mm/proc-v7.S
··· 487 487 488 488 .align 2 489 489 __v7_setup_stack_ptr: 490 - .word __v7_setup_stack - . 490 + .word PHYS_RELATIVE(__v7_setup_stack, .) 491 491 ENDPROC(__v7_setup) 492 492 493 493 .bss
+13
arch/arm/plat-orion/time.c
··· 18 18 #include <linux/irq.h> 19 19 #include <linux/sched_clock.h> 20 20 #include <plat/time.h> 21 + #include <asm/delay.h> 21 22 22 23 /* 23 24 * MBus bridge block registers. ··· 189 188 timer_base = _timer_base; 190 189 } 191 190 191 + static unsigned long orion_delay_timer_read(void) 192 + { 193 + return ~readl(timer_base + TIMER0_VAL_OFF); 194 + } 195 + 196 + static struct delay_timer orion_delay_timer = { 197 + .read_current_timer = orion_delay_timer_read, 198 + }; 199 + 192 200 void __init 193 201 orion_time_init(void __iomem *_bridge_base, u32 _bridge_timer1_clr_mask, 194 202 unsigned int irq, unsigned int tclk) ··· 211 201 bridge_timer1_clr_mask = _bridge_timer1_clr_mask; 212 202 213 203 ticks_per_jiffy = (tclk + HZ/2) / HZ; 204 + 205 + orion_delay_timer.freq = tclk; 206 + register_current_timer_delay(&orion_delay_timer); 214 207 215 208 /* 216 209 * Set scale and timer for sched_clock.
+2 -97
arch/arm64/kernel/psci.c
··· 20 20 #include <linux/smp.h> 21 21 #include <linux/delay.h> 22 22 #include <linux/psci.h> 23 - #include <linux/slab.h> 24 23 25 24 #include <uapi/linux/psci.h> 26 25 ··· 27 28 #include <asm/cpu_ops.h> 28 29 #include <asm/errno.h> 29 30 #include <asm/smp_plat.h> 30 - #include <asm/suspend.h> 31 - 32 - static DEFINE_PER_CPU_READ_MOSTLY(u32 *, psci_power_state); 33 - 34 - static int __maybe_unused cpu_psci_cpu_init_idle(unsigned int cpu) 35 - { 36 - int i, ret, count = 0; 37 - u32 *psci_states; 38 - struct device_node *state_node, *cpu_node; 39 - 40 - cpu_node = of_get_cpu_node(cpu, NULL); 41 - if (!cpu_node) 42 - return -ENODEV; 43 - 44 - /* 45 - * If the PSCI cpu_suspend function hook has not been initialized 46 - * idle states must not be enabled, so bail out 47 - */ 48 - if (!psci_ops.cpu_suspend) 49 - return -EOPNOTSUPP; 50 - 51 - /* Count idle states */ 52 - while ((state_node = of_parse_phandle(cpu_node, "cpu-idle-states", 53 - count))) { 54 - count++; 55 - of_node_put(state_node); 56 - } 57 - 58 - if (!count) 59 - return -ENODEV; 60 - 61 - psci_states = kcalloc(count, sizeof(*psci_states), GFP_KERNEL); 62 - if (!psci_states) 63 - return -ENOMEM; 64 - 65 - for (i = 0; i < count; i++) { 66 - u32 state; 67 - 68 - state_node = of_parse_phandle(cpu_node, "cpu-idle-states", i); 69 - 70 - ret = of_property_read_u32(state_node, 71 - "arm,psci-suspend-param", 72 - &state); 73 - if (ret) { 74 - pr_warn(" * %s missing arm,psci-suspend-param property\n", 75 - state_node->full_name); 76 - of_node_put(state_node); 77 - goto free_mem; 78 - } 79 - 80 - of_node_put(state_node); 81 - pr_debug("psci-power-state %#x index %d\n", state, i); 82 - if (!psci_power_state_is_valid(state)) { 83 - pr_warn("Invalid PSCI power state %#x\n", state); 84 - ret = -EINVAL; 85 - goto free_mem; 86 - } 87 - psci_states[i] = state; 88 - } 89 - /* Idle states parsed correctly, initialize per-cpu pointer */ 90 - per_cpu(psci_power_state, cpu) = psci_states; 91 - return 0; 92 - 93 - free_mem: 94 - 
kfree(psci_states); 95 - return ret; 96 - } 97 31 98 32 static int __init cpu_psci_cpu_init(unsigned int cpu) 99 33 { ··· 110 178 } 111 179 #endif 112 180 113 - static int psci_suspend_finisher(unsigned long index) 114 - { 115 - u32 *state = __this_cpu_read(psci_power_state); 116 - 117 - return psci_ops.cpu_suspend(state[index - 1], 118 - virt_to_phys(cpu_resume)); 119 - } 120 - 121 - static int __maybe_unused cpu_psci_cpu_suspend(unsigned long index) 122 - { 123 - int ret; 124 - u32 *state = __this_cpu_read(psci_power_state); 125 - /* 126 - * idle state index 0 corresponds to wfi, should never be called 127 - * from the cpu_suspend operations 128 - */ 129 - if (WARN_ON_ONCE(!index)) 130 - return -EINVAL; 131 - 132 - if (!psci_power_state_loses_context(state[index - 1])) 133 - ret = psci_ops.cpu_suspend(state[index - 1], 0); 134 - else 135 - ret = cpu_suspend(index, psci_suspend_finisher); 136 - 137 - return ret; 138 - } 139 - 140 181 const struct cpu_operations cpu_psci_ops = { 141 182 .name = "psci", 142 183 #ifdef CONFIG_CPU_IDLE 143 - .cpu_init_idle = cpu_psci_cpu_init_idle, 144 - .cpu_suspend = cpu_psci_cpu_suspend, 184 + .cpu_init_idle = psci_cpu_init_idle, 185 + .cpu_suspend = psci_cpu_suspend_enter, 145 186 #endif 146 187 .cpu_init = cpu_psci_cpu_init, 147 188 .cpu_prepare = cpu_psci_cpu_prepare,
+22 -2
drivers/base/dd.c
··· 560 560 struct device_attach_data *data = _data; 561 561 struct device *dev = data->dev; 562 562 bool async_allowed; 563 + int ret; 563 564 564 565 /* 565 566 * Check if device has already been claimed. This may ··· 571 570 if (dev->driver) 572 571 return -EBUSY; 573 572 574 - if (!driver_match_device(drv, dev)) 573 + ret = driver_match_device(drv, dev); 574 + if (ret == 0) { 575 + /* no match */ 575 576 return 0; 577 + } else if (ret == -EPROBE_DEFER) { 578 + dev_dbg(dev, "Device match requests probe deferral\n"); 579 + driver_deferred_probe_add(dev); 580 + } else if (ret < 0) { 581 + dev_dbg(dev, "Bus failed to match device: %d", ret); 582 + return ret; 583 + } /* ret > 0 means positive match */ 576 584 577 585 async_allowed = driver_allows_async_probing(drv); 578 586 ··· 701 691 static int __driver_attach(struct device *dev, void *data) 702 692 { 703 693 struct device_driver *drv = data; 694 + int ret; 704 695 705 696 /* 706 697 * Lock device and try to bind to it. We drop the error ··· 713 702 * is an error. 714 703 */ 715 704 716 - if (!driver_match_device(drv, dev)) 705 + ret = driver_match_device(drv, dev); 706 + if (ret == 0) { 707 + /* no match */ 717 708 return 0; 709 + } else if (ret == -EPROBE_DEFER) { 710 + dev_dbg(dev, "Device match requests probe deferral\n"); 711 + driver_deferred_probe_add(dev); 712 + } else if (ret < 0) { 713 + dev_dbg(dev, "Bus failed to match device: %d", ret); 714 + return ret; 715 + } /* ret > 0 means positive match */ 718 716 719 717 if (dev->parent) /* Needed for USB */ 720 718 device_lock(dev->parent);
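The dd.c hunk above introduces a tri-state contract for bus match functions: 0 means "no match", a positive value means "match", and a negative errno is an error, with -EPROBE_DEFER asking the core to retry the device later. A small sketch of that dispatch, assuming a hypothetical `classify()` helper (EPROBE_DEFER is kernel-internal, so it is defined locally here):

```c
#include <assert.h>
#include <errno.h>

#ifndef EPROBE_DEFER
#define EPROBE_DEFER 517 /* kernel-internal errno, not in userspace headers */
#endif

/* Illustrative only: the four outcomes the driver core now
 * distinguishes when it calls driver_match_device(). */
enum match_action { NO_MATCH, MATCHED, DEFER_PROBE, MATCH_ERROR };

static enum match_action classify(int ret)
{
    if (ret == 0)
        return NO_MATCH;        /* skip this driver quietly */
    if (ret == -EPROBE_DEFER)
        return DEFER_PROBE;     /* queue the device for a later retry */
    if (ret < 0)
        return MATCH_ERROR;     /* propagate the failure */
    return MATCHED;             /* proceed to probe */
}
```

Before this change, callers collapsed everything to a boolean, so a match function had no way to report "I can't decide yet".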
+25 -6
drivers/clk/clkdev.c
··· 353 353 } 354 354 EXPORT_SYMBOL(clkdev_drop); 355 355 356 + static struct clk_lookup *__clk_register_clkdev(struct clk_hw *hw, 357 + const char *con_id, 358 + const char *dev_id, ...) 359 + { 360 + struct clk_lookup *cl; 361 + va_list ap; 362 + 363 + va_start(ap, dev_id); 364 + cl = vclkdev_create(hw, con_id, dev_id, ap); 365 + va_end(ap); 366 + 367 + return cl; 368 + } 369 + 356 370 /** 357 371 * clk_register_clkdev - register one clock lookup for a struct clk 358 372 * @clk: struct clk to associate with all clk_lookups 359 373 * @con_id: connection ID string on device 360 - * @dev_id: format string describing device name 374 + * @dev_id: string describing device name 361 375 * 362 376 * con_id or dev_id may be NULL as a wildcard, just as in the rest of 363 377 * clkdev. ··· 382 368 * after clk_register(). 383 369 */ 384 370 int clk_register_clkdev(struct clk *clk, const char *con_id, 385 - const char *dev_fmt, ...) 371 + const char *dev_id) 386 372 { 387 373 struct clk_lookup *cl; 388 - va_list ap; 389 374 390 375 if (IS_ERR(clk)) 391 376 return PTR_ERR(clk); 392 377 393 - va_start(ap, dev_fmt); 394 - cl = vclkdev_create(__clk_get_hw(clk), con_id, dev_fmt, ap); 395 - va_end(ap); 378 + /* 379 + * Since dev_id can be NULL, and NULL is handled specially, we must 380 + * pass it as either a NULL format string, or with "%s". 381 + */ 382 + if (dev_id) 383 + cl = __clk_register_clkdev(__clk_get_hw(clk), con_id, "%s", 384 + dev_id); 385 + else 386 + cl = __clk_register_clkdev(__clk_get_hw(clk), con_id, NULL); 396 387 397 388 return cl ? 0 : -ENOMEM; 398 389 }
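The clkdev hunk above stops treating `dev_id` as a printf-style format string: any '%' in a caller-supplied device name would otherwise be interpreted by the vsnprintf() underneath `vclkdev_create()`, so the new wrapper routes the string through a fixed "%s" (or passes NULL through untouched). A userspace sketch of the safe path; `copy_dev_id()` is an illustrative stand-in, not the kernel function:

```c
#include <assert.h>
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

/* Variadic helper mirroring the "%s" trick: the caller's string arrives
 * as an argument, never as the format, so it is copied verbatim. */
static void copy_dev_id(char *out, size_t n, const char *fmt, ...)
{
    va_list ap;

    va_start(ap, fmt);
    vsnprintf(out, n, fmt, ap);
    va_end(ap);
}
```

Passing a name like "serial8250.%d" directly as the format would instead consume a nonexistent argument; through "%s" it survives intact.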
+120
drivers/firmware/psci.c
··· 14 14 #define pr_fmt(fmt) "psci: " fmt 15 15 16 16 #include <linux/arm-smccc.h> 17 + #include <linux/cpuidle.h> 17 18 #include <linux/errno.h> 18 19 #include <linux/linkage.h> 19 20 #include <linux/of.h> ··· 22 21 #include <linux/printk.h> 23 22 #include <linux/psci.h> 24 23 #include <linux/reboot.h> 24 + #include <linux/slab.h> 25 25 #include <linux/suspend.h> 26 26 27 27 #include <uapi/linux/psci.h> 28 28 29 + #include <asm/cpuidle.h> 29 30 #include <asm/cputype.h> 30 31 #include <asm/system_misc.h> 31 32 #include <asm/smp_plat.h> ··· 246 243 return invoke_psci_fn(PSCI_1_0_FN_PSCI_FEATURES, 247 244 psci_func_id, 0, 0); 248 245 } 246 + 247 + #ifdef CONFIG_CPU_IDLE 248 + static DEFINE_PER_CPU_READ_MOSTLY(u32 *, psci_power_state); 249 + 250 + static int psci_dt_cpu_init_idle(struct device_node *cpu_node, int cpu) 251 + { 252 + int i, ret, count = 0; 253 + u32 *psci_states; 254 + struct device_node *state_node; 255 + 256 + /* 257 + * If the PSCI cpu_suspend function hook has not been initialized 258 + * idle states must not be enabled, so bail out 259 + */ 260 + if (!psci_ops.cpu_suspend) 261 + return -EOPNOTSUPP; 262 + 263 + /* Count idle states */ 264 + while ((state_node = of_parse_phandle(cpu_node, "cpu-idle-states", 265 + count))) { 266 + count++; 267 + of_node_put(state_node); 268 + } 269 + 270 + if (!count) 271 + return -ENODEV; 272 + 273 + psci_states = kcalloc(count, sizeof(*psci_states), GFP_KERNEL); 274 + if (!psci_states) 275 + return -ENOMEM; 276 + 277 + for (i = 0; i < count; i++) { 278 + u32 state; 279 + 280 + state_node = of_parse_phandle(cpu_node, "cpu-idle-states", i); 281 + 282 + ret = of_property_read_u32(state_node, 283 + "arm,psci-suspend-param", 284 + &state); 285 + if (ret) { 286 + pr_warn(" * %s missing arm,psci-suspend-param property\n", 287 + state_node->full_name); 288 + of_node_put(state_node); 289 + goto free_mem; 290 + } 291 + 292 + of_node_put(state_node); 293 + pr_debug("psci-power-state %#x index %d\n", state, i); 294 + if 
(!psci_power_state_is_valid(state)) { 295 + pr_warn("Invalid PSCI power state %#x\n", state); 296 + ret = -EINVAL; 297 + goto free_mem; 298 + } 299 + psci_states[i] = state; 300 + } 301 + /* Idle states parsed correctly, initialize per-cpu pointer */ 302 + per_cpu(psci_power_state, cpu) = psci_states; 303 + return 0; 304 + 305 + free_mem: 306 + kfree(psci_states); 307 + return ret; 308 + } 309 + 310 + int psci_cpu_init_idle(unsigned int cpu) 311 + { 312 + struct device_node *cpu_node; 313 + int ret; 314 + 315 + cpu_node = of_get_cpu_node(cpu, NULL); 316 + if (!cpu_node) 317 + return -ENODEV; 318 + 319 + ret = psci_dt_cpu_init_idle(cpu_node, cpu); 320 + 321 + of_node_put(cpu_node); 322 + 323 + return ret; 324 + } 325 + 326 + static int psci_suspend_finisher(unsigned long index) 327 + { 328 + u32 *state = __this_cpu_read(psci_power_state); 329 + 330 + return psci_ops.cpu_suspend(state[index - 1], 331 + virt_to_phys(cpu_resume)); 332 + } 333 + 334 + int psci_cpu_suspend_enter(unsigned long index) 335 + { 336 + int ret; 337 + u32 *state = __this_cpu_read(psci_power_state); 338 + /* 339 + * idle state index 0 corresponds to wfi, should never be called 340 + * from the cpu_suspend operations 341 + */ 342 + if (WARN_ON_ONCE(!index)) 343 + return -EINVAL; 344 + 345 + if (!psci_power_state_loses_context(state[index - 1])) 346 + ret = psci_ops.cpu_suspend(state[index - 1], 0); 347 + else 348 + ret = cpu_suspend(index, psci_suspend_finisher); 349 + 350 + return ret; 351 + } 352 + 353 + /* ARM specific CPU idle operations */ 354 + #ifdef CONFIG_ARM 355 + static struct cpuidle_ops psci_cpuidle_ops __initdata = { 356 + .suspend = psci_cpu_suspend_enter, 357 + .init = psci_dt_cpu_init_idle, 358 + }; 359 + 360 + CPUIDLE_METHOD_OF_DECLARE(psci, "arm,psci", &psci_cpuidle_ops); 361 + #endif 362 + #endif 249 363 250 364 static int psci_system_suspend(unsigned long unused) 251 365 {
+22 -11
drivers/media/v4l2-core/videobuf2-dma-contig.c
···

 struct vb2_dc_conf {
 	struct device		*dev;
+	struct dma_attrs	attrs;
 };

 struct vb2_dc_buf {
 	struct device			*dev;
 	void				*vaddr;
 	unsigned long			size;
+	void				*cookie;
 	dma_addr_t			dma_addr;
+	struct dma_attrs		attrs;
 	enum dma_data_direction		dma_dir;
 	struct sg_table			*dma_sgt;
 	struct frame_vector		*vec;
···
 		sg_free_table(buf->sgt_base);
 		kfree(buf->sgt_base);
 	}
-	dma_free_coherent(buf->dev, buf->size, buf->vaddr, buf->dma_addr);
+	dma_free_attrs(buf->dev, buf->size, buf->cookie, buf->dma_addr,
+		       &buf->attrs);
 	put_device(buf->dev);
 	kfree(buf);
 }
···
 	if (!buf)
 		return ERR_PTR(-ENOMEM);

-	buf->vaddr = dma_alloc_coherent(dev, size, &buf->dma_addr,
-					GFP_KERNEL | gfp_flags);
-	if (!buf->vaddr) {
+	buf->attrs = conf->attrs;
+	buf->cookie = dma_alloc_attrs(dev, size, &buf->dma_addr,
+					GFP_KERNEL | gfp_flags, &buf->attrs);
+	if (!buf->cookie) {
 		dev_err(dev, "dma_alloc_coherent of size %ld failed\n", size);
 		kfree(buf);
 		return ERR_PTR(-ENOMEM);
 	}
+
+	if (!dma_get_attr(DMA_ATTR_NO_KERNEL_MAPPING, &buf->attrs))
+		buf->vaddr = buf->cookie;

 	/* Prevent the device from being released while the buffer is used */
 	buf->dev = get_device(dev);
···
 	 */
 	vma->vm_pgoff = 0;

-	ret = dma_mmap_coherent(buf->dev, vma, buf->vaddr,
-		buf->dma_addr, buf->size);
+	ret = dma_mmap_attrs(buf->dev, vma, buf->cookie,
+		buf->dma_addr, buf->size, &buf->attrs);

 	if (ret) {
 		pr_err("Remapping memory failed, error: %d\n", ret);
···
 {
 	struct vb2_dc_buf *buf = dbuf->priv;

-	return buf->vaddr + pgnum * PAGE_SIZE;
+	return buf->vaddr ? buf->vaddr + pgnum * PAGE_SIZE : NULL;
 }

 static void *vb2_dc_dmabuf_ops_vmap(struct dma_buf *dbuf)
···
 		return NULL;
 	}

-	ret = dma_get_sgtable(buf->dev, sgt, buf->vaddr, buf->dma_addr,
-		buf->size);
+	ret = dma_get_sgtable_attrs(buf->dev, sgt, buf->cookie, buf->dma_addr,
+		buf->size, &buf->attrs);
 	if (ret < 0) {
 		dev_err(buf->dev, "failed to get scatterlist from DMA API\n");
 		kfree(sgt);
···
 };
 EXPORT_SYMBOL_GPL(vb2_dma_contig_memops);

-void *vb2_dma_contig_init_ctx(struct device *dev)
+void *vb2_dma_contig_init_ctx_attrs(struct device *dev,
+				    struct dma_attrs *attrs)
 {
 	struct vb2_dc_conf *conf;

···
 		return ERR_PTR(-ENOMEM);

 	conf->dev = dev;
+	if (attrs)
+		conf->attrs = *attrs;

 	return conf;
 }
-EXPORT_SYMBOL_GPL(vb2_dma_contig_init_ctx);
+EXPORT_SYMBOL_GPL(vb2_dma_contig_init_ctx_attrs);

 void vb2_dma_contig_cleanup_ctx(void *alloc_ctx)
 {
+1 -1
drivers/nvdimm/bus.c
···
 {
 	struct nd_device_driver *nd_drv = to_nd_device_driver(drv);

-	return test_bit(to_nd_device_type(dev), &nd_drv->type);
+	return !!test_bit(to_nd_device_type(dev), &nd_drv->type);
 }

 static struct module *to_bus_provider(struct device *dev)
+1 -2
include/linux/clkdev.h
···
 void clkdev_add_table(struct clk_lookup *, size_t);
 int clk_add_alias(const char *, const char *, const char *, struct device *);

-int clk_register_clkdev(struct clk *, const char *, const char *, ...)
-	__printf(3, 4);
+int clk_register_clkdev(struct clk *, const char *, const char *);
 int clk_register_clkdevs(struct clk *, struct clk_lookup *, size_t);

 #ifdef CONFIG_COMMON_CLK
+5 -2
include/linux/device.h
···
  * @dev_groups:	Default attributes of the devices on the bus.
  * @drv_groups:	Default attributes of the device drivers on the bus.
  * @match:	Called, perhaps multiple times, whenever a new device or driver
- *		is added for this bus. It should return a nonzero value if the
- *		given device can be handled by the given driver.
+ *		is added for this bus. It should return a positive value if the
+ *		given device can be handled by the given driver and zero
+ *		otherwise. It may also return error code if determining that
+ *		the driver supports the device is not possible. In case of
+ *		-EPROBE_DEFER it will queue the device for deferred probing.
  * @uevent:	Called when a device is added, removed, or a few other things
  *		that generate uevents to add the environment variables.
  * @probe:	Called when a new device or driver add to this bus, and callback
+1
include/linux/dma-attrs.h
···
 	DMA_ATTR_NO_KERNEL_MAPPING,
 	DMA_ATTR_SKIP_CPU_SYNC,
 	DMA_ATTR_FORCE_CONTIGUOUS,
+	DMA_ATTR_ALLOC_SINGLE_PAGES,
 	DMA_ATTR_MAX,
 };

+3
include/linux/psci.h
···
 bool psci_power_state_loses_context(u32 state);
 bool psci_power_state_is_valid(u32 state);

+int psci_cpu_init_idle(unsigned int cpu);
+int psci_cpu_suspend_enter(unsigned long index);
+
 struct psci_operations {
 	int (*cpu_suspend)(u32 state, unsigned long entry_point);
 	int (*cpu_off)(u32 state);
+10 -1
include/media/videobuf2-dma-contig.h
···
 #include <media/videobuf2-v4l2.h>
 #include <linux/dma-mapping.h>

+struct dma_attrs;
+
 static inline dma_addr_t
 vb2_dma_contig_plane_dma_addr(struct vb2_buffer *vb, unsigned int plane_no)
 {
···
 	return *addr;
 }

-void *vb2_dma_contig_init_ctx(struct device *dev);
+void *vb2_dma_contig_init_ctx_attrs(struct device *dev,
+				    struct dma_attrs *attrs);
+
+static inline void *vb2_dma_contig_init_ctx(struct device *dev)
+{
+	return vb2_dma_contig_init_ctx_attrs(dev, NULL);
+}
+
 void vb2_dma_contig_cleanup_ctx(void *alloc_ctx);

 extern const struct vb2_mem_ops vb2_dma_contig_memops;