Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Conflicts:
drivers/net/ethernet/freescale/fec_main.c
drivers/net/ethernet/renesas/sh_eth.c
net/ipv4/gre.c

The GRE conflict is between a bug fix (kfree_skb --> kfree_skb_list)
and the splitting of the gre.c code into separate files.

The FEC conflict was between two sets of changes, both adding ethtool
support code in an "!CONFIG_M5272" CPP-protected block.

Finally, the sh_eth.c conflict was between one commit adding bits to
the .eesr_err_check mask while another commit removed the
.tx_error_check member and its assignments.

Signed-off-by: David S. Miller <davem@davemloft.net>

+2182 -1277
+23 -14
Documentation/DocBook/media/v4l/dev-codec.xml
··· 1 1 <title>Codec Interface</title> 2 2 3 - <note> 4 - <title>Suspended</title> 5 - 6 - <para>This interface has been be suspended from the V4L2 API 7 - implemented in Linux 2.6 until we have more experience with codec 8 - device interfaces.</para> 9 - </note> 10 - 11 3 <para>A V4L2 codec can compress, decompress, transform, or otherwise 12 - convert video data from one format into another format, in memory. 13 - Applications send data to be converted to the driver through a 14 - &func-write; call, and receive the converted data through a 15 - &func-read; call. For efficiency a driver may also support streaming 16 - I/O.</para> 4 + convert video data from one format into another format, in memory. Typically 5 + such devices are memory-to-memory devices (i.e. devices with the 6 + <constant>V4L2_CAP_VIDEO_M2M</constant> or <constant>V4L2_CAP_VIDEO_M2M_MPLANE</constant> 7 + capability set). 8 + </para> 17 9 18 - <para>[to do]</para> 10 + <para>A memory-to-memory video node acts just like a normal video node, but it 11 + supports both output (sending frames from memory to the codec hardware) and 12 + capture (receiving the processed frames from the codec hardware into memory) 13 + stream I/O. An application will have to setup the stream 14 + I/O for both sides and finally call &VIDIOC-STREAMON; for both capture and output 15 + to start the codec.</para> 16 + 17 + <para>Video compression codecs use the MPEG controls to setup their codec parameters 18 + (note that the MPEG controls actually support many more codecs than just MPEG). 19 + See <xref linkend="mpeg-controls"></xref>.</para> 20 + 21 + <para>Memory-to-memory devices can often be used as a shared resource: you can 22 + open the video node multiple times, each application setting up their own codec properties 23 + that are local to the file handle, and each can use it independently from the others. 
24 + The driver will arbitrate access to the codec and reprogram it whenever another file 25 + handler gets access. This is different from the usual video node behavior where the video properties 26 + are global to the device (i.e. changing something through one file handle is visible 27 + through another file handle).</para>
+1 -1
Documentation/DocBook/media/v4l/v4l2.xml
··· 493 493 </partinfo> 494 494 495 495 <title>Video for Linux Two API Specification</title> 496 - <subtitle>Revision 3.9</subtitle> 496 + <subtitle>Revision 3.10</subtitle> 497 497 498 498 <chapter id="common"> 499 499 &sub-common;
+1 -1
Documentation/devicetree/bindings/media/exynos-fimc-lite.txt
··· 2 2 3 3 Required properties: 4 4 5 - - compatible : should be "samsung,exynos4212-fimc" for Exynos4212 and 5 + - compatible : should be "samsung,exynos4212-fimc-lite" for Exynos4212 and 6 6 Exynos4412 SoCs; 7 7 - reg : physical base address and size of the device memory mapped 8 8 registers;
+2 -2
Documentation/networking/ip-sysctl.txt
··· 420 420 for a passive TCP connection will happen after 63seconds. 421 421 422 422 tcp_syncookies - BOOLEAN 423 - Only valid when the kernel was compiled with CONFIG_SYNCOOKIES 423 + Only valid when the kernel was compiled with CONFIG_SYN_COOKIES 424 424 Send out syncookies when the syn backlog queue of a socket 425 425 overflows. This is to prevent against the common 'SYN flood attack' 426 - Default: FALSE 426 + Default: 1 427 427 428 428 Note, that syncookies is fallback facility. 429 429 It MUST NOT be used to help highly loaded servers to stand
+3
Documentation/sound/alsa/HD-Audio-Models.txt
··· 29 29 alc271-dmic Enable ALC271X digital mic workaround 30 30 inv-dmic Inverted internal mic workaround 31 31 lenovo-dock Enables docking station I/O for some Lenovos 32 + dell-headset-multi Headset jack, which can also be used as mic-in 33 + dell-headset-dock Headset jack (without mic-in), and also dock I/O 32 34 33 35 ALC662/663/272 34 36 ============== ··· 44 42 asus-mode7 ASUS 45 43 asus-mode8 ASUS 46 44 inv-dmic Inverted internal mic workaround 45 + dell-headset-multi Headset jack, which can also be used as mic-in 47 46 48 47 ALC680 49 48 ======
+1 -1
MAINTAINERS
··· 3225 3225 3226 3226 FCOE SUBSYSTEM (libfc, libfcoe, fcoe) 3227 3227 M: Robert Love <robert.w.love@intel.com> 3228 - L: devel@open-fcoe.org 3228 + L: fcoe-devel@open-fcoe.org 3229 3229 W: www.Open-FCoE.org 3230 3230 S: Supported 3231 3231 F: drivers/scsi/libfc/
+1 -1
Makefile
··· 1 1 VERSION = 3 2 2 PATCHLEVEL = 10 3 3 SUBLEVEL = 0 4 - EXTRAVERSION = -rc6 4 + EXTRAVERSION = 5 5 NAME = Unicycling Gorilla 6 6 7 7 # *DOCUMENTATION*
+25 -1
arch/arm/Kconfig
··· 1087 1087 source "arch/arm/Kconfig-nommu" 1088 1088 endif 1089 1089 1090 + config PJ4B_ERRATA_4742 1091 + bool "PJ4B Errata 4742: IDLE Wake Up Commands can Cause the CPU Core to Cease Operation" 1092 + depends on CPU_PJ4B && MACH_ARMADA_370 1093 + default y 1094 + help 1095 + When coming out of either a Wait for Interrupt (WFI) or a Wait for 1096 + Event (WFE) IDLE states, a specific timing sensitivity exists between 1097 + the retiring WFI/WFE instructions and the newly issued subsequent 1098 + instructions. This sensitivity can result in a CPU hang scenario. 1099 + Workaround: 1100 + The software must insert either a Data Synchronization Barrier (DSB) 1101 + or Data Memory Barrier (DMB) command immediately after the WFI/WFE 1102 + instruction 1103 + 1090 1104 config ARM_ERRATA_326103 1091 1105 bool "ARM errata: FSR write bit incorrect on a SWP to read-only memory" 1092 1106 depends on CPU_V6 ··· 1202 1188 both performing to the same memory location. This functionality 1203 1189 is not correctly implemented in PL310 as clean lines are not 1204 1190 invalidated as a result of these operations. 1191 + 1192 + config ARM_ERRATA_643719 1193 + bool "ARM errata: LoUIS bit field in CLIDR register is incorrect" 1194 + depends on CPU_V7 && SMP 1195 + help 1196 + This option enables the workaround for the 643719 Cortex-A9 (prior to 1197 + r1p0) erratum. On affected cores the LoUIS bit field of the CLIDR 1198 + register returns zero when it should return one. The workaround 1199 + corrects this value, ensuring cache maintenance operations which use 1200 + it behave as intended and avoiding data corruption. 
1205 1201 1206 1202 config ARM_ERRATA_720789 1207 1203 bool "ARM errata: TLBIASIDIS and TLBIMVAIS operations can broadcast a faulty ASID" ··· 2030 2006 2031 2007 config KEXEC 2032 2008 bool "Kexec system call (EXPERIMENTAL)" 2033 - depends on (!SMP || HOTPLUG_CPU) 2009 + depends on (!SMP || PM_SLEEP_SMP) 2034 2010 help 2035 2011 kexec is a system call that implements the ability to shutdown your 2036 2012 current kernel, and to start another kernel. It is like a reboot
+2 -1
arch/arm/boot/compressed/Makefile
··· 116 116 117 117 # Make sure files are removed during clean 118 118 extra-y += piggy.gzip piggy.lzo piggy.lzma piggy.xzkern \ 119 - lib1funcs.S ashldi3.S $(libfdt) $(libfdt_hdrs) 119 + lib1funcs.S ashldi3.S $(libfdt) $(libfdt_hdrs) \ 120 + hyp-stub.S 120 121 121 122 ifeq ($(CONFIG_FUNCTION_TRACER),y) 122 123 ORIG_CFLAGS := $(KBUILD_CFLAGS)
+1 -1
arch/arm/boot/dts/exynos5250-pinctrl.dtsi
··· 763 763 }; 764 764 }; 765 765 766 - pinctrl@03680000 { 766 + pinctrl@03860000 { 767 767 gpz: gpz { 768 768 gpio-controller; 769 769 #gpio-cells = <2>;
+2 -2
arch/arm/boot/dts/exynos5250.dtsi
··· 161 161 interrupts = <0 50 0>; 162 162 }; 163 163 164 - pinctrl_3: pinctrl@03680000 { 164 + pinctrl_3: pinctrl@03860000 { 165 165 compatible = "samsung,exynos5250-pinctrl"; 166 - reg = <0x0368000 0x1000>; 166 + reg = <0x03860000 0x1000>; 167 167 interrupts = <0 47 0>; 168 168 }; 169 169
+1 -3
arch/arm/include/asm/cacheflush.h
··· 320 320 } 321 321 322 322 #define ARCH_HAS_FLUSH_KERNEL_DCACHE_PAGE 323 - static inline void flush_kernel_dcache_page(struct page *page) 324 - { 325 - } 323 + extern void flush_kernel_dcache_page(struct page *); 326 324 327 325 #define flush_dcache_mmap_lock(mapping) \ 328 326 spin_lock_irq(&(mapping)->tree_lock)
+2
arch/arm/include/asm/cputype.h
··· 32 32 33 33 #define MPIDR_HWID_BITMASK 0xFFFFFF 34 34 35 + #define MPIDR_INVALID (~MPIDR_HWID_BITMASK) 36 + 35 37 #define MPIDR_LEVEL_BITS 8 36 38 #define MPIDR_LEVEL_MASK ((1 << MPIDR_LEVEL_BITS) - 1) 37 39
+9
arch/arm/include/asm/glue-proc.h
··· 230 230 # endif 231 231 #endif 232 232 233 + #ifdef CONFIG_CPU_PJ4B 234 + # ifdef CPU_NAME 235 + # undef MULTI_CPU 236 + # define MULTI_CPU 237 + # else 238 + # define CPU_NAME cpu_pj4b 239 + # endif 240 + #endif 241 + 233 242 #ifndef MULTI_CPU 234 243 #define cpu_proc_init __glue(CPU_NAME,_proc_init) 235 244 #define cpu_proc_fin __glue(CPU_NAME,_proc_fin)
+1 -1
arch/arm/include/asm/smp_plat.h
··· 49 49 /* 50 50 * Logical CPU mapping. 51 51 */ 52 - extern int __cpu_logical_map[]; 52 + extern u32 __cpu_logical_map[]; 53 53 #define cpu_logical_map(cpu) __cpu_logical_map[cpu] 54 54 /* 55 55 * Retrieve logical cpu index corresponding to a given MPIDR[23:0]
+7 -3
arch/arm/kernel/devtree.c
··· 82 82 u32 i, j, cpuidx = 1; 83 83 u32 mpidr = is_smp() ? read_cpuid_mpidr() & MPIDR_HWID_BITMASK : 0; 84 84 85 - u32 tmp_map[NR_CPUS] = { [0 ... NR_CPUS-1] = UINT_MAX }; 85 + u32 tmp_map[NR_CPUS] = { [0 ... NR_CPUS-1] = MPIDR_INVALID }; 86 86 bool bootcpu_valid = false; 87 87 cpus = of_find_node_by_path("/cpus"); 88 88 ··· 91 91 92 92 for_each_child_of_node(cpus, cpu) { 93 93 u32 hwid; 94 + 95 + if (of_node_cmp(cpu->type, "cpu")) 96 + continue; 94 97 95 98 pr_debug(" * %s...\n", cpu->full_name); 96 99 /* ··· 152 149 tmp_map[i] = hwid; 153 150 } 154 151 155 - if (WARN(!bootcpu_valid, "DT missing boot CPU MPIDR[23:0], " 156 - "fall back to default cpu_logical_map\n")) 152 + if (!bootcpu_valid) { 153 + pr_warn("DT missing boot CPU MPIDR[23:0], fall back to default cpu_logical_map\n"); 157 154 return; 155 + } 158 156 159 157 /* 160 158 * Since the boot CPU node contains proper data, and all nodes have
+4
arch/arm/kernel/machine_kexec.c
··· 134 134 unsigned long reboot_code_buffer_phys; 135 135 void *reboot_code_buffer; 136 136 137 + if (num_online_cpus() > 1) { 138 + pr_err("kexec: error: multiple CPUs still online\n"); 139 + return; 140 + } 137 141 138 142 page_list = image->head & PAGE_MASK; 139 143
+37 -6
arch/arm/kernel/process.c
··· 184 184 185 185 __setup("reboot=", reboot_setup); 186 186 187 + /* 188 + * Called by kexec, immediately prior to machine_kexec(). 189 + * 190 + * This must completely disable all secondary CPUs; simply causing those CPUs 191 + * to execute e.g. a RAM-based pin loop is not sufficient. This allows the 192 + * kexec'd kernel to use any and all RAM as it sees fit, without having to 193 + * avoid any code or data used by any SW CPU pin loop. The CPU hotplug 194 + * functionality embodied in disable_nonboot_cpus() to achieve this. 195 + */ 187 196 void machine_shutdown(void) 188 197 { 189 - #ifdef CONFIG_SMP 190 - smp_send_stop(); 191 - #endif 198 + disable_nonboot_cpus(); 192 199 } 193 200 201 + /* 202 + * Halting simply requires that the secondary CPUs stop performing any 203 + * activity (executing tasks, handling interrupts). smp_send_stop() 204 + * achieves this. 205 + */ 194 206 void machine_halt(void) 195 207 { 196 - machine_shutdown(); 208 + smp_send_stop(); 209 + 197 210 local_irq_disable(); 198 211 while (1); 199 212 } 200 213 214 + /* 215 + * Power-off simply requires that the secondary CPUs stop performing any 216 + * activity (executing tasks, handling interrupts). smp_send_stop() 217 + * achieves this. When the system power is turned off, it will take all CPUs 218 + * with it. 219 + */ 201 220 void machine_power_off(void) 202 221 { 203 - machine_shutdown(); 222 + smp_send_stop(); 223 + 204 224 if (pm_power_off) 205 225 pm_power_off(); 206 226 } 207 227 228 + /* 229 + * Restart requires that the secondary CPUs stop performing any activity 230 + * while the primary CPU resets the system. Systems with a single CPU can 231 + * use soft_restart() as their machine descriptor's .restart hook, since that 232 + * will cause the only available CPU to reset. Systems with multiple CPUs must 233 + * provide a HW restart implementation, to ensure that all CPUs reset at once. 
234 + * This is required so that any code running after reset on the primary CPU 235 + * doesn't have to co-ordinate with other CPUs to ensure they aren't still 236 + * executing pre-reset code, and using RAM that the primary CPU's code wishes 237 + * to use. Implementing such co-ordination would be essentially impossible. 238 + */ 208 239 void machine_restart(char *cmd) 209 240 { 210 - machine_shutdown(); 241 + smp_send_stop(); 211 242 212 243 arm_pm_restart(reboot_mode, cmd); 213 244
+1 -1
arch/arm/kernel/setup.c
··· 444 444 : "r14"); 445 445 } 446 446 447 - int __cpu_logical_map[NR_CPUS]; 447 + u32 __cpu_logical_map[NR_CPUS] = { [0 ... NR_CPUS-1] = MPIDR_INVALID }; 448 448 449 449 void __init smp_setup_processor_id(void) 450 450 {
-13
arch/arm/kernel/smp.c
··· 651 651 smp_cross_call(cpumask_of(cpu), IPI_RESCHEDULE); 652 652 } 653 653 654 - #ifdef CONFIG_HOTPLUG_CPU 655 - static void smp_kill_cpus(cpumask_t *mask) 656 - { 657 - unsigned int cpu; 658 - for_each_cpu(cpu, mask) 659 - platform_cpu_kill(cpu); 660 - } 661 - #else 662 - static void smp_kill_cpus(cpumask_t *mask) { } 663 - #endif 664 - 665 654 void smp_send_stop(void) 666 655 { 667 656 unsigned long timeout; ··· 668 679 669 680 if (num_online_cpus() > 1) 670 681 pr_warning("SMP: failed to stop secondary CPUs\n"); 671 - 672 - smp_kill_cpus(&mask); 673 682 } 674 683 675 684 /*
+8
arch/arm/mm/cache-v7.S
··· 92 92 mrc p15, 1, r0, c0, c0, 1 @ read clidr, r0 = clidr 93 93 ALT_SMP(ands r3, r0, #(7 << 21)) @ extract LoUIS from clidr 94 94 ALT_UP(ands r3, r0, #(7 << 27)) @ extract LoUU from clidr 95 + #ifdef CONFIG_ARM_ERRATA_643719 96 + ALT_SMP(mrceq p15, 0, r2, c0, c0, 0) @ read main ID register 97 + ALT_UP(moveq pc, lr) @ LoUU is zero, so nothing to do 98 + ldreq r1, =0x410fc090 @ ID of ARM Cortex A9 r0p? 99 + biceq r2, r2, #0x0000000f @ clear minor revision number 100 + teqeq r2, r1 @ test for errata affected core and if so... 101 + orreqs r3, #(1 << 21) @ fix LoUIS value (and set flags state to 'ne') 102 + #endif 95 103 ALT_SMP(mov r3, r3, lsr #20) @ r3 = LoUIS * 2 96 104 ALT_UP(mov r3, r3, lsr #26) @ r3 = LoUU * 2 97 105 moveq pc, lr @ return if level == 0
+33
arch/arm/mm/flush.c
··· 301 301 EXPORT_SYMBOL(flush_dcache_page); 302 302 303 303 /* 304 + * Ensure cache coherency for the kernel mapping of this page. We can 305 + * assume that the page is pinned via kmap. 306 + * 307 + * If the page only exists in the page cache and there are no user 308 + * space mappings, this is a no-op since the page was already marked 309 + * dirty at creation. Otherwise, we need to flush the dirty kernel 310 + * cache lines directly. 311 + */ 312 + void flush_kernel_dcache_page(struct page *page) 313 + { 314 + if (cache_is_vivt() || cache_is_vipt_aliasing()) { 315 + struct address_space *mapping; 316 + 317 + mapping = page_mapping(page); 318 + 319 + if (!mapping || mapping_mapped(mapping)) { 320 + void *addr; 321 + 322 + addr = page_address(page); 323 + /* 324 + * kmap_atomic() doesn't set the page virtual 325 + * address for highmem pages, and 326 + * kunmap_atomic() takes care of cache 327 + * flushing already. 328 + */ 329 + if (!IS_ENABLED(CONFIG_HIGHMEM) || addr) 330 + __cpuc_flush_dcache_area(addr, PAGE_SIZE); 331 + } 332 + } 333 + } 334 + EXPORT_SYMBOL(flush_kernel_dcache_page); 335 + 336 + /* 304 337 * Flush an anonymous page so that users of get_user_pages() 305 338 * can safely access the data. The expected sequence is: 306 339 *
+5 -3
arch/arm/mm/mmu.c
··· 616 616 } while (pte++, addr += PAGE_SIZE, addr != end); 617 617 } 618 618 619 - static void __init map_init_section(pmd_t *pmd, unsigned long addr, 619 + static void __init __map_init_section(pmd_t *pmd, unsigned long addr, 620 620 unsigned long end, phys_addr_t phys, 621 621 const struct mem_type *type) 622 622 { 623 + pmd_t *p = pmd; 624 + 623 625 #ifndef CONFIG_ARM_LPAE 624 626 /* 625 627 * In classic MMU format, puds and pmds are folded in to ··· 640 638 phys += SECTION_SIZE; 641 639 } while (pmd++, addr += SECTION_SIZE, addr != end); 642 640 643 - flush_pmd_entry(pmd); 641 + flush_pmd_entry(p); 644 642 } 645 643 646 644 static void __init alloc_init_pmd(pud_t *pud, unsigned long addr, ··· 663 661 */ 664 662 if (type->prot_sect && 665 663 ((addr | next | phys) & ~SECTION_MASK) == 0) { 666 - map_init_section(pmd, addr, next, phys, type); 664 + __map_init_section(pmd, addr, next, phys, type); 667 665 } else { 668 666 alloc_init_pte(pmd, addr, next, 669 667 __phys_to_pfn(phys), type);
+6
arch/arm/mm/nommu.c
··· 57 57 } 58 58 EXPORT_SYMBOL(flush_dcache_page); 59 59 60 + void flush_kernel_dcache_page(struct page *page) 61 + { 62 + __cpuc_flush_dcache_area(page_address(page), PAGE_SIZE); 63 + } 64 + EXPORT_SYMBOL(flush_kernel_dcache_page); 65 + 60 66 void copy_to_user_page(struct vm_area_struct *vma, struct page *page, 61 67 unsigned long uaddr, void *dst, const void *src, 62 68 unsigned long len)
-1
arch/arm/mm/proc-fa526.S
··· 81 81 */ 82 82 .align 4 83 83 ENTRY(cpu_fa526_do_idle) 84 - mcr p15, 0, r0, c7, c0, 4 @ Wait for interrupt 85 84 mov pc, lr 86 85 87 86
+5
arch/arm/mm/proc-macros.S
··· 333 333 .endif 334 334 .size \name\()_tlb_fns, . - \name\()_tlb_fns 335 335 .endm 336 + 337 + .macro globl_equ x, y 338 + .globl \x 339 + .equ \x, \y 340 + .endm
+33 -5
arch/arm/mm/proc-v7.S
··· 140 140 ENDPROC(cpu_v7_do_resume) 141 141 #endif 142 142 143 + #ifdef CONFIG_CPU_PJ4B 144 + globl_equ cpu_pj4b_switch_mm, cpu_v7_switch_mm 145 + globl_equ cpu_pj4b_set_pte_ext, cpu_v7_set_pte_ext 146 + globl_equ cpu_pj4b_proc_init, cpu_v7_proc_init 147 + globl_equ cpu_pj4b_proc_fin, cpu_v7_proc_fin 148 + globl_equ cpu_pj4b_reset, cpu_v7_reset 149 + #ifdef CONFIG_PJ4B_ERRATA_4742 150 + ENTRY(cpu_pj4b_do_idle) 151 + dsb @ WFI may enter a low-power mode 152 + wfi 153 + dsb @barrier 154 + mov pc, lr 155 + ENDPROC(cpu_pj4b_do_idle) 156 + #else 157 + globl_equ cpu_pj4b_do_idle, cpu_v7_do_idle 158 + #endif 159 + globl_equ cpu_pj4b_dcache_clean_area, cpu_v7_dcache_clean_area 160 + globl_equ cpu_pj4b_do_suspend, cpu_v7_do_suspend 161 + globl_equ cpu_pj4b_do_resume, cpu_v7_do_resume 162 + globl_equ cpu_pj4b_suspend_size, cpu_v7_suspend_size 163 + 164 + #endif 165 + 143 166 __CPUINIT 144 167 145 168 /* ··· 373 350 374 351 @ define struct processor (see <asm/proc-fns.h> and proc-macros.S) 375 352 define_processor_functions v7, dabort=v7_early_abort, pabort=v7_pabort, suspend=1 353 + #ifdef CONFIG_CPU_PJ4B 354 + define_processor_functions pj4b, dabort=v7_early_abort, pabort=v7_pabort, suspend=1 355 + #endif 376 356 377 357 .section ".rodata" 378 358 ··· 388 362 /* 389 363 * Standard v7 proc info content 390 364 */ 391 - .macro __v7_proc initfunc, mm_mmuflags = 0, io_mmuflags = 0, hwcaps = 0 365 + .macro __v7_proc initfunc, mm_mmuflags = 0, io_mmuflags = 0, hwcaps = 0, proc_fns = v7_processor_functions 392 366 ALT_SMP(.long PMD_TYPE_SECT | PMD_SECT_AP_WRITE | PMD_SECT_AP_READ | \ 393 367 PMD_SECT_AF | PMD_FLAGS_SMP | \mm_mmuflags) 394 368 ALT_UP(.long PMD_TYPE_SECT | PMD_SECT_AP_WRITE | PMD_SECT_AP_READ | \ ··· 401 375 .long HWCAP_SWP | HWCAP_HALF | HWCAP_THUMB | HWCAP_FAST_MULT | \ 402 376 HWCAP_EDSP | HWCAP_TLS | \hwcaps 403 377 .long cpu_v7_name 404 - .long v7_processor_functions 378 + .long \proc_fns 405 379 .long v7wbi_tlb_fns 406 380 .long v6_user_fns 407 381 .long 
v7_cache_fns ··· 433 407 /* 434 408 * Marvell PJ4B processor. 435 409 */ 410 + #ifdef CONFIG_CPU_PJ4B 436 411 .type __v7_pj4b_proc_info, #object 437 412 __v7_pj4b_proc_info: 438 - .long 0x562f5840 439 - .long 0xfffffff0 440 - __v7_proc __v7_pj4b_setup 413 + .long 0x560f5800 414 + .long 0xff0fff00 415 + __v7_proc __v7_pj4b_setup, proc_fns = pj4b_processor_functions 441 416 .size __v7_pj4b_proc_info, . - __v7_pj4b_proc_info 417 + #endif 442 418 443 419 /* 444 420 * ARM Ltd. Cortex A7 processor.
+1
arch/arm64/kernel/perf_event.c
··· 1336 1336 return; 1337 1337 } 1338 1338 1339 + perf_callchain_store(entry, regs->pc); 1339 1340 tail = (struct frame_tail __user *)regs->regs[29]; 1340 1341 1341 1342 while (entry->nr < PERF_MAX_STACK_DEPTH &&
+1
arch/ia64/include/asm/irqflags.h
··· 11 11 #define _ASM_IA64_IRQFLAGS_H 12 12 13 13 #include <asm/pal.h> 14 + #include <asm/kregs.h> 14 15 15 16 #ifdef CONFIG_IA64_DEBUG_IRQ 16 17 extern unsigned long last_cli_ip;
+1
arch/metag/include/asm/hugetlb.h
··· 2 2 #define _ASM_METAG_HUGETLB_H 3 3 4 4 #include <asm/page.h> 5 + #include <asm-generic/hugetlb.h> 5 6 6 7 7 8 static inline int is_hugepage_only_range(struct mm_struct *mm,
+2 -3
arch/mn10300/include/asm/irqflags.h
··· 13 13 #define _ASM_IRQFLAGS_H 14 14 15 15 #include <asm/cpu-regs.h> 16 - #ifndef __ASSEMBLY__ 17 - #include <linux/smp.h> 18 - #endif 16 + /* linux/smp.h <- linux/irqflags.h needs asm/smp.h first */ 17 + #include <asm/smp.h> 19 18 20 19 /* 21 20 * interrupt control
+3 -1
arch/mn10300/include/asm/smp.h
··· 24 24 #ifndef __ASSEMBLY__ 25 25 #include <linux/threads.h> 26 26 #include <linux/cpumask.h> 27 + #include <linux/thread_info.h> 27 28 #endif 28 29 29 30 #ifdef CONFIG_SMP ··· 86 85 extern void smp_init_cpus(void); 87 86 extern void smp_cache_interrupt(void); 88 87 extern void send_IPI_allbutself(int irq); 89 - extern int smp_nmi_call_function(smp_call_func_t func, void *info, int wait); 88 + extern int smp_nmi_call_function(void (*func)(void *), void *info, int wait); 90 89 91 90 extern void arch_send_call_function_single_ipi(int cpu); 92 91 extern void arch_send_call_function_ipi_mask(const struct cpumask *mask); ··· 101 100 #ifndef __ASSEMBLY__ 102 101 103 102 static inline void smp_init_cpus(void) {} 103 + #define raw_smp_processor_id() 0 104 104 105 105 #endif /* __ASSEMBLY__ */ 106 106 #endif /* CONFIG_SMP */
+1 -1
arch/mn10300/include/asm/uaccess.h
··· 161 161 162 162 #define __get_user_check(x, ptr, size) \ 163 163 ({ \ 164 - const __typeof__(ptr) __guc_ptr = (ptr); \ 164 + const __typeof__(*(ptr))* __guc_ptr = (ptr); \ 165 165 int _e; \ 166 166 if (likely(__access_ok((unsigned long) __guc_ptr, (size)))) \ 167 167 _e = __get_user_nocheck((x), __guc_ptr, (size)); \
+21 -33
arch/mn10300/kernel/setup.c
··· 38 38 /* For PCI or other memory-mapped resources */ 39 39 unsigned long pci_mem_start = 0x18000000; 40 40 41 + static char __initdata cmd_line[COMMAND_LINE_SIZE]; 41 42 char redboot_command_line[COMMAND_LINE_SIZE] = 42 43 "console=ttyS0,115200 root=/dev/mtdblock3 rw"; 43 44 ··· 75 74 }; 76 75 77 76 /* 78 - * 77 + * Pick out the memory size. We look for mem=size, 78 + * where size is "size[KkMm]" 79 79 */ 80 - static void __init parse_mem_cmdline(char **cmdline_p) 80 + static int __init early_mem(char *p) 81 81 { 82 - char *from, *to, c; 83 - 84 - /* save unparsed command line copy for /proc/cmdline */ 85 - strcpy(boot_command_line, redboot_command_line); 86 - 87 - /* see if there's an explicit memory size option */ 88 - from = redboot_command_line; 89 - to = redboot_command_line; 90 - c = ' '; 91 - 92 - for (;;) { 93 - if (c == ' ' && !memcmp(from, "mem=", 4)) { 94 - if (to != redboot_command_line) 95 - to--; 96 - memory_size = memparse(from + 4, &from); 97 - } 98 - 99 - c = *(from++); 100 - if (!c) 101 - break; 102 - 103 - *(to++) = c; 104 - } 105 - 106 - *to = '\0'; 107 - *cmdline_p = redboot_command_line; 82 + memory_size = memparse(p, &p); 108 83 109 84 if (memory_size == 0) 110 85 panic("Memory size not known\n"); 111 86 112 - memory_end = (unsigned long) CONFIG_KERNEL_RAM_BASE_ADDRESS + 113 - memory_size; 114 - if (memory_end > phys_memory_end) 115 - memory_end = phys_memory_end; 87 + return 0; 116 88 } 89 + early_param("mem", early_mem); 117 90 118 91 /* 119 92 * architecture specific setup ··· 100 125 cpu_init(); 101 126 unit_setup(); 102 127 smp_init_cpus(); 103 - parse_mem_cmdline(cmdline_p); 128 + 129 + /* save unparsed command line copy for /proc/cmdline */ 130 + strlcpy(boot_command_line, redboot_command_line, COMMAND_LINE_SIZE); 131 + 132 + /* populate cmd_line too for later use, preserving boot_command_line */ 133 + strlcpy(cmd_line, boot_command_line, COMMAND_LINE_SIZE); 134 + *cmdline_p = cmd_line; 135 + 136 + parse_early_param(); 137 + 138 + 
memory_end = (unsigned long) CONFIG_KERNEL_RAM_BASE_ADDRESS + 139 + memory_size; 140 + if (memory_end > phys_memory_end) 141 + memory_end = phys_memory_end; 104 142 105 143 init_mm.start_code = (unsigned long)&_text; 106 144 init_mm.end_code = (unsigned long) &_etext;
+2 -2
arch/parisc/include/asm/mmzone.h
··· 27 27 28 28 #define PFNNID_SHIFT (30 - PAGE_SHIFT) 29 29 #define PFNNID_MAP_MAX 512 /* support 512GB */ 30 - extern unsigned char pfnnid_map[PFNNID_MAP_MAX]; 30 + extern signed char pfnnid_map[PFNNID_MAP_MAX]; 31 31 32 32 #ifndef CONFIG_64BIT 33 33 #define pfn_is_io(pfn) ((pfn & (0xf0000000UL >> PAGE_SHIFT)) == (0xf0000000UL >> PAGE_SHIFT)) ··· 46 46 i = pfn >> PFNNID_SHIFT; 47 47 BUG_ON(i >= ARRAY_SIZE(pfnnid_map)); 48 48 49 - return (int)pfnnid_map[i]; 49 + return pfnnid_map[i]; 50 50 } 51 51 52 52 static inline int pfn_valid(int pfn)
+5
arch/parisc/include/asm/pci.h
··· 225 225 return channel ? 15 : 14; 226 226 } 227 227 228 + #define HAVE_PCI_MMAP 229 + 230 + extern int pci_mmap_page_range(struct pci_dev *dev, struct vm_area_struct *vma, 231 + enum pci_mmap_state mmap_state, int write_combine); 232 + 228 233 #endif /* __ASM_PARISC_PCI_H */
+1
arch/parisc/kernel/hardware.c
··· 1205 1205 {HPHW_FIO, 0x004, 0x00320, 0x0, "Metheus Frame Buffer"}, 1206 1206 {HPHW_FIO, 0x004, 0x00340, 0x0, "BARCO CX4500 VME Grphx Cnsl"}, 1207 1207 {HPHW_FIO, 0x004, 0x00360, 0x0, "Hughes TOG VME FDDI"}, 1208 + {HPHW_FIO, 0x076, 0x000AD, 0x00, "Crestone Peak RS-232"}, 1208 1209 {HPHW_IOA, 0x185, 0x0000B, 0x00, "Java BC Summit Port"}, 1209 1210 {HPHW_IOA, 0x1FF, 0x0000B, 0x00, "Hitachi Ghostview Summit Port"}, 1210 1211 {HPHW_IOA, 0x580, 0x0000B, 0x10, "U2-IOA BC Runway Port"},
+36 -36
arch/parisc/kernel/pacache.S
··· 860 860 #endif 861 861 862 862 ldil L%dcache_stride, %r1 863 - ldw R%dcache_stride(%r1), %r1 863 + ldw R%dcache_stride(%r1), r31 864 864 865 865 #ifdef CONFIG_64BIT 866 866 depdi,z 1, 63-PAGE_SHIFT,1, %r25 ··· 868 868 depwi,z 1, 31-PAGE_SHIFT,1, %r25 869 869 #endif 870 870 add %r28, %r25, %r25 871 - sub %r25, %r1, %r25 871 + sub %r25, r31, %r25 872 872 873 873 874 - 1: fdc,m %r1(%r28) 875 - fdc,m %r1(%r28) 876 - fdc,m %r1(%r28) 877 - fdc,m %r1(%r28) 878 - fdc,m %r1(%r28) 879 - fdc,m %r1(%r28) 880 - fdc,m %r1(%r28) 881 - fdc,m %r1(%r28) 882 - fdc,m %r1(%r28) 883 - fdc,m %r1(%r28) 884 - fdc,m %r1(%r28) 885 - fdc,m %r1(%r28) 886 - fdc,m %r1(%r28) 887 - fdc,m %r1(%r28) 888 - fdc,m %r1(%r28) 874 + 1: fdc,m r31(%r28) 875 + fdc,m r31(%r28) 876 + fdc,m r31(%r28) 877 + fdc,m r31(%r28) 878 + fdc,m r31(%r28) 879 + fdc,m r31(%r28) 880 + fdc,m r31(%r28) 881 + fdc,m r31(%r28) 882 + fdc,m r31(%r28) 883 + fdc,m r31(%r28) 884 + fdc,m r31(%r28) 885 + fdc,m r31(%r28) 886 + fdc,m r31(%r28) 887 + fdc,m r31(%r28) 888 + fdc,m r31(%r28) 889 889 cmpb,COND(<<) %r28, %r25,1b 890 - fdc,m %r1(%r28) 890 + fdc,m r31(%r28) 891 891 892 892 sync 893 893 ··· 936 936 #endif 937 937 938 938 ldil L%icache_stride, %r1 939 - ldw R%icache_stride(%r1), %r1 939 + ldw R%icache_stride(%r1), %r31 940 940 941 941 #ifdef CONFIG_64BIT 942 942 depdi,z 1, 63-PAGE_SHIFT,1, %r25 ··· 944 944 depwi,z 1, 31-PAGE_SHIFT,1, %r25 945 945 #endif 946 946 add %r28, %r25, %r25 947 - sub %r25, %r1, %r25 947 + sub %r25, %r31, %r25 948 948 949 949 950 950 /* fic only has the type 26 form on PA1.1, requiring an 951 951 * explicit space specification, so use %sr4 */ 952 - 1: fic,m %r1(%sr4,%r28) 953 - fic,m %r1(%sr4,%r28) 954 - fic,m %r1(%sr4,%r28) 955 - fic,m %r1(%sr4,%r28) 956 - fic,m %r1(%sr4,%r28) 957 - fic,m %r1(%sr4,%r28) 958 - fic,m %r1(%sr4,%r28) 959 - fic,m %r1(%sr4,%r28) 960 - fic,m %r1(%sr4,%r28) 961 - fic,m %r1(%sr4,%r28) 962 - fic,m %r1(%sr4,%r28) 963 - fic,m %r1(%sr4,%r28) 964 - fic,m %r1(%sr4,%r28) 965 - fic,m 
%r1(%sr4,%r28) 966 - fic,m %r1(%sr4,%r28) 952 + 1: fic,m %r31(%sr4,%r28) 953 + fic,m %r31(%sr4,%r28) 954 + fic,m %r31(%sr4,%r28) 955 + fic,m %r31(%sr4,%r28) 956 + fic,m %r31(%sr4,%r28) 957 + fic,m %r31(%sr4,%r28) 958 + fic,m %r31(%sr4,%r28) 959 + fic,m %r31(%sr4,%r28) 960 + fic,m %r31(%sr4,%r28) 961 + fic,m %r31(%sr4,%r28) 962 + fic,m %r31(%sr4,%r28) 963 + fic,m %r31(%sr4,%r28) 964 + fic,m %r31(%sr4,%r28) 965 + fic,m %r31(%sr4,%r28) 966 + fic,m %r31(%sr4,%r28) 967 967 cmpb,COND(<<) %r28, %r25,1b 968 - fic,m %r1(%sr4,%r28) 968 + fic,m %r31(%sr4,%r28) 969 969 970 970 sync 971 971
+27
arch/parisc/kernel/pci.c
··· 220 220 } 221 221 222 222 223 + int pci_mmap_page_range(struct pci_dev *dev, struct vm_area_struct *vma, 224 + enum pci_mmap_state mmap_state, int write_combine) 225 + { 226 + unsigned long prot; 227 + 228 + /* 229 + * I/O space can be accessed via normal processor loads and stores on 230 + * this platform but for now we elect not to do this and portable 231 + * drivers should not do this anyway. 232 + */ 233 + if (mmap_state == pci_mmap_io) 234 + return -EINVAL; 235 + 236 + if (write_combine) 237 + return -EINVAL; 238 + 239 + /* 240 + * Ignore write-combine; for now only return uncached mappings. 241 + */ 242 + prot = pgprot_val(vma->vm_page_prot); 243 + prot |= _PAGE_NO_CACHE; 244 + vma->vm_page_prot = __pgprot(prot); 245 + 246 + return remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff, 247 + vma->vm_end - vma->vm_start, vma->vm_page_prot); 248 + } 249 + 223 250 /* 224 251 * A driver is enabling the device. We make sure that all the appropriate 225 252 * bits are set to allow the device to operate as the driver is expecting.
+1 -1
arch/parisc/mm/init.c
··· 47 47 48 48 #ifdef CONFIG_DISCONTIGMEM 49 49 struct node_map_data node_data[MAX_NUMNODES] __read_mostly; 50 - unsigned char pfnnid_map[PFNNID_MAP_MAX] __read_mostly; 50 + signed char pfnnid_map[PFNNID_MAP_MAX] __read_mostly; 51 51 #endif 52 52 53 53 static struct resource data_resource = {
+12 -5
arch/powerpc/kernel/pci-common.c
··· 994 994 ppc_md.pci_dma_bus_setup(bus); 995 995 } 996 996 997 - void pcibios_setup_device(struct pci_dev *dev) 997 + static void pcibios_setup_device(struct pci_dev *dev) 998 998 { 999 999 /* Fixup NUMA node as it may not be setup yet by the generic 1000 1000 * code and is needed by the DMA init ··· 1013 1013 pci_read_irq_line(dev); 1014 1014 if (ppc_md.pci_irq_fixup) 1015 1015 ppc_md.pci_irq_fixup(dev); 1016 + } 1017 + 1018 + int pcibios_add_device(struct pci_dev *dev) 1019 + { 1020 + /* 1021 + * We can only call pcibios_setup_device() after bus setup is complete, 1022 + * since some of the platform specific DMA setup code depends on it. 1023 + */ 1024 + if (dev->bus->is_added) 1025 + pcibios_setup_device(dev); 1026 + return 0; 1016 1027 } 1017 1028 1018 1029 void pcibios_setup_bus_devices(struct pci_bus *bus) ··· 1479 1468 if (ppc_md.pcibios_enable_device_hook) 1480 1469 if (ppc_md.pcibios_enable_device_hook(dev)) 1481 1470 return -EINVAL; 1482 - 1483 - /* avoid pcie irq fix up impact on cardbus */ 1484 - if (dev->hdr_type != PCI_HEADER_TYPE_CARDBUS) 1485 - pcibios_setup_device(dev); 1486 1471 1487 1472 return pci_enable_resources(dev, mask); 1488 1473 }
+2 -1
arch/powerpc/kvm/booke.c
··· 673 673 ret = s; 674 674 goto out; 675 675 } 676 - kvmppc_lazy_ee_enable(); 677 676 678 677 kvm_guest_enter(); 679 678 ··· 697 698 698 699 kvmppc_load_guest_fp(vcpu); 699 700 #endif 701 + 702 + kvmppc_lazy_ee_enable(); 700 703 701 704 ret = __kvmppc_vcpu_run(kvm_run, vcpu); 702 705
+7 -1
arch/powerpc/mm/hugetlbpage.c
··· 592 592 do { 593 593 pmd = pmd_offset(pud, addr); 594 594 next = pmd_addr_end(addr, end); 595 - if (pmd_none_or_clear_bad(pmd)) 595 + if (!is_hugepd(pmd)) { 596 + /* 597 + * if it is not hugepd pointer, we should already find 598 + * it cleared. 599 + */ 600 + WARN_ON(!pmd_none_or_clear_bad(pmd)); 596 601 continue; 602 + } 597 603 #ifdef CONFIG_PPC_FSL_BOOK3E 598 604 /* 599 605 * Increment next by the size of the huge mapping since
+2 -2
arch/powerpc/platforms/pseries/eeh_cache.c
··· 294 294 spin_lock_init(&pci_io_addr_cache_root.piar_lock); 295 295 296 296 for_each_pci_dev(dev) { 297 - eeh_addr_cache_insert_dev(dev); 298 - 299 297 dn = pci_device_to_OF_node(dev); 300 298 if (!dn) 301 299 continue; ··· 305 307 pci_dev_get(dev); /* matching put is in eeh_remove_device() */ 306 308 dev->dev.archdata.edev = edev; 307 309 edev->pdev = dev; 310 + 311 + eeh_addr_cache_insert_dev(dev); 308 312 309 313 eeh_sysfs_add_device(dev); 310 314 }
+2 -1
arch/powerpc/platforms/pseries/eeh_pe.c
··· 639 639 640 640 if (pe->type & EEH_PE_PHB) { 641 641 bus = pe->phb->bus; 642 - } else if (pe->type & EEH_PE_BUS) { 642 + } else if (pe->type & EEH_PE_BUS || 643 + pe->type & EEH_PE_DEVICE) { 643 644 edev = list_first_entry(&pe->edevs, struct eeh_dev, list); 644 645 pdev = eeh_dev_to_pci_dev(edev); 645 646 if (pdev)
+9 -15
arch/powerpc/sysdev/fsl_pci.c
··· 97 97 return indirect_read_config(bus, devfn, offset, len, val); 98 98 } 99 99 100 - static struct pci_ops fsl_indirect_pci_ops = 100 + #if defined(CONFIG_FSL_SOC_BOOKE) || defined(CONFIG_PPC_86xx) 101 + 102 + static struct pci_ops fsl_indirect_pcie_ops = 101 103 { 102 104 .read = fsl_indirect_read_config, 103 105 .write = indirect_write_config, 104 106 }; 105 - 106 - static void __init fsl_setup_indirect_pci(struct pci_controller* hose, 107 - resource_size_t cfg_addr, 108 - resource_size_t cfg_data, u32 flags) 109 - { 110 - setup_indirect_pci(hose, cfg_addr, cfg_data, flags); 111 - hose->ops = &fsl_indirect_pci_ops; 112 - } 113 - 114 - #if defined(CONFIG_FSL_SOC_BOOKE) || defined(CONFIG_PPC_86xx) 115 107 116 108 #define MAX_PHYS_ADDR_BITS 40 117 109 static u64 pci64_dma_offset = 1ull << MAX_PHYS_ADDR_BITS; ··· 496 504 if (!hose->private_data) 497 505 goto no_bridge; 498 506 499 - fsl_setup_indirect_pci(hose, rsrc.start, rsrc.start + 0x4, 500 - PPC_INDIRECT_TYPE_BIG_ENDIAN); 507 + setup_indirect_pci(hose, rsrc.start, rsrc.start + 0x4, 508 + PPC_INDIRECT_TYPE_BIG_ENDIAN); 501 509 502 510 if (in_be32(&pci->block_rev1) < PCIE_IP_REV_3_0) 503 511 hose->indirect_type |= PPC_INDIRECT_TYPE_FSL_CFG_REG_LINK; 504 512 505 513 if (early_find_capability(hose, 0, 0, PCI_CAP_ID_EXP)) { 514 + /* use fsl_indirect_read_config for PCIe */ 515 + hose->ops = &fsl_indirect_pcie_ops; 506 516 /* For PCIE read HEADER_TYPE to identify controler mode */ 507 517 early_read_config_byte(hose, 0, 0, PCI_HEADER_TYPE, &hdr_type); 508 518 if ((hdr_type & 0x7f) != PCI_HEADER_TYPE_BRIDGE) ··· 808 814 if (ret) 809 815 goto err0; 810 816 } else { 811 - fsl_setup_indirect_pci(hose, rsrc_cfg.start, 812 - rsrc_cfg.start + 4, 0); 817 + setup_indirect_pci(hose, rsrc_cfg.start, 818 + rsrc_cfg.start + 4, 0); 813 819 } 814 820 815 821 printk(KERN_INFO "Found FSL PCI host bridge at 0x%016llx. "
+2 -1
arch/s390/include/asm/dma-mapping.h
··· 50 50 { 51 51 struct dma_map_ops *dma_ops = get_dma_ops(dev); 52 52 53 + debug_dma_mapping_error(dev, dma_addr); 53 54 if (dma_ops->mapping_error) 54 55 return dma_ops->mapping_error(dev, dma_addr); 55 - return (dma_addr == 0UL); 56 + return (dma_addr == DMA_ERROR_CODE); 56 57 } 57 58 58 59 static inline void *dma_alloc_coherent(struct device *dev, size_t size,
+4 -4
arch/s390/kernel/ipl.c
··· 754 754 .write = reipl_fcp_scpdata_write, 755 755 }; 756 756 757 - DEFINE_IPL_ATTR_RW(reipl_fcp, wwpn, "0x%016llx\n", "%016llx\n", 757 + DEFINE_IPL_ATTR_RW(reipl_fcp, wwpn, "0x%016llx\n", "%llx\n", 758 758 reipl_block_fcp->ipl_info.fcp.wwpn); 759 - DEFINE_IPL_ATTR_RW(reipl_fcp, lun, "0x%016llx\n", "%016llx\n", 759 + DEFINE_IPL_ATTR_RW(reipl_fcp, lun, "0x%016llx\n", "%llx\n", 760 760 reipl_block_fcp->ipl_info.fcp.lun); 761 761 DEFINE_IPL_ATTR_RW(reipl_fcp, bootprog, "%lld\n", "%lld\n", 762 762 reipl_block_fcp->ipl_info.fcp.bootprog); ··· 1323 1323 1324 1324 /* FCP dump device attributes */ 1325 1325 1326 - DEFINE_IPL_ATTR_RW(dump_fcp, wwpn, "0x%016llx\n", "%016llx\n", 1326 + DEFINE_IPL_ATTR_RW(dump_fcp, wwpn, "0x%016llx\n", "%llx\n", 1327 1327 dump_block_fcp->ipl_info.fcp.wwpn); 1328 - DEFINE_IPL_ATTR_RW(dump_fcp, lun, "0x%016llx\n", "%016llx\n", 1328 + DEFINE_IPL_ATTR_RW(dump_fcp, lun, "0x%016llx\n", "%llx\n", 1329 1329 dump_block_fcp->ipl_info.fcp.lun); 1330 1330 DEFINE_IPL_ATTR_RW(dump_fcp, bootprog, "%lld\n", "%lld\n", 1331 1331 dump_block_fcp->ipl_info.fcp.bootprog);
+2
arch/s390/kernel/irq.c
··· 312 312 } 313 313 EXPORT_SYMBOL(measurement_alert_subclass_unregister); 314 314 315 + #ifdef CONFIG_SMP 315 316 void synchronize_irq(unsigned int irq) 316 317 { 317 318 /* ··· 321 320 */ 322 321 } 323 322 EXPORT_SYMBOL_GPL(synchronize_irq); 323 + #endif 324 324 325 325 #ifndef CONFIG_PCI 326 326
+2 -1
arch/s390/mm/mem_detect.c
··· 123 123 continue; 124 124 } else if ((addr <= chunk->addr) && 125 125 (addr + size >= chunk->addr + chunk->size)) { 126 - memset(chunk, 0 , sizeof(*chunk)); 126 + memmove(chunk, chunk + 1, (MEMORY_CHUNKS-i-1) * sizeof(*chunk)); 127 + memset(&mem_chunk[MEMORY_CHUNKS-1], 0, sizeof(*chunk)); 127 128 } else if (addr + size < chunk->addr + chunk->size) { 128 129 chunk->size = chunk->addr + chunk->size - addr - size; 129 130 chunk->addr = addr + size;
+1
arch/sparc/include/asm/Kbuild
··· 6 6 generic-y += div64.h 7 7 generic-y += emergency-restart.h 8 8 generic-y += exec.h 9 + generic-y += linkage.h 9 10 generic-y += local64.h 10 11 generic-y += mutex.h 11 12 generic-y += irq_regs.h
+1 -1
arch/sparc/include/asm/leon.h
··· 135 135 136 136 #ifdef CONFIG_SMP 137 137 # define LEON3_IRQ_IPI_DEFAULT 13 138 - # define LEON3_IRQ_TICKER (leon3_ticker_irq) 138 + # define LEON3_IRQ_TICKER (leon3_gptimer_irq) 139 139 # define LEON3_IRQ_CROSS_CALL 15 140 140 #endif 141 141
+1
arch/sparc/include/asm/leon_amba.h
··· 47 47 #define LEON3_GPTIMER_LD 4 48 48 #define LEON3_GPTIMER_IRQEN 8 49 49 #define LEON3_GPTIMER_SEPIRQ 8 50 + #define LEON3_GPTIMER_TIMERS 0x7 50 51 51 52 #define LEON23_REG_TIMER_CONTROL_EN 0x00000001 /* 1 = enable counting */ 52 53 /* 0 = hold scalar and counter */
-6
arch/sparc/include/asm/linkage.h
··· 1 - #ifndef __ASM_LINKAGE_H 2 - #define __ASM_LINKAGE_H 3 - 4 - /* Nothing to see here... */ 5 - 6 - #endif
+2 -1
arch/sparc/kernel/ds.c
··· 843 843 unsigned long len; 844 844 845 845 strcpy(full_boot_str, "boot "); 846 - strcpy(full_boot_str + strlen("boot "), boot_command); 846 + strlcpy(full_boot_str + strlen("boot "), boot_command, 847 + sizeof(full_boot_str + strlen("boot "))); 847 848 len = strlen(full_boot_str); 848 849 849 850 if (reboot_data_supported) {
+24 -44
arch/sparc/kernel/leon_kernel.c
··· 38 38 39 39 unsigned long leon3_gptimer_irq; /* interrupt controller irq number */ 40 40 unsigned long leon3_gptimer_idx; /* Timer Index (0..6) within Timer Core */ 41 - int leon3_ticker_irq; /* Timer ticker IRQ */ 42 41 unsigned int sparc_leon_eirq; 43 42 #define LEON_IMASK(cpu) (&leon3_irqctrl_regs->mask[cpu]) 44 43 #define LEON_IACK (&leon3_irqctrl_regs->iclear) ··· 277 278 278 279 leon_clear_profile_irq(cpu); 279 280 281 + if (cpu == boot_cpu_id) 282 + timer_interrupt(irq, NULL); 283 + 280 284 ce = &per_cpu(sparc32_clockevent, cpu); 281 285 282 286 irq_enter(); ··· 301 299 int icsel; 302 300 int ampopts; 303 301 int err; 302 + u32 config; 304 303 305 304 sparc_config.get_cycles_offset = leon_cycles_offset; 306 305 sparc_config.cs_period = 1000000 / HZ; ··· 380 377 LEON3_BYPASS_STORE_PA( 381 378 &leon3_gptimer_regs->e[leon3_gptimer_idx].ctrl, 0); 382 379 383 - #ifdef CONFIG_SMP 384 - leon3_ticker_irq = leon3_gptimer_irq + 1 + leon3_gptimer_idx; 385 - 386 - if (!(LEON3_BYPASS_LOAD_PA(&leon3_gptimer_regs->config) & 387 - (1<<LEON3_GPTIMER_SEPIRQ))) { 388 - printk(KERN_ERR "timer not configured with separate irqs\n"); 389 - BUG(); 390 - } 391 - 392 - LEON3_BYPASS_STORE_PA(&leon3_gptimer_regs->e[leon3_gptimer_idx+1].val, 393 - 0); 394 - LEON3_BYPASS_STORE_PA(&leon3_gptimer_regs->e[leon3_gptimer_idx+1].rld, 395 - (((1000000/HZ) - 1))); 396 - LEON3_BYPASS_STORE_PA(&leon3_gptimer_regs->e[leon3_gptimer_idx+1].ctrl, 397 - 0); 398 - #endif 399 - 400 380 /* 401 381 * The IRQ controller may (if implemented) consist of multiple 402 382 * IRQ controllers, each mapped on a 4Kb boundary. 
··· 402 416 if (eirq != 0) 403 417 leon_eirq_setup(eirq); 404 418 405 - irq = _leon_build_device_irq(NULL, leon3_gptimer_irq+leon3_gptimer_idx); 406 - err = request_irq(irq, timer_interrupt, IRQF_TIMER, "timer", NULL); 407 - if (err) { 408 - printk(KERN_ERR "unable to attach timer IRQ%d\n", irq); 409 - prom_halt(); 410 - } 411 - 412 419 #ifdef CONFIG_SMP 413 420 { 414 421 unsigned long flags; ··· 418 439 } 419 440 #endif 420 441 442 + config = LEON3_BYPASS_LOAD_PA(&leon3_gptimer_regs->config); 443 + if (config & (1 << LEON3_GPTIMER_SEPIRQ)) 444 + leon3_gptimer_irq += leon3_gptimer_idx; 445 + else if ((config & LEON3_GPTIMER_TIMERS) > 1) 446 + pr_warn("GPTIMER uses shared irqs, using other timers of the same core will fail.\n"); 447 + 448 + #ifdef CONFIG_SMP 449 + /* Install per-cpu IRQ handler for broadcasted ticker */ 450 + irq = leon_build_device_irq(leon3_gptimer_irq, handle_percpu_irq, 451 + "per-cpu", 0); 452 + err = request_irq(irq, leon_percpu_timer_ce_interrupt, 453 + IRQF_PERCPU | IRQF_TIMER, "timer", NULL); 454 + #else 455 + irq = _leon_build_device_irq(NULL, leon3_gptimer_irq); 456 + err = request_irq(irq, timer_interrupt, IRQF_TIMER, "timer", NULL); 457 + #endif 458 + if (err) { 459 + pr_err("Unable to attach timer IRQ%d\n", irq); 460 + prom_halt(); 461 + } 421 462 LEON3_BYPASS_STORE_PA(&leon3_gptimer_regs->e[leon3_gptimer_idx].ctrl, 422 463 LEON3_GPTIMER_EN | 423 464 LEON3_GPTIMER_RL | 424 465 LEON3_GPTIMER_LD | 425 466 LEON3_GPTIMER_IRQEN); 426 - 427 - #ifdef CONFIG_SMP 428 - /* Install per-cpu IRQ handler for broadcasted ticker */ 429 - irq = leon_build_device_irq(leon3_ticker_irq, handle_percpu_irq, 430 - "per-cpu", 0); 431 - err = request_irq(irq, leon_percpu_timer_ce_interrupt, 432 - IRQF_PERCPU | IRQF_TIMER, "ticker", 433 - NULL); 434 - if (err) { 435 - printk(KERN_ERR "unable to attach ticker IRQ%d\n", irq); 436 - prom_halt(); 437 - } 438 - 439 - LEON3_BYPASS_STORE_PA(&leon3_gptimer_regs->e[leon3_gptimer_idx+1].ctrl, 440 - LEON3_GPTIMER_EN | 441 - LEON3_GPTIMER_RL | 442 - LEON3_GPTIMER_LD | 443 - LEON3_GPTIMER_IRQEN); 444 - #endif 445 467 return; 446 468 bad: 447 469 printk(KERN_ERR "No Timer/irqctrl found\n");
+3 -5
arch/sparc/kernel/leon_pci_grpci1.c
··· 536 536 537 537 /* find device register base address */ 538 538 res = platform_get_resource(ofdev, IORESOURCE_MEM, 0); 539 - regs = devm_request_and_ioremap(&ofdev->dev, res); 540 - if (!regs) { 541 - dev_err(&ofdev->dev, "io-regs mapping failed\n"); 542 - return -EADDRNOTAVAIL; 543 - } 539 + regs = devm_ioremap_resource(&ofdev->dev, res); 540 + if (IS_ERR(regs)) 541 + return PTR_ERR(regs); 544 542 545 543 /* 546 544 * check that we're in Host Slot and that we can act as a Host Bridge
+7
arch/sparc/kernel/leon_pmc.c
··· 47 47 * MMU does not get a TLB miss here by using the MMU BYPASS ASI. 48 48 */ 49 49 register unsigned int address = (unsigned int)leon3_irqctrl_regs; 50 + 51 + /* Interrupts need to be enabled to not hang the CPU */ 52 + local_irq_enable(); 53 + 50 54 __asm__ __volatile__ ( 51 55 "wr %%g0, %%asr19\n" 52 56 "lda [%0] %1, %%g0\n" ··· 64 60 */ 65 61 void pmc_leon_idle(void) 66 62 { 63 + /* Interrupts need to be enabled to not hang the CPU */ 64 + local_irq_enable(); 65 + 67 66 /* For systems without power-down, this will be no-op */ 68 67 __asm__ __volatile__ ("wr %g0, %asr19\n\t"); 69 68 }
+1 -1
arch/sparc/kernel/setup_32.c
··· 304 304 305 305 /* Initialize PROM console and command line. */ 306 306 *cmdline_p = prom_getbootargs(); 307 - strcpy(boot_command_line, *cmdline_p); 307 + strlcpy(boot_command_line, *cmdline_p, COMMAND_LINE_SIZE); 308 308 parse_early_param(); 309 309 310 310 boot_flags_init(*cmdline_p);
+1 -1
arch/sparc/kernel/setup_64.c
··· 555 555 { 556 556 /* Initialize PROM console and command line. */ 557 557 *cmdline_p = prom_getbootargs(); 558 - strcpy(boot_command_line, *cmdline_p); 558 + strlcpy(boot_command_line, *cmdline_p, COMMAND_LINE_SIZE); 559 559 parse_early_param(); 560 560 561 561 boot_flags_init(*cmdline_p);
+8 -1
arch/sparc/mm/init_64.c
··· 1098 1098 m->size = *val; 1099 1099 val = mdesc_get_property(md, node, 1100 1100 "address-congruence-offset", NULL); 1101 - m->offset = *val; 1101 + 1102 + /* The address-congruence-offset property is optional. 1103 + * Explicitly zero it to identify this. 1104 + */ 1105 + if (val) 1106 + m->offset = *val; 1107 + else 1108 + m->offset = 0UL; 1102 1109 1103 1110 numadbg("MBLOCK[%d]: base[%llx] size[%llx] offset[%llx]\n", 1104 1111 count - 1, m->base, m->size, m->offset);
+1 -1
arch/sparc/mm/tlb.c
··· 85 85 } 86 86 87 87 if (!tb->active) { 88 - global_flush_tlb_page(mm, vaddr); 89 88 flush_tsb_user_page(mm, vaddr); 89 + global_flush_tlb_page(mm, vaddr); 90 90 goto out; 91 91 } 92 92
+7 -5
arch/sparc/prom/bootstr_32.c
··· 23 23 return barg_buf; 24 24 } 25 25 26 - switch(prom_vers) { 26 + switch (prom_vers) { 27 27 case PROM_V0: 28 28 cp = barg_buf; 29 29 /* Start from 1 and go over fd(0,0,0)kernel */ 30 - for(iter = 1; iter < 8; iter++) { 30 + for (iter = 1; iter < 8; iter++) { 31 31 arg = (*(romvec->pv_v0bootargs))->argv[iter]; 32 32 if (arg == NULL) 33 33 break; 34 - while(*arg != 0) { 34 + while (*arg != 0) { 35 35 /* Leave place for space and null. */ 36 - if(cp >= barg_buf + BARG_LEN-2){ 36 + if (cp >= barg_buf + BARG_LEN - 2) 37 37 /* We might issue a warning here. */ 38 38 break; 39 - } 40 39 *cp++ = *arg++; 41 40 } 42 41 *cp++ = ' '; 42 + if (cp >= barg_buf + BARG_LEN - 1) 43 + /* We might issue a warning here. */ 44 + break; 43 45 } 44 46 *cp = 0; 45 47 break;
+8 -8
arch/sparc/prom/tree_64.c
··· 39 39 return prom_node_to_node("child", node); 40 40 } 41 41 42 - inline phandle prom_getchild(phandle node) 42 + phandle prom_getchild(phandle node) 43 43 { 44 44 phandle cnode; 45 45 ··· 72 72 return prom_node_to_node(prom_peer_name, node); 73 73 } 74 74 75 - inline phandle prom_getsibling(phandle node) 75 + phandle prom_getsibling(phandle node) 76 76 { 77 77 phandle sibnode; 78 78 ··· 89 89 /* Return the length in bytes of property 'prop' at node 'node'. 90 90 * Return -1 on error. 91 91 */ 92 - inline int prom_getproplen(phandle node, const char *prop) 92 + int prom_getproplen(phandle node, const char *prop) 93 93 { 94 94 unsigned long args[6]; 95 95 ··· 113 113 * 'buffer' which has a size of 'bufsize'. If the acquisition 114 114 * was successful the length will be returned, else -1 is returned. 115 115 */ 116 - inline int prom_getproperty(phandle node, const char *prop, 117 - char *buffer, int bufsize) 116 + int prom_getproperty(phandle node, const char *prop, 117 + char *buffer, int bufsize) 118 118 { 119 119 unsigned long args[8]; 120 120 int plen; ··· 141 141 /* Acquire an integer property and return its value. Returns -1 142 142 * on failure. 143 143 */ 144 - inline int prom_getint(phandle node, const char *prop) 144 + int prom_getint(phandle node, const char *prop) 145 145 { 146 146 int intprop; 147 147 ··· 235 235 /* Return the first property type for node 'node'. 236 236 * buffer should be at least 32B in length 237 237 */ 238 - inline char *prom_firstprop(phandle node, char *buffer) 238 + char *prom_firstprop(phandle node, char *buffer) 239 239 { 240 240 unsigned long args[7]; 241 241 ··· 261 261 * at node 'node' . Returns NULL string if no more 262 262 * property types for this node. 263 263 */ 264 - inline char *prom_nextprop(phandle node, const char *oprop, char *buffer) 264 + char *prom_nextprop(phandle node, const char *oprop, char *buffer) 265 265 { 266 266 unsigned long args[7]; 267 267 char buf[32];
+2
arch/tile/lib/exports.c
··· 84 84 EXPORT_SYMBOL(__ashrdi3); 85 85 uint64_t __ashldi3(uint64_t, unsigned int); 86 86 EXPORT_SYMBOL(__ashldi3); 87 + int __ffsdi2(uint64_t); 88 + EXPORT_SYMBOL(__ffsdi2); 87 89 #endif
+1 -1
arch/um/drivers/mconsole_kern.c
··· 147 147 } 148 148 149 149 do { 150 - loff_t pos; 150 + loff_t pos = file->f_pos; 151 151 mm_segment_t old_fs = get_fs(); 152 152 set_fs(KERNEL_DS); 153 153 len = vfs_read(file, buf, PAGE_SIZE - 1, &pos);
+1
arch/x86/Kconfig
··· 2265 2265 config IA32_EMULATION 2266 2266 bool "IA32 Emulation" 2267 2267 depends on X86_64 2268 + select BINFMT_ELF 2268 2269 select COMPAT_BINFMT_ELF 2269 2270 select HAVE_UID16 2270 2271 ---help---
+32 -16
arch/x86/crypto/aesni-intel_asm.S
··· 2681 2681 addq %rcx, KEYP 2682 2682 2683 2683 movdqa IV, STATE1 2684 - pxor 0x00(INP), STATE1 2684 + movdqu 0x00(INP), INC 2685 + pxor INC, STATE1 2685 2686 movdqu IV, 0x00(OUTP) 2686 2687 2687 2688 _aesni_gf128mul_x_ble() 2688 2689 movdqa IV, STATE2 2689 - pxor 0x10(INP), STATE2 2690 + movdqu 0x10(INP), INC 2691 + pxor INC, STATE2 2690 2692 movdqu IV, 0x10(OUTP) 2691 2693 2692 2694 _aesni_gf128mul_x_ble() 2693 2695 movdqa IV, STATE3 2694 - pxor 0x20(INP), STATE3 2696 + movdqu 0x20(INP), INC 2697 + pxor INC, STATE3 2695 2698 movdqu IV, 0x20(OUTP) 2696 2699 2697 2700 _aesni_gf128mul_x_ble() 2698 2701 movdqa IV, STATE4 2699 - pxor 0x30(INP), STATE4 2702 + movdqu 0x30(INP), INC 2703 + pxor INC, STATE4 2700 2704 movdqu IV, 0x30(OUTP) 2701 2705 2702 2706 call *%r11 2703 2707 2704 - pxor 0x00(OUTP), STATE1 2708 + movdqu 0x00(OUTP), INC 2709 + pxor INC, STATE1 2705 2710 movdqu STATE1, 0x00(OUTP) 2706 2711 2707 2712 _aesni_gf128mul_x_ble() 2708 2713 movdqa IV, STATE1 2709 - pxor 0x40(INP), STATE1 2714 + movdqu 0x40(INP), INC 2715 + pxor INC, STATE1 2710 2716 movdqu IV, 0x40(OUTP) 2711 2717 2712 - pxor 0x10(OUTP), STATE2 2718 + movdqu 0x10(OUTP), INC 2719 + pxor INC, STATE2 2713 2720 movdqu STATE2, 0x10(OUTP) 2714 2721 2715 2722 _aesni_gf128mul_x_ble() 2716 2723 movdqa IV, STATE2 2717 - pxor 0x50(INP), STATE2 2724 + movdqu 0x50(INP), INC 2725 + pxor INC, STATE2 2718 2726 movdqu IV, 0x50(OUTP) 2719 2727 2720 - pxor 0x20(OUTP), STATE3 2728 + movdqu 0x20(OUTP), INC 2729 + pxor INC, STATE3 2721 2730 movdqu STATE3, 0x20(OUTP) 2722 2731 2723 2732 _aesni_gf128mul_x_ble() 2724 2733 movdqa IV, STATE3 2725 - pxor 0x60(INP), STATE3 2734 + movdqu 0x60(INP), INC 2735 + pxor INC, STATE3 2726 2736 movdqu IV, 0x60(OUTP) 2727 2737 2728 - pxor 0x30(OUTP), STATE4 2738 + movdqu 0x30(OUTP), INC 2739 + pxor INC, STATE4 2729 2740 movdqu STATE4, 0x30(OUTP) 2730 2741 2731 2742 _aesni_gf128mul_x_ble() 2732 2743 movdqa IV, STATE4 2733 - pxor 0x70(INP), STATE4 2744 + movdqu 0x70(INP), INC 2745 + pxor INC, STATE4 2734 2746 movdqu IV, 0x70(OUTP) 2735 2747 2736 2748 _aesni_gf128mul_x_ble() ··· 2750 2738 2751 2739 call *%r11 2752 2740 2753 - pxor 0x40(OUTP), STATE1 2741 + movdqu 0x40(OUTP), INC 2742 + pxor INC, STATE1 2754 2743 movdqu STATE1, 0x40(OUTP) 2755 2744 2756 - pxor 0x50(OUTP), STATE2 2745 + movdqu 0x50(OUTP), INC 2746 + pxor INC, STATE2 2757 2747 movdqu STATE2, 0x50(OUTP) 2758 2748 2759 - pxor 0x60(OUTP), STATE3 2749 + movdqu 0x60(OUTP), INC 2750 + pxor INC, STATE3 2760 2751 movdqu STATE3, 0x60(OUTP) 2761 2752 2762 - pxor 0x70(OUTP), STATE4 2753 + movdqu 0x70(OUTP), INC 2754 + pxor INC, STATE4 2763 2755 movdqu STATE4, 0x70(OUTP) 2764 2756 2765 2757 ret
+1 -1
arch/x86/ia32/ia32_aout.c
··· 192 192 /* struct user */ 193 193 DUMP_WRITE(&dump, sizeof(dump)); 194 194 /* Now dump all of the user data. Include malloced stuff as well */ 195 - DUMP_SEEK(PAGE_SIZE); 195 + DUMP_SEEK(PAGE_SIZE - sizeof(dump)); 196 196 /* now we start writing out the user space info */ 197 197 set_fs(USER_DS); 198 198 /* Dump the data area */
+5
arch/x86/include/asm/irq.h
··· 41 41 42 42 extern void init_ISA_irqs(void); 43 43 44 + #ifdef CONFIG_X86_LOCAL_APIC 45 + void arch_trigger_all_cpu_backtrace(void); 46 + #define arch_trigger_all_cpu_backtrace arch_trigger_all_cpu_backtrace 47 + #endif 48 + 44 49 #endif /* _ASM_X86_IRQ_H */
+2 -2
arch/x86/include/asm/microcode.h
··· 60 60 #ifdef CONFIG_MICROCODE_EARLY 61 61 #define MAX_UCODE_COUNT 128 62 62 extern void __init load_ucode_bsp(void); 63 - extern __init void load_ucode_ap(void); 63 + extern void __cpuinit load_ucode_ap(void); 64 64 extern int __init save_microcode_in_initrd(void); 65 65 #else 66 66 static inline void __init load_ucode_bsp(void) {} 67 - static inline __init void load_ucode_ap(void) {} 67 + static inline void __cpuinit load_ucode_ap(void) {} 68 68 static inline int __init save_microcode_in_initrd(void) 69 69 { 70 70 return 0;
+1 -3
arch/x86/include/asm/nmi.h
··· 18 18 void __user *, size_t *, loff_t *); 19 19 extern int unknown_nmi_panic; 20 20 21 - void arch_trigger_all_cpu_backtrace(void); 22 - #define arch_trigger_all_cpu_backtrace arch_trigger_all_cpu_backtrace 23 - #endif 21 + #endif /* CONFIG_X86_LOCAL_APIC */ 24 22 25 23 #define NMI_FLAG_FIRST 1 26 24
+1
arch/x86/kernel/apic/hw_nmi.c
··· 9 9 * 10 10 */ 11 11 #include <asm/apic.h> 12 + #include <asm/nmi.h> 12 13 13 14 #include <linux/cpumask.h> 14 15 #include <linux/kdebug.h>
+4 -4
arch/x86/kernel/cpu/mtrr/cleanup.c
··· 714 714 if (mtrr_tom2) 715 715 x_remove_size = (mtrr_tom2 >> PAGE_SHIFT) - x_remove_base; 716 716 717 - nr_range = x86_get_mtrr_mem_range(range, 0, x_remove_base, x_remove_size); 718 717 /* 719 718 * [0, 1M) should always be covered by var mtrr with WB 720 719 * and fixed mtrrs should take effect before var mtrr for it: 721 720 */ 722 - nr_range = add_range_with_merge(range, RANGE_NUM, nr_range, 0, 721 + nr_range = add_range_with_merge(range, RANGE_NUM, 0, 0, 723 722 1ULL<<(20 - PAGE_SHIFT)); 724 - /* Sort the ranges: */ 725 - sort_range(range, nr_range); 723 + /* add from var mtrr at last */ 724 + nr_range = x86_get_mtrr_mem_range(range, nr_range, 725 + x_remove_base, x_remove_size); 726 726 727 727 range_sums = sum_ranges(range, nr_range); 728 728 printk(KERN_INFO "total RAM covered: %ldM\n",
+1 -1
arch/x86/kernel/cpu/perf_event_intel.c
··· 165 165 INTEL_EVENT_EXTRA_REG(0xb7, MSR_OFFCORE_RSP_0, 0x3f807f8fffull, RSP_0), 166 166 INTEL_EVENT_EXTRA_REG(0xbb, MSR_OFFCORE_RSP_1, 0x3f807f8fffull, RSP_1), 167 167 INTEL_UEVENT_PEBS_LDLAT_EXTRA_REG(0x01cd), 168 - INTEL_UEVENT_PEBS_LDLAT_EXTRA_REG(0x01cd), 169 168 EVENT_EXTRA_END 170 169 }; 171 170 172 171 static struct extra_reg intel_snbep_extra_regs[] __read_mostly = { 173 172 INTEL_EVENT_EXTRA_REG(0xb7, MSR_OFFCORE_RSP_0, 0x3fffff8fffull, RSP_0), 174 173 INTEL_EVENT_EXTRA_REG(0xbb, MSR_OFFCORE_RSP_1, 0x3fffff8fffull, RSP_1), 174 + INTEL_UEVENT_PEBS_LDLAT_EXTRA_REG(0x01cd), 175 175 EVENT_EXTRA_END 176 176 }; 177 177
+10 -4
arch/x86/kernel/kprobes/core.c
··· 365 365 return insn.length; 366 366 } 367 367 368 - static void __kprobes arch_copy_kprobe(struct kprobe *p) 368 + static int __kprobes arch_copy_kprobe(struct kprobe *p) 369 369 { 370 + int ret; 371 + 370 372 /* Copy an instruction with recovering if other optprobe modifies it.*/ 371 - __copy_instruction(p->ainsn.insn, p->addr); 373 + ret = __copy_instruction(p->ainsn.insn, p->addr); 374 + if (!ret) 375 + return -EINVAL; 372 376 373 377 /* 374 378 * __copy_instruction can modify the displacement of the instruction, ··· 388 384 389 385 /* Also, displacement change doesn't affect the first byte */ 390 386 p->opcode = p->ainsn.insn[0]; 387 + 388 + return 0; 391 389 } 392 390 393 391 int __kprobes arch_prepare_kprobe(struct kprobe *p) ··· 403 397 p->ainsn.insn = get_insn_slot(); 404 398 if (!p->ainsn.insn) 405 399 return -ENOMEM; 406 - arch_copy_kprobe(p); 407 - return 0; 400 + 401 + return arch_copy_kprobe(p); 408 402 } 409 403 410 404 void __kprobes arch_arm_kprobe(struct kprobe *p)
+1
arch/x86/kernel/kvmclock.c
··· 242 242 if (!mem) 243 243 return; 244 244 hv_clock = __va(mem); 245 + memset(hv_clock, 0, size); 245 246 246 247 if (kvm_register_clock("boot clock")) { 247 248 hv_clock = NULL;
-12
arch/x86/kernel/process.c
··· 277 277 } 278 278 #endif 279 279 280 - void arch_cpu_idle_prepare(void) 281 - { 282 - /* 283 - * If we're the non-boot CPU, nothing set the stack canary up 284 - * for us. CPU0 already has it initialized but no harm in 285 - * doing it again. This is a good place for updating it, as 286 - * we wont ever return from this function (so the invalid 287 - * canaries already on the stack wont ever trigger). 288 - */ 289 - boot_init_stack_canary(); 290 - } 291 - 292 280 void arch_cpu_idle_enter(void) 293 281 { 294 282 local_touch_nmi();
+4 -4
arch/x86/kernel/smpboot.c
··· 372 372 373 373 void __cpuinit set_cpu_sibling_map(int cpu) 374 374 { 375 - bool has_mc = boot_cpu_data.x86_max_cores > 1; 376 375 bool has_smt = smp_num_siblings > 1; 376 + bool has_mp = has_smt || boot_cpu_data.x86_max_cores > 1; 377 377 struct cpuinfo_x86 *c = &cpu_data(cpu); 378 378 struct cpuinfo_x86 *o; 379 379 int i; 380 380 381 381 cpumask_set_cpu(cpu, cpu_sibling_setup_mask); 382 382 383 - if (!has_smt && !has_mc) { 383 + if (!has_mp) { 384 384 cpumask_set_cpu(cpu, cpu_sibling_mask(cpu)); 385 385 cpumask_set_cpu(cpu, cpu_llc_shared_mask(cpu)); 386 386 cpumask_set_cpu(cpu, cpu_core_mask(cpu)); ··· 394 394 if ((i == cpu) || (has_smt && match_smt(c, o))) 395 395 link_mask(sibling, cpu, i); 396 396 397 - if ((i == cpu) || (has_mc && match_llc(c, o))) 397 + if ((i == cpu) || (has_mp && match_llc(c, o))) 398 398 link_mask(llc_shared, cpu, i); 399 399 400 400 } ··· 406 406 for_each_cpu(i, cpu_sibling_setup_mask) { 407 407 o = &cpu_data(i); 408 408 409 - if ((i == cpu) || (has_mc && match_mc(c, o))) { 409 + if ((i == cpu) || (has_mp && match_mc(c, o))) { 410 410 link_mask(core, cpu, i); 411 411 412 412 /*
+2 -3
arch/x86/kvm/x86.c
··· 582 582 if (index != XCR_XFEATURE_ENABLED_MASK) 583 583 return 1; 584 584 xcr0 = xcr; 585 - if (kvm_x86_ops->get_cpl(vcpu) != 0) 586 - return 1; 587 585 if (!(xcr0 & XSTATE_FP)) 588 586 return 1; 589 587 if ((xcr0 & XSTATE_YMM) && !(xcr0 & XSTATE_SSE)) ··· 595 597 596 598 int kvm_set_xcr(struct kvm_vcpu *vcpu, u32 index, u64 xcr) 597 599 { 598 - if (__kvm_set_xcr(vcpu, index, xcr)) { 600 + if (kvm_x86_ops->get_cpl(vcpu) != 0 || 601 + __kvm_set_xcr(vcpu, index, xcr)) { 599 602 kvm_inject_gp(vcpu, 0); 600 603 return 1; 601 604 }
+6 -1
arch/x86/platform/efi/efi.c
··· 1069 1069 * that by attempting to use more space than is available. 1070 1070 */ 1071 1071 unsigned long dummy_size = remaining_size + 1024; 1072 - void *dummy = kmalloc(dummy_size, GFP_ATOMIC); 1072 + void *dummy = kzalloc(dummy_size, GFP_ATOMIC); 1073 + 1074 + if (!dummy) 1075 + return EFI_OUT_OF_RESOURCES; 1073 1076 1074 1077 status = efi.set_variable(efi_dummy_name, &EFI_DUMMY_GUID, 1075 1078 EFI_VARIABLE_NON_VOLATILE | ··· 1091 1088 EFI_VARIABLE_RUNTIME_ACCESS, 1092 1089 0, dummy); 1093 1090 } 1091 + 1092 + kfree(dummy); 1094 1093 1095 1094 /* 1096 1095 * The runtime code may now have triggered a garbage collection
+8 -7
crypto/algboss.c
··· 45 45 } nu32; 46 46 } attrs[CRYPTO_MAX_ATTRS]; 47 47 48 - char larval[CRYPTO_MAX_ALG_NAME]; 49 48 char template[CRYPTO_MAX_ALG_NAME]; 50 49 51 - struct completion *completion; 50 + struct crypto_larval *larval; 52 51 53 52 u32 otype; 54 53 u32 omask; ··· 86 87 crypto_tmpl_put(tmpl); 87 88 88 89 out: 89 - complete_all(param->completion); 90 + complete_all(&param->larval->completion); 91 + crypto_alg_put(&param->larval->alg); 90 92 kfree(param); 91 93 module_put_and_exit(0); 92 94 } ··· 187 187 param->otype = larval->alg.cra_flags; 188 188 param->omask = larval->mask; 189 189 190 - memcpy(param->larval, larval->alg.cra_name, CRYPTO_MAX_ALG_NAME); 191 - 192 - param->completion = &larval->completion; 190 + crypto_alg_get(&larval->alg); 191 + param->larval = larval; 193 192 194 193 thread = kthread_run(cryptomgr_probe, param, "cryptomgr_probe"); 195 194 if (IS_ERR(thread)) 196 - goto err_free_param; 195 + goto err_put_larval; 197 196 198 197 wait_for_completion_interruptible(&larval->completion); 199 198 200 199 return NOTIFY_STOP; 201 200 201 + err_put_larval: 202 + crypto_alg_put(&larval->alg); 202 203 err_free_param: 203 204 kfree(param); 204 205 err_put_module:
-6
crypto/api.c
··· 34 34 BLOCKING_NOTIFIER_HEAD(crypto_chain); 35 35 EXPORT_SYMBOL_GPL(crypto_chain); 36 36 37 - static inline struct crypto_alg *crypto_alg_get(struct crypto_alg *alg) 38 - { 39 - atomic_inc(&alg->cra_refcnt); 40 - return alg; 41 - } 42 - 43 37 struct crypto_alg *crypto_mod_get(struct crypto_alg *alg) 44 38 { 45 39 return try_module_get(alg->cra_module) ? crypto_alg_get(alg) : NULL;
+6
crypto/internal.h
··· 103 103 int crypto_unregister_notifier(struct notifier_block *nb); 104 104 int crypto_probing_notify(unsigned long val, void *v); 105 105 106 + static inline struct crypto_alg *crypto_alg_get(struct crypto_alg *alg) 107 + { 108 + atomic_inc(&alg->cra_refcnt); 109 + return alg; 110 + } 111 + 106 112 static inline void crypto_alg_put(struct crypto_alg *alg) 107 113 { 108 114 if (atomic_dec_and_test(&alg->cra_refcnt) && alg->cra_destroy)
+15 -6
drivers/acpi/acpi_lpss.c
··· 164 164 if (dev_desc->clk_required) { 165 165 ret = register_device_clock(adev, pdata); 166 166 if (ret) { 167 - /* 168 - * Skip the device, but don't terminate the namespace 169 - * scan. 170 - */ 171 - kfree(pdata); 172 - return 0; 167 + /* Skip the device, but continue the namespace scan. */ 168 + ret = 0; 169 + goto err_out; 173 170 } 171 + } 172 + 173 + /* 174 + * This works around a known issue in ACPI tables where LPSS devices 175 + * have _PS0 and _PS3 without _PSC (and no power resources), so 176 + * acpi_bus_init_power() will assume that the BIOS has put them into D0. 177 + */ 178 + ret = acpi_device_fix_up_power(adev); 179 + if (ret) { 180 + /* Skip the device, but continue the namespace scan. */ 181 + ret = 0; 182 + goto err_out; 174 183 } 175 184 176 185 adev->driver_data = pdata;
+20
drivers/acpi/device_pm.c
··· 290 290 return 0; 291 291 } 292 292 293 + /** 294 + * acpi_device_fix_up_power - Force device with missing _PSC into D0. 295 + * @device: Device object whose power state is to be fixed up. 296 + * 297 + * Devices without power resources and _PSC, but having _PS0 and _PS3 defined, 298 + * are assumed to be put into D0 by the BIOS. However, in some cases that may 299 + * not be the case and this function should be used then. 300 + */ 301 + int acpi_device_fix_up_power(struct acpi_device *device) 302 + { 303 + int ret = 0; 304 + 305 + if (!device->power.flags.power_resources 306 + && !device->power.flags.explicit_get 307 + && device->power.state == ACPI_STATE_D0) 308 + ret = acpi_dev_pm_explicit_set(device, ACPI_STATE_D0); 309 + 310 + return ret; 311 + } 312 + 293 313 int acpi_bus_update_power(acpi_handle handle, int *state_p) 294 314 { 295 315 struct acpi_device *device;
+98 -83
drivers/acpi/dock.c
··· 66 66 spinlock_t dd_lock; 67 67 struct mutex hp_lock; 68 68 struct list_head dependent_devices; 69 - struct list_head hotplug_devices; 70 69 71 70 struct list_head sibling; 72 71 struct platform_device *dock_device; 73 72 }; 74 73 static LIST_HEAD(dock_stations); 75 74 static int dock_station_count; 75 + static DEFINE_MUTEX(hotplug_lock); 76 76 77 77 struct dock_dependent_device { 78 78 struct list_head list; 79 - struct list_head hotplug_list; 80 79 acpi_handle handle; 81 - const struct acpi_dock_ops *ops; 82 - void *context; 80 + const struct acpi_dock_ops *hp_ops; 81 + void *hp_context; 82 + unsigned int hp_refcount; 83 + void (*hp_release)(void *); 83 84 }; 84 85 85 86 #define DOCK_DOCKING 0x00000001 ··· 112 111 113 112 dd->handle = handle; 114 113 INIT_LIST_HEAD(&dd->list); 115 - INIT_LIST_HEAD(&dd->hotplug_list); 116 114 117 115 spin_lock(&ds->dd_lock); 118 116 list_add_tail(&dd->list, &ds->dependent_devices); ··· 121 121 } 122 122 123 123 /** 124 - * dock_add_hotplug_device - associate a hotplug handler with the dock station 125 - * @ds: The dock station 126 - * @dd: The dependent device struct 127 - * 128 - * Add the dependent device to the dock's hotplug device list 124 + * dock_init_hotplug - Initialize a hotplug device on a docking station. 125 + * @dd: Dock-dependent device. 126 + * @ops: Dock operations to attach to the dependent device. 127 + * @context: Data to pass to the @ops callbacks and @release. 128 + * @init: Optional initialization routine to run after setting up context. 129 + * @release: Optional release routine to run on removal. 
129 130 */ 130 - static void 131 - dock_add_hotplug_device(struct dock_station *ds, 132 - struct dock_dependent_device *dd) 131 + static int dock_init_hotplug(struct dock_dependent_device *dd, 132 + const struct acpi_dock_ops *ops, void *context, 133 + void (*init)(void *), void (*release)(void *)) 133 134 { 134 - mutex_lock(&ds->hp_lock); 135 - list_add_tail(&dd->hotplug_list, &ds->hotplug_devices); 136 - mutex_unlock(&ds->hp_lock); 135 + int ret = 0; 136 + 137 + mutex_lock(&hotplug_lock); 138 + 139 + if (dd->hp_context) { 140 + ret = -EEXIST; 141 + } else { 142 + dd->hp_refcount = 1; 143 + dd->hp_ops = ops; 144 + dd->hp_context = context; 145 + dd->hp_release = release; 146 + } 147 + 148 + if (!WARN_ON(ret) && init) 149 + init(context); 150 + 151 + mutex_unlock(&hotplug_lock); 152 + return ret; 137 153 } 138 154 139 155 /** 140 - * dock_del_hotplug_device - remove a hotplug handler from the dock station 141 - * @ds: The dock station 142 - * @dd: the dependent device struct 156 + * dock_release_hotplug - Decrement hotplug reference counter of dock device. 157 + * @dd: Dock-dependent device. 143 158 * 144 - * Delete the dependent device from the dock's hotplug device list 159 + * Decrement the reference counter of @dd and if 0, detach its hotplug 160 + * operations from it, reset its context pointer and run the optional release 161 + * routine if present. 
145 162 */ 146 - static void 147 - dock_del_hotplug_device(struct dock_station *ds, 148 - struct dock_dependent_device *dd) 163 + static void dock_release_hotplug(struct dock_dependent_device *dd) 149 164 { 150 - mutex_lock(&ds->hp_lock); 151 - list_del(&dd->hotplug_list); 152 - mutex_unlock(&ds->hp_lock); 165 + void (*release)(void *) = NULL; 166 + void *context = NULL; 167 + 168 + mutex_lock(&hotplug_lock); 169 + 170 + if (dd->hp_context && !--dd->hp_refcount) { 171 + dd->hp_ops = NULL; 172 + context = dd->hp_context; 173 + dd->hp_context = NULL; 174 + release = dd->hp_release; 175 + dd->hp_release = NULL; 176 + } 177 + 178 + if (release && context) 179 + release(context); 180 + 181 + mutex_unlock(&hotplug_lock); 182 + } 183 + 184 + static void dock_hotplug_event(struct dock_dependent_device *dd, u32 event, 185 + bool uevent) 186 + { 187 + acpi_notify_handler cb = NULL; 188 + bool run = false; 189 + 190 + mutex_lock(&hotplug_lock); 191 + 192 + if (dd->hp_context) { 193 + run = true; 194 + dd->hp_refcount++; 195 + if (dd->hp_ops)
196 + cb = uevent ? dd->hp_ops->uevent : dd->hp_ops->handler; 197 + } 198 + 199 + mutex_unlock(&hotplug_lock); 200 + 201 + if (!run) 202 + return; 203 + 204 + if (cb) 205 + cb(dd->handle, event, dd->hp_context); 206 + 207 + dock_release_hotplug(dd); 153 208 }
630 579 */ 631 - int 632 - register_hotplug_dock_device(acpi_handle handle, const struct acpi_dock_ops *ops, 633 - void *context) 580 + int register_hotplug_dock_device(acpi_handle handle, 581 + const struct acpi_dock_ops *ops, void *context, 582 + void (*init)(void *), void (*release)(void *)) 634 583 { 635 584 struct dock_dependent_device *dd; 636 585 struct dock_station *dock_station; 637 586 int ret = -EINVAL; 587 + 588 + if (WARN_ON(!context)) 589 + return -EINVAL; 638 590 639 591 if (!dock_station_count) 640 592 return -ENODEV; ··· 655 597 * ops 656 598 */ 657 599 dd = find_dock_dependent_device(dock_station, handle); 658 - if (dd) { 659 - dd->ops = ops; 660 - dd->context = context; 661 - dock_add_hotplug_device(dock_station, dd); 600 + if (dd && !dock_init_hotplug(dd, ops, context, init, release)) 662 601 ret = 0; 663 - } 664 602 } 665 603 666 604 return ret; ··· 678 624 list_for_each_entry(dock_station, &dock_stations, sibling) { 679 625 dd = find_dock_dependent_device(dock_station, handle); 680 626 if (dd) 681 - dock_del_hotplug_device(dock_station, dd); 627 + dock_release_hotplug(dd); 682 628 } 683 629 } 684 630 EXPORT_SYMBOL_GPL(unregister_hotplug_dock_device); ··· 922 868 if (!count) 923 869 return -EINVAL; 924 870 871 + acpi_scan_lock_acquire(); 925 872 begin_undock(dock_station); 926 873 ret = handle_eject_request(dock_station, ACPI_NOTIFY_EJECT_REQUEST); 874 + acpi_scan_lock_release();
927 875 return ret ? ret: count; 928 876 } 929 877 static DEVICE_ATTR(undock, S_IWUSR, NULL, write_undock); ··· 1007 951 mutex_init(&dock_station->hp_lock); 1008 952 spin_lock_init(&dock_station->dd_lock); 1009 953 INIT_LIST_HEAD(&dock_station->sibling); 1010 - INIT_LIST_HEAD(&dock_station->hotplug_devices); 1011 954 ATOMIC_INIT_NOTIFIER_HEAD(&dock_notifier_list); 1012 955 INIT_LIST_HEAD(&dock_station->dependent_devices); 1013 956 ··· 1047 992 } 1048 993 1049 994 /** 1050 - * dock_remove - free up resources related to the dock station 1051 - */ 1052 - static int dock_remove(struct dock_station *ds) 1053 - { 1054 - struct dock_dependent_device *dd, *tmp; 1055 - struct platform_device *dock_device = ds->dock_device; 1056 - 1057 - if (!dock_station_count) 1058 - return 0; 1059 - 1060 - /* remove dependent devices */ 1061 - list_for_each_entry_safe(dd, tmp, &ds->dependent_devices, list) 1062 - kfree(dd); 1063 - 1064 - list_del(&ds->sibling); 1065 - 1066 - /* cleanup sysfs */ 1067 - sysfs_remove_group(&dock_device->dev.kobj, &dock_attribute_group); 1068 - platform_device_unregister(dock_device); 1069 - 1070 - return 0; 1071 - } 1072 - 1073 - /** 1074 995 * find_dock_and_bay - look for dock stations and bays 1075 996 * @handle: acpi handle of a device 1076 997 * @lvl: unused ··· 1064 1033 return AE_OK; 1065 1034 } 1066 1035 1067 - static int __init dock_init(void) 1036 + int __init acpi_dock_init(void) 1068 1037 { 1069 1038 if (acpi_disabled) 1070 1039 return 0; ··· 1083 1052 ACPI_DOCK_DRIVER_DESCRIPTION, dock_station_count); 1084 1053 return 0; 1085 1054 } 1086 - 1087 - static void __exit dock_exit(void) 1088 - { 1089 - struct dock_station *tmp, *dock_station; 1090 - 1091 - unregister_acpi_bus_notifier(&dock_acpi_notifier); 1092 - list_for_each_entry_safe(dock_station, tmp, &dock_stations, sibling) 1093 - dock_remove(dock_station); 1094 - } 1095 - 1096 - /* 1097 - * Must be called before drivers of devices in dock, otherwise we can't know 1098 - * which devices are in a dock 1099 - */
1100 - subsys_initcall(dock_init); 1101 - module_exit(dock_exit);
+5
drivers/acpi/internal.h
··· 40 40 #else 41 41 static inline void acpi_container_init(void) {} 42 42 #endif 43 + #ifdef CONFIG_ACPI_DOCK 44 + void acpi_dock_init(void); 45 + #else 46 + static inline void acpi_dock_init(void) {} 47 + #endif 43 48 #ifdef CONFIG_ACPI_HOTPLUG_MEMORY 44 49 void acpi_memory_hotplug_init(void); 45 50 #else
+1
drivers/acpi/power.c
··· 885 885 ACPI_STA_DEFAULT); 886 886 mutex_init(&resource->resource_lock); 887 887 INIT_LIST_HEAD(&resource->dependent); 888 + INIT_LIST_HEAD(&resource->list_node); 888 889 resource->name = device->pnp.bus_id; 889 890 strcpy(acpi_device_name(device), ACPI_POWER_DEVICE_NAME); 890 891 strcpy(acpi_device_class(device), ACPI_POWER_CLASS);
+11 -5
drivers/acpi/resource.c
··· 304 304 } 305 305 306 306 static void acpi_dev_get_irqresource(struct resource *res, u32 gsi, 307 - u8 triggering, u8 polarity, u8 shareable) 307 + u8 triggering, u8 polarity, u8 shareable, 308 + bool legacy) 308 309 { 309 310 int irq, p, t; 310 311 ··· 318 317 * In IO-APIC mode, use overrided attribute. Two reasons: 319 318 * 1. BIOS bug in DSDT 320 319 * 2. BIOS uses IO-APIC mode Interrupt Source Override 320 + * 321 + * We do this only if we are dealing with IRQ() or IRQNoFlags() 322 + * resource (the legacy ISA resources). With modern ACPI 5 devices 323 + * using extended IRQ descriptors we take the IRQ configuration 324 + * from _CRS directly. 321 325 */ 322 - if (!acpi_get_override_irq(gsi, &t, &p)) { 326 + if (legacy && !acpi_get_override_irq(gsi, &t, &p)) { 323 327 u8 trig = t ? ACPI_LEVEL_SENSITIVE : ACPI_EDGE_SENSITIVE; 324 328 u8 pol = p ? ACPI_ACTIVE_LOW : ACPI_ACTIVE_HIGH; 325 329 326 330 if (triggering != trig || polarity != pol) { 327 331 pr_warning("ACPI: IRQ %d override to %s, %s\n", gsi, 328 - t ? "edge" : "level", p ? "low" : "high"); 332 + t ? "level" : "edge", p ? "low" : "high"); 329 333 triggering = trig; 330 334 polarity = pol; 331 335 } ··· 379 373 } 380 374 acpi_dev_get_irqresource(res, irq->interrupts[index], 381 375 irq->triggering, irq->polarity, 382 - irq->sharable); 376 + irq->sharable, true); 383 377 break; 384 378 case ACPI_RESOURCE_TYPE_EXTENDED_IRQ: 385 379 ext_irq = &ares->data.extended_irq; ··· 389 383 } 390 384 acpi_dev_get_irqresource(res, ext_irq->interrupts[index], 391 385 ext_irq->triggering, ext_irq->polarity, 392 - ext_irq->sharable); 386 + ext_irq->sharable, false); 393 387 break; 394 388 default: 395 389 return false;
+1
drivers/acpi/scan.c
··· 2042 2042 acpi_lpss_init(); 2043 2043 acpi_container_init(); 2044 2044 acpi_memory_hotplug_init(); 2045 + acpi_dock_init(); 2045 2046 2046 2047 mutex_lock(&acpi_scan_lock); 2047 2048 /*
+36 -1
drivers/ata/libata-acpi.c
··· 156 156 157 157 spin_unlock_irqrestore(ap->lock, flags); 158 158 159 - if (wait) 159 + if (wait) { 160 160 ata_port_wait_eh(ap); 161 + flush_work(&ap->hotplug_task.work); 162 + } 161 163 } 162 164 163 165 static void ata_acpi_dev_notify_dock(acpi_handle handle, u32 event, void *data) ··· 215 213 .handler = ata_acpi_ap_notify_dock, 216 214 .uevent = ata_acpi_ap_uevent, 217 215 }; 216 + 217 + void ata_acpi_hotplug_init(struct ata_host *host) 218 + { 219 + int i; 220 + 221 + for (i = 0; i < host->n_ports; i++) { 222 + struct ata_port *ap = host->ports[i]; 223 + acpi_handle handle; 224 + struct ata_device *dev; 225 + 226 + if (!ap) 227 + continue; 228 + 229 + handle = ata_ap_acpi_handle(ap); 230 + if (handle) { 231 + /* we might be on a docking station */ 232 + register_hotplug_dock_device(handle, 233 + &ata_acpi_ap_dock_ops, ap, 234 + NULL, NULL); 235 + } 236 + 237 + ata_for_each_dev(dev, &ap->link, ALL) { 238 + handle = ata_dev_acpi_handle(dev); 239 + if (!handle) 240 + continue; 241 + 242 + /* we might be on a docking station */ 243 + register_hotplug_dock_device(handle, 244 + &ata_acpi_dev_dock_ops, 245 + dev, NULL, NULL); 246 + } 247 + } 248 + } 218 249 219 250 /** 220 251 * ata_acpi_dissociate - dissociate ATA host from ACPI objects
+2
drivers/ata/libata-core.c
··· 6148 6148 if (rc) 6149 6149 goto err_tadd; 6150 6150 6151 + ata_acpi_hotplug_init(host); 6152 + 6151 6153 /* set cable, sata_spd_limit and report */ 6152 6154 for (i = 0; i < host->n_ports; i++) { 6153 6155 struct ata_port *ap = host->ports[i];
+2
drivers/ata/libata.h
··· 122 122 extern void ata_acpi_unregister(void); 123 123 extern void ata_acpi_bind(struct ata_device *dev); 124 124 extern void ata_acpi_unbind(struct ata_device *dev); 125 + extern void ata_acpi_hotplug_init(struct ata_host *host); 125 126 #else 126 127 static inline void ata_acpi_dissociate(struct ata_host *host) { } 127 128 static inline int ata_acpi_on_suspend(struct ata_port *ap) { return 0; } ··· 135 134 static inline void ata_acpi_unregister(void) { } 136 135 static inline void ata_acpi_bind(struct ata_device *dev) { } 137 136 static inline void ata_acpi_unbind(struct ata_device *dev) { } 137 + static inline void ata_acpi_hotplug_init(struct ata_host *host) {} 138 138 #endif 139 139 140 140 /* libata-scsi.c */
+18 -9
drivers/base/firmware_class.c
··· 450 450 { 451 451 struct firmware_buf *buf = fw_priv->buf; 452 452 453 + /* 454 + * There is a small window in which user can write to 'loading' 455 + * between loading done and disappearance of 'loading' 456 + */ 457 + if (test_bit(FW_STATUS_DONE, &buf->status)) 458 + return; 459 + 453 460 set_bit(FW_STATUS_ABORT, &buf->status); 454 461 complete_all(&buf->completion); 462 + 463 + /* avoid user action after loading abort */ 464 + fw_priv->buf = NULL; 455 465 } 456 466 457 467 #define is_fw_load_aborted(buf) \ ··· 538 528 struct device_attribute *attr, char *buf) 539 529 { 540 530 struct firmware_priv *fw_priv = to_firmware_priv(dev); 541 - int loading = test_bit(FW_STATUS_LOADING, &fw_priv->buf->status); 531 + int loading = 0; 532 + 533 + mutex_lock(&fw_lock); 534 + if (fw_priv->buf) 535 + loading = test_bit(FW_STATUS_LOADING, &fw_priv->buf->status); 536 + mutex_unlock(&fw_lock); 542 537 543 538 return sprintf(buf, "%d\n", loading); 544 539 } ··· 585 570 const char *buf, size_t count) 586 571 { 587 572 struct firmware_priv *fw_priv = to_firmware_priv(dev); 588 - struct firmware_buf *fw_buf = fw_priv->buf; 573 + struct firmware_buf *fw_buf; 589 574 int loading = simple_strtol(buf, NULL, 10); 590 575 int i; 591 576 592 577 mutex_lock(&fw_lock); 593 - 578 + fw_buf = fw_priv->buf; 594 579 if (!fw_buf) 595 580 goto out; 596 581 ··· 792 777 struct firmware_priv, timeout_work.work); 793 778 794 779 mutex_lock(&fw_lock); 795 - if (test_bit(FW_STATUS_DONE, &(fw_priv->buf->status))) { 796 - mutex_unlock(&fw_lock); 797 - return; 798 - } 799 780 fw_load_abort(fw_priv); 800 781 mutex_unlock(&fw_lock); 801 782 } ··· 871 860 wait_for_completion(&buf->completion); 872 861 873 862 cancel_delayed_work_sync(&fw_priv->timeout_work); 874 - 875 - fw_priv->buf = NULL; 876 863 877 864 device_remove_file(f_dev, &dev_attr_loading); 878 865 err_del_bin_attr:
+14 -6
drivers/block/rbd.c
··· 1036 1036 char *name; 1037 1037 u64 segment; 1038 1038 int ret; 1039 + char *name_format; 1039 1040 1040 1041 name = kmem_cache_alloc(rbd_segment_name_cache, GFP_NOIO); 1041 1042 if (!name) 1042 1043 return NULL; 1043 1044 segment = offset >> rbd_dev->header.obj_order; 1044 - ret = snprintf(name, MAX_OBJ_NAME_SIZE + 1, "%s.%012llx", 1045 + name_format = "%s.%012llx"; 1046 + if (rbd_dev->image_format == 2) 1047 + name_format = "%s.%016llx"; 1048 + ret = snprintf(name, MAX_OBJ_NAME_SIZE + 1, name_format, 1045 1049 rbd_dev->header.object_prefix, segment); 1046 1050 if (ret < 0 || ret > MAX_OBJ_NAME_SIZE) { 1047 1051 pr_err("error formatting segment name for #%llu (%d)\n", ··· 2252 2248 obj_request->pages, length, 2253 2249 offset & ~PAGE_MASK, false, false); 2254 2250 2251 + /* 2252 + * set obj_request->img_request before formatting 2253 + * the osd_request so that it gets the right snapc 2254 + */ 2255 + rbd_img_obj_request_add(img_request, obj_request); 2255 2256 if (write_request) 2256 2257 rbd_osd_req_format_write(obj_request); 2257 2258 else 2258 2259 rbd_osd_req_format_read(obj_request); 2259 2260 2260 2261 obj_request->img_offset = img_offset; 2261 - rbd_img_obj_request_add(img_request, obj_request); 2262 2262 2263 2263 img_offset += length; 2264 2264 resid -= length; ··· 4247 4239 4248 4240 down_write(&rbd_dev->header_rwsem); 4249 4241 4242 + ret = rbd_dev_v2_image_size(rbd_dev); 4243 + if (ret) 4244 + goto out; 4245 + 4250 4246 if (first_time) { 4251 4247 ret = rbd_dev_v2_header_onetime(rbd_dev); 4252 4248 if (ret) ··· 4283 4271 rbd_warn(rbd_dev, "WARNING: kernel layering " 4284 4272 "is EXPERIMENTAL!"); 4285 4273 } 4286 - 4287 - ret = rbd_dev_v2_image_size(rbd_dev); 4288 - if (ret) 4289 - goto out; 4290 4274 4291 4275 if (rbd_dev->spec->snap_id == CEPH_NOSNAP) 4292 4276 if (rbd_dev->mapping.size != rbd_dev->header.image_size)
+1
drivers/clk/clk.c
··· 1955 1955 /* XXX the notifier code should handle this better */ 1956 1956 if (!cn->notifier_head.head) { 1957 1957 srcu_cleanup_notifier_head(&cn->notifier_head); 1958 + list_del(&cn->node); 1958 1959 kfree(cn); 1959 1960 } 1960 1961
+5 -5
drivers/clk/samsung/clk-exynos5250.c
··· 155 155 156 156 /* list of all parent clock list */ 157 157 PNAME(mout_apll_p) = { "fin_pll", "fout_apll", }; 158 - PNAME(mout_cpu_p) = { "mout_apll", "mout_mpll", }; 158 + PNAME(mout_cpu_p) = { "mout_apll", "sclk_mpll", }; 159 159 PNAME(mout_mpll_fout_p) = { "fout_mplldiv2", "fout_mpll" }; 160 160 PNAME(mout_mpll_p) = { "fin_pll", "mout_mpll_fout" }; 161 161 PNAME(mout_bpll_fout_p) = { "fout_bplldiv2", "fout_bpll" }; ··· 208 208 }; 209 209 210 210 struct samsung_mux_clock exynos5250_mux_clks[] __initdata = { 211 - MUX(none, "mout_apll", mout_apll_p, SRC_CPU, 0, 1), 212 - MUX(none, "mout_cpu", mout_cpu_p, SRC_CPU, 16, 1), 211 + MUX_A(none, "mout_apll", mout_apll_p, SRC_CPU, 0, 1, "mout_apll"), 212 + MUX_A(none, "mout_cpu", mout_cpu_p, SRC_CPU, 16, 1, "mout_cpu"), 213 213 MUX(none, "mout_mpll_fout", mout_mpll_fout_p, PLL_DIV2_SEL, 4, 1), 214 - MUX(none, "sclk_mpll", mout_mpll_p, SRC_CORE1, 8, 1), 214 + MUX_A(none, "sclk_mpll", mout_mpll_p, SRC_CORE1, 8, 1, "mout_mpll"), 215 215 MUX(none, "mout_bpll_fout", mout_bpll_fout_p, PLL_DIV2_SEL, 0, 1), 216 216 MUX(none, "sclk_bpll", mout_bpll_p, SRC_CDREX, 0, 1), 217 217 MUX(none, "mout_vpllsrc", mout_vpllsrc_p, SRC_TOP2, 0, 1), ··· 378 378 GATE(hsi2c3, "hsi2c3", "aclk66", GATE_IP_PERIC, 31, 0, 0), 379 379 GATE(chipid, "chipid", "aclk66", GATE_IP_PERIS, 0, 0, 0), 380 380 GATE(sysreg, "sysreg", "aclk66", GATE_IP_PERIS, 1, 0, 0), 381 - GATE(pmu, "pmu", "aclk66", GATE_IP_PERIS, 2, 0, 0), 381 + GATE(pmu, "pmu", "aclk66", GATE_IP_PERIS, 2, CLK_IGNORE_UNUSED, 0), 382 382 GATE(tzpc0, "tzpc0", "aclk66", GATE_IP_PERIS, 6, 0, 0), 383 383 GATE(tzpc1, "tzpc1", "aclk66", GATE_IP_PERIS, 7, 0, 0), 384 384 GATE(tzpc2, "tzpc2", "aclk66", GATE_IP_PERIS, 8, 0, 0),
+3 -2
drivers/clk/samsung/clk-pll.c
··· 111 111 unsigned long parent_rate) 112 112 { 113 113 struct samsung_clk_pll36xx *pll = to_clk_pll36xx(hw); 114 - u32 mdiv, pdiv, sdiv, kdiv, pll_con0, pll_con1; 114 + u32 mdiv, pdiv, sdiv, pll_con0, pll_con1; 115 + s16 kdiv; 115 116 u64 fvco = parent_rate; 116 117 117 118 pll_con0 = __raw_readl(pll->con_reg); ··· 120 119 mdiv = (pll_con0 >> PLL36XX_MDIV_SHIFT) & PLL36XX_MDIV_MASK; 121 120 pdiv = (pll_con0 >> PLL36XX_PDIV_SHIFT) & PLL36XX_PDIV_MASK; 122 121 sdiv = (pll_con0 >> PLL36XX_SDIV_SHIFT) & PLL36XX_SDIV_MASK; 123 - kdiv = pll_con1 & PLL36XX_KDIV_MASK; 122 + kdiv = (s16)(pll_con1 & PLL36XX_KDIV_MASK); 124 123 125 124 fvco *= (mdiv << 16) + kdiv; 126 125 do_div(fvco, (pdiv << sdiv));
+1 -1
drivers/clk/spear/spear3xx_clock.c
··· 369 369 clk_register_clkdev(clk, NULL, "60100000.serial"); 370 370 } 371 371 #else 372 - static inline void spear320_clk_init(void) { } 372 + static inline void spear320_clk_init(void __iomem *soc_config_base) { } 373 373 #endif 374 374 375 375 void __init spear3xx_clk_init(void __iomem *misc_base, void __iomem *soc_config_base)
+6 -5
drivers/clk/tegra/clk-tegra30.c
··· 1598 1598 clk_register_clkdev(clk, "afi", "tegra-pcie"); 1599 1599 clks[afi] = clk; 1600 1600 1601 + /* pciex */ 1602 + clk = tegra_clk_register_periph_gate("pciex", "pll_e", 0, clk_base, 0, 1603 + 74, &periph_u_regs, periph_clk_enb_refcnt); 1604 + clk_register_clkdev(clk, "pciex", "tegra-pcie"); 1605 + clks[pciex] = clk; 1606 + 1601 1607 /* kfuse */ 1602 1608 clk = tegra_clk_register_periph_gate("kfuse", "clk_m", 1603 1609 TEGRA_PERIPH_ON_APB, ··· 1722 1716 1, 0, &cml_lock); 1723 1717 clk_register_clkdev(clk, "cml1", NULL); 1724 1718 clks[cml1] = clk; 1725 - 1726 - /* pciex */ 1727 - clk = clk_register_fixed_rate(NULL, "pciex", "pll_e", 0, 100000000); 1728 - clk_register_clkdev(clk, "pciex", NULL); 1729 - clks[pciex] = clk; 1730 1719 } 1731 1720 1732 1721 static void __init tegra30_osc_clk_init(void)
+13 -4
drivers/cpufreq/cpufreq_ondemand.c
··· 47 47 static struct cpufreq_governor cpufreq_gov_ondemand; 48 48 #endif 49 49 50 + static unsigned int default_powersave_bias; 51 + 50 52 static void ondemand_powersave_bias_init_cpu(int cpu) 51 53 { 52 54 struct od_cpu_dbs_info_s *dbs_info = &per_cpu(od_cpu_dbs_info, cpu); ··· 545 543 546 544 tuners->sampling_down_factor = DEF_SAMPLING_DOWN_FACTOR; 547 545 tuners->ignore_nice = 0; 548 - tuners->powersave_bias = 0; 546 + tuners->powersave_bias = default_powersave_bias; 549 547 tuners->io_is_busy = should_io_be_busy(); 550 548 551 549 dbs_data->tuners = tuners; ··· 587 585 unsigned int cpu; 588 586 cpumask_t done; 589 587 588 + default_powersave_bias = powersave_bias; 590 589 cpumask_clear(&done); 591 590 592 591 get_online_cpus(); ··· 596 593 continue; 597 594 598 595 policy = per_cpu(od_cpu_dbs_info, cpu).cdbs.cur_policy; 599 - dbs_data = policy->governor_data; 600 - od_tuners = dbs_data->tuners; 601 - od_tuners->powersave_bias = powersave_bias; 596 + if (!policy) 597 + continue; 602 598 603 599 cpumask_or(&done, &done, policy->cpus); 600 + 601 + if (policy->governor != &cpufreq_gov_ondemand) 602 + continue; 603 + 604 + dbs_data = policy->governor_data; 605 + od_tuners = dbs_data->tuners; 606 + od_tuners->powersave_bias = default_powersave_bias; 604 607 } 605 608 put_online_cpus(); 606 609 }
+21 -1
drivers/gpio/gpio-omap.c
··· 1094 1094 const struct omap_gpio_platform_data *pdata; 1095 1095 struct resource *res; 1096 1096 struct gpio_bank *bank; 1097 + #ifdef CONFIG_ARCH_OMAP1 1098 + int irq_base; 1099 + #endif 1097 1100 1098 1101 match = of_match_device(of_match_ptr(omap_gpio_match), dev); 1099 1102 ··· 1138 1135 pdata->get_context_loss_count; 1139 1136 } 1140 1137 1138 + #ifdef CONFIG_ARCH_OMAP1 1139 + /* 1140 + * REVISIT: Once we have OMAP1 supporting SPARSE_IRQ, we can drop 1141 + * irq_alloc_descs() and irq_domain_add_legacy() and just use a 1142 + * linear IRQ domain mapping for all OMAP platforms. 1143 + */ 1144 + irq_base = irq_alloc_descs(-1, 0, bank->width, 0); 1145 + if (irq_base < 0) { 1146 + dev_err(dev, "Couldn't allocate IRQ numbers\n"); 1147 + return -ENODEV; 1148 + } 1141 1149 1150 + bank->domain = irq_domain_add_legacy(node, bank->width, irq_base, 1151 + 0, &irq_domain_simple_ops, NULL); 1152 + #else 1142 1153 bank->domain = irq_domain_add_linear(node, bank->width, 1143 1154 &irq_domain_simple_ops, NULL); 1144 - if (!bank->domain) 1155 + #endif 1156 + if (!bank->domain) { 1157 + dev_err(dev, "Couldn't register an IRQ domain\n"); 1145 1158 return -ENODEV; 1159 + } 1146 1160 1147 1161 if (bank->regs->set_dataout && bank->regs->clr_dataout) 1148 1162 bank->set_dataout = _set_gpio_dataout_reg;
+1 -2
drivers/gpu/drm/drm_prime.c
··· 190 190 if (ret) 191 191 return ERR_PTR(ret); 192 192 } 193 - return dma_buf_export(obj, &drm_gem_prime_dmabuf_ops, obj->size, 194 - 0600); 193 + return dma_buf_export(obj, &drm_gem_prime_dmabuf_ops, obj->size, flags); 195 194 } 196 195 EXPORT_SYMBOL(drm_gem_prime_export); 197 196
+2
drivers/gpu/drm/i915/i915_drv.h
··· 1697 1697 struct dma_buf *i915_gem_prime_export(struct drm_device *dev, 1698 1698 struct drm_gem_object *gem_obj, int flags); 1699 1699 1700 + void i915_gem_restore_fences(struct drm_device *dev); 1701 + 1700 1702 /* i915_gem_context.c */ 1701 1703 void i915_gem_context_init(struct drm_device *dev); 1702 1704 void i915_gem_context_fini(struct drm_device *dev);
+17 -20
drivers/gpu/drm/i915/i915_gem.c
··· 1801 1801 gfp |= __GFP_NORETRY | __GFP_NOWARN | __GFP_NO_KSWAPD; 1802 1802 gfp &= ~(__GFP_IO | __GFP_WAIT); 1803 1803 } 1804 - 1804 + #ifdef CONFIG_SWIOTLB 1805 + if (swiotlb_nr_tbl()) { 1806 + st->nents++; 1807 + sg_set_page(sg, page, PAGE_SIZE, 0); 1808 + sg = sg_next(sg); 1809 + continue; 1810 + } 1811 + #endif 1805 1812 if (!i || page_to_pfn(page) != last_pfn + 1) { 1806 1813 if (i) 1807 1814 sg = sg_next(sg); ··· 1819 1812 } 1820 1813 last_pfn = page_to_pfn(page); 1821 1814 } 1822 - 1823 - sg_mark_end(sg); 1815 + #ifdef CONFIG_SWIOTLB 1816 + if (!swiotlb_nr_tbl()) 1817 + #endif 1818 + sg_mark_end(sg); 1824 1819 obj->pages = st; 1825 1820 1826 1821 if (i915_gem_object_needs_bit17_swizzle(obj)) ··· 2126 2117 } 2127 2118 } 2128 2119 2129 - static void i915_gem_reset_fences(struct drm_device *dev) 2120 + void i915_gem_restore_fences(struct drm_device *dev) 2130 2121 { 2131 2122 struct drm_i915_private *dev_priv = dev->dev_private; 2132 2123 int i; 2133 2124 2134 2125 for (i = 0; i < dev_priv->num_fence_regs; i++) { 2135 2126 struct drm_i915_fence_reg *reg = &dev_priv->fence_regs[i]; 2136 - 2137 - if (reg->obj) 2138 - i915_gem_object_fence_lost(reg->obj); 2139 - 2140 - i915_gem_write_fence(dev, i, NULL); 2141 - 2142 - reg->pin_count = 0; 2143 - reg->obj = NULL; 2144 - INIT_LIST_HEAD(&reg->lru_list); 2127 + i915_gem_write_fence(dev, i, reg->obj); 2145 2128 } 2146 - 2147 - INIT_LIST_HEAD(&dev_priv->mm.fence_list); 2148 2129 } 2149 2130 2150 2131 void i915_gem_reset(struct drm_device *dev) ··· 2157 2158 obj->base.read_domains &= ~I915_GEM_GPU_DOMAINS; 2158 2159 } 2159 2160 2160 - /* The fence registers are invalidated so clear them out */ 2161 - i915_gem_reset_fences(dev); 2161 + i915_gem_restore_fences(dev); 2162 2162 } 2163 2163 2164 2164 /** ··· 3863 3865 if (!drm_core_check_feature(dev, DRIVER_MODESET)) 3864 3866 i915_gem_evict_everything(dev); 3865 3867 3866 - i915_gem_reset_fences(dev); 3867 -
3868 3868 /* Hack! Don't let anybody do execbuf while we don't control the chip. 3869 3869 * We need to replace this with a semaphore, or something. 3870 3870 * And not confound mm.suspended! ··· 4189 4193 dev_priv->num_fence_regs = 8; 4190 4194 4191 4195 /* Initialize fence registers to zero */ 4192 - i915_gem_reset_fences(dev); 4196 + INIT_LIST_HEAD(&dev_priv->mm.fence_list); 4197 + i915_gem_restore_fences(dev); 4193 4198 4194 4199 i915_gem_detect_bit_6_swizzle(dev); 4195 4200 init_waitqueue_head(&dev_priv->pending_flip_queue);
+1
drivers/gpu/drm/i915/i915_suspend.c
··· 384 384 385 385 mutex_lock(&dev->struct_mutex); 386 386 387 + i915_gem_restore_fences(dev); 387 388 i915_restore_display(dev); 388 389 389 390 if (!drm_core_check_feature(dev, DRIVER_MODESET)) {
+5
drivers/gpu/drm/qxl/qxl_ioctl.c
··· 171 171 if (user_cmd.command_size > PAGE_SIZE - sizeof(union qxl_release_info)) 172 172 return -EINVAL; 173 173 174 + if (!access_ok(VERIFY_READ, 175 + (void *)(unsigned long)user_cmd.command, 176 + user_cmd.command_size)) 177 + return -EFAULT; 178 + 174 179 ret = qxl_alloc_release_reserved(qdev, 175 180 sizeof(union qxl_release_info) + 176 181 user_cmd.command_size,
+10 -3
drivers/gpu/drm/radeon/r600.c
··· 2687 2687 int r600_uvd_init(struct radeon_device *rdev) 2688 2688 { 2689 2689 int i, j, r; 2690 + /* disable byte swapping */ 2691 + u32 lmi_swap_cntl = 0; 2692 + u32 mp_swap_cntl = 0; 2690 2693 2691 2694 /* raise clocks while booting up the VCPU */ 2692 2695 radeon_set_uvd_clocks(rdev, 53300, 40000); ··· 2714 2711 WREG32(UVD_LMI_CTRL, 0x40 | (1 << 8) | (1 << 13) | 2715 2712 (1 << 21) | (1 << 9) | (1 << 20)); 2716 2713 2717 - /* disable byte swapping */ 2718 - WREG32(UVD_LMI_SWAP_CNTL, 0); 2719 - WREG32(UVD_MP_SWAP_CNTL, 0); 2714 + #ifdef __BIG_ENDIAN 2715 + /* swap (8 in 32) RB and IB */ 2716 + lmi_swap_cntl = 0xa; 2717 + mp_swap_cntl = 0; 2718 + #endif 2719 + WREG32(UVD_LMI_SWAP_CNTL, lmi_swap_cntl); 2720 + WREG32(UVD_MP_SWAP_CNTL, mp_swap_cntl); 2720 2721 2721 2722 WREG32(UVD_MPC_SET_MUXA0, 0x40c2040); 2722 2723 WREG32(UVD_MPC_SET_MUXA1, 0x0);
+24 -29
drivers/gpu/drm/radeon/radeon_device.c
··· 244 244 */ 245 245 void radeon_wb_disable(struct radeon_device *rdev) 246 246 { 247 - int r; 248 - 249 - if (rdev->wb.wb_obj) { 250 - r = radeon_bo_reserve(rdev->wb.wb_obj, false); 251 - if (unlikely(r != 0)) 252 - return; 253 - radeon_bo_kunmap(rdev->wb.wb_obj); 254 - radeon_bo_unpin(rdev->wb.wb_obj); 255 - radeon_bo_unreserve(rdev->wb.wb_obj); 256 - } 257 247 rdev->wb.enabled = false; 258 248 } 259 249 ··· 259 269 { 260 270 radeon_wb_disable(rdev); 261 271 if (rdev->wb.wb_obj) { 272 + if (!radeon_bo_reserve(rdev->wb.wb_obj, false)) { 273 + radeon_bo_kunmap(rdev->wb.wb_obj); 274 + radeon_bo_unpin(rdev->wb.wb_obj); 275 + radeon_bo_unreserve(rdev->wb.wb_obj); 276 + } 262 277 radeon_bo_unref(&rdev->wb.wb_obj); 263 278 rdev->wb.wb = NULL; 264 279 rdev->wb.wb_obj = NULL; ··· 290 295 dev_warn(rdev->dev, "(%d) create WB bo failed\n", r); 291 296 return r; 292 297 } 293 - } 294 - r = radeon_bo_reserve(rdev->wb.wb_obj, false); 295 - if (unlikely(r != 0)) { 296 - radeon_wb_fini(rdev); 297 - return r; 298 - } 299 - r = radeon_bo_pin(rdev->wb.wb_obj, RADEON_GEM_DOMAIN_GTT, 300 - &rdev->wb.gpu_addr); 301 - if (r) { 298 + r = radeon_bo_reserve(rdev->wb.wb_obj, false); 299 + if (unlikely(r != 0)) { 300 + radeon_wb_fini(rdev); 301 + return r; 302 + } 303 + r = radeon_bo_pin(rdev->wb.wb_obj, RADEON_GEM_DOMAIN_GTT, 304 + &rdev->wb.gpu_addr); 305 + if (r) { 306 + radeon_bo_unreserve(rdev->wb.wb_obj); 307 + dev_warn(rdev->dev, "(%d) pin WB bo failed\n", r); 308 + radeon_wb_fini(rdev); 309 + return r; 310 + } 311 + r = radeon_bo_kmap(rdev->wb.wb_obj, (void **)&rdev->wb.wb); 302 312 radeon_bo_unreserve(rdev->wb.wb_obj); 303 - dev_warn(rdev->dev, "(%d) pin WB bo failed\n", r); 304 - radeon_wb_fini(rdev); 305 - return r; 306 - } 307 - r = radeon_bo_kmap(rdev->wb.wb_obj, (void **)&rdev->wb.wb); 308 - radeon_bo_unreserve(rdev->wb.wb_obj); 309 - if (r) { 310 - dev_warn(rdev->dev, "(%d) map WB bo failed\n", r); 311 - radeon_wb_fini(rdev); 312 - return r; 313 + if (r) {
314 + dev_warn(rdev->dev, "(%d) map WB bo failed\n", r); 315 + radeon_wb_fini(rdev); 316 + return r; 317 + } 313 318 } 314 319 315 320 /* clear wb memory */
+8 -2
drivers/gpu/drm/radeon/radeon_fence.c
··· 63 63 { 64 64 struct radeon_fence_driver *drv = &rdev->fence_drv[ring]; 65 65 if (likely(rdev->wb.enabled || !drv->scratch_reg)) { 66 - *drv->cpu_addr = cpu_to_le32(seq); 66 + if (drv->cpu_addr) { 67 + *drv->cpu_addr = cpu_to_le32(seq); 68 + } 67 69 } else { 68 70 WREG32(drv->scratch_reg, seq); 69 71 } ··· 86 84 u32 seq = 0; 87 85 88 86 if (likely(rdev->wb.enabled || !drv->scratch_reg)) { 89 - seq = le32_to_cpu(*drv->cpu_addr); 87 + if (drv->cpu_addr) { 88 + seq = le32_to_cpu(*drv->cpu_addr); 89 + } else { 90 + seq = lower_32_bits(atomic64_read(&drv->last_seq)); 91 + } 90 92 } else { 91 93 seq = RREG32(drv->scratch_reg); 92 94 }
+4 -2
drivers/gpu/drm/radeon/radeon_gart.c
··· 1197 1197 int radeon_vm_bo_rmv(struct radeon_device *rdev, 1198 1198 struct radeon_bo_va *bo_va) 1199 1199 { 1200 - int r; 1200 + int r = 0; 1201 1201 1202 1202 mutex_lock(&rdev->vm_manager.lock); 1203 1203 mutex_lock(&bo_va->vm->mutex); 1204 - r = radeon_vm_bo_update_pte(rdev, bo_va->vm, bo_va->bo, NULL); 1204 + if (bo_va->soffset) { 1205 + r = radeon_vm_bo_update_pte(rdev, bo_va->vm, bo_va->bo, NULL); 1206 + } 1205 1207 mutex_unlock(&rdev->vm_manager.lock); 1206 1208 list_del(&bo_va->vm_list); 1207 1209 mutex_unlock(&bo_va->vm->mutex);
+7
drivers/gpu/drm/radeon/radeon_ring.c
··· 402 402 return -ENOMEM; 403 403 /* Align requested size with padding so unlock_commit can 404 404 * pad safely */ 405 + radeon_ring_free_size(rdev, ring); 406 + if (ring->ring_free_dw == (ring->ring_size / 4)) { 407 + /* This is an empty ring update lockup info to avoid 408 + * false positive. 409 + */ 410 + radeon_ring_lockup_update(ring); 411 + } 405 412 ndw = (ndw + ring->align_mask) & ~ring->align_mask; 406 413 while (ndw > (ring->ring_free_dw - 1)) { 407 414 radeon_ring_free_size(rdev, ring);
+31 -17
drivers/gpu/drm/radeon/radeon_uvd.c
··· 159 159 if (!r) { 160 160 radeon_bo_kunmap(rdev->uvd.vcpu_bo); 161 161 radeon_bo_unpin(rdev->uvd.vcpu_bo); 162 + rdev->uvd.cpu_addr = NULL; 163 + if (!radeon_bo_pin(rdev->uvd.vcpu_bo, RADEON_GEM_DOMAIN_CPU, NULL)) { 164 + radeon_bo_kmap(rdev->uvd.vcpu_bo, &rdev->uvd.cpu_addr); 165 + } 162 166 radeon_bo_unreserve(rdev->uvd.vcpu_bo); 167 + 168 + if (rdev->uvd.cpu_addr) { 169 + radeon_fence_driver_start_ring(rdev, R600_RING_TYPE_UVD_INDEX); 170 + } else { 171 + rdev->fence_drv[R600_RING_TYPE_UVD_INDEX].cpu_addr = NULL; 172 + } 163 173 } 164 174 return r; 165 175 } ··· 187 177 dev_err(rdev->dev, "(%d) failed to reserve UVD bo\n", r); 188 178 return r; 189 179 } 180 + 181 + /* Have been pin in cpu unmap unpin */ 182 + radeon_bo_kunmap(rdev->uvd.vcpu_bo); 183 + radeon_bo_unpin(rdev->uvd.vcpu_bo); 190 184 191 185 r = radeon_bo_pin(rdev->uvd.vcpu_bo, RADEON_GEM_DOMAIN_VRAM, 192 186 &rdev->uvd.gpu_addr); ··· 627 613 } 628 614 629 615 /* stitch together an UVD create msg */ 630 - msg[0] = 0x00000de4; 631 - msg[1] = 0x00000000; 632 - msg[2] = handle; 633 - msg[3] = 0x00000000; 634 - msg[4] = 0x00000000; 635 - msg[5] = 0x00000000; 636 - msg[6] = 0x00000000; 637 - msg[7] = 0x00000780; 638 - msg[8] = 0x00000440; 639 - msg[9] = 0x00000000; 640 - msg[10] = 0x01b37000; 616 + msg[0] = cpu_to_le32(0x00000de4); 617 + msg[1] = cpu_to_le32(0x00000000); 618 + msg[2] = cpu_to_le32(handle); 619 + msg[3] = cpu_to_le32(0x00000000); 620 + msg[4] = cpu_to_le32(0x00000000); 621 + msg[5] = cpu_to_le32(0x00000000); 622 + msg[6] = cpu_to_le32(0x00000000); 623 + msg[7] = cpu_to_le32(0x00000780); 624 + msg[8] = cpu_to_le32(0x00000440); 625 + msg[9] = cpu_to_le32(0x00000000); 626 + msg[10] = cpu_to_le32(0x01b37000); 641 627 for (i = 11; i < 1024; ++i) 642 - msg[i] = 0x0; 628 + msg[i] = cpu_to_le32(0x0); 643 629 644 630 radeon_bo_kunmap(bo); 645 631 radeon_bo_unreserve(bo); ··· 673 659 } 674 660 675 661 /* stitch together an UVD destroy msg */ 676 - msg[0] = 0x00000de4; 677 - msg[1] = 0x00000002; 
678 - msg[2] = handle; 679 - msg[3] = 0x00000000; 662 + msg[0] = cpu_to_le32(0x00000de4); 663 + msg[1] = cpu_to_le32(0x00000002); 664 + msg[2] = cpu_to_le32(handle); 665 + msg[3] = cpu_to_le32(0x00000000); 680 666 for (i = 4; i < 1024; ++i) 681 - msg[i] = 0x0; 667 + msg[i] = cpu_to_le32(0x0); 682 668 683 669 radeon_bo_kunmap(bo); 684 670 radeon_bo_unreserve(bo);
+1 -1
drivers/input/joystick/xpad.c
··· 137 137 { 0x0738, 0x4540, "Mad Catz Beat Pad", MAP_DPAD_TO_BUTTONS, XTYPE_XBOX }, 138 138 { 0x0738, 0x4556, "Mad Catz Lynx Wireless Controller", 0, XTYPE_XBOX }, 139 139 { 0x0738, 0x4716, "Mad Catz Wired Xbox 360 Controller", 0, XTYPE_XBOX360 }, 140 - { 0x0738, 0x4728, "Mad Catz Street Fighter IV FightPad", XTYPE_XBOX360 }, 140 + { 0x0738, 0x4728, "Mad Catz Street Fighter IV FightPad", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX360 }, 141 141 { 0x0738, 0x4738, "Mad Catz Wired Xbox 360 Controller (SFIV)", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX360 }, 142 142 { 0x0738, 0x6040, "Mad Catz Beat Pad Pro", MAP_DPAD_TO_BUTTONS, XTYPE_XBOX }, 143 143 { 0x0738, 0xbeef, "Mad Catz JOYTECH NEO SE Advanced GamePad", XTYPE_XBOX360 },
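The xpad fix restores a field a positional initializer had silently dropped: without MAP_TRIGGERS_TO_BUTTONS in the entry, XTYPE_XBOX360 slid into the mapping slot and xtype took the default 0. Designated initializers make such table entries immune to that class of bug; a sketch with invented flag values (only the initializer style is the point):

```c
#include <stdint.h>

/* Flag values are made up for illustration. */
#define MAP_TRIGGERS_TO_BUTTONS	(1 << 2)
#define XTYPE_XBOX360		1

struct xpad_device_demo {
	uint16_t idVendor;
	uint16_t idProduct;
	const char *name;
	uint8_t mapping;
	uint8_t xtype;
};

/* Every field is named, so omitting one cannot shift the rest into the
 * wrong slots the way the broken positional entry did. */
static const struct xpad_device_demo fightpad = {
	.idVendor  = 0x0738,
	.idProduct = 0x4728,
	.name      = "Mad Catz Street Fighter IV FightPad",
	.mapping   = MAP_TRIGGERS_TO_BUTTONS,
	.xtype     = XTYPE_XBOX360,
};
```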
+1
drivers/input/keyboard/Kconfig
··· 431 431 432 432 config KEYBOARD_OPENCORES 433 433 tristate "OpenCores Keyboard Controller" 434 + depends on HAS_IOMEM 434 435 help 435 436 Say Y here if you want to use the OpenCores Keyboard Controller 436 437 http://www.opencores.org/project,keyboardcontroller
+1
drivers/input/serio/Kconfig
··· 205 205 206 206 config SERIO_ALTERA_PS2 207 207 tristate "Altera UP PS/2 controller" 208 + depends on HAS_IOMEM 208 209 help 209 210 Say Y here if you have Altera University Program PS/2 ports. 210 211
+2
drivers/input/tablet/wacom_wac.c
··· 363 363 case 0x140802: /* Intuos4/5 13HD/24HD Classic Pen */ 364 364 case 0x160802: /* Cintiq 13HD Pro Pen */ 365 365 case 0x180802: /* DTH2242 Pen */ 366 + case 0x100802: /* Intuos4/5 13HD/24HD General Pen */ 366 367 wacom->tool[idx] = BTN_TOOL_PEN; 367 368 break; 368 369 ··· 402 401 case 0x10080c: /* Intuos4/5 13HD/24HD Art Pen Eraser */ 403 402 case 0x16080a: /* Cintiq 13HD Pro Pen Eraser */ 404 403 case 0x18080a: /* DTH2242 Eraser */ 404 + case 0x10080a: /* Intuos4/5 13HD/24HD General Pen Eraser */ 405 405 wacom->tool[idx] = BTN_TOOL_RUBBER; 406 406 break; 407 407
+21 -7
drivers/input/touchscreen/cyttsp_core.c
··· 116 116 return ttsp_write_block_data(ts, CY_REG_BASE, sizeof(cmd), &cmd); 117 117 } 118 118 119 + static int cyttsp_handshake(struct cyttsp *ts) 120 + { 121 + if (ts->pdata->use_hndshk) 122 + return ttsp_send_command(ts, 123 + ts->xy_data.hst_mode ^ CY_HNDSHK_BIT); 124 + 125 + return 0; 126 + } 127 + 119 128 static int cyttsp_load_bl_regs(struct cyttsp *ts) 120 129 { 121 130 memset(&ts->bl_data, 0, sizeof(ts->bl_data)); ··· 142 133 memcpy(bl_cmd, bl_command, sizeof(bl_command)); 143 134 if (ts->pdata->bl_keys) 144 135 memcpy(&bl_cmd[sizeof(bl_command) - CY_NUM_BL_KEYS], 145 - ts->pdata->bl_keys, sizeof(bl_command)); 136 + ts->pdata->bl_keys, CY_NUM_BL_KEYS); 146 137 147 138 error = ttsp_write_block_data(ts, CY_REG_BASE, 148 139 sizeof(bl_cmd), bl_cmd); ··· 176 167 if (error) 177 168 return error; 178 169 170 + error = cyttsp_handshake(ts); 171 + if (error) 172 + return error; 173 + 179 174 return ts->xy_data.act_dist == CY_ACT_DIST_DFLT ? -EIO : 0; 180 175 } 181 176 ··· 198 185 msleep(CY_DELAY_DFLT); 199 186 error = ttsp_read_block_data(ts, CY_REG_BASE, sizeof(ts->sysinfo_data), 200 187 &ts->sysinfo_data); 188 + if (error) 189 + return error; 190 + 191 + error = cyttsp_handshake(ts); 201 192 if (error) 202 193 return error; 203 194 ··· 361 344 goto out; 362 345 363 346 /* provide flow control handshake */ 364 - if (ts->pdata->use_hndshk) { 365 - error = ttsp_send_command(ts, 366 - ts->xy_data.hst_mode ^ CY_HNDSHK_BIT); 367 - if (error) 368 - goto out; 369 - } 347 + error = cyttsp_handshake(ts); 348 + if (error) 349 + goto out; 370 350 371 351 if (unlikely(ts->state == CY_IDLE_STATE)) 372 352 goto out;
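Two things happen in the cyttsp hunk: the duplicated flow-control handshake is hoisted into cyttsp_handshake(), and a memcpy that copied sizeof(bl_command) bytes into the key region is fixed to copy CY_NUM_BL_KEYS. A userspace sketch of the corrected copy (the bl_command contents here are invented; only the size arithmetic mirrors the driver):

```c
#include <string.h>

#define CY_NUM_BL_KEYS 8

/* Stand-in bootloader command: a 3-byte preamble followed by 8 key bytes. */
static const unsigned char bl_command[] = {
	0x00, 0xff, 0xa5, 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07
};

/* Overwrite only the trailing key region. Copying sizeof(bl_command)
 * bytes here, as the old code did, would read past the end of an 8-byte
 * keys array and write past the end of cmd. */
static void patch_bl_keys(unsigned char *cmd, const unsigned char *keys)
{
	memcpy(&cmd[sizeof(bl_command) - CY_NUM_BL_KEYS], keys, CY_NUM_BL_KEYS);
}
```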
+1 -1
drivers/input/touchscreen/cyttsp_core.h
··· 67 67 /* TTSP System Information interface definition */ 68 68 struct cyttsp_sysinfo_data { 69 69 u8 hst_mode; 70 - u8 mfg_cmd; 71 70 u8 mfg_stat; 71 + u8 mfg_cmd; 72 72 u8 cid[3]; 73 73 u8 tt_undef1; 74 74 u8 uid[8];
+1 -1
drivers/irqchip/irq-gic.c
··· 705 705 static int __cpuinit gic_secondary_init(struct notifier_block *nfb, 706 706 unsigned long action, void *hcpu) 707 707 { 708 - if (action == CPU_STARTING) 708 + if (action == CPU_STARTING || action == CPU_STARTING_FROZEN) 709 709 gic_cpu_init(&gic_data[0]); 710 710 return NOTIFY_OK; 711 711 }
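The gic change is the usual CPU-hotplug notifier gotcha: during system suspend/resume the action code arrives with CPU_TASKS_FROZEN ORed in, so a bare comparison against CPU_STARTING misses the resume path and the secondary GIC never gets reinitialised. An equivalent way to cover both is to mask off the frozen bit, sketched below with constants that mirror include/linux/cpu.h of that era (values illustrative):

```c
/* Illustrative values modelled on include/linux/cpu.h. */
#define CPU_ONLINE		0x0002
#define CPU_STARTING		0x000A
#define CPU_TASKS_FROZEN	0x0010
#define CPU_STARTING_FROZEN	(CPU_STARTING | CPU_TASKS_FROZEN)

/* Equivalent to testing both CPU_STARTING and CPU_STARTING_FROZEN. */
static int is_cpu_starting(unsigned long action)
{
	return (action & ~(unsigned long)CPU_TASKS_FROZEN) == CPU_STARTING;
}
```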
+9 -3
drivers/media/Kconfig
··· 136 136 137 137 # This Kconfig option is used by both PCI and USB drivers 138 138 config TTPCI_EEPROM 139 - tristate 140 - depends on I2C 141 - default n 139 + tristate 140 + depends on I2C 141 + default n 142 142 143 143 source "drivers/media/dvb-core/Kconfig" 144 144 ··· 188 188 the needed demodulators). 189 189 190 190 If unsure say Y. 191 + 192 + config MEDIA_ATTACH 193 + bool 194 + depends on MEDIA_ANALOG_TV_SUPPORT || MEDIA_DIGITAL_TV_SUPPORT || MEDIA_RADIO_SUPPORT 195 + depends on MODULES 196 + default MODULES 191 197 192 198 source "drivers/media/i2c/Kconfig" 193 199 source "drivers/media/tuners/Kconfig"
+1 -1
drivers/media/i2c/s5c73m3/s5c73m3-core.c
··· 956 956 957 957 if (fie->pad != OIF_SOURCE_PAD) 958 958 return -EINVAL; 959 - if (fie->index > ARRAY_SIZE(s5c73m3_intervals)) 959 + if (fie->index >= ARRAY_SIZE(s5c73m3_intervals)) 960 960 return -EINVAL; 961 961 962 962 mutex_lock(&state->lock);
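This s5c73m3 one-liner (and the identical fix in fimc-is-regs.c further down) is the classic off-by-one: valid indices run 0 .. ARRAY_SIZE-1, so the bound check must reject index == ARRAY_SIZE with >=. A minimal sketch with stand-in table contents:

```c
#include <stddef.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

static const int intervals[] = { 10, 15, 30 };	/* stand-in table */

/* '>' here would let index == ARRAY_SIZE(intervals) read one element
 * past the end of the table; '>=' rejects it. */
static int lookup_interval(size_t index, int *out)
{
	if (index >= ARRAY_SIZE(intervals))
		return -1;	/* -EINVAL in the drivers */
	*out = intervals[index];
	return 0;
}
```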
+3 -4
drivers/media/pci/cx88/cx88-alsa.c
··· 615 615 int changed = 0; 616 616 u32 old; 617 617 618 - if (core->board.audio_chip == V4L2_IDENT_WM8775) 618 + if (core->sd_wm8775) 619 619 snd_cx88_wm8775_volume_put(kcontrol, value); 620 620 621 621 left = value->value.integer.value[0] & 0x3f; ··· 682 682 vol ^= bit; 683 683 cx_swrite(SHADOW_AUD_VOL_CTL, AUD_VOL_CTL, vol); 684 684 /* Pass mute onto any WM8775 */ 685 - if ((core->board.audio_chip == V4L2_IDENT_WM8775) && 686 - ((1<<6) == bit)) 685 + if (core->sd_wm8775 && ((1<<6) == bit)) 687 686 wm8775_s_ctrl(core, V4L2_CID_AUDIO_MUTE, 0 != (vol & bit)); 688 687 ret = 1; 689 688 } ··· 902 903 goto error; 903 904 904 905 /* If there's a wm8775 then add a Line-In ALC switch */ 905 - if (core->board.audio_chip == V4L2_IDENT_WM8775) 906 + if (core->sd_wm8775) 906 907 snd_ctl_add(card, snd_ctl_new1(&snd_cx88_alc_switch, chip)); 907 908 908 909 strcpy (card->driver, "CX88x");
+3 -5
drivers/media/pci/cx88/cx88-video.c
··· 385 385 /* The wm8775 module has the "2" route hardwired into 386 386 the initialization. Some boards may use different 387 387 routes for different inputs. HVR-1300 surely does */ 388 - if (core->board.audio_chip && 389 - core->board.audio_chip == V4L2_IDENT_WM8775) { 388 + if (core->sd_wm8775) { 390 389 call_all(core, audio, s_routing, 391 390 INPUT(input).audioroute, 0, 0); 392 391 } ··· 770 771 cx_write(MO_GP1_IO, core->board.radio.gpio1); 771 772 cx_write(MO_GP2_IO, core->board.radio.gpio2); 772 773 if (core->board.radio.audioroute) { 773 - if(core->board.audio_chip && 774 - core->board.audio_chip == V4L2_IDENT_WM8775) { 774 + if (core->sd_wm8775) { 775 775 call_all(core, audio, s_routing, 776 776 core->board.radio.audioroute, 0, 0); 777 777 } ··· 957 959 u32 value,mask; 958 960 959 961 /* Pass changes onto any WM8775 */ 960 - if (core->board.audio_chip == V4L2_IDENT_WM8775) { 962 + if (core->sd_wm8775) { 961 963 switch (ctrl->id) { 962 964 case V4L2_CID_AUDIO_MUTE: 963 965 wm8775_s_ctrl(core, ctrl->id, ctrl->val);
+9
drivers/media/platform/coda.c
··· 576 576 return v4l2_m2m_dqbuf(file, ctx->m2m_ctx, buf); 577 577 } 578 578 579 + static int vidioc_create_bufs(struct file *file, void *priv, 580 + struct v4l2_create_buffers *create) 581 + { 582 + struct coda_ctx *ctx = fh_to_ctx(priv); 583 + 584 + return v4l2_m2m_create_bufs(file, ctx->m2m_ctx, create); 585 + } 586 + 579 587 static int vidioc_streamon(struct file *file, void *priv, 580 588 enum v4l2_buf_type type) 581 589 { ··· 618 610 619 611 .vidioc_qbuf = vidioc_qbuf, 620 612 .vidioc_dqbuf = vidioc_dqbuf, 613 + .vidioc_create_bufs = vidioc_create_bufs, 621 614 622 615 .vidioc_streamon = vidioc_streamon, 623 616 .vidioc_streamoff = vidioc_streamoff,
+15
drivers/media/platform/davinci/vpbe_display.c
··· 916 916 other video window */ 917 917 918 918 layer->pix_fmt = *pixfmt; 919 + if (pixfmt->pixelformat == V4L2_PIX_FMT_NV12) { 920 + struct vpbe_layer *otherlayer; 921 + 922 + otherlayer = _vpbe_display_get_other_win_layer(disp_dev, layer); 923 + /* if other layer is available, only 924 + * claim it, do not configure it 925 + */ 926 + ret = osd_device->ops.request_layer(osd_device, 927 + otherlayer->layer_info.id); 928 + if (ret < 0) { 929 + v4l2_err(&vpbe_dev->v4l2_dev, 930 + "Display Manager failed to allocate layer\n"); 931 + return -EBUSY; 932 + } 933 + } 919 934 920 935 /* Get osd layer config */ 921 936 osd_device->ops.get_layer_config(osd_device,
+1 -2
drivers/media/platform/davinci/vpfe_capture.c
··· 1837 1837 if (NULL == ccdc_cfg) { 1838 1838 v4l2_err(pdev->dev.driver, 1839 1839 "Memory allocation failed for ccdc_cfg\n"); 1840 - goto probe_free_lock; 1840 + goto probe_free_dev_mem; 1841 1841 } 1842 1842 1843 1843 mutex_lock(&ccdc_lock); ··· 1991 1991 free_irq(vpfe_dev->ccdc_irq0, vpfe_dev); 1992 1992 probe_free_ccdc_cfg_mem: 1993 1993 kfree(ccdc_cfg); 1994 - probe_free_lock: 1995 1994 mutex_unlock(&ccdc_lock); 1996 1995 probe_free_dev_mem: 1997 1996 kfree(vpfe_dev);
+1 -1
drivers/media/platform/exynos4-is/fimc-is-regs.c
··· 174 174 HIC_CAPTURE_STILL, HIC_CAPTURE_VIDEO, 175 175 }; 176 176 177 - if (WARN_ON(is->config_index > ARRAY_SIZE(cmd))) 177 + if (WARN_ON(is->config_index >= ARRAY_SIZE(cmd))) 178 178 return -EINVAL; 179 179 180 180 mcuctl_write(cmd[is->config_index], is, MCUCTL_REG_ISSR(0));
+18 -30
drivers/media/platform/exynos4-is/fimc-is.c
··· 48 48 [ISS_CLK_LITE0] = "lite0",
 49 49 [ISS_CLK_LITE1] = "lite1",
 50 50 [ISS_CLK_MPLL] = "mpll",
 51 - [ISS_CLK_SYSREG] = "sysreg",
 52 51 [ISS_CLK_ISP] = "isp",
 53 52 [ISS_CLK_DRC] = "drc",
 54 53 [ISS_CLK_FD] = "fd",
··· 70 71 for (i = 0; i < ISS_CLKS_MAX; i++) {
 71 72 if (IS_ERR(is->clocks[i]))
 72 73 continue;
 73 - clk_unprepare(is->clocks[i]);
 74 74 clk_put(is->clocks[i]);
 75 75 is->clocks[i] = ERR_PTR(-EINVAL);
 76 76 }
··· 88 90 ret = PTR_ERR(is->clocks[i]);
 89 91 goto err;
 90 92 }
 91 - ret = clk_prepare(is->clocks[i]);
 92 - if (ret < 0) {
 93 - clk_put(is->clocks[i]);
 94 - is->clocks[i] = ERR_PTR(-EINVAL);
 95 - goto err;
 96 - }
 97 93 }
 98 94 
 99 95 return 0;
··· 95 103 fimc_is_put_clocks(is);
 96 104 dev_err(&is->pdev->dev, "failed to get clock: %s\n",
 97 105 fimc_is_clocks[i]);
 98 - return -ENXIO;
 106 + return ret;
 99 107 }
 100 108 
 101 109 static int fimc_is_setup_clocks(struct fimc_is *is)
··· 136 144 for (i = 0; i < ISS_GATE_CLKS_MAX; i++) {
 137 145 if (IS_ERR(is->clocks[i]))
 138 146 continue;
 139 - ret = clk_enable(is->clocks[i]);
 147 + ret = clk_prepare_enable(is->clocks[i]);
 140 148 if (ret < 0) {
 141 149 dev_err(&is->pdev->dev, "clock %s enable failed\n",
 142 150 fimc_is_clocks[i]);
··· 155 163 
 156 164 for (i = 0; i < ISS_GATE_CLKS_MAX; i++) {
 157 165 if (!IS_ERR(is->clocks[i])) {
 158 - clk_disable(is->clocks[i]);
 166 + clk_disable_unprepare(is->clocks[i]);
 159 167 pr_debug("disabled clock: %s\n", fimc_is_clocks[i]);
 160 168 }
 161 169 }
··· 317 325 {
 318 326 struct device *dev = &is->pdev->dev;
 319 327 int ret;
 328 + 
 329 + if (is->fw.f_w == NULL) {
 330 + dev_err(dev, "firmware is not loaded\n");
 331 + return -EINVAL;
 332 + }
 320 333 
 321 334 memcpy(is->memory.vaddr, is->fw.f_w->data, is->fw.f_w->size);
 322 335 wmb();
··· 834 837 goto err_clk;
 835 838 }
 836 839 pm_runtime_enable(dev);
 837 - /*
 838 - * Enable only the ISP power domain, keep FIMC-IS clocks off until
 839 - the whole clock tree is configured. The ISP power domain needs
 840 - be active in order to acces any CMU_ISP clock registers.
 841 - */
 840 + 
 842 841 ret = pm_runtime_get_sync(dev);
 843 842 if (ret < 0)
 844 843 goto err_irq;
 845 - 
 846 - ret = fimc_is_setup_clocks(is);
 847 - pm_runtime_put_sync(dev);
 848 - 
 849 - if (ret < 0)
 850 - goto err_irq;
 851 - 
 852 - is->clk_init = true;
 853 844 
 854 845 is->alloc_ctx = vb2_dma_contig_init_ctx(dev);
 855 846 if (IS_ERR(is->alloc_ctx)) {
··· 860 875 if (ret < 0)
 861 876 goto err_dfs;
 862 877 
 878 + pm_runtime_put_sync(dev);
 879 + 
 863 880 dev_dbg(dev, "FIMC-IS registered successfully\n");
 864 881 return 0;
 865 882 
··· 881 894 static int fimc_is_runtime_resume(struct device *dev)
 882 895 {
 883 896 struct fimc_is *is = dev_get_drvdata(dev);
 897 + int ret;
 884 898 
 885 - if (!is->clk_init)
 886 - return 0;
 899 + ret = fimc_is_setup_clocks(is);
 900 + if (ret)
 901 + return ret;
 887 902 
 888 903 return fimc_is_enable_clocks(is);
 889 904 }
··· 894 905 {
 895 906 struct fimc_is *is = dev_get_drvdata(dev);
 896 907 
 897 - if (is->clk_init)
 898 - fimc_is_disable_clocks(is);
 899 - 
 908 + fimc_is_disable_clocks(is);
 900 909 return 0;
 901 910 }
··· 928 941 vb2_dma_contig_cleanup_ctx(is->alloc_ctx);
 929 942 fimc_is_put_clocks(is);
 930 943 fimc_is_debugfs_remove(is);
 931 - release_firmware(is->fw.f_w);
 944 + if (is->fw.f_w)
 945 + release_firmware(is->fw.f_w);
 932 946 fimc_is_free_cpu_memory(is);
 933 947 
 934 948 return 0;
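The fimc-is rework above stops splitting clk_prepare() (done at clk_get time) from clk_enable() (done at runtime-PM time) and instead uses the combined clk_prepare_enable()/clk_disable_unprepare() helpers, which also unwind correctly when enable fails. A userspace sketch of how the combined helpers pair up, with a trivial stand-in for the kernel's struct clk (clk_prepare() may sleep in the real framework, clk_enable() must not, which is why the two steps exist at all):

```c
/* Toy stand-in for the kernel clk framework, counting state transitions. */
struct clk { int prepared; int enabled; };

static int clk_prepare(struct clk *c)		{ c->prepared++; return 0; }
static void clk_unprepare(struct clk *c)	{ c->prepared--; }
static int clk_enable(struct clk *c)		{ if (!c->prepared) return -1; c->enabled++; return 0; }
static void clk_disable(struct clk *c)		{ c->enabled--; }

/* Combined helper: both steps, unwinding prepare if enable fails. */
static int clk_prepare_enable(struct clk *c)
{
	int ret = clk_prepare(c);
	if (ret)
		return ret;
	ret = clk_enable(c);
	if (ret)
		clk_unprepare(c);
	return ret;
}

static void clk_disable_unprepare(struct clk *c)
{
	clk_disable(c);
	clk_unprepare(c);
}
```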
-2
drivers/media/platform/exynos4-is/fimc-is.h
··· 73 73 ISS_CLK_LITE0, 74 74 ISS_CLK_LITE1, 75 75 ISS_CLK_MPLL, 76 - ISS_CLK_SYSREG, 77 76 ISS_CLK_ISP, 78 77 ISS_CLK_DRC, 79 78 ISS_CLK_FD, ··· 264 265 spinlock_t slock; 265 266 266 267 struct clk *clocks[ISS_CLKS_MAX]; 267 - bool clk_init; 268 268 void __iomem *regs; 269 269 void __iomem *pmu_regs; 270 270 int irq;
+2 -2
drivers/media/platform/exynos4-is/fimc-isp.c
··· 138 138 return 0; 139 139 } 140 140 141 - mf->colorspace = V4L2_COLORSPACE_JPEG; 141 + mf->colorspace = V4L2_COLORSPACE_SRGB; 142 142 143 143 mutex_lock(&isp->subdev_lock); 144 144 __is_get_frame_size(is, &cur_fmt); ··· 194 194 v4l2_dbg(1, debug, sd, "%s: pad%d: code: 0x%x, %dx%d\n", 195 195 __func__, fmt->pad, mf->code, mf->width, mf->height); 196 196 197 - mf->colorspace = V4L2_COLORSPACE_JPEG; 197 + mf->colorspace = V4L2_COLORSPACE_SRGB; 198 198 199 199 mutex_lock(&isp->subdev_lock); 200 200 __isp_subdev_try_format(isp, fmt);
+1 -1
drivers/media/platform/exynos4-is/mipi-csis.c
··· 746 746 node = v4l2_of_get_next_endpoint(node, NULL); 747 747 if (!node) { 748 748 dev_err(&pdev->dev, "No port node at %s\n", 749 - node->full_name); 749 + pdev->dev.of_node->full_name); 750 750 return -EINVAL; 751 751 } 752 752 /* Get port node and validate MIPI-CSI channel id. */
+1 -1
drivers/media/platform/s3c-camif/camif-core.h
··· 229 229 unsigned int state; 230 230 u16 fmt_flags; 231 231 u8 id; 232 - u8 rotation; 232 + u16 rotation; 233 233 u8 hflip; 234 234 u8 vflip; 235 235 unsigned int offset;
+1 -1
drivers/media/platform/s5p-jpeg/Makefile
··· 1 1 s5p-jpeg-objs := jpeg-core.o 2 - obj-$(CONFIG_VIDEO_SAMSUNG_S5P_JPEG) := s5p-jpeg.o 2 + obj-$(CONFIG_VIDEO_SAMSUNG_S5P_JPEG) += s5p-jpeg.o
+1 -1
drivers/media/platform/s5p-mfc/Makefile
··· 1 - obj-$(CONFIG_VIDEO_SAMSUNG_S5P_MFC) := s5p-mfc.o 1 + obj-$(CONFIG_VIDEO_SAMSUNG_S5P_MFC) += s5p-mfc.o 2 2 s5p-mfc-y += s5p_mfc.o s5p_mfc_intr.o 3 3 s5p-mfc-y += s5p_mfc_dec.o s5p_mfc_enc.o 4 4 s5p-mfc-y += s5p_mfc_ctrl.o s5p_mfc_pm.o
+3 -5
drivers/media/platform/s5p-mfc/s5p_mfc.c
··· 397 397 leave_handle_frame: 398 398 spin_unlock_irqrestore(&dev->irqlock, flags); 399 399 if ((ctx->src_queue_cnt == 0 && ctx->state != MFCINST_FINISHING) 400 - || ctx->dst_queue_cnt < ctx->dpb_count) 400 + || ctx->dst_queue_cnt < ctx->pb_count) 401 401 clear_work_bit(ctx); 402 402 s5p_mfc_hw_call(dev->mfc_ops, clear_int_flags, dev); 403 403 wake_up_ctx(ctx, reason, err); ··· 473 473 474 474 s5p_mfc_hw_call(dev->mfc_ops, dec_calc_dpb_size, ctx); 475 475 476 - ctx->dpb_count = s5p_mfc_hw_call(dev->mfc_ops, get_dpb_count, 476 + ctx->pb_count = s5p_mfc_hw_call(dev->mfc_ops, get_dpb_count, 477 477 dev); 478 478 ctx->mv_count = s5p_mfc_hw_call(dev->mfc_ops, get_mv_count, 479 479 dev); ··· 562 562 struct s5p_mfc_dev *dev = ctx->dev; 563 563 struct s5p_mfc_buf *mb_entry; 564 564 565 - mfc_debug(2, "Stream completed"); 565 + mfc_debug(2, "Stream completed\n"); 566 566 567 567 s5p_mfc_clear_int_flags(dev); 568 568 ctx->int_type = reason; ··· 1362 1362 .port_num = MFC_NUM_PORTS, 1363 1363 .buf_size = &buf_size_v5, 1364 1364 .buf_align = &mfc_buf_align_v5, 1365 - .mclk_name = "sclk_mfc", 1366 1365 .fw_name = "s5p-mfc.fw", 1367 1366 }; 1368 1367 ··· 1388 1389 .port_num = MFC_NUM_PORTS_V6, 1389 1390 .buf_size = &buf_size_v6, 1390 1391 .buf_align = &mfc_buf_align_v6, 1391 - .mclk_name = "aclk_333", 1392 1392 .fw_name = "s5p-mfc-v6.fw", 1393 1393 }; 1394 1394
+3 -3
drivers/media/platform/s5p-mfc/s5p_mfc_common.h
··· 138 138 MFCINST_INIT = 100, 139 139 MFCINST_GOT_INST, 140 140 MFCINST_HEAD_PARSED, 141 + MFCINST_HEAD_PRODUCED, 141 142 MFCINST_BUFS_SET, 142 143 MFCINST_RUNNING, 143 144 MFCINST_FINISHING, ··· 232 231 unsigned int port_num; 233 232 struct s5p_mfc_buf_size *buf_size; 234 233 struct s5p_mfc_buf_align *buf_align; 235 - char *mclk_name; 236 234 char *fw_name; 237 235 }; 238 236 ··· 438 438 u32 rc_framerate_num; 439 439 u32 rc_framerate_denom; 440 440 441 - union { 441 + struct { 442 442 struct s5p_mfc_h264_enc_params h264; 443 443 struct s5p_mfc_mpeg4_enc_params mpeg4; 444 444 } codec; ··· 602 602 int after_packed_pb; 603 603 int sei_fp_parse; 604 604 605 - int dpb_count; 605 + int pb_count; 606 606 int total_dpb_count; 607 607 int mv_count; 608 608 /* Buffers */
+1 -1
drivers/media/platform/s5p-mfc/s5p_mfc_ctrl.c
··· 38 38 dev->fw_virt_addr = dma_alloc_coherent(dev->mem_dev_l, dev->fw_size, 39 39 &dev->bank1, GFP_KERNEL); 40 40 41 - if (IS_ERR(dev->fw_virt_addr)) { 41 + if (IS_ERR_OR_NULL(dev->fw_virt_addr)) { 42 42 dev->fw_virt_addr = NULL; 43 43 mfc_err("Allocating bitprocessor buffer failed\n"); 44 44 return -ENOMEM;
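The one-liner above matters because dma_alloc_coherent() reports failure with NULL rather than an ERR_PTR(), and IS_ERR(NULL) is false, so the old check let a failed allocation through. A sketch of the two kernel predicates (error pointers live in the top MAX_ERRNO bytes of the address space):

```c
#include <stddef.h>
#include <stdint.h>

#define MAX_ERRNO 4095

/* True for pointers encoding -1 .. -MAX_ERRNO, i.e. ERR_PTR() values. */
static int IS_ERR(const void *ptr)
{
	return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}

/* Catches both failure conventions: NULL-returning allocators and
 * ERR_PTR-returning ones. */
static int IS_ERR_OR_NULL(const void *ptr)
{
	return !ptr || IS_ERR(ptr);
}
```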
+2 -2
drivers/media/platform/s5p-mfc/s5p_mfc_debug.h
··· 30 30 #define mfc_debug(level, fmt, args...) 31 31 #endif 32 32 33 - #define mfc_debug_enter() mfc_debug(5, "enter") 34 - #define mfc_debug_leave() mfc_debug(5, "leave") 33 + #define mfc_debug_enter() mfc_debug(5, "enter\n") 34 + #define mfc_debug_leave() mfc_debug(5, "leave\n") 35 35 36 36 #define mfc_err(fmt, args...) \ 37 37 do { \
+10 -10
drivers/media/platform/s5p-mfc/s5p_mfc_dec.c
··· 210 210 /* Context is to decode a frame */
 211 211 if (ctx->src_queue_cnt >= 1 &&
 212 212 ctx->state == MFCINST_RUNNING &&
 213 - ctx->dst_queue_cnt >= ctx->dpb_count)
 213 + ctx->dst_queue_cnt >= ctx->pb_count)
 214 214 return 1;
 215 215 /* Context is to return last frame */
 216 216 if (ctx->state == MFCINST_FINISHING &&
 217 - ctx->dst_queue_cnt >= ctx->dpb_count)
 217 + ctx->dst_queue_cnt >= ctx->pb_count)
 218 218 return 1;
 219 219 /* Context is to set buffers */
 220 220 if (ctx->src_queue_cnt >= 1 &&
··· 224 224 /* Resolution change */
 225 225 if ((ctx->state == MFCINST_RES_CHANGE_INIT ||
 226 226 ctx->state == MFCINST_RES_CHANGE_FLUSH) &&
 227 - ctx->dst_queue_cnt >= ctx->dpb_count)
 227 + ctx->dst_queue_cnt >= ctx->pb_count)
 228 228 return 1;
 229 229 if (ctx->state == MFCINST_RES_CHANGE_END &&
 230 230 ctx->src_queue_cnt >= 1)
··· 537 537 mfc_err("vb2_reqbufs on capture failed\n");
 538 538 return ret;
 539 539 }
 540 - if (reqbufs->count < ctx->dpb_count) {
 540 + if (reqbufs->count < ctx->pb_count) {
 541 541 mfc_err("Not enough buffers allocated\n");
 542 542 reqbufs->count = 0;
 543 543 s5p_mfc_clock_on();
··· 751 751 case V4L2_CID_MIN_BUFFERS_FOR_CAPTURE:
 752 752 if (ctx->state >= MFCINST_HEAD_PARSED &&
 753 753 ctx->state < MFCINST_ABORT) {
 754 - ctrl->val = ctx->dpb_count;
 754 + ctrl->val = ctx->pb_count;
 755 755 break;
 756 756 } else if (ctx->state != MFCINST_INIT) {
 757 757 v4l2_err(&dev->v4l2_dev, "Decoding not initialised\n");
··· 763 763 S5P_MFC_R2H_CMD_SEQ_DONE_RET, 0);
 764 764 if (ctx->state >= MFCINST_HEAD_PARSED &&
 765 765 ctx->state < MFCINST_ABORT) {
 766 - ctrl->val = ctx->dpb_count;
 766 + ctrl->val = ctx->pb_count;
 767 767 } else {
 768 768 v4l2_err(&dev->v4l2_dev, "Decoding not initialised\n");
 769 769 return -EINVAL;
··· 924 924 /* Output plane count is 2 - one for Y and one for CbCr */
 925 925 *plane_count = 2;
 926 926 /* Setup buffer count */
 927 - if (*buf_count < ctx->dpb_count)
 928 - *buf_count = ctx->dpb_count;
 929 - if (*buf_count > ctx->dpb_count + MFC_MAX_EXTRA_DPB)
 930 - *buf_count = ctx->dpb_count + MFC_MAX_EXTRA_DPB;
 927 + if (*buf_count < ctx->pb_count)
 928 + *buf_count = ctx->pb_count;
 929 + if (*buf_count > ctx->pb_count + MFC_MAX_EXTRA_DPB)
 930 + *buf_count = ctx->pb_count + MFC_MAX_EXTRA_DPB;
 931 931 if (*buf_count > MFC_MAX_BUFFERS)
 932 932 *buf_count = MFC_MAX_BUFFERS;
 933 933 } else {
+56 -26
drivers/media/platform/s5p-mfc/s5p_mfc_enc.c
··· 592 592 return 1; 593 593 /* context is ready to encode a frame */ 594 594 if ((ctx->state == MFCINST_RUNNING || 595 - ctx->state == MFCINST_HEAD_PARSED) && 595 + ctx->state == MFCINST_HEAD_PRODUCED) && 596 596 ctx->src_queue_cnt >= 1 && ctx->dst_queue_cnt >= 1) 597 597 return 1; 598 598 /* context is ready to encode remaining frames */ ··· 649 649 struct s5p_mfc_enc_params *p = &ctx->enc_params; 650 650 struct s5p_mfc_buf *dst_mb; 651 651 unsigned long flags; 652 + unsigned int enc_pb_count; 652 653 653 654 if (p->seq_hdr_mode == V4L2_MPEG_VIDEO_HEADER_MODE_SEPARATE) { 654 655 spin_lock_irqsave(&dev->irqlock, flags); ··· 662 661 vb2_buffer_done(dst_mb->b, VB2_BUF_STATE_DONE); 663 662 spin_unlock_irqrestore(&dev->irqlock, flags); 664 663 } 665 - if (IS_MFCV6(dev)) { 666 - ctx->state = MFCINST_HEAD_PARSED; /* for INIT_BUFFER cmd */ 667 - } else { 664 + 665 + if (!IS_MFCV6(dev)) { 668 666 ctx->state = MFCINST_RUNNING; 669 667 if (s5p_mfc_ctx_ready(ctx)) 670 668 set_work_bit_irqsave(ctx); 671 669 s5p_mfc_hw_call(dev->mfc_ops, try_run, dev); 672 - } 673 - 674 - if (IS_MFCV6(dev)) 675 - ctx->dpb_count = s5p_mfc_hw_call(dev->mfc_ops, 670 + } else { 671 + enc_pb_count = s5p_mfc_hw_call(dev->mfc_ops, 676 672 get_enc_dpb_count, dev); 673 + if (ctx->pb_count < enc_pb_count) 674 + ctx->pb_count = enc_pb_count; 675 + ctx->state = MFCINST_HEAD_PRODUCED; 676 + } 677 677 678 678 return 0; 679 679 } ··· 719 717 720 718 slice_type = s5p_mfc_hw_call(dev->mfc_ops, get_enc_slice_type, dev); 721 719 strm_size = s5p_mfc_hw_call(dev->mfc_ops, get_enc_strm_size, dev); 722 - mfc_debug(2, "Encoded slice type: %d", slice_type); 723 - mfc_debug(2, "Encoded stream size: %d", strm_size); 724 - mfc_debug(2, "Display order: %d", 720 + mfc_debug(2, "Encoded slice type: %d\n", slice_type); 721 + mfc_debug(2, "Encoded stream size: %d\n", strm_size); 722 + mfc_debug(2, "Display order: %d\n", 725 723 mfc_read(dev, S5P_FIMV_ENC_SI_PIC_CNT)); 726 724 spin_lock_irqsave(&dev->irqlock, flags); 727 725 
if (slice_type >= 0) { ··· 1057 1055 } 1058 1056 ctx->capture_state = QUEUE_BUFS_REQUESTED; 1059 1057 1060 - if (!IS_MFCV6(dev)) { 1061 - ret = s5p_mfc_hw_call(ctx->dev->mfc_ops, 1062 - alloc_codec_buffers, ctx); 1063 - if (ret) { 1064 - mfc_err("Failed to allocate encoding buffers\n"); 1065 - reqbufs->count = 0; 1066 - ret = vb2_reqbufs(&ctx->vq_dst, reqbufs); 1067 - return -ENOMEM; 1068 - } 1058 + ret = s5p_mfc_hw_call(ctx->dev->mfc_ops, 1059 + alloc_codec_buffers, ctx); 1060 + if (ret) { 1061 + mfc_err("Failed to allocate encoding buffers\n"); 1062 + reqbufs->count = 0; 1063 + ret = vb2_reqbufs(&ctx->vq_dst, reqbufs); 1064 + return -ENOMEM; 1069 1065 } 1070 1066 } else if (reqbufs->type == V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE) { 1071 1067 if (ctx->output_state != QUEUE_FREE) { ··· 1071 1071 ctx->output_state); 1072 1072 return -EINVAL; 1073 1073 } 1074 + 1075 + if (IS_MFCV6(dev)) { 1076 + /* Check for min encoder buffers */ 1077 + if (ctx->pb_count && 1078 + (reqbufs->count < ctx->pb_count)) { 1079 + reqbufs->count = ctx->pb_count; 1080 + mfc_debug(2, "Minimum %d output buffers needed\n", 1081 + ctx->pb_count); 1082 + } else { 1083 + ctx->pb_count = reqbufs->count; 1084 + } 1085 + } 1086 + 1074 1087 ret = vb2_reqbufs(&ctx->vq_src, reqbufs); 1075 1088 if (ret != 0) { 1076 1089 mfc_err("error in vb2_reqbufs() for E(S)\n"); ··· 1546 1533 1547 1534 spin_lock_irqsave(&dev->irqlock, flags); 1548 1535 if (list_empty(&ctx->src_queue)) { 1549 - mfc_debug(2, "EOS: empty src queue, entering finishing state"); 1536 + mfc_debug(2, "EOS: empty src queue, entering finishing state\n"); 1550 1537 ctx->state = MFCINST_FINISHING; 1551 1538 if (s5p_mfc_ctx_ready(ctx)) 1552 1539 set_work_bit_irqsave(ctx); 1553 1540 spin_unlock_irqrestore(&dev->irqlock, flags); 1554 1541 s5p_mfc_hw_call(dev->mfc_ops, try_run, dev); 1555 1542 } else { 1556 - mfc_debug(2, "EOS: marking last buffer of stream"); 1543 + mfc_debug(2, "EOS: marking last buffer of stream\n"); 1557 1544 buf = 
list_entry(ctx->src_queue.prev, 1558 1545 struct s5p_mfc_buf, list); 1559 1546 if (buf->flags & MFC_BUF_FLAG_USED) ··· 1622 1609 mfc_err("failed to get plane cookie\n"); 1623 1610 return -EINVAL; 1624 1611 } 1625 - mfc_debug(2, "index: %d, plane[%d] cookie: 0x%08zx", 1626 - vb->v4l2_buf.index, i, 1627 - vb2_dma_contig_plane_dma_addr(vb, i)); 1612 + mfc_debug(2, "index: %d, plane[%d] cookie: 0x%08zx\n", 1613 + vb->v4l2_buf.index, i, 1614 + vb2_dma_contig_plane_dma_addr(vb, i)); 1628 1615 } 1629 1616 return 0; 1630 1617 } ··· 1773 1760 struct s5p_mfc_ctx *ctx = fh_to_ctx(q->drv_priv); 1774 1761 struct s5p_mfc_dev *dev = ctx->dev; 1775 1762 1776 - v4l2_ctrl_handler_setup(&ctx->ctrl_handler); 1763 + if (IS_MFCV6(dev) && (q->type == V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE)) { 1764 + 1765 + if ((ctx->state == MFCINST_GOT_INST) && 1766 + (dev->curr_ctx == ctx->num) && dev->hw_lock) { 1767 + s5p_mfc_wait_for_done_ctx(ctx, 1768 + S5P_MFC_R2H_CMD_SEQ_DONE_RET, 1769 + 0); 1770 + } 1771 + 1772 + if (ctx->src_bufs_cnt < ctx->pb_count) { 1773 + mfc_err("Need minimum %d OUTPUT buffers\n", 1774 + ctx->pb_count); 1775 + return -EINVAL; 1776 + } 1777 + } 1778 + 1777 1779 /* If context is ready then dev = work->data;schedule it to run */ 1778 1780 if (s5p_mfc_ctx_ready(ctx)) 1779 1781 set_work_bit_irqsave(ctx); 1780 1782 s5p_mfc_hw_call(dev->mfc_ops, try_run, dev); 1783 + 1781 1784 return 0; 1782 1785 } 1783 1786 ··· 1949 1920 if (controls[i].is_volatile && ctx->ctrls[i]) 1950 1921 ctx->ctrls[i]->flags |= V4L2_CTRL_FLAG_VOLATILE; 1951 1922 } 1923 + v4l2_ctrl_handler_setup(&ctx->ctrl_handler); 1952 1924 return 0; 1953 1925 } 1954 1926
+2 -2
drivers/media/platform/s5p-mfc/s5p_mfc_opr_v5.c
··· 1275 1275 spin_unlock_irqrestore(&dev->irqlock, flags); 1276 1276 dev->curr_ctx = ctx->num; 1277 1277 s5p_mfc_clean_ctx_int_flags(ctx); 1278 - mfc_debug(2, "encoding buffer with index=%d state=%d", 1279 - src_mb ? src_mb->b->v4l2_buf.index : -1, ctx->state); 1278 + mfc_debug(2, "encoding buffer with index=%d state=%d\n", 1279 + src_mb ? src_mb->b->v4l2_buf.index : -1, ctx->state); 1280 1280 s5p_mfc_encode_one_frame_v5(ctx); 1281 1281 return 0; 1282 1282 }
+15 -38
drivers/media/platform/s5p-mfc/s5p_mfc_opr_v6.c
··· 62 62 /* NOP */ 63 63 } 64 64 65 - static int s5p_mfc_get_dec_status_v6(struct s5p_mfc_dev *dev) 66 - { 67 - /* NOP */ 68 - return -1; 69 - } 70 - 71 65 /* Allocate codec buffers */ 72 66 static int s5p_mfc_alloc_codec_buffers_v6(struct s5p_mfc_ctx *ctx) 73 67 { ··· 161 167 S5P_FIMV_SCRATCH_BUFFER_ALIGN_V6); 162 168 ctx->bank1.size = 163 169 ctx->scratch_buf_size + ctx->tmv_buffer_size + 164 - (ctx->dpb_count * (ctx->luma_dpb_size + 170 + (ctx->pb_count * (ctx->luma_dpb_size + 165 171 ctx->chroma_dpb_size + ctx->me_buffer_size)); 166 172 ctx->bank2.size = 0; 167 173 break; ··· 175 181 S5P_FIMV_SCRATCH_BUFFER_ALIGN_V6); 176 182 ctx->bank1.size = 177 183 ctx->scratch_buf_size + ctx->tmv_buffer_size + 178 - (ctx->dpb_count * (ctx->luma_dpb_size + 184 + (ctx->pb_count * (ctx->luma_dpb_size + 179 185 ctx->chroma_dpb_size + ctx->me_buffer_size)); 180 186 ctx->bank2.size = 0; 181 187 break; ··· 192 198 } 193 199 BUG_ON(ctx->bank1.dma & ((1 << MFC_BANK1_ALIGN_ORDER) - 1)); 194 200 } 195 - 196 201 return 0; 197 202 } 198 203 ··· 442 449 WRITEL(addr, S5P_FIMV_E_STREAM_BUFFER_ADDR_V6); /* 16B align */ 443 450 WRITEL(size, S5P_FIMV_E_STREAM_BUFFER_SIZE_V6); 444 451 445 - mfc_debug(2, "stream buf addr: 0x%08lx, size: 0x%d", 446 - addr, size); 452 + mfc_debug(2, "stream buf addr: 0x%08lx, size: 0x%d\n", 453 + addr, size); 447 454 448 455 return 0; 449 456 } ··· 456 463 WRITEL(y_addr, S5P_FIMV_E_SOURCE_LUMA_ADDR_V6); /* 256B align */ 457 464 WRITEL(c_addr, S5P_FIMV_E_SOURCE_CHROMA_ADDR_V6); 458 465 459 - mfc_debug(2, "enc src y buf addr: 0x%08lx", y_addr); 460 - mfc_debug(2, "enc src c buf addr: 0x%08lx", c_addr); 466 + mfc_debug(2, "enc src y buf addr: 0x%08lx\n", y_addr); 467 + mfc_debug(2, "enc src c buf addr: 0x%08lx\n", c_addr); 461 468 } 462 469 463 470 static void s5p_mfc_get_enc_frame_buffer_v6(struct s5p_mfc_ctx *ctx, ··· 472 479 enc_recon_y_addr = READL(S5P_FIMV_E_RECON_LUMA_DPB_ADDR_V6); 473 480 enc_recon_c_addr = READL(S5P_FIMV_E_RECON_CHROMA_DPB_ADDR_V6); 474 481 
475 - mfc_debug(2, "recon y addr: 0x%08lx", enc_recon_y_addr);
 482 + mfc_debug(2, "recon y addr: 0x%08lx\n", enc_recon_y_addr);
 476 - mfc_debug(2, "recon c addr: 0x%08lx", enc_recon_c_addr);
 483 + mfc_debug(2, "recon c addr: 0x%08lx\n", enc_recon_c_addr);
 477 484 }
 478 485 
 479 486 /* Set encoding ref & codec buffer */
··· 490 497 
 491 498 mfc_debug(2, "Buf1: %p (%d)\n", (void *)buf_addr1, buf_size1);
 492 499 
 493 - for (i = 0; i < ctx->dpb_count; i++) {
 500 + for (i = 0; i < ctx->pb_count; i++) {
 494 501 WRITEL(buf_addr1, S5P_FIMV_E_LUMA_DPB_V6 + (4 * i));
 495 502 buf_addr1 += ctx->luma_dpb_size;
 496 503 WRITEL(buf_addr1, S5P_FIMV_E_CHROMA_DPB_V6 + (4 * i));
··· 513 520 buf_size1 -= ctx->tmv_buffer_size;
 514 521 
 515 522 mfc_debug(2, "Buf1: %u, buf_size1: %d (ref frames %d)\n",
 516 - buf_addr1, buf_size1, ctx->dpb_count);
 523 + buf_addr1, buf_size1, ctx->pb_count);
 517 524 if (buf_size1 < 0) {
 518 525 mfc_debug(2, "Not enough memory has been allocated.\n");
 519 526 return -ENOMEM;
··· 1424 1431 src_y_addr = vb2_dma_contig_plane_dma_addr(src_mb->b, 0);
 1425 1432 src_c_addr = vb2_dma_contig_plane_dma_addr(src_mb->b, 1);
 1426 1433 
 1427 - mfc_debug(2, "enc src y addr: 0x%08lx", src_y_addr);
 1434 + mfc_debug(2, "enc src y addr: 0x%08lx\n", src_y_addr);
 1428 - mfc_debug(2, "enc src c addr: 0x%08lx", src_c_addr);
 1435 + mfc_debug(2, "enc src c addr: 0x%08lx\n", src_c_addr);
 1429 1436 
 1430 1437 s5p_mfc_set_enc_frame_buffer_v6(ctx, src_y_addr, src_c_addr);
 1431 1438 
··· 1515 1522 struct s5p_mfc_dev *dev = ctx->dev;
 1516 1523 int ret;
 1517 1524 
 1518 - ret = s5p_mfc_alloc_codec_buffers_v6(ctx);
 1519 - if (ret) {
 1520 - mfc_err("Failed to allocate encoding buffers.\n");
 1521 - return -ENOMEM;
 1522 - }
 1523 - 
 1524 - /* Header was generated now starting processing
 1525 - * First set the reference frame buffers
 1526 - */
 1527 - if (ctx->capture_state != QUEUE_BUFS_REQUESTED) {
 1528 - mfc_err("It seems that destionation buffers were not\n"
 1529 - "requested.MFC requires that header should be generated\n"
 1530 - "before allocating codec buffer.\n");
 1531 - return -EAGAIN;
 1532 - }
 1533 - 
 1534 1525 dev->curr_ctx = ctx->num;
 1535 1526 s5p_mfc_clean_ctx_int_flags(ctx);
 1536 1527 ret = s5p_mfc_set_enc_ref_buffer_v6(ctx);
··· 1559 1582 mfc_debug(1, "Seting new context to %p\n", ctx);
 1560 1583 /* Got context to run in ctx */
 1561 1584 mfc_debug(1, "ctx->dst_queue_cnt=%d ctx->dpb_count=%d ctx->src_queue_cnt=%d\n",
 1562 - ctx->dst_queue_cnt, ctx->dpb_count, ctx->src_queue_cnt);
 1585 + ctx->dst_queue_cnt, ctx->pb_count, ctx->src_queue_cnt);
 1563 1586 mfc_debug(1, "ctx->state=%d\n", ctx->state);
 1564 1587 /* Last frame has already been sent to MFC
 1565 1588 * Now obtaining frames from MFC buffer */
··· 1624 1647 case MFCINST_GOT_INST:
 1625 1648 s5p_mfc_run_init_enc(ctx);
 1626 1649 break;
 1627 - case MFCINST_HEAD_PARSED: /* Only for MFC6.x */
 1650 + case MFCINST_HEAD_PRODUCED:
 1628 1651 ret = s5p_mfc_run_init_enc_buffers(ctx);
 1629 1652 break;
 1630 1653 default:
··· 1707 1730 return mfc_read(dev, S5P_FIMV_D_DISPLAY_STATUS_V6);
 1708 1731 }
 1709 1732 
 1710 - static int s5p_mfc_get_decoded_status_v6(struct s5p_mfc_dev *dev)
 1733 + static int s5p_mfc_get_dec_status_v6(struct s5p_mfc_dev *dev)
 1711 1734 {
 1712 1735 return mfc_read(dev, S5P_FIMV_D_DECODED_STATUS_V6);
 1713 1736 }
+2 -21
drivers/media/platform/s5p-mfc/s5p_mfc_pm.c
··· 50 50 goto err_p_ip_clk; 51 51 } 52 52 53 - pm->clock = clk_get(&dev->plat_dev->dev, dev->variant->mclk_name); 54 - if (IS_ERR(pm->clock)) { 55 - mfc_err("Failed to get MFC clock\n"); 56 - ret = PTR_ERR(pm->clock); 57 - goto err_g_ip_clk_2; 58 - } 59 - 60 - ret = clk_prepare(pm->clock); 61 - if (ret) { 62 - mfc_err("Failed to prepare MFC clock\n"); 63 - goto err_p_ip_clk_2; 64 - } 65 - 66 53 atomic_set(&pm->power, 0); 67 54 #ifdef CONFIG_PM_RUNTIME 68 55 pm->device = &dev->plat_dev->dev; ··· 59 72 atomic_set(&clk_ref, 0); 60 73 #endif 61 74 return 0; 62 - err_p_ip_clk_2: 63 - clk_put(pm->clock); 64 - err_g_ip_clk_2: 65 - clk_unprepare(pm->clock_gate); 66 75 err_p_ip_clk: 67 76 clk_put(pm->clock_gate); 68 77 err_g_ip_clk: ··· 69 86 { 70 87 clk_unprepare(pm->clock_gate); 71 88 clk_put(pm->clock_gate); 72 - clk_unprepare(pm->clock); 73 - clk_put(pm->clock); 74 89 #ifdef CONFIG_PM_RUNTIME 75 90 pm_runtime_disable(pm->device); 76 91 #endif ··· 79 98 int ret; 80 99 #ifdef CLK_DEBUG 81 100 atomic_inc(&clk_ref); 82 - mfc_debug(3, "+ %d", atomic_read(&clk_ref)); 101 + mfc_debug(3, "+ %d\n", atomic_read(&clk_ref)); 83 102 #endif 84 103 ret = clk_enable(pm->clock_gate); 85 104 return ret; ··· 89 108 { 90 109 #ifdef CLK_DEBUG 91 110 atomic_dec(&clk_ref); 92 - mfc_debug(3, "- %d", atomic_read(&clk_ref)); 111 + mfc_debug(3, "- %d\n", atomic_read(&clk_ref)); 93 112 #endif 94 113 clk_disable(pm->clock_gate); 95 114 }
+6 -9
drivers/media/platform/sh_veu.c
··· 905 905 if (ftmp.fmt.pix.width != pix->width || 906 906 ftmp.fmt.pix.height != pix->height) 907 907 return -EINVAL; 908 - size = pix->bytesperline ? pix->bytesperline * pix->height : 909 - pix->width * pix->height * fmt->depth >> 3; 908 + size = pix->bytesperline ? pix->bytesperline * pix->height * fmt->depth / fmt->ydepth : 909 + pix->width * pix->height * fmt->depth / fmt->ydepth; 910 910 } else { 911 911 vfmt = sh_veu_get_vfmt(veu, vq->type); 912 - size = vfmt->bytesperline * vfmt->frame.height; 912 + size = vfmt->bytesperline * vfmt->frame.height * vfmt->fmt->depth / vfmt->fmt->ydepth; 913 913 } 914 914 915 915 if (count < 2) ··· 1033 1033 1034 1034 dev_dbg(veu->dev, "Releasing instance %p\n", veu_file); 1035 1035 1036 - pm_runtime_put(veu->dev); 1037 - 1038 1036 if (veu_file == veu->capture) { 1039 1037 veu->capture = NULL; 1040 1038 vb2_queue_release(v4l2_m2m_get_vq(veu->m2m_ctx, V4L2_BUF_TYPE_VIDEO_CAPTURE)); ··· 1047 1049 v4l2_m2m_ctx_release(veu->m2m_ctx); 1048 1050 veu->m2m_ctx = NULL; 1049 1051 } 1052 + 1053 + pm_runtime_put(veu->dev); 1050 1054 1051 1055 kfree(veu_file); 1052 1056 ··· 1138 1138 1139 1139 veu->xaction++; 1140 1140 1141 - if (!veu->aborting) 1142 - return IRQ_WAKE_THREAD; 1143 - 1144 - return IRQ_HANDLED; 1141 + return IRQ_WAKE_THREAD; 1145 1142 } 1146 1143 1147 1144 static int sh_veu_probe(struct platform_device *pdev)
+2 -2
drivers/media/platform/soc_camera/soc_camera.c
··· 643 643 644 644 if (ici->ops->init_videobuf2) 645 645 vb2_queue_release(&icd->vb2_vidq); 646 - ici->ops->remove(icd); 647 - 648 646 __soc_camera_power_off(icd); 647 + 648 + ici->ops->remove(icd); 649 649 } 650 650 651 651 if (icd->streamer == file)
+1
drivers/media/radio/Kconfig
··· 22 22 tristate "Silicon Laboratories Si476x I2C FM Radio" 23 23 depends on I2C && VIDEO_V4L2 24 24 depends on MFD_SI476X_CORE 25 + depends on SND_SOC 25 26 select SND_SOC_SI476X 26 27 ---help--- 27 28 Choose Y here if you have this FM radio chip.
+1 -1
drivers/media/radio/radio-si476x.c
··· 44 44 45 45 #define FREQ_MUL (10000000 / 625) 46 46 47 - #define SI476X_PHDIV_STATUS_LINK_LOCKED(status) (0b10000000 & (status)) 47 + #define SI476X_PHDIV_STATUS_LINK_LOCKED(status) (0x80 & (status)) 48 48 49 49 #define DRIVER_NAME "si476x-radio" 50 50 #define DRIVER_CARD "SI476x AM/FM Receiver"
-20
drivers/media/tuners/Kconfig
··· 1 - config MEDIA_ATTACH 2 - bool "Load and attach frontend and tuner driver modules as needed" 3 - depends on MEDIA_ANALOG_TV_SUPPORT || MEDIA_DIGITAL_TV_SUPPORT || MEDIA_RADIO_SUPPORT 4 - depends on MODULES 5 - default y if !EXPERT 6 - help 7 - Remove the static dependency of DVB card drivers on all 8 - frontend modules for all possible card variants. Instead, 9 - allow the card drivers to only load the frontend modules 10 - they require. 11 - 12 - Also, tuner module will automatically load a tuner driver 13 - when needed, for analog mode. 14 - 15 - This saves several KBytes of memory. 16 - 17 - Note: You will need module-init-tools v3.2 or later for this feature. 18 - 19 - If unsure say Y. 20 - 21 1 # Analog TV tuners, auto-loaded via tuner.ko 22 2 config MEDIA_TUNER 23 3 tristate
+3 -3
drivers/media/usb/dvb-usb-v2/rtl28xxu.c
··· 376 376 struct rtl28xxu_req req_mxl5007t = {0xd9c0, CMD_I2C_RD, 1, buf}; 377 377 struct rtl28xxu_req req_e4000 = {0x02c8, CMD_I2C_RD, 1, buf}; 378 378 struct rtl28xxu_req req_tda18272 = {0x00c0, CMD_I2C_RD, 2, buf}; 379 - struct rtl28xxu_req req_r820t = {0x0034, CMD_I2C_RD, 5, buf}; 379 + struct rtl28xxu_req req_r820t = {0x0034, CMD_I2C_RD, 1, buf}; 380 380 381 381 dev_dbg(&d->udev->dev, "%s:\n", __func__); 382 382 ··· 481 481 goto found; 482 482 } 483 483 484 - /* check R820T by reading tuner stats at I2C addr 0x1a */ 484 + /* check R820T ID register; reg=00 val=69 */ 485 485 ret = rtl28xxu_ctrl_msg(d, &req_r820t); 486 - if (ret == 0) { 486 + if (ret == 0 && buf[0] == 0x69) { 487 487 priv->tuner = TUNER_RTL2832_R820T; 488 488 priv->tuner_name = "R820T"; 489 489 goto found;
+7
drivers/media/usb/gspca/sonixb.c
··· 1159 1159 regs[0x01] = 0x44; /* Select 24 Mhz clock */ 1160 1160 regs[0x12] = 0x02; /* Set hstart to 2 */ 1161 1161 } 1162 + break; 1163 + case SENSOR_PAS202: 1164 + /* For some unknown reason we need to increase hstart by 1 on 1165 + the sn9c103, otherwise we get wrong colors (bayer shift). */ 1166 + if (sd->bridge == BRIDGE_103) 1167 + regs[0x12] += 1; 1168 + break; 1162 1169 } 1163 1170 /* Disable compression when the raw bayer format has been selected */ 1164 1171 if (cam->cam_mode[gspca_dev->curr_mode].priv & MODE_RAW)
+1 -1
drivers/media/usb/pwc/pwc.h
··· 226 226 struct list_head queued_bufs; 227 227 spinlock_t queued_bufs_lock; /* Protects queued_bufs */ 228 228 229 - /* Note if taking both locks v4l2_lock must always be locked first! */ 229 + /* If taking both locks vb_queue_lock must always be locked first! */ 230 230 struct mutex v4l2_lock; /* Protects everything else */ 231 231 struct mutex vb_queue_lock; /* Protects vb_queue and capt_file */ 232 232
+2
drivers/media/v4l2-core/v4l2-ctrls.c
··· 1835 1835 { 1836 1836 if (V4L2_CTRL_ID2CLASS(ctrl->id) == V4L2_CTRL_CLASS_FM_TX) 1837 1837 return true; 1838 + if (V4L2_CTRL_ID2CLASS(ctrl->id) == V4L2_CTRL_CLASS_FM_RX) 1839 + return true; 1838 1840 switch (ctrl->id) { 1839 1841 case V4L2_CID_AUDIO_MUTE: 1840 1842 case V4L2_CID_AUDIO_VOLUME:
+21 -26
drivers/media/v4l2-core/v4l2-ioctl.c
··· 243 243 const struct v4l2_vbi_format *vbi; 244 244 const struct v4l2_sliced_vbi_format *sliced; 245 245 const struct v4l2_window *win; 246 - const struct v4l2_clip *clip; 247 246 unsigned i; 248 247 249 248 pr_cont("type=%s", prt_names(p->type, v4l2_type_names)); ··· 252 253 pix = &p->fmt.pix; 253 254 pr_cont(", width=%u, height=%u, " 254 255 "pixelformat=%c%c%c%c, field=%s, " 255 - "bytesperline=%u sizeimage=%u, colorspace=%d\n", 256 + "bytesperline=%u, sizeimage=%u, colorspace=%d\n", 256 257 pix->width, pix->height, 257 258 (pix->pixelformat & 0xff), 258 259 (pix->pixelformat >> 8) & 0xff, ··· 283 284 case V4L2_BUF_TYPE_VIDEO_OVERLAY: 284 285 case V4L2_BUF_TYPE_VIDEO_OUTPUT_OVERLAY: 285 286 win = &p->fmt.win; 286 - pr_cont(", wxh=%dx%d, x,y=%d,%d, field=%s, " 287 - "chromakey=0x%08x, bitmap=%p, " 288 - "global_alpha=0x%02x\n", 289 - win->w.width, win->w.height, 290 - win->w.left, win->w.top, 287 + /* Note: we can't print the clip list here since the clips 288 + * pointer is a userspace pointer, not a kernelspace 289 + * pointer. 
*/ 290 + pr_cont(", wxh=%dx%d, x,y=%d,%d, field=%s, chromakey=0x%08x, clipcount=%u, clips=%p, bitmap=%p, global_alpha=0x%02x\n", 291 + win->w.width, win->w.height, win->w.left, win->w.top, 291 292 prt_names(win->field, v4l2_field_names), 292 - win->chromakey, win->bitmap, win->global_alpha); 293 - clip = win->clips; 294 - for (i = 0; i < win->clipcount; i++) { 295 - printk(KERN_DEBUG "clip %u: wxh=%dx%d, x,y=%d,%d\n", 296 - i, clip->c.width, clip->c.height, 297 - clip->c.left, clip->c.top); 298 - clip = clip->next; 299 - } 293 + win->chromakey, win->clipcount, win->clips, 294 + win->bitmap, win->global_alpha); 300 295 break; 301 296 case V4L2_BUF_TYPE_VBI_CAPTURE: 302 297 case V4L2_BUF_TYPE_VBI_OUTPUT: ··· 325 332 326 333 pr_cont("capability=0x%x, flags=0x%x, base=0x%p, width=%u, " 327 334 "height=%u, pixelformat=%c%c%c%c, " 328 - "bytesperline=%u sizeimage=%u, colorspace=%d\n", 335 + "bytesperline=%u, sizeimage=%u, colorspace=%d\n", 329 336 p->capability, p->flags, p->base, 330 337 p->fmt.width, p->fmt.height, 331 338 (p->fmt.pixelformat & 0xff), ··· 346 353 const struct v4l2_modulator *p = arg; 347 354 348 355 if (write_only) 349 - pr_cont("index=%u, txsubchans=0x%x", p->index, p->txsubchans); 356 + pr_cont("index=%u, txsubchans=0x%x\n", p->index, p->txsubchans); 350 357 else 351 358 pr_cont("index=%u, name=%.*s, capability=0x%x, " 352 359 "rangelow=%u, rangehigh=%u, txsubchans=0x%x\n", ··· 438 445 for (i = 0; i < p->length; ++i) { 439 446 plane = &p->m.planes[i]; 440 447 printk(KERN_DEBUG 441 - "plane %d: bytesused=%d, data_offset=0x%08x " 448 + "plane %d: bytesused=%d, data_offset=0x%08x, " 442 449 "offset/userptr=0x%lx, length=%d\n", 443 450 i, plane->bytesused, plane->data_offset, 444 451 plane->m.userptr, plane->length); 445 452 } 446 453 } else { 447 - pr_cont("bytesused=%d, offset/userptr=0x%lx, length=%d\n", 454 + pr_cont(", bytesused=%d, offset/userptr=0x%lx, length=%d\n", 448 455 p->bytesused, p->m.userptr, p->length); 449 456 } 450 457 ··· 497 504 
c->capability, c->outputmode, 498 505 c->timeperframe.numerator, c->timeperframe.denominator, 499 506 c->extendedmode, c->writebuffers); 507 + } else { 508 + pr_cont("\n"); 500 509 } 501 510 } 502 511 ··· 729 734 p->type); 730 735 switch (p->type) { 731 736 case V4L2_FRMSIZE_TYPE_DISCRETE: 732 - pr_cont(" wxh=%ux%u\n", 737 + pr_cont(", wxh=%ux%u\n", 733 738 p->discrete.width, p->discrete.height); 734 739 break; 735 740 case V4L2_FRMSIZE_TYPE_STEPWISE: 736 - pr_cont(" min=%ux%u, max=%ux%u, step=%ux%u\n", 741 + pr_cont(", min=%ux%u, max=%ux%u, step=%ux%u\n", 737 742 p->stepwise.min_width, p->stepwise.min_height, 738 743 p->stepwise.step_width, p->stepwise.step_height, 739 744 p->stepwise.max_width, p->stepwise.max_height); ··· 759 764 p->width, p->height, p->type); 760 765 switch (p->type) { 761 766 case V4L2_FRMIVAL_TYPE_DISCRETE: 762 - pr_cont(" fps=%d/%d\n", 767 + pr_cont(", fps=%d/%d\n", 763 768 p->discrete.numerator, 764 769 p->discrete.denominator); 765 770 break; 766 771 case V4L2_FRMIVAL_TYPE_STEPWISE: 767 - pr_cont(" min=%d/%d, max=%d/%d, step=%d/%d\n", 772 + pr_cont(", min=%d/%d, max=%d/%d, step=%d/%d\n", 768 773 p->stepwise.min.numerator, 769 774 p->stepwise.min.denominator, 770 775 p->stepwise.max.numerator, ··· 802 807 pr_cont("value64=%lld, ", c->value64); 803 808 else 804 809 pr_cont("value=%d, ", c->value); 805 - pr_cont("flags=0x%x, minimum=%d, maximum=%d, step=%d," 806 - " default_value=%d\n", 810 + pr_cont("flags=0x%x, minimum=%d, maximum=%d, step=%d, " 811 + "default_value=%d\n", 807 812 c->flags, c->minimum, c->maximum, 808 813 c->step, c->default_value); 809 814 break; ··· 840 845 const struct v4l2_frequency_band *p = arg; 841 846 842 847 pr_cont("tuner=%u, type=%u, index=%u, capability=0x%x, " 843 - "rangelow=%u, rangehigh=%u, modulation=0x%x\n", 848 + "rangelow=%u, rangehigh=%u, modulation=0x%x\n", 844 849 p->tuner, p->type, p->index, 845 850 p->capability, p->rangelow, 846 851 p->rangehigh, p->modulation);
+29 -10
drivers/media/v4l2-core/v4l2-mem2mem.c
··· 205 205 static void v4l2_m2m_try_schedule(struct v4l2_m2m_ctx *m2m_ctx) 206 206 { 207 207 struct v4l2_m2m_dev *m2m_dev; 208 - unsigned long flags_job, flags; 208 + unsigned long flags_job, flags_out, flags_cap; 209 209 210 210 m2m_dev = m2m_ctx->m2m_dev; 211 211 dprintk("Trying to schedule a job for m2m_ctx: %p\n", m2m_ctx); ··· 223 223 return; 224 224 } 225 225 226 - spin_lock_irqsave(&m2m_ctx->out_q_ctx.rdy_spinlock, flags); 226 + spin_lock_irqsave(&m2m_ctx->out_q_ctx.rdy_spinlock, flags_out); 227 227 if (list_empty(&m2m_ctx->out_q_ctx.rdy_queue)) { 228 - spin_unlock_irqrestore(&m2m_ctx->out_q_ctx.rdy_spinlock, flags); 228 + spin_unlock_irqrestore(&m2m_ctx->out_q_ctx.rdy_spinlock, 229 + flags_out); 229 230 spin_unlock_irqrestore(&m2m_dev->job_spinlock, flags_job); 230 231 dprintk("No input buffers available\n"); 231 232 return; 232 233 } 233 - spin_lock_irqsave(&m2m_ctx->cap_q_ctx.rdy_spinlock, flags); 234 + spin_lock_irqsave(&m2m_ctx->cap_q_ctx.rdy_spinlock, flags_cap); 234 235 if (list_empty(&m2m_ctx->cap_q_ctx.rdy_queue)) { 235 - spin_unlock_irqrestore(&m2m_ctx->cap_q_ctx.rdy_spinlock, flags); 236 - spin_unlock_irqrestore(&m2m_ctx->out_q_ctx.rdy_spinlock, flags); 236 + spin_unlock_irqrestore(&m2m_ctx->cap_q_ctx.rdy_spinlock, 237 + flags_cap); 238 + spin_unlock_irqrestore(&m2m_ctx->out_q_ctx.rdy_spinlock, 239 + flags_out); 237 240 spin_unlock_irqrestore(&m2m_dev->job_spinlock, flags_job); 238 241 dprintk("No output buffers available\n"); 239 242 return; 240 243 } 241 - spin_unlock_irqrestore(&m2m_ctx->cap_q_ctx.rdy_spinlock, flags); 242 - spin_unlock_irqrestore(&m2m_ctx->out_q_ctx.rdy_spinlock, flags); 244 + spin_unlock_irqrestore(&m2m_ctx->cap_q_ctx.rdy_spinlock, flags_cap); 245 + spin_unlock_irqrestore(&m2m_ctx->out_q_ctx.rdy_spinlock, flags_out); 243 246 244 247 if (m2m_dev->m2m_ops->job_ready 245 248 && (!m2m_dev->m2m_ops->job_ready(m2m_ctx->priv))) { ··· 375 372 EXPORT_SYMBOL_GPL(v4l2_m2m_dqbuf); 376 373 377 374 /** 375 + * v4l2_m2m_create_bufs() - 
create a source or destination buffer, depending 376 + * on the type 377 + */ 378 + int v4l2_m2m_create_bufs(struct file *file, struct v4l2_m2m_ctx *m2m_ctx, 379 + struct v4l2_create_buffers *create) 380 + { 381 + struct vb2_queue *vq; 382 + 383 + vq = v4l2_m2m_get_vq(m2m_ctx, create->format.type); 384 + return vb2_create_bufs(vq, create); 385 + } 386 + EXPORT_SYMBOL_GPL(v4l2_m2m_create_bufs); 387 + 388 + /** 378 389 * v4l2_m2m_expbuf() - export a source or destination buffer, depending on 379 390 * the type 380 391 */ ··· 503 486 if (m2m_ctx->m2m_dev->m2m_ops->unlock) 504 487 m2m_ctx->m2m_dev->m2m_ops->unlock(m2m_ctx->priv); 505 488 506 - poll_wait(file, &src_q->done_wq, wait); 507 - poll_wait(file, &dst_q->done_wq, wait); 489 + if (list_empty(&src_q->done_list)) 490 + poll_wait(file, &src_q->done_wq, wait); 491 + if (list_empty(&dst_q->done_list)) 492 + poll_wait(file, &dst_q->done_wq, wait); 508 493 509 494 if (m2m_ctx->m2m_dev->m2m_ops->lock) 510 495 m2m_ctx->m2m_dev->m2m_ops->lock(m2m_ctx->priv);
+2 -1
drivers/media/v4l2-core/videobuf2-core.c
··· 2014 2014 if (list_empty(&q->queued_list)) 2015 2015 return res | POLLERR; 2016 2016 2017 - poll_wait(file, &q->done_wq, wait); 2017 + if (list_empty(&q->done_list)) 2018 + poll_wait(file, &q->done_wq, wait); 2018 2019 2019 2020 /* 2020 2021 * Take first buffer available for dequeuing.
+1 -1
drivers/mfd/tps6586x.c
··· 107 107 .name = "tps6586x-gpio", 108 108 }, 109 109 { 110 - .name = "tps6586x-pmic", 110 + .name = "tps6586x-regulator", 111 111 }, 112 112 { 113 113 .name = "tps6586x-rtc",
+2 -1
drivers/net/bonding/bond_main.c
··· 2364 2364 2365 2365 pr_info("%s: link status definitely up for interface %s, %u Mbps %s duplex.\n", 2366 2366 bond->dev->name, slave->dev->name, 2367 - slave->speed, slave->duplex ? "full" : "half"); 2367 + slave->speed == SPEED_UNKNOWN ? 0 : slave->speed, 2368 + slave->duplex ? "full" : "half"); 2368 2369 2369 2370 /* notify ad that the link status has changed */ 2370 2371 if (bond->params.mode == BOND_MODE_8023AD)
+36
drivers/net/ethernet/broadcom/tg3.c
··· 744 744 status = tg3_ape_read32(tp, gnt + off); 745 745 if (status == bit) 746 746 break; 747 + if (pci_channel_offline(tp->pdev)) 748 + break; 749 + 747 750 udelay(10); 748 751 } 749 752 ··· 1635 1632 for (i = 0; i < delay_cnt; i++) { 1636 1633 if (!(tr32(GRC_RX_CPU_EVENT) & GRC_RX_CPU_DRIVER_EVENT)) 1637 1634 break; 1635 + if (pci_channel_offline(tp->pdev)) 1636 + break; 1637 + 1638 1638 udelay(8); 1639 1639 } 1640 1640 } ··· 1809 1803 for (i = 0; i < 200; i++) { 1810 1804 if (tr32(VCPU_STATUS) & VCPU_STATUS_INIT_DONE) 1811 1805 return 0; 1806 + if (pci_channel_offline(tp->pdev)) 1807 + return -ENODEV; 1808 + 1812 1809 udelay(100); 1813 1810 } 1814 1811 return -ENODEV; ··· 1822 1813 tg3_read_mem(tp, NIC_SRAM_FIRMWARE_MBOX, &val); 1823 1814 if (val == ~NIC_SRAM_FIRMWARE_MBOX_MAGIC1) 1824 1815 break; 1816 + if (pci_channel_offline(tp->pdev)) { 1817 + if (!tg3_flag(tp, NO_FWARE_REPORTED)) { 1818 + tg3_flag_set(tp, NO_FWARE_REPORTED); 1819 + netdev_info(tp->dev, "No firmware running\n"); 1820 + } 1821 + 1822 + break; 1823 + } 1824 + 1825 1825 udelay(10); 1826 1826 } 1827 1827 ··· 3565 3547 tw32(cpu_base + CPU_MODE, CPU_MODE_HALT); 3566 3548 if (tr32(cpu_base + CPU_MODE) & CPU_MODE_HALT) 3567 3549 break; 3550 + if (pci_channel_offline(tp->pdev)) 3551 + return -EBUSY; 3568 3552 } 3569 3553 3570 3554 return (i == iters) ? -EBUSY : 0; ··· 8681 8661 tw32_f(ofs, val); 8682 8662 8683 8663 for (i = 0; i < MAX_WAIT_CNT; i++) { 8664 + if (pci_channel_offline(tp->pdev)) { 8665 + dev_err(&tp->pdev->dev, 8666 + "tg3_stop_block device offline, " 8667 + "ofs=%lx enable_bit=%x\n", 8668 + ofs, enable_bit); 8669 + return -ENODEV; 8670 + } 8671 + 8684 8672 udelay(100); 8685 8673 val = tr32(ofs); 8686 8674 if ((val & enable_bit) == 0) ··· 8711 8683 int i, err; 8712 8684 8713 8685 tg3_disable_ints(tp); 8686 + 8687 + if (pci_channel_offline(tp->pdev)) { 8688 + tp->rx_mode &= ~(RX_MODE_ENABLE | TX_MODE_ENABLE); 8689 + tp->mac_mode &= ~MAC_MODE_TDE_ENABLE; 8690 + err = -ENODEV; 8691 + goto err_no_dev; 8692 + } 8693 + 8714 8693 tp->rx_mode &= ~RX_MODE_ENABLE; 8715 8694 tw32_f(MAC_RX_MODE, tp->rx_mode); ··· 8767 8732 err |= tg3_stop_block(tp, BUFMGR_MODE, BUFMGR_MODE_ENABLE, silent); 8768 8733 err |= tg3_stop_block(tp, MEMARB_MODE, MEMARB_MODE_ENABLE, silent); 8769 8734 8735 + err_no_dev: 8770 8736 for (i = 0; i < tp->irq_cnt; i++) { 8771 8737 struct tg3_napi *tnapi = &tp->napi[i]; 8772 8738 if (tnapi->hw_status)
+13 -2
drivers/net/ethernet/freescale/fec_main.c
··· 516 516 /* Set MII speed */ 517 517 writel(fep->phy_speed, fep->hwp + FEC_MII_SPEED); 518 518 519 + #if !defined(CONFIG_M5272) 519 520 /* set RX checksum */ 520 521 val = readl(fep->hwp + FEC_RACC); 521 522 if (fep->csum_flags & FLAG_RX_CSUM_ENABLED) ··· 524 523 else 525 524 val &= ~FEC_RACC_OPTIONS; 526 525 writel(val, fep->hwp + FEC_RACC); 526 + #endif 527 527 528 528 /* 529 529 * The phy interface and speed need to get configured ··· 577 575 #endif 578 576 } 579 577 578 + #if !defined(CONFIG_M5272) 580 579 /* enable pause frame*/ 581 580 if ((fep->pause_flag & FEC_PAUSE_FLAG_ENABLE) || 582 581 ((fep->pause_flag & FEC_PAUSE_FLAG_AUTONEG) && ··· 595 592 } else { 596 593 rcntl &= ~FEC_ENET_FCE; 597 594 } 595 + #endif /* !defined(CONFIG_M5272) */ 598 596 599 597 writel(rcntl, fep->hwp + FEC_R_CNTRL); 600 598 ··· 1215 1211 /* mask with MAC supported features */ 1216 1212 if (id_entry->driver_data & FEC_QUIRK_HAS_GBIT) { 1217 1213 phy_dev->supported &= PHY_GBIT_FEATURES; 1214 + #if !defined(CONFIG_M5272) 1218 1215 phy_dev->supported |= SUPPORTED_Pause; 1216 + #endif 1219 1217 } 1220 1218 else 1221 1219 phy_dev->supported &= PHY_BASIC_FEATURES; ··· 1402 1396 } 1403 1397 } 1404 1398 1399 + #if !defined(CONFIG_M5272) 1400 + 1405 1401 static void fec_enet_get_pauseparam(struct net_device *ndev, 1406 1402 struct ethtool_pauseparam *pause) 1407 1403 { ··· 1450 1442 return 0; 1451 1443 } 1452 1444 1453 - #ifndef CONFIG_M5272 1454 1445 static const struct fec_stat { 1455 1446 char name[ETH_GSTRING_LEN]; 1456 1447 u16 offset; ··· 1548 1541 return -EOPNOTSUPP; 1549 1542 } 1550 1543 } 1551 - #endif 1544 + #endif /* !defined(CONFIG_M5272) */ 1552 1545 1553 1546 static int fec_enet_nway_reset(struct net_device *dev) 1554 1547 { ··· 1562 1555 } 1563 1556 1564 1557 static const struct ethtool_ops fec_enet_ethtool_ops = { 1558 + #if !defined(CONFIG_M5272) 1565 1559 .get_pauseparam = fec_enet_get_pauseparam, 1566 1560 .set_pauseparam = fec_enet_set_pauseparam, 1561 + #endif 1567 1562 .get_settings = fec_enet_get_settings, 1568 1563 .set_settings = fec_enet_set_settings, 1569 1564 .get_drvinfo = fec_enet_get_drvinfo, ··· 2005 1996 /* setup board info structure */ 2006 1997 fep = netdev_priv(ndev); 2007 1998 1999 + #if !defined(CONFIG_M5272) 2008 2000 /* default enable pause frame auto negotiation */ 2009 2001 if (pdev->id_entry && 2010 2002 (pdev->id_entry->driver_data & FEC_QUIRK_HAS_GBIT)) 2011 2003 fep->pause_flag |= FEC_PAUSE_FLAG_AUTONEG; 2004 + #endif 2012 2005 2013 2006 fep->hwp = devm_ioremap_resource(&pdev->dev, r); 2014 2007 if (IS_ERR(fep->hwp)) {
+1 -1
drivers/net/ethernet/marvell/mv643xx_eth.c
··· 1763 1763 memset(rxq->rx_desc_area, 0, size); 1764 1764 1765 1765 rxq->rx_desc_area_size = size; 1766 - rxq->rx_skb = kmalloc_array(rxq->rx_ring_size, sizeof(*rxq->rx_skb), 1766 + rxq->rx_skb = kcalloc(rxq->rx_ring_size, sizeof(*rxq->rx_skb), 1767 1767 GFP_KERNEL); 1768 1768 if (rxq->rx_skb == NULL) 1769 1769 goto out_free;
+2 -2
drivers/net/ethernet/marvell/pxa168_eth.c
··· 1015 1015 int rx_desc_num = pep->rx_ring_size; 1016 1016 1017 1017 /* Allocate RX skb rings */ 1018 - pep->rx_skb = kmalloc(sizeof(*pep->rx_skb) * pep->rx_ring_size, 1018 + pep->rx_skb = kzalloc(sizeof(*pep->rx_skb) * pep->rx_ring_size, 1019 1019 GFP_KERNEL); 1020 1020 if (!pep->rx_skb) 1021 1021 return -ENOMEM; ··· 1076 1076 int size = 0, i = 0; 1077 1077 int tx_desc_num = pep->tx_ring_size; 1078 1078 1079 - pep->tx_skb = kmalloc(sizeof(*pep->tx_skb) * pep->tx_ring_size, 1079 + pep->tx_skb = kzalloc(sizeof(*pep->tx_skb) * pep->tx_ring_size, 1080 1080 GFP_KERNEL); 1081 1081 if (!pep->tx_skb) 1082 1082 return -ENOMEM;
+3
drivers/net/ethernet/mellanox/mlx4/main.c
··· 632 632 dev->caps.cqe_size = 32; 633 633 } 634 634 635 + dev->caps.flags2 &= ~MLX4_DEV_CAP_FLAG2_TS; 636 + mlx4_warn(dev, "Timestamping is not supported in slave mode.\n"); 637 + 635 638 slave_adjust_steering_mode(dev, &dev_cap, &hca_param); 636 639 637 640 return 0;
+22 -11
drivers/net/ethernet/octeon/octeon_mgmt.c
··· 46 46 union mgmt_port_ring_entry { 47 47 u64 d64; 48 48 struct { 49 - u64 reserved_62_63:2; 50 - /* Length of the buffer/packet in bytes */ 51 - u64 len:14; 52 - /* For TX, signals that the packet should be timestamped */ 53 - u64 tstamp:1; 54 - /* The RX error code */ 55 - u64 code:7; 56 49 #define RING_ENTRY_CODE_DONE 0xf 57 50 #define RING_ENTRY_CODE_MORE 0x10 51 + #ifdef __BIG_ENDIAN_BITFIELD 52 + u64 reserved_62_63:2; 53 + /* Length of the buffer/packet in bytes */ 54 + u64 len:14; 55 + /* For TX, signals that the packet should be timestamped */ 56 + u64 tstamp:1; 57 + /* The RX error code */ 58 + u64 code:7; 58 59 /* Physical address of the buffer */ 59 - u64 addr:40; 60 + u64 addr:40; 61 + #else 62 + u64 addr:40; 63 + u64 code:7; 64 + u64 tstamp:1; 65 + u64 len:14; 66 + u64 reserved_62_63:2; 67 + #endif 60 68 } s; 61 69 }; 62 70 ··· 1149 1141 /* For compensation state to lock. */ 1150 1142 ndelay(1040 * NS_PER_PHY_CLK); 1151 1143 1152 - /* Some Ethernet switches cannot handle standard 1153 - * Interframe Gap, increase to 16 bytes. 1144 + /* Default Interframe Gaps are too small. Recommended 1145 + * workaround is. 1146 + * 1147 + * AGL_GMX_TX_IFG[IFG1]=14 1148 + * AGL_GMX_TX_IFG[IFG2]=10 1154 1149 */ 1155 - cvmx_write_csr(CVMX_AGL_GMX_TX_IFG, 0x88); 1150 + cvmx_write_csr(CVMX_AGL_GMX_TX_IFG, 0xae); 1156 1151 } 1157 1152 1158 1153 octeon_mgmt_rx_fill_ring(netdev);
+1 -1
drivers/net/ethernet/qlogic/qlcnic/qlcnic_ctx.c
··· 665 665 qlcnic_83xx_config_intrpt(adapter, 0); 666 666 } 667 667 /* Allow dma queues to drain after context reset */ 668 - msleep(20); 668 + mdelay(20); 669 669 } 670 670 } 671 671
+21 -17
drivers/net/ethernet/renesas/sh_eth.c
··· 382 382 .eesipr_value = 0x01ff009f, 383 383 384 384 .tx_check = EESR_FTC | EESR_CND | EESR_DLC | EESR_CD | EESR_RTO, 385 - .eesr_err_check = EESR_TWB | EESR_TABT | EESR_RABT | EESR_RDE | 386 - EESR_RFRMER | EESR_TFE | EESR_TDE | EESR_ECI, 385 + .eesr_err_check = EESR_TWB | EESR_TABT | EESR_RABT | EESR_RFE | 386 + EESR_RDE | EESR_RFRMER | EESR_TFE | EESR_TDE | 387 + EESR_ECI, 387 388 388 389 .apr = 1, 389 390 .mpr = 1, ··· 418 417 .eesipr_value = 0x01ff009f, 419 418 420 419 .tx_check = EESR_FTC | EESR_CND | EESR_DLC | EESR_CD | EESR_RTO, 421 - .eesr_err_check = EESR_TWB | EESR_TABT | EESR_RABT | EESR_RDE | 422 - EESR_RFRMER | EESR_TFE | EESR_TDE | EESR_ECI, 420 + .eesr_err_check = EESR_TWB | EESR_TABT | EESR_RABT | EESR_RFE | 421 + EESR_RDE | EESR_RFRMER | EESR_TFE | EESR_TDE | 422 + EESR_ECI, 423 423 424 424 .apr = 1, 425 425 .mpr = 1, ··· 455 453 .rmcr_value = 0x00000001, 456 454 457 455 .tx_check = EESR_FTC | EESR_CND | EESR_DLC | EESR_CD | EESR_RTO, 458 - .eesr_err_check = EESR_TWB | EESR_TABT | EESR_RABT | EESR_RDE | 459 - EESR_RFRMER | EESR_TFE | EESR_TDE | EESR_ECI, 456 + .eesr_err_check = EESR_TWB | EESR_TABT | EESR_RABT | EESR_RFE | 457 + EESR_RDE | EESR_RFRMER | EESR_TFE | EESR_TDE | 458 + EESR_ECI, 460 459 461 460 .irq_flags = IRQF_SHARED, 462 461 .apr = 1, ··· 524 521 .eesipr_value = DMAC_M_RFRMER | DMAC_M_ECI | 0x003fffff, 525 522 526 523 .tx_check = EESR_TC1 | EESR_FTC, 527 - .eesr_err_check = EESR_TWB1 | EESR_TWB | EESR_TABT | EESR_RABT | \ 528 - EESR_RDE | EESR_RFRMER | EESR_TFE | EESR_TDE | \ 529 - EESR_ECI, 524 + .eesr_err_check = EESR_TWB1 | EESR_TWB | EESR_TABT | EESR_RABT | 525 + EESR_RFE | EESR_RDE | EESR_RFRMER | EESR_TFE | 526 + EESR_TDE | EESR_ECI, 530 527 .fdr_value = 0x0000072f, 531 528 .rmcr_value = 0x00000001, 532 529 ··· 582 579 .eesipr_value = DMAC_M_RFRMER | DMAC_M_ECI | 0x003fffff, 583 580 584 581 .tx_check = EESR_TC1 | EESR_FTC, 585 - .eesr_err_check = EESR_TWB1 | EESR_TWB | EESR_TABT | EESR_RABT | \ 586 - EESR_RDE | EESR_RFRMER | EESR_TFE | EESR_TDE | \ 587 - EESR_ECI, 582 + .eesr_err_check = EESR_TWB1 | EESR_TWB | EESR_TABT | EESR_RABT | 583 + EESR_RFE | EESR_RDE | EESR_RFRMER | EESR_TFE | 584 + EESR_TDE | EESR_ECI, 588 585 589 586 .apr = 1, 590 587 .mpr = 1, ··· 646 643 .eesipr_value = DMAC_M_RFRMER | DMAC_M_ECI | 0x003fffff, 647 644 648 645 .tx_check = EESR_TC1 | EESR_FTC, 649 - .eesr_err_check = EESR_TWB1 | EESR_TWB | EESR_TABT | EESR_RABT | \ 650 - EESR_RDE | EESR_RFRMER | EESR_TFE | EESR_TDE | \ 651 - EESR_ECI, 646 + .eesr_err_check = EESR_TWB1 | EESR_TWB | EESR_TABT | EESR_RABT | 647 + EESR_RFE | EESR_RDE | EESR_RFRMER | EESR_TFE | 648 + EESR_TDE | EESR_ECI, 652 649 653 650 .apr = 1, 654 651 .mpr = 1, ··· 1404 1401 1405 1402 ignore_link: 1406 1403 if (intr_status & EESR_TWB) { 1407 - /* Write buck end. unused write back interrupt */ 1408 - if (intr_status & EESR_TABT) /* Transmit Abort int */ 1404 + /* Unused write back interrupt */ 1405 + if (intr_status & EESR_TABT) { /* Transmit Abort int */ 1409 1406 ndev->stats.tx_aborted_errors++; 1410 1407 if (netif_msg_tx_err(mdp)) 1411 1408 dev_err(&ndev->dev, "Transmit Abort\n"); 1409 + } 1412 1410 } 1413 1411 1414 1412 if (intr_status & EESR_RABT) {
+1 -1
drivers/net/ethernet/renesas/sh_eth.h
··· 258 258 259 259 #define DEFAULT_TX_CHECK (EESR_FTC | EESR_CND | EESR_DLC | EESR_CD | \ 260 260 EESR_RTO) 261 - #define DEFAULT_EESR_ERR_CHECK (EESR_TWB | EESR_TABT | EESR_RABT | \ 261 + #define DEFAULT_EESR_ERR_CHECK (EESR_TWB | EESR_TABT | EESR_RABT | EESR_RFE | \ 262 262 EESR_RDE | EESR_RFRMER | EESR_ADE | \ 263 263 EESR_TFE | EESR_TDE | EESR_ECI) 264 264
+1 -1
drivers/net/ethernet/sfc/efx.c
··· 2115 2115 struct efx_nic *efx = pci_get_drvdata(to_pci_dev(dev)); 2116 2116 return sprintf(buf, "%d\n", efx->phy_type); 2117 2117 } 2118 - static DEVICE_ATTR(phy_type, 0644, show_phy_type, NULL); 2118 + static DEVICE_ATTR(phy_type, 0444, show_phy_type, NULL); 2119 2119 2120 2120 static int efx_register_netdev(struct efx_nic *efx) 2121 2121 {
+2 -2
drivers/net/ethernet/stmicro/stmmac/common.h
··· 287 287 #define MAC_RNABLE_RX 0x00000004 /* Receiver Enable */ 288 288 289 289 /* Default LPI timers */ 290 - #define STMMAC_DEFAULT_LIT_LS_TIMER 0x3E8 291 - #define STMMAC_DEFAULT_TWT_LS_TIMER 0x0 290 + #define STMMAC_DEFAULT_LIT_LS 0x3E8 291 + #define STMMAC_DEFAULT_TWT_LS 0x0 292 292 293 293 #define STMMAC_CHAIN_MODE 0x1 294 294 #define STMMAC_RING_MODE 0x2
+30 -34
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 104 104 static int eee_timer = STMMAC_DEFAULT_LPI_TIMER; 105 105 module_param(eee_timer, int, S_IRUGO | S_IWUSR); 106 106 MODULE_PARM_DESC(eee_timer, "LPI tx expiration time in msec"); 107 - #define STMMAC_LPI_TIMER(x) (jiffies + msecs_to_jiffies(x)) 107 + #define STMMAC_LPI_T(x) (jiffies + msecs_to_jiffies(x)) 108 108 109 109 /* By default the driver will use the ring mode to manage tx and rx descriptors 110 110 * but passing this value so user can force to use the chain instead of the ring ··· 260 260 struct stmmac_priv *priv = (struct stmmac_priv *)arg; 261 261 262 262 stmmac_enable_eee_mode(priv); 263 - mod_timer(&priv->eee_ctrl_timer, STMMAC_LPI_TIMER(eee_timer)); 263 + mod_timer(&priv->eee_ctrl_timer, STMMAC_LPI_T(eee_timer)); 264 264 } 265 265 266 266 /** ··· 276 276 { 277 277 bool ret = false; 278 278 279 + /* Using PCS we cannot dial with the phy registers at this stage 280 + * so we do not support extra feature like EEE. 281 + */ 282 + if ((priv->pcs == STMMAC_PCS_RGMII) || (priv->pcs == STMMAC_PCS_TBI) || 283 + (priv->pcs == STMMAC_PCS_RTBI)) 284 + goto out; 285 + 279 286 /* MAC core supports the EEE feature. */ 280 287 if (priv->dma_cap.eee) { 281 288 /* Check if the PHY supports EEE */ 282 289 if (phy_init_eee(priv->phydev, 1)) 283 290 goto out; 284 291 285 - priv->eee_active = 1; 286 - init_timer(&priv->eee_ctrl_timer); 287 - priv->eee_ctrl_timer.function = stmmac_eee_ctrl_timer; 288 - priv->eee_ctrl_timer.data = (unsigned long)priv; 289 - priv->eee_ctrl_timer.expires = STMMAC_LPI_TIMER(eee_timer); 290 - add_timer(&priv->eee_ctrl_timer); 292 + if (!priv->eee_active) { 293 + priv->eee_active = 1; 294 + init_timer(&priv->eee_ctrl_timer); 295 + priv->eee_ctrl_timer.function = stmmac_eee_ctrl_timer; 296 + priv->eee_ctrl_timer.data = (unsigned long)priv; 297 + priv->eee_ctrl_timer.expires = STMMAC_LPI_T(eee_timer); 298 + add_timer(&priv->eee_ctrl_timer); 291 299 292 - priv->hw->mac->set_eee_timer(priv->ioaddr, 293 - STMMAC_DEFAULT_LIT_LS_TIMER, 294 - priv->tx_lpi_timer); 300 + priv->hw->mac->set_eee_timer(priv->ioaddr, 301 + STMMAC_DEFAULT_LIT_LS, 302 + priv->tx_lpi_timer); 303 + } else 304 + /* Set HW EEE according to the speed */ 305 + priv->hw->mac->set_eee_pls(priv->ioaddr, 306 + priv->phydev->link); 295 307 296 308 pr_info("stmmac: Energy-Efficient Ethernet initialized\n"); 297 309 ··· 311 299 } 312 300 out: 313 301 return ret; 314 - } 315 - 316 - /** 317 - * stmmac_eee_adjust: adjust HW EEE according to the speed 318 - * @priv: driver private structure 319 - * Description: 320 - * When the EEE has been already initialised we have to 321 - * modify the PLS bit in the LPI ctrl & status reg according 322 - * to the PHY link status. For this reason. 323 - */ 324 - static void stmmac_eee_adjust(struct stmmac_priv *priv) 325 - { 326 - if (priv->eee_enabled) 327 - priv->hw->mac->set_eee_pls(priv->ioaddr, priv->phydev->link); 328 302 } 329 303 330 304 /* stmmac_get_tx_hwtstamp: get HW TX timestamps ··· 736 738 if (new_state && netif_msg_link(priv)) 737 739 phy_print_status(phydev); 738 740 739 - stmmac_eee_adjust(priv); 741 + /* At this stage, it could be needed to setup the EEE or adjust some 742 + * MAC related HW registers. 743 + */ 744 + priv->eee_enabled = stmmac_eee_init(priv); 740 745 741 746 spin_unlock_irqrestore(&priv->lock, flags); 742 747 } ··· 1251 1250 1252 1251 if ((priv->eee_enabled) && (!priv->tx_path_in_lpi_mode)) { 1253 1252 stmmac_enable_eee_mode(priv); 1254 - mod_timer(&priv->eee_ctrl_timer, STMMAC_LPI_TIMER(eee_timer)); 1253 + mod_timer(&priv->eee_ctrl_timer, STMMAC_LPI_T(eee_timer)); 1255 1254 } 1256 1255 spin_unlock(&priv->tx_lock); 1257 1256 } ··· 1645 1644 if (priv->phydev) 1646 1645 phy_start(priv->phydev); 1647 1646 1648 - priv->tx_lpi_timer = STMMAC_DEFAULT_TWT_LS_TIMER; 1647 + priv->tx_lpi_timer = STMMAC_DEFAULT_TWT_LS; 1649 1648 1650 - /* Using PCS we cannot dial with the phy registers at this stage 1651 - * so we do not support extra feature like EEE. 1652 - */ 1653 - if (priv->pcs != STMMAC_PCS_RGMII && priv->pcs != STMMAC_PCS_TBI && 1654 - priv->pcs != STMMAC_PCS_RTBI) 1655 - priv->eee_enabled = stmmac_eee_init(priv); 1649 + priv->eee_enabled = stmmac_eee_init(priv); 1656 1650 1657 1651 stmmac_init_tx_coalesce(priv); 1658 1652
+3
drivers/net/ethernet/ti/cpsw.c
··· 1974 1974 { 1975 1975 struct platform_device *pdev = to_platform_device(dev); 1976 1976 struct net_device *ndev = platform_get_drvdata(pdev); 1977 + struct cpsw_priv *priv = netdev_priv(ndev); 1977 1978 1978 1979 if (netif_running(ndev)) 1979 1980 cpsw_ndo_stop(ndev); 1981 + soft_reset("sliver 0", &priv->slaves[0].sliver->soft_reset); 1982 + soft_reset("sliver 1", &priv->slaves[1].sliver->soft_reset); 1980 1983 pm_runtime_put_sync(&pdev->dev); 1981 1984 1982 1985 return 0;
+7
drivers/net/ethernet/ti/davinci_cpdma.c
··· 706 706 } 707 707 708 708 buffer = dma_map_single(ctlr->dev, data, len, chan->dir); 709 + ret = dma_mapping_error(ctlr->dev, buffer); 710 + if (ret) { 711 + cpdma_desc_free(ctlr->pool, desc, 1); 712 + ret = -EINVAL; 713 + goto unlock_ret; 714 + } 715 + 709 716 mode = CPDMA_DESC_OWNER | CPDMA_DESC_SOP | CPDMA_DESC_EOP; 710 717 cpdma_desc_to_port(chan, mode, directed); 711 718
+4 -2
drivers/net/macvtap.c
··· 589 589 return -EMSGSIZE; 590 590 num_pages = get_user_pages_fast(base, size, 0, &page[i]); 591 591 if (num_pages != size) { 592 - for (i = 0; i < num_pages; i++) 593 - put_page(page[i]); 592 + int j; 593 + 594 + for (j = 0; j < num_pages; j++) 595 + put_page(page[i + j]); 594 596 return -EFAULT; 595 597 } 596 598 truesize = size * PAGE_SIZE;
+4 -2
drivers/net/tun.c
··· 1008 1008 return -EMSGSIZE; 1009 1009 num_pages = get_user_pages_fast(base, size, 0, &page[i]); 1010 1010 if (num_pages != size) { 1011 - for (i = 0; i < num_pages; i++) 1012 - put_page(page[i]); 1011 + int j; 1012 + 1013 + for (j = 0; j < num_pages; j++) 1014 + put_page(page[i + j]); 1013 1015 return -EFAULT; 1014 1016 } 1015 1017 truesize = size * PAGE_SIZE;
+7 -1
drivers/net/usb/qmi_wwan.c
··· 592 592 {QMI_GOBI1K_DEVICE(0x03f0, 0x1f1d)}, /* HP un2400 Gobi Modem Device */ 593 593 {QMI_GOBI1K_DEVICE(0x04da, 0x250d)}, /* Panasonic Gobi Modem device */ 594 594 {QMI_GOBI1K_DEVICE(0x413c, 0x8172)}, /* Dell Gobi Modem device */ 595 - {QMI_GOBI1K_DEVICE(0x1410, 0xa001)}, /* Novatel Gobi Modem device */ 595 + {QMI_GOBI1K_DEVICE(0x1410, 0xa001)}, /* Novatel/Verizon USB-1000 */ 596 + {QMI_GOBI1K_DEVICE(0x1410, 0xa002)}, /* Novatel Gobi Modem device */ 597 + {QMI_GOBI1K_DEVICE(0x1410, 0xa003)}, /* Novatel Gobi Modem device */ 598 + {QMI_GOBI1K_DEVICE(0x1410, 0xa004)}, /* Novatel Gobi Modem device */ 599 + {QMI_GOBI1K_DEVICE(0x1410, 0xa005)}, /* Novatel Gobi Modem device */ 600 + {QMI_GOBI1K_DEVICE(0x1410, 0xa006)}, /* Novatel Gobi Modem device */ 601 + {QMI_GOBI1K_DEVICE(0x1410, 0xa007)}, /* Novatel Gobi Modem device */ 596 602 {QMI_GOBI1K_DEVICE(0x0b05, 0x1776)}, /* Asus Gobi Modem device */ 597 603 {QMI_GOBI1K_DEVICE(0x19d2, 0xfff3)}, /* ONDA Gobi Modem device */ 598 604 {QMI_GOBI1K_DEVICE(0x05c6, 0x9001)}, /* Generic Gobi Modem device */
+21 -5
drivers/net/wan/dlci.c
··· 384 384 struct frad_local *flp; 385 385 struct net_device *master, *slave; 386 386 int err; 387 + bool found = false; 388 + 389 + rtnl_lock(); 387 390 388 391 /* validate slave device */ 389 392 master = __dev_get_by_name(&init_net, dlci->devname); 390 - if (!master) 391 - return -ENODEV; 393 + if (!master) { 394 + err = -ENODEV; 395 + goto out; 396 + } 397 + 398 + list_for_each_entry(dlp, &dlci_devs, list) { 399 + if (dlp->master == master) { 400 + found = true; 401 + break; 402 + } 403 + } 404 + if (!found) { 405 + err = -ENODEV; 406 + goto out; 407 + } 392 408 393 409 if (netif_running(master)) { 394 - return -EBUSY; 410 + err = -EBUSY; 411 + goto out; 395 412 } 396 413 397 414 dlp = netdev_priv(master); 398 415 slave = dlp->slave; 399 416 flp = netdev_priv(slave); 400 417 401 - rtnl_lock(); 402 418 err = (*flp->deassoc)(slave, master); 403 419 if (!err) { 404 420 list_del(&dlp->list); ··· 423 407 424 408 dev_put(slave); 425 409 } 410 + out: 426 411 rtnl_unlock(); 427 - 428 412 return err; 429 413 } 430 414
+66
drivers/parisc/iosapic.c
··· 811 811 return pcidev->irq; 812 812 } 813 813 814 + static struct iosapic_info *first_isi = NULL; 815 + 816 + #ifdef CONFIG_64BIT 817 + int iosapic_serial_irq(int num) 818 + { 819 + struct iosapic_info *isi = first_isi; 820 + struct irt_entry *irte = NULL; /* only used if PAT PDC */ 821 + struct vector_info *vi; 822 + int isi_line; /* line used by device */ 823 + 824 + /* lookup IRT entry for isi/slot/pin set */ 825 + irte = &irt_cell[num]; 826 + 827 + DBG_IRT("iosapic_serial_irq(): irte %p %x %x %x %x %x %x %x %x\n", 828 + irte, 829 + irte->entry_type, 830 + irte->entry_length, 831 + irte->polarity_trigger, 832 + irte->src_bus_irq_devno, 833 + irte->src_bus_id, 834 + irte->src_seg_id, 835 + irte->dest_iosapic_intin, 836 + (u32) irte->dest_iosapic_addr); 837 + isi_line = irte->dest_iosapic_intin; 838 + 839 + /* get vector info for this input line */ 840 + vi = isi->isi_vector + isi_line; 841 + DBG_IRT("iosapic_serial_irq: line %d vi 0x%p\n", isi_line, vi); 842 + 843 + /* If this IRQ line has already been setup, skip it */ 844 + if (vi->irte) 845 + goto out; 846 + 847 + vi->irte = irte; 848 + 849 + /* 850 + * Allocate processor IRQ 851 + * 852 + * XXX/FIXME The txn_alloc_irq() code and related code should be 853 + * moved to enable_irq(). That way we only allocate processor IRQ 854 + * bits for devices that actually have drivers claiming them. 855 + * Right now we assign an IRQ to every PCI device present, 856 + * regardless of whether it's used or not. 
857 + */ 858 + vi->txn_irq = txn_alloc_irq(8); 859 + 860 + if (vi->txn_irq < 0) 861 + panic("I/O sapic: couldn't get TXN IRQ\n"); 862 + 863 + /* enable_irq() will use txn_* to program IRdT */ 864 + vi->txn_addr = txn_alloc_addr(vi->txn_irq); 865 + vi->txn_data = txn_alloc_data(vi->txn_irq); 866 + 867 + vi->eoi_addr = isi->addr + IOSAPIC_REG_EOI; 868 + vi->eoi_data = cpu_to_le32(vi->txn_data); 869 + 870 + cpu_claim_irq(vi->txn_irq, &iosapic_interrupt_type, vi); 871 + 872 + out: 873 + 874 + return vi->txn_irq; 875 + } 876 + #endif 877 + 814 878 815 879 /* 816 880 ** squirrel away the I/O Sapic Version ··· 941 877 vip->irqline = (unsigned char) cnt; 942 878 vip->iosapic = isi; 943 879 } 880 + if (!first_isi) 881 + first_isi = isi; 944 882 return isi; 945 883 } 946 884
+37 -16
drivers/pci/hotplug/acpiphp_glue.c
··· 61 61 static void handle_hotplug_event_bridge (acpi_handle, u32, void *); 62 62 static void acpiphp_sanitize_bus(struct pci_bus *bus); 63 63 static void acpiphp_set_hpp_values(struct pci_bus *bus); 64 + static void hotplug_event_func(acpi_handle handle, u32 type, void *context); 64 65 static void handle_hotplug_event_func(acpi_handle handle, u32 type, void *context); 65 66 static void free_bridge(struct kref *kref); 66 67 ··· 148 147 149 148 150 149 static const struct acpi_dock_ops acpiphp_dock_ops = { 151 - .handler = handle_hotplug_event_func, 150 + .handler = hotplug_event_func, 152 151 }; 153 152 154 153 /* Check whether the PCI device is managed by native PCIe hotplug driver */ ··· 178 177 return false; 179 178 180 179 return true; 180 + } 181 + 182 + static void acpiphp_dock_init(void *data) 183 + { 184 + struct acpiphp_func *func = data; 185 + 186 + get_bridge(func->slot->bridge); 187 + } 188 + 189 + static void acpiphp_dock_release(void *data) 190 + { 191 + struct acpiphp_func *func = data; 192 + 193 + put_bridge(func->slot->bridge); 181 194 } 182 195 183 196 /* callback routine to register each ACPI PCI slot object */ ··· 313 298 */ 314 299 newfunc->flags &= ~FUNC_HAS_EJ0; 315 300 if (register_hotplug_dock_device(handle, 316 - &acpiphp_dock_ops, newfunc)) 301 + &acpiphp_dock_ops, newfunc, 302 + acpiphp_dock_init, acpiphp_dock_release)) 317 303 dbg("failed to register dock device\n"); 318 304 319 305 /* we need to be notified when dock events happen ··· 686 670 struct pci_bus *bus = slot->bridge->pci_bus; 687 671 struct acpiphp_func *func; 688 672 int num, max, pass; 673 + LIST_HEAD(add_list); 689 674 690 675 if (slot->flags & SLOT_ENABLED) 691 676 goto err_exit; ··· 711 694 max = pci_scan_bridge(bus, dev, max, pass); 712 695 if (pass && dev->subordinate) { 713 696 check_hotplug_bridge(slot, dev); 714 - pci_bus_size_bridges(dev->subordinate); 697 + pcibios_resource_survey_bus(dev->subordinate); 698 + __pci_bus_size_bridges(dev->subordinate, 699 + 
&add_list); 715 700 } 716 701 } 717 702 } 718 703 } 719 704 720 - pci_bus_assign_resources(bus); 705 + __pci_bus_assign_resources(bus, &add_list, NULL); 721 706 acpiphp_sanitize_bus(bus); 722 707 acpiphp_set_hpp_values(bus); 723 708 acpiphp_set_acpi_region(slot); ··· 1084 1065 alloc_acpi_hp_work(handle, type, context, _handle_hotplug_event_bridge); 1085 1066 } 1086 1067 1087 - static void _handle_hotplug_event_func(struct work_struct *work) 1068 + static void hotplug_event_func(acpi_handle handle, u32 type, void *context) 1088 1069 { 1089 - struct acpiphp_func *func; 1070 + struct acpiphp_func *func = context; 1090 1071 char objname[64]; 1091 1072 struct acpi_buffer buffer = { .length = sizeof(objname), 1092 1073 .pointer = objname }; 1093 - struct acpi_hp_work *hp_work; 1094 - acpi_handle handle; 1095 - u32 type; 1096 - 1097 - hp_work = container_of(work, struct acpi_hp_work, work); 1098 - handle = hp_work->handle; 1099 - type = hp_work->type; 1100 - func = (struct acpiphp_func *)hp_work->context; 1101 - 1102 - acpi_scan_lock_acquire(); 1103 1074 1104 1075 acpi_get_name(handle, ACPI_FULL_PATHNAME, &buffer); 1105 1076 ··· 1122 1113 warn("notify_handler: unknown event type 0x%x for %s\n", type, objname); 1123 1114 break; 1124 1115 } 1116 + } 1117 + 1118 + static void _handle_hotplug_event_func(struct work_struct *work) 1119 + { 1120 + struct acpi_hp_work *hp_work; 1121 + struct acpiphp_func *func; 1122 + 1123 + hp_work = container_of(work, struct acpi_hp_work, work); 1124 + func = hp_work->context; 1125 + acpi_scan_lock_acquire(); 1126 + 1127 + hotplug_event_func(hp_work->handle, hp_work->type, func); 1125 1128 1126 1129 acpi_scan_lock_release(); 1127 1130 kfree(hp_work); /* allocated in handle_hotplug_event_func */
+5
drivers/pci/pci.h
··· 202 202 struct resource *res, unsigned int reg); 203 203 int pci_resource_bar(struct pci_dev *dev, int resno, enum pci_bar_type *type); 204 204 void pci_configure_ari(struct pci_dev *dev); 205 + void __ref __pci_bus_size_bridges(struct pci_bus *bus, 206 + struct list_head *realloc_head); 207 + void __ref __pci_bus_assign_resources(const struct pci_bus *bus, 208 + struct list_head *realloc_head, 209 + struct list_head *fail_head); 205 210 206 211 /** 207 212 * pci_ari_enabled - query ARI forwarding status
+4 -4
drivers/pci/setup-bus.c
··· 1044 1044 ; 1045 1045 } 1046 1046 1047 - static void __ref __pci_bus_size_bridges(struct pci_bus *bus, 1047 + void __ref __pci_bus_size_bridges(struct pci_bus *bus, 1048 1048 struct list_head *realloc_head) 1049 1049 { 1050 1050 struct pci_dev *dev; ··· 1115 1115 } 1116 1116 EXPORT_SYMBOL(pci_bus_size_bridges); 1117 1117 1118 - static void __ref __pci_bus_assign_resources(const struct pci_bus *bus, 1119 - struct list_head *realloc_head, 1120 - struct list_head *fail_head) 1118 + void __ref __pci_bus_assign_resources(const struct pci_bus *bus, 1119 + struct list_head *realloc_head, 1120 + struct list_head *fail_head) 1121 1121 { 1122 1122 struct pci_bus *b; 1123 1123 struct pci_dev *dev;
+1 -1
drivers/regulator/tps6586x-regulator.c
··· 439 439 440 440 static struct platform_driver tps6586x_regulator_driver = { 441 441 .driver = { 442 - .name = "tps6586x-pmic", 442 + .name = "tps6586x-regulator", 443 443 .owner = THIS_MODULE, 444 444 }, 445 445 .probe = tps6586x_regulator_probe,
+5 -2
drivers/scsi/fcoe/fcoe.c
··· 1656 1656 1657 1657 if (fcoe->netdev->priv_flags & IFF_802_1Q_VLAN && 1658 1658 fcoe->realdev->features & NETIF_F_HW_VLAN_CTAG_TX) { 1659 - skb->vlan_tci = VLAN_TAG_PRESENT | 1660 - vlan_dev_vlan_id(fcoe->netdev); 1659 + /* must set skb->dev before calling vlan_put_tag */ 1661 1660 skb->dev = fcoe->realdev; 1661 + skb = __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), 1662 + vlan_dev_vlan_id(fcoe->netdev)); 1663 + if (!skb) 1664 + return -ENOMEM; 1662 1665 } else 1663 1666 skb->dev = fcoe->netdev; 1664 1667
+5 -10
drivers/scsi/fcoe/fcoe_ctlr.c
··· 1548 1548 { 1549 1549 struct fcoe_fcf *fcf; 1550 1550 struct fcoe_fcf *best = fip->sel_fcf; 1551 - struct fcoe_fcf *first; 1552 - 1553 - first = list_first_entry(&fip->fcfs, struct fcoe_fcf, list); 1554 1551 1555 1552 list_for_each_entry(fcf, &fip->fcfs, list) { 1556 1553 LIBFCOE_FIP_DBG(fip, "consider FCF fab %16.16llx " ··· 1565 1568 "" : "un"); 1566 1569 continue; 1567 1570 } 1568 - if (fcf->fabric_name != first->fabric_name || 1569 - fcf->vfid != first->vfid || 1570 - fcf->fc_map != first->fc_map) { 1571 + if (!best || fcf->pri < best->pri || best->flogi_sent) 1572 + best = fcf; 1573 + if (fcf->fabric_name != best->fabric_name || 1574 + fcf->vfid != best->vfid || 1575 + fcf->fc_map != best->fc_map) { 1571 1576 LIBFCOE_FIP_DBG(fip, "Conflicting fabric, VFID, " 1572 1577 "or FC-MAP\n"); 1573 1578 return NULL; 1574 1579 } 1575 - if (fcf->flogi_sent) 1576 - continue; 1577 - if (!best || fcf->pri < best->pri || best->flogi_sent) 1578 - best = fcf; 1579 1580 } 1580 1581 fip->sel_fcf = best; 1581 1582 if (best) {
-16
drivers/scsi/ipr.c
··· 8980 8980 if (!ioa_cfg->res_entries) 8981 8981 goto out; 8982 8982 8983 - if (ioa_cfg->sis64) { 8984 - ioa_cfg->target_ids = kzalloc(sizeof(unsigned long) * 8985 - BITS_TO_LONGS(ioa_cfg->max_devs_supported), GFP_KERNEL); 8986 - ioa_cfg->array_ids = kzalloc(sizeof(unsigned long) * 8987 - BITS_TO_LONGS(ioa_cfg->max_devs_supported), GFP_KERNEL); 8988 - ioa_cfg->vset_ids = kzalloc(sizeof(unsigned long) * 8989 - BITS_TO_LONGS(ioa_cfg->max_devs_supported), GFP_KERNEL); 8990 - 8991 - if (!ioa_cfg->target_ids || !ioa_cfg->array_ids 8992 - || !ioa_cfg->vset_ids) 8993 - goto out_free_res_entries; 8994 - } 8995 - 8996 8983 for (i = 0; i < ioa_cfg->max_devs_supported; i++) { 8997 8984 list_add_tail(&ioa_cfg->res_entries[i].queue, &ioa_cfg->free_res_q); 8998 8985 ioa_cfg->res_entries[i].ioa_cfg = ioa_cfg; ··· 9076 9089 ioa_cfg->vpd_cbs, ioa_cfg->vpd_cbs_dma); 9077 9090 out_free_res_entries: 9078 9091 kfree(ioa_cfg->res_entries); 9079 - kfree(ioa_cfg->target_ids); 9080 - kfree(ioa_cfg->array_ids); 9081 - kfree(ioa_cfg->vset_ids); 9082 9092 goto out; 9083 9093 } 9084 9094
+3 -3
drivers/scsi/ipr.h
··· 1440 1440 /* 1441 1441 * Bitmaps for SIS64 generated target values 1442 1442 */ 1443 - unsigned long *target_ids; 1444 - unsigned long *array_ids; 1445 - unsigned long *vset_ids; 1443 + unsigned long target_ids[BITS_TO_LONGS(IPR_MAX_SIS64_DEVS)]; 1444 + unsigned long array_ids[BITS_TO_LONGS(IPR_MAX_SIS64_DEVS)]; 1445 + unsigned long vset_ids[BITS_TO_LONGS(IPR_MAX_SIS64_DEVS)]; 1446 1446 1447 1447 u16 type; /* CCIN of the card */ 1448 1448
+24 -13
drivers/scsi/libfc/fc_exch.c
··· 463 463 fc_exch_release(ep); /* drop hold for exch in mp */ 464 464 } 465 465 466 - /** 467 - * fc_seq_send() - Send a frame using existing sequence/exchange pair 468 - * @lport: The local port that the exchange will be sent on 469 - * @sp: The sequence to be sent 470 - * @fp: The frame to be sent on the exchange 471 - */ 472 - static int fc_seq_send(struct fc_lport *lport, struct fc_seq *sp, 466 + static int fc_seq_send_locked(struct fc_lport *lport, struct fc_seq *sp, 473 467 struct fc_frame *fp) 474 468 { 475 469 struct fc_exch *ep; ··· 473 479 u8 fh_type = fh->fh_type; 474 480 475 481 ep = fc_seq_exch(sp); 476 - WARN_ON((ep->esb_stat & ESB_ST_SEQ_INIT) != ESB_ST_SEQ_INIT); 482 + WARN_ON(!(ep->esb_stat & ESB_ST_SEQ_INIT)); 477 483 478 484 f_ctl = ntoh24(fh->fh_f_ctl); 479 485 fc_exch_setup_hdr(ep, fp, f_ctl); ··· 496 502 error = lport->tt.frame_send(lport, fp); 497 503 498 504 if (fh_type == FC_TYPE_BLS) 499 - return error; 505 + goto out; 500 506 501 507 /* 502 508 * Update the exchange and sequence flags, 503 509 * assuming all frames for the sequence have been sent. 504 510 * We can only be called to send once for each sequence. 
505 511 */ 506 - spin_lock_bh(&ep->ex_lock); 507 512 ep->f_ctl = f_ctl & ~FC_FC_FIRST_SEQ; /* not first seq */ 508 513 if (f_ctl & FC_FC_SEQ_INIT) 509 514 ep->esb_stat &= ~ESB_ST_SEQ_INIT; 515 + out: 516 + return error; 517 + } 518 + 519 + /** 520 + * fc_seq_send() - Send a frame using existing sequence/exchange pair 521 + * @lport: The local port that the exchange will be sent on 522 + * @sp: The sequence to be sent 523 + * @fp: The frame to be sent on the exchange 524 + */ 525 + static int fc_seq_send(struct fc_lport *lport, struct fc_seq *sp, 526 + struct fc_frame *fp) 527 + { 528 + struct fc_exch *ep; 529 + int error; 530 + ep = fc_seq_exch(sp); 531 + spin_lock_bh(&ep->ex_lock); 532 + error = fc_seq_send_locked(lport, sp, fp); 510 533 spin_unlock_bh(&ep->ex_lock); 511 534 return error; 512 535 } ··· 640 629 if (fp) { 641 630 fc_fill_fc_hdr(fp, FC_RCTL_BA_ABTS, ep->did, ep->sid, 642 631 FC_TYPE_BLS, FC_FC_END_SEQ | FC_FC_SEQ_INIT, 0); 643 - error = fc_seq_send(ep->lp, sp, fp); 632 + error = fc_seq_send_locked(ep->lp, sp, fp); 644 633 } else 645 634 error = -ENOBUFS; 646 635 return error; ··· 1143 1132 f_ctl = FC_FC_LAST_SEQ | FC_FC_END_SEQ | FC_FC_SEQ_INIT; 1144 1133 f_ctl |= ep->f_ctl; 1145 1134 fc_fill_fc_hdr(fp, rctl, ep->did, ep->sid, fh_type, f_ctl, 0); 1146 - fc_seq_send(ep->lp, sp, fp); 1135 + fc_seq_send_locked(ep->lp, sp, fp); 1147 1136 } 1148 1137 1149 1138 /** ··· 1318 1307 ap->ba_low_seq_cnt = htons(sp->cnt); 1319 1308 } 1320 1309 sp = fc_seq_start_next_locked(sp); 1321 - spin_unlock_bh(&ep->ex_lock); 1322 1310 fc_seq_send_last(sp, fp, FC_RCTL_BA_ACC, FC_TYPE_BLS); 1311 + spin_unlock_bh(&ep->ex_lock); 1323 1312 fc_frame_free(rx_fp); 1324 1313 return; 1325 1314
+1 -1
drivers/scsi/libfc/fc_rport.c
··· 1962 1962 rdata->flags |= FC_RP_FLAGS_RETRY; 1963 1963 rdata->supported_classes = FC_COS_CLASS3; 1964 1964 1965 - if (!(lport->service_params & FC_RPORT_ROLE_FCP_INITIATOR)) 1965 + if (!(lport->service_params & FCP_SPPF_INIT_FCN)) 1966 1966 return 0; 1967 1967 1968 1968 spp->spp_flags |= rspp->spp_flags & FC_SPP_EST_IMG_PAIR;
+11
drivers/scsi/qla2xxx/qla_inline.h
··· 278 278 279 279 set_bit(HOST_RAMP_UP_QUEUE_DEPTH, &vha->dpc_flags); 280 280 } 281 + 282 + static inline void 283 + qla2x00_handle_mbx_completion(struct qla_hw_data *ha, int status) 284 + { 285 + if (test_bit(MBX_INTR_WAIT, &ha->mbx_cmd_flags) && 286 + (status & MBX_INTERRUPT) && ha->flags.mbox_int) { 287 + set_bit(MBX_INTERRUPT, &ha->mbx_cmd_flags); 288 + clear_bit(MBX_INTR_WAIT, &ha->mbx_cmd_flags); 289 + complete(&ha->mbx_intr_comp); 290 + } 291 + }
+4 -23
drivers/scsi/qla2xxx/qla_isr.c
··· 104 104 RD_REG_WORD(&reg->hccr); 105 105 } 106 106 } 107 + qla2x00_handle_mbx_completion(ha, status); 107 108 spin_unlock_irqrestore(&ha->hardware_lock, flags); 108 - 109 - if (test_bit(MBX_INTR_WAIT, &ha->mbx_cmd_flags) && 110 - (status & MBX_INTERRUPT) && ha->flags.mbox_int) { 111 - set_bit(MBX_INTERRUPT, &ha->mbx_cmd_flags); 112 - complete(&ha->mbx_intr_comp); 113 - } 114 109 115 110 return (IRQ_HANDLED); 116 111 } ··· 216 221 WRT_REG_WORD(&reg->hccr, HCCR_CLR_RISC_INT); 217 222 RD_REG_WORD_RELAXED(&reg->hccr); 218 223 } 224 + qla2x00_handle_mbx_completion(ha, status); 219 225 spin_unlock_irqrestore(&ha->hardware_lock, flags); 220 - 221 - if (test_bit(MBX_INTR_WAIT, &ha->mbx_cmd_flags) && 222 - (status & MBX_INTERRUPT) && ha->flags.mbox_int) { 223 - set_bit(MBX_INTERRUPT, &ha->mbx_cmd_flags); 224 - complete(&ha->mbx_intr_comp); 225 - } 226 226 227 227 return (IRQ_HANDLED); 228 228 } ··· 2603 2613 if (unlikely(IS_QLA83XX(ha) && (ha->pdev->revision == 1))) 2604 2614 ndelay(3500); 2605 2615 } 2616 + qla2x00_handle_mbx_completion(ha, status); 2606 2617 spin_unlock_irqrestore(&ha->hardware_lock, flags); 2607 - 2608 - if (test_bit(MBX_INTR_WAIT, &ha->mbx_cmd_flags) && 2609 - (status & MBX_INTERRUPT) && ha->flags.mbox_int) { 2610 - set_bit(MBX_INTERRUPT, &ha->mbx_cmd_flags); 2611 - complete(&ha->mbx_intr_comp); 2612 - } 2613 2618 2614 2619 return IRQ_HANDLED; 2615 2620 } ··· 2748 2763 } 2749 2764 WRT_REG_DWORD(&reg->hccr, HCCRX_CLR_RISC_INT); 2750 2765 } while (0); 2766 + qla2x00_handle_mbx_completion(ha, status); 2751 2767 spin_unlock_irqrestore(&ha->hardware_lock, flags); 2752 2768 2753 - if (test_bit(MBX_INTR_WAIT, &ha->mbx_cmd_flags) && 2754 - (status & MBX_INTERRUPT) && ha->flags.mbox_int) { 2755 - set_bit(MBX_INTERRUPT, &ha->mbx_cmd_flags); 2756 - complete(&ha->mbx_intr_comp); 2757 - } 2758 2769 return IRQ_HANDLED; 2759 2770 } 2760 2771
-2
drivers/scsi/qla2xxx/qla_mbx.c
··· 179 179 180 180 wait_for_completion_timeout(&ha->mbx_intr_comp, mcp->tov * HZ); 181 181 182 - clear_bit(MBX_INTR_WAIT, &ha->mbx_cmd_flags); 183 - 184 182 } else { 185 183 ql_dbg(ql_dbg_mbx, vha, 0x1011, 186 184 "Cmd=%x Polling Mode.\n", command);
+2 -8
drivers/scsi/qla2xxx/qla_mr.c
··· 148 148 spin_unlock_irqrestore(&ha->hardware_lock, flags); 149 149 150 150 wait_for_completion_timeout(&ha->mbx_intr_comp, mcp->tov * HZ); 151 - 152 - clear_bit(MBX_INTR_WAIT, &ha->mbx_cmd_flags); 153 - 154 151 } else { 155 152 ql_dbg(ql_dbg_mbx, vha, 0x112c, 156 153 "Cmd=%x Polling Mode.\n", command); ··· 2931 2934 QLAFX00_CLR_INTR_REG(ha, clr_intr); 2932 2935 QLAFX00_RD_INTR_REG(ha); 2933 2936 } 2937 + 2938 + qla2x00_handle_mbx_completion(ha, status); 2934 2939 spin_unlock_irqrestore(&ha->hardware_lock, flags); 2935 2940 2936 - if (test_bit(MBX_INTR_WAIT, &ha->mbx_cmd_flags) && 2937 - (status & MBX_INTERRUPT) && ha->flags.mbox_int) { 2938 - set_bit(MBX_INTERRUPT, &ha->mbx_cmd_flags); 2939 - complete(&ha->mbx_intr_comp); 2940 - } 2941 2941 return IRQ_HANDLED; 2942 2942 } 2943 2943
+10 -16
drivers/scsi/qla2xxx/qla_nx.c
··· 2074 2074 } 2075 2075 WRT_REG_DWORD(&reg->host_int, 0); 2076 2076 } 2077 - spin_unlock_irqrestore(&ha->hardware_lock, flags); 2078 - if (!ha->flags.msi_enabled) 2079 - qla82xx_wr_32(ha, ha->nx_legacy_intr.tgt_mask_reg, 0xfbff); 2080 2077 2081 2078 #ifdef QL_DEBUG_LEVEL_17 2082 2079 if (!irq && ha->flags.eeh_busy) ··· 2082 2085 status, ha->mbx_cmd_flags, ha->flags.mbox_int, stat); 2083 2086 #endif 2084 2087 2085 - if (test_bit(MBX_INTR_WAIT, &ha->mbx_cmd_flags) && 2086 - (status & MBX_INTERRUPT) && ha->flags.mbox_int) { 2087 - set_bit(MBX_INTERRUPT, &ha->mbx_cmd_flags); 2088 - complete(&ha->mbx_intr_comp); 2089 - } 2088 + qla2x00_handle_mbx_completion(ha, status); 2089 + spin_unlock_irqrestore(&ha->hardware_lock, flags); 2090 + 2091 + if (!ha->flags.msi_enabled) 2092 + qla82xx_wr_32(ha, ha->nx_legacy_intr.tgt_mask_reg, 0xfbff); 2093 + 2090 2094 return IRQ_HANDLED; 2091 2095 } 2092 2096 ··· 2147 2149 WRT_REG_DWORD(&reg->host_int, 0); 2148 2150 } while (0); 2149 2151 2150 - spin_unlock_irqrestore(&ha->hardware_lock, flags); 2151 - 2152 2152 #ifdef QL_DEBUG_LEVEL_17 2153 2153 if (!irq && ha->flags.eeh_busy) 2154 2154 ql_log(ql_log_warn, vha, 0x5044, ··· 2154 2158 status, ha->mbx_cmd_flags, ha->flags.mbox_int, stat); 2155 2159 #endif 2156 2160 2157 - if (test_bit(MBX_INTR_WAIT, &ha->mbx_cmd_flags) && 2158 - (status & MBX_INTERRUPT) && ha->flags.mbox_int) { 2159 - set_bit(MBX_INTERRUPT, &ha->mbx_cmd_flags); 2160 - complete(&ha->mbx_intr_comp); 2161 - } 2161 + qla2x00_handle_mbx_completion(ha, status); 2162 + spin_unlock_irqrestore(&ha->hardware_lock, flags); 2163 + 2162 2164 return IRQ_HANDLED; 2163 2165 } 2164 2166 ··· 3339 3345 ha->flags.mbox_busy = 0; 3340 3346 ql_log(ql_log_warn, vha, 0x6010, 3341 3347 "Doing premature completion of mbx command.\n"); 3342 - if (test_bit(MBX_INTR_WAIT, &ha->mbx_cmd_flags)) 3348 + if (test_and_clear_bit(MBX_INTR_WAIT, &ha->mbx_cmd_flags)) 3343 3349 complete(&ha->mbx_intr_comp); 3344 3350 } 3345 3351 }
+5 -1
drivers/scsi/qla2xxx/tcm_qla2xxx.c
··· 688 688 * For FCP_READ with CHECK_CONDITION status, clear cmd->bufflen 689 689 * for qla_tgt_xmit_response LLD code 690 690 */ 691 + if (se_cmd->se_cmd_flags & SCF_OVERFLOW_BIT) { 692 + se_cmd->se_cmd_flags &= ~SCF_OVERFLOW_BIT; 693 + se_cmd->residual_count = 0; 694 + } 691 695 se_cmd->se_cmd_flags |= SCF_UNDERFLOW_BIT; 692 - se_cmd->residual_count = se_cmd->data_length; 696 + se_cmd->residual_count += se_cmd->data_length; 693 697 694 698 cmd->bufflen = 0; 695 699 }
+1 -1
drivers/spi/spi-pxa2xx-dma.c
··· 59 59 int ret; 60 60 61 61 sg_free_table(sgt); 62 - ret = sg_alloc_table(sgt, nents, GFP_KERNEL); 62 + ret = sg_alloc_table(sgt, nents, GFP_ATOMIC); 63 63 if (ret) 64 64 return ret; 65 65 }
+1 -1
drivers/spi/spi-pxa2xx.c
··· 1075 1075 acpi_bus_get_device(ACPI_HANDLE(&pdev->dev), &adev)) 1076 1076 return NULL; 1077 1077 1078 - pdata = devm_kzalloc(&pdev->dev, sizeof(*ssp), GFP_KERNEL); 1078 + pdata = devm_kzalloc(&pdev->dev, sizeof(*pdata), GFP_KERNEL); 1079 1079 if (!pdata) { 1080 1080 dev_err(&pdev->dev, 1081 1081 "failed to allocate memory for platform data\n");
+1 -1
drivers/spi/spi-s3c64xx.c
··· 444 444 } 445 445 446 446 ret = pm_runtime_get_sync(&sdd->pdev->dev); 447 - if (ret != 0) { 447 + if (ret < 0) { 448 448 dev_err(dev, "Failed to enable device: %d\n", ret); 449 449 goto out_tx; 450 450 }
+1 -1
drivers/staging/media/davinci_vpfe/Kconfig
··· 1 1 config VIDEO_DM365_VPFE 2 2 tristate "DM365 VPFE Media Controller Capture Driver" 3 - depends on VIDEO_V4L2 && ARCH_DAVINCI_DM365 && !VIDEO_VPFE_CAPTURE 3 + depends on VIDEO_V4L2 && ARCH_DAVINCI_DM365 && !VIDEO_DM365_ISIF 4 4 select VIDEOBUF2_DMA_CONTIG 5 5 help 6 6 Support for DM365 VPFE based Media Controller Capture driver.
+4 -2
drivers/staging/media/davinci_vpfe/vpfe_mc_capture.c
··· 639 639 if (ret) 640 640 goto probe_free_dev_mem; 641 641 642 - if (vpfe_initialize_modules(vpfe_dev, pdev)) 642 + ret = vpfe_initialize_modules(vpfe_dev, pdev); 643 + if (ret) 643 644 goto probe_disable_clock; 644 645 645 646 vpfe_dev->media_dev.dev = vpfe_dev->pdev; ··· 664 663 /* set the driver data in platform device */ 665 664 platform_set_drvdata(pdev, vpfe_dev); 666 665 /* register subdevs/entities */ 667 - if (vpfe_register_entities(vpfe_dev)) 666 + ret = vpfe_register_entities(vpfe_dev); 667 + if (ret) 668 668 goto probe_out_v4l2_unregister; 669 669 670 670 ret = vpfe_attach_irq(vpfe_dev);
+1
drivers/staging/media/solo6x10/Kconfig
··· 5 5 select VIDEOBUF2_DMA_SG 6 6 select VIDEOBUF2_DMA_CONTIG 7 7 select SND_PCM 8 + select FONT_8x16 8 9 ---help--- 9 10 This driver supports the Softlogic based MPEG-4 and h.264 codec 10 11 cards.
+14 -13
drivers/target/iscsi/iscsi_target_configfs.c
··· 155 155 struct iscsi_tpg_np *tpg_np_iser = NULL; 156 156 char *endptr; 157 157 u32 op; 158 - int rc; 158 + int rc = 0; 159 159 160 160 op = simple_strtoul(page, &endptr, 0); 161 161 if ((op != 1) && (op != 0)) { ··· 174 174 return -EINVAL; 175 175 176 176 if (op) { 177 - int rc = request_module("ib_isert"); 178 - if (rc != 0) 177 + rc = request_module("ib_isert"); 178 + if (rc != 0) { 179 179 pr_warn("Unable to request_module for ib_isert\n"); 180 + rc = 0; 181 + } 180 182 181 183 tpg_np_iser = iscsit_tpg_add_network_portal(tpg, &np->np_sockaddr, 182 184 np->np_ip, tpg_np, ISCSI_INFINIBAND); 183 - if (!tpg_np_iser || IS_ERR(tpg_np_iser)) 185 + if (IS_ERR(tpg_np_iser)) { 186 + rc = PTR_ERR(tpg_np_iser); 184 187 goto out; 188 + } 185 189 } else { 186 190 tpg_np_iser = iscsit_tpg_locate_child_np(tpg_np, ISCSI_INFINIBAND); 187 - if (!tpg_np_iser) 188 - goto out; 189 - 190 - rc = iscsit_tpg_del_network_portal(tpg, tpg_np_iser); 191 - if (rc < 0) 192 - goto out; 191 + if (tpg_np_iser) { 192 + rc = iscsit_tpg_del_network_portal(tpg, tpg_np_iser); 193 + if (rc < 0) 194 + goto out; 195 + } 193 196 } 194 - 195 - printk("lio_target_np_store_iser() done, op: %d\n", op); 196 197 197 198 iscsit_put_tpg(tpg); 198 199 return count; 199 200 out: 200 201 iscsit_put_tpg(tpg); 201 - return -EINVAL; 202 + return rc; 202 203 } 203 204 204 205 TF_NP_BASE_ATTR(lio_target, iser, S_IRUGO | S_IWUSR);
+2 -2
drivers/target/iscsi/iscsi_target_erl0.c
··· 842 842 return 0; 843 843 844 844 sess->time2retain_timer_flags |= ISCSI_TF_STOP; 845 - spin_unlock_bh(&se_tpg->session_lock); 845 + spin_unlock(&se_tpg->session_lock); 846 846 847 847 del_timer_sync(&sess->time2retain_timer); 848 848 849 - spin_lock_bh(&se_tpg->session_lock); 849 + spin_lock(&se_tpg->session_lock); 850 850 sess->time2retain_timer_flags &= ~ISCSI_TF_RUNNING; 851 851 pr_debug("Stopped Time2Retain Timer for SID: %u\n", 852 852 sess->sid);
-3
drivers/target/iscsi/iscsi_target_login.c
··· 984 984 } 985 985 986 986 np->np_transport = t; 987 - printk("Set np->np_transport to %p -> %s\n", np->np_transport, 988 - np->np_transport->name); 989 987 return 0; 990 988 } 991 989 ··· 1000 1002 1001 1003 conn->sock = new_sock; 1002 1004 conn->login_family = np->np_sockaddr.ss_family; 1003 - printk("iSCSI/TCP: Setup conn->sock from new_sock: %p\n", new_sock); 1004 1005 1005 1006 if (np->np_sockaddr.ss_family == AF_INET6) { 1006 1007 memset(&sock_in6, 0, sizeof(struct sockaddr_in6));
-3
drivers/target/iscsi/iscsi_target_nego.c
··· 721 721 722 722 start += strlen(key) + strlen(value) + 2; 723 723 } 724 - 725 - printk("i_buf: %s, s_buf: %s, t_buf: %s\n", i_buf, s_buf, t_buf); 726 - 727 724 /* 728 725 * See 5.3. Login Phase. 729 726 */
+5 -8
drivers/tty/pty.c
··· 244 244 245 245 static int pty_open(struct tty_struct *tty, struct file *filp) 246 246 { 247 - int retval = -ENODEV; 248 - 249 247 if (!tty || !tty->link) 250 - goto out; 248 + return -ENODEV; 251 249 252 - set_bit(TTY_IO_ERROR, &tty->flags); 253 - 254 - retval = -EIO; 255 250 if (test_bit(TTY_OTHER_CLOSED, &tty->flags)) 256 251 goto out; 257 252 if (test_bit(TTY_PTY_LOCK, &tty->link->flags)) ··· 257 262 clear_bit(TTY_IO_ERROR, &tty->flags); 258 263 clear_bit(TTY_OTHER_CLOSED, &tty->link->flags); 259 264 set_bit(TTY_THROTTLED, &tty->flags); 260 - retval = 0; 265 + return 0; 266 + 261 267 out: 262 - return retval; 268 + set_bit(TTY_IO_ERROR, &tty->flags); 269 + return -EIO; 263 270 } 264 271 265 272 static void pty_set_termios(struct tty_struct *tty,
+9 -1
drivers/tty/serial/8250/8250_gsc.c
··· 30 30 unsigned long address; 31 31 int err; 32 32 33 + #ifdef CONFIG_64BIT 34 + extern int iosapic_serial_irq(int cellnum); 35 + if (!dev->irq && (dev->id.sversion == 0xad)) 36 + dev->irq = iosapic_serial_irq(dev->mod_index-1); 37 + #endif 38 + 33 39 if (!dev->irq) { 34 40 /* We find some unattached serial ports by walking native 35 41 * busses. These should be silently ignored. Otherwise, ··· 57 51 memset(&uart, 0, sizeof(uart)); 58 52 uart.port.iotype = UPIO_MEM; 59 53 /* 7.272727MHz on Lasi. Assumed the same for Dino, Wax and Timi. */ 60 - uart.port.uartclk = 7272727; 54 + uart.port.uartclk = (dev->id.sversion != 0xad) ? 55 + 7272727 : 1843200; 61 56 uart.port.mapbase = address; 62 57 uart.port.membase = ioremap_nocache(address, 16); 63 58 uart.port.irq = dev->irq; ··· 80 73 { HPHW_FIO, HVERSION_REV_ANY_ID, HVERSION_ANY_ID, 0x00075 }, 81 74 { HPHW_FIO, HVERSION_REV_ANY_ID, HVERSION_ANY_ID, 0x0008c }, 82 75 { HPHW_FIO, HVERSION_REV_ANY_ID, HVERSION_ANY_ID, 0x0008d }, 76 + { HPHW_FIO, HVERSION_REV_ANY_ID, HVERSION_ANY_ID, 0x000ad }, 83 77 { 0 } 84 78 }; 85 79
+1 -4
drivers/tty/vt/vt_ioctl.c
··· 289 289 struct vc_data *vc = NULL; 290 290 int ret = 0; 291 291 292 - if (!vc_num) 293 - return 0; 294 - 295 292 console_lock(); 296 293 if (VT_BUSY(vc_num)) 297 294 ret = -EBUSY; 298 - else 295 + else if (vc_num) 299 296 vc = vc_deallocate(vc_num); 300 297 console_unlock(); 301 298
+10 -4
drivers/usb/phy/Kconfig
··· 4 4 menuconfig USB_PHY 5 5 bool "USB Physical Layer drivers" 6 6 help 7 - USB controllers (those which are host, device or DRD) need a 8 - device to handle the physical layer signalling, commonly called 9 - a PHY. 7 + Most USB controllers have the physical layer signalling part 8 + (commonly called a PHY) built in. However, dual-role devices 9 + (a.k.a. USB on-the-go) which support being USB master or slave 10 + with the same connector often use an external PHY. 10 11 11 - The following drivers add support for such PHY devices. 12 + The drivers in this submenu add support for such PHY devices. 13 + They are not needed for standard master-only (or the vast 14 + majority of slave-only) USB interfaces. 15 + 16 + If you're not sure if this applies to you, it probably doesn't; 17 + say N here. 12 18 13 19 if USB_PHY 14 20
+2 -1
drivers/usb/serial/ti_usb_3410_5052.c
··· 172 172 { USB_DEVICE(IBM_VENDOR_ID, IBM_4543_PRODUCT_ID) }, 173 173 { USB_DEVICE(IBM_VENDOR_ID, IBM_454B_PRODUCT_ID) }, 174 174 { USB_DEVICE(IBM_VENDOR_ID, IBM_454C_PRODUCT_ID) }, 175 - { USB_DEVICE(ABBOTT_VENDOR_ID, ABBOTT_PRODUCT_ID) }, 175 + { USB_DEVICE(ABBOTT_VENDOR_ID, ABBOTT_STEREO_PLUG_ID) }, 176 + { USB_DEVICE(ABBOTT_VENDOR_ID, ABBOTT_STRIP_PORT_ID) }, 176 177 { USB_DEVICE(TI_VENDOR_ID, FRI2_PRODUCT_ID) }, 177 178 }; 178 179
+3 -1
drivers/usb/serial/ti_usb_3410_5052.h
··· 52 52 53 53 /* Abbott Diabetics vendor and product ids */ 54 54 #define ABBOTT_VENDOR_ID 0x1a61 55 - #define ABBOTT_PRODUCT_ID 0x3410 55 + #define ABBOTT_STEREO_PLUG_ID 0x3410 56 + #define ABBOTT_PRODUCT_ID ABBOTT_STEREO_PLUG_ID 57 + #define ABBOTT_STRIP_PORT_ID 0x3420 56 58 57 59 /* Commands */ 58 60 #define TI_GET_VERSION 0x01
+9 -7
fs/exec.c
··· 1135 1135 set_dumpable(current->mm, suid_dumpable); 1136 1136 } 1137 1137 1138 - /* 1139 - * Flush performance counters when crossing a 1140 - * security domain: 1141 - */ 1142 - if (!get_dumpable(current->mm)) 1143 - perf_event_exit_task(current); 1144 - 1145 1138 /* An exec changes our domain. We are no longer part of the thread 1146 1139 group */ 1147 1140 ··· 1198 1205 1199 1206 commit_creds(bprm->cred); 1200 1207 bprm->cred = NULL; 1208 + 1209 + /* 1210 + * Disable monitoring for regular users 1211 + * when executing setuid binaries. Must 1212 + * wait until new credentials are committed 1213 + * by commit_creds() above 1214 + */ 1215 + if (get_dumpable(current->mm) != SUID_DUMP_USER) 1216 + perf_event_exit_task(current); 1201 1217 /* 1202 1218 * cred_guard_mutex must be held at least to this point to prevent 1203 1219 * ptrace_attach() from altering our determination of the task's
+8 -4
fs/fuse/file.c
··· 2470 2470 .mode = mode 2471 2471 }; 2472 2472 int err; 2473 + bool lock_inode = !(mode & FALLOC_FL_KEEP_SIZE) || 2474 + (mode & FALLOC_FL_PUNCH_HOLE); 2473 2475 2474 2476 if (fc->no_fallocate) 2475 2477 return -EOPNOTSUPP; 2476 2478 2477 - if (mode & FALLOC_FL_PUNCH_HOLE) { 2479 + if (lock_inode) { 2478 2480 mutex_lock(&inode->i_mutex); 2479 - fuse_set_nowrite(inode); 2481 + if (mode & FALLOC_FL_PUNCH_HOLE) 2482 + fuse_set_nowrite(inode); 2480 2483 } 2481 2484 2482 2485 req = fuse_get_req_nopages(fc); ··· 2514 2511 fuse_invalidate_attr(inode); 2515 2512 2516 2513 out: 2517 - if (mode & FALLOC_FL_PUNCH_HOLE) { 2518 - fuse_release_nowrite(inode); 2514 + if (lock_inode) { 2515 + if (mode & FALLOC_FL_PUNCH_HOLE) 2516 + fuse_release_nowrite(inode); 2519 2517 mutex_unlock(&inode->i_mutex); 2520 2518 } 2521 2519
+6
fs/internal.h
··· 132 132 extern ssize_t __kernel_write(struct file *, const char *, size_t, loff_t *); 133 133 134 134 /* 135 + * splice.c 136 + */ 137 + extern long do_splice_direct(struct file *in, loff_t *ppos, struct file *out, 138 + loff_t *opos, size_t len, unsigned int flags); 139 + 140 + /* 135 141 * pipe.c 136 142 */ 137 143 extern const struct file_operations pipefifo_fops;
+16 -8
fs/read_write.c
··· 1064 1064 struct fd in, out; 1065 1065 struct inode *in_inode, *out_inode; 1066 1066 loff_t pos; 1067 + loff_t out_pos; 1067 1068 ssize_t retval; 1068 1069 int fl; 1069 1070 ··· 1078 1077 if (!(in.file->f_mode & FMODE_READ)) 1079 1078 goto fput_in; 1080 1079 retval = -ESPIPE; 1081 - if (!ppos) 1082 - ppos = &in.file->f_pos; 1083 - else 1080 + if (!ppos) { 1081 + pos = in.file->f_pos; 1082 + } else { 1083 + pos = *ppos; 1084 1084 if (!(in.file->f_mode & FMODE_PREAD)) 1085 1085 goto fput_in; 1086 - retval = rw_verify_area(READ, in.file, ppos, count); 1086 + } 1087 + retval = rw_verify_area(READ, in.file, &pos, count); 1087 1088 if (retval < 0) 1088 1089 goto fput_in; 1089 1090 count = retval; ··· 1102 1099 retval = -EINVAL; 1103 1100 in_inode = file_inode(in.file); 1104 1101 out_inode = file_inode(out.file); 1105 - retval = rw_verify_area(WRITE, out.file, &out.file->f_pos, count); 1102 + out_pos = out.file->f_pos; 1103 + retval = rw_verify_area(WRITE, out.file, &out_pos, count); 1106 1104 if (retval < 0) 1107 1105 goto fput_out; 1108 1106 count = retval; ··· 1111 1107 if (!max) 1112 1108 max = min(in_inode->i_sb->s_maxbytes, out_inode->i_sb->s_maxbytes); 1113 1109 1114 - pos = *ppos; 1115 1110 if (unlikely(pos + count > max)) { 1116 1111 retval = -EOVERFLOW; 1117 1112 if (pos >= max) ··· 1129 1126 if (in.file->f_flags & O_NONBLOCK) 1130 1127 fl = SPLICE_F_NONBLOCK; 1131 1128 #endif 1132 - retval = do_splice_direct(in.file, ppos, out.file, count, fl); 1129 + retval = do_splice_direct(in.file, &pos, out.file, &out_pos, count, fl); 1133 1130 1134 1131 if (retval > 0) { 1135 1132 add_rchar(current, retval); 1136 1133 add_wchar(current, retval); 1137 1134 fsnotify_access(in.file); 1138 1135 fsnotify_modify(out.file); 1136 + out.file->f_pos = out_pos; 1137 + if (ppos) 1138 + *ppos = pos; 1139 + else 1140 + in.file->f_pos = pos; 1139 1141 } 1140 1142 1141 1143 inc_syscr(current); 1142 1144 inc_syscw(current); 1143 - if (*ppos > max) 1145 + if (pos > max) 1144 1146 
retval = -EOVERFLOW; 1145 1147 1146 1148 fput_out:
+19 -13
fs/splice.c
··· 1274 1274 { 1275 1275 struct file *file = sd->u.file; 1276 1276 1277 - return do_splice_from(pipe, file, &file->f_pos, sd->total_len, 1277 + return do_splice_from(pipe, file, sd->opos, sd->total_len, 1278 1278 sd->flags); 1279 1279 } 1280 1280 ··· 1283 1283 * @in: file to splice from 1284 1284 * @ppos: input file offset 1285 1285 * @out: file to splice to 1286 + * @opos: output file offset 1286 1287 * @len: number of bytes to splice 1287 1288 * @flags: splice modifier flags 1288 1289 * ··· 1295 1294 * 1296 1295 */ 1297 1296 long do_splice_direct(struct file *in, loff_t *ppos, struct file *out, 1298 - size_t len, unsigned int flags) 1297 + loff_t *opos, size_t len, unsigned int flags) 1299 1298 { 1300 1299 struct splice_desc sd = { 1301 1300 .len = len, ··· 1303 1302 .flags = flags, 1304 1303 .pos = *ppos, 1305 1304 .u.file = out, 1305 + .opos = opos, 1306 1306 }; 1307 1307 long ret; 1308 1308 ··· 1327 1325 { 1328 1326 struct pipe_inode_info *ipipe; 1329 1327 struct pipe_inode_info *opipe; 1330 - loff_t offset, *off; 1328 + loff_t offset; 1331 1329 long ret; 1332 1330 1333 1331 ipipe = get_pipe_info(in); ··· 1358 1356 return -EINVAL; 1359 1357 if (copy_from_user(&offset, off_out, sizeof(loff_t))) 1360 1358 return -EFAULT; 1361 - off = &offset; 1362 - } else 1363 - off = &out->f_pos; 1359 + } else { 1360 + offset = out->f_pos; 1361 + } 1364 1362 1365 - ret = do_splice_from(ipipe, out, off, len, flags); 1363 + ret = do_splice_from(ipipe, out, &offset, len, flags); 1366 1364 1367 - if (off_out && copy_to_user(off_out, off, sizeof(loff_t))) 1365 + if (!off_out) 1366 + out->f_pos = offset; 1367 + else if (copy_to_user(off_out, &offset, sizeof(loff_t))) 1368 1368 ret = -EFAULT; 1369 1369 1370 1370 return ret; ··· 1380 1376 return -EINVAL; 1381 1377 if (copy_from_user(&offset, off_in, sizeof(loff_t))) 1382 1378 return -EFAULT; 1383 - off = &offset; 1384 - } else 1385 - off = &in->f_pos; 1379 + } else { 1380 + offset = in->f_pos; 1381 + } 1386 1382 1387 - ret = 
do_splice_to(in, off, opipe, len, flags); 1383 + ret = do_splice_to(in, &offset, opipe, len, flags); 1388 1384 1389 - if (off_in && copy_to_user(off_in, off, sizeof(loff_t))) 1385 + if (!off_in) 1386 + in->f_pos = offset; 1387 + else if (copy_to_user(off_in, &offset, sizeof(loff_t))) 1390 1388 ret = -EFAULT; 1391 1389 1392 1390 return ret;
+39 -15
fs/ubifs/dir.c
··· 349 349 static int ubifs_readdir(struct file *file, void *dirent, filldir_t filldir) 350 350 { 351 351 int err, over = 0; 352 + loff_t pos = file->f_pos; 352 353 struct qstr nm; 353 354 union ubifs_key key; 354 355 struct ubifs_dent_node *dent; 355 356 struct inode *dir = file_inode(file); 356 357 struct ubifs_info *c = dir->i_sb->s_fs_info; 357 358 358 - dbg_gen("dir ino %lu, f_pos %#llx", dir->i_ino, file->f_pos); 359 + dbg_gen("dir ino %lu, f_pos %#llx", dir->i_ino, pos); 359 360 360 - if (file->f_pos > UBIFS_S_KEY_HASH_MASK || file->f_pos == 2) 361 + if (pos > UBIFS_S_KEY_HASH_MASK || pos == 2) 361 362 /* 362 363 * The directory was seek'ed to a senseless position or there 363 364 * are no more entries. 364 365 */ 365 366 return 0; 366 367 368 + if (file->f_version == 0) { 369 + /* 370 + * The file was seek'ed, which means that @file->private_data 371 + * is now invalid. This may also be just the first 372 + * 'ubifs_readdir()' invocation, in which case 373 + * @file->private_data is NULL, and the below code is 374 + * basically a no-op. 375 + */ 376 + kfree(file->private_data); 377 + file->private_data = NULL; 378 + } 379 + 380 + /* 381 + * 'generic_file_llseek()' unconditionally sets @file->f_version to 382 + * zero, and we use this for detecting whether the file was seek'ed. 383 + */ 384 + file->f_version = 1; 385 + 367 386 /* File positions 0 and 1 correspond to "." and ".." 
*/ 368 - if (file->f_pos == 0) { 387 + if (pos == 0) { 369 388 ubifs_assert(!file->private_data); 370 389 over = filldir(dirent, ".", 1, 0, dir->i_ino, DT_DIR); 371 390 if (over) 372 391 return 0; 373 - file->f_pos = 1; 392 + file->f_pos = pos = 1; 374 393 } 375 394 376 - if (file->f_pos == 1) { 395 + if (pos == 1) { 377 396 ubifs_assert(!file->private_data); 378 397 over = filldir(dirent, "..", 2, 1, 379 398 parent_ino(file->f_path.dentry), DT_DIR); ··· 408 389 goto out; 409 390 } 410 391 411 - file->f_pos = key_hash_flash(c, &dent->key); 392 + file->f_pos = pos = key_hash_flash(c, &dent->key); 412 393 file->private_data = dent; 413 394 } 414 395 ··· 416 397 if (!dent) { 417 398 /* 418 399 * The directory was seek'ed to and is now readdir'ed. 419 - * Find the entry corresponding to @file->f_pos or the 420 - * closest one. 400 + * Find the entry corresponding to @pos or the closest one. 421 401 */ 422 - dent_key_init_hash(c, &key, dir->i_ino, file->f_pos); 402 + dent_key_init_hash(c, &key, dir->i_ino, pos); 423 403 nm.name = NULL; 424 404 dent = ubifs_tnc_next_ent(c, &key, &nm); 425 405 if (IS_ERR(dent)) { 426 406 err = PTR_ERR(dent); 427 407 goto out; 428 408 } 429 - file->f_pos = key_hash_flash(c, &dent->key); 409 + file->f_pos = pos = key_hash_flash(c, &dent->key); 430 410 file->private_data = dent; 431 411 } 432 412 ··· 437 419 ubifs_inode(dir)->creat_sqnum); 438 420 439 421 nm.len = le16_to_cpu(dent->nlen); 440 - over = filldir(dirent, dent->name, nm.len, file->f_pos, 422 + over = filldir(dirent, dent->name, nm.len, pos, 441 423 le64_to_cpu(dent->inum), 442 424 vfs_dent_type(dent->type)); 443 425 if (over) ··· 453 435 } 454 436 455 437 kfree(file->private_data); 456 - file->f_pos = key_hash_flash(c, &dent->key); 438 + file->f_pos = pos = key_hash_flash(c, &dent->key); 457 439 file->private_data = dent; 458 440 cond_resched(); 441 + 442 + if (file->f_version == 0) 443 + /* 444 + * The file was seek'ed meanwhile, lets return and start 445 + * reading direntries 
from the new position on the next 446 + * invocation. 447 + */ 448 + return 0; 459 449 } 460 450 461 451 out: ··· 474 448 475 449 kfree(file->private_data); 476 450 file->private_data = NULL; 451 + /* 2 is a special value indicating that there are no more direntries */ 477 452 file->f_pos = 2; 478 453 return 0; 479 454 } 480 455 481 - /* If a directory is seeked, we have to free saved readdir() state */ 482 456 static loff_t ubifs_dir_llseek(struct file *file, loff_t offset, int whence) 483 457 { 484 - kfree(file->private_data); 485 - file->private_data = NULL; 486 458 return generic_file_llseek(file, offset, whence); 487 459 } 488 460
+1
include/acpi/acpi_bus.h
··· 382 382 int acpi_device_get_power(struct acpi_device *device, int *state); 383 383 int acpi_device_set_power(struct acpi_device *device, int state); 384 384 int acpi_bus_init_power(struct acpi_device *device); 385 + int acpi_device_fix_up_power(struct acpi_device *device); 385 386 int acpi_bus_update_power(acpi_handle handle, int *state_p); 386 387 bool acpi_bus_power_manageable(acpi_handle handle); 387 388
+6 -2
include/acpi/acpi_drivers.h
··· 123 123 extern void unregister_dock_notifier(struct notifier_block *nb); 124 124 extern int register_hotplug_dock_device(acpi_handle handle, 125 125 const struct acpi_dock_ops *ops, 126 - void *context); 126 + void *context, 127 + void (*init)(void *), 128 + void (*release)(void *)); 127 129 extern void unregister_hotplug_dock_device(acpi_handle handle); 128 130 #else 129 131 static inline int is_dock_device(acpi_handle handle) ··· 141 139 } 142 140 static inline int register_hotplug_dock_device(acpi_handle handle, 143 141 const struct acpi_dock_ops *ops, 144 - void *context) 142 + void *context, 143 + void (*init)(void *), 144 + void (*release)(void *)) 145 145 { 146 146 return -ENODEV; 147 147 }
+35
include/linux/context_tracking.h
··· 3 3 4 4 #include <linux/sched.h> 5 5 #include <linux/percpu.h> 6 + #include <linux/vtime.h> 6 7 #include <asm/ptrace.h> 7 8 8 9 struct context_tracking { ··· 20 19 } state; 21 20 }; 22 21 22 + static inline void __guest_enter(void) 23 + { 24 + /* 25 + * This is running in ioctl context so we can avoid 26 + * the call to vtime_account() with its unnecessary idle check. 27 + */ 28 + vtime_account_system(current); 29 + current->flags |= PF_VCPU; 30 + } 31 + 32 + static inline void __guest_exit(void) 33 + { 34 + /* 35 + * This is running in ioctl context so we can avoid 36 + * the call to vtime_account() with its unnecessary idle check. 37 + */ 38 + vtime_account_system(current); 39 + current->flags &= ~PF_VCPU; 40 + } 41 + 23 42 #ifdef CONFIG_CONTEXT_TRACKING 24 43 DECLARE_PER_CPU(struct context_tracking, context_tracking); 25 44 ··· 55 34 56 35 extern void user_enter(void); 57 36 extern void user_exit(void); 37 + 38 + extern void guest_enter(void); 39 + extern void guest_exit(void); 58 40 59 41 static inline enum ctx_state exception_enter(void) 60 42 { ··· 81 57 static inline bool context_tracking_in_user(void) { return false; } 82 58 static inline void user_enter(void) { } 83 59 static inline void user_exit(void) { } 60 + 61 + static inline void guest_enter(void) 62 + { 63 + __guest_enter(); 64 + } 65 + 66 + static inline void guest_exit(void) 67 + { 68 + __guest_exit(); 69 + } 70 + 84 71 static inline enum ctx_state exception_enter(void) { return 0; } 85 72 static inline void exception_exit(enum ctx_state prev_ctx) { } 86 73 static inline void context_tracking_task_switch(struct task_struct *prev,
-2
include/linux/fs.h
··· 2414 2414 struct file *, loff_t *, size_t, unsigned int); 2415 2415 extern ssize_t generic_splice_sendpage(struct pipe_inode_info *pipe, 2416 2416 struct file *out, loff_t *, size_t len, unsigned int flags); 2417 - extern long do_splice_direct(struct file *in, loff_t *ppos, struct file *out, 2418 - size_t len, unsigned int flags); 2419 2417 2420 2418 extern void 2421 2419 file_ra_state_init(struct file_ra_state *ra, struct address_space *mapping);
+1 -1
include/linux/if_vlan.h
··· 44 44 * struct vlan_ethhdr - vlan ethernet header (ethhdr + vlan_hdr) 45 45 * @h_dest: destination ethernet address 46 46 * @h_source: source ethernet address 47 - * @h_vlan_proto: ethernet protocol (always 0x8100) 47 + * @h_vlan_proto: ethernet protocol 48 48 * @h_vlan_TCI: priority and VLAN ID 49 49 * @h_vlan_encapsulated_proto: packet type ID or len 50 50 */
+1 -36
include/linux/kvm_host.h
··· 23 23 #include <linux/ratelimit.h> 24 24 #include <linux/err.h> 25 25 #include <linux/irqflags.h> 26 + #include <linux/context_tracking.h> 26 27 #include <asm/signal.h> 27 28 28 29 #include <linux/kvm.h> ··· 760 759 return 0; 761 760 } 762 761 #endif 763 - 764 - static inline void __guest_enter(void) 765 - { 766 - /* 767 - * This is running in ioctl context so we can avoid 768 - * the call to vtime_account() with its unnecessary idle check. 769 - */ 770 - vtime_account_system(current); 771 - current->flags |= PF_VCPU; 772 - } 773 - 774 - static inline void __guest_exit(void) 775 - { 776 - /* 777 - * This is running in ioctl context so we can avoid 778 - * the call to vtime_account() with its unnecessary idle check. 779 - */ 780 - vtime_account_system(current); 781 - current->flags &= ~PF_VCPU; 782 - } 783 - 784 - #ifdef CONFIG_CONTEXT_TRACKING 785 - extern void guest_enter(void); 786 - extern void guest_exit(void); 787 - 788 - #else /* !CONFIG_CONTEXT_TRACKING */ 789 - static inline void guest_enter(void) 790 - { 791 - __guest_enter(); 792 - } 793 - 794 - static inline void guest_exit(void) 795 - { 796 - __guest_exit(); 797 - } 798 - #endif /* !CONFIG_CONTEXT_TRACKING */ 799 762 800 763 static inline void kvm_guest_enter(void) 801 764 {
+1
include/linux/netdevice.h
··· 1759 1759 extern struct net_device *dev_get_by_index(struct net *net, int ifindex); 1760 1760 extern struct net_device *__dev_get_by_index(struct net *net, int ifindex); 1761 1761 extern struct net_device *dev_get_by_index_rcu(struct net *net, int ifindex); 1762 + extern int netdev_get_name(struct net *net, char *name, int ifindex); 1762 1763 extern int dev_restart(struct net_device *dev); 1763 1764 #ifdef CONFIG_NETPOLL_TRAP 1764 1765 extern int netpoll_trap(void);
+1 -2
include/linux/perf_event.h
··· 389 389 /* mmap bits */ 390 390 struct mutex mmap_mutex; 391 391 atomic_t mmap_count; 392 - int mmap_locked; 393 - struct user_struct *mmap_user; 392 + 394 393 struct ring_buffer *rb; 395 394 struct list_head rb_entry; 396 395
+17 -1
include/linux/preempt.h
··· 33 33 preempt_schedule(); \ 34 34 } while (0) 35 35 36 + #ifdef CONFIG_CONTEXT_TRACKING 37 + 38 + void preempt_schedule_context(void); 39 + 40 + #define preempt_check_resched_context() \ 41 + do { \ 42 + if (unlikely(test_thread_flag(TIF_NEED_RESCHED))) \ 43 + preempt_schedule_context(); \ 44 + } while (0) 45 + #else 46 + 47 + #define preempt_check_resched_context() preempt_check_resched() 48 + 49 + #endif /* CONFIG_CONTEXT_TRACKING */ 50 + 36 51 #else /* !CONFIG_PREEMPT */ 37 52 38 53 #define preempt_check_resched() do { } while (0) 54 + #define preempt_check_resched_context() do { } while (0) 39 55 40 56 #endif /* CONFIG_PREEMPT */ 41 57 ··· 104 88 do { \ 105 89 preempt_enable_no_resched_notrace(); \ 106 90 barrier(); \ 107 - preempt_check_resched(); \ 91 + preempt_check_resched_context(); \ 108 92 } while (0) 109 93 110 94 #else /* !CONFIG_PREEMPT_COUNT */
+1
include/linux/skbuff.h
··· 635 635 } 636 636 637 637 extern void kfree_skb(struct sk_buff *skb); 638 + extern void kfree_skb_list(struct sk_buff *segs); 638 639 extern void skb_tx_error(struct sk_buff *skb); 639 640 extern void consume_skb(struct sk_buff *skb); 640 641 extern void __kfree_skb(struct sk_buff *skb);
+1
include/linux/splice.h
··· 35 35 void *data; /* cookie */ 36 36 } u; 37 37 loff_t pos; /* file position */ 38 + loff_t *opos; /* sendfile: output position */ 38 39 size_t num_spliced; /* number of bytes already spliced */ 39 40 bool need_wakeup; /* need to wake up writer */ 40 41 };
+2 -2
include/linux/vtime.h
··· 34 34 } 35 35 extern void vtime_guest_enter(struct task_struct *tsk); 36 36 extern void vtime_guest_exit(struct task_struct *tsk); 37 - extern void vtime_init_idle(struct task_struct *tsk); 37 + extern void vtime_init_idle(struct task_struct *tsk, int cpu); 38 38 #else 39 39 static inline void vtime_account_irq_exit(struct task_struct *tsk) 40 40 { ··· 45 45 static inline void vtime_user_exit(struct task_struct *tsk) { } 46 46 static inline void vtime_guest_enter(struct task_struct *tsk) { } 47 47 static inline void vtime_guest_exit(struct task_struct *tsk) { } 48 - static inline void vtime_init_idle(struct task_struct *tsk) { } 48 + static inline void vtime_init_idle(struct task_struct *tsk, int cpu) { } 49 49 #endif 50 50 51 51 #ifdef CONFIG_IRQ_TIME_ACCOUNTING
+2
include/media/v4l2-mem2mem.h
··· 110 110 struct v4l2_buffer *buf); 111 111 int v4l2_m2m_dqbuf(struct file *file, struct v4l2_m2m_ctx *m2m_ctx, 112 112 struct v4l2_buffer *buf); 113 + int v4l2_m2m_create_bufs(struct file *file, struct v4l2_m2m_ctx *m2m_ctx, 114 + struct v4l2_create_buffers *create); 113 115 114 116 int v4l2_m2m_expbuf(struct file *file, struct v4l2_m2m_ctx *m2m_ctx, 115 117 struct v4l2_exportbuffer *eb);
+1
include/uapi/linux/Kbuild
··· 261 261 header-y += net_tstamp.h 262 262 header-y += netconf.h 263 263 header-y += netdevice.h 264 + header-y += netlink_diag.h 264 265 header-y += netfilter.h 265 266 header-y += netfilter_arp.h 266 267 header-y += netfilter_bridge.h
+40 -1
kernel/context_tracking.c
··· 15 15 */ 16 16 17 17 #include <linux/context_tracking.h> 18 - #include <linux/kvm_host.h> 19 18 #include <linux/rcupdate.h> 20 19 #include <linux/sched.h> 21 20 #include <linux/hardirq.h> ··· 70 71 local_irq_restore(flags); 71 72 } 72 73 74 + #ifdef CONFIG_PREEMPT 75 + /** 76 + * preempt_schedule_context - preempt_schedule called by tracing 77 + * 78 + * The tracing infrastructure uses preempt_enable_notrace to prevent 79 + * recursion and tracing preempt enabling caused by the tracing 80 + * infrastructure itself. But as tracing can happen in areas coming 81 + * from userspace or just about to enter userspace, a preempt enable 82 + * can occur before user_exit() is called. This will cause the scheduler 83 + * to be called when the system is still in usermode. 84 + * 85 + * To prevent this, the preempt_enable_notrace will use this function 86 + * instead of preempt_schedule() to exit user context if needed before 87 + * calling the scheduler. 88 + */ 89 + void __sched notrace preempt_schedule_context(void) 90 + { 91 + struct thread_info *ti = current_thread_info(); 92 + enum ctx_state prev_ctx; 93 + 94 + if (likely(ti->preempt_count || irqs_disabled())) 95 + return; 96 + 97 + /* 98 + * Need to disable preemption in case user_exit() is traced 99 + * and the tracer calls preempt_enable_notrace() causing 100 + * an infinite recursion. 101 + */ 102 + preempt_disable_notrace(); 103 + prev_ctx = exception_enter(); 104 + preempt_enable_no_resched_notrace(); 105 + 106 + preempt_schedule(); 107 + 108 + preempt_disable_notrace(); 109 + exception_exit(prev_ctx); 110 + preempt_enable_notrace(); 111 + } 112 + EXPORT_SYMBOL_GPL(preempt_schedule_context); 113 + #endif /* CONFIG_PREEMPT */ 73 114 74 115 /** 75 116 * user_exit - Inform the context tracking that the CPU is
+17
kernel/cpu/idle.c
··· 5 5 #include <linux/cpu.h> 6 6 #include <linux/tick.h> 7 7 #include <linux/mm.h> 8 + #include <linux/stackprotector.h> 8 9 9 10 #include <asm/tlb.h> 10 11 ··· 59 58 void __weak arch_cpu_idle(void) 60 59 { 61 60 cpu_idle_force_poll = 1; 61 + local_irq_enable(); 62 62 } 63 63 64 64 /* ··· 114 112 115 113 void cpu_startup_entry(enum cpuhp_state state) 116 114 { 115 + /* 116 + * This #ifdef needs to die, but it's too late in the cycle to 117 + * make this generic (arm and sh have never invoked the canary 118 + * init for the non boot cpus!). Will be fixed in 3.11 119 + */ 120 + #ifdef CONFIG_X86 121 + /* 122 + * If we're the non-boot CPU, nothing set the stack canary up 123 + * for us. The boot CPU already has it initialized but no harm 124 + * in doing it again. This is a good place for updating it, as 125 + * we won't ever return from this function (so the invalid 126 + * canaries already on the stack won't ever trigger). 127 + */ 128 + boot_init_stack_canary(); 129 + #endif 117 130 current_set_polling(); 118 131 arch_cpu_idle_prepare(); 119 132 cpu_idle_loop();
+162 -73
kernel/events/core.c
··· 196 196 static void update_context_time(struct perf_event_context *ctx); 197 197 static u64 perf_event_time(struct perf_event *event); 198 198 199 - static void ring_buffer_attach(struct perf_event *event, 200 - struct ring_buffer *rb); 201 - 202 199 void __weak perf_event_print_debug(void) { } 203 200 204 201 extern __weak const char *perf_pmu_name(void) ··· 2915 2918 } 2916 2919 2917 2920 static void ring_buffer_put(struct ring_buffer *rb); 2921 + static void ring_buffer_detach(struct perf_event *event, struct ring_buffer *rb); 2918 2922 2919 2923 static void free_event(struct perf_event *event) 2920 2924 { ··· 2940 2942 if (has_branch_stack(event)) { 2941 2943 static_key_slow_dec_deferred(&perf_sched_events); 2942 2944 /* is system-wide event */ 2943 - if (!(event->attach_state & PERF_ATTACH_TASK)) 2945 + if (!(event->attach_state & PERF_ATTACH_TASK)) { 2944 2946 atomic_dec(&per_cpu(perf_branch_stack_events, 2945 2947 event->cpu)); 2948 + } 2946 2949 } 2947 2950 } 2948 2951 2949 2952 if (event->rb) { 2950 - ring_buffer_put(event->rb); 2951 - event->rb = NULL; 2953 + struct ring_buffer *rb; 2954 + 2955 + /* 2956 + * Can happen when we close an event with re-directed output. 2957 + * 2958 + * Since we have a 0 refcount, perf_mmap_close() will skip 2959 + * over us; possibly making our ring_buffer_put() the last. 2960 + */ 2961 + mutex_lock(&event->mmap_mutex); 2962 + rb = event->rb; 2963 + if (rb) { 2964 + rcu_assign_pointer(event->rb, NULL); 2965 + ring_buffer_detach(event, rb); 2966 + ring_buffer_put(rb); /* could be last */ 2967 + } 2968 + mutex_unlock(&event->mmap_mutex); 2952 2969 } 2953 2970 2954 2971 if (is_cgroup_event(event)) ··· 3201 3188 unsigned int events = POLL_HUP; 3202 3189 3203 3190 /* 3204 - * Race between perf_event_set_output() and perf_poll(): perf_poll() 3205 - * grabs the rb reference but perf_event_set_output() overrides it. 
3206 - * Here is the timeline for two threads T1, T2: 3207 - * t0: T1, rb = rcu_dereference(event->rb) 3208 - * t1: T2, old_rb = event->rb 3209 - * t2: T2, event->rb = new rb 3210 - * t3: T2, ring_buffer_detach(old_rb) 3211 - * t4: T1, ring_buffer_attach(rb1) 3212 - * t5: T1, poll_wait(event->waitq) 3213 - * 3214 - * To avoid this problem, we grab mmap_mutex in perf_poll() 3215 - * thereby ensuring that the assignment of the new ring buffer 3216 - * and the detachment of the old buffer appear atomic to perf_poll() 3191 + * Pin the event->rb by taking event->mmap_mutex; otherwise 3192 + * perf_event_set_output() can swizzle our rb and make us miss wakeups. 3217 3193 */ 3218 3194 mutex_lock(&event->mmap_mutex); 3219 - 3220 - rcu_read_lock(); 3221 - rb = rcu_dereference(event->rb); 3222 - if (rb) { 3223 - ring_buffer_attach(event, rb); 3195 + rb = event->rb; 3196 + if (rb) 3224 3197 events = atomic_xchg(&rb->poll, 0); 3225 - } 3226 - rcu_read_unlock(); 3227 - 3228 3198 mutex_unlock(&event->mmap_mutex); 3229 3199 3230 3200 poll_wait(file, &event->waitq, wait); ··· 3517 3521 return; 3518 3522 3519 3523 spin_lock_irqsave(&rb->event_lock, flags); 3520 - if (!list_empty(&event->rb_entry)) 3521 - goto unlock; 3522 - 3523 - list_add(&event->rb_entry, &rb->event_list); 3524 - unlock: 3524 + if (list_empty(&event->rb_entry)) 3525 + list_add(&event->rb_entry, &rb->event_list); 3525 3526 spin_unlock_irqrestore(&rb->event_lock, flags); 3526 3527 } 3527 3528 3528 - static void ring_buffer_detach(struct perf_event *event, 3529 - struct ring_buffer *rb) 3529 + static void ring_buffer_detach(struct perf_event *event, struct ring_buffer *rb) 3530 3530 { 3531 3531 unsigned long flags; 3532 3532 ··· 3541 3549 3542 3550 rcu_read_lock(); 3543 3551 rb = rcu_dereference(event->rb); 3544 - if (!rb) 3545 - goto unlock; 3546 - 3547 - list_for_each_entry_rcu(event, &rb->event_list, rb_entry) 3548 - wake_up_all(&event->waitq); 3549 - 3550 - unlock: 3552 + if (rb) { 3553 + 
list_for_each_entry_rcu(event, &rb->event_list, rb_entry) 3554 + wake_up_all(&event->waitq); 3555 + } 3551 3556 rcu_read_unlock(); 3552 3557 } 3553 3558 ··· 3573 3584 3574 3585 static void ring_buffer_put(struct ring_buffer *rb) 3575 3586 { 3576 - struct perf_event *event, *n; 3577 - unsigned long flags; 3578 - 3579 3587 if (!atomic_dec_and_test(&rb->refcount)) 3580 3588 return; 3581 3589 3582 - spin_lock_irqsave(&rb->event_lock, flags); 3583 - list_for_each_entry_safe(event, n, &rb->event_list, rb_entry) { 3584 - list_del_init(&event->rb_entry); 3585 - wake_up_all(&event->waitq); 3586 - } 3587 - spin_unlock_irqrestore(&rb->event_lock, flags); 3590 + WARN_ON_ONCE(!list_empty(&rb->event_list)); 3588 3591 3589 3592 call_rcu(&rb->rcu_head, rb_free_rcu); 3590 3593 } ··· 3586 3605 struct perf_event *event = vma->vm_file->private_data; 3587 3606 3588 3607 atomic_inc(&event->mmap_count); 3608 + atomic_inc(&event->rb->mmap_count); 3589 3609 } 3590 3610 3611 + /* 3612 + * A buffer can be mmap()ed multiple times; either directly through the same 3613 + * event, or through other events by use of perf_event_set_output(). 3614 + * 3615 + * In order to undo the VM accounting done by perf_mmap() we need to destroy 3616 + * the buffer here, where we still have a VM context. This means we need 3617 + * to detach all events redirecting to us. 
3618 + */ 3591 3619 static void perf_mmap_close(struct vm_area_struct *vma) 3592 3620 { 3593 3621 struct perf_event *event = vma->vm_file->private_data; 3594 3622 3595 - if (atomic_dec_and_mutex_lock(&event->mmap_count, &event->mmap_mutex)) { 3596 - unsigned long size = perf_data_size(event->rb); 3597 - struct user_struct *user = event->mmap_user; 3598 - struct ring_buffer *rb = event->rb; 3623 + struct ring_buffer *rb = event->rb; 3624 + struct user_struct *mmap_user = rb->mmap_user; 3625 + int mmap_locked = rb->mmap_locked; 3626 + unsigned long size = perf_data_size(rb); 3599 3627 3600 - atomic_long_sub((size >> PAGE_SHIFT) + 1, &user->locked_vm); 3601 - vma->vm_mm->pinned_vm -= event->mmap_locked; 3602 - rcu_assign_pointer(event->rb, NULL); 3603 - ring_buffer_detach(event, rb); 3604 - mutex_unlock(&event->mmap_mutex); 3628 + atomic_dec(&rb->mmap_count); 3605 3629 3606 - ring_buffer_put(rb); 3607 - free_uid(user); 3630 + if (!atomic_dec_and_mutex_lock(&event->mmap_count, &event->mmap_mutex)) 3631 + return; 3632 + 3633 + /* Detach current event from the buffer. */ 3634 + rcu_assign_pointer(event->rb, NULL); 3635 + ring_buffer_detach(event, rb); 3636 + mutex_unlock(&event->mmap_mutex); 3637 + 3638 + /* If there's still other mmap()s of this buffer, we're done. */ 3639 + if (atomic_read(&rb->mmap_count)) { 3640 + ring_buffer_put(rb); /* can't be last */ 3641 + return; 3608 3642 } 3643 + 3644 + /* 3645 + * No other mmap()s, detach from all other events that might redirect 3646 + * into the now unreachable buffer. Somewhat complicated by the 3647 + * fact that rb::event_lock otherwise nests inside mmap_mutex. 3648 + */ 3649 + again: 3650 + rcu_read_lock(); 3651 + list_for_each_entry_rcu(event, &rb->event_list, rb_entry) { 3652 + if (!atomic_long_inc_not_zero(&event->refcount)) { 3653 + /* 3654 + * This event is en-route to free_event() which will 3655 + * detach it and remove it from the list. 
3656 +			 */
3657 +			continue;
3658 +		}
3659 +		rcu_read_unlock();
3660 +
3661 +		mutex_lock(&event->mmap_mutex);
3662 +		/*
3663 +		 * Check we didn't race with perf_event_set_output() which can
3664 +		 * swizzle the rb from under us while we were waiting to
3665 +		 * acquire mmap_mutex.
3666 +		 *
3667 +		 * If we find a different rb; ignore this event, a next
3668 +		 * iteration will no longer find it on the list. We have to
3669 +		 * still restart the iteration to make sure we're not now
3670 +		 * iterating the wrong list.
3671 +		 */
3672 +		if (event->rb == rb) {
3673 +			rcu_assign_pointer(event->rb, NULL);
3674 +			ring_buffer_detach(event, rb);
3675 +			ring_buffer_put(rb); /* can't be last, we still have one */
3676 +		}
3677 +		mutex_unlock(&event->mmap_mutex);
3678 +		put_event(event);
3679 +
3680 +		/*
3681 +		 * Restart the iteration; either we're on the wrong list or
3682 +		 * destroyed its integrity by doing a deletion.
3683 +		 */
3684 +		goto again;
3685 +	}
3686 +	rcu_read_unlock();
3687 +
3688 +	/*
3689 +	 * It could be there's still a few 0-ref events on the list; they'll
3690 +	 * get cleaned up by free_event() -- they'll also still have their
3691 +	 * ref on the rb and will free it whenever they are done with it.
3692 +	 *
3693 +	 * Aside from that, this buffer is 'fully' detached and unmapped,
3694 +	 * undo the VM accounting.
3695 +	 */
3696 +
3697 +	atomic_long_sub((size >> PAGE_SHIFT) + 1, &mmap_user->locked_vm);
3698 +	vma->vm_mm->pinned_vm -= mmap_locked;
3699 +	free_uid(mmap_user);
3700 +
3701 +	ring_buffer_put(rb); /* could be last */
3609 3702 }
3610 3703 
3611 3704 static const struct vm_operations_struct perf_mmap_vmops = {
···
3729 3674 		return -EINVAL;
3730 3675 
3731 3676 	WARN_ON_ONCE(event->ctx->parent_ctx);
3677 +again:
3732 3678 	mutex_lock(&event->mmap_mutex);
3733 3679 	if (event->rb) {
3734 -		if (event->rb->nr_pages == nr_pages)
3735 -			atomic_inc(&event->rb->refcount);
3736 -		else
3680 +		if (event->rb->nr_pages != nr_pages) {
3737 3681 			ret = -EINVAL;
3682 +			goto unlock;
3683 +		}
3684 +
3685 +		if (!atomic_inc_not_zero(&event->rb->mmap_count)) {
3686 +			/*
3687 +			 * Raced against perf_mmap_close() through
3688 +			 * perf_event_set_output(). Try again, hope for better
3689 +			 * luck.
3690 +			 */
3691 +			mutex_unlock(&event->mmap_mutex);
3692 +			goto again;
3693 +		}
3694 +
3738 3695 		goto unlock;
3739 3696 	}
···
3787 3720 		ret = -ENOMEM;
3788 3721 		goto unlock;
3789 3722 	}
3790 -	rcu_assign_pointer(event->rb, rb);
3723 +
3724 +	atomic_set(&rb->mmap_count, 1);
3725 +	rb->mmap_locked = extra;
3726 +	rb->mmap_user = get_current_user();
3791 3727 
3792 3728 	atomic_long_add(user_extra, &user->locked_vm);
3793 -	event->mmap_locked = extra;
3794 -	event->mmap_user = get_current_user();
3795 -	vma->vm_mm->pinned_vm += event->mmap_locked;
3729 +	vma->vm_mm->pinned_vm += extra;
3730 +
3731 +	ring_buffer_attach(event, rb);
3732 +	rcu_assign_pointer(event->rb, rb);
3796 3733 
3797 3734 	perf_event_update_userpage(event);
3798 3735 
···
3805 3734 	atomic_inc(&event->mmap_count);
3806 3735 	mutex_unlock(&event->mmap_mutex);
3807 3736 
3808 -	vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
3737 +	/*
3738 +	 * Since pinned accounting is per vm we cannot allow fork() to copy our
3739 +	 * vma.
3740 +	 */
3741 +	vma->vm_flags |= VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP;
3809 3742 	vma->vm_ops = &perf_mmap_vmops;
3810 3743 
3811 3744 	return ret;
···
6487 6412 	if (atomic_read(&event->mmap_count))
6488 6413 		goto unlock;
6489 6414 
6415 +	old_rb = event->rb;
6416 +
6490 6417 	if (output_event) {
6491 6418 		/* get the rb we want to redirect to */
6492 6419 		rb = ring_buffer_get(output_event);
···
6496 6419 			goto unlock;
6497 6420 	}
6498 6421 
6499 -	old_rb = event->rb;
6500 -	rcu_assign_pointer(event->rb, rb);
6501 6422 	if (old_rb)
6502 6423 		ring_buffer_detach(event, old_rb);
6424 +
6425 +	if (rb)
6426 +		ring_buffer_attach(event, rb);
6427 +
6428 +	rcu_assign_pointer(event->rb, rb);
6429 +
6430 +	if (old_rb) {
6431 +		ring_buffer_put(old_rb);
6432 +		/*
6433 +		 * Since we detached before setting the new rb, so that we
6434 +		 * could attach the new rb, we could have missed a wakeup.
6435 +		 * Provide it now.
6436 +		 */
6437 +		wake_up_all(&event->waitq);
6438 +	}
6439 +
6503 6440 	ret = 0;
6504 6441 unlock:
6505 6442 	mutex_unlock(&event->mmap_mutex);
6506 6443 
6507 -	if (old_rb)
6508 -		ring_buffer_put(old_rb);
6509 6444 out:
6510 6445 	return ret;
6511 6446 }
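The perf_mmap() hunk above gates ring-buffer reuse on `atomic_inc_not_zero(&event->rb->mmap_count)` and retries via `goto again` when it loses the race against perf_mmap_close(). The following is a hedged user-space sketch of that take-a-reference-only-if-still-alive idiom using C11 atomics; `inc_not_zero` is a hypothetical stand-in, not the kernel's implementation:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Sketch of the atomic_inc_not_zero() pattern: a reference is taken only
 * while the count is still non-zero, so a concurrent "last put" can never
 * be resurrected.  On failure the caller retries or bails out, like the
 * `goto again` in perf_mmap(). */
static bool inc_not_zero(atomic_int *refs)
{
	int old = atomic_load(refs);

	while (old != 0) {
		/* on CAS failure, `old` is reloaded with the current value */
		if (atomic_compare_exchange_weak(refs, &old, old + 1))
			return true;	/* reference taken */
	}
	return false;	/* object is dying; caller must not touch it */
}
```

A caller that gets `false` must treat the object as gone, exactly as perf_mmap() drops the mutex and restarts.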
+3 -3
kernel/events/hw_breakpoint.c
···
120 120 	list_for_each_entry(iter, &bp_task_head, hw.bp_list) {
121 121 		if (iter->hw.bp_target == tsk &&
122 122 		    find_slot_idx(iter) == type &&
123 -		    cpu == iter->cpu)
123 +		    (iter->cpu < 0 || cpu == iter->cpu))
124 124 			count += hw_breakpoint_weight(iter);
125 125 	}
126 126 
···
149 149 		return;
150 150 	}
151 151 
152 -	for_each_online_cpu(cpu) {
152 +	for_each_possible_cpu(cpu) {
153 153 		unsigned int nr;
154 154 
155 155 		nr = per_cpu(nr_cpu_bp_pinned[type], cpu);
···
235 235 	if (cpu >= 0) {
236 236 		toggle_bp_task_slot(bp, cpu, enable, type, weight);
237 237 	} else {
238 -		for_each_online_cpu(cpu)
238 +		for_each_possible_cpu(cpu)
239 239 			toggle_bp_task_slot(bp, cpu, enable, type, weight);
240 240 	}
241 241 
+4
kernel/events/internal.h
···
31 31 	spinlock_t			event_lock;
32 32 	struct list_head		event_list;
33 33 
34 +	atomic_t			mmap_count;
35 +	unsigned long			mmap_locked;
36 +	struct user_struct		*mmap_user;
37 +
34 38 	struct perf_event_mmap_page	*user_page;
35 39 	void				*data_pages[0];
36 40 };
+20 -10
kernel/kprobes.c
···
467 467 /* Optimization staging list, protected by kprobe_mutex */
468 468 static LIST_HEAD(optimizing_list);
469 469 static LIST_HEAD(unoptimizing_list);
470 +static LIST_HEAD(freeing_list);
470 471 
471 472 static void kprobe_optimizer(struct work_struct *work);
472 473 static DECLARE_DELAYED_WORK(optimizing_work, kprobe_optimizer);
···
505 504  * Unoptimize (replace a jump with a breakpoint and remove the breakpoint
506 505  * if need) kprobes listed on unoptimizing_list.
507 506  */
508 -static __kprobes void do_unoptimize_kprobes(struct list_head *free_list)
507 +static __kprobes void do_unoptimize_kprobes(void)
509 508 {
510 509 	struct optimized_kprobe *op, *tmp;
511 510 
···
516 515 	/* Ditto to do_optimize_kprobes */
517 516 	get_online_cpus();
518 517 	mutex_lock(&text_mutex);
519 -	arch_unoptimize_kprobes(&unoptimizing_list, free_list);
518 +	arch_unoptimize_kprobes(&unoptimizing_list, &freeing_list);
520 519 	/* Loop free_list for disarming */
521 -	list_for_each_entry_safe(op, tmp, free_list, list) {
520 +	list_for_each_entry_safe(op, tmp, &freeing_list, list) {
522 521 		/* Disarm probes if marked disabled */
523 522 		if (kprobe_disabled(&op->kp))
524 523 			arch_disarm_kprobe(&op->kp);
···
537 536 }
538 537 
539 538 /* Reclaim all kprobes on the free_list */
540 -static __kprobes void do_free_cleaned_kprobes(struct list_head *free_list)
539 +static __kprobes void do_free_cleaned_kprobes(void)
541 540 {
542 541 	struct optimized_kprobe *op, *tmp;
543 542 
544 -	list_for_each_entry_safe(op, tmp, free_list, list) {
543 +	list_for_each_entry_safe(op, tmp, &freeing_list, list) {
545 544 		BUG_ON(!kprobe_unused(&op->kp));
546 545 		list_del_init(&op->list);
547 546 		free_aggr_kprobe(&op->kp);
···
557 556 /* Kprobe jump optimizer */
558 557 static __kprobes void kprobe_optimizer(struct work_struct *work)
559 558 {
560 -	LIST_HEAD(free_list);
561 -
562 559 	mutex_lock(&kprobe_mutex);
563 560 	/* Lock modules while optimizing kprobes */
564 561 	mutex_lock(&module_mutex);
···
565 566 	 * Step 1: Unoptimize kprobes and collect cleaned (unused and disarmed)
566 567 	 * kprobes before waiting for quiesence period.
567 568 	 */
568 -	do_unoptimize_kprobes(&free_list);
569 +	do_unoptimize_kprobes();
569 570 
570 571 	/*
571 572 	 * Step 2: Wait for quiesence period to ensure all running interrupts
···
580 581 	do_optimize_kprobes();
581 582 
582 583 	/* Step 4: Free cleaned kprobes after quiesence period */
583 -	do_free_cleaned_kprobes(&free_list);
584 +	do_free_cleaned_kprobes();
584 585 
585 586 	mutex_unlock(&module_mutex);
586 587 	mutex_unlock(&kprobe_mutex);
···
722 723 	if (!list_empty(&op->list))
723 724 		/* Dequeue from the (un)optimization queue */
724 725 		list_del_init(&op->list);
725 -
726 726 	op->kp.flags &= ~KPROBE_FLAG_OPTIMIZED;
727 +
728 +	if (kprobe_unused(p)) {
729 +		/* Enqueue if it is unused */
730 +		list_add(&op->list, &freeing_list);
731 +		/*
732 +		 * Remove unused probes from the hash list. After waiting
733 +		 * for synchronization, this probe is reclaimed.
734 +		 * (reclaiming is done by do_free_cleaned_kprobes().)
735 +		 */
736 +		hlist_del_rcu(&op->kp.hlist);
737 +	}
738 +
727 739 	/* Don't touch the code, because it is already freed. */
728 740 	arch_remove_optimized_kprobe(op);
729 741 }
+11 -9
kernel/ptrace.c
···
665 665 		if (unlikely(is_compat_task())) {
666 666 			compat_siginfo_t __user *uinfo = compat_ptr(data);
667 667 
668 -			ret = copy_siginfo_to_user32(uinfo, &info);
669 -			ret |= __put_user(info.si_code, &uinfo->si_code);
668 +			if (copy_siginfo_to_user32(uinfo, &info) ||
669 +			    __put_user(info.si_code, &uinfo->si_code)) {
670 +				ret = -EFAULT;
671 +				break;
672 +			}
673 +
670 674 		} else
671 675 #endif
672 676 		{
673 677 			siginfo_t __user *uinfo = (siginfo_t __user *) data;
674 678 
675 -			ret = copy_siginfo_to_user(uinfo, &info);
676 -			ret |= __put_user(info.si_code, &uinfo->si_code);
677 -		}
678 -
679 -		if (ret) {
680 -			ret = -EFAULT;
681 -			break;
679 +			if (copy_siginfo_to_user(uinfo, &info) ||
680 +			    __put_user(info.si_code, &uinfo->si_code)) {
681 +				ret = -EFAULT;
682 +				break;
683 +			}
682 684 		}
683 685 
684 686 		data += sizeof(siginfo_t);
+11 -10
kernel/range.c
···
4 4 #include <linux/kernel.h>
5 5 #include <linux/init.h>
6 6 #include <linux/sort.h>
7 -
7 +#include <linux/string.h>
8 8 #include <linux/range.h>
9 9 
10 10 int add_range(struct range *range, int az, int nr_range, u64 start, u64 end)
···
32 32 	if (start >= end)
33 33 		return nr_range;
34 34 
35 -	/* Try to merge it with old one: */
35 +	/* get new start/end: */
36 36 	for (i = 0; i < nr_range; i++) {
37 -		u64 final_start, final_end;
38 37 		u64 common_start, common_end;
39 38 
40 39 		if (!range[i].end)
···
44 45 		if (common_start > common_end)
45 46 			continue;
46 47 
47 -		final_start = min(range[i].start, start);
48 -		final_end = max(range[i].end, end);
48 +		/* new start/end, will add it back at last */
49 +		start = min(range[i].start, start);
50 +		end = max(range[i].end, end);
49 51 
50 -		/* clear it and add it back for further merge */
51 -		range[i].start = 0;
52 -		range[i].end = 0;
53 -		return add_range_with_merge(range, az, nr_range,
54 -					    final_start, final_end);
52 +		memmove(&range[i], &range[i + 1],
53 +			(nr_range - (i + 1)) * sizeof(range[i]));
54 +		range[nr_range - 1].start = 0;
55 +		range[nr_range - 1].end = 0;
56 +		nr_range--;
57 +		i--;
55 58 	}
56 59 
57 60 	/* Need to add it: */
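The range.c hunk above replaces a recursive merge with an in-place one: each overlapping slot is absorbed into the running start/end and the array is compacted with memmove(). A minimal user-space sketch of that loop, under stated assumptions (`merge_ranges` is a hypothetical stand-in for `add_range_with_merge`; all `nr` slots are live and the array has room for the merged result):

```c
#include <string.h>

struct range { unsigned long long start, end; };

/* Absorb every slot overlapping [start, end] into one merged range,
 * compacting the array as slots are consumed, then append the result.
 * Returns the new number of ranges. */
static int merge_ranges(struct range *r, int nr,
			unsigned long long start, unsigned long long end)
{
	int i;

	for (i = 0; i < nr; i++) {
		unsigned long long cs = start > r[i].start ? start : r[i].start;
		unsigned long long ce = end < r[i].end ? end : r[i].end;

		if (cs > ce)		/* no overlap with this slot */
			continue;

		/* grow the candidate range, drop the absorbed slot */
		start = start < r[i].start ? start : r[i].start;
		end = end > r[i].end ? end : r[i].end;
		memmove(&r[i], &r[i + 1], (nr - (i + 1)) * sizeof(r[i]));
		nr--;
		i--;		/* re-examine the slot that moved into i */
	}
	r[nr].start = start;
	r[nr].end = end;
	return nr + 1;
}
```

The `i--` after the memmove mirrors the patch: the slot shifted into position `i` has not been tested against the (now larger) candidate range yet.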
+18 -5
kernel/sched/core.c
···
633 633 static inline bool got_nohz_idle_kick(void)
634 634 {
635 635 	int cpu = smp_processor_id();
636 -	return idle_cpu(cpu) && test_bit(NOHZ_BALANCE_KICK, nohz_flags(cpu));
636 +
637 +	if (!test_bit(NOHZ_BALANCE_KICK, nohz_flags(cpu)))
638 +		return false;
639 +
640 +	if (idle_cpu(cpu) && !need_resched())
641 +		return true;
642 +
643 +	/*
644 +	 * We can't run Idle Load Balance on this CPU for this time so we
645 +	 * cancel it and clear NOHZ_BALANCE_KICK
646 +	 */
647 +	clear_bit(NOHZ_BALANCE_KICK, nohz_flags(cpu));
648 +	return false;
637 649 }
638 650 
639 651 #else /* CONFIG_NO_HZ_COMMON */
···
1405 1393 
1406 1394 void scheduler_ipi(void)
1407 1395 {
1408 -	if (llist_empty(&this_rq()->wake_list) && !got_nohz_idle_kick()
1409 -			&& !tick_nohz_full_cpu(smp_processor_id()))
1396 +	if (llist_empty(&this_rq()->wake_list)
1397 +			&& !tick_nohz_full_cpu(smp_processor_id())
1398 +			&& !got_nohz_idle_kick())
1410 1399 		return;
1411 1400 
1412 1401 	/*
···
1430 1417 	/*
1431 1418 	 * Check if someone kicked us for doing the nohz idle load balance.
1432 1419 	 */
1433 -	if (unlikely(got_nohz_idle_kick() && !need_resched())) {
1420 +	if (unlikely(got_nohz_idle_kick())) {
1434 1421 		this_rq()->idle_balance = 1;
1435 1422 		raise_softirq_irqoff(SCHED_SOFTIRQ);
1436 1423 	}
···
4758 4745 	 */
4759 4746 	idle->sched_class = &idle_sched_class;
4760 4747 	ftrace_graph_init_idle_task(idle, cpu);
4761 -	vtime_init_idle(idle);
4748 +	vtime_init_idle(idle, cpu);
4762 4749 #if defined(CONFIG_SMP)
4763 4750 	sprintf(idle->comm, "%s/%d", INIT_TASK_COMM, cpu);
4764 4751 #endif
+3 -3
kernel/sched/cputime.c
···
747 747 
748 748 	write_seqlock(&current->vtime_seqlock);
749 749 	current->vtime_snap_whence = VTIME_SYS;
750 -	current->vtime_snap = sched_clock();
750 +	current->vtime_snap = sched_clock_cpu(smp_processor_id());
751 751 	write_sequnlock(&current->vtime_seqlock);
752 752 }
753 753 
754 -void vtime_init_idle(struct task_struct *t)
754 +void vtime_init_idle(struct task_struct *t, int cpu)
755 755 {
756 756 	unsigned long flags;
757 757 
758 758 	write_seqlock_irqsave(&t->vtime_seqlock, flags);
759 759 	t->vtime_snap_whence = VTIME_SYS;
760 -	t->vtime_snap = sched_clock();
760 +	t->vtime_snap = sched_clock_cpu(cpu);
761 761 	write_sequnlock_irqrestore(&t->vtime_seqlock, flags);
762 762 }
763 763 
+5 -6
kernel/time/tick-broadcast.c
···
599 599 	} else {
600 600 		if (cpumask_test_and_clear_cpu(cpu, tick_broadcast_oneshot_mask)) {
601 601 			clockevents_set_mode(dev, CLOCK_EVT_MODE_ONESHOT);
602 -			if (dev->next_event.tv64 == KTIME_MAX)
603 -				goto out;
604 602 			/*
605 603 			 * The cpu which was handling the broadcast
606 604 			 * timer marked this cpu in the broadcast
···
612 614 					       tick_broadcast_pending_mask))
613 615 				goto out;
614 616 
617 +			/*
618 +			 * Bail out if there is no next event.
619 +			 */
620 +			if (dev->next_event.tv64 == KTIME_MAX)
621 +				goto out;
615 622 			/*
616 623 			 * If the pending bit is not set, then we are
617 624 			 * either the CPU handling the broadcast
···
700 697 	int was_periodic = bc->mode == CLOCK_EVT_MODE_PERIODIC;
701 698 
702 699 	bc->event_handler = tick_handle_oneshot_broadcast;
703 -
704 -	/* Take the do_timer update */
705 -	if (!tick_nohz_full_cpu(cpu))
706 -		tick_do_timer_cpu = cpu;
707 700 
708 701 	/*
709 702 	 * We must be careful here. There might be other CPUs
+1 -1
kernel/time/tick-sched.c
···
306 306 	 * we can't safely shutdown that CPU.
307 307 	 */
308 308 	if (have_nohz_full_mask && tick_do_timer_cpu == cpu)
309 -		return -EINVAL;
309 +		return NOTIFY_BAD;
310 310 	break;
311 311 	}
312 312 	return NOTIFY_OK;
+3 -1
mm/slab_common.c
···
373 373 {
374 374 	int index;
375 375 
376 -	if (WARN_ON_ONCE(size > KMALLOC_MAX_SIZE))
376 +	if (size > KMALLOC_MAX_SIZE) {
377 +		WARN_ON_ONCE(!(flags & __GFP_NOWARN));
377 378 		return NULL;
379 +	}
378 380 
379 381 	if (size <= 192) {
380 382 		if (!size)
+34
net/core/dev.c
···
800 800 EXPORT_SYMBOL(dev_get_by_index);
801 801 
802 802 /**
803 + *	netdev_get_name - get a netdevice name, knowing its ifindex.
804 + *	@net: network namespace
805 + *	@name: a pointer to the buffer where the name will be stored.
806 + *	@ifindex: the ifindex of the interface to get the name from.
807 + *
808 + *	The use of raw_seqcount_begin() and cond_resched() before
809 + *	retrying is required as we want to give the writers a chance
810 + *	to complete when CONFIG_PREEMPT is not set.
811 + */
812 +int netdev_get_name(struct net *net, char *name, int ifindex)
813 +{
814 +	struct net_device *dev;
815 +	unsigned int seq;
816 +
817 +retry:
818 +	seq = raw_seqcount_begin(&devnet_rename_seq);
819 +	rcu_read_lock();
820 +	dev = dev_get_by_index_rcu(net, ifindex);
821 +	if (!dev) {
822 +		rcu_read_unlock();
823 +		return -ENODEV;
824 +	}
825 +
826 +	strcpy(name, dev->name);
827 +	rcu_read_unlock();
828 +	if (read_seqcount_retry(&devnet_rename_seq, seq)) {
829 +		cond_resched();
830 +		goto retry;
831 +	}
832 +
833 +	return 0;
834 +}
835 +
836 +/**
803 837  *	dev_getbyhwaddr_rcu - find a device by its hardware address
804 838  *	@net: the applicable net namespace
805 839  *	@type: media type of device
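The new netdev_get_name() above is the read side of a seqcount: copy the name, then retry if a rename raced with the copy. A hedged user-space sketch of that pattern using C11 atomics follows; the names here are made up for illustration (the kernel's counter is `devnet_rename_seq` and its retry primitive is `read_seqcount_retry()`):

```c
#include <stdatomic.h>
#include <string.h>

/* The counter is odd while a write is in progress and even when stable;
 * a reader retries until it observes the same even value before and
 * after its copy. */
struct seq_name {
	atomic_uint seq;	/* even = stable, odd = write in progress */
	char name[16];
};

static void write_name(struct seq_name *s, const char *newname)
{
	atomic_fetch_add(&s->seq, 1);		/* now odd: readers retry */
	strncpy(s->name, newname, sizeof(s->name) - 1);
	s->name[sizeof(s->name) - 1] = '\0';
	atomic_fetch_add(&s->seq, 1);		/* even again: update visible */
}

static void read_name(struct seq_name *s, char *out)
{
	unsigned int start;

	do {
		while ((start = atomic_load(&s->seq)) & 1)
			;	/* writer active; the kernel cond_resched()s here */
		memcpy(out, s->name, sizeof(s->name));
	} while (atomic_load(&s->seq) != start);
}
```

The busy-wait is exactly the spot where the patch inserts cond_resched(): on a non-preemptible kernel a spinning reader could otherwise starve the renaming writer.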
+4 -15
net/core/dev_ioctl.c
···
19 19 
20 20 static int dev_ifname(struct net *net, struct ifreq __user *arg)
21 21 {
22 -	struct net_device *dev;
23 22 	struct ifreq ifr;
24 -	unsigned seq;
23 +	int error;
25 24 
26 25 	/*
27 26 	 *	Fetch the caller's info block.
···
29 30 	if (copy_from_user(&ifr, arg, sizeof(struct ifreq)))
30 31 		return -EFAULT;
31 32 
32 -retry:
33 -	seq = read_seqcount_begin(&devnet_rename_seq);
34 -	rcu_read_lock();
35 -	dev = dev_get_by_index_rcu(net, ifr.ifr_ifindex);
36 -	if (!dev) {
37 -		rcu_read_unlock();
38 -		return -ENODEV;
39 -	}
40 -
41 -	strcpy(ifr.ifr_name, dev->name);
42 -	rcu_read_unlock();
43 -	if (read_seqcount_retry(&devnet_rename_seq, seq))
44 -		goto retry;
33 +	error = netdev_get_name(net, ifr.ifr_name, ifr.ifr_ifindex);
34 +	if (error)
35 +		return error;
45 36 
46 37 	if (copy_to_user(arg, &ifr, sizeof(struct ifreq)))
47 38 		return -EFAULT;
+12 -8
net/core/skbuff.c
···
477 477 
478 478 static void skb_drop_list(struct sk_buff **listp)
479 479 {
480 -	struct sk_buff *list = *listp;
481 -
480 +	kfree_skb_list(*listp);
482 481 	*listp = NULL;
483 -
484 -	do {
485 -		struct sk_buff *this = list;
486 -		list = list->next;
487 -		kfree_skb(this);
488 -	} while (list);
489 482 }
490 483 
491 484 static inline void skb_drop_fraglist(struct sk_buff *skb)
···
637 644 	__kfree_skb(skb);
638 645 }
639 646 EXPORT_SYMBOL(kfree_skb);
647 +
648 +void kfree_skb_list(struct sk_buff *segs)
649 +{
650 +	while (segs) {
651 +		struct sk_buff *next = segs->next;
652 +
653 +		kfree_skb(segs);
654 +		segs = next;
655 +	}
656 +}
657 +EXPORT_SYMBOL(kfree_skb_list);
640 658 
641 659 /**
642 660  *	skb_tx_error - report an sk_buff xmit error
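The new kfree_skb_list() helper above centralizes the walk-and-free of a linked skb chain. A hedged user-space sketch of the same idiom, on a hypothetical `struct node` rather than an sk_buff: `->next` must be saved before the current node is freed, because the link dies with the node. The count return makes the walk observable for testing; the kernel helper returns void.

```c
#include <stdlib.h>

struct node { struct node *next; };

/* Free a singly linked list, returning how many nodes were freed. */
static int free_list(struct node *head)
{
	int freed = 0;

	while (head) {
		struct node *next = head->next;	/* save before free */
		free(head);
		head = next;
		freed++;
	}
	return freed;
}
```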
+2 -15
net/core/sock.c
···
573 573 	int ret = -ENOPROTOOPT;
574 574 #ifdef CONFIG_NETDEVICES
575 575 	struct net *net = sock_net(sk);
576 -	struct net_device *dev;
577 576 	char devname[IFNAMSIZ];
578 -	unsigned seq;
579 577 
580 578 	if (sk->sk_bound_dev_if == 0) {
581 579 		len = 0;
···
584 586 	if (len < IFNAMSIZ)
585 587 		goto out;
586 588 
587 -retry:
588 -	seq = read_seqcount_begin(&devnet_rename_seq);
589 -	rcu_read_lock();
590 -	dev = dev_get_by_index_rcu(net, sk->sk_bound_dev_if);
591 -	ret = -ENODEV;
592 -	if (!dev) {
593 -		rcu_read_unlock();
589 +	ret = netdev_get_name(net, devname, sk->sk_bound_dev_if);
590 +	if (ret)
594 591 		goto out;
595 -	}
596 -
597 -	strcpy(devname, dev->name);
598 -	rcu_read_unlock();
599 -	if (read_seqcount_retry(&devnet_rename_seq, seq))
600 -		goto retry;
601 592 
602 593 	len = strlen(devname) + 1;
+1 -1
net/ipv4/gre_offload.c
···
87 87 
88 88 		err = __skb_linearize(skb);
89 89 		if (err) {
90 -			kfree_skb(segs);
90 +			kfree_skb_list(segs);
91 91 			segs = ERR_PTR(err);
92 92 			goto out;
93 93 		}
+8 -4
net/ipv4/netfilter/ipt_ULOG.c
···
125 125 /* timer function to flush queue in flushtimeout time */
126 126 static void ulog_timer(unsigned long data)
127 127 {
128 +	unsigned int groupnum = *((unsigned int *)data);
128 129 	struct ulog_net *ulog = container_of((void *)data,
129 130 					     struct ulog_net,
130 -					     nlgroup[*(unsigned int *)data]);
131 +					     nlgroup[groupnum]);
131 132 	pr_debug("timer function called, calling ulog_send\n");
132 133 
133 134 	/* lock to protect against somebody modifying our structure
134 135 	 * from ipt_ulog_target at the same time */
135 136 	spin_lock_bh(&ulog->lock);
136 -	ulog_send(ulog, data);
137 +	ulog_send(ulog, groupnum);
137 138 	spin_unlock_bh(&ulog->lock);
138 139 }
139 140 
···
414 413 
415 414 	spin_lock_init(&ulog->lock);
416 415 	/* initialize ulog_buffers */
417 -	for (i = 0; i < ULOG_MAXNLGROUPS; i++)
418 -		setup_timer(&ulog->ulog_buffers[i].timer, ulog_timer, i);
416 +	for (i = 0; i < ULOG_MAXNLGROUPS; i++) {
417 +		ulog->nlgroup[i] = i;
418 +		setup_timer(&ulog->ulog_buffers[i].timer, ulog_timer,
419 +			    (unsigned long)&ulog->nlgroup[i]);
420 +	}
419 421 
420 422 	ulog->nflognl = netlink_kernel_create(net, NETLINK_NFLOG, &cfg);
421 423 	if (!ulog->nflognl)
+2 -2
net/ipv4/tcp_ipv4.c
···
986 986 	struct tcp_sock *tp = tcp_sk(sk);
987 987 	struct tcp_md5sig_info *md5sig;
988 988 
989 -	key = tcp_md5_do_lookup(sk, (union tcp_md5_addr *)&addr, AF_INET);
989 +	key = tcp_md5_do_lookup(sk, addr, family);
990 990 	if (key) {
991 991 		/* Pre-existing entry - just update that one. */
992 992 		memcpy(key->key, newkey, newkeylen);
···
1029 1029 {
1030 1030 	struct tcp_md5sig_key *key;
1031 1031 
1032 -	key = tcp_md5_do_lookup(sk, (union tcp_md5_addr *)&addr, AF_INET);
1032 +	key = tcp_md5_do_lookup(sk, addr, family);
1033 1033 	if (!key)
1034 1034 		return -ENOENT;
1035 1035 	hlist_del_rcu(&key->node);
+7 -5
net/ipv6/addrconf.c
···
2656 2656 		if (sp_ifa->flags & (IFA_F_DADFAILED | IFA_F_TENTATIVE))
2657 2657 			continue;
2658 2658 
2659 +		if (sp_ifa->rt)
2660 +			continue;
2661 +
2659 2662 		sp_rt = addrconf_dst_alloc(idev, &sp_ifa->addr, 0);
2660 2663 
2661 2664 		/* Failure cases are ignored */
···
4343 4340 	struct inet6_ifaddr *ifp;
4344 4341 	struct net_device *dev = idev->dev;
4345 4342 	bool update_rs = false;
4343 +	struct in6_addr ll_addr;
4346 4344 
4347 4345 	if (token == NULL)
4348 4346 		return -EINVAL;
···
4363 4359 
4364 4360 	write_unlock_bh(&idev->lock);
4365 4361 
4366 -	if (!idev->dead && (idev->if_flags & IF_READY)) {
4367 -		struct in6_addr ll_addr;
4368 -
4369 -		ipv6_get_lladdr(dev, &ll_addr, IFA_F_TENTATIVE |
4370 -				IFA_F_OPTIMISTIC);
4362 +	if (!idev->dead && (idev->if_flags & IF_READY) &&
4363 +	    !ipv6_get_lladdr(dev, &ll_addr, IFA_F_TENTATIVE |
4364 +			     IFA_F_OPTIMISTIC)) {
4371 4365 
4372 4366 		/* If we're not ready, then normal ifup will take care
4373 4367 		 * of this. Otherwise, we need to request our rs here.
+9 -4
net/ipv6/ip6_output.c
···
381 381 	 *	cannot be fragmented, because there is no warranty
382 382 	 *	that different fragments will go along one path. --ANK
383 383 	 */
384 -	if (opt->ra) {
385 -		u8 *ptr = skb_network_header(skb) + opt->ra;
386 -		if (ip6_call_ra_chain(skb, (ptr[2]<<8) + ptr[3]))
384 +	if (unlikely(opt->flags & IP6SKB_ROUTERALERT)) {
385 +		if (ip6_call_ra_chain(skb, ntohs(opt->ra)))
387 386 			return 0;
388 387 	}
389 388 
···
821 822 				  const struct flowi6 *fl6)
822 823 {
823 824 	struct ipv6_pinfo *np = inet6_sk(sk);
824 -	struct rt6_info *rt = (struct rt6_info *)dst;
825 +	struct rt6_info *rt;
825 826 
826 827 	if (!dst)
827 828 		goto out;
828 829 
830 +	if (dst->ops->family != AF_INET6) {
831 +		dst_release(dst);
832 +		return NULL;
833 +	}
834 +
835 +	rt = (struct rt6_info *)dst;
829 836 	/* Yes, checking route validity in not connected
830 837 	 * case is not very simple. Take into account,
831 838 	 * that we do not support routing by source, TOS,
+1 -1
net/ipv6/netfilter/nf_conntrack_l3proto_ipv6.c
···
204 204 	if (ct != NULL && !nf_ct_is_untracked(ct)) {
205 205 		help = nfct_help(ct);
206 206 		if ((help && help->helper) || !nf_ct_is_confirmed(ct)) {
207 -			nf_conntrack_get_reasm(skb);
207 +			nf_conntrack_get_reasm(reasm);
208 208 			NF_HOOK_THRESH(NFPROTO_IPV6, hooknum, reasm,
209 209 				       (struct net_device *)in,
210 210 				       (struct net_device *)out,
+2
net/key/af_key.c
···
1710 1710 	hdr->sadb_msg_version = PF_KEY_V2;
1711 1711 	hdr->sadb_msg_errno = (uint8_t) 0;
1712 1712 	hdr->sadb_msg_len = (sizeof(struct sadb_msg) / sizeof(uint64_t));
1713 +	hdr->sadb_msg_reserved = 0;
1713 1714 
1714 1715 	pfkey_broadcast(skb, GFP_ATOMIC, BROADCAST_ALL, NULL, c->net);
1715 1716 
···
2700 2699 	hdr->sadb_msg_errno = (uint8_t) 0;
2701 2700 	hdr->sadb_msg_satype = SADB_SATYPE_UNSPEC;
2702 2701 	hdr->sadb_msg_len = (sizeof(struct sadb_msg) / sizeof(uint64_t));
2702 +	hdr->sadb_msg_reserved = 0;
2703 2703 	pfkey_broadcast(skb_out, GFP_ATOMIC, BROADCAST_ALL, NULL, c->net);
2704 2704 	return 0;
2705 2705 
+2 -1
net/netfilter/ipvs/ip_vs_core.c
···
1442 1442 
1443 1443 	/* do the statistics and put it back */
1444 1444 	ip_vs_in_stats(cp, skb);
1445 -	if (IPPROTO_TCP == cih->protocol || IPPROTO_UDP == cih->protocol)
1445 +	if (IPPROTO_TCP == cih->protocol || IPPROTO_UDP == cih->protocol ||
1446 +	    IPPROTO_SCTP == cih->protocol)
1446 1447 		offset += 2 * sizeof(__u16);
1447 1448 	verdict = ip_vs_icmp_xmit(skb, cp, pp, offset, hooknum, &ciph);
1448 1449 
+1 -1
net/netfilter/nf_conntrack_labels.c
···
45 45 	if (test_bit(bit, labels->bits))
46 46 		return 0;
47 47 
48 -	if (test_and_set_bit(bit, labels->bits))
48 +	if (!test_and_set_bit(bit, labels->bits))
49 49 		nf_conntrack_event_cache(IPCT_LABEL, ct);
50 50 
51 51 	return 0;
+1
net/netfilter/nf_conntrack_netlink.c
···
1837 1837 	nf_conntrack_eventmask_report((1 << IPCT_REPLY) |
1838 1838 				      (1 << IPCT_ASSURED) |
1839 1839 				      (1 << IPCT_HELPER) |
1840 +				      (1 << IPCT_LABEL) |
1840 1841 				      (1 << IPCT_PROTOINFO) |
1841 1842 				      (1 << IPCT_NATSEQADJ) |
1842 1843 				      (1 << IPCT_MARK),
+2 -1
net/netfilter/nf_nat_sip.c
···
230 230 					  &ct->tuplehash[!dir].tuple.src.u3,
231 231 					  false);
232 232 		if (!mangle_packet(skb, protoff, dataoff, dptr, datalen,
233 -				   poff, plen, buffer, buflen))
233 +				   poff, plen, buffer, buflen)) {
234 234 			nf_ct_helper_log(skb, ct, "cannot mangle received");
235 235 			return NF_DROP;
236 +		}
236 237 	}
237 238 
238 239 	/* The rport= parameter (RFC 3581) contains the port number
+23
sound/pci/hda/patch_cirrus.c
···
58 58 	CS420X_GPIO_23,
59 59 	CS420X_MBP101,
60 60 	CS420X_MBP81,
61 +	CS420X_MBA42,
61 62 	CS420X_AUTO,
62 63 	/* aliases */
63 64 	CS420X_IMAC27_122 = CS420X_GPIO_23,
···
347 346 	{ .id = CS420X_APPLE, .name = "apple" },
348 347 	{ .id = CS420X_MBP101, .name = "mbp101" },
349 348 	{ .id = CS420X_MBP81, .name = "mbp81" },
349 +	{ .id = CS420X_MBA42, .name = "mba42" },
350 350 	{}
351 351 };
352 352 
···
363 361 	SND_PCI_QUIRK(0x106b, 0x1c00, "MacBookPro 8,1", CS420X_MBP81),
364 362 	SND_PCI_QUIRK(0x106b, 0x2000, "iMac 12,2", CS420X_IMAC27_122),
365 363 	SND_PCI_QUIRK(0x106b, 0x2800, "MacBookPro 10,1", CS420X_MBP101),
364 +	SND_PCI_QUIRK(0x106b, 0x5b00, "MacBookAir 4,2", CS420X_MBA42),
366 365 	SND_PCI_QUIRK_VENDOR(0x106b, "Apple", CS420X_APPLE),
367 366 	{} /* terminator */
368 367 };
···
414 411 	{ 0x0d, 0x40ab90f0 },
415 412 	{ 0x0e, 0x90a600f0 },
416 413 	{ 0x12, 0x50a600f0 },
414 +	{} /* terminator */
415 +};
416 +
417 +static const struct hda_pintbl mba42_pincfgs[] = {
418 +	{ 0x09, 0x012b4030 }, /* HP */
419 +	{ 0x0a, 0x400000f0 },
420 +	{ 0x0b, 0x90100120 }, /* speaker */
421 +	{ 0x0c, 0x400000f0 },
422 +	{ 0x0d, 0x90a00110 }, /* mic */
423 +	{ 0x0e, 0x400000f0 },
424 +	{ 0x0f, 0x400000f0 },
425 +	{ 0x10, 0x400000f0 },
426 +	{ 0x12, 0x400000f0 },
427 +	{ 0x15, 0x400000f0 },
417 428 	{} /* terminator */
418 429 };
···
496 479 		{0x11, AC_VERB_SET_PROC_COEF, 0x102a},
497 480 		{}
498 481 	},
482 +	.chained = true,
483 +	.chain_id = CS420X_GPIO_13,
484 +	},
485 +	[CS420X_MBA42] = {
486 +	.type = HDA_FIXUP_PINS,
487 +	.v.pins = mba42_pincfgs,
499 488 	.chained = true,
500 489 	.chain_id = CS420X_GPIO_13,
501 490 	},
+6
sound/pci/hda/patch_realtek.c
···
3483 3483 	SND_PCI_QUIRK(0x1028, 0x05ca, "Dell", ALC269_FIXUP_DELL2_MIC_NO_PRESENCE),
3484 3484 	SND_PCI_QUIRK(0x1028, 0x05cb, "Dell", ALC269_FIXUP_DELL2_MIC_NO_PRESENCE),
3485 3485 	SND_PCI_QUIRK(0x1028, 0x05de, "Dell", ALC269_FIXUP_DELL2_MIC_NO_PRESENCE),
3486 +	SND_PCI_QUIRK(0x1028, 0x05e0, "Dell", ALC269_FIXUP_DELL2_MIC_NO_PRESENCE),
3486 3487 	SND_PCI_QUIRK(0x1028, 0x05e9, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE),
3487 3488 	SND_PCI_QUIRK(0x1028, 0x05ea, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE),
3488 3489 	SND_PCI_QUIRK(0x1028, 0x05eb, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE),
···
3495 3494 	SND_PCI_QUIRK(0x1028, 0x05f5, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE),
3496 3495 	SND_PCI_QUIRK(0x1028, 0x05f6, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE),
3497 3496 	SND_PCI_QUIRK(0x1028, 0x05f8, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE),
3497 +	SND_PCI_QUIRK(0x1028, 0x0606, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE),
3498 +	SND_PCI_QUIRK(0x1028, 0x0608, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE),
3498 3499 	SND_PCI_QUIRK(0x1028, 0x0609, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE),
3499 3500 	SND_PCI_QUIRK(0x103c, 0x1586, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC2),
3500 3501 	SND_PCI_QUIRK(0x103c, 0x18e6, "HP", ALC269_FIXUP_HP_GPIO_LED),
···
3599 3596 	{.id = ALC269_FIXUP_INV_DMIC, .name = "inv-dmic"},
3600 3597 	{.id = ALC269_FIXUP_LENOVO_DOCK, .name = "lenovo-dock"},
3601 3598 	{.id = ALC269_FIXUP_HP_GPIO_LED, .name = "hp-gpio-led"},
3599 +	{.id = ALC269_FIXUP_DELL1_MIC_NO_PRESENCE, .name = "dell-headset-multi"},
3600 +	{.id = ALC269_FIXUP_DELL2_MIC_NO_PRESENCE, .name = "dell-headset-dock"},
3602 3601 	{}
3603 3602 };
···
4280 4275 	{.id = ALC662_FIXUP_ASUS_MODE7, .name = "asus-mode7"},
4281 4276 	{.id = ALC662_FIXUP_ASUS_MODE8, .name = "asus-mode8"},
4282 4277 	{.id = ALC662_FIXUP_INV_DMIC, .name = "inv-dmic"},
4278 +	{.id = ALC668_FIXUP_DELL_MIC_NO_PRESENCE, .name = "dell-headset-multi"},
4283 4279 	{}
4284 4280 };
+20 -2
sound/usb/card.c
···
147 147 		return -EINVAL;
148 148 	}
149 149 
150 +	alts = &iface->altsetting[0];
151 +	altsd = get_iface_desc(alts);
152 +
153 +	/*
154 +	 * Android with both accessory and audio interfaces enabled gets the
155 +	 * interface numbers wrong.
156 +	 */
157 +	if ((chip->usb_id == USB_ID(0x18d1, 0x2d04) ||
158 +	     chip->usb_id == USB_ID(0x18d1, 0x2d05)) &&
159 +	    interface == 0 &&
160 +	    altsd->bInterfaceClass == USB_CLASS_VENDOR_SPEC &&
161 +	    altsd->bInterfaceSubClass == USB_SUBCLASS_VENDOR_SPEC) {
162 +		interface = 2;
163 +		iface = usb_ifnum_to_if(dev, interface);
164 +		if (!iface)
165 +			return -EINVAL;
166 +		alts = &iface->altsetting[0];
167 +		altsd = get_iface_desc(alts);
168 +	}
169 +
150 170 	if (usb_interface_claimed(iface)) {
151 171 		snd_printdd(KERN_INFO "%d:%d:%d: skipping, already claimed\n",
152 172 			    dev->devnum, ctrlif, interface);
153 173 		return -EINVAL;
154 174 	}
155 175 
156 -	alts = &iface->altsetting[0];
157 -	altsd = get_iface_desc(alts);
158 176 	if ((altsd->bInterfaceClass == USB_CLASS_AUDIO ||
159 177 	     altsd->bInterfaceClass == USB_CLASS_VENDOR_SPEC) &&
160 178 	    altsd->bInterfaceSubClass == USB_SUBCLASS_MIDISTREAMING) {
+1
sound/usb/mixer.c
···
885 885 
886 886 	case USB_ID(0x046d, 0x0808):
887 887 	case USB_ID(0x046d, 0x0809):
888 +	case USB_ID(0x046d, 0x081b): /* HD Webcam c310 */
888 889 	case USB_ID(0x046d, 0x081d): /* HD Webcam c510 */
889 890 	case USB_ID(0x046d, 0x0825): /* HD Webcam c270 */
890 891 	case USB_ID(0x046d, 0x0991):