Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'asoc-v3.11-2' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound into for-next

ASoC: More updates for v3.11

Some more fixes and enhancements, and also a bunch of refactoring for
AC'97 support which enables more than one AC'97 controller driver to be
built in.

+2200 -1142
+23 -14
Documentation/DocBook/media/v4l/dev-codec.xml
··· 1 1 <title>Codec Interface</title> 2 2 3 - <note> 4 - <title>Suspended</title> 5 - 6 - <para>This interface has been be suspended from the V4L2 API 7 - implemented in Linux 2.6 until we have more experience with codec 8 - device interfaces.</para> 9 - </note> 10 - 11 3 <para>A V4L2 codec can compress, decompress, transform, or otherwise 12 - convert video data from one format into another format, in memory. 13 - Applications send data to be converted to the driver through a 14 - &func-write; call, and receive the converted data through a 15 - &func-read; call. For efficiency a driver may also support streaming 16 - I/O.</para> 4 + convert video data from one format into another format, in memory. Typically 5 + such devices are memory-to-memory devices (i.e. devices with the 6 + <constant>V4L2_CAP_VIDEO_M2M</constant> or <constant>V4L2_CAP_VIDEO_M2M_MPLANE</constant> 7 + capability set). 8 + </para> 17 9 18 - <para>[to do]</para> 10 + <para>A memory-to-memory video node acts just like a normal video node, but it 11 + supports both output (sending frames from memory to the codec hardware) and 12 + capture (receiving the processed frames from the codec hardware into memory) 13 + stream I/O. An application will have to setup the stream 14 + I/O for both sides and finally call &VIDIOC-STREAMON; for both capture and output 15 + to start the codec.</para> 16 + 17 + <para>Video compression codecs use the MPEG controls to setup their codec parameters 18 + (note that the MPEG controls actually support many more codecs than just MPEG). 19 + See <xref linkend="mpeg-controls"></xref>.</para> 20 + 21 + <para>Memory-to-memory devices can often be used as a shared resource: you can 22 + open the video node multiple times, each application setting up their own codec properties 23 + that are local to the file handle, and each can use it independently from the others. 
24 + The driver will arbitrate access to the codec and reprogram it whenever another file 25 + handler gets access. This is different from the usual video node behavior where the video properties 26 + are global to the device (i.e. changing something through one file handle is visible 27 + through another file handle).</para>
+1 -1
Documentation/DocBook/media/v4l/v4l2.xml
··· 493 493 </partinfo> 494 494 495 495 <title>Video for Linux Two API Specification</title> 496 - <subtitle>Revision 3.9</subtitle> 496 + <subtitle>Revision 3.10</subtitle> 497 497 498 498 <chapter id="common"> 499 499 &sub-common;
+1 -1
Documentation/devicetree/bindings/media/exynos-fimc-lite.txt
··· 2 2 3 3 Required properties: 4 4 5 - - compatible : should be "samsung,exynos4212-fimc" for Exynos4212 and 5 + - compatible : should be "samsung,exynos4212-fimc-lite" for Exynos4212 and 6 6 Exynos4412 SoCs; 7 7 - reg : physical base address and size of the device memory mapped 8 8 registers;
+12
Documentation/devicetree/bindings/sound/adi,adau1701.txt
··· 11 11 - reset-gpio: A GPIO spec to define which pin is connected to the 12 12 chip's !RESET pin. If specified, the driver will 13 13 assert a hardware reset at probe time. 14 + - adi,pll-mode-gpios: An array of two GPIO specs to describe the GPIOs 15 + the ADAU's PLL config pins are connected to. 16 + The state of the pins are set according to the 17 + configured clock divider on ASoC side before the 18 + firmware is loaded. 19 + - adi,pin-config: An array of 12 numerical values selecting one of the 20 + pin configurations as described in the datasheet, 21 + table 53. Note that the value of this property has 22 + to be prefixed with '/bits/ 8'. 14 23 15 24 Examples: 16 25 ··· 28 19 compatible = "adi,adau1701"; 29 20 reg = <0x34>; 30 21 reset-gpio = <&gpio 23 0>; 22 + adi,pll-mode-gpios = <&gpio 24 0 &gpio 25 0>; 23 + adi,pin-config = /bits/ 8 <0x4 0x7 0x5 0x5 0x4 0x4 24 + 0x4 0x4 0x4 0x4 0x4 0x4>; 31 25 }; 32 26 };
+11
Documentation/devicetree/bindings/sound/ti,tas5086.txt
··· 20 20 When not specified, the hardware default of 1300ms 21 21 is retained. 22 22 23 + - ti,mid-z-channel-X: Boolean properties, X being a number from 1 to 6. 24 + If given, channel X will start with the Mid-Z start 25 + sequence, otherwise the default Low-Z scheme is used. 26 + 27 + The correct configuration depends on how the power 28 + stages connected to the PWM output pins work. Not all 29 + power stages are compatible to Mid-Z - please refer 30 + to the datasheets for more details. 31 + 32 + Most systems should not set any of these properties. 33 + 23 34 Examples: 24 35 25 36 i2c_bus {
+1 -1
Makefile
··· 1 1 VERSION = 3 2 2 PATCHLEVEL = 10 3 3 SUBLEVEL = 0 4 - EXTRAVERSION = -rc6 4 + EXTRAVERSION = -rc7 5 5 NAME = Unicycling Gorilla 6 6 7 7 # *DOCUMENTATION*
+11 -1
arch/arm/Kconfig
··· 1189 1189 is not correctly implemented in PL310 as clean lines are not 1190 1190 invalidated as a result of these operations. 1191 1191 1192 + config ARM_ERRATA_643719 1193 + bool "ARM errata: LoUIS bit field in CLIDR register is incorrect" 1194 + depends on CPU_V7 && SMP 1195 + help 1196 + This option enables the workaround for the 643719 Cortex-A9 (prior to 1197 + r1p0) erratum. On affected cores the LoUIS bit field of the CLIDR 1198 + register returns zero when it should return one. The workaround 1199 + corrects this value, ensuring cache maintenance operations which use 1200 + it behave as intended and avoiding data corruption. 1201 + 1192 1202 config ARM_ERRATA_720789 1193 1203 bool "ARM errata: TLBIASIDIS and TLBIMVAIS operations can broadcast a faulty ASID" 1194 1204 depends on CPU_V7 ··· 2016 2006 2017 2007 config KEXEC 2018 2008 bool "Kexec system call (EXPERIMENTAL)" 2019 - depends on (!SMP || HOTPLUG_CPU) 2009 + depends on (!SMP || PM_SLEEP_SMP) 2020 2010 help 2021 2011 kexec is a system call that implements the ability to shutdown your 2022 2012 current kernel, and to start another kernel. It is like a reboot
+2 -1
arch/arm/boot/compressed/Makefile
··· 116 116 117 117 # Make sure files are removed during clean 118 118 extra-y += piggy.gzip piggy.lzo piggy.lzma piggy.xzkern \ 119 - lib1funcs.S ashldi3.S $(libfdt) $(libfdt_hdrs) 119 + lib1funcs.S ashldi3.S $(libfdt) $(libfdt_hdrs) \ 120 + hyp-stub.S 120 121 121 122 ifeq ($(CONFIG_FUNCTION_TRACER),y) 122 123 ORIG_CFLAGS := $(KBUILD_CFLAGS)
+1 -1
arch/arm/boot/dts/exynos5250-pinctrl.dtsi
··· 763 763 }; 764 764 }; 765 765 766 - pinctrl@03680000 { 766 + pinctrl@03860000 { 767 767 gpz: gpz { 768 768 gpio-controller; 769 769 #gpio-cells = <2>;
+2 -2
arch/arm/boot/dts/exynos5250.dtsi
··· 161 161 interrupts = <0 50 0>; 162 162 }; 163 163 164 - pinctrl_3: pinctrl@03680000 { 164 + pinctrl_3: pinctrl@03860000 { 165 165 compatible = "samsung,exynos5250-pinctrl"; 166 - reg = <0x0368000 0x1000>; 166 + reg = <0x03860000 0x1000>; 167 167 interrupts = <0 47 0>; 168 168 }; 169 169
+1 -3
arch/arm/include/asm/cacheflush.h
··· 320 320 } 321 321 322 322 #define ARCH_HAS_FLUSH_KERNEL_DCACHE_PAGE 323 - static inline void flush_kernel_dcache_page(struct page *page) 324 - { 325 - } 323 + extern void flush_kernel_dcache_page(struct page *); 326 324 327 325 #define flush_dcache_mmap_lock(mapping) \ 328 326 spin_lock_irq(&(mapping)->tree_lock)
+4
arch/arm/kernel/machine_kexec.c
··· 134 134 unsigned long reboot_code_buffer_phys; 135 135 void *reboot_code_buffer; 136 136 137 + if (num_online_cpus() > 1) { 138 + pr_err("kexec: error: multiple CPUs still online\n"); 139 + return; 140 + } 137 141 138 142 page_list = image->head & PAGE_MASK; 139 143
+37 -6
arch/arm/kernel/process.c
··· 184 184 185 185 __setup("reboot=", reboot_setup); 186 186 187 + /* 188 + * Called by kexec, immediately prior to machine_kexec(). 189 + * 190 + * This must completely disable all secondary CPUs; simply causing those CPUs 191 + * to execute e.g. a RAM-based pin loop is not sufficient. This allows the 192 + * kexec'd kernel to use any and all RAM as it sees fit, without having to 193 + * avoid any code or data used by any SW CPU pin loop. The CPU hotplug 194 + * functionality embodied in disable_nonboot_cpus() achieves this. 195 + */ 187 196 void machine_shutdown(void) 188 197 { 189 - #ifdef CONFIG_SMP 190 - smp_send_stop(); 191 - #endif 198 + disable_nonboot_cpus(); 192 199 } 193 200 201 + /* 202 + * Halting simply requires that the secondary CPUs stop performing any 203 + * activity (executing tasks, handling interrupts). smp_send_stop() 204 + * achieves this. 205 + */ 194 206 void machine_halt(void) 195 207 { 196 - machine_shutdown(); 208 + smp_send_stop(); 209 + 197 210 local_irq_disable(); 198 211 while (1); 199 212 } 200 213 214 + /* 215 + * Power-off simply requires that the secondary CPUs stop performing any 216 + * activity (executing tasks, handling interrupts). smp_send_stop() 217 + * achieves this. When the system power is turned off, it will take all CPUs 218 + * with it. 219 + */ 201 220 void machine_power_off(void) 202 221 { 203 - machine_shutdown(); 222 + smp_send_stop(); 223 + 204 224 if (pm_power_off) 205 225 pm_power_off(); 206 226 } 207 227 228 + /* 229 + * Restart requires that the secondary CPUs stop performing any activity 230 + * while the primary CPU resets the system. Systems with a single CPU can 231 + * use soft_restart() as their machine descriptor's .restart hook, since that 232 + * will cause the only available CPU to reset. Systems with multiple CPUs must 233 + * provide a HW restart implementation, to ensure that all CPUs reset at once. 
234 + * This is required so that any code running after reset on the primary CPU 235 + * doesn't have to co-ordinate with other CPUs to ensure they aren't still 236 + * executing pre-reset code, and using RAM that the primary CPU's code wishes 237 + * to use. Implementing such co-ordination would be essentially impossible. 238 + */ 208 239 void machine_restart(char *cmd) 209 240 { 210 - machine_shutdown(); 241 + smp_send_stop(); 211 242 212 243 arm_pm_restart(reboot_mode, cmd); 213 244
-13
arch/arm/kernel/smp.c
··· 651 651 smp_cross_call(cpumask_of(cpu), IPI_RESCHEDULE); 652 652 } 653 653 654 - #ifdef CONFIG_HOTPLUG_CPU 655 - static void smp_kill_cpus(cpumask_t *mask) 656 - { 657 - unsigned int cpu; 658 - for_each_cpu(cpu, mask) 659 - platform_cpu_kill(cpu); 660 - } 661 - #else 662 - static void smp_kill_cpus(cpumask_t *mask) { } 663 - #endif 664 - 665 654 void smp_send_stop(void) 666 655 { 667 656 unsigned long timeout; ··· 668 679 669 680 if (num_online_cpus() > 1) 670 681 pr_warning("SMP: failed to stop secondary CPUs\n"); 671 - 672 - smp_kill_cpus(&mask); 673 682 } 674 683 675 684 /*
+8
arch/arm/mm/cache-v7.S
··· 92 92 mrc p15, 1, r0, c0, c0, 1 @ read clidr, r0 = clidr 93 93 ALT_SMP(ands r3, r0, #(7 << 21)) @ extract LoUIS from clidr 94 94 ALT_UP(ands r3, r0, #(7 << 27)) @ extract LoUU from clidr 95 + #ifdef CONFIG_ARM_ERRATA_643719 96 + ALT_SMP(mrceq p15, 0, r2, c0, c0, 0) @ read main ID register 97 + ALT_UP(moveq pc, lr) @ LoUU is zero, so nothing to do 98 + ldreq r1, =0x410fc090 @ ID of ARM Cortex A9 r0p? 99 + biceq r2, r2, #0x0000000f @ clear minor revision number 100 + teqeq r2, r1 @ test for errata affected core and if so... 101 + orreqs r3, #(1 << 21) @ fix LoUIS value (and set flags state to 'ne') 102 + #endif 95 103 ALT_SMP(mov r3, r3, lsr #20) @ r3 = LoUIS * 2 96 104 ALT_UP(mov r3, r3, lsr #26) @ r3 = LoUU * 2 97 105 moveq pc, lr @ return if level == 0
+33
arch/arm/mm/flush.c
··· 301 301 EXPORT_SYMBOL(flush_dcache_page); 302 302 303 303 /* 304 + * Ensure cache coherency for the kernel mapping of this page. We can 305 + * assume that the page is pinned via kmap. 306 + * 307 + * If the page only exists in the page cache and there are no user 308 + * space mappings, this is a no-op since the page was already marked 309 + * dirty at creation. Otherwise, we need to flush the dirty kernel 310 + * cache lines directly. 311 + */ 312 + void flush_kernel_dcache_page(struct page *page) 313 + { 314 + if (cache_is_vivt() || cache_is_vipt_aliasing()) { 315 + struct address_space *mapping; 316 + 317 + mapping = page_mapping(page); 318 + 319 + if (!mapping || mapping_mapped(mapping)) { 320 + void *addr; 321 + 322 + addr = page_address(page); 323 + /* 324 + * kmap_atomic() doesn't set the page virtual 325 + * address for highmem pages, and 326 + * kunmap_atomic() takes care of cache 327 + * flushing already. 328 + */ 329 + if (!IS_ENABLED(CONFIG_HIGHMEM) || addr) 330 + __cpuc_flush_dcache_area(addr, PAGE_SIZE); 331 + } 332 + } 333 + } 334 + EXPORT_SYMBOL(flush_kernel_dcache_page); 335 + 336 + /* 304 337 * Flush an anonymous page so that users of get_user_pages() 305 338 * can safely access the data. The expected sequence is: 306 339 *
+5 -3
arch/arm/mm/mmu.c
··· 616 616 } while (pte++, addr += PAGE_SIZE, addr != end); 617 617 } 618 618 619 - static void __init map_init_section(pmd_t *pmd, unsigned long addr, 619 + static void __init __map_init_section(pmd_t *pmd, unsigned long addr, 620 620 unsigned long end, phys_addr_t phys, 621 621 const struct mem_type *type) 622 622 { 623 + pmd_t *p = pmd; 624 + 623 625 #ifndef CONFIG_ARM_LPAE 624 626 /* 625 627 * In classic MMU format, puds and pmds are folded in to ··· 640 638 phys += SECTION_SIZE; 641 639 } while (pmd++, addr += SECTION_SIZE, addr != end); 642 640 643 - flush_pmd_entry(pmd); 641 + flush_pmd_entry(p); 644 642 } 645 643 646 644 static void __init alloc_init_pmd(pud_t *pud, unsigned long addr, ··· 663 661 */ 664 662 if (type->prot_sect && 665 663 ((addr | next | phys) & ~SECTION_MASK) == 0) { 666 - map_init_section(pmd, addr, next, phys, type); 664 + __map_init_section(pmd, addr, next, phys, type); 667 665 } else { 668 666 alloc_init_pte(pmd, addr, next, 669 667 __phys_to_pfn(phys), type);
+2 -2
arch/arm/mm/proc-v7.S
··· 409 409 */ 410 410 .type __v7_pj4b_proc_info, #object 411 411 __v7_pj4b_proc_info: 412 - .long 0x562f5840 413 - .long 0xfffffff0 412 + .long 0x560f5800 413 + .long 0xff0fff00 414 414 __v7_proc __v7_pj4b_setup 415 415 .size __v7_pj4b_proc_info, . - __v7_pj4b_proc_info 416 416
+1
arch/arm64/kernel/perf_event.c
··· 1336 1336 return; 1337 1337 } 1338 1338 1339 + perf_callchain_store(entry, regs->pc); 1339 1340 tail = (struct frame_tail __user *)regs->regs[29]; 1340 1341 1341 1342 while (entry->nr < PERF_MAX_STACK_DEPTH &&
+1
arch/ia64/include/asm/irqflags.h
··· 11 11 #define _ASM_IA64_IRQFLAGS_H 12 12 13 13 #include <asm/pal.h> 14 + #include <asm/kregs.h> 14 15 15 16 #ifdef CONFIG_IA64_DEBUG_IRQ 16 17 extern unsigned long last_cli_ip;
+1
arch/metag/include/asm/hugetlb.h
··· 2 2 #define _ASM_METAG_HUGETLB_H 3 3 4 4 #include <asm/page.h> 5 + #include <asm-generic/hugetlb.h> 5 6 6 7 7 8 static inline int is_hugepage_only_range(struct mm_struct *mm,
+2 -3
arch/mn10300/include/asm/irqflags.h
··· 13 13 #define _ASM_IRQFLAGS_H 14 14 15 15 #include <asm/cpu-regs.h> 16 - #ifndef __ASSEMBLY__ 17 - #include <linux/smp.h> 18 - #endif 16 + /* linux/smp.h <- linux/irqflags.h needs asm/smp.h first */ 17 + #include <asm/smp.h> 19 18 20 19 /* 21 20 * interrupt control
+3 -1
arch/mn10300/include/asm/smp.h
··· 24 24 #ifndef __ASSEMBLY__ 25 25 #include <linux/threads.h> 26 26 #include <linux/cpumask.h> 27 + #include <linux/thread_info.h> 27 28 #endif 28 29 29 30 #ifdef CONFIG_SMP ··· 86 85 extern void smp_init_cpus(void); 87 86 extern void smp_cache_interrupt(void); 88 87 extern void send_IPI_allbutself(int irq); 89 - extern int smp_nmi_call_function(smp_call_func_t func, void *info, int wait); 88 + extern int smp_nmi_call_function(void (*func)(void *), void *info, int wait); 90 89 91 90 extern void arch_send_call_function_single_ipi(int cpu); 92 91 extern void arch_send_call_function_ipi_mask(const struct cpumask *mask); ··· 101 100 #ifndef __ASSEMBLY__ 102 101 103 102 static inline void smp_init_cpus(void) {} 103 + #define raw_smp_processor_id() 0 104 104 105 105 #endif /* __ASSEMBLY__ */ 106 106 #endif /* CONFIG_SMP */
+2 -2
arch/parisc/include/asm/mmzone.h
··· 27 27 28 28 #define PFNNID_SHIFT (30 - PAGE_SHIFT) 29 29 #define PFNNID_MAP_MAX 512 /* support 512GB */ 30 - extern unsigned char pfnnid_map[PFNNID_MAP_MAX]; 30 + extern signed char pfnnid_map[PFNNID_MAP_MAX]; 31 31 32 32 #ifndef CONFIG_64BIT 33 33 #define pfn_is_io(pfn) ((pfn & (0xf0000000UL >> PAGE_SHIFT)) == (0xf0000000UL >> PAGE_SHIFT)) ··· 46 46 i = pfn >> PFNNID_SHIFT; 47 47 BUG_ON(i >= ARRAY_SIZE(pfnnid_map)); 48 48 49 - return (int)pfnnid_map[i]; 49 + return pfnnid_map[i]; 50 50 } 51 51 52 52 static inline int pfn_valid(int pfn)
+5
arch/parisc/include/asm/pci.h
··· 225 225 return channel ? 15 : 14; 226 226 } 227 227 228 + #define HAVE_PCI_MMAP 229 + 230 + extern int pci_mmap_page_range(struct pci_dev *dev, struct vm_area_struct *vma, 231 + enum pci_mmap_state mmap_state, int write_combine); 232 + 228 233 #endif /* __ASM_PARISC_PCI_H */
+1
arch/parisc/kernel/hardware.c
··· 1205 1205 {HPHW_FIO, 0x004, 0x00320, 0x0, "Metheus Frame Buffer"}, 1206 1206 {HPHW_FIO, 0x004, 0x00340, 0x0, "BARCO CX4500 VME Grphx Cnsl"}, 1207 1207 {HPHW_FIO, 0x004, 0x00360, 0x0, "Hughes TOG VME FDDI"}, 1208 + {HPHW_FIO, 0x076, 0x000AD, 0x00, "Crestone Peak RS-232"}, 1208 1209 {HPHW_IOA, 0x185, 0x0000B, 0x00, "Java BC Summit Port"}, 1209 1210 {HPHW_IOA, 0x1FF, 0x0000B, 0x00, "Hitachi Ghostview Summit Port"}, 1210 1211 {HPHW_IOA, 0x580, 0x0000B, 0x10, "U2-IOA BC Runway Port"},
+36 -36
arch/parisc/kernel/pacache.S
··· 860 860 #endif 861 861 862 862 ldil L%dcache_stride, %r1 863 - ldw R%dcache_stride(%r1), %r1 863 + ldw R%dcache_stride(%r1), r31 864 864 865 865 #ifdef CONFIG_64BIT 866 866 depdi,z 1, 63-PAGE_SHIFT,1, %r25 ··· 868 868 depwi,z 1, 31-PAGE_SHIFT,1, %r25 869 869 #endif 870 870 add %r28, %r25, %r25 871 - sub %r25, %r1, %r25 871 + sub %r25, r31, %r25 872 872 873 873 874 - 1: fdc,m %r1(%r28) 875 - fdc,m %r1(%r28) 876 - fdc,m %r1(%r28) 877 - fdc,m %r1(%r28) 878 - fdc,m %r1(%r28) 879 - fdc,m %r1(%r28) 880 - fdc,m %r1(%r28) 881 - fdc,m %r1(%r28) 882 - fdc,m %r1(%r28) 883 - fdc,m %r1(%r28) 884 - fdc,m %r1(%r28) 885 - fdc,m %r1(%r28) 886 - fdc,m %r1(%r28) 887 - fdc,m %r1(%r28) 888 - fdc,m %r1(%r28) 874 + 1: fdc,m r31(%r28) 875 + fdc,m r31(%r28) 876 + fdc,m r31(%r28) 877 + fdc,m r31(%r28) 878 + fdc,m r31(%r28) 879 + fdc,m r31(%r28) 880 + fdc,m r31(%r28) 881 + fdc,m r31(%r28) 882 + fdc,m r31(%r28) 883 + fdc,m r31(%r28) 884 + fdc,m r31(%r28) 885 + fdc,m r31(%r28) 886 + fdc,m r31(%r28) 887 + fdc,m r31(%r28) 888 + fdc,m r31(%r28) 889 889 cmpb,COND(<<) %r28, %r25,1b 890 - fdc,m %r1(%r28) 890 + fdc,m r31(%r28) 891 891 892 892 sync 893 893 ··· 936 936 #endif 937 937 938 938 ldil L%icache_stride, %r1 939 - ldw R%icache_stride(%r1), %r1 939 + ldw R%icache_stride(%r1), %r31 940 940 941 941 #ifdef CONFIG_64BIT 942 942 depdi,z 1, 63-PAGE_SHIFT,1, %r25 ··· 944 944 depwi,z 1, 31-PAGE_SHIFT,1, %r25 945 945 #endif 946 946 add %r28, %r25, %r25 947 - sub %r25, %r1, %r25 947 + sub %r25, %r31, %r25 948 948 949 949 950 950 /* fic only has the type 26 form on PA1.1, requiring an 951 951 * explicit space specification, so use %sr4 */ 952 - 1: fic,m %r1(%sr4,%r28) 953 - fic,m %r1(%sr4,%r28) 954 - fic,m %r1(%sr4,%r28) 955 - fic,m %r1(%sr4,%r28) 956 - fic,m %r1(%sr4,%r28) 957 - fic,m %r1(%sr4,%r28) 958 - fic,m %r1(%sr4,%r28) 959 - fic,m %r1(%sr4,%r28) 960 - fic,m %r1(%sr4,%r28) 961 - fic,m %r1(%sr4,%r28) 962 - fic,m %r1(%sr4,%r28) 963 - fic,m %r1(%sr4,%r28) 964 - fic,m %r1(%sr4,%r28) 965 - fic,m 
%r1(%sr4,%r28) 966 - fic,m %r1(%sr4,%r28) 952 + 1: fic,m %r31(%sr4,%r28) 953 + fic,m %r31(%sr4,%r28) 954 + fic,m %r31(%sr4,%r28) 955 + fic,m %r31(%sr4,%r28) 956 + fic,m %r31(%sr4,%r28) 957 + fic,m %r31(%sr4,%r28) 958 + fic,m %r31(%sr4,%r28) 959 + fic,m %r31(%sr4,%r28) 960 + fic,m %r31(%sr4,%r28) 961 + fic,m %r31(%sr4,%r28) 962 + fic,m %r31(%sr4,%r28) 963 + fic,m %r31(%sr4,%r28) 964 + fic,m %r31(%sr4,%r28) 965 + fic,m %r31(%sr4,%r28) 966 + fic,m %r31(%sr4,%r28) 967 967 cmpb,COND(<<) %r28, %r25,1b 968 - fic,m %r1(%sr4,%r28) 968 + fic,m %r31(%sr4,%r28) 969 969 970 970 sync 971 971
+27
arch/parisc/kernel/pci.c
··· 220 220 } 221 221 222 222 223 + int pci_mmap_page_range(struct pci_dev *dev, struct vm_area_struct *vma, 224 + enum pci_mmap_state mmap_state, int write_combine) 225 + { 226 + unsigned long prot; 227 + 228 + /* 229 + * I/O space can be accessed via normal processor loads and stores on 230 + * this platform but for now we elect not to do this and portable 231 + * drivers should not do this anyway. 232 + */ 233 + if (mmap_state == pci_mmap_io) 234 + return -EINVAL; 235 + 236 + if (write_combine) 237 + return -EINVAL; 238 + 239 + /* 240 + * Ignore write-combine; for now only return uncached mappings. 241 + */ 242 + prot = pgprot_val(vma->vm_page_prot); 243 + prot |= _PAGE_NO_CACHE; 244 + vma->vm_page_prot = __pgprot(prot); 245 + 246 + return remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff, 247 + vma->vm_end - vma->vm_start, vma->vm_page_prot); 248 + } 249 + 223 250 /* 224 251 * A driver is enabling the device. We make sure that all the appropriate 225 252 * bits are set to allow the device to operate as the driver is expecting.
+1 -1
arch/parisc/mm/init.c
··· 47 47 48 48 #ifdef CONFIG_DISCONTIGMEM 49 49 struct node_map_data node_data[MAX_NUMNODES] __read_mostly; 50 - unsigned char pfnnid_map[PFNNID_MAP_MAX] __read_mostly; 50 + signed char pfnnid_map[PFNNID_MAP_MAX] __read_mostly; 51 51 #endif 52 52 53 53 static struct resource data_resource = {
+2 -1
arch/powerpc/kvm/booke.c
··· 673 673 ret = s; 674 674 goto out; 675 675 } 676 - kvmppc_lazy_ee_enable(); 677 676 678 677 kvm_guest_enter(); 679 678 ··· 697 698 698 699 kvmppc_load_guest_fp(vcpu); 699 700 #endif 701 + 702 + kvmppc_lazy_ee_enable(); 700 703 701 704 ret = __kvmppc_vcpu_run(kvm_run, vcpu); 702 705
+7 -1
arch/powerpc/mm/hugetlbpage.c
··· 592 592 do { 593 593 pmd = pmd_offset(pud, addr); 594 594 next = pmd_addr_end(addr, end); 595 - if (pmd_none_or_clear_bad(pmd)) 595 + if (!is_hugepd(pmd)) { 596 + /* 597 + * if it is not hugepd pointer, we should already find 598 + * it cleared. 599 + */ 600 + WARN_ON(!pmd_none_or_clear_bad(pmd)); 596 601 continue; 602 + } 597 603 #ifdef CONFIG_PPC_FSL_BOOK3E 598 604 /* 599 605 * Increment next by the size of the huge mapping since
+1
arch/sparc/include/asm/Kbuild
··· 6 6 generic-y += div64.h 7 7 generic-y += emergency-restart.h 8 8 generic-y += exec.h 9 + generic-y += linkage.h 9 10 generic-y += local64.h 10 11 generic-y += mutex.h 11 12 generic-y += irq_regs.h
+1 -1
arch/sparc/include/asm/leon.h
··· 135 135 136 136 #ifdef CONFIG_SMP 137 137 # define LEON3_IRQ_IPI_DEFAULT 13 138 - # define LEON3_IRQ_TICKER (leon3_ticker_irq) 138 + # define LEON3_IRQ_TICKER (leon3_gptimer_irq) 139 139 # define LEON3_IRQ_CROSS_CALL 15 140 140 #endif 141 141
+1
arch/sparc/include/asm/leon_amba.h
··· 47 47 #define LEON3_GPTIMER_LD 4 48 48 #define LEON3_GPTIMER_IRQEN 8 49 49 #define LEON3_GPTIMER_SEPIRQ 8 50 + #define LEON3_GPTIMER_TIMERS 0x7 50 51 51 52 #define LEON23_REG_TIMER_CONTROL_EN 0x00000001 /* 1 = enable counting */ 52 53 /* 0 = hold scalar and counter */
-6
arch/sparc/include/asm/linkage.h
··· 1 - #ifndef __ASM_LINKAGE_H 2 - #define __ASM_LINKAGE_H 3 - 4 - /* Nothing to see here... */ 5 - 6 - #endif
+2 -1
arch/sparc/kernel/ds.c
··· 843 843 unsigned long len; 844 844 845 845 strcpy(full_boot_str, "boot "); 846 - strcpy(full_boot_str + strlen("boot "), boot_command); 846 + strlcpy(full_boot_str + strlen("boot "), boot_command, 847 + sizeof(full_boot_str + strlen("boot "))); 847 848 len = strlen(full_boot_str); 848 849 849 850 if (reboot_data_supported) {
+24 -44
arch/sparc/kernel/leon_kernel.c
··· 38 38 39 39 unsigned long leon3_gptimer_irq; /* interrupt controller irq number */ 40 40 unsigned long leon3_gptimer_idx; /* Timer Index (0..6) within Timer Core */ 41 - int leon3_ticker_irq; /* Timer ticker IRQ */ 42 41 unsigned int sparc_leon_eirq; 43 42 #define LEON_IMASK(cpu) (&leon3_irqctrl_regs->mask[cpu]) 44 43 #define LEON_IACK (&leon3_irqctrl_regs->iclear) ··· 277 278 278 279 leon_clear_profile_irq(cpu); 279 280 281 + if (cpu == boot_cpu_id) 282 + timer_interrupt(irq, NULL); 283 + 280 284 ce = &per_cpu(sparc32_clockevent, cpu); 281 285 282 286 irq_enter(); ··· 301 299 int icsel; 302 300 int ampopts; 303 301 int err; 302 + u32 config; 304 303 305 304 sparc_config.get_cycles_offset = leon_cycles_offset; 306 305 sparc_config.cs_period = 1000000 / HZ; ··· 380 377 LEON3_BYPASS_STORE_PA( 381 378 &leon3_gptimer_regs->e[leon3_gptimer_idx].ctrl, 0); 382 379 383 - #ifdef CONFIG_SMP 384 - leon3_ticker_irq = leon3_gptimer_irq + 1 + leon3_gptimer_idx; 385 - 386 - if (!(LEON3_BYPASS_LOAD_PA(&leon3_gptimer_regs->config) & 387 - (1<<LEON3_GPTIMER_SEPIRQ))) { 388 - printk(KERN_ERR "timer not configured with separate irqs\n"); 389 - BUG(); 390 - } 391 - 392 - LEON3_BYPASS_STORE_PA(&leon3_gptimer_regs->e[leon3_gptimer_idx+1].val, 393 - 0); 394 - LEON3_BYPASS_STORE_PA(&leon3_gptimer_regs->e[leon3_gptimer_idx+1].rld, 395 - (((1000000/HZ) - 1))); 396 - LEON3_BYPASS_STORE_PA(&leon3_gptimer_regs->e[leon3_gptimer_idx+1].ctrl, 397 - 0); 398 - #endif 399 - 400 380 /* 401 381 * The IRQ controller may (if implemented) consist of multiple 402 382 * IRQ controllers, each mapped on a 4Kb boundary. 
··· 402 416 if (eirq != 0) 403 417 leon_eirq_setup(eirq); 404 418 405 - irq = _leon_build_device_irq(NULL, leon3_gptimer_irq+leon3_gptimer_idx); 406 - err = request_irq(irq, timer_interrupt, IRQF_TIMER, "timer", NULL); 407 - if (err) { 408 - printk(KERN_ERR "unable to attach timer IRQ%d\n", irq); 409 - prom_halt(); 410 - } 411 - 412 419 #ifdef CONFIG_SMP 413 420 { 414 421 unsigned long flags; ··· 418 439 } 419 440 #endif 420 441 442 + config = LEON3_BYPASS_LOAD_PA(&leon3_gptimer_regs->config); 443 + if (config & (1 << LEON3_GPTIMER_SEPIRQ)) 444 + leon3_gptimer_irq += leon3_gptimer_idx; 445 + else if ((config & LEON3_GPTIMER_TIMERS) > 1) 446 + pr_warn("GPTIMER uses shared irqs, using other timers of the same core will fail.\n"); 447 + 448 + #ifdef CONFIG_SMP 449 + /* Install per-cpu IRQ handler for broadcasted ticker */ 450 + irq = leon_build_device_irq(leon3_gptimer_irq, handle_percpu_irq, 451 + "per-cpu", 0); 452 + err = request_irq(irq, leon_percpu_timer_ce_interrupt, 453 + IRQF_PERCPU | IRQF_TIMER, "timer", NULL); 454 + #else 455 + irq = _leon_build_device_irq(NULL, leon3_gptimer_irq); 456 + err = request_irq(irq, timer_interrupt, IRQF_TIMER, "timer", NULL); 457 + #endif 458 + if (err) { 459 + pr_err("Unable to attach timer IRQ%d\n", irq); 460 + prom_halt(); 461 + } 421 462 LEON3_BYPASS_STORE_PA(&leon3_gptimer_regs->e[leon3_gptimer_idx].ctrl, 422 463 LEON3_GPTIMER_EN | 423 464 LEON3_GPTIMER_RL | 424 465 LEON3_GPTIMER_LD | 425 466 LEON3_GPTIMER_IRQEN); 426 - 427 - #ifdef CONFIG_SMP 428 - /* Install per-cpu IRQ handler for broadcasted ticker */ 429 - irq = leon_build_device_irq(leon3_ticker_irq, handle_percpu_irq, 430 - "per-cpu", 0); 431 - err = request_irq(irq, leon_percpu_timer_ce_interrupt, 432 - IRQF_PERCPU | IRQF_TIMER, "ticker", 433 - NULL); 434 - if (err) { 435 - printk(KERN_ERR "unable to attach ticker IRQ%d\n", irq); 436 - prom_halt(); 437 - } 438 - 439 - LEON3_BYPASS_STORE_PA(&leon3_gptimer_regs->e[leon3_gptimer_idx+1].ctrl, 440 - LEON3_GPTIMER_EN | 441 
- LEON3_GPTIMER_RL | 442 - LEON3_GPTIMER_LD | 443 - LEON3_GPTIMER_IRQEN); 444 - #endif 445 467 return; 446 468 bad: 447 469 printk(KERN_ERR "No Timer/irqctrl found\n");
+3 -5
arch/sparc/kernel/leon_pci_grpci1.c
··· 536 536 537 537 /* find device register base address */ 538 538 res = platform_get_resource(ofdev, IORESOURCE_MEM, 0); 539 - regs = devm_request_and_ioremap(&ofdev->dev, res); 540 - if (!regs) { 541 - dev_err(&ofdev->dev, "io-regs mapping failed\n"); 542 - return -EADDRNOTAVAIL; 543 - } 539 + regs = devm_ioremap_resource(&ofdev->dev, res); 540 + if (IS_ERR(regs)) 541 + return PTR_ERR(regs); 544 542 545 543 /* 546 544 * check that we're in Host Slot and that we can act as a Host Bridge
+7
arch/sparc/kernel/leon_pmc.c
··· 47 47 * MMU does not get a TLB miss here by using the MMU BYPASS ASI. 48 48 */ 49 49 register unsigned int address = (unsigned int)leon3_irqctrl_regs; 50 + 51 + /* Interrupts need to be enabled to not hang the CPU */ 52 + local_irq_enable(); 53 + 50 54 __asm__ __volatile__ ( 51 55 "wr %%g0, %%asr19\n" 52 56 "lda [%0] %1, %%g0\n" ··· 64 60 */ 65 61 void pmc_leon_idle(void) 66 62 { 63 + /* Interrupts need to be enabled to not hang the CPU */ 64 + local_irq_enable(); 65 + 67 66 /* For systems without power-down, this will be no-op */ 68 67 __asm__ __volatile__ ("wr %g0, %asr19\n\t"); 69 68 }
+1 -1
arch/sparc/kernel/setup_32.c
··· 304 304 305 305 /* Initialize PROM console and command line. */ 306 306 *cmdline_p = prom_getbootargs(); 307 - strcpy(boot_command_line, *cmdline_p); 307 + strlcpy(boot_command_line, *cmdline_p, COMMAND_LINE_SIZE); 308 308 parse_early_param(); 309 309 310 310 boot_flags_init(*cmdline_p);
+1 -1
arch/sparc/kernel/setup_64.c
··· 555 555 { 556 556 /* Initialize PROM console and command line. */ 557 557 *cmdline_p = prom_getbootargs(); 558 - strcpy(boot_command_line, *cmdline_p); 558 + strlcpy(boot_command_line, *cmdline_p, COMMAND_LINE_SIZE); 559 559 parse_early_param(); 560 560 561 561 boot_flags_init(*cmdline_p);
+8 -1
arch/sparc/mm/init_64.c
··· 1098 1098 m->size = *val; 1099 1099 val = mdesc_get_property(md, node, 1100 1100 "address-congruence-offset", NULL); 1101 - m->offset = *val; 1101 + 1102 + /* The address-congruence-offset property is optional. 1103 + * Explicitly zero it to identify this. 1104 + */ 1105 + if (val) 1106 + m->offset = *val; 1107 + else 1108 + m->offset = 0UL; 1102 1109 1103 1110 numadbg("MBLOCK[%d]: base[%llx] size[%llx] offset[%llx]\n", 1104 1111 count - 1, m->base, m->size, m->offset);
+1 -1
arch/sparc/mm/tlb.c
··· 85 85 } 86 86 87 87 if (!tb->active) { 88 - global_flush_tlb_page(mm, vaddr); 89 88 flush_tsb_user_page(mm, vaddr); 89 + global_flush_tlb_page(mm, vaddr); 90 90 goto out; 91 91 } 92 92
+7 -5
arch/sparc/prom/bootstr_32.c
··· 23 23 return barg_buf; 24 24 } 25 25 26 - switch(prom_vers) { 26 + switch (prom_vers) { 27 27 case PROM_V0: 28 28 cp = barg_buf; 29 29 /* Start from 1 and go over fd(0,0,0)kernel */ 30 - for(iter = 1; iter < 8; iter++) { 30 + for (iter = 1; iter < 8; iter++) { 31 31 arg = (*(romvec->pv_v0bootargs))->argv[iter]; 32 32 if (arg == NULL) 33 33 break; 34 - while(*arg != 0) { 34 + while (*arg != 0) { 35 35 /* Leave place for space and null. */ 36 - if(cp >= barg_buf + BARG_LEN-2){ 36 + if (cp >= barg_buf + BARG_LEN - 2) 37 37 /* We might issue a warning here. */ 38 38 break; 39 - } 40 39 *cp++ = *arg++; 41 40 } 42 41 *cp++ = ' '; 42 + if (cp >= barg_buf + BARG_LEN - 1) 43 + /* We might issue a warning here. */ 44 + break; 43 45 } 44 46 *cp = 0; 45 47 break;
+8 -8
arch/sparc/prom/tree_64.c
··· 39 39 return prom_node_to_node("child", node); 40 40 } 41 41 42 - inline phandle prom_getchild(phandle node) 42 + phandle prom_getchild(phandle node) 43 43 { 44 44 phandle cnode; 45 45 ··· 72 72 return prom_node_to_node(prom_peer_name, node); 73 73 } 74 74 75 - inline phandle prom_getsibling(phandle node) 75 + phandle prom_getsibling(phandle node) 76 76 { 77 77 phandle sibnode; 78 78 ··· 89 89 /* Return the length in bytes of property 'prop' at node 'node'. 90 90 * Return -1 on error. 91 91 */ 92 - inline int prom_getproplen(phandle node, const char *prop) 92 + int prom_getproplen(phandle node, const char *prop) 93 93 { 94 94 unsigned long args[6]; 95 95 ··· 113 113 * 'buffer' which has a size of 'bufsize'. If the acquisition 114 114 * was successful the length will be returned, else -1 is returned. 115 115 */ 116 - inline int prom_getproperty(phandle node, const char *prop, 117 - char *buffer, int bufsize) 116 + int prom_getproperty(phandle node, const char *prop, 117 + char *buffer, int bufsize) 118 118 { 119 119 unsigned long args[8]; 120 120 int plen; ··· 141 141 /* Acquire an integer property and return its value. Returns -1 142 142 * on failure. 143 143 */ 144 - inline int prom_getint(phandle node, const char *prop) 144 + int prom_getint(phandle node, const char *prop) 145 145 { 146 146 int intprop; 147 147 ··· 235 235 /* Return the first property type for node 'node'. 236 236 * buffer should be at least 32B in length 237 237 */ 238 - inline char *prom_firstprop(phandle node, char *buffer) 238 + char *prom_firstprop(phandle node, char *buffer) 239 239 { 240 240 unsigned long args[7]; 241 241 ··· 261 261 * at node 'node' . Returns NULL string if no more 262 262 * property types for this node. 263 263 */ 264 - inline char *prom_nextprop(phandle node, const char *oprop, char *buffer) 264 + char *prom_nextprop(phandle node, const char *oprop, char *buffer) 265 265 { 266 266 unsigned long args[7]; 267 267 char buf[32];
+2
arch/tile/lib/exports.c
··· 84 84 EXPORT_SYMBOL(__ashrdi3); 85 85 uint64_t __ashldi3(uint64_t, unsigned int); 86 86 EXPORT_SYMBOL(__ashldi3); 87 + int __ffsdi2(uint64_t); 88 + EXPORT_SYMBOL(__ffsdi2); 87 89 #endif
+1 -1
arch/um/drivers/mconsole_kern.c
··· 147 147 } 148 148 149 149 do { 150 - loff_t pos; 150 + loff_t pos = file->f_pos; 151 151 mm_segment_t old_fs = get_fs(); 152 152 set_fs(KERNEL_DS); 153 153 len = vfs_read(file, buf, PAGE_SIZE - 1, &pos);
+1
arch/x86/Kconfig
··· 2265 2265 config IA32_EMULATION 2266 2266 bool "IA32 Emulation" 2267 2267 depends on X86_64 2268 + select BINFMT_ELF 2268 2269 select COMPAT_BINFMT_ELF 2269 2270 select HAVE_UID16 2270 2271 ---help---
+32 -16
arch/x86/crypto/aesni-intel_asm.S
··· 2681 2681 addq %rcx, KEYP 2682 2682 2683 2683 movdqa IV, STATE1 2684 - pxor 0x00(INP), STATE1 2684 + movdqu 0x00(INP), INC 2685 + pxor INC, STATE1 2685 2686 movdqu IV, 0x00(OUTP) 2686 2687 2687 2688 _aesni_gf128mul_x_ble() 2688 2689 movdqa IV, STATE2 2689 - pxor 0x10(INP), STATE2 2690 + movdqu 0x10(INP), INC 2691 + pxor INC, STATE2 2690 2692 movdqu IV, 0x10(OUTP) 2691 2693 2692 2694 _aesni_gf128mul_x_ble() 2693 2695 movdqa IV, STATE3 2694 - pxor 0x20(INP), STATE3 2696 + movdqu 0x20(INP), INC 2697 + pxor INC, STATE3 2695 2698 movdqu IV, 0x20(OUTP) 2696 2699 2697 2700 _aesni_gf128mul_x_ble() 2698 2701 movdqa IV, STATE4 2699 - pxor 0x30(INP), STATE4 2702 + movdqu 0x30(INP), INC 2703 + pxor INC, STATE4 2700 2704 movdqu IV, 0x30(OUTP) 2701 2705 2702 2706 call *%r11 2703 2707 2704 - pxor 0x00(OUTP), STATE1 2708 + movdqu 0x00(OUTP), INC 2709 + pxor INC, STATE1 2705 2710 movdqu STATE1, 0x00(OUTP) 2706 2711 2707 2712 _aesni_gf128mul_x_ble() 2708 2713 movdqa IV, STATE1 2709 - pxor 0x40(INP), STATE1 2714 + movdqu 0x40(INP), INC 2715 + pxor INC, STATE1 2710 2716 movdqu IV, 0x40(OUTP) 2711 2717 2712 - pxor 0x10(OUTP), STATE2 2718 + movdqu 0x10(OUTP), INC 2719 + pxor INC, STATE2 2713 2720 movdqu STATE2, 0x10(OUTP) 2714 2721 2715 2722 _aesni_gf128mul_x_ble() 2716 2723 movdqa IV, STATE2 2717 - pxor 0x50(INP), STATE2 2724 + movdqu 0x50(INP), INC 2725 + pxor INC, STATE2 2718 2726 movdqu IV, 0x50(OUTP) 2719 2727 2720 - pxor 0x20(OUTP), STATE3 2728 + movdqu 0x20(OUTP), INC 2729 + pxor INC, STATE3 2721 2730 movdqu STATE3, 0x20(OUTP) 2722 2731 2723 2732 _aesni_gf128mul_x_ble() 2724 2733 movdqa IV, STATE3 2725 - pxor 0x60(INP), STATE3 2734 + movdqu 0x60(INP), INC 2735 + pxor INC, STATE3 2726 2736 movdqu IV, 0x60(OUTP) 2727 2737 2728 - pxor 0x30(OUTP), STATE4 2738 + movdqu 0x30(OUTP), INC 2739 + pxor INC, STATE4 2729 2740 movdqu STATE4, 0x30(OUTP) 2730 2741 2731 2742 _aesni_gf128mul_x_ble() 2732 2743 movdqa IV, STATE4 2733 - pxor 0x70(INP), STATE4 2744 + movdqu 0x70(INP), INC
2745 + pxor INC, STATE4 2734 2746 movdqu IV, 0x70(OUTP) 2735 2747 2736 2748 _aesni_gf128mul_x_ble() ··· 2750 2738 2751 2739 call *%r11 2752 2740 2753 - pxor 0x40(OUTP), STATE1 2741 + movdqu 0x40(OUTP), INC 2742 + pxor INC, STATE1 2754 2743 movdqu STATE1, 0x40(OUTP) 2755 2744 2756 - pxor 0x50(OUTP), STATE2 2745 + movdqu 0x50(OUTP), INC 2746 + pxor INC, STATE2 2757 2747 movdqu STATE2, 0x50(OUTP) 2758 2748 2759 - pxor 0x60(OUTP), STATE3 2749 + movdqu 0x60(OUTP), INC 2750 + pxor INC, STATE3 2760 2751 movdqu STATE3, 0x60(OUTP) 2761 2752 2762 - pxor 0x70(OUTP), STATE4 2753 + movdqu 0x70(OUTP), INC 2754 + pxor INC, STATE4 2763 2755 movdqu STATE4, 0x70(OUTP) 2764 2756 2765 2757 ret
+1 -1
arch/x86/ia32/ia32_aout.c
··· 192 192 /* struct user */ 193 193 DUMP_WRITE(&dump, sizeof(dump)); 194 194 /* Now dump all of the user data. Include malloced stuff as well */ 195 - DUMP_SEEK(PAGE_SIZE); 195 + DUMP_SEEK(PAGE_SIZE - sizeof(dump)); 196 196 /* now we start writing out the user space info */ 197 197 set_fs(USER_DS); 198 198 /* Dump the data area */
+5
arch/x86/include/asm/irq.h
··· 41 41 42 42 extern void init_ISA_irqs(void); 43 43 44 + #ifdef CONFIG_X86_LOCAL_APIC 45 + void arch_trigger_all_cpu_backtrace(void); 46 + #define arch_trigger_all_cpu_backtrace arch_trigger_all_cpu_backtrace 47 + #endif 48 + 44 49 #endif /* _ASM_X86_IRQ_H */
+2 -2
arch/x86/include/asm/microcode.h
··· 60 60 #ifdef CONFIG_MICROCODE_EARLY 61 61 #define MAX_UCODE_COUNT 128 62 62 extern void __init load_ucode_bsp(void); 63 - extern __init void load_ucode_ap(void); 63 + extern void __cpuinit load_ucode_ap(void); 64 64 extern int __init save_microcode_in_initrd(void); 65 65 #else 66 66 static inline void __init load_ucode_bsp(void) {} 67 - static inline __init void load_ucode_ap(void) {} 67 + static inline void __cpuinit load_ucode_ap(void) {} 68 68 static inline int __init save_microcode_in_initrd(void) 69 69 { 70 70 return 0;
+1 -3
arch/x86/include/asm/nmi.h
··· 18 18 void __user *, size_t *, loff_t *); 19 19 extern int unknown_nmi_panic; 20 20 21 - void arch_trigger_all_cpu_backtrace(void); 22 - #define arch_trigger_all_cpu_backtrace arch_trigger_all_cpu_backtrace 23 - #endif 21 + #endif /* CONFIG_X86_LOCAL_APIC */ 24 22 25 23 #define NMI_FLAG_FIRST 1 26 24
+1
arch/x86/kernel/apic/hw_nmi.c
··· 9 9 * 10 10 */ 11 11 #include <asm/apic.h> 12 + #include <asm/nmi.h> 12 13 13 14 #include <linux/cpumask.h> 14 15 #include <linux/kdebug.h>
+4 -4
arch/x86/kernel/cpu/mtrr/cleanup.c
··· 714 714 if (mtrr_tom2) 715 715 x_remove_size = (mtrr_tom2 >> PAGE_SHIFT) - x_remove_base; 716 716 717 - nr_range = x86_get_mtrr_mem_range(range, 0, x_remove_base, x_remove_size); 718 717 /* 719 718 * [0, 1M) should always be covered by var mtrr with WB 720 719 * and fixed mtrrs should take effect before var mtrr for it: 721 720 */ 722 - nr_range = add_range_with_merge(range, RANGE_NUM, nr_range, 0, 721 + nr_range = add_range_with_merge(range, RANGE_NUM, 0, 0, 723 722 1ULL<<(20 - PAGE_SHIFT)); 724 - /* Sort the ranges: */ 725 - sort_range(range, nr_range); 723 + /* add from var mtrr at last */ 724 + nr_range = x86_get_mtrr_mem_range(range, nr_range, 725 + x_remove_base, x_remove_size); 726 726 727 727 range_sums = sum_ranges(range, nr_range); 728 728 printk(KERN_INFO "total RAM covered: %ldM\n",
+1 -1
arch/x86/kernel/cpu/perf_event_intel.c
··· 165 165 INTEL_EVENT_EXTRA_REG(0xb7, MSR_OFFCORE_RSP_0, 0x3f807f8fffull, RSP_0), 166 166 INTEL_EVENT_EXTRA_REG(0xbb, MSR_OFFCORE_RSP_1, 0x3f807f8fffull, RSP_1), 167 167 INTEL_UEVENT_PEBS_LDLAT_EXTRA_REG(0x01cd), 168 - INTEL_UEVENT_PEBS_LDLAT_EXTRA_REG(0x01cd), 169 168 EVENT_EXTRA_END 170 169 }; 171 170 172 171 static struct extra_reg intel_snbep_extra_regs[] __read_mostly = { 173 172 INTEL_EVENT_EXTRA_REG(0xb7, MSR_OFFCORE_RSP_0, 0x3fffff8fffull, RSP_0), 174 173 INTEL_EVENT_EXTRA_REG(0xbb, MSR_OFFCORE_RSP_1, 0x3fffff8fffull, RSP_1), 174 + INTEL_UEVENT_PEBS_LDLAT_EXTRA_REG(0x01cd), 175 175 EVENT_EXTRA_END 176 176 }; 177 177
+1
arch/x86/kernel/kvmclock.c
··· 242 242 if (!mem) 243 243 return; 244 244 hv_clock = __va(mem); 245 + memset(hv_clock, 0, size); 245 246 246 247 if (kvm_register_clock("boot clock")) { 247 248 hv_clock = NULL;
-12
arch/x86/kernel/process.c
··· 277 277 } 278 278 #endif 279 279 280 - void arch_cpu_idle_prepare(void) 281 - { 282 - /* 283 - * If we're the non-boot CPU, nothing set the stack canary up 284 - * for us. CPU0 already has it initialized but no harm in 285 - * doing it again. This is a good place for updating it, as 286 - * we wont ever return from this function (so the invalid 287 - * canaries already on the stack wont ever trigger). 288 - */ 289 - boot_init_stack_canary(); 290 - } 291 - 292 280 void arch_cpu_idle_enter(void) 293 281 { 294 282 local_touch_nmi();
+4 -4
arch/x86/kernel/smpboot.c
··· 372 372 373 373 void __cpuinit set_cpu_sibling_map(int cpu) 374 374 { 375 - bool has_mc = boot_cpu_data.x86_max_cores > 1; 376 375 bool has_smt = smp_num_siblings > 1; 376 + bool has_mp = has_smt || boot_cpu_data.x86_max_cores > 1; 377 377 struct cpuinfo_x86 *c = &cpu_data(cpu); 378 378 struct cpuinfo_x86 *o; 379 379 int i; 380 380 381 381 cpumask_set_cpu(cpu, cpu_sibling_setup_mask); 382 382 383 - if (!has_smt && !has_mc) { 383 + if (!has_mp) { 384 384 cpumask_set_cpu(cpu, cpu_sibling_mask(cpu)); 385 385 cpumask_set_cpu(cpu, cpu_llc_shared_mask(cpu)); 386 386 cpumask_set_cpu(cpu, cpu_core_mask(cpu)); ··· 394 394 if ((i == cpu) || (has_smt && match_smt(c, o))) 395 395 link_mask(sibling, cpu, i); 396 396 397 - if ((i == cpu) || (has_mc && match_llc(c, o))) 397 + if ((i == cpu) || (has_mp && match_llc(c, o))) 398 398 link_mask(llc_shared, cpu, i); 399 399 400 400 } ··· 406 406 for_each_cpu(i, cpu_sibling_setup_mask) { 407 407 o = &cpu_data(i); 408 408 409 - if ((i == cpu) || (has_mc && match_mc(c, o))) { 409 + if ((i == cpu) || (has_mp && match_mc(c, o))) { 410 410 link_mask(core, cpu, i); 411 411 412 412 /*
+2 -3
arch/x86/kvm/x86.c
··· 582 582 if (index != XCR_XFEATURE_ENABLED_MASK) 583 583 return 1; 584 584 xcr0 = xcr; 585 - if (kvm_x86_ops->get_cpl(vcpu) != 0) 586 - return 1; 587 585 if (!(xcr0 & XSTATE_FP)) 588 586 return 1; 589 587 if ((xcr0 & XSTATE_YMM) && !(xcr0 & XSTATE_SSE)) ··· 595 597 596 598 int kvm_set_xcr(struct kvm_vcpu *vcpu, u32 index, u64 xcr) 597 599 { 598 - if (__kvm_set_xcr(vcpu, index, xcr)) { 600 + if (kvm_x86_ops->get_cpl(vcpu) != 0 || 601 + __kvm_set_xcr(vcpu, index, xcr)) { 599 602 kvm_inject_gp(vcpu, 0); 600 603 return 1; 601 604 }
+6 -1
arch/x86/platform/efi/efi.c
··· 1069 1069 * that by attempting to use more space than is available. 1070 1070 */ 1071 1071 unsigned long dummy_size = remaining_size + 1024; 1072 - void *dummy = kmalloc(dummy_size, GFP_ATOMIC); 1072 + void *dummy = kzalloc(dummy_size, GFP_ATOMIC); 1073 + 1074 + if (!dummy) 1075 + return EFI_OUT_OF_RESOURCES; 1073 1076 1074 1077 status = efi.set_variable(efi_dummy_name, &EFI_DUMMY_GUID, 1075 1078 EFI_VARIABLE_NON_VOLATILE | ··· 1091 1088 EFI_VARIABLE_RUNTIME_ACCESS, 1092 1089 0, dummy); 1093 1090 } 1091 + 1092 + kfree(dummy); 1094 1093 1095 1094 /* 1096 1095 * The runtime code may now have triggered a garbage collection
+15 -6
drivers/acpi/acpi_lpss.c
··· 164 164 if (dev_desc->clk_required) { 165 165 ret = register_device_clock(adev, pdata); 166 166 if (ret) { 167 - /* 168 - * Skip the device, but don't terminate the namespace 169 - * scan. 170 - */ 171 - kfree(pdata); 172 - return 0; 167 + /* Skip the device, but continue the namespace scan. */ 168 + ret = 0; 169 + goto err_out; 173 170 } 171 + } 172 + 173 + /* 174 + * This works around a known issue in ACPI tables where LPSS devices 175 + * have _PS0 and _PS3 without _PSC (and no power resources), so 176 + * acpi_bus_init_power() will assume that the BIOS has put them into D0. 177 + */ 178 + ret = acpi_device_fix_up_power(adev); 179 + if (ret) { 180 + /* Skip the device, but continue the namespace scan. */ 181 + ret = 0; 182 + goto err_out; 174 183 } 175 184 176 185 adev->driver_data = pdata;
+20
drivers/acpi/device_pm.c
··· 290 290 return 0; 291 291 } 292 292 293 + /** 294 + * acpi_device_fix_up_power - Force device with missing _PSC into D0. 295 + * @device: Device object whose power state is to be fixed up. 296 + * 297 + * Devices without power resources and _PSC, but having _PS0 and _PS3 defined, 298 + * are assumed to be put into D0 by the BIOS. However, in some cases that may 299 + * not be the case and this function should be used then. 300 + */ 301 + int acpi_device_fix_up_power(struct acpi_device *device) 302 + { 303 + int ret = 0; 304 + 305 + if (!device->power.flags.power_resources 306 + && !device->power.flags.explicit_get 307 + && device->power.state == ACPI_STATE_D0) 308 + ret = acpi_dev_pm_explicit_set(device, ACPI_STATE_D0); 309 + 310 + return ret; 311 + } 312 + 293 313 int acpi_bus_update_power(acpi_handle handle, int *state_p) 294 314 { 295 315 struct acpi_device *device;
+2
drivers/acpi/dock.c
··· 868 868 if (!count) 869 869 return -EINVAL; 870 870 871 + acpi_scan_lock_acquire(); 871 872 begin_undock(dock_station); 872 873 ret = handle_eject_request(dock_station, ACPI_NOTIFY_EJECT_REQUEST); 874 + acpi_scan_lock_release(); 873 875 return ret ? ret: count; 874 876 } 875 877 static DEVICE_ATTR(undock, S_IWUSR, NULL, write_undock);
+1
drivers/acpi/power.c
··· 885 885 ACPI_STA_DEFAULT); 886 886 mutex_init(&resource->resource_lock); 887 887 INIT_LIST_HEAD(&resource->dependent); 888 + INIT_LIST_HEAD(&resource->list_node); 888 889 resource->name = device->pnp.bus_id; 889 890 strcpy(acpi_device_name(device), ACPI_POWER_DEVICE_NAME); 890 891 strcpy(acpi_device_class(device), ACPI_POWER_CLASS);
+11 -5
drivers/acpi/resource.c
··· 304 304 } 305 305 306 306 static void acpi_dev_get_irqresource(struct resource *res, u32 gsi, 307 - u8 triggering, u8 polarity, u8 shareable) 307 + u8 triggering, u8 polarity, u8 shareable, 308 + bool legacy) 308 309 { 309 310 int irq, p, t; 310 311 ··· 318 317 * In IO-APIC mode, use overrided attribute. Two reasons: 319 318 * 1. BIOS bug in DSDT 320 319 * 2. BIOS uses IO-APIC mode Interrupt Source Override 320 + * 321 + * We do this only if we are dealing with IRQ() or IRQNoFlags() 322 + * resource (the legacy ISA resources). With modern ACPI 5 devices 323 + * using extended IRQ descriptors we take the IRQ configuration 324 + * from _CRS directly. 321 325 */ 322 - if (!acpi_get_override_irq(gsi, &t, &p)) { 326 + if (legacy && !acpi_get_override_irq(gsi, &t, &p)) { 323 327 u8 trig = t ? ACPI_LEVEL_SENSITIVE : ACPI_EDGE_SENSITIVE; 324 328 u8 pol = p ? ACPI_ACTIVE_LOW : ACPI_ACTIVE_HIGH; 325 329 326 330 if (triggering != trig || polarity != pol) { 327 331 pr_warning("ACPI: IRQ %d override to %s, %s\n", gsi, 328 - t ? "edge" : "level", p ? "low" : "high"); 332 + t ? "level" : "edge", p ? "low" : "high"); 329 333 triggering = trig; 330 334 polarity = pol; 331 335 } ··· 379 373 } 380 374 acpi_dev_get_irqresource(res, irq->interrupts[index], 381 375 irq->triggering, irq->polarity, 382 - irq->sharable); 376 + irq->sharable, true); 383 377 break; 384 378 case ACPI_RESOURCE_TYPE_EXTENDED_IRQ: 385 379 ext_irq = &ares->data.extended_irq; ··· 389 383 } 390 384 acpi_dev_get_irqresource(res, ext_irq->interrupts[index], 391 385 ext_irq->triggering, ext_irq->polarity, 392 - ext_irq->sharable); 386 + ext_irq->sharable, false); 393 387 break; 394 388 default: 395 389 return false;
+18 -9
drivers/base/firmware_class.c
··· 450 450 { 451 451 struct firmware_buf *buf = fw_priv->buf; 452 452 453 + /* 454 + * There is a small window in which user can write to 'loading' 455 + * between loading done and disappearance of 'loading' 456 + */ 457 + if (test_bit(FW_STATUS_DONE, &buf->status)) 458 + return; 459 + 453 460 set_bit(FW_STATUS_ABORT, &buf->status); 454 461 complete_all(&buf->completion); 462 + 463 + /* avoid user action after loading abort */ 464 + fw_priv->buf = NULL; 455 465 } 456 466 457 467 #define is_fw_load_aborted(buf) \ ··· 538 528 struct device_attribute *attr, char *buf) 539 529 { 540 530 struct firmware_priv *fw_priv = to_firmware_priv(dev); 541 - int loading = test_bit(FW_STATUS_LOADING, &fw_priv->buf->status); 531 + int loading = 0; 532 + 533 + mutex_lock(&fw_lock); 534 + if (fw_priv->buf) 535 + loading = test_bit(FW_STATUS_LOADING, &fw_priv->buf->status); 536 + mutex_unlock(&fw_lock); 542 537 543 538 return sprintf(buf, "%d\n", loading); 544 539 } ··· 585 570 const char *buf, size_t count) 586 571 { 587 572 struct firmware_priv *fw_priv = to_firmware_priv(dev); 588 - struct firmware_buf *fw_buf = fw_priv->buf; 573 + struct firmware_buf *fw_buf; 589 574 int loading = simple_strtol(buf, NULL, 10); 590 575 int i; 591 576 592 577 mutex_lock(&fw_lock); 593 - 578 + fw_buf = fw_priv->buf; 594 579 if (!fw_buf) 595 580 goto out; 596 581 ··· 792 777 struct firmware_priv, timeout_work.work); 793 778 794 779 mutex_lock(&fw_lock); 795 - if (test_bit(FW_STATUS_DONE, &(fw_priv->buf->status))) { 796 - mutex_unlock(&fw_lock); 797 - return; 798 - } 799 780 fw_load_abort(fw_priv); 800 781 mutex_unlock(&fw_lock); 801 782 } ··· 871 860 wait_for_completion(&buf->completion); 872 861 873 862 cancel_delayed_work_sync(&fw_priv->timeout_work); 874 - 875 - fw_priv->buf = NULL; 876 863 877 864 device_remove_file(f_dev, &dev_attr_loading); 878 865 err_del_bin_attr:
+5 -1
drivers/block/rbd.c
··· 1036 1036 char *name; 1037 1037 u64 segment; 1038 1038 int ret; 1039 + char *name_format; 1039 1040 1040 1041 name = kmem_cache_alloc(rbd_segment_name_cache, GFP_NOIO); 1041 1042 if (!name) 1042 1043 return NULL; 1043 1044 segment = offset >> rbd_dev->header.obj_order; 1044 - ret = snprintf(name, MAX_OBJ_NAME_SIZE + 1, "%s.%012llx", 1045 + name_format = "%s.%012llx"; 1046 + if (rbd_dev->image_format == 2) 1047 + name_format = "%s.%016llx"; 1048 + ret = snprintf(name, MAX_OBJ_NAME_SIZE + 1, name_format, 1045 1049 rbd_dev->header.object_prefix, segment); 1046 1050 if (ret < 0 || ret > MAX_OBJ_NAME_SIZE) { 1047 1051 pr_err("error formatting segment name for #%llu (%d)\n",
+1
drivers/clk/clk.c
··· 1955 1955 /* XXX the notifier code should handle this better */ 1956 1956 if (!cn->notifier_head.head) { 1957 1957 srcu_cleanup_notifier_head(&cn->notifier_head); 1958 + list_del(&cn->node); 1958 1959 kfree(cn); 1959 1960 } 1960 1961
+5 -5
drivers/clk/samsung/clk-exynos5250.c
··· 155 155 156 156 /* list of all parent clock list */ 157 157 PNAME(mout_apll_p) = { "fin_pll", "fout_apll", }; 158 - PNAME(mout_cpu_p) = { "mout_apll", "mout_mpll", }; 158 + PNAME(mout_cpu_p) = { "mout_apll", "sclk_mpll", }; 159 159 PNAME(mout_mpll_fout_p) = { "fout_mplldiv2", "fout_mpll" }; 160 160 PNAME(mout_mpll_p) = { "fin_pll", "mout_mpll_fout" }; 161 161 PNAME(mout_bpll_fout_p) = { "fout_bplldiv2", "fout_bpll" }; ··· 208 208 }; 209 209 210 210 struct samsung_mux_clock exynos5250_mux_clks[] __initdata = { 211 - MUX(none, "mout_apll", mout_apll_p, SRC_CPU, 0, 1), 212 - MUX(none, "mout_cpu", mout_cpu_p, SRC_CPU, 16, 1), 211 + MUX_A(none, "mout_apll", mout_apll_p, SRC_CPU, 0, 1, "mout_apll"), 212 + MUX_A(none, "mout_cpu", mout_cpu_p, SRC_CPU, 16, 1, "mout_cpu"), 213 213 MUX(none, "mout_mpll_fout", mout_mpll_fout_p, PLL_DIV2_SEL, 4, 1), 214 - MUX(none, "sclk_mpll", mout_mpll_p, SRC_CORE1, 8, 1), 214 + MUX_A(none, "sclk_mpll", mout_mpll_p, SRC_CORE1, 8, 1, "mout_mpll"), 215 215 MUX(none, "mout_bpll_fout", mout_bpll_fout_p, PLL_DIV2_SEL, 0, 1), 216 216 MUX(none, "sclk_bpll", mout_bpll_p, SRC_CDREX, 0, 1), 217 217 MUX(none, "mout_vpllsrc", mout_vpllsrc_p, SRC_TOP2, 0, 1), ··· 378 378 GATE(hsi2c3, "hsi2c3", "aclk66", GATE_IP_PERIC, 31, 0, 0), 379 379 GATE(chipid, "chipid", "aclk66", GATE_IP_PERIS, 0, 0, 0), 380 380 GATE(sysreg, "sysreg", "aclk66", GATE_IP_PERIS, 1, 0, 0), 381 - GATE(pmu, "pmu", "aclk66", GATE_IP_PERIS, 2, 0, 0), 381 + GATE(pmu, "pmu", "aclk66", GATE_IP_PERIS, 2, CLK_IGNORE_UNUSED, 0), 382 382 GATE(tzpc0, "tzpc0", "aclk66", GATE_IP_PERIS, 6, 0, 0), 383 383 GATE(tzpc1, "tzpc1", "aclk66", GATE_IP_PERIS, 7, 0, 0), 384 384 GATE(tzpc2, "tzpc2", "aclk66", GATE_IP_PERIS, 8, 0, 0),
+3 -2
drivers/clk/samsung/clk-pll.c
··· 111 111 unsigned long parent_rate) 112 112 { 113 113 struct samsung_clk_pll36xx *pll = to_clk_pll36xx(hw); 114 - u32 mdiv, pdiv, sdiv, kdiv, pll_con0, pll_con1; 114 + u32 mdiv, pdiv, sdiv, pll_con0, pll_con1; 115 + s16 kdiv; 115 116 u64 fvco = parent_rate; 116 117 117 118 pll_con0 = __raw_readl(pll->con_reg); ··· 120 119 mdiv = (pll_con0 >> PLL36XX_MDIV_SHIFT) & PLL36XX_MDIV_MASK; 121 120 pdiv = (pll_con0 >> PLL36XX_PDIV_SHIFT) & PLL36XX_PDIV_MASK; 122 121 sdiv = (pll_con0 >> PLL36XX_SDIV_SHIFT) & PLL36XX_SDIV_MASK; 123 - kdiv = pll_con1 & PLL36XX_KDIV_MASK; 122 + kdiv = (s16)(pll_con1 & PLL36XX_KDIV_MASK); 124 123 125 124 fvco *= (mdiv << 16) + kdiv; 126 125 do_div(fvco, (pdiv << sdiv));
+1 -1
drivers/clk/spear/spear3xx_clock.c
··· 369 369 clk_register_clkdev(clk, NULL, "60100000.serial"); 370 370 } 371 371 #else 372 - static inline void spear320_clk_init(void) { } 372 + static inline void spear320_clk_init(void __iomem *soc_config_base) { } 373 373 #endif 374 374 375 375 void __init spear3xx_clk_init(void __iomem *misc_base, void __iomem *soc_config_base)
+6 -5
drivers/clk/tegra/clk-tegra30.c
··· 1598 1598 clk_register_clkdev(clk, "afi", "tegra-pcie"); 1599 1599 clks[afi] = clk; 1600 1600 1601 + /* pciex */ 1602 + clk = tegra_clk_register_periph_gate("pciex", "pll_e", 0, clk_base, 0, 1603 + 74, &periph_u_regs, periph_clk_enb_refcnt); 1604 + clk_register_clkdev(clk, "pciex", "tegra-pcie"); 1605 + clks[pciex] = clk; 1606 + 1601 1607 /* kfuse */ 1602 1608 clk = tegra_clk_register_periph_gate("kfuse", "clk_m", 1603 1609 TEGRA_PERIPH_ON_APB, ··· 1722 1716 1, 0, &cml_lock); 1723 1717 clk_register_clkdev(clk, "cml1", NULL); 1724 1718 clks[cml1] = clk; 1725 - 1726 - /* pciex */ 1727 - clk = clk_register_fixed_rate(NULL, "pciex", "pll_e", 0, 100000000); 1728 - clk_register_clkdev(clk, "pciex", NULL); 1729 - clks[pciex] = clk; 1730 1719 } 1731 1720 1732 1721 static void __init tegra30_osc_clk_init(void)
+1 -2
drivers/gpu/drm/drm_prime.c
··· 190 190 if (ret) 191 191 return ERR_PTR(ret); 192 192 } 193 - return dma_buf_export(obj, &drm_gem_prime_dmabuf_ops, obj->size, 194 - 0600); 193 + return dma_buf_export(obj, &drm_gem_prime_dmabuf_ops, obj->size, flags); 195 194 } 196 195 EXPORT_SYMBOL(drm_gem_prime_export); 197 196
+10 -3
drivers/gpu/drm/radeon/r600.c
··· 2687 2687 int r600_uvd_init(struct radeon_device *rdev) 2688 2688 { 2689 2689 int i, j, r; 2690 + /* disable byte swapping */ 2691 + u32 lmi_swap_cntl = 0; 2692 + u32 mp_swap_cntl = 0; 2690 2693 2691 2694 /* raise clocks while booting up the VCPU */ 2692 2695 radeon_set_uvd_clocks(rdev, 53300, 40000); ··· 2714 2711 WREG32(UVD_LMI_CTRL, 0x40 | (1 << 8) | (1 << 13) | 2715 2712 (1 << 21) | (1 << 9) | (1 << 20)); 2716 2713 2717 - /* disable byte swapping */ 2718 - WREG32(UVD_LMI_SWAP_CNTL, 0); 2719 - WREG32(UVD_MP_SWAP_CNTL, 0); 2714 + #ifdef __BIG_ENDIAN 2715 + /* swap (8 in 32) RB and IB */ 2716 + lmi_swap_cntl = 0xa; 2717 + mp_swap_cntl = 0; 2718 + #endif 2719 + WREG32(UVD_LMI_SWAP_CNTL, lmi_swap_cntl); 2720 + WREG32(UVD_MP_SWAP_CNTL, mp_swap_cntl); 2720 2721 2721 2722 WREG32(UVD_MPC_SET_MUXA0, 0x40c2040); 2722 2723 WREG32(UVD_MPC_SET_MUXA1, 0x0);
+24 -29
drivers/gpu/drm/radeon/radeon_device.c
··· 244 244 */ 245 245 void radeon_wb_disable(struct radeon_device *rdev) 246 246 { 247 - int r; 248 - 249 - if (rdev->wb.wb_obj) { 250 - r = radeon_bo_reserve(rdev->wb.wb_obj, false); 251 - if (unlikely(r != 0)) 252 - return; 253 - radeon_bo_kunmap(rdev->wb.wb_obj); 254 - radeon_bo_unpin(rdev->wb.wb_obj); 255 - radeon_bo_unreserve(rdev->wb.wb_obj); 256 - } 257 247 rdev->wb.enabled = false; 258 248 } 259 249 ··· 259 269 { 260 270 radeon_wb_disable(rdev); 261 271 if (rdev->wb.wb_obj) { 272 + if (!radeon_bo_reserve(rdev->wb.wb_obj, false)) { 273 + radeon_bo_kunmap(rdev->wb.wb_obj); 274 + radeon_bo_unpin(rdev->wb.wb_obj); 275 + radeon_bo_unreserve(rdev->wb.wb_obj); 276 + } 262 277 radeon_bo_unref(&rdev->wb.wb_obj); 263 278 rdev->wb.wb = NULL; 264 279 rdev->wb.wb_obj = NULL; ··· 290 295 dev_warn(rdev->dev, "(%d) create WB bo failed\n", r); 291 296 return r; 292 297 } 293 - } 294 - r = radeon_bo_reserve(rdev->wb.wb_obj, false); 295 - if (unlikely(r != 0)) { 296 - radeon_wb_fini(rdev); 297 - return r; 298 - } 299 - r = radeon_bo_pin(rdev->wb.wb_obj, RADEON_GEM_DOMAIN_GTT, 300 - &rdev->wb.gpu_addr); 301 - if (r) { 298 + r = radeon_bo_reserve(rdev->wb.wb_obj, false); 299 + if (unlikely(r != 0)) { 300 + radeon_wb_fini(rdev); 301 + return r; 302 + } 303 + r = radeon_bo_pin(rdev->wb.wb_obj, RADEON_GEM_DOMAIN_GTT, 304 + &rdev->wb.gpu_addr); 305 + if (r) { 306 + radeon_bo_unreserve(rdev->wb.wb_obj); 307 + dev_warn(rdev->dev, "(%d) pin WB bo failed\n", r); 308 + radeon_wb_fini(rdev); 309 + return r; 310 + } 311 + r = radeon_bo_kmap(rdev->wb.wb_obj, (void **)&rdev->wb.wb); 302 312 radeon_bo_unreserve(rdev->wb.wb_obj); 303 - dev_warn(rdev->dev, "(%d) pin WB bo failed\n", r); 304 - radeon_wb_fini(rdev); 305 - return r; 306 - } 307 - r = radeon_bo_kmap(rdev->wb.wb_obj, (void **)&rdev->wb.wb); 308 - radeon_bo_unreserve(rdev->wb.wb_obj); 309 - if (r) { 310 - dev_warn(rdev->dev, "(%d) map WB bo failed\n", r); 311 - radeon_wb_fini(rdev); 312 - return r; 313 + if (r) {
314 + dev_warn(rdev->dev, "(%d) map WB bo failed\n", r); 315 + radeon_wb_fini(rdev); 316 + return r; 317 + } 313 318 } 314 319 315 320 /* clear wb memory */
+8 -2
drivers/gpu/drm/radeon/radeon_fence.c
··· 63 63 { 64 64 struct radeon_fence_driver *drv = &rdev->fence_drv[ring]; 65 65 if (likely(rdev->wb.enabled || !drv->scratch_reg)) { 66 - *drv->cpu_addr = cpu_to_le32(seq); 66 + if (drv->cpu_addr) { 67 + *drv->cpu_addr = cpu_to_le32(seq); 68 + } 67 69 } else { 68 70 WREG32(drv->scratch_reg, seq); 69 71 } ··· 86 84 u32 seq = 0; 87 85 88 86 if (likely(rdev->wb.enabled || !drv->scratch_reg)) { 89 - seq = le32_to_cpu(*drv->cpu_addr); 87 + if (drv->cpu_addr) { 88 + seq = le32_to_cpu(*drv->cpu_addr); 89 + } else { 90 + seq = lower_32_bits(atomic64_read(&drv->last_seq)); 91 + } 90 92 } else { 91 93 seq = RREG32(drv->scratch_reg); 92 94 }
+4 -2
drivers/gpu/drm/radeon/radeon_gart.c
··· 1197 1197 int radeon_vm_bo_rmv(struct radeon_device *rdev, 1198 1198 struct radeon_bo_va *bo_va) 1199 1199 { 1200 - int r; 1200 + int r = 0; 1201 1201 1202 1202 mutex_lock(&rdev->vm_manager.lock); 1203 1203 mutex_lock(&bo_va->vm->mutex); 1204 - r = radeon_vm_bo_update_pte(rdev, bo_va->vm, bo_va->bo, NULL); 1204 + if (bo_va->soffset) { 1205 + r = radeon_vm_bo_update_pte(rdev, bo_va->vm, bo_va->bo, NULL); 1206 + } 1205 1207 mutex_unlock(&rdev->vm_manager.lock); 1206 1208 list_del(&bo_va->vm_list); 1207 1209 mutex_unlock(&bo_va->vm->mutex);
+7
drivers/gpu/drm/radeon/radeon_ring.c
··· 402 402 return -ENOMEM; 403 403 /* Align requested size with padding so unlock_commit can 404 404 * pad safely */ 405 + radeon_ring_free_size(rdev, ring); 406 + if (ring->ring_free_dw == (ring->ring_size / 4)) { 407 + /* This is an empty ring update lockup info to avoid 408 + * false positive. 409 + */ 410 + radeon_ring_lockup_update(ring); 411 + } 405 412 ndw = (ndw + ring->align_mask) & ~ring->align_mask; 406 413 while (ndw > (ring->ring_free_dw - 1)) { 407 414 radeon_ring_free_size(rdev, ring);
+31 -17
drivers/gpu/drm/radeon/radeon_uvd.c
··· 159 159 if (!r) { 160 160 radeon_bo_kunmap(rdev->uvd.vcpu_bo); 161 161 radeon_bo_unpin(rdev->uvd.vcpu_bo); 162 + rdev->uvd.cpu_addr = NULL; 163 + if (!radeon_bo_pin(rdev->uvd.vcpu_bo, RADEON_GEM_DOMAIN_CPU, NULL)) { 164 + radeon_bo_kmap(rdev->uvd.vcpu_bo, &rdev->uvd.cpu_addr); 165 + } 162 166 radeon_bo_unreserve(rdev->uvd.vcpu_bo); 167 + 168 + if (rdev->uvd.cpu_addr) { 169 + radeon_fence_driver_start_ring(rdev, R600_RING_TYPE_UVD_INDEX); 170 + } else { 171 + rdev->fence_drv[R600_RING_TYPE_UVD_INDEX].cpu_addr = NULL; 172 + } 163 173 } 164 174 return r; 165 175 } ··· 187 177 dev_err(rdev->dev, "(%d) failed to reserve UVD bo\n", r); 188 178 return r; 189 179 } 180 + 181 + /* Have been pin in cpu unmap unpin */ 182 + radeon_bo_kunmap(rdev->uvd.vcpu_bo); 183 + radeon_bo_unpin(rdev->uvd.vcpu_bo); 190 184 191 185 r = radeon_bo_pin(rdev->uvd.vcpu_bo, RADEON_GEM_DOMAIN_VRAM, 192 186 &rdev->uvd.gpu_addr); ··· 627 613 } 628 614 629 615 /* stitch together an UVD create msg */ 630 - msg[0] = 0x00000de4; 631 - msg[1] = 0x00000000; 632 - msg[2] = handle; 633 - msg[3] = 0x00000000; 634 - msg[4] = 0x00000000; 635 - msg[5] = 0x00000000; 636 - msg[6] = 0x00000000; 637 - msg[7] = 0x00000780; 638 - msg[8] = 0x00000440; 639 - msg[9] = 0x00000000; 640 - msg[10] = 0x01b37000; 616 + msg[0] = cpu_to_le32(0x00000de4); 617 + msg[1] = cpu_to_le32(0x00000000); 618 + msg[2] = cpu_to_le32(handle); 619 + msg[3] = cpu_to_le32(0x00000000); 620 + msg[4] = cpu_to_le32(0x00000000); 621 + msg[5] = cpu_to_le32(0x00000000); 622 + msg[6] = cpu_to_le32(0x00000000); 623 + msg[7] = cpu_to_le32(0x00000780); 624 + msg[8] = cpu_to_le32(0x00000440); 625 + msg[9] = cpu_to_le32(0x00000000); 626 + msg[10] = cpu_to_le32(0x01b37000); 641 627 for (i = 11; i < 1024; ++i) 642 - msg[i] = 0x0; 628 + msg[i] = cpu_to_le32(0x0); 643 629 644 630 radeon_bo_kunmap(bo); 645 631 radeon_bo_unreserve(bo); ··· 673 659 } 674 660 675 661 /* stitch together an UVD destroy msg */ 676 - msg[0] = 0x00000de4; 677 - msg[1] = 0x00000002; 
678 - msg[2] = handle; 679 - msg[3] = 0x00000000; 662 + msg[0] = cpu_to_le32(0x00000de4); 663 + msg[1] = cpu_to_le32(0x00000002); 664 + msg[2] = cpu_to_le32(handle); 665 + msg[3] = cpu_to_le32(0x00000000); 680 666 for (i = 4; i < 1024; ++i) 681 - msg[i] = 0x0; 667 + msg[i] = cpu_to_le32(0x0); 682 668 683 669 radeon_bo_kunmap(bo); 684 670 radeon_bo_unreserve(bo);
+1 -1
drivers/irqchip/irq-gic.c
··· 705 705 static int __cpuinit gic_secondary_init(struct notifier_block *nfb, 706 706 unsigned long action, void *hcpu) 707 707 { 708 - if (action == CPU_STARTING) 708 + if (action == CPU_STARTING || action == CPU_STARTING_FROZEN) 709 709 gic_cpu_init(&gic_data[0]); 710 710 return NOTIFY_OK; 711 711 }
+9 -3
drivers/media/Kconfig
··· 136 136 137 137 # This Kconfig option is used by both PCI and USB drivers 138 138 config TTPCI_EEPROM 139 - tristate 140 - depends on I2C 141 - default n 139 + tristate 140 + depends on I2C 141 + default n 142 142 143 143 source "drivers/media/dvb-core/Kconfig" 144 144 ··· 188 188 the needed demodulators). 189 189 190 190 If unsure say Y. 191 + 192 + config MEDIA_ATTACH 193 + bool 194 + depends on MEDIA_ANALOG_TV_SUPPORT || MEDIA_DIGITAL_TV_SUPPORT || MEDIA_RADIO_SUPPORT 195 + depends on MODULES 196 + default MODULES 191 197 192 198 source "drivers/media/i2c/Kconfig" 193 199 source "drivers/media/tuners/Kconfig"
+1 -1
drivers/media/i2c/s5c73m3/s5c73m3-core.c
··· 956 956 957 957 if (fie->pad != OIF_SOURCE_PAD) 958 958 return -EINVAL; 959 - if (fie->index > ARRAY_SIZE(s5c73m3_intervals)) 959 + if (fie->index >= ARRAY_SIZE(s5c73m3_intervals)) 960 960 return -EINVAL; 961 961 962 962 mutex_lock(&state->lock);
+3 -4
drivers/media/pci/cx88/cx88-alsa.c
··· 615 615 int changed = 0; 616 616 u32 old; 617 617 618 - if (core->board.audio_chip == V4L2_IDENT_WM8775) 618 + if (core->sd_wm8775) 619 619 snd_cx88_wm8775_volume_put(kcontrol, value); 620 620 621 621 left = value->value.integer.value[0] & 0x3f; ··· 682 682 vol ^= bit; 683 683 cx_swrite(SHADOW_AUD_VOL_CTL, AUD_VOL_CTL, vol); 684 684 /* Pass mute onto any WM8775 */ 685 - if ((core->board.audio_chip == V4L2_IDENT_WM8775) && 686 - ((1<<6) == bit)) 685 + if (core->sd_wm8775 && ((1<<6) == bit)) 687 686 wm8775_s_ctrl(core, V4L2_CID_AUDIO_MUTE, 0 != (vol & bit)); 688 687 ret = 1; 689 688 } ··· 902 903 goto error; 903 904 904 905 /* If there's a wm8775 then add a Line-In ALC switch */ 905 - if (core->board.audio_chip == V4L2_IDENT_WM8775) 906 + if (core->sd_wm8775) 906 907 snd_ctl_add(card, snd_ctl_new1(&snd_cx88_alc_switch, chip)); 907 908 908 909 strcpy (card->driver, "CX88x");
+3 -5
drivers/media/pci/cx88/cx88-video.c
··· 385 385 /* The wm8775 module has the "2" route hardwired into 386 386 the initialization. Some boards may use different 387 387 routes for different inputs. HVR-1300 surely does */ 388 - if (core->board.audio_chip && 389 - core->board.audio_chip == V4L2_IDENT_WM8775) { 388 + if (core->sd_wm8775) { 390 389 call_all(core, audio, s_routing, 391 390 INPUT(input).audioroute, 0, 0); 392 391 } ··· 770 771 cx_write(MO_GP1_IO, core->board.radio.gpio1); 771 772 cx_write(MO_GP2_IO, core->board.radio.gpio2); 772 773 if (core->board.radio.audioroute) { 773 - if(core->board.audio_chip && 774 - core->board.audio_chip == V4L2_IDENT_WM8775) { 774 + if (core->sd_wm8775) { 775 775 call_all(core, audio, s_routing, 776 776 core->board.radio.audioroute, 0, 0); 777 777 } ··· 957 959 u32 value,mask; 958 960 959 961 /* Pass changes onto any WM8775 */ 960 - if (core->board.audio_chip == V4L2_IDENT_WM8775) { 962 + if (core->sd_wm8775) { 961 963 switch (ctrl->id) { 962 964 case V4L2_CID_AUDIO_MUTE: 963 965 wm8775_s_ctrl(core, ctrl->id, ctrl->val);
+9
drivers/media/platform/coda.c
··· 576 576 return v4l2_m2m_dqbuf(file, ctx->m2m_ctx, buf); 577 577 } 578 578 579 + static int vidioc_create_bufs(struct file *file, void *priv, 580 + struct v4l2_create_buffers *create) 581 + { 582 + struct coda_ctx *ctx = fh_to_ctx(priv); 583 + 584 + return v4l2_m2m_create_bufs(file, ctx->m2m_ctx, create); 585 + } 586 + 579 587 static int vidioc_streamon(struct file *file, void *priv, 580 588 enum v4l2_buf_type type) 581 589 { ··· 618 610 619 611 .vidioc_qbuf = vidioc_qbuf, 620 612 .vidioc_dqbuf = vidioc_dqbuf, 613 + .vidioc_create_bufs = vidioc_create_bufs, 621 614 622 615 .vidioc_streamon = vidioc_streamon, 623 616 .vidioc_streamoff = vidioc_streamoff,
+15
drivers/media/platform/davinci/vpbe_display.c
··· 916 916 other video window */ 917 917 918 918 layer->pix_fmt = *pixfmt; 919 + if (pixfmt->pixelformat == V4L2_PIX_FMT_NV12) { 920 + struct vpbe_layer *otherlayer; 921 + 922 + otherlayer = _vpbe_display_get_other_win_layer(disp_dev, layer); 923 + /* if other layer is available, only 924 + * claim it, do not configure it 925 + */ 926 + ret = osd_device->ops.request_layer(osd_device, 927 + otherlayer->layer_info.id); 928 + if (ret < 0) { 929 + v4l2_err(&vpbe_dev->v4l2_dev, 930 + "Display Manager failed to allocate layer\n"); 931 + return -EBUSY; 932 + } 933 + } 919 934 920 935 /* Get osd layer config */ 921 936 osd_device->ops.get_layer_config(osd_device,
+1 -2
drivers/media/platform/davinci/vpfe_capture.c
··· 1837 1837 if (NULL == ccdc_cfg) { 1838 1838 v4l2_err(pdev->dev.driver, 1839 1839 "Memory allocation failed for ccdc_cfg\n"); 1840 - goto probe_free_lock; 1840 + goto probe_free_dev_mem; 1841 1841 } 1842 1842 1843 1843 mutex_lock(&ccdc_lock); ··· 1991 1991 free_irq(vpfe_dev->ccdc_irq0, vpfe_dev); 1992 1992 probe_free_ccdc_cfg_mem: 1993 1993 kfree(ccdc_cfg); 1994 - probe_free_lock: 1995 1994 mutex_unlock(&ccdc_lock); 1996 1995 probe_free_dev_mem: 1997 1996 kfree(vpfe_dev);
+1 -1
drivers/media/platform/exynos4-is/fimc-is-regs.c
··· 174 174 HIC_CAPTURE_STILL, HIC_CAPTURE_VIDEO, 175 175 }; 176 176 177 - if (WARN_ON(is->config_index > ARRAY_SIZE(cmd))) 177 + if (WARN_ON(is->config_index >= ARRAY_SIZE(cmd))) 178 178 return -EINVAL; 179 179 180 180 mcuctl_write(cmd[is->config_index], is, MCUCTL_REG_ISSR(0));
+18 -30
drivers/media/platform/exynos4-is/fimc-is.c
··· 48 48 [ISS_CLK_LITE0] = "lite0", 49 49 [ISS_CLK_LITE1] = "lite1", 50 50 [ISS_CLK_MPLL] = "mpll", 51 - [ISS_CLK_SYSREG] = "sysreg", 52 51 [ISS_CLK_ISP] = "isp", 53 52 [ISS_CLK_DRC] = "drc", 54 53 [ISS_CLK_FD] = "fd", ··· 70 71 for (i = 0; i < ISS_CLKS_MAX; i++) { 71 72 if (IS_ERR(is->clocks[i])) 72 73 continue; 73 - clk_unprepare(is->clocks[i]); 74 74 clk_put(is->clocks[i]); 75 75 is->clocks[i] = ERR_PTR(-EINVAL); 76 76 } ··· 88 90 ret = PTR_ERR(is->clocks[i]); 89 91 goto err; 90 92 } 91 - ret = clk_prepare(is->clocks[i]); 92 - if (ret < 0) { 93 - clk_put(is->clocks[i]); 94 - is->clocks[i] = ERR_PTR(-EINVAL); 95 - goto err; 96 - } 97 93 } 98 94 99 95 return 0; ··· 95 103 fimc_is_put_clocks(is); 96 104 dev_err(&is->pdev->dev, "failed to get clock: %s\n", 97 105 fimc_is_clocks[i]); 98 - return -ENXIO; 106 + return ret; 99 107 } 100 108 101 109 static int fimc_is_setup_clocks(struct fimc_is *is) ··· 136 144 for (i = 0; i < ISS_GATE_CLKS_MAX; i++) { 137 145 if (IS_ERR(is->clocks[i])) 138 146 continue; 139 - ret = clk_enable(is->clocks[i]); 147 + ret = clk_prepare_enable(is->clocks[i]); 140 148 if (ret < 0) { 141 149 dev_err(&is->pdev->dev, "clock %s enable failed\n", 142 150 fimc_is_clocks[i]); ··· 155 163 156 164 for (i = 0; i < ISS_GATE_CLKS_MAX; i++) { 157 165 if (!IS_ERR(is->clocks[i])) { 158 - clk_disable(is->clocks[i]); 166 + clk_disable_unprepare(is->clocks[i]); 159 167 pr_debug("disabled clock: %s\n", fimc_is_clocks[i]); 160 168 } 161 169 } ··· 317 325 { 318 326 struct device *dev = &is->pdev->dev; 319 327 int ret; 328 + 329 + if (is->fw.f_w == NULL) { 330 + dev_err(dev, "firmware is not loaded\n"); 331 + return -EINVAL; 332 + } 320 333 321 334 memcpy(is->memory.vaddr, is->fw.f_w->data, is->fw.f_w->size); 322 335 wmb(); ··· 834 837 goto err_clk; 835 838 } 836 839 pm_runtime_enable(dev); 837 - /* 838 - * Enable only the ISP power domain, keep FIMC-IS clocks off until 839 - * the whole clock tree is configured. 
The ISP power domain needs 840 - * be active in order to acces any CMU_ISP clock registers. 841 - */ 840 + 842 841 ret = pm_runtime_get_sync(dev); 843 842 if (ret < 0) 844 843 goto err_irq; 845 - 846 - ret = fimc_is_setup_clocks(is); 847 - pm_runtime_put_sync(dev); 848 - 849 - if (ret < 0) 850 - goto err_irq; 851 - 852 - is->clk_init = true; 853 844 854 845 is->alloc_ctx = vb2_dma_contig_init_ctx(dev); 855 846 if (IS_ERR(is->alloc_ctx)) { ··· 860 875 if (ret < 0) 861 876 goto err_dfs; 862 877 878 + pm_runtime_put_sync(dev); 879 + 863 880 dev_dbg(dev, "FIMC-IS registered successfully\n"); 864 881 return 0; 865 882 ··· 881 894 static int fimc_is_runtime_resume(struct device *dev) 882 895 { 883 896 struct fimc_is *is = dev_get_drvdata(dev); 897 + int ret; 884 898 885 - if (!is->clk_init) 886 - return 0; 899 + ret = fimc_is_setup_clocks(is); 900 + if (ret) 901 + return ret; 887 902 888 903 return fimc_is_enable_clocks(is); 889 904 } ··· 894 905 { 895 906 struct fimc_is *is = dev_get_drvdata(dev); 896 907 897 - if (is->clk_init) 898 - fimc_is_disable_clocks(is); 899 - 908 + fimc_is_disable_clocks(is); 900 909 return 0; 901 910 } 902 911 ··· 928 941 vb2_dma_contig_cleanup_ctx(is->alloc_ctx); 929 942 fimc_is_put_clocks(is); 930 943 fimc_is_debugfs_remove(is); 931 - release_firmware(is->fw.f_w); 944 + if (is->fw.f_w) 945 + release_firmware(is->fw.f_w); 932 946 fimc_is_free_cpu_memory(is); 933 947 934 948 return 0;
-2
drivers/media/platform/exynos4-is/fimc-is.h
··· 73 73 ISS_CLK_LITE0, 74 74 ISS_CLK_LITE1, 75 75 ISS_CLK_MPLL, 76 - ISS_CLK_SYSREG, 77 76 ISS_CLK_ISP, 78 77 ISS_CLK_DRC, 79 78 ISS_CLK_FD, ··· 264 265 spinlock_t slock; 265 266 266 267 struct clk *clocks[ISS_CLKS_MAX]; 267 - bool clk_init; 268 268 void __iomem *regs; 269 269 void __iomem *pmu_regs; 270 270 int irq;
+2 -2
drivers/media/platform/exynos4-is/fimc-isp.c
··· 138 138 return 0; 139 139 } 140 140 141 - mf->colorspace = V4L2_COLORSPACE_JPEG; 141 + mf->colorspace = V4L2_COLORSPACE_SRGB; 142 142 143 143 mutex_lock(&isp->subdev_lock); 144 144 __is_get_frame_size(is, &cur_fmt); ··· 194 194 v4l2_dbg(1, debug, sd, "%s: pad%d: code: 0x%x, %dx%d\n", 195 195 __func__, fmt->pad, mf->code, mf->width, mf->height); 196 196 197 - mf->colorspace = V4L2_COLORSPACE_JPEG; 197 + mf->colorspace = V4L2_COLORSPACE_SRGB; 198 198 199 199 mutex_lock(&isp->subdev_lock); 200 200 __isp_subdev_try_format(isp, fmt);
+1 -1
drivers/media/platform/exynos4-is/mipi-csis.c
··· 746 746 node = v4l2_of_get_next_endpoint(node, NULL); 747 747 if (!node) { 748 748 dev_err(&pdev->dev, "No port node at %s\n", 749 - node->full_name); 749 + pdev->dev.of_node->full_name); 750 750 return -EINVAL; 751 751 } 752 752 /* Get port node and validate MIPI-CSI channel id. */
+1 -1
drivers/media/platform/s3c-camif/camif-core.h
··· 229 229 unsigned int state; 230 230 u16 fmt_flags; 231 231 u8 id; 232 - u8 rotation; 232 + u16 rotation; 233 233 u8 hflip; 234 234 u8 vflip; 235 235 unsigned int offset;
+1 -1
drivers/media/platform/s5p-jpeg/Makefile
··· 1 1 s5p-jpeg-objs := jpeg-core.o 2 - obj-$(CONFIG_VIDEO_SAMSUNG_S5P_JPEG) := s5p-jpeg.o 2 + obj-$(CONFIG_VIDEO_SAMSUNG_S5P_JPEG) += s5p-jpeg.o
+1 -1
drivers/media/platform/s5p-mfc/Makefile
··· 1 - obj-$(CONFIG_VIDEO_SAMSUNG_S5P_MFC) := s5p-mfc.o 1 + obj-$(CONFIG_VIDEO_SAMSUNG_S5P_MFC) += s5p-mfc.o 2 2 s5p-mfc-y += s5p_mfc.o s5p_mfc_intr.o 3 3 s5p-mfc-y += s5p_mfc_dec.o s5p_mfc_enc.o 4 4 s5p-mfc-y += s5p_mfc_ctrl.o s5p_mfc_pm.o
+3 -5
drivers/media/platform/s5p-mfc/s5p_mfc.c
··· 397 397 leave_handle_frame: 398 398 spin_unlock_irqrestore(&dev->irqlock, flags); 399 399 if ((ctx->src_queue_cnt == 0 && ctx->state != MFCINST_FINISHING) 400 - || ctx->dst_queue_cnt < ctx->dpb_count) 400 + || ctx->dst_queue_cnt < ctx->pb_count) 401 401 clear_work_bit(ctx); 402 402 s5p_mfc_hw_call(dev->mfc_ops, clear_int_flags, dev); 403 403 wake_up_ctx(ctx, reason, err); ··· 473 473 474 474 s5p_mfc_hw_call(dev->mfc_ops, dec_calc_dpb_size, ctx); 475 475 476 - ctx->dpb_count = s5p_mfc_hw_call(dev->mfc_ops, get_dpb_count, 476 + ctx->pb_count = s5p_mfc_hw_call(dev->mfc_ops, get_dpb_count, 477 477 dev); 478 478 ctx->mv_count = s5p_mfc_hw_call(dev->mfc_ops, get_mv_count, 479 479 dev); ··· 562 562 struct s5p_mfc_dev *dev = ctx->dev; 563 563 struct s5p_mfc_buf *mb_entry; 564 564 565 - mfc_debug(2, "Stream completed"); 565 + mfc_debug(2, "Stream completed\n"); 566 566 567 567 s5p_mfc_clear_int_flags(dev); 568 568 ctx->int_type = reason; ··· 1362 1362 .port_num = MFC_NUM_PORTS, 1363 1363 .buf_size = &buf_size_v5, 1364 1364 .buf_align = &mfc_buf_align_v5, 1365 - .mclk_name = "sclk_mfc", 1366 1365 .fw_name = "s5p-mfc.fw", 1367 1366 }; 1368 1367 ··· 1388 1389 .port_num = MFC_NUM_PORTS_V6, 1389 1390 .buf_size = &buf_size_v6, 1390 1391 .buf_align = &mfc_buf_align_v6, 1391 - .mclk_name = "aclk_333", 1392 1392 .fw_name = "s5p-mfc-v6.fw", 1393 1393 }; 1394 1394
+3 -3
drivers/media/platform/s5p-mfc/s5p_mfc_common.h
··· 138 138 MFCINST_INIT = 100, 139 139 MFCINST_GOT_INST, 140 140 MFCINST_HEAD_PARSED, 141 + MFCINST_HEAD_PRODUCED, 141 142 MFCINST_BUFS_SET, 142 143 MFCINST_RUNNING, 143 144 MFCINST_FINISHING, ··· 232 231 unsigned int port_num; 233 232 struct s5p_mfc_buf_size *buf_size; 234 233 struct s5p_mfc_buf_align *buf_align; 235 - char *mclk_name; 236 234 char *fw_name; 237 235 }; 238 236 ··· 438 438 u32 rc_framerate_num; 439 439 u32 rc_framerate_denom; 440 440 441 - union { 441 + struct { 442 442 struct s5p_mfc_h264_enc_params h264; 443 443 struct s5p_mfc_mpeg4_enc_params mpeg4; 444 444 } codec; ··· 602 602 int after_packed_pb; 603 603 int sei_fp_parse; 604 604 605 - int dpb_count; 605 + int pb_count; 606 606 int total_dpb_count; 607 607 int mv_count; 608 608 /* Buffers */
+1 -1
drivers/media/platform/s5p-mfc/s5p_mfc_ctrl.c
··· 38 38 dev->fw_virt_addr = dma_alloc_coherent(dev->mem_dev_l, dev->fw_size, 39 39 &dev->bank1, GFP_KERNEL); 40 40 41 - if (IS_ERR(dev->fw_virt_addr)) { 41 + if (IS_ERR_OR_NULL(dev->fw_virt_addr)) { 42 42 dev->fw_virt_addr = NULL; 43 43 mfc_err("Allocating bitprocessor buffer failed\n"); 44 44 return -ENOMEM;
+2 -2
drivers/media/platform/s5p-mfc/s5p_mfc_debug.h
··· 30 30 #define mfc_debug(level, fmt, args...) 31 31 #endif 32 32 33 - #define mfc_debug_enter() mfc_debug(5, "enter") 34 - #define mfc_debug_leave() mfc_debug(5, "leave") 33 + #define mfc_debug_enter() mfc_debug(5, "enter\n") 34 + #define mfc_debug_leave() mfc_debug(5, "leave\n") 35 35 36 36 #define mfc_err(fmt, args...) \ 37 37 do { \
+10 -10
drivers/media/platform/s5p-mfc/s5p_mfc_dec.c
··· 210 210 /* Context is to decode a frame */ 211 211 if (ctx->src_queue_cnt >= 1 && 212 212 ctx->state == MFCINST_RUNNING && 213 - ctx->dst_queue_cnt >= ctx->dpb_count) 213 + ctx->dst_queue_cnt >= ctx->pb_count) 214 214 return 1; 215 215 /* Context is to return last frame */ 216 216 if (ctx->state == MFCINST_FINISHING && 217 - ctx->dst_queue_cnt >= ctx->dpb_count) 217 + ctx->dst_queue_cnt >= ctx->pb_count) 218 218 return 1; 219 219 /* Context is to set buffers */ 220 220 if (ctx->src_queue_cnt >= 1 && ··· 224 224 /* Resolution change */ 225 225 if ((ctx->state == MFCINST_RES_CHANGE_INIT || 226 226 ctx->state == MFCINST_RES_CHANGE_FLUSH) && 227 - ctx->dst_queue_cnt >= ctx->dpb_count) 227 + ctx->dst_queue_cnt >= ctx->pb_count) 228 228 return 1; 229 229 if (ctx->state == MFCINST_RES_CHANGE_END && 230 230 ctx->src_queue_cnt >= 1) ··· 537 537 mfc_err("vb2_reqbufs on capture failed\n"); 538 538 return ret; 539 539 } 540 - if (reqbufs->count < ctx->dpb_count) { 540 + if (reqbufs->count < ctx->pb_count) { 541 541 mfc_err("Not enough buffers allocated\n"); 542 542 reqbufs->count = 0; 543 543 s5p_mfc_clock_on(); ··· 751 751 case V4L2_CID_MIN_BUFFERS_FOR_CAPTURE: 752 752 if (ctx->state >= MFCINST_HEAD_PARSED && 753 753 ctx->state < MFCINST_ABORT) { 754 - ctrl->val = ctx->dpb_count; 754 + ctrl->val = ctx->pb_count; 755 755 break; 756 756 } else if (ctx->state != MFCINST_INIT) { 757 757 v4l2_err(&dev->v4l2_dev, "Decoding not initialised\n"); ··· 763 763 S5P_MFC_R2H_CMD_SEQ_DONE_RET, 0); 764 764 if (ctx->state >= MFCINST_HEAD_PARSED && 765 765 ctx->state < MFCINST_ABORT) { 766 - ctrl->val = ctx->dpb_count; 766 + ctrl->val = ctx->pb_count; 767 767 } else { 768 768 v4l2_err(&dev->v4l2_dev, "Decoding not initialised\n"); 769 769 return -EINVAL; ··· 924 924 /* Output plane count is 2 - one for Y and one for CbCr */ 925 925 *plane_count = 2; 926 926 /* Setup buffer count */ 927 - if (*buf_count < ctx->dpb_count) 928 - *buf_count = ctx->dpb_count; 929 - if (*buf_count > 
ctx->dpb_count + MFC_MAX_EXTRA_DPB) 930 - *buf_count = ctx->dpb_count + MFC_MAX_EXTRA_DPB; 927 + if (*buf_count < ctx->pb_count) 928 + *buf_count = ctx->pb_count; 929 + if (*buf_count > ctx->pb_count + MFC_MAX_EXTRA_DPB) 930 + *buf_count = ctx->pb_count + MFC_MAX_EXTRA_DPB; 931 931 if (*buf_count > MFC_MAX_BUFFERS) 932 932 *buf_count = MFC_MAX_BUFFERS; 933 933 } else {
+56 -26
drivers/media/platform/s5p-mfc/s5p_mfc_enc.c
··· 592 592 return 1; 593 593 /* context is ready to encode a frame */ 594 594 if ((ctx->state == MFCINST_RUNNING || 595 - ctx->state == MFCINST_HEAD_PARSED) && 595 + ctx->state == MFCINST_HEAD_PRODUCED) && 596 596 ctx->src_queue_cnt >= 1 && ctx->dst_queue_cnt >= 1) 597 597 return 1; 598 598 /* context is ready to encode remaining frames */ ··· 649 649 struct s5p_mfc_enc_params *p = &ctx->enc_params; 650 650 struct s5p_mfc_buf *dst_mb; 651 651 unsigned long flags; 652 + unsigned int enc_pb_count; 652 653 653 654 if (p->seq_hdr_mode == V4L2_MPEG_VIDEO_HEADER_MODE_SEPARATE) { 654 655 spin_lock_irqsave(&dev->irqlock, flags); ··· 662 661 vb2_buffer_done(dst_mb->b, VB2_BUF_STATE_DONE); 663 662 spin_unlock_irqrestore(&dev->irqlock, flags); 664 663 } 665 - if (IS_MFCV6(dev)) { 666 - ctx->state = MFCINST_HEAD_PARSED; /* for INIT_BUFFER cmd */ 667 - } else { 664 + 665 + if (!IS_MFCV6(dev)) { 668 666 ctx->state = MFCINST_RUNNING; 669 667 if (s5p_mfc_ctx_ready(ctx)) 670 668 set_work_bit_irqsave(ctx); 671 669 s5p_mfc_hw_call(dev->mfc_ops, try_run, dev); 672 - } 673 - 674 - if (IS_MFCV6(dev)) 675 - ctx->dpb_count = s5p_mfc_hw_call(dev->mfc_ops, 670 + } else { 671 + enc_pb_count = s5p_mfc_hw_call(dev->mfc_ops, 676 672 get_enc_dpb_count, dev); 673 + if (ctx->pb_count < enc_pb_count) 674 + ctx->pb_count = enc_pb_count; 675 + ctx->state = MFCINST_HEAD_PRODUCED; 676 + } 677 677 678 678 return 0; 679 679 } ··· 719 717 720 718 slice_type = s5p_mfc_hw_call(dev->mfc_ops, get_enc_slice_type, dev); 721 719 strm_size = s5p_mfc_hw_call(dev->mfc_ops, get_enc_strm_size, dev); 722 - mfc_debug(2, "Encoded slice type: %d", slice_type); 723 - mfc_debug(2, "Encoded stream size: %d", strm_size); 724 - mfc_debug(2, "Display order: %d", 720 + mfc_debug(2, "Encoded slice type: %d\n", slice_type); 721 + mfc_debug(2, "Encoded stream size: %d\n", strm_size); 722 + mfc_debug(2, "Display order: %d\n", 725 723 mfc_read(dev, S5P_FIMV_ENC_SI_PIC_CNT)); 726 724 spin_lock_irqsave(&dev->irqlock, flags); 727 725 
if (slice_type >= 0) { ··· 1057 1055 } 1058 1056 ctx->capture_state = QUEUE_BUFS_REQUESTED; 1059 1057 1060 - if (!IS_MFCV6(dev)) { 1061 - ret = s5p_mfc_hw_call(ctx->dev->mfc_ops, 1062 - alloc_codec_buffers, ctx); 1063 - if (ret) { 1064 - mfc_err("Failed to allocate encoding buffers\n"); 1065 - reqbufs->count = 0; 1066 - ret = vb2_reqbufs(&ctx->vq_dst, reqbufs); 1067 - return -ENOMEM; 1068 - } 1058 + ret = s5p_mfc_hw_call(ctx->dev->mfc_ops, 1059 + alloc_codec_buffers, ctx); 1060 + if (ret) { 1061 + mfc_err("Failed to allocate encoding buffers\n"); 1062 + reqbufs->count = 0; 1063 + ret = vb2_reqbufs(&ctx->vq_dst, reqbufs); 1064 + return -ENOMEM; 1069 1065 } 1070 1066 } else if (reqbufs->type == V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE) { 1071 1067 if (ctx->output_state != QUEUE_FREE) { ··· 1071 1071 ctx->output_state); 1072 1072 return -EINVAL; 1073 1073 } 1074 + 1075 + if (IS_MFCV6(dev)) { 1076 + /* Check for min encoder buffers */ 1077 + if (ctx->pb_count && 1078 + (reqbufs->count < ctx->pb_count)) { 1079 + reqbufs->count = ctx->pb_count; 1080 + mfc_debug(2, "Minimum %d output buffers needed\n", 1081 + ctx->pb_count); 1082 + } else { 1083 + ctx->pb_count = reqbufs->count; 1084 + } 1085 + } 1086 + 1074 1087 ret = vb2_reqbufs(&ctx->vq_src, reqbufs); 1075 1088 if (ret != 0) { 1076 1089 mfc_err("error in vb2_reqbufs() for E(S)\n"); ··· 1546 1533 1547 1534 spin_lock_irqsave(&dev->irqlock, flags); 1548 1535 if (list_empty(&ctx->src_queue)) { 1549 - mfc_debug(2, "EOS: empty src queue, entering finishing state"); 1536 + mfc_debug(2, "EOS: empty src queue, entering finishing state\n"); 1550 1537 ctx->state = MFCINST_FINISHING; 1551 1538 if (s5p_mfc_ctx_ready(ctx)) 1552 1539 set_work_bit_irqsave(ctx); 1553 1540 spin_unlock_irqrestore(&dev->irqlock, flags); 1554 1541 s5p_mfc_hw_call(dev->mfc_ops, try_run, dev); 1555 1542 } else { 1556 - mfc_debug(2, "EOS: marking last buffer of stream"); 1543 + mfc_debug(2, "EOS: marking last buffer of stream\n"); 1557 1544 buf = 
list_entry(ctx->src_queue.prev, 1558 1545 struct s5p_mfc_buf, list); 1559 1546 if (buf->flags & MFC_BUF_FLAG_USED) ··· 1622 1609 mfc_err("failed to get plane cookie\n"); 1623 1610 return -EINVAL; 1624 1611 } 1625 - mfc_debug(2, "index: %d, plane[%d] cookie: 0x%08zx", 1626 - vb->v4l2_buf.index, i, 1627 - vb2_dma_contig_plane_dma_addr(vb, i)); 1612 + mfc_debug(2, "index: %d, plane[%d] cookie: 0x%08zx\n", 1613 + vb->v4l2_buf.index, i, 1614 + vb2_dma_contig_plane_dma_addr(vb, i)); 1628 1615 } 1629 1616 return 0; 1630 1617 } ··· 1773 1760 struct s5p_mfc_ctx *ctx = fh_to_ctx(q->drv_priv); 1774 1761 struct s5p_mfc_dev *dev = ctx->dev; 1775 1762 1776 - v4l2_ctrl_handler_setup(&ctx->ctrl_handler); 1763 + if (IS_MFCV6(dev) && (q->type == V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE)) { 1764 + 1765 + if ((ctx->state == MFCINST_GOT_INST) && 1766 + (dev->curr_ctx == ctx->num) && dev->hw_lock) { 1767 + s5p_mfc_wait_for_done_ctx(ctx, 1768 + S5P_MFC_R2H_CMD_SEQ_DONE_RET, 1769 + 0); 1770 + } 1771 + 1772 + if (ctx->src_bufs_cnt < ctx->pb_count) { 1773 + mfc_err("Need minimum %d OUTPUT buffers\n", 1774 + ctx->pb_count); 1775 + return -EINVAL; 1776 + } 1777 + } 1778 + 1777 1779 /* If context is ready then dev = work->data;schedule it to run */ 1778 1780 if (s5p_mfc_ctx_ready(ctx)) 1779 1781 set_work_bit_irqsave(ctx); 1780 1782 s5p_mfc_hw_call(dev->mfc_ops, try_run, dev); 1783 + 1781 1784 return 0; 1782 1785 } 1783 1786 ··· 1949 1920 if (controls[i].is_volatile && ctx->ctrls[i]) 1950 1921 ctx->ctrls[i]->flags |= V4L2_CTRL_FLAG_VOLATILE; 1951 1922 } 1923 + v4l2_ctrl_handler_setup(&ctx->ctrl_handler); 1952 1924 return 0; 1953 1925 } 1954 1926
+2 -2
drivers/media/platform/s5p-mfc/s5p_mfc_opr_v5.c
··· 1275 1275 spin_unlock_irqrestore(&dev->irqlock, flags); 1276 1276 dev->curr_ctx = ctx->num; 1277 1277 s5p_mfc_clean_ctx_int_flags(ctx); 1278 - mfc_debug(2, "encoding buffer with index=%d state=%d", 1279 - src_mb ? src_mb->b->v4l2_buf.index : -1, ctx->state); 1278 + mfc_debug(2, "encoding buffer with index=%d state=%d\n", 1279 + src_mb ? src_mb->b->v4l2_buf.index : -1, ctx->state); 1280 1280 s5p_mfc_encode_one_frame_v5(ctx); 1281 1281 return 0; 1282 1282 }
+15 -38
drivers/media/platform/s5p-mfc/s5p_mfc_opr_v6.c
··· 62 62 /* NOP */ 63 63 } 64 64 65 - static int s5p_mfc_get_dec_status_v6(struct s5p_mfc_dev *dev) 66 - { 67 - /* NOP */ 68 - return -1; 69 - } 70 - 71 65 /* Allocate codec buffers */ 72 66 static int s5p_mfc_alloc_codec_buffers_v6(struct s5p_mfc_ctx *ctx) 73 67 { ··· 161 167 S5P_FIMV_SCRATCH_BUFFER_ALIGN_V6); 162 168 ctx->bank1.size = 163 169 ctx->scratch_buf_size + ctx->tmv_buffer_size + 164 - (ctx->dpb_count * (ctx->luma_dpb_size + 170 + (ctx->pb_count * (ctx->luma_dpb_size + 165 171 ctx->chroma_dpb_size + ctx->me_buffer_size)); 166 172 ctx->bank2.size = 0; 167 173 break; ··· 175 181 S5P_FIMV_SCRATCH_BUFFER_ALIGN_V6); 176 182 ctx->bank1.size = 177 183 ctx->scratch_buf_size + ctx->tmv_buffer_size + 178 - (ctx->dpb_count * (ctx->luma_dpb_size + 184 + (ctx->pb_count * (ctx->luma_dpb_size + 179 185 ctx->chroma_dpb_size + ctx->me_buffer_size)); 180 186 ctx->bank2.size = 0; 181 187 break; ··· 192 198 } 193 199 BUG_ON(ctx->bank1.dma & ((1 << MFC_BANK1_ALIGN_ORDER) - 1)); 194 200 } 195 - 196 201 return 0; 197 202 } 198 203 ··· 442 449 WRITEL(addr, S5P_FIMV_E_STREAM_BUFFER_ADDR_V6); /* 16B align */ 443 450 WRITEL(size, S5P_FIMV_E_STREAM_BUFFER_SIZE_V6); 444 451 445 - mfc_debug(2, "stream buf addr: 0x%08lx, size: 0x%d", 446 - addr, size); 452 + mfc_debug(2, "stream buf addr: 0x%08lx, size: 0x%d\n", 453 + addr, size); 447 454 448 455 return 0; 449 456 } ··· 456 463 WRITEL(y_addr, S5P_FIMV_E_SOURCE_LUMA_ADDR_V6); /* 256B align */ 457 464 WRITEL(c_addr, S5P_FIMV_E_SOURCE_CHROMA_ADDR_V6); 458 465 459 - mfc_debug(2, "enc src y buf addr: 0x%08lx", y_addr); 460 - mfc_debug(2, "enc src c buf addr: 0x%08lx", c_addr); 466 + mfc_debug(2, "enc src y buf addr: 0x%08lx\n", y_addr); 467 + mfc_debug(2, "enc src c buf addr: 0x%08lx\n", c_addr); 461 468 } 462 469 463 470 static void s5p_mfc_get_enc_frame_buffer_v6(struct s5p_mfc_ctx *ctx, ··· 472 479 enc_recon_y_addr = READL(S5P_FIMV_E_RECON_LUMA_DPB_ADDR_V6); 473 480 enc_recon_c_addr = READL(S5P_FIMV_E_RECON_CHROMA_DPB_ADDR_V6); 474 481 
475 - mfc_debug(2, "recon y addr: 0x%08lx", enc_recon_y_addr); 476 - mfc_debug(2, "recon c addr: 0x%08lx", enc_recon_c_addr); 482 + mfc_debug(2, "recon y addr: 0x%08lx\n", enc_recon_y_addr); 483 + mfc_debug(2, "recon c addr: 0x%08lx\n", enc_recon_c_addr); 477 484 } 478 485 479 486 /* Set encoding ref & codec buffer */ ··· 490 497 491 498 mfc_debug(2, "Buf1: %p (%d)\n", (void *)buf_addr1, buf_size1); 492 499 493 - for (i = 0; i < ctx->dpb_count; i++) { 500 + for (i = 0; i < ctx->pb_count; i++) { 494 501 WRITEL(buf_addr1, S5P_FIMV_E_LUMA_DPB_V6 + (4 * i)); 495 502 buf_addr1 += ctx->luma_dpb_size; 496 503 WRITEL(buf_addr1, S5P_FIMV_E_CHROMA_DPB_V6 + (4 * i)); ··· 513 520 buf_size1 -= ctx->tmv_buffer_size; 514 521 515 522 mfc_debug(2, "Buf1: %u, buf_size1: %d (ref frames %d)\n", 516 - buf_addr1, buf_size1, ctx->dpb_count); 523 + buf_addr1, buf_size1, ctx->pb_count); 517 524 if (buf_size1 < 0) { 518 525 mfc_debug(2, "Not enough memory has been allocated.\n"); 519 526 return -ENOMEM; ··· 1424 1431 src_y_addr = vb2_dma_contig_plane_dma_addr(src_mb->b, 0); 1425 1432 src_c_addr = vb2_dma_contig_plane_dma_addr(src_mb->b, 1); 1426 1433 1427 - mfc_debug(2, "enc src y addr: 0x%08lx", src_y_addr); 1428 - mfc_debug(2, "enc src c addr: 0x%08lx", src_c_addr); 1434 + mfc_debug(2, "enc src y addr: 0x%08lx\n", src_y_addr); 1435 + mfc_debug(2, "enc src c addr: 0x%08lx\n", src_c_addr); 1429 1436 1430 1437 s5p_mfc_set_enc_frame_buffer_v6(ctx, src_y_addr, src_c_addr); 1431 1438 ··· 1515 1522 struct s5p_mfc_dev *dev = ctx->dev; 1516 1523 int ret; 1517 1524 1518 - ret = s5p_mfc_alloc_codec_buffers_v6(ctx); 1519 - if (ret) { 1520 - mfc_err("Failed to allocate encoding buffers.\n"); 1521 - return -ENOMEM; 1522 - } 1523 - 1524 - /* Header was generated now starting processing 1525 - * First set the reference frame buffers 1526 - */ 1527 - if (ctx->capture_state != QUEUE_BUFS_REQUESTED) { 1528 - mfc_err("It seems that destionation buffers were not\n" 1529 - "requested.MFC requires that header 
should be generated\n" 1530 - "before allocating codec buffer.\n"); 1531 - return -EAGAIN; 1532 - } 1533 - 1534 1525 dev->curr_ctx = ctx->num; 1535 1526 s5p_mfc_clean_ctx_int_flags(ctx); 1536 1527 ret = s5p_mfc_set_enc_ref_buffer_v6(ctx); ··· 1559 1582 mfc_debug(1, "Seting new context to %p\n", ctx); 1560 1583 /* Got context to run in ctx */ 1561 1584 mfc_debug(1, "ctx->dst_queue_cnt=%d ctx->dpb_count=%d ctx->src_queue_cnt=%d\n", 1562 - ctx->dst_queue_cnt, ctx->dpb_count, ctx->src_queue_cnt); 1585 + ctx->dst_queue_cnt, ctx->pb_count, ctx->src_queue_cnt); 1563 1586 mfc_debug(1, "ctx->state=%d\n", ctx->state); 1564 1587 /* Last frame has already been sent to MFC 1565 1588 * Now obtaining frames from MFC buffer */ ··· 1624 1647 case MFCINST_GOT_INST: 1625 1648 s5p_mfc_run_init_enc(ctx); 1626 1649 break; 1627 - case MFCINST_HEAD_PARSED: /* Only for MFC6.x */ 1650 + case MFCINST_HEAD_PRODUCED: 1628 1651 ret = s5p_mfc_run_init_enc_buffers(ctx); 1629 1652 break; 1630 1653 default: ··· 1707 1730 return mfc_read(dev, S5P_FIMV_D_DISPLAY_STATUS_V6); 1708 1731 } 1709 1732 1710 - static int s5p_mfc_get_decoded_status_v6(struct s5p_mfc_dev *dev) 1733 + static int s5p_mfc_get_dec_status_v6(struct s5p_mfc_dev *dev) 1711 1734 { 1712 1735 return mfc_read(dev, S5P_FIMV_D_DECODED_STATUS_V6); 1713 1736 }
+2 -21
drivers/media/platform/s5p-mfc/s5p_mfc_pm.c
··· 50 50 goto err_p_ip_clk; 51 51 } 52 52 53 - pm->clock = clk_get(&dev->plat_dev->dev, dev->variant->mclk_name); 54 - if (IS_ERR(pm->clock)) { 55 - mfc_err("Failed to get MFC clock\n"); 56 - ret = PTR_ERR(pm->clock); 57 - goto err_g_ip_clk_2; 58 - } 59 - 60 - ret = clk_prepare(pm->clock); 61 - if (ret) { 62 - mfc_err("Failed to prepare MFC clock\n"); 63 - goto err_p_ip_clk_2; 64 - } 65 - 66 53 atomic_set(&pm->power, 0); 67 54 #ifdef CONFIG_PM_RUNTIME 68 55 pm->device = &dev->plat_dev->dev; ··· 59 72 atomic_set(&clk_ref, 0); 60 73 #endif 61 74 return 0; 62 - err_p_ip_clk_2: 63 - clk_put(pm->clock); 64 - err_g_ip_clk_2: 65 - clk_unprepare(pm->clock_gate); 66 75 err_p_ip_clk: 67 76 clk_put(pm->clock_gate); 68 77 err_g_ip_clk: ··· 69 86 { 70 87 clk_unprepare(pm->clock_gate); 71 88 clk_put(pm->clock_gate); 72 - clk_unprepare(pm->clock); 73 - clk_put(pm->clock); 74 89 #ifdef CONFIG_PM_RUNTIME 75 90 pm_runtime_disable(pm->device); 76 91 #endif ··· 79 98 int ret; 80 99 #ifdef CLK_DEBUG 81 100 atomic_inc(&clk_ref); 82 - mfc_debug(3, "+ %d", atomic_read(&clk_ref)); 101 + mfc_debug(3, "+ %d\n", atomic_read(&clk_ref)); 83 102 #endif 84 103 ret = clk_enable(pm->clock_gate); 85 104 return ret; ··· 89 108 { 90 109 #ifdef CLK_DEBUG 91 110 atomic_dec(&clk_ref); 92 - mfc_debug(3, "- %d", atomic_read(&clk_ref)); 111 + mfc_debug(3, "- %d\n", atomic_read(&clk_ref)); 93 112 #endif 94 113 clk_disable(pm->clock_gate); 95 114 }
+6 -9
drivers/media/platform/sh_veu.c
··· 905 905 if (ftmp.fmt.pix.width != pix->width || 906 906 ftmp.fmt.pix.height != pix->height) 907 907 return -EINVAL; 908 - size = pix->bytesperline ? pix->bytesperline * pix->height : 909 - pix->width * pix->height * fmt->depth >> 3; 908 + size = pix->bytesperline ? pix->bytesperline * pix->height * fmt->depth / fmt->ydepth : 909 + pix->width * pix->height * fmt->depth / fmt->ydepth; 910 910 } else { 911 911 vfmt = sh_veu_get_vfmt(veu, vq->type); 912 - size = vfmt->bytesperline * vfmt->frame.height; 912 + size = vfmt->bytesperline * vfmt->frame.height * vfmt->fmt->depth / vfmt->fmt->ydepth; 913 913 } 914 914 915 915 if (count < 2) ··· 1033 1033 1034 1034 dev_dbg(veu->dev, "Releasing instance %p\n", veu_file); 1035 1035 1036 - pm_runtime_put(veu->dev); 1037 - 1038 1036 if (veu_file == veu->capture) { 1039 1037 veu->capture = NULL; 1040 1038 vb2_queue_release(v4l2_m2m_get_vq(veu->m2m_ctx, V4L2_BUF_TYPE_VIDEO_CAPTURE)); ··· 1047 1049 v4l2_m2m_ctx_release(veu->m2m_ctx); 1048 1050 veu->m2m_ctx = NULL; 1049 1051 } 1052 + 1053 + pm_runtime_put(veu->dev); 1050 1054 1051 1055 kfree(veu_file); 1052 1056 ··· 1138 1138 1139 1139 veu->xaction++; 1140 1140 1141 - if (!veu->aborting) 1142 - return IRQ_WAKE_THREAD; 1143 - 1144 - return IRQ_HANDLED; 1141 + return IRQ_WAKE_THREAD; 1145 1142 } 1146 1143 1147 1144 static int sh_veu_probe(struct platform_device *pdev)
+2 -2
drivers/media/platform/soc_camera/soc_camera.c
··· 643 643 644 644 if (ici->ops->init_videobuf2) 645 645 vb2_queue_release(&icd->vb2_vidq); 646 - ici->ops->remove(icd); 647 - 648 646 __soc_camera_power_off(icd); 647 + 648 + ici->ops->remove(icd); 649 649 } 650 650 651 651 if (icd->streamer == file)
+1
drivers/media/radio/Kconfig
··· 22 22 tristate "Silicon Laboratories Si476x I2C FM Radio" 23 23 depends on I2C && VIDEO_V4L2 24 24 depends on MFD_SI476X_CORE 25 + depends on SND_SOC 25 26 select SND_SOC_SI476X 26 27 ---help--- 27 28 Choose Y here if you have this FM radio chip.
+1 -1
drivers/media/radio/radio-si476x.c
··· 44 44 45 45 #define FREQ_MUL (10000000 / 625) 46 46 47 - #define SI476X_PHDIV_STATUS_LINK_LOCKED(status) (0b10000000 & (status)) 47 + #define SI476X_PHDIV_STATUS_LINK_LOCKED(status) (0x80 & (status)) 48 48 49 49 #define DRIVER_NAME "si476x-radio" 50 50 #define DRIVER_CARD "SI476x AM/FM Receiver"
-20
drivers/media/tuners/Kconfig
··· 1 - config MEDIA_ATTACH 2 - bool "Load and attach frontend and tuner driver modules as needed" 3 - depends on MEDIA_ANALOG_TV_SUPPORT || MEDIA_DIGITAL_TV_SUPPORT || MEDIA_RADIO_SUPPORT 4 - depends on MODULES 5 - default y if !EXPERT 6 - help 7 - Remove the static dependency of DVB card drivers on all 8 - frontend modules for all possible card variants. Instead, 9 - allow the card drivers to only load the frontend modules 10 - they require. 11 - 12 - Also, tuner module will automatically load a tuner driver 13 - when needed, for analog mode. 14 - 15 - This saves several KBytes of memory. 16 - 17 - Note: You will need module-init-tools v3.2 or later for this feature. 18 - 19 - If unsure say Y. 20 - 21 1 # Analog TV tuners, auto-loaded via tuner.ko 22 2 config MEDIA_TUNER 23 3 tristate
+3 -3
drivers/media/usb/dvb-usb-v2/rtl28xxu.c
··· 376 376 struct rtl28xxu_req req_mxl5007t = {0xd9c0, CMD_I2C_RD, 1, buf}; 377 377 struct rtl28xxu_req req_e4000 = {0x02c8, CMD_I2C_RD, 1, buf}; 378 378 struct rtl28xxu_req req_tda18272 = {0x00c0, CMD_I2C_RD, 2, buf}; 379 - struct rtl28xxu_req req_r820t = {0x0034, CMD_I2C_RD, 5, buf}; 379 + struct rtl28xxu_req req_r820t = {0x0034, CMD_I2C_RD, 1, buf}; 380 380 381 381 dev_dbg(&d->udev->dev, "%s:\n", __func__); 382 382 ··· 481 481 goto found; 482 482 } 483 483 484 - /* check R820T by reading tuner stats at I2C addr 0x1a */ 484 + /* check R820T ID register; reg=00 val=69 */ 485 485 ret = rtl28xxu_ctrl_msg(d, &req_r820t); 486 - if (ret == 0) { 486 + if (ret == 0 && buf[0] == 0x69) { 487 487 priv->tuner = TUNER_RTL2832_R820T; 488 488 priv->tuner_name = "R820T"; 489 489 goto found;
+7
drivers/media/usb/gspca/sonixb.c
··· 1159 1159 regs[0x01] = 0x44; /* Select 24 Mhz clock */ 1160 1160 regs[0x12] = 0x02; /* Set hstart to 2 */ 1161 1161 } 1162 + break; 1163 + case SENSOR_PAS202: 1164 + /* For some unknown reason we need to increase hstart by 1 on 1165 + the sn9c103, otherwise we get wrong colors (bayer shift). */ 1166 + if (sd->bridge == BRIDGE_103) 1167 + regs[0x12] += 1; 1168 + break; 1162 1169 } 1163 1170 /* Disable compression when the raw bayer format has been selected */ 1164 1171 if (cam->cam_mode[gspca_dev->curr_mode].priv & MODE_RAW)
+1 -1
drivers/media/usb/pwc/pwc.h
··· 226 226 struct list_head queued_bufs; 227 227 spinlock_t queued_bufs_lock; /* Protects queued_bufs */ 228 228 229 - /* Note if taking both locks v4l2_lock must always be locked first! */ 229 + /* If taking both locks vb_queue_lock must always be locked first! */ 230 230 struct mutex v4l2_lock; /* Protects everything else */ 231 231 struct mutex vb_queue_lock; /* Protects vb_queue and capt_file */ 232 232
+2
drivers/media/v4l2-core/v4l2-ctrls.c
··· 1835 1835 { 1836 1836 if (V4L2_CTRL_ID2CLASS(ctrl->id) == V4L2_CTRL_CLASS_FM_TX) 1837 1837 return true; 1838 + if (V4L2_CTRL_ID2CLASS(ctrl->id) == V4L2_CTRL_CLASS_FM_RX) 1839 + return true; 1838 1840 switch (ctrl->id) { 1839 1841 case V4L2_CID_AUDIO_MUTE: 1840 1842 case V4L2_CID_AUDIO_VOLUME:
+21 -26
drivers/media/v4l2-core/v4l2-ioctl.c
··· 243 243 const struct v4l2_vbi_format *vbi; 244 244 const struct v4l2_sliced_vbi_format *sliced; 245 245 const struct v4l2_window *win; 246 - const struct v4l2_clip *clip; 247 246 unsigned i; 248 247 249 248 pr_cont("type=%s", prt_names(p->type, v4l2_type_names)); ··· 252 253 pix = &p->fmt.pix; 253 254 pr_cont(", width=%u, height=%u, " 254 255 "pixelformat=%c%c%c%c, field=%s, " 255 - "bytesperline=%u sizeimage=%u, colorspace=%d\n", 256 + "bytesperline=%u, sizeimage=%u, colorspace=%d\n", 256 257 pix->width, pix->height, 257 258 (pix->pixelformat & 0xff), 258 259 (pix->pixelformat >> 8) & 0xff, ··· 283 284 case V4L2_BUF_TYPE_VIDEO_OVERLAY: 284 285 case V4L2_BUF_TYPE_VIDEO_OUTPUT_OVERLAY: 285 286 win = &p->fmt.win; 286 - pr_cont(", wxh=%dx%d, x,y=%d,%d, field=%s, " 287 - "chromakey=0x%08x, bitmap=%p, " 288 - "global_alpha=0x%02x\n", 289 - win->w.width, win->w.height, 290 - win->w.left, win->w.top, 287 + /* Note: we can't print the clip list here since the clips 288 + * pointer is a userspace pointer, not a kernelspace 289 + * pointer. 
*/ 290 + pr_cont(", wxh=%dx%d, x,y=%d,%d, field=%s, chromakey=0x%08x, clipcount=%u, clips=%p, bitmap=%p, global_alpha=0x%02x\n", 291 + win->w.width, win->w.height, win->w.left, win->w.top, 291 292 prt_names(win->field, v4l2_field_names), 292 - win->chromakey, win->bitmap, win->global_alpha); 293 - clip = win->clips; 294 - for (i = 0; i < win->clipcount; i++) { 295 - printk(KERN_DEBUG "clip %u: wxh=%dx%d, x,y=%d,%d\n", 296 - i, clip->c.width, clip->c.height, 297 - clip->c.left, clip->c.top); 298 - clip = clip->next; 299 - } 293 + win->chromakey, win->clipcount, win->clips, 294 + win->bitmap, win->global_alpha); 300 295 break; 301 296 case V4L2_BUF_TYPE_VBI_CAPTURE: 302 297 case V4L2_BUF_TYPE_VBI_OUTPUT: ··· 325 332 326 333 pr_cont("capability=0x%x, flags=0x%x, base=0x%p, width=%u, " 327 334 "height=%u, pixelformat=%c%c%c%c, " 328 - "bytesperline=%u sizeimage=%u, colorspace=%d\n", 335 + "bytesperline=%u, sizeimage=%u, colorspace=%d\n", 329 336 p->capability, p->flags, p->base, 330 337 p->fmt.width, p->fmt.height, 331 338 (p->fmt.pixelformat & 0xff), ··· 346 353 const struct v4l2_modulator *p = arg; 347 354 348 355 if (write_only) 349 - pr_cont("index=%u, txsubchans=0x%x", p->index, p->txsubchans); 356 + pr_cont("index=%u, txsubchans=0x%x\n", p->index, p->txsubchans); 350 357 else 351 358 pr_cont("index=%u, name=%.*s, capability=0x%x, " 352 359 "rangelow=%u, rangehigh=%u, txsubchans=0x%x\n", ··· 438 445 for (i = 0; i < p->length; ++i) { 439 446 plane = &p->m.planes[i]; 440 447 printk(KERN_DEBUG 441 - "plane %d: bytesused=%d, data_offset=0x%08x " 448 + "plane %d: bytesused=%d, data_offset=0x%08x, " 442 449 "offset/userptr=0x%lx, length=%d\n", 443 450 i, plane->bytesused, plane->data_offset, 444 451 plane->m.userptr, plane->length); 445 452 } 446 453 } else { 447 - pr_cont("bytesused=%d, offset/userptr=0x%lx, length=%d\n", 454 + pr_cont(", bytesused=%d, offset/userptr=0x%lx, length=%d\n", 448 455 p->bytesused, p->m.userptr, p->length); 449 456 } 450 457 ··· 497 504 
c->capability, c->outputmode, 498 505 c->timeperframe.numerator, c->timeperframe.denominator, 499 506 c->extendedmode, c->writebuffers); 507 + } else { 508 + pr_cont("\n"); 500 509 } 501 510 } 502 511 ··· 729 734 p->type); 730 735 switch (p->type) { 731 736 case V4L2_FRMSIZE_TYPE_DISCRETE: 732 - pr_cont(" wxh=%ux%u\n", 737 + pr_cont(", wxh=%ux%u\n", 733 738 p->discrete.width, p->discrete.height); 734 739 break; 735 740 case V4L2_FRMSIZE_TYPE_STEPWISE: 736 - pr_cont(" min=%ux%u, max=%ux%u, step=%ux%u\n", 741 + pr_cont(", min=%ux%u, max=%ux%u, step=%ux%u\n", 737 742 p->stepwise.min_width, p->stepwise.min_height, 738 743 p->stepwise.step_width, p->stepwise.step_height, 739 744 p->stepwise.max_width, p->stepwise.max_height); ··· 759 764 p->width, p->height, p->type); 760 765 switch (p->type) { 761 766 case V4L2_FRMIVAL_TYPE_DISCRETE: 762 - pr_cont(" fps=%d/%d\n", 767 + pr_cont(", fps=%d/%d\n", 763 768 p->discrete.numerator, 764 769 p->discrete.denominator); 765 770 break; 766 771 case V4L2_FRMIVAL_TYPE_STEPWISE: 767 - pr_cont(" min=%d/%d, max=%d/%d, step=%d/%d\n", 772 + pr_cont(", min=%d/%d, max=%d/%d, step=%d/%d\n", 768 773 p->stepwise.min.numerator, 769 774 p->stepwise.min.denominator, 770 775 p->stepwise.max.numerator, ··· 802 807 pr_cont("value64=%lld, ", c->value64); 803 808 else 804 809 pr_cont("value=%d, ", c->value); 805 - pr_cont("flags=0x%x, minimum=%d, maximum=%d, step=%d," 806 - " default_value=%d\n", 810 + pr_cont("flags=0x%x, minimum=%d, maximum=%d, step=%d, " 811 + "default_value=%d\n", 807 812 c->flags, c->minimum, c->maximum, 808 813 c->step, c->default_value); 809 814 break; ··· 840 845 const struct v4l2_frequency_band *p = arg; 841 846 842 847 pr_cont("tuner=%u, type=%u, index=%u, capability=0x%x, " 843 - "rangelow=%u, rangehigh=%u, modulation=0x%x\n", 848 + "rangelow=%u, rangehigh=%u, modulation=0x%x\n", 844 849 p->tuner, p->type, p->index, 845 850 p->capability, p->rangelow, 846 851 p->rangehigh, p->modulation);
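The v4l2-ioctl hunk above drops the loop that walked `win->clips` and instead prints the pointer value, since `clips` is a userspace pointer that the kernel must not dereference in this context. A small userspace sketch of the same idea, logging an untrusted pointer by value with `%p` rather than walking it (the function and names are illustrative, not the V4L2 API):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Illustrative stand-in for the debug-print change above: a pointer that
 * may not be dereferenceable here (modelling a __user pointer) can still
 * be logged by value with %p. The list behind it is never walked. */
static int describe_cliplist(char *buf, size_t len,
                             unsigned clipcount, const void *clips)
{
    /* log the count and the address only; never follow the pointer */
    return snprintf(buf, len, "clipcount=%u, clips=%p", clipcount, clips);
}
```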
+29 -10
drivers/media/v4l2-core/v4l2-mem2mem.c
··· 205 205 static void v4l2_m2m_try_schedule(struct v4l2_m2m_ctx *m2m_ctx) 206 206 { 207 207 struct v4l2_m2m_dev *m2m_dev; 208 - unsigned long flags_job, flags; 208 + unsigned long flags_job, flags_out, flags_cap; 209 209 210 210 m2m_dev = m2m_ctx->m2m_dev; 211 211 dprintk("Trying to schedule a job for m2m_ctx: %p\n", m2m_ctx); ··· 223 223 return; 224 224 } 225 225 226 - spin_lock_irqsave(&m2m_ctx->out_q_ctx.rdy_spinlock, flags); 226 + spin_lock_irqsave(&m2m_ctx->out_q_ctx.rdy_spinlock, flags_out); 227 227 if (list_empty(&m2m_ctx->out_q_ctx.rdy_queue)) { 228 - spin_unlock_irqrestore(&m2m_ctx->out_q_ctx.rdy_spinlock, flags); 228 + spin_unlock_irqrestore(&m2m_ctx->out_q_ctx.rdy_spinlock, 229 + flags_out); 229 230 spin_unlock_irqrestore(&m2m_dev->job_spinlock, flags_job); 230 231 dprintk("No input buffers available\n"); 231 232 return; 232 233 } 233 - spin_lock_irqsave(&m2m_ctx->cap_q_ctx.rdy_spinlock, flags); 234 + spin_lock_irqsave(&m2m_ctx->cap_q_ctx.rdy_spinlock, flags_cap); 234 235 if (list_empty(&m2m_ctx->cap_q_ctx.rdy_queue)) { 235 - spin_unlock_irqrestore(&m2m_ctx->cap_q_ctx.rdy_spinlock, flags); 236 - spin_unlock_irqrestore(&m2m_ctx->out_q_ctx.rdy_spinlock, flags); 236 + spin_unlock_irqrestore(&m2m_ctx->cap_q_ctx.rdy_spinlock, 237 + flags_cap); 238 + spin_unlock_irqrestore(&m2m_ctx->out_q_ctx.rdy_spinlock, 239 + flags_out); 237 240 spin_unlock_irqrestore(&m2m_dev->job_spinlock, flags_job); 238 241 dprintk("No output buffers available\n"); 239 242 return; 240 243 } 241 - spin_unlock_irqrestore(&m2m_ctx->cap_q_ctx.rdy_spinlock, flags); 242 - spin_unlock_irqrestore(&m2m_ctx->out_q_ctx.rdy_spinlock, flags); 244 + spin_unlock_irqrestore(&m2m_ctx->cap_q_ctx.rdy_spinlock, flags_cap); 245 + spin_unlock_irqrestore(&m2m_ctx->out_q_ctx.rdy_spinlock, flags_out); 243 246 244 247 if (m2m_dev->m2m_ops->job_ready 245 248 && (!m2m_dev->m2m_ops->job_ready(m2m_ctx->priv))) { ··· 375 372 EXPORT_SYMBOL_GPL(v4l2_m2m_dqbuf); 376 373 377 374 /** 375 + * v4l2_m2m_create_bufs() - 
create a source or destination buffer, depending 376 + * on the type 377 + */ 378 + int v4l2_m2m_create_bufs(struct file *file, struct v4l2_m2m_ctx *m2m_ctx, 379 + struct v4l2_create_buffers *create) 380 + { 381 + struct vb2_queue *vq; 382 + 383 + vq = v4l2_m2m_get_vq(m2m_ctx, create->format.type); 384 + return vb2_create_bufs(vq, create); 385 + } 386 + EXPORT_SYMBOL_GPL(v4l2_m2m_create_bufs); 387 + 388 + /** 378 389 * v4l2_m2m_expbuf() - export a source or destination buffer, depending on 379 390 * the type 380 391 */ ··· 503 486 if (m2m_ctx->m2m_dev->m2m_ops->unlock) 504 487 m2m_ctx->m2m_dev->m2m_ops->unlock(m2m_ctx->priv); 505 488 506 - poll_wait(file, &src_q->done_wq, wait); 507 - poll_wait(file, &dst_q->done_wq, wait); 489 + if (list_empty(&src_q->done_list)) 490 + poll_wait(file, &src_q->done_wq, wait); 491 + if (list_empty(&dst_q->done_list)) 492 + poll_wait(file, &dst_q->done_wq, wait); 508 493 509 494 if (m2m_ctx->m2m_dev->m2m_ops->lock) 510 495 m2m_ctx->m2m_dev->m2m_ops->lock(m2m_ctx->priv);
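The v4l2-mem2mem hunk above splits one `flags` variable into `flags_out` and `flags_cap` because the two `spin_lock_irqsave()` calls nest: the second save overwrites the first, so the outer restore replays the wrong state. A userspace sketch of that failure mode, using a toy model of irq save/restore (the model is illustrative, not the kernel API):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of local IRQ state: save records the current state and
 * disables; restore replays the saved state. */
static bool irqs_on = true;

static void irq_save(unsigned long *flags) { *flags = irqs_on; irqs_on = false; }
static void irq_restore(unsigned long flags) { irqs_on = flags; }

/* Buggy pattern: one flags variable reused for two nested saves. */
static bool nested_shared_flags(void)
{
    unsigned long flags;
    irqs_on = true;
    irq_save(&flags);      /* saves "on", disables */
    irq_save(&flags);      /* overwrites: now saves "off" */
    irq_restore(flags);    /* restores "off" */
    irq_restore(flags);    /* restores "off" again: IRQs stay disabled */
    return irqs_on;
}

/* Fixed pattern: distinct variables, as in flags_out/flags_cap above. */
static bool nested_distinct_flags(void)
{
    unsigned long outer, inner;
    irqs_on = true;
    irq_save(&outer);
    irq_save(&inner);
    irq_restore(inner);
    irq_restore(outer);    /* correctly restores "on" */
    return irqs_on;
}
```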
+2 -1
drivers/media/v4l2-core/videobuf2-core.c
··· 2014 2014 if (list_empty(&q->queued_list)) 2015 2015 return res | POLLERR; 2016 2016 2017 - poll_wait(file, &q->done_wq, wait); 2017 + if (list_empty(&q->done_list)) 2018 + poll_wait(file, &q->done_wq, wait); 2018 2019 2019 2020 /* 2020 2021 * Take first buffer available for dequeuing.
+1 -1
drivers/net/ethernet/brocade/bna/bnad_debugfs.c
··· 244 244 file->f_pos += offset; 245 245 break; 246 246 case 2: 247 - file->f_pos = debug->buffer_len - offset; 247 + file->f_pos = debug->buffer_len + offset; 248 248 break; 249 249 default: 250 250 return -EINVAL;
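This one-character fix (repeated below in bfad, fnic, and lpfc debugfs) follows from lseek semantics: with `SEEK_END` the new position is the file length *plus* the offset, and callers pass a negative offset to seek backwards from the end, so subtracting inverted the direction. A minimal sketch of the corrected arithmetic (names are illustrative, not the debugfs API):

```c
#include <assert.h>

enum { DEMO_SEEK_SET = 0, DEMO_SEEK_CUR = 1, DEMO_SEEK_END = 2 };

/* Simplified llseek over an in-memory buffer of file_len bytes;
 * returns the new position, or -1 for an out-of-range result. */
static long demo_llseek(long cur_pos, long file_len, long offset, int whence)
{
    long pos;

    switch (whence) {
    case DEMO_SEEK_SET: pos = offset; break;
    case DEMO_SEEK_CUR: pos = cur_pos + offset; break;
    case DEMO_SEEK_END: pos = file_len + offset; break; /* add, don't subtract */
    default: return -1;
    }
    return (pos < 0 || pos > file_len) ? -1 : pos;
}
```

With the old subtraction, `SEEK_END` with offset `-10` on a 100-byte buffer would have landed at 110 and been rejected instead of at 90.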
+66
drivers/parisc/iosapic.c
··· 811 811 return pcidev->irq; 812 812 } 813 813 814 + static struct iosapic_info *first_isi = NULL; 815 + 816 + #ifdef CONFIG_64BIT 817 + int iosapic_serial_irq(int num) 818 + { 819 + struct iosapic_info *isi = first_isi; 820 + struct irt_entry *irte = NULL; /* only used if PAT PDC */ 821 + struct vector_info *vi; 822 + int isi_line; /* line used by device */ 823 + 824 + /* lookup IRT entry for isi/slot/pin set */ 825 + irte = &irt_cell[num]; 826 + 827 + DBG_IRT("iosapic_serial_irq(): irte %p %x %x %x %x %x %x %x %x\n", 828 + irte, 829 + irte->entry_type, 830 + irte->entry_length, 831 + irte->polarity_trigger, 832 + irte->src_bus_irq_devno, 833 + irte->src_bus_id, 834 + irte->src_seg_id, 835 + irte->dest_iosapic_intin, 836 + (u32) irte->dest_iosapic_addr); 837 + isi_line = irte->dest_iosapic_intin; 838 + 839 + /* get vector info for this input line */ 840 + vi = isi->isi_vector + isi_line; 841 + DBG_IRT("iosapic_serial_irq: line %d vi 0x%p\n", isi_line, vi); 842 + 843 + /* If this IRQ line has already been setup, skip it */ 844 + if (vi->irte) 845 + goto out; 846 + 847 + vi->irte = irte; 848 + 849 + /* 850 + * Allocate processor IRQ 851 + * 852 + * XXX/FIXME The txn_alloc_irq() code and related code should be 853 + * moved to enable_irq(). That way we only allocate processor IRQ 854 + * bits for devices that actually have drivers claiming them. 855 + * Right now we assign an IRQ to every PCI device present, 856 + * regardless of whether it's used or not. 
857 + */ 858 + vi->txn_irq = txn_alloc_irq(8); 859 + 860 + if (vi->txn_irq < 0) 861 + panic("I/O sapic: couldn't get TXN IRQ\n"); 862 + 863 + /* enable_irq() will use txn_* to program IRdT */ 864 + vi->txn_addr = txn_alloc_addr(vi->txn_irq); 865 + vi->txn_data = txn_alloc_data(vi->txn_irq); 866 + 867 + vi->eoi_addr = isi->addr + IOSAPIC_REG_EOI; 868 + vi->eoi_data = cpu_to_le32(vi->txn_data); 869 + 870 + cpu_claim_irq(vi->txn_irq, &iosapic_interrupt_type, vi); 871 + 872 + out: 873 + 874 + return vi->txn_irq; 875 + } 876 + #endif 877 + 814 878 815 879 /* 816 880 ** squirrel away the I/O Sapic Version ··· 941 877 vip->irqline = (unsigned char) cnt; 942 878 vip->iosapic = isi; 943 879 } 880 + if (!first_isi) 881 + first_isi = isi; 944 882 return isi; 945 883 } 946 884
+1 -1
drivers/scsi/bfa/bfad_debugfs.c
··· 186 186 file->f_pos += offset; 187 187 break; 188 188 case 2: 189 - file->f_pos = debug->buffer_len - offset; 189 + file->f_pos = debug->buffer_len + offset; 190 190 break; 191 191 default: 192 192 return -EINVAL;
+1 -1
drivers/scsi/fnic/fnic_debugfs.c
··· 174 174 pos = file->f_pos + offset; 175 175 break; 176 176 case 2: 177 - pos = fnic_dbg_prt->buffer_len - offset; 177 + pos = fnic_dbg_prt->buffer_len + offset; 178 178 } 179 179 return (pos < 0 || pos > fnic_dbg_prt->buffer_len) ? 180 180 -EINVAL : (file->f_pos = pos);
+1 -1
drivers/scsi/lpfc/lpfc_debugfs.c
··· 1178 1178 pos = file->f_pos + off; 1179 1179 break; 1180 1180 case 2: 1181 - pos = debug->len - off; 1181 + pos = debug->len + off; 1182 1182 } 1183 1183 return (pos < 0 || pos > debug->len) ? -EINVAL : (file->f_pos = pos); 1184 1184 }
+5 -1
drivers/scsi/qla2xxx/tcm_qla2xxx.c
··· 688 688 * For FCP_READ with CHECK_CONDITION status, clear cmd->bufflen 689 689 * for qla_tgt_xmit_response LLD code 690 690 */ 691 + if (se_cmd->se_cmd_flags & SCF_OVERFLOW_BIT) { 692 + se_cmd->se_cmd_flags &= ~SCF_OVERFLOW_BIT; 693 + se_cmd->residual_count = 0; 694 + } 691 695 se_cmd->se_cmd_flags |= SCF_UNDERFLOW_BIT; 692 - se_cmd->residual_count = se_cmd->data_length; 696 + se_cmd->residual_count += se_cmd->data_length; 693 697 694 698 cmd->bufflen = 0; 695 699 }
+1 -1
drivers/staging/media/davinci_vpfe/Kconfig
··· 1 1 config VIDEO_DM365_VPFE 2 2 tristate "DM365 VPFE Media Controller Capture Driver" 3 - depends on VIDEO_V4L2 && ARCH_DAVINCI_DM365 && !VIDEO_VPFE_CAPTURE 3 + depends on VIDEO_V4L2 && ARCH_DAVINCI_DM365 && !VIDEO_DM365_ISIF 4 4 select VIDEOBUF2_DMA_CONTIG 5 5 help 6 6 Support for DM365 VPFE based Media Controller Capture driver.
+4 -2
drivers/staging/media/davinci_vpfe/vpfe_mc_capture.c
··· 639 639 if (ret) 640 640 goto probe_free_dev_mem; 641 641 642 - if (vpfe_initialize_modules(vpfe_dev, pdev)) 642 + ret = vpfe_initialize_modules(vpfe_dev, pdev); 643 + if (ret) 643 644 goto probe_disable_clock; 644 645 645 646 vpfe_dev->media_dev.dev = vpfe_dev->pdev; ··· 664 663 /* set the driver data in platform device */ 665 664 platform_set_drvdata(pdev, vpfe_dev); 666 665 /* register subdevs/entities */ 667 - if (vpfe_register_entities(vpfe_dev)) 666 + ret = vpfe_register_entities(vpfe_dev); 667 + if (ret) 668 668 goto probe_out_v4l2_unregister; 669 669 670 670 ret = vpfe_attach_irq(vpfe_dev);
+1
drivers/staging/media/solo6x10/Kconfig
··· 5 5 select VIDEOBUF2_DMA_SG 6 6 select VIDEOBUF2_DMA_CONTIG 7 7 select SND_PCM 8 + select FONT_8x16 8 9 ---help--- 9 10 This driver supports the Softlogic based MPEG-4 and h.264 codec 10 11 cards.
+14 -13
drivers/target/iscsi/iscsi_target_configfs.c
··· 155 155 struct iscsi_tpg_np *tpg_np_iser = NULL; 156 156 char *endptr; 157 157 u32 op; 158 - int rc; 158 + int rc = 0; 159 159 160 160 op = simple_strtoul(page, &endptr, 0); 161 161 if ((op != 1) && (op != 0)) { ··· 174 174 return -EINVAL; 175 175 176 176 if (op) { 177 - int rc = request_module("ib_isert"); 178 - if (rc != 0) 177 + rc = request_module("ib_isert"); 178 + if (rc != 0) { 179 179 pr_warn("Unable to request_module for ib_isert\n"); 180 + rc = 0; 181 + } 180 182 181 183 tpg_np_iser = iscsit_tpg_add_network_portal(tpg, &np->np_sockaddr, 182 184 np->np_ip, tpg_np, ISCSI_INFINIBAND); 183 - if (!tpg_np_iser || IS_ERR(tpg_np_iser)) 185 + if (IS_ERR(tpg_np_iser)) { 186 + rc = PTR_ERR(tpg_np_iser); 184 187 goto out; 188 + } 185 189 } else { 186 190 tpg_np_iser = iscsit_tpg_locate_child_np(tpg_np, ISCSI_INFINIBAND); 187 - if (!tpg_np_iser) 188 - goto out; 189 - 190 - rc = iscsit_tpg_del_network_portal(tpg, tpg_np_iser); 191 - if (rc < 0) 192 - goto out; 191 + if (tpg_np_iser) { 192 + rc = iscsit_tpg_del_network_portal(tpg, tpg_np_iser); 193 + if (rc < 0) 194 + goto out; 195 + } 193 196 } 194 - 195 - printk("lio_target_np_store_iser() done, op: %d\n", op); 196 197 197 198 iscsit_put_tpg(tpg); 198 199 return count; 199 200 out: 200 201 iscsit_put_tpg(tpg); 201 - return -EINVAL; 202 + return rc; 202 203 } 203 204 204 205 TF_NP_BASE_ATTR(lio_target, iser, S_IRUGO | S_IWUSR);
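The configfs hunk above stops collapsing every failure into `-EINVAL` and instead carries the specific error (including `PTR_ERR()` from the portal helper) through `rc` to the single exit path. A hedged sketch of that error-propagation shape (the function and error values are illustrative, not the iSCSI target API):

```c
#include <assert.h>

/* Store-handler skeleton: -EINVAL only for genuinely invalid input;
 * other failures propagate their own code via rc and the out label. */
static int demo_store(int op, int portal_err)
{
    int rc = 0;

    if (op != 0 && op != 1)
        return -22;              /* -EINVAL: truly invalid input */

    if (op) {
        if (portal_err) {        /* stands in for IS_ERR(tpg_np_iser) */
            rc = portal_err;     /* propagate the real error, not -EINVAL */
            goto out;
        }
    }
    return 1;                    /* success: the byte count */
out:
    return rc;
}
```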
+2 -2
drivers/target/iscsi/iscsi_target_erl0.c
··· 842 842 return 0; 843 843 844 844 sess->time2retain_timer_flags |= ISCSI_TF_STOP; 845 - spin_unlock_bh(&se_tpg->session_lock); 845 + spin_unlock(&se_tpg->session_lock); 846 846 847 847 del_timer_sync(&sess->time2retain_timer); 848 848 849 - spin_lock_bh(&se_tpg->session_lock); 849 + spin_lock(&se_tpg->session_lock); 850 850 sess->time2retain_timer_flags &= ~ISCSI_TF_RUNNING; 851 851 pr_debug("Stopped Time2Retain Timer for SID: %u\n", 852 852 sess->sid);
-3
drivers/target/iscsi/iscsi_target_login.c
··· 984 984 } 985 985 986 986 np->np_transport = t; 987 - printk("Set np->np_transport to %p -> %s\n", np->np_transport, 988 - np->np_transport->name); 989 987 return 0; 990 988 } 991 989 ··· 1000 1002 1001 1003 conn->sock = new_sock; 1002 1004 conn->login_family = np->np_sockaddr.ss_family; 1003 - printk("iSCSI/TCP: Setup conn->sock from new_sock: %p\n", new_sock); 1004 1005 1005 1006 if (np->np_sockaddr.ss_family == AF_INET6) { 1006 1007 memset(&sock_in6, 0, sizeof(struct sockaddr_in6));
-3
drivers/target/iscsi/iscsi_target_nego.c
··· 721 721 722 722 start += strlen(key) + strlen(value) + 2; 723 723 } 724 - 725 - printk("i_buf: %s, s_buf: %s, t_buf: %s\n", i_buf, s_buf, t_buf); 726 - 727 724 /* 728 725 * See 5.3. Login Phase. 729 726 */
+5 -8
drivers/tty/pty.c
··· 244 244 245 245 static int pty_open(struct tty_struct *tty, struct file *filp) 246 246 { 247 - int retval = -ENODEV; 248 - 249 247 if (!tty || !tty->link) 250 - goto out; 248 + return -ENODEV; 251 249 252 - set_bit(TTY_IO_ERROR, &tty->flags); 253 - 254 - retval = -EIO; 255 250 if (test_bit(TTY_OTHER_CLOSED, &tty->flags)) 256 251 goto out; 257 252 if (test_bit(TTY_PTY_LOCK, &tty->link->flags)) ··· 257 262 clear_bit(TTY_IO_ERROR, &tty->flags); 258 263 clear_bit(TTY_OTHER_CLOSED, &tty->link->flags); 259 264 set_bit(TTY_THROTTLED, &tty->flags); 260 - retval = 0; 265 + return 0; 266 + 261 267 out: 262 - return retval; 268 + set_bit(TTY_IO_ERROR, &tty->flags); 269 + return -EIO; 263 270 } 264 271 265 272 static void pty_set_termios(struct tty_struct *tty,
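The pty_open() rewrite above replaces the pessimistic pattern (set `TTY_IO_ERROR` on entry, clear it on success) with a shared failure label that sets the flag exactly once, on the error path. A compact sketch of that control-flow shape (the struct and return values are illustrative, not the tty layer):

```c
#include <assert.h>
#include <stdbool.h>

struct demo_port { bool peer_closed; bool locked; bool io_error; };

static int demo_open(struct demo_port *p)
{
    if (!p)
        return -19;           /* -ENODEV: no port, nothing to flag */

    if (p->peer_closed)
        goto err;
    if (p->locked)
        goto err;

    p->io_error = false;      /* success: the flag was never set */
    return 0;

err:
    p->io_error = true;       /* failure: flag set exactly once, here */
    return -5;                /* -EIO */
}
```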
+9 -1
drivers/tty/serial/8250/8250_gsc.c
··· 30 30 unsigned long address; 31 31 int err; 32 32 33 + #ifdef CONFIG_64BIT 34 + extern int iosapic_serial_irq(int cellnum); 35 + if (!dev->irq && (dev->id.sversion == 0xad)) 36 + dev->irq = iosapic_serial_irq(dev->mod_index-1); 37 + #endif 38 + 33 39 if (!dev->irq) { 34 40 /* We find some unattached serial ports by walking native 35 41 * busses. These should be silently ignored. Otherwise, ··· 57 51 memset(&uart, 0, sizeof(uart)); 58 52 uart.port.iotype = UPIO_MEM; 59 53 /* 7.272727MHz on Lasi. Assumed the same for Dino, Wax and Timi. */ 60 - uart.port.uartclk = 7272727; 54 + uart.port.uartclk = (dev->id.sversion != 0xad) ? 55 + 7272727 : 1843200; 61 56 uart.port.mapbase = address; 62 57 uart.port.membase = ioremap_nocache(address, 16); 63 58 uart.port.irq = dev->irq; ··· 80 73 { HPHW_FIO, HVERSION_REV_ANY_ID, HVERSION_ANY_ID, 0x00075 }, 81 74 { HPHW_FIO, HVERSION_REV_ANY_ID, HVERSION_ANY_ID, 0x0008c }, 82 75 { HPHW_FIO, HVERSION_REV_ANY_ID, HVERSION_ANY_ID, 0x0008d }, 76 + { HPHW_FIO, HVERSION_REV_ANY_ID, HVERSION_ANY_ID, 0x000ad }, 83 77 { 0 } 84 78 }; 85 79
+1 -4
drivers/tty/vt/vt_ioctl.c
··· 289 289 struct vc_data *vc = NULL; 290 290 int ret = 0; 291 291 292 - if (!vc_num) 293 - return 0; 294 - 295 292 console_lock(); 296 293 if (VT_BUSY(vc_num)) 297 294 ret = -EBUSY; 298 - else 295 + else if (vc_num) 299 296 vc = vc_deallocate(vc_num); 300 297 console_unlock(); 301 298
+10 -4
drivers/usb/phy/Kconfig
··· 4 4 menuconfig USB_PHY 5 5 bool "USB Physical Layer drivers" 6 6 help 7 - USB controllers (those which are host, device or DRD) need a 8 - device to handle the physical layer signalling, commonly called 9 - a PHY. 7 + Most USB controllers have the physical layer signalling part 8 + (commonly called a PHY) built in. However, dual-role devices 9 + (a.k.a. USB on-the-go) which support being USB master or slave 10 + with the same connector often use an external PHY. 10 11 11 - The following drivers add support for such PHY devices. 12 + The drivers in this submenu add support for such PHY devices. 13 + They are not needed for standard master-only (or the vast 14 + majority of slave-only) USB interfaces. 15 + 16 + If you're not sure if this applies to you, it probably doesn't; 17 + say N here. 12 18 13 19 if USB_PHY 14 20
+2 -1
drivers/usb/serial/ti_usb_3410_5052.c
··· 172 172 { USB_DEVICE(IBM_VENDOR_ID, IBM_4543_PRODUCT_ID) }, 173 173 { USB_DEVICE(IBM_VENDOR_ID, IBM_454B_PRODUCT_ID) }, 174 174 { USB_DEVICE(IBM_VENDOR_ID, IBM_454C_PRODUCT_ID) }, 175 - { USB_DEVICE(ABBOTT_VENDOR_ID, ABBOTT_PRODUCT_ID) }, 175 + { USB_DEVICE(ABBOTT_VENDOR_ID, ABBOTT_STEREO_PLUG_ID) }, 176 + { USB_DEVICE(ABBOTT_VENDOR_ID, ABBOTT_STRIP_PORT_ID) }, 176 177 { USB_DEVICE(TI_VENDOR_ID, FRI2_PRODUCT_ID) }, 177 178 }; 178 179
+3 -1
drivers/usb/serial/ti_usb_3410_5052.h
··· 52 52 53 53 /* Abbott Diabetics vendor and product ids */ 54 54 #define ABBOTT_VENDOR_ID 0x1a61 55 - #define ABBOTT_PRODUCT_ID 0x3410 55 + #define ABBOTT_STEREO_PLUG_ID 0x3410 56 + #define ABBOTT_PRODUCT_ID ABBOTT_STEREO_PLUG_ID 57 + #define ABBOTT_STRIP_PORT_ID 0x3420 56 58 57 59 /* Commands */ 58 60 #define TI_GET_VERSION 0x01
+6
fs/internal.h
··· 132 132 extern ssize_t __kernel_write(struct file *, const char *, size_t, loff_t *); 133 133 134 134 /* 135 + * splice.c 136 + */ 137 + extern long do_splice_direct(struct file *in, loff_t *ppos, struct file *out, 138 + loff_t *opos, size_t len, unsigned int flags); 139 + 140 + /* 135 141 * pipe.c 136 142 */ 137 143 extern const struct file_operations pipefifo_fops;
+16 -8
fs/read_write.c
··· 1064 1064 struct fd in, out; 1065 1065 struct inode *in_inode, *out_inode; 1066 1066 loff_t pos; 1067 + loff_t out_pos; 1067 1068 ssize_t retval; 1068 1069 int fl; 1069 1070 ··· 1078 1077 if (!(in.file->f_mode & FMODE_READ)) 1079 1078 goto fput_in; 1080 1079 retval = -ESPIPE; 1081 - if (!ppos) 1082 - ppos = &in.file->f_pos; 1083 - else 1080 + if (!ppos) { 1081 + pos = in.file->f_pos; 1082 + } else { 1083 + pos = *ppos; 1084 1084 if (!(in.file->f_mode & FMODE_PREAD)) 1085 1085 goto fput_in; 1086 - retval = rw_verify_area(READ, in.file, ppos, count); 1086 + } 1087 + retval = rw_verify_area(READ, in.file, &pos, count); 1087 1088 if (retval < 0) 1088 1089 goto fput_in; 1089 1090 count = retval; ··· 1102 1099 retval = -EINVAL; 1103 1100 in_inode = file_inode(in.file); 1104 1101 out_inode = file_inode(out.file); 1105 - retval = rw_verify_area(WRITE, out.file, &out.file->f_pos, count); 1102 + out_pos = out.file->f_pos; 1103 + retval = rw_verify_area(WRITE, out.file, &out_pos, count); 1106 1104 if (retval < 0) 1107 1105 goto fput_out; 1108 1106 count = retval; ··· 1111 1107 if (!max) 1112 1108 max = min(in_inode->i_sb->s_maxbytes, out_inode->i_sb->s_maxbytes); 1113 1109 1114 - pos = *ppos; 1115 1110 if (unlikely(pos + count > max)) { 1116 1111 retval = -EOVERFLOW; 1117 1112 if (pos >= max) ··· 1129 1126 if (in.file->f_flags & O_NONBLOCK) 1130 1127 fl = SPLICE_F_NONBLOCK; 1131 1128 #endif 1132 - retval = do_splice_direct(in.file, ppos, out.file, count, fl); 1129 + retval = do_splice_direct(in.file, &pos, out.file, &out_pos, count, fl); 1133 1130 1134 1131 if (retval > 0) { 1135 1132 add_rchar(current, retval); 1136 1133 add_wchar(current, retval); 1137 1134 fsnotify_access(in.file); 1138 1135 fsnotify_modify(out.file); 1136 + out.file->f_pos = out_pos; 1137 + if (ppos) 1138 + *ppos = pos; 1139 + else 1140 + in.file->f_pos = pos; 1139 1141 } 1140 1142 1141 1143 inc_syscr(current); 1142 1144 inc_syscw(current); 1143 - if (*ppos > max) 1145 + if (pos > max) 1144 1146 
retval = -EOVERFLOW; 1145 1147 1146 1148 fput_out:
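The do_sendfile() rework above (and the matching splice.c changes below) moves both positions into locals and writes them back to `f_pos` only after a successful transfer, so a failed or partial call can no longer corrupt the file offsets. The pattern in miniature (types and names are illustrative, not the VFS API):

```c
#include <assert.h>

struct demo_file { long f_pos; };

/* Pretend transfer: fails on a zero count, otherwise advances *pos. */
static long demo_splice(long *pos, long count)
{
    if (count == 0)
        return -22;              /* -EINVAL */
    *pos += count;
    return count;
}

static long demo_sendfile(struct demo_file *in, long count)
{
    long pos = in->f_pos;        /* work on a local copy */
    long ret = demo_splice(&pos, count);

    if (ret > 0)
        in->f_pos = pos;         /* commit only on success */
    return ret;
}
```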
+18 -13
fs/splice.c
··· 1274 1274 { 1275 1275 struct file *file = sd->u.file; 1276 1276 1277 - return do_splice_from(pipe, file, &file->f_pos, sd->total_len, 1277 + return do_splice_from(pipe, file, sd->opos, sd->total_len, 1278 1278 sd->flags); 1279 1279 } 1280 1280 ··· 1294 1294 * 1295 1295 */ 1296 1296 long do_splice_direct(struct file *in, loff_t *ppos, struct file *out, 1297 - size_t len, unsigned int flags) 1297 + loff_t *opos, size_t len, unsigned int flags) 1298 1298 { 1299 1299 struct splice_desc sd = { 1300 1300 .len = len, ··· 1302 1302 .flags = flags, 1303 1303 .pos = *ppos, 1304 1304 .u.file = out, 1305 + .opos = opos, 1305 1306 }; 1306 1307 long ret; 1307 1308 ··· 1326 1325 { 1327 1326 struct pipe_inode_info *ipipe; 1328 1327 struct pipe_inode_info *opipe; 1329 - loff_t offset, *off; 1328 + loff_t offset; 1330 1329 long ret; 1331 1330 1332 1331 ipipe = get_pipe_info(in); ··· 1357 1356 return -EINVAL; 1358 1357 if (copy_from_user(&offset, off_out, sizeof(loff_t))) 1359 1358 return -EFAULT; 1360 - off = &offset; 1361 - } else 1362 - off = &out->f_pos; 1359 + } else { 1360 + offset = out->f_pos; 1361 + } 1363 1362 1364 - ret = do_splice_from(ipipe, out, off, len, flags); 1363 + ret = do_splice_from(ipipe, out, &offset, len, flags); 1365 1364 1366 - if (off_out && copy_to_user(off_out, off, sizeof(loff_t))) 1365 + if (!off_out) 1366 + out->f_pos = offset; 1367 + else if (copy_to_user(off_out, &offset, sizeof(loff_t))) 1367 1368 ret = -EFAULT; 1368 1369 1369 1370 return ret; ··· 1379 1376 return -EINVAL; 1380 1377 if (copy_from_user(&offset, off_in, sizeof(loff_t))) 1381 1378 return -EFAULT; 1382 - off = &offset; 1383 - } else 1384 - off = &in->f_pos; 1379 + } else { 1380 + offset = in->f_pos; 1381 + } 1385 1382 1386 - ret = do_splice_to(in, off, opipe, len, flags); 1383 + ret = do_splice_to(in, &offset, opipe, len, flags); 1387 1384 1388 - if (off_in && copy_to_user(off_in, off, sizeof(loff_t))) 1385 + if (!off_in) 1386 + in->f_pos = offset; 1387 + else if 
(copy_to_user(off_in, &offset, sizeof(loff_t))) 1389 1388 ret = -EFAULT; 1390 1389 1391 1390 return ret;
+1
include/acpi/acpi_bus.h
··· 382 382 int acpi_device_get_power(struct acpi_device *device, int *state); 383 383 int acpi_device_set_power(struct acpi_device *device, int state); 384 384 int acpi_bus_init_power(struct acpi_device *device); 385 + int acpi_device_fix_up_power(struct acpi_device *device); 385 386 int acpi_bus_update_power(acpi_handle handle, int *state_p); 386 387 bool acpi_bus_power_manageable(acpi_handle handle); 387 388
+35
include/linux/context_tracking.h
··· 3 3 4 4 #include <linux/sched.h> 5 5 #include <linux/percpu.h> 6 + #include <linux/vtime.h> 6 7 #include <asm/ptrace.h> 7 8 8 9 struct context_tracking { ··· 20 19 } state; 21 20 }; 22 21 22 + static inline void __guest_enter(void) 23 + { 24 + /* 25 + * This is running in ioctl context so we can avoid 26 + * the call to vtime_account() with its unnecessary idle check. 27 + */ 28 + vtime_account_system(current); 29 + current->flags |= PF_VCPU; 30 + } 31 + 32 + static inline void __guest_exit(void) 33 + { 34 + /* 35 + * This is running in ioctl context so we can avoid 36 + * the call to vtime_account() with its unnecessary idle check. 37 + */ 38 + vtime_account_system(current); 39 + current->flags &= ~PF_VCPU; 40 + } 41 + 23 42 #ifdef CONFIG_CONTEXT_TRACKING 24 43 DECLARE_PER_CPU(struct context_tracking, context_tracking); 25 44 ··· 55 34 56 35 extern void user_enter(void); 57 36 extern void user_exit(void); 37 + 38 + extern void guest_enter(void); 39 + extern void guest_exit(void); 58 40 59 41 static inline enum ctx_state exception_enter(void) 60 42 { ··· 81 57 static inline bool context_tracking_in_user(void) { return false; } 82 58 static inline void user_enter(void) { } 83 59 static inline void user_exit(void) { } 60 + 61 + static inline void guest_enter(void) 62 + { 63 + __guest_enter(); 64 + } 65 + 66 + static inline void guest_exit(void) 67 + { 68 + __guest_exit(); 69 + } 70 + 84 71 static inline enum ctx_state exception_enter(void) { return 0; } 85 72 static inline void exception_exit(enum ctx_state prev_ctx) { } 86 73 static inline void context_tracking_task_switch(struct task_struct *prev,
-2
include/linux/fs.h
··· 2414 2414 struct file *, loff_t *, size_t, unsigned int); 2415 2415 extern ssize_t generic_splice_sendpage(struct pipe_inode_info *pipe, 2416 2416 struct file *out, loff_t *, size_t len, unsigned int flags); 2417 - extern long do_splice_direct(struct file *in, loff_t *ppos, struct file *out, 2418 - size_t len, unsigned int flags); 2419 2417 2420 2418 extern void 2421 2419 file_ra_state_init(struct file_ra_state *ra, struct address_space *mapping);
+1 -36
include/linux/kvm_host.h
··· 23 23 #include <linux/ratelimit.h> 24 24 #include <linux/err.h> 25 25 #include <linux/irqflags.h> 26 + #include <linux/context_tracking.h> 26 27 #include <asm/signal.h> 27 28 28 29 #include <linux/kvm.h> ··· 760 759 return 0; 761 760 } 762 761 #endif 763 - 764 - static inline void __guest_enter(void) 765 - { 766 - /* 767 - * This is running in ioctl context so we can avoid 768 - * the call to vtime_account() with its unnecessary idle check. 769 - */ 770 - vtime_account_system(current); 771 - current->flags |= PF_VCPU; 772 - } 773 - 774 - static inline void __guest_exit(void) 775 - { 776 - /* 777 - * This is running in ioctl context so we can avoid 778 - * the call to vtime_account() with its unnecessary idle check. 779 - */ 780 - vtime_account_system(current); 781 - current->flags &= ~PF_VCPU; 782 - } 783 - 784 - #ifdef CONFIG_CONTEXT_TRACKING 785 - extern void guest_enter(void); 786 - extern void guest_exit(void); 787 - 788 - #else /* !CONFIG_CONTEXT_TRACKING */ 789 - static inline void guest_enter(void) 790 - { 791 - __guest_enter(); 792 - } 793 - 794 - static inline void guest_exit(void) 795 - { 796 - __guest_exit(); 797 - } 798 - #endif /* !CONFIG_CONTEXT_TRACKING */ 799 762 800 763 static inline void kvm_guest_enter(void) 801 764 {
+7
include/linux/mfd/twl6040.h
··· 125 125 126 126 #define TWL6040_HSDACENA (1 << 0) 127 127 #define TWL6040_HSDACMODE (1 << 1) 128 + #define TWL6040_HSDRVENA (1 << 2) 128 129 #define TWL6040_HSDRVMODE (1 << 3) 130 + 131 + /* HFLCTL/R (0x14/0x16) fields */ 132 + 133 + #define TWL6040_HFDACENA (1 << 0) 134 + #define TWL6040_HFPGAENA (1 << 1) 135 + #define TWL6040_HFDRVENA (1 << 4) 129 136 130 137 /* VIBCTLL/R (0x18/0x1A) fields */ 131 138
+1 -2
include/linux/perf_event.h
··· 389 389 /* mmap bits */ 390 390 struct mutex mmap_mutex; 391 391 atomic_t mmap_count; 392 - int mmap_locked; 393 - struct user_struct *mmap_user; 392 + 394 393 struct ring_buffer *rb; 395 394 struct list_head rb_entry; 396 395
+17 -1
include/linux/preempt.h
··· 33 33 preempt_schedule(); \ 34 34 } while (0) 35 35 36 + #ifdef CONFIG_CONTEXT_TRACKING 37 + 38 + void preempt_schedule_context(void); 39 + 40 + #define preempt_check_resched_context() \ 41 + do { \ 42 + if (unlikely(test_thread_flag(TIF_NEED_RESCHED))) \ 43 + preempt_schedule_context(); \ 44 + } while (0) 45 + #else 46 + 47 + #define preempt_check_resched_context() preempt_check_resched() 48 + 49 + #endif /* CONFIG_CONTEXT_TRACKING */ 50 + 36 51 #else /* !CONFIG_PREEMPT */ 37 52 38 53 #define preempt_check_resched() do { } while (0) 54 + #define preempt_check_resched_context() do { } while (0) 39 55 40 56 #endif /* CONFIG_PREEMPT */ 41 57 ··· 104 88 do { \ 105 89 preempt_enable_no_resched_notrace(); \ 106 90 barrier(); \ 107 - preempt_check_resched(); \ 91 + preempt_check_resched_context(); \ 108 92 } while (0) 109 93 110 94 #else /* !CONFIG_PREEMPT_COUNT */
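The preempt.h hunk above follows a common kernel idiom: when `CONFIG_CONTEXT_TRACKING` is off, the specialized macro degrades to the generic one, so call sites such as `preempt_enable_notrace()` never need their own `#ifdef`. A small self-contained sketch of that fallback shape (the `DEMO_*` names are illustrative):

```c
#include <assert.h>

static int generic_calls, special_calls;

static void generic_check(void) { generic_calls++; }
static void special_check(void) { special_calls++; }

/* When the feature macro is defined, the specialized path is used;
 * otherwise the macro falls back to the generic check, mirroring
 * the !CONFIG_CONTEXT_TRACKING branch above. */
#ifdef DEMO_CONTEXT_TRACKING
#define demo_check_context() special_check()
#else
#define demo_check_context() generic_check()
#endif

static void demo_enable(void)
{
    /* every caller uses one spelling, regardless of configuration */
    demo_check_context();
}
```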
+1
include/linux/splice.h
··· 35 35 void *data; /* cookie */ 36 36 } u; 37 37 loff_t pos; /* file position */ 38 + loff_t *opos; /* sendfile: output position */ 38 39 size_t num_spliced; /* number of bytes already spliced */ 39 40 bool need_wakeup; /* need to wake up writer */ 40 41 };
+2 -2
include/linux/vtime.h
··· 34 34 } 35 35 extern void vtime_guest_enter(struct task_struct *tsk); 36 36 extern void vtime_guest_exit(struct task_struct *tsk); 37 - extern void vtime_init_idle(struct task_struct *tsk); 37 + extern void vtime_init_idle(struct task_struct *tsk, int cpu); 38 38 #else 39 39 static inline void vtime_account_irq_exit(struct task_struct *tsk) 40 40 { ··· 45 45 static inline void vtime_user_exit(struct task_struct *tsk) { } 46 46 static inline void vtime_guest_enter(struct task_struct *tsk) { } 47 47 static inline void vtime_guest_exit(struct task_struct *tsk) { } 48 - static inline void vtime_init_idle(struct task_struct *tsk) { } 48 + static inline void vtime_init_idle(struct task_struct *tsk, int cpu) { } 49 49 #endif 50 50 51 51 #ifdef CONFIG_IRQ_TIME_ACCOUNTING
+2
include/media/v4l2-mem2mem.h
··· 110 110 struct v4l2_buffer *buf); 111 111 int v4l2_m2m_dqbuf(struct file *file, struct v4l2_m2m_ctx *m2m_ctx, 112 112 struct v4l2_buffer *buf); 113 + int v4l2_m2m_create_bufs(struct file *file, struct v4l2_m2m_ctx *m2m_ctx, 114 + struct v4l2_create_buffers *create); 113 115 114 116 int v4l2_m2m_expbuf(struct file *file, struct v4l2_m2m_ctx *m2m_ctx, 115 117 struct v4l2_exportbuffer *eb);
+3 -1
include/sound/soc.h
··· 340 340 341 341 typedef int (*hw_write_t)(void *,const char* ,int); 342 342 343 - extern struct snd_ac97_bus_ops soc_ac97_ops; 343 + extern struct snd_ac97_bus_ops *soc_ac97_ops; 344 344 345 345 enum snd_soc_control_type { 346 346 SND_SOC_I2C = 1, ··· 466 466 int snd_soc_new_ac97_codec(struct snd_soc_codec *codec, 467 467 struct snd_ac97_bus_ops *ops, int num); 468 468 void snd_soc_free_ac97_codec(struct snd_soc_codec *codec); 469 + 470 + int snd_soc_set_ac97_ops(struct snd_ac97_bus_ops *ops); 469 471 470 472 /* 471 473 *Controls
+40 -1
kernel/context_tracking.c
··· 15 15 */ 16 16 17 17 #include <linux/context_tracking.h> 18 - #include <linux/kvm_host.h> 19 18 #include <linux/rcupdate.h> 20 19 #include <linux/sched.h> 21 20 #include <linux/hardirq.h> ··· 70 71 local_irq_restore(flags); 71 72 } 72 73 74 + #ifdef CONFIG_PREEMPT 75 + /** 76 + * preempt_schedule_context - preempt_schedule called by tracing 77 + * 78 + * The tracing infrastructure uses preempt_enable_notrace to prevent 79 + * recursion and tracing preempt enabling caused by the tracing 80 + * infrastructure itself. But as tracing can happen in areas coming 81 + * from userspace or just about to enter userspace, a preempt enable 82 + * can occur before user_exit() is called. This will cause the scheduler 83 + * to be called when the system is still in usermode. 84 + * 85 + * To prevent this, the preempt_enable_notrace will use this function 86 + * instead of preempt_schedule() to exit user context if needed before 87 + * calling the scheduler. 88 + */ 89 + void __sched notrace preempt_schedule_context(void) 90 + { 91 + struct thread_info *ti = current_thread_info(); 92 + enum ctx_state prev_ctx; 93 + 94 + if (likely(ti->preempt_count || irqs_disabled())) 95 + return; 96 + 97 + /* 98 + * Need to disable preemption in case user_exit() is traced 99 + * and the tracer calls preempt_enable_notrace() causing 100 + * an infinite recursion. 101 + */ 102 + preempt_disable_notrace(); 103 + prev_ctx = exception_enter(); 104 + preempt_enable_no_resched_notrace(); 105 + 106 + preempt_schedule(); 107 + 108 + preempt_disable_notrace(); 109 + exception_exit(prev_ctx); 110 + preempt_enable_notrace(); 111 + } 112 + EXPORT_SYMBOL_GPL(preempt_schedule_context); 113 + #endif /* CONFIG_PREEMPT */ 73 114 74 115 /** 75 116 * user_exit - Inform the context tracking that the CPU is
+17
kernel/cpu/idle.c
··· 5 5 #include <linux/cpu.h> 6 6 #include <linux/tick.h> 7 7 #include <linux/mm.h> 8 + #include <linux/stackprotector.h> 8 9 9 10 #include <asm/tlb.h> 10 11 ··· 59 58 void __weak arch_cpu_idle(void) 60 59 { 61 60 cpu_idle_force_poll = 1; 61 + local_irq_enable(); 62 62 } 63 63 64 64 /* ··· 114 112 115 113 void cpu_startup_entry(enum cpuhp_state state) 116 114 { 115 + /* 116 + * This #ifdef needs to die, but it's too late in the cycle to 117 + * make this generic (arm and sh have never invoked the canary 118 + * init for the non boot cpus!). Will be fixed in 3.11 119 + */ 120 + #ifdef CONFIG_X86 121 + /* 122 + * If we're the non-boot CPU, nothing set the stack canary up 123 + * for us. The boot CPU already has it initialized but no harm 124 + * in doing it again. This is a good place for updating it, as 125 + * we wont ever return from this function (so the invalid 126 + * canaries already on the stack wont ever trigger). 127 + */ 128 + boot_init_stack_canary(); 129 + #endif 117 130 current_set_polling(); 118 131 arch_cpu_idle_prepare(); 119 132 cpu_idle_loop();
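The idle-loop hunk above relies on `__weak`: the generic `arch_cpu_idle()` is only a default, and any architecture providing a strong definition overrides it at link time. A minimal sketch of the weak-symbol mechanism using the GCC/Clang attribute (names are illustrative):

```c
#include <assert.h>

static int default_used;

/* Weak default, like the generic arch_cpu_idle() above; a strong
 * definition of demo_arch_idle elsewhere would replace it at link time. */
__attribute__((weak)) void demo_arch_idle(void)
{
    default_used = 1;
}

void demo_idle_once(void)
{
    demo_arch_idle();   /* resolves to the strongest definition linked in */
}
```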
+162 -73
kernel/events/core.c
··· 196 196 static void update_context_time(struct perf_event_context *ctx); 197 197 static u64 perf_event_time(struct perf_event *event); 198 198 199 - static void ring_buffer_attach(struct perf_event *event, 200 - struct ring_buffer *rb); 201 - 202 199 void __weak perf_event_print_debug(void) { } 203 200 204 201 extern __weak const char *perf_pmu_name(void) ··· 2915 2918 } 2916 2919 2917 2920 static void ring_buffer_put(struct ring_buffer *rb); 2921 + static void ring_buffer_detach(struct perf_event *event, struct ring_buffer *rb); 2918 2922 2919 2923 static void free_event(struct perf_event *event) 2920 2924 { ··· 2940 2942 if (has_branch_stack(event)) { 2941 2943 static_key_slow_dec_deferred(&perf_sched_events); 2942 2944 /* is system-wide event */ 2943 - if (!(event->attach_state & PERF_ATTACH_TASK)) 2945 + if (!(event->attach_state & PERF_ATTACH_TASK)) { 2944 2946 atomic_dec(&per_cpu(perf_branch_stack_events, 2945 2947 event->cpu)); 2948 + } 2946 2949 } 2947 2950 } 2948 2951 2949 2952 if (event->rb) { 2950 - ring_buffer_put(event->rb); 2951 - event->rb = NULL; 2953 + struct ring_buffer *rb; 2954 + 2955 + /* 2956 + * Can happen when we close an event with re-directed output. 2957 + * 2958 + * Since we have a 0 refcount, perf_mmap_close() will skip 2959 + * over us; possibly making our ring_buffer_put() the last. 2960 + */ 2961 + mutex_lock(&event->mmap_mutex); 2962 + rb = event->rb; 2963 + if (rb) { 2964 + rcu_assign_pointer(event->rb, NULL); 2965 + ring_buffer_detach(event, rb); 2966 + ring_buffer_put(rb); /* could be last */ 2967 + } 2968 + mutex_unlock(&event->mmap_mutex); 2952 2969 } 2953 2970 2954 2971 if (is_cgroup_event(event)) ··· 3201 3188 unsigned int events = POLL_HUP; 3202 3189 3203 3190 /* 3204 - * Race between perf_event_set_output() and perf_poll(): perf_poll() 3205 - * grabs the rb reference but perf_event_set_output() overrides it. 
3206 - * Here is the timeline for two threads T1, T2: 3207 - * t0: T1, rb = rcu_dereference(event->rb) 3208 - * t1: T2, old_rb = event->rb 3209 - * t2: T2, event->rb = new rb 3210 - * t3: T2, ring_buffer_detach(old_rb) 3211 - * t4: T1, ring_buffer_attach(rb1) 3212 - * t5: T1, poll_wait(event->waitq) 3213 - * 3214 - * To avoid this problem, we grab mmap_mutex in perf_poll() 3215 - * thereby ensuring that the assignment of the new ring buffer 3216 - * and the detachment of the old buffer appear atomic to perf_poll() 3191 + * Pin the event->rb by taking event->mmap_mutex; otherwise 3192 + * perf_event_set_output() can swizzle our rb and make us miss wakeups. 3217 3193 */ 3218 3194 mutex_lock(&event->mmap_mutex); 3219 - 3220 - rcu_read_lock(); 3221 - rb = rcu_dereference(event->rb); 3222 - if (rb) { 3223 - ring_buffer_attach(event, rb); 3195 + rb = event->rb; 3196 + if (rb) 3224 3197 events = atomic_xchg(&rb->poll, 0); 3225 - } 3226 - rcu_read_unlock(); 3227 - 3228 3198 mutex_unlock(&event->mmap_mutex); 3229 3199 3230 3200 poll_wait(file, &event->waitq, wait); ··· 3517 3521 return; 3518 3522 3519 3523 spin_lock_irqsave(&rb->event_lock, flags); 3520 - if (!list_empty(&event->rb_entry)) 3521 - goto unlock; 3522 - 3523 - list_add(&event->rb_entry, &rb->event_list); 3524 - unlock: 3524 + if (list_empty(&event->rb_entry)) 3525 + list_add(&event->rb_entry, &rb->event_list); 3525 3526 spin_unlock_irqrestore(&rb->event_lock, flags); 3526 3527 } 3527 3528 3528 - static void ring_buffer_detach(struct perf_event *event, 3529 - struct ring_buffer *rb) 3529 + static void ring_buffer_detach(struct perf_event *event, struct ring_buffer *rb) 3530 3530 { 3531 3531 unsigned long flags; 3532 3532 ··· 3541 3549 3542 3550 rcu_read_lock(); 3543 3551 rb = rcu_dereference(event->rb); 3544 - if (!rb) 3545 - goto unlock; 3546 - 3547 - list_for_each_entry_rcu(event, &rb->event_list, rb_entry) 3548 - wake_up_all(&event->waitq); 3549 - 3550 - unlock: 3552 + if (rb) { 3553 + 
list_for_each_entry_rcu(event, &rb->event_list, rb_entry) 3554 + wake_up_all(&event->waitq); 3555 + } 3551 3556 rcu_read_unlock(); 3552 3557 } 3553 3558 ··· 3573 3584 3574 3585 static void ring_buffer_put(struct ring_buffer *rb) 3575 3586 { 3576 - struct perf_event *event, *n; 3577 - unsigned long flags; 3578 - 3579 3587 if (!atomic_dec_and_test(&rb->refcount)) 3580 3588 return; 3581 3589 3582 - spin_lock_irqsave(&rb->event_lock, flags); 3583 - list_for_each_entry_safe(event, n, &rb->event_list, rb_entry) { 3584 - list_del_init(&event->rb_entry); 3585 - wake_up_all(&event->waitq); 3586 - } 3587 - spin_unlock_irqrestore(&rb->event_lock, flags); 3590 + WARN_ON_ONCE(!list_empty(&rb->event_list)); 3588 3591 3589 3592 call_rcu(&rb->rcu_head, rb_free_rcu); 3590 3593 } ··· 3586 3605 struct perf_event *event = vma->vm_file->private_data; 3587 3606 3588 3607 atomic_inc(&event->mmap_count); 3608 + atomic_inc(&event->rb->mmap_count); 3589 3609 } 3590 3610 3611 + /* 3612 + * A buffer can be mmap()ed multiple times; either directly through the same 3613 + * event, or through other events by use of perf_event_set_output(). 3614 + * 3615 + * In order to undo the VM accounting done by perf_mmap() we need to destroy 3616 + * the buffer here, where we still have a VM context. This means we need 3617 + * to detach all events redirecting to us. 
+ */ 3591 3619 static void perf_mmap_close(struct vm_area_struct *vma) 3592 3620 { 3593 3621 struct perf_event *event = vma->vm_file->private_data; 3594 3622 3595 - if (atomic_dec_and_mutex_lock(&event->mmap_count, &event->mmap_mutex)) { 3596 - unsigned long size = perf_data_size(event->rb); 3597 - struct user_struct *user = event->mmap_user; 3598 - struct ring_buffer *rb = event->rb; 3623 + struct ring_buffer *rb = event->rb; 3624 + struct user_struct *mmap_user = rb->mmap_user; 3625 + int mmap_locked = rb->mmap_locked; 3626 + unsigned long size = perf_data_size(rb); 3599 3627 3600 - atomic_long_sub((size >> PAGE_SHIFT) + 1, &user->locked_vm); 3601 - vma->vm_mm->pinned_vm -= event->mmap_locked; 3602 - rcu_assign_pointer(event->rb, NULL); 3603 - ring_buffer_detach(event, rb); 3604 - mutex_unlock(&event->mmap_mutex); 3628 + atomic_dec(&rb->mmap_count); 3605 3629 3606 - ring_buffer_put(rb); 3607 - free_uid(user); 3630 + if (!atomic_dec_and_mutex_lock(&event->mmap_count, &event->mmap_mutex)) 3631 + return; 3632 + 3633 + /* Detach current event from the buffer. */ 3634 + rcu_assign_pointer(event->rb, NULL); 3635 + ring_buffer_detach(event, rb); 3636 + mutex_unlock(&event->mmap_mutex); 3637 + 3638 + /* If there are still other mmap()s of this buffer, we're done. */ 3639 + if (atomic_read(&rb->mmap_count)) { 3640 + ring_buffer_put(rb); /* can't be last */ 3641 + return; 3608 3642 } 3643 + 3644 + /* 3645 + * No other mmap()s, detach from all other events that might redirect 3646 + * into the now unreachable buffer. Somewhat complicated by the 3647 + * fact that rb::event_lock otherwise nests inside mmap_mutex. 3648 + */ 3649 + again: 3650 + rcu_read_lock(); 3651 + list_for_each_entry_rcu(event, &rb->event_list, rb_entry) { 3652 + if (!atomic_long_inc_not_zero(&event->refcount)) { 3653 + /* 3654 + * This event is en route to free_event() which will 3655 + * detach it and remove it from the list.
+ */ 3657 + continue; 3658 + } 3659 + rcu_read_unlock(); 3660 + 3661 + mutex_lock(&event->mmap_mutex); 3662 + /* 3663 + * Check we didn't race with perf_event_set_output() which can 3664 + * swizzle the rb from under us while we were waiting to 3665 + * acquire mmap_mutex. 3666 + * 3667 + * If we find a different rb, ignore this event; the next 3668 + * iteration will no longer find it on the list. We have to 3669 + * still restart the iteration to make sure we're not now 3670 + * iterating the wrong list. 3671 + */ 3672 + if (event->rb == rb) { 3673 + rcu_assign_pointer(event->rb, NULL); 3674 + ring_buffer_detach(event, rb); 3675 + ring_buffer_put(rb); /* can't be last, we still have one */ 3676 + } 3677 + mutex_unlock(&event->mmap_mutex); 3678 + put_event(event); 3679 + 3680 + /* 3681 + * Restart the iteration; either we're on the wrong list or 3682 + * destroyed its integrity by doing a deletion. 3683 + */ 3684 + goto again; 3685 + } 3686 + rcu_read_unlock(); 3687 + 3688 + /* 3689 + * There could still be a few 0-ref events on the list; they'll 3690 + * get cleaned up by free_event() -- they'll also still have their 3691 + * ref on the rb and will free it whenever they are done with it. 3692 + * 3693 + * Aside from that, this buffer is 'fully' detached and unmapped, 3694 + * undo the VM accounting.
3695 + */ 3696 + 3697 + atomic_long_sub((size >> PAGE_SHIFT) + 1, &mmap_user->locked_vm); 3698 + vma->vm_mm->pinned_vm -= mmap_locked; 3699 + free_uid(mmap_user); 3700 + 3701 + ring_buffer_put(rb); /* could be last */ 3609 3702 } 3610 3703 3611 3704 static const struct vm_operations_struct perf_mmap_vmops = { ··· 3729 3674 return -EINVAL; 3730 3675 3731 3676 WARN_ON_ONCE(event->ctx->parent_ctx); 3677 + again: 3732 3678 mutex_lock(&event->mmap_mutex); 3733 3679 if (event->rb) { 3734 - if (event->rb->nr_pages == nr_pages) 3735 - atomic_inc(&event->rb->refcount); 3736 - else 3680 + if (event->rb->nr_pages != nr_pages) { 3737 3681 ret = -EINVAL; 3682 + goto unlock; 3683 + } 3684 + 3685 + if (!atomic_inc_not_zero(&event->rb->mmap_count)) { 3686 + /* 3687 + * Raced against perf_mmap_close() through 3688 + * perf_event_set_output(). Try again, hope for better 3689 + * luck. 3690 + */ 3691 + mutex_unlock(&event->mmap_mutex); 3692 + goto again; 3693 + } 3694 + 3738 3695 goto unlock; 3739 3696 } 3740 3697 ··· 3787 3720 ret = -ENOMEM; 3788 3721 goto unlock; 3789 3722 } 3790 - rcu_assign_pointer(event->rb, rb); 3723 + 3724 + atomic_set(&rb->mmap_count, 1); 3725 + rb->mmap_locked = extra; 3726 + rb->mmap_user = get_current_user(); 3791 3727 3792 3728 atomic_long_add(user_extra, &user->locked_vm); 3793 - event->mmap_locked = extra; 3794 - event->mmap_user = get_current_user(); 3795 - vma->vm_mm->pinned_vm += event->mmap_locked; 3729 + vma->vm_mm->pinned_vm += extra; 3730 + 3731 + ring_buffer_attach(event, rb); 3732 + rcu_assign_pointer(event->rb, rb); 3796 3733 3797 3734 perf_event_update_userpage(event); 3798 3735 ··· 3805 3734 atomic_inc(&event->mmap_count); 3806 3735 mutex_unlock(&event->mmap_mutex); 3807 3736 3808 - vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP; 3737 + /* 3738 + * Since pinned accounting is per vm we cannot allow fork() to copy our 3739 + * vma. 
3740 + */ 3741 + vma->vm_flags |= VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP; 3809 3742 vma->vm_ops = &perf_mmap_vmops; 3810 3743 3811 3744 return ret; ··· 6487 6412 if (atomic_read(&event->mmap_count)) 6488 6413 goto unlock; 6489 6414 6415 + old_rb = event->rb; 6416 + 6490 6417 if (output_event) { 6491 6418 /* get the rb we want to redirect to */ 6492 6419 rb = ring_buffer_get(output_event); ··· 6496 6419 goto unlock; 6497 6420 } 6498 6421 6499 - old_rb = event->rb; 6500 - rcu_assign_pointer(event->rb, rb); 6501 6422 if (old_rb) 6502 6423 ring_buffer_detach(event, old_rb); 6424 + 6425 + if (rb) 6426 + ring_buffer_attach(event, rb); 6427 + 6428 + rcu_assign_pointer(event->rb, rb); 6429 + 6430 + if (old_rb) { 6431 + ring_buffer_put(old_rb); 6432 + /* 6433 + * Since we detached before setting the new rb, so that we 6434 + * could attach the new rb, we could have missed a wakeup. 6435 + * Provide it now. 6436 + */ 6437 + wake_up_all(&event->waitq); 6438 + } 6439 + 6503 6440 ret = 0; 6504 6441 unlock: 6505 6442 mutex_unlock(&event->mmap_mutex); 6506 6443 6507 - if (old_rb) 6508 - ring_buffer_put(old_rb); 6509 6444 out: 6510 6445 return ret; 6511 6446 }
+4
kernel/events/internal.h
··· 31 31 spinlock_t event_lock; 32 32 struct list_head event_list; 33 33 34 + atomic_t mmap_count; 35 + unsigned long mmap_locked; 36 + struct user_struct *mmap_user; 37 + 34 38 struct perf_event_mmap_page *user_page; 35 39 void *data_pages[0]; 36 40 };
+20 -10
kernel/kprobes.c
··· 467 467 /* Optimization staging list, protected by kprobe_mutex */ 468 468 static LIST_HEAD(optimizing_list); 469 469 static LIST_HEAD(unoptimizing_list); 470 + static LIST_HEAD(freeing_list); 470 471 471 472 static void kprobe_optimizer(struct work_struct *work); 472 473 static DECLARE_DELAYED_WORK(optimizing_work, kprobe_optimizer); ··· 505 504 * Unoptimize (replace a jump with a breakpoint and remove the breakpoint 506 505 * if need) kprobes listed on unoptimizing_list. 507 506 */ 508 - static __kprobes void do_unoptimize_kprobes(struct list_head *free_list) 507 + static __kprobes void do_unoptimize_kprobes(void) 509 508 { 510 509 struct optimized_kprobe *op, *tmp; 511 510 ··· 516 515 /* Ditto to do_optimize_kprobes */ 517 516 get_online_cpus(); 518 517 mutex_lock(&text_mutex); 519 - arch_unoptimize_kprobes(&unoptimizing_list, free_list); 518 + arch_unoptimize_kprobes(&unoptimizing_list, &freeing_list); 520 519 /* Loop free_list for disarming */ 521 - list_for_each_entry_safe(op, tmp, free_list, list) { 520 + list_for_each_entry_safe(op, tmp, &freeing_list, list) { 522 521 /* Disarm probes if marked disabled */ 523 522 if (kprobe_disabled(&op->kp)) 524 523 arch_disarm_kprobe(&op->kp); ··· 537 536 } 538 537 539 538 /* Reclaim all kprobes on the free_list */ 540 - static __kprobes void do_free_cleaned_kprobes(struct list_head *free_list) 539 + static __kprobes void do_free_cleaned_kprobes(void) 541 540 { 542 541 struct optimized_kprobe *op, *tmp; 543 542 544 - list_for_each_entry_safe(op, tmp, free_list, list) { 543 + list_for_each_entry_safe(op, tmp, &freeing_list, list) { 545 544 BUG_ON(!kprobe_unused(&op->kp)); 546 545 list_del_init(&op->list); 547 546 free_aggr_kprobe(&op->kp); ··· 557 556 /* Kprobe jump optimizer */ 558 557 static __kprobes void kprobe_optimizer(struct work_struct *work) 559 558 { 560 - LIST_HEAD(free_list); 561 - 562 559 mutex_lock(&kprobe_mutex); 563 560 /* Lock modules while optimizing kprobes */ 564 561 mutex_lock(&module_mutex); ··· 
565 566 * Step 1: Unoptimize kprobes and collect cleaned (unused and disarmed) 566 567 * kprobes before waiting for quiescence period. 567 568 */ 568 - do_unoptimize_kprobes(&free_list); 569 + do_unoptimize_kprobes(); 569 570 570 571 /* 571 572 * Step 2: Wait for quiescence period to ensure all running interrupts ··· 580 581 do_optimize_kprobes(); 581 582 582 583 /* Step 4: Free cleaned kprobes after quiescence period */ 583 - do_free_cleaned_kprobes(&free_list); 584 + do_free_cleaned_kprobes(); 584 585 585 586 mutex_unlock(&module_mutex); 586 587 mutex_unlock(&kprobe_mutex); ··· 722 723 if (!list_empty(&op->list)) 723 724 /* Dequeue from the (un)optimization queue */ 724 725 list_del_init(&op->list); 725 - 726 726 op->kp.flags &= ~KPROBE_FLAG_OPTIMIZED; 727 + 728 + if (kprobe_unused(p)) { 729 + /* Enqueue if it is unused */ 730 + list_add(&op->list, &freeing_list); 731 + /* 732 + * Remove unused probes from the hash list. After waiting 733 + * for synchronization, this probe is reclaimed. 734 + * (reclaiming is done by do_free_cleaned_kprobes().) 735 + */ 736 + hlist_del_rcu(&op->kp.hlist); 737 + } 738 + 727 739 /* Don't touch the code, because it is already freed. */ 728 740 arch_remove_optimized_kprobe(op); 729 741 }
+11 -10
kernel/range.c
··· 4 4 #include <linux/kernel.h> 5 5 #include <linux/init.h> 6 6 #include <linux/sort.h> 7 - 7 + #include <linux/string.h> 8 8 #include <linux/range.h> 9 9 10 10 int add_range(struct range *range, int az, int nr_range, u64 start, u64 end) ··· 32 32 if (start >= end) 33 33 return nr_range; 34 34 35 - /* Try to merge it with old one: */ 35 + /* get new start/end: */ 36 36 for (i = 0; i < nr_range; i++) { 37 - u64 final_start, final_end; 38 37 u64 common_start, common_end; 39 38 40 39 if (!range[i].end) ··· 44 45 if (common_start > common_end) 45 46 continue; 46 47 47 - final_start = min(range[i].start, start); 48 - final_end = max(range[i].end, end); 48 + /* new start/end, will add it back at last */ 49 + start = min(range[i].start, start); 50 + end = max(range[i].end, end); 49 51 50 - /* clear it and add it back for further merge */ 51 - range[i].start = 0; 52 - range[i].end = 0; 53 - return add_range_with_merge(range, az, nr_range, 54 - final_start, final_end); 52 + memmove(&range[i], &range[i + 1], 53 + (nr_range - (i + 1)) * sizeof(range[i])); 54 + range[nr_range - 1].start = 0; 55 + range[nr_range - 1].end = 0; 56 + nr_range--; 57 + i--; 55 58 } 56 59 57 60 /* Need to add it: */
+18 -5
kernel/sched/core.c
··· 633 633 static inline bool got_nohz_idle_kick(void) 634 634 { 635 635 int cpu = smp_processor_id(); 636 - return idle_cpu(cpu) && test_bit(NOHZ_BALANCE_KICK, nohz_flags(cpu)); 636 + 637 + if (!test_bit(NOHZ_BALANCE_KICK, nohz_flags(cpu))) 638 + return false; 639 + 640 + if (idle_cpu(cpu) && !need_resched()) 641 + return true; 642 + 643 + /* 644 + * We can't run Idle Load Balance on this CPU for this time so we 645 + * cancel it and clear NOHZ_BALANCE_KICK 646 + */ 647 + clear_bit(NOHZ_BALANCE_KICK, nohz_flags(cpu)); 648 + return false; 637 649 } 638 650 639 651 #else /* CONFIG_NO_HZ_COMMON */ ··· 1405 1393 1406 1394 void scheduler_ipi(void) 1407 1395 { 1408 - if (llist_empty(&this_rq()->wake_list) && !got_nohz_idle_kick() 1409 - && !tick_nohz_full_cpu(smp_processor_id())) 1396 + if (llist_empty(&this_rq()->wake_list) 1397 + && !tick_nohz_full_cpu(smp_processor_id()) 1398 + && !got_nohz_idle_kick()) 1410 1399 return; 1411 1400 1412 1401 /* ··· 1430 1417 /* 1431 1418 * Check if someone kicked us for doing the nohz idle load balance. 1432 1419 */ 1433 - if (unlikely(got_nohz_idle_kick() && !need_resched())) { 1420 + if (unlikely(got_nohz_idle_kick())) { 1434 1421 this_rq()->idle_balance = 1; 1435 1422 raise_softirq_irqoff(SCHED_SOFTIRQ); 1436 1423 } ··· 4758 4745 */ 4759 4746 idle->sched_class = &idle_sched_class; 4760 4747 ftrace_graph_init_idle_task(idle, cpu); 4761 - vtime_init_idle(idle); 4748 + vtime_init_idle(idle, cpu); 4762 4749 #if defined(CONFIG_SMP) 4763 4750 sprintf(idle->comm, "%s/%d", INIT_TASK_COMM, cpu); 4764 4751 #endif
+3 -3
kernel/sched/cputime.c
··· 747 747 748 748 write_seqlock(&current->vtime_seqlock); 749 749 current->vtime_snap_whence = VTIME_SYS; 750 - current->vtime_snap = sched_clock(); 750 + current->vtime_snap = sched_clock_cpu(smp_processor_id()); 751 751 write_sequnlock(&current->vtime_seqlock); 752 752 } 753 753 754 - void vtime_init_idle(struct task_struct *t) 754 + void vtime_init_idle(struct task_struct *t, int cpu) 755 755 { 756 756 unsigned long flags; 757 757 758 758 write_seqlock_irqsave(&t->vtime_seqlock, flags); 759 759 t->vtime_snap_whence = VTIME_SYS; 760 - t->vtime_snap = sched_clock(); 760 + t->vtime_snap = sched_clock_cpu(cpu); 761 761 write_sequnlock_irqrestore(&t->vtime_seqlock, flags); 762 762 } 763 763
-4
kernel/time/tick-broadcast.c
··· 698 698 699 699 bc->event_handler = tick_handle_oneshot_broadcast; 700 700 701 - /* Take the do_timer update */ 702 - if (!tick_nohz_full_cpu(cpu)) 703 - tick_do_timer_cpu = cpu; 704 - 705 701 /* 706 702 * We must be careful here. There might be other CPUs 707 703 * waiting for periodic broadcast. We need to set the
+1 -1
kernel/time/tick-sched.c
··· 306 306 * we can't safely shutdown that CPU. 307 307 */ 308 308 if (have_nohz_full_mask && tick_do_timer_cpu == cpu) 309 - return -EINVAL; 309 + return NOTIFY_BAD; 310 310 break; 311 311 } 312 312 return NOTIFY_OK;
+3 -1
mm/slab_common.c
··· 373 373 { 374 374 int index; 375 375 376 - if (WARN_ON_ONCE(size > KMALLOC_MAX_SIZE)) 376 + if (size > KMALLOC_MAX_SIZE) { 377 + WARN_ON_ONCE(!(flags & __GFP_NOWARN)); 377 378 return NULL; 379 + } 378 380 379 381 if (size <= 192) { 380 382 if (!size)
+6 -15
sound/soc/au1x/ac97c.c
··· 179 179 } 180 180 181 181 /* AC97 controller operations */ 182 - struct snd_ac97_bus_ops soc_ac97_ops = { 182 + static struct snd_ac97_bus_ops ac97c_bus_ops = { 183 183 .read = au1xac97c_ac97_read, 184 184 .write = au1xac97c_ac97_write, 185 185 .reset = au1xac97c_ac97_cold_reset, 186 186 .warm_reset = au1xac97c_ac97_warm_reset, 187 187 }; 188 - EXPORT_SYMBOL_GPL(soc_ac97_ops); /* globals be gone! */ 189 188 190 189 static int alchemy_ac97c_startup(struct snd_pcm_substream *substream, 191 190 struct snd_soc_dai *dai) ··· 271 272 272 273 platform_set_drvdata(pdev, ctx); 273 274 275 + ret = snd_soc_set_ac97_ops(&ac97c_bus_ops); 276 + if (ret) 277 + return ret; 278 + 274 279 ret = snd_soc_register_component(&pdev->dev, &au1xac97c_component, 275 280 &au1xac97c_dai_driver, 1); 276 281 if (ret) ··· 341 338 .remove = au1xac97c_drvremove, 342 339 }; 343 340 344 - static int __init au1xac97c_load(void) 345 - { 346 - ac97c_workdata = NULL; 347 - return platform_driver_register(&au1xac97c_driver); 348 - } 349 - 350 - static void __exit au1xac97c_unload(void) 351 - { 352 - platform_driver_unregister(&au1xac97c_driver); 353 - } 354 - 355 - module_init(au1xac97c_load); 356 - module_exit(au1xac97c_unload); 341 + module_platform_driver(au1xac97c_driver); 357 342 358 343 MODULE_LICENSE("GPL"); 359 344 MODULE_DESCRIPTION("Au1000/1500/1100 AC97C ASoC driver");
+9 -24
sound/soc/au1x/psc-ac97.c
··· 201 201 } 202 202 203 203 /* AC97 controller operations */ 204 - struct snd_ac97_bus_ops soc_ac97_ops = { 204 + static struct snd_ac97_bus_ops psc_ac97_ops = { 205 205 .read = au1xpsc_ac97_read, 206 206 .write = au1xpsc_ac97_write, 207 207 .reset = au1xpsc_ac97_cold_reset, 208 208 .warm_reset = au1xpsc_ac97_warm_reset, 209 209 }; 210 - EXPORT_SYMBOL_GPL(soc_ac97_ops); 211 210 212 211 static int au1xpsc_ac97_hw_params(struct snd_pcm_substream *substream, 213 212 struct snd_pcm_hw_params *params, ··· 382 383 if (!iores) 383 384 return -ENODEV; 384 385 385 - if (!devm_request_mem_region(&pdev->dev, iores->start, 386 - resource_size(iores), 387 - pdev->name)) 388 - return -EBUSY; 389 - 390 - wd->mmio = devm_ioremap(&pdev->dev, iores->start, 391 - resource_size(iores)); 392 - if (!wd->mmio) 393 - return -EBUSY; 386 + wd->mmio = devm_ioremap_resource(&pdev->dev, iores); 387 + if (IS_ERR(wd->mmio)) 388 + return PTR_ERR(wd->mmio); 394 389 395 390 dmares = platform_get_resource(pdev, IORESOURCE_DMA, 0); 396 391 if (!dmares) ··· 415 422 wd->dai_drv.name = dev_name(&pdev->dev); 416 423 417 424 platform_set_drvdata(pdev, wd); 425 + 426 + ret = snd_soc_set_ac97_ops(&psc_ac97_ops); 427 + if (ret) 428 + return ret; 418 429 419 430 ret = snd_soc_register_component(&pdev->dev, &au1xpsc_ac97_component, 420 431 &wd->dai_drv, 1); ··· 500 503 .remove = au1xpsc_ac97_drvremove, 501 504 }; 502 505 503 - static int __init au1xpsc_ac97_load(void) 504 - { 505 - au1xpsc_ac97_workdata = NULL; 506 - return platform_driver_register(&au1xpsc_ac97_driver); 507 - } 508 - 509 - static void __exit au1xpsc_ac97_unload(void) 510 - { 511 - platform_driver_unregister(&au1xpsc_ac97_driver); 512 - } 513 - 514 - module_init(au1xpsc_ac97_load); 515 - module_exit(au1xpsc_ac97_unload); 506 + module_platform_driver(au1xpsc_ac97_driver); 516 507 517 508 MODULE_LICENSE("GPL"); 518 509 MODULE_DESCRIPTION("Au12x0/Au1550 PSC AC97 ALSA ASoC audio driver");
+15 -14
sound/soc/blackfin/bf5xx-ac97.c
··· 198 198 #endif 199 199 } 200 200 201 - struct snd_ac97_bus_ops soc_ac97_ops = { 201 + static struct snd_ac97_bus_ops bf5xx_ac97_ops = { 202 202 .read = bf5xx_ac97_read, 203 203 .write = bf5xx_ac97_write, 204 204 .warm_reset = bf5xx_ac97_warm_reset, 205 205 .reset = bf5xx_ac97_cold_reset, 206 206 }; 207 - EXPORT_SYMBOL_GPL(soc_ac97_ops); 208 207 209 208 #ifdef CONFIG_PM 210 209 static int bf5xx_ac97_suspend(struct snd_soc_dai *dai) ··· 292 293 293 294 #ifdef CONFIG_SND_BF5XX_HAVE_COLD_RESET 294 295 /* Request PB3 as reset pin */ 295 - if (gpio_request(CONFIG_SND_BF5XX_RESET_GPIO_NUM, "SND_AD198x RESET")) { 296 - pr_err("Failed to request GPIO_%d for reset\n", 297 - CONFIG_SND_BF5XX_RESET_GPIO_NUM); 298 - ret = -1; 296 + ret = devm_gpio_request_one(&pdev->dev, 297 + CONFIG_SND_BF5XX_RESET_GPIO_NUM, 298 + GPIOF_OUT_INIT_HIGH, "SND_AD198x RESET"); 299 + if (ret) { 300 + dev_err(&pdev->dev, 301 + "Failed to request GPIO_%d for reset: %d\n", 302 + CONFIG_SND_BF5XX_RESET_GPIO_NUM, ret); 299 303 goto gpio_err; 300 304 } 301 - gpio_direction_output(CONFIG_SND_BF5XX_RESET_GPIO_NUM, 1); 302 305 #endif 303 306 304 307 sport_handle = sport_init(pdev, 2, sizeof(struct ac97_frame), ··· 335 335 goto sport_config_err; 336 336 } 337 337 338 + ret = snd_soc_set_ac97_ops(&bf5xx_ac97_ops); 339 + if (ret != 0) { 340 + dev_err(&pdev->dev, "Failed to set AC'97 ops: %d\n", ret); 341 + goto sport_config_err; 342 + } 343 + 338 344 ret = snd_soc_register_component(&pdev->dev, &bfin_ac97_component, 339 345 &bfin_ac97_dai, 1); 340 346 if (ret) { ··· 355 349 sport_config_err: 356 350 sport_done(sport_handle); 357 351 sport_err: 358 - #ifdef CONFIG_SND_BF5XX_HAVE_COLD_RESET 359 - gpio_free(CONFIG_SND_BF5XX_RESET_GPIO_NUM); 360 - gpio_err: 361 - #endif 352 + snd_soc_set_ac97_ops(NULL); 353 + gpio_err: 362 354 363 355 return ret; 364 356 } ··· 366 363 367 364 snd_soc_unregister_component(&pdev->dev); 368 365 sport_done(sport_handle); 369 - #ifdef CONFIG_SND_BF5XX_HAVE_COLD_RESET 370 -
gpio_free(CONFIG_SND_BF5XX_RESET_GPIO_NUM); 371 - #endif 366 + snd_soc_set_ac97_ops(NULL); 372 367 373 368 return 0; 374 369 }
+8 -4
sound/soc/cirrus/ep93xx-ac97.c
··· 237 237 return IRQ_HANDLED; 238 238 } 239 239 240 - struct snd_ac97_bus_ops soc_ac97_ops = { 240 + static struct snd_ac97_bus_ops ep93xx_ac97_ops = { 241 241 .read = ep93xx_ac97_read, 242 242 .write = ep93xx_ac97_write, 243 243 .reset = ep93xx_ac97_cold_reset, 244 244 .warm_reset = ep93xx_ac97_warm_reset, 245 245 }; 246 - EXPORT_SYMBOL_GPL(soc_ac97_ops); 247 246 248 247 static int ep93xx_ac97_trigger(struct snd_pcm_substream *substream, 249 248 int cmd, struct snd_soc_dai *dai) ··· 388 389 ep93xx_ac97_info = info; 389 390 platform_set_drvdata(pdev, info); 390 391 392 + ret = snd_soc_set_ac97_ops(&ep93xx_ac97_ops); 393 + if (ret) 394 + goto fail; 395 + 391 396 ret = snd_soc_register_component(&pdev->dev, &ep93xx_ac97_component, 392 397 &ep93xx_ac97_dai, 1); 393 398 if (ret) ··· 401 398 402 399 fail: 403 400 ep93xx_ac97_info = NULL; 404 - dev_set_drvdata(&pdev->dev, NULL); 401 + snd_soc_set_ac97_ops(NULL); 405 402 return ret; 406 403 } 407 404 ··· 415 412 ep93xx_ac97_write_reg(info, AC97GCR, 0); 416 413 417 414 ep93xx_ac97_info = NULL; 418 - dev_set_drvdata(&pdev->dev, NULL); 415 + 416 + snd_soc_set_ac97_ops(NULL); 419 417 420 418 return 0; 421 419 }
+2 -4
sound/soc/codecs/88pm860x-codec.c
··· 120 120 * before DAC & PGA in DAPM power-off sequence. 121 121 */ 122 122 #define PM860X_DAPM_OUTPUT(wname, wevent) \ 123 - { .id = snd_soc_dapm_pga, .name = wname, .reg = SND_SOC_NOPM, \ 124 - .shift = 0, .invert = 0, .kcontrol_news = NULL, \ 125 - .num_kcontrols = 0, .event = wevent, \ 126 - .event_flags = SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_POST_PMD, } 123 + SND_SOC_DAPM_PGA_E(wname, SND_SOC_NOPM, 0, 0, NULL, 0, wevent, \ 124 + SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_POST_PMD) 127 125 128 126 struct pm860x_det { 129 127 struct snd_soc_jack *hp_jack;
+4 -3
sound/soc/codecs/ac97.c
··· 62 62 static unsigned int ac97_read(struct snd_soc_codec *codec, 63 63 unsigned int reg) 64 64 { 65 - return soc_ac97_ops.read(codec->ac97, reg); 65 + return soc_ac97_ops->read(codec->ac97, reg); 66 66 } 67 67 68 68 static int ac97_write(struct snd_soc_codec *codec, unsigned int reg, 69 69 unsigned int val) 70 70 { 71 - soc_ac97_ops.write(codec->ac97, reg, val); 71 + soc_ac97_ops->write(codec->ac97, reg, val); 72 72 return 0; 73 73 } 74 74 ··· 79 79 int ret; 80 80 81 81 /* add codec as bus device for standard ac97 */ 82 - ret = snd_ac97_bus(codec->card->snd_card, 0, &soc_ac97_ops, NULL, &ac97_bus); 82 + ret = snd_ac97_bus(codec->card->snd_card, 0, soc_ac97_ops, NULL, 83 + &ac97_bus); 83 84 if (ret < 0) 84 85 return ret; 85 86
+6 -6
sound/soc/codecs/ad1980.c
··· 108 108 case AC97_EXTENDED_STATUS: 109 109 case AC97_VENDOR_ID1: 110 110 case AC97_VENDOR_ID2: 111 - return soc_ac97_ops.read(codec->ac97, reg); 111 + return soc_ac97_ops->read(codec->ac97, reg); 112 112 default: 113 113 reg = reg >> 1; 114 114 ··· 124 124 { 125 125 u16 *cache = codec->reg_cache; 126 126 127 - soc_ac97_ops.write(codec->ac97, reg, val); 127 + soc_ac97_ops->write(codec->ac97, reg, val); 128 128 reg = reg >> 1; 129 129 if (reg < ARRAY_SIZE(ad1980_reg)) 130 130 cache[reg] = val; ··· 154 154 u16 retry_cnt = 0; 155 155 156 156 retry: 157 - if (try_warm && soc_ac97_ops.warm_reset) { 158 - soc_ac97_ops.warm_reset(codec->ac97); 157 + if (try_warm && soc_ac97_ops->warm_reset) { 158 + soc_ac97_ops->warm_reset(codec->ac97); 159 159 if (ac97_read(codec, AC97_RESET) == 0x0090) 160 160 return 1; 161 161 } 162 162 163 - soc_ac97_ops.reset(codec->ac97); 163 + soc_ac97_ops->reset(codec->ac97); 164 164 /* Set bit 16slot in register 74h, then every slot will have only 16 165 165 * bits. This command is sent out in 20bit mode, in which case the 166 166 * first nibble of data is eaten by the addr. (Tag is always 16 bit)*/ ··· 186 186 187 187 printk(KERN_INFO "AD1980 SoC Audio Codec\n"); 188 188 189 - ret = snd_soc_new_ac97_codec(codec, &soc_ac97_ops, 0); 189 + ret = snd_soc_new_ac97_codec(codec, soc_ac97_ops, 0); 190 190 if (ret < 0) { 191 191 printk(KERN_ERR "ad1980: failed to register AC97 codec\n"); 192 192 return ret;
+227 -64
sound/soc/codecs/adau1701.c
··· 16 16 #include <linux/of.h> 17 17 #include <linux/of_gpio.h> 18 18 #include <linux/of_device.h> 19 + #include <linux/regmap.h> 19 20 #include <sound/core.h> 20 21 #include <sound/pcm.h> 21 22 #include <sound/pcm_params.h> ··· 25 24 #include "sigmadsp.h" 26 25 #include "adau1701.h" 27 26 28 - #define ADAU1701_DSPCTRL 0x1c 29 - #define ADAU1701_SEROCTL 0x1e 30 - #define ADAU1701_SERICTL 0x1f 27 + #define ADAU1701_DSPCTRL 0x081c 28 + #define ADAU1701_SEROCTL 0x081e 29 + #define ADAU1701_SERICTL 0x081f 31 30 32 - #define ADAU1701_AUXNPOW 0x22 31 + #define ADAU1701_AUXNPOW 0x0822 32 + #define ADAU1701_PINCONF_0 0x0820 33 + #define ADAU1701_PINCONF_1 0x0821 34 + #define ADAU1701_AUXNPOW 0x0822 33 35 34 - #define ADAU1701_OSCIPOW 0x26 35 - #define ADAU1701_DACSET 0x27 36 + #define ADAU1701_OSCIPOW 0x0826 37 + #define ADAU1701_DACSET 0x0827 36 38 37 - #define ADAU1701_NUM_REGS 0x28 39 + #define ADAU1701_MAX_REGISTER 0x0828 38 40 39 41 #define ADAU1701_DSPCTRL_CR (1 << 2) 40 42 #define ADAU1701_DSPCTRL_DAM (1 << 3) ··· 91 87 #define ADAU1701_OSCIPOW_OPD 0x04 92 88 #define ADAU1701_DACSET_DACINIT 1 93 89 90 + #define ADAU1707_CLKDIV_UNSET (-1UL) 91 + 94 92 #define ADAU1701_FIRMWARE "adau1701.bin" 95 93 96 94 struct adau1701 { 97 95 int gpio_nreset; 96 + int gpio_pll_mode[2]; 98 97 unsigned int dai_fmt; 98 + unsigned int pll_clkdiv; 99 + unsigned int sysclk; 100 + struct regmap *regmap; 101 + u8 pin_config[12]; 99 102 }; 100 103 101 104 static const struct snd_kcontrol_new adau1701_controls[] = { ··· 134 123 { "ADC", NULL, "IN1" }, 135 124 }; 136 125 137 - static unsigned int adau1701_register_size(struct snd_soc_codec *codec, 126 + static unsigned int adau1701_register_size(struct device *dev, 138 127 unsigned int reg) 139 128 { 140 129 switch (reg) { 130 + case ADAU1701_PINCONF_0: 131 + case ADAU1701_PINCONF_1: 132 + return 3; 141 133 case ADAU1701_DSPCTRL: 142 134 case ADAU1701_SEROCTL: 143 135 case ADAU1701_AUXNPOW: ··· 151 137 return 1; 152 138 } 153 139 154 - 
dev_err(codec->dev, "Unsupported register address: %d\n", reg); 140 + dev_err(dev, "Unsupported register address: %d\n", reg); 155 141 return 0; 156 142 } 157 143 158 - static int adau1701_write(struct snd_soc_codec *codec, unsigned int reg, 159 - unsigned int value) 144 + static bool adau1701_volatile_reg(struct device *dev, unsigned int reg) 160 145 { 146 + switch (reg) { 147 + case ADAU1701_DACSET: 148 + return true; 149 + default: 150 + return false; 151 + } 152 + } 153 + 154 + static int adau1701_reg_write(void *context, unsigned int reg, 155 + unsigned int value) 156 + { 157 + struct i2c_client *client = context; 161 158 unsigned int i; 162 159 unsigned int size; 163 - uint8_t buf[4]; 160 + uint8_t buf[5]; 164 161 int ret; 165 162 166 - size = adau1701_register_size(codec, reg); 163 + size = adau1701_register_size(&client->dev, reg); 167 164 if (size == 0) 168 165 return -EINVAL; 169 166 170 - snd_soc_cache_write(codec, reg, value); 171 - 172 - buf[0] = 0x08; 173 - buf[1] = reg; 167 + buf[0] = reg >> 8; 168 + buf[1] = reg & 0xff; 174 169 175 170 for (i = size + 1; i >= 2; --i) { 176 171 buf[i] = value; 177 172 value >>= 8; 178 173 } 179 174 180 - ret = i2c_master_send(to_i2c_client(codec->dev), buf, size + 2); 175 + ret = i2c_master_send(client, buf, size + 2); 181 176 if (ret == size + 2) 182 177 return 0; 183 178 else if (ret < 0) ··· 195 172 return -EIO; 196 173 } 197 174 198 - static unsigned int adau1701_read(struct snd_soc_codec *codec, unsigned int reg) 199 - { 200 - unsigned int value; 201 - unsigned int ret; 202 - 203 - ret = snd_soc_cache_read(codec, reg, &value); 204 - if (ret) 205 - return ret; 206 - 207 - return value; 208 - } 209 - 210 - static void adau1701_reset(struct snd_soc_codec *codec) 211 - { 212 - struct adau1701 *adau1701 = snd_soc_codec_get_drvdata(codec); 213 - 214 - if (!gpio_is_valid(adau1701->gpio_nreset)) 215 - return; 216 - 217 - gpio_set_value(adau1701->gpio_nreset, 0); 218 - /* minimum reset time is 20ns */ 219 - udelay(1); 
220 - gpio_set_value(adau1701->gpio_nreset, 1); 221 - /* power-up time may be as long as 85ms */ 222 - mdelay(85); 223 - } 224 - 225 - static int adau1701_init(struct snd_soc_codec *codec) 175 + static int adau1701_reg_read(void *context, unsigned int reg, 176 + unsigned int *value) 226 177 { 227 178 int ret; 228 - struct i2c_client *client = to_i2c_client(codec->dev); 179 + unsigned int i; 180 + unsigned int size; 181 + uint8_t send_buf[2], recv_buf[3]; 182 + struct i2c_client *client = context; 183 + struct i2c_msg msgs[2]; 229 184 230 - adau1701_reset(codec); 185 + size = adau1701_register_size(&client->dev, reg); 186 + if (size == 0) 187 + return -EINVAL; 231 188 232 - ret = process_sigma_firmware(client, ADAU1701_FIRMWARE); 233 - if (ret) { 234 - dev_warn(codec->dev, "Failed to load firmware\n"); 189 + send_buf[0] = reg >> 8; 190 + send_buf[1] = reg & 0xff; 191 + 192 + msgs[0].addr = client->addr; 193 + msgs[0].len = sizeof(send_buf); 194 + msgs[0].buf = send_buf; 195 + msgs[0].flags = 0; 196 + 197 + msgs[1].addr = client->addr; 198 + msgs[1].len = size; 199 + msgs[1].buf = recv_buf; 200 + msgs[1].flags = I2C_M_RD; 201 + 202 + ret = i2c_transfer(client->adapter, msgs, ARRAY_SIZE(msgs)); 203 + if (ret < 0) 235 204 return ret; 205 + else if (ret != ARRAY_SIZE(msgs)) 206 + return -EIO; 207 + 208 + *value = 0; 209 + 210 + for (i = 0; i < size; i++) 211 + *value = (*value << 8) | recv_buf[i]; 212 + 213 + return 0; 214 + } 215 + 216 + static int adau1701_reset(struct snd_soc_codec *codec, unsigned int clkdiv) 217 + { 218 + struct adau1701 *adau1701 = snd_soc_codec_get_drvdata(codec); 219 + struct i2c_client *client = to_i2c_client(codec->dev); 220 + int ret; 221 + 222 + if (clkdiv != ADAU1707_CLKDIV_UNSET && 223 + gpio_is_valid(adau1701->gpio_pll_mode[0]) && 224 + gpio_is_valid(adau1701->gpio_pll_mode[1])) { 225 + switch (clkdiv) { 226 + case 64: 227 + gpio_set_value(adau1701->gpio_pll_mode[0], 0); 228 + gpio_set_value(adau1701->gpio_pll_mode[1], 0); 229 + break; 230 + 
case 256: 231 + gpio_set_value(adau1701->gpio_pll_mode[0], 0); 232 + gpio_set_value(adau1701->gpio_pll_mode[1], 1); 233 + break; 234 + case 384: 235 + gpio_set_value(adau1701->gpio_pll_mode[0], 1); 236 + gpio_set_value(adau1701->gpio_pll_mode[1], 0); 237 + break; 238 + case 0: /* fallback */ 239 + case 512: 240 + gpio_set_value(adau1701->gpio_pll_mode[0], 1); 241 + gpio_set_value(adau1701->gpio_pll_mode[1], 1); 242 + break; 243 + } 236 244 } 237 245 238 - snd_soc_write(codec, ADAU1701_DACSET, ADAU1701_DACSET_DACINIT); 246 + adau1701->pll_clkdiv = clkdiv; 247 + 248 + if (gpio_is_valid(adau1701->gpio_nreset)) { 249 + gpio_set_value(adau1701->gpio_nreset, 0); 250 + /* minimum reset time is 20ns */ 251 + udelay(1); 252 + gpio_set_value(adau1701->gpio_nreset, 1); 253 + /* power-up time may be as long as 85ms */ 254 + mdelay(85); 255 + } 256 + 257 + /* 258 + * Postpone the firmware download to a point in time when we 259 + * know the correct PLL setup 260 + */ 261 + if (clkdiv != ADAU1707_CLKDIV_UNSET) { 262 + ret = process_sigma_firmware(client, ADAU1701_FIRMWARE); 263 + if (ret) { 264 + dev_warn(codec->dev, "Failed to load firmware\n"); 265 + return ret; 266 + } 267 + } 268 + 269 + regmap_write(adau1701->regmap, ADAU1701_DACSET, ADAU1701_DACSET_DACINIT); 270 + regmap_write(adau1701->regmap, ADAU1701_DSPCTRL, ADAU1701_DSPCTRL_CR); 271 + 272 + regcache_mark_dirty(adau1701->regmap); 273 + regcache_sync(adau1701->regmap); 239 274 240 275 return 0; 241 276 } ··· 372 291 struct snd_pcm_hw_params *params, struct snd_soc_dai *dai) 373 292 { 374 293 struct snd_soc_codec *codec = dai->codec; 294 + struct adau1701 *adau1701 = snd_soc_codec_get_drvdata(codec); 295 + unsigned int clkdiv = adau1701->sysclk / params_rate(params); 375 296 snd_pcm_format_t format; 376 297 unsigned int val; 298 + int ret; 299 + 300 + /* 301 + * If the mclk/lrclk ratio changes, the chip needs updated PLL 302 + * mode GPIO settings, and a full reset cycle, including a new 303 + * firmware upload. 
304 + */ 305 + if (clkdiv != adau1701->pll_clkdiv) { 306 + ret = adau1701_reset(codec, clkdiv); 307 + if (ret < 0) 308 + return ret; 309 + } 377 310 378 311 switch (params_rate(params)) { 379 312 case 192000: ··· 479 384 480 385 adau1701->dai_fmt = fmt & SND_SOC_DAIFMT_FORMAT_MASK; 481 386 482 - snd_soc_write(codec, ADAU1701_SERICTL, serictl); 483 - snd_soc_update_bits(codec, ADAU1701_SEROCTL, 387 + regmap_write(adau1701->regmap, ADAU1701_SERICTL, serictl); 388 + regmap_update_bits(adau1701->regmap, ADAU1701_SEROCTL, 484 389 ~ADAU1701_SEROCTL_WORD_LEN_MASK, seroctl); 485 390 486 391 return 0; ··· 530 435 int source, unsigned int freq, int dir) 531 436 { 532 437 unsigned int val; 438 + struct adau1701 *adau1701 = snd_soc_codec_get_drvdata(codec); 533 439 534 440 switch (clk_id) { 535 441 case ADAU1701_CLK_SRC_OSC: ··· 544 448 } 545 449 546 450 snd_soc_update_bits(codec, ADAU1701_OSCIPOW, ADAU1701_OSCIPOW_OPD, val); 451 + adau1701->sysclk = freq; 547 452 548 453 return 0; 549 454 } ··· 591 494 592 495 static int adau1701_probe(struct snd_soc_codec *codec) 593 496 { 594 - int ret; 497 + int i, ret; 498 + unsigned int val; 499 + struct adau1701 *adau1701 = snd_soc_codec_get_drvdata(codec); 595 500 596 501 codec->control_data = to_i2c_client(codec->dev); 597 502 598 - ret = adau1701_init(codec); 599 - if (ret) 503 + /* 504 + * Let the pll_clkdiv variable default to something that won't happen 505 + * at runtime. That way, we can postpone the firmware download from 506 + * adau1701_reset() to a point in time when we know the correct PLL 507 + * mode parameters. 
508 + */ 509 + adau1701->pll_clkdiv = ADAU1707_CLKDIV_UNSET; 510 + 511 + /* initialize with pre-configured pll mode settings */ 512 + ret = adau1701_reset(codec, adau1701->pll_clkdiv); 513 + if (ret < 0) 600 514 return ret; 601 515 602 - snd_soc_write(codec, ADAU1701_DSPCTRL, ADAU1701_DSPCTRL_CR); 516 + /* set up pin config */ 517 + val = 0; 518 + for (i = 0; i < 6; i++) 519 + val |= adau1701->pin_config[i] << (i * 4); 520 + 521 + regmap_write(adau1701->regmap, ADAU1701_PINCONF_0, val); 522 + 523 + val = 0; 524 + for (i = 0; i < 6; i++) 525 + val |= adau1701->pin_config[i + 6] << (i * 4); 526 + 527 + regmap_write(adau1701->regmap, ADAU1701_PINCONF_1, val); 603 528 604 529 return 0; 605 530 } ··· 631 512 .set_bias_level = adau1701_set_bias_level, 632 513 .idle_bias_off = true, 633 514 634 - .reg_cache_size = ADAU1701_NUM_REGS, 635 - .reg_word_size = sizeof(u16), 636 - 637 515 .controls = adau1701_controls, 638 516 .num_controls = ARRAY_SIZE(adau1701_controls), 639 517 .dapm_widgets = adau1701_dapm_widgets, ··· 638 522 .dapm_routes = adau1701_dapm_routes, 639 523 .num_dapm_routes = ARRAY_SIZE(adau1701_dapm_routes), 640 524 641 - .write = adau1701_write, 642 - .read = adau1701_read, 643 - 644 525 .set_sysclk = adau1701_set_sysclk, 526 + }; 527 + 528 + static const struct regmap_config adau1701_regmap = { 529 + .reg_bits = 16, 530 + .val_bits = 32, 531 + .max_register = ADAU1701_MAX_REGISTER, 532 + .cache_type = REGCACHE_RBTREE, 533 + .volatile_reg = adau1701_volatile_reg, 534 + .reg_write = adau1701_reg_write, 535 + .reg_read = adau1701_reg_read, 645 536 }; 646 537 647 538 static int adau1701_i2c_probe(struct i2c_client *client, ··· 657 534 struct adau1701 *adau1701; 658 535 struct device *dev = &client->dev; 659 536 int gpio_nreset = -EINVAL; 537 + int gpio_pll_mode[2] = { -EINVAL, -EINVAL }; 660 538 int ret; 661 539 662 540 adau1701 = devm_kzalloc(dev, sizeof(*adau1701), GFP_KERNEL); 663 541 if (!adau1701) 664 542 return -ENOMEM; 665 543 544 + adau1701->regmap = 
devm_regmap_init(dev, NULL, client, 545 + &adau1701_regmap); 546 + if (IS_ERR(adau1701->regmap)) 547 + return PTR_ERR(adau1701->regmap); 548 + 666 549 if (dev->of_node) { 667 550 gpio_nreset = of_get_named_gpio(dev->of_node, "reset-gpio", 0); 668 551 if (gpio_nreset < 0 && gpio_nreset != -ENOENT) 669 552 return gpio_nreset; 553 + 554 + gpio_pll_mode[0] = of_get_named_gpio(dev->of_node, 555 + "adi,pll-mode-gpios", 0); 556 + if (gpio_pll_mode[0] < 0 && gpio_pll_mode[0] != -ENOENT) 557 + return gpio_pll_mode[0]; 558 + 559 + gpio_pll_mode[1] = of_get_named_gpio(dev->of_node, 560 + "adi,pll-mode-gpios", 1); 561 + if (gpio_pll_mode[1] < 0 && gpio_pll_mode[1] != -ENOENT) 562 + return gpio_pll_mode[1]; 563 + 564 + of_property_read_u32(dev->of_node, "adi,pll-clkdiv", 565 + &adau1701->pll_clkdiv); 566 + 567 + of_property_read_u8_array(dev->of_node, "adi,pin-config", 568 + adau1701->pin_config, 569 + ARRAY_SIZE(adau1701->pin_config)); 670 570 } 671 571 672 572 if (gpio_is_valid(gpio_nreset)) { ··· 699 553 return ret; 700 554 } 701 555 556 + if (gpio_is_valid(gpio_pll_mode[0]) && 557 + gpio_is_valid(gpio_pll_mode[1])) { 558 + ret = devm_gpio_request_one(dev, gpio_pll_mode[0], 559 + GPIOF_OUT_INIT_LOW, 560 + "ADAU1701 PLL mode 0"); 561 + if (ret < 0) 562 + return ret; 563 + 564 + ret = devm_gpio_request_one(dev, gpio_pll_mode[1], 565 + GPIOF_OUT_INIT_LOW, 566 + "ADAU1701 PLL mode 1"); 567 + if (ret < 0) 568 + return ret; 569 + } 570 + 702 571 adau1701->gpio_nreset = gpio_nreset; 572 + adau1701->gpio_pll_mode[0] = gpio_pll_mode[0]; 573 + adau1701->gpio_pll_mode[1] = gpio_pll_mode[1]; 703 574 704 575 i2c_set_clientdata(client, adau1701); 705 576 ret = snd_soc_register_codec(&client->dev, &adau1701_codec_drv,
+11 -15
sound/soc/codecs/stac9766.c
··· 28 28 29 29 #include "stac9766.h" 30 30 31 - #define STAC9766_VERSION "0.10" 32 - 33 31 /* 34 32 * STAC9766 register cache 35 33 */ ··· 143 145 144 146 if (reg > AC97_STAC_PAGE0) { 145 147 stac9766_ac97_write(codec, AC97_INT_PAGING, 0); 146 - soc_ac97_ops.write(codec->ac97, reg, val); 148 + soc_ac97_ops->write(codec->ac97, reg, val); 147 149 stac9766_ac97_write(codec, AC97_INT_PAGING, 1); 148 150 return 0; 149 151 } 150 152 if (reg / 2 >= ARRAY_SIZE(stac9766_reg)) 151 153 return -EIO; 152 154 153 - soc_ac97_ops.write(codec->ac97, reg, val); 155 + soc_ac97_ops->write(codec->ac97, reg, val); 154 156 cache[reg / 2] = val; 155 157 return 0; 156 158 } ··· 162 164 163 165 if (reg > AC97_STAC_PAGE0) { 164 166 stac9766_ac97_write(codec, AC97_INT_PAGING, 0); 165 - val = soc_ac97_ops.read(codec->ac97, reg - AC97_STAC_PAGE0); 167 + val = soc_ac97_ops->read(codec->ac97, reg - AC97_STAC_PAGE0); 166 168 stac9766_ac97_write(codec, AC97_INT_PAGING, 1); 167 169 return val; 168 170 } ··· 173 175 reg == AC97_INT_PAGING || reg == AC97_VENDOR_ID1 || 174 176 reg == AC97_VENDOR_ID2) { 175 177 176 - val = soc_ac97_ops.read(codec->ac97, reg); 178 + val = soc_ac97_ops->read(codec->ac97, reg); 177 179 return val; 178 180 } 179 181 return cache[reg / 2]; ··· 240 242 241 243 static int stac9766_reset(struct snd_soc_codec *codec, int try_warm) 242 244 { 243 - if (try_warm && soc_ac97_ops.warm_reset) { 244 - soc_ac97_ops.warm_reset(codec->ac97); 245 + if (try_warm && soc_ac97_ops->warm_reset) { 246 + soc_ac97_ops->warm_reset(codec->ac97); 245 247 if (stac9766_ac97_read(codec, 0) == stac9766_reg[0]) 246 248 return 1; 247 249 } 248 250 249 - soc_ac97_ops.reset(codec->ac97); 250 - if (soc_ac97_ops.warm_reset) 251 - soc_ac97_ops.warm_reset(codec->ac97); 251 + soc_ac97_ops->reset(codec->ac97); 252 + if (soc_ac97_ops->warm_reset) 253 + soc_ac97_ops->warm_reset(codec->ac97); 252 254 if (stac9766_ac97_read(codec, 0) != stac9766_reg[0]) 253 255 return -EIO; 254 256 return 0; ··· 272 274 return -EIO; 
273 275 } 274 276 codec->ac97->bus->ops->warm_reset(codec->ac97); 275 - id = soc_ac97_ops.read(codec->ac97, AC97_VENDOR_ID2); 277 + id = soc_ac97_ops->read(codec->ac97, AC97_VENDOR_ID2); 276 278 if (id != 0x4c13) { 277 279 stac9766_reset(codec, 0); 278 280 reset++; ··· 336 338 { 337 339 int ret = 0; 338 340 339 - printk(KERN_INFO "STAC9766 SoC Audio Codec %s\n", STAC9766_VERSION); 340 - 341 - ret = snd_soc_new_ac97_codec(codec, &soc_ac97_ops, 0); 341 + ret = snd_soc_new_ac97_codec(codec, soc_ac97_ops, 0); 342 342 if (ret < 0) 343 343 goto codec_err; 344 344
+326 -4
sound/soc/codecs/tas5086.c
··· 83 83 #define TAS5086_SPLIT_CAP_CHARGE 0x1a /* Split cap charge period register */ 84 84 #define TAS5086_OSC_TRIM 0x1b /* Oscillator trim register */ 85 85 #define TAS5086_BKNDERR 0x1c 86 + #define TAS5086_INPUT_MUX 0x20 87 + #define TAS5086_PWM_OUTPUT_MUX 0x25 88 + 89 + #define TAS5086_MAX_REGISTER TAS5086_PWM_OUTPUT_MUX 90 + 91 + #define TAS5086_PWM_START_MIDZ_FOR_START_1 (1 << 7) 92 + #define TAS5086_PWM_START_MIDZ_FOR_START_2 (1 << 6) 93 + #define TAS5086_PWM_START_CHANNEL_MASK (0x3f) 86 94 87 95 /* 88 96 * Default TAS5086 power-up configuration ··· 127 119 { 0x1c, 0x05 }, 128 120 }; 129 121 122 + static int tas5086_register_size(struct device *dev, unsigned int reg) 123 + { 124 + switch (reg) { 125 + case TAS5086_CLOCK_CONTROL ... TAS5086_BKNDERR: 126 + return 1; 127 + case TAS5086_INPUT_MUX: 128 + case TAS5086_PWM_OUTPUT_MUX: 129 + return 4; 130 + } 131 + 132 + dev_err(dev, "Unsupported register address: %d\n", reg); 133 + return 0; 134 + } 135 + 130 136 static bool tas5086_accessible_reg(struct device *dev, unsigned int reg) 131 137 { 132 - return !((reg == 0x0f) || (reg >= 0x11 && reg <= 0x17)); 138 + switch (reg) { 139 + case 0x0f: 140 + case 0x11 ... 0x17: 141 + case 0x1d ... 
0x1f: 142 + return false; 143 + default: 144 + return true; 145 + } 133 146 } 134 147 135 148 static bool tas5086_volatile_reg(struct device *dev, unsigned int reg) ··· 167 138 static bool tas5086_writeable_reg(struct device *dev, unsigned int reg) 168 139 { 169 140 return tas5086_accessible_reg(dev, reg) && (reg != TAS5086_DEV_ID); 141 + } 142 + 143 + static int tas5086_reg_write(void *context, unsigned int reg, 144 + unsigned int value) 145 + { 146 + struct i2c_client *client = context; 147 + unsigned int i, size; 148 + uint8_t buf[5]; 149 + int ret; 150 + 151 + size = tas5086_register_size(&client->dev, reg); 152 + if (size == 0) 153 + return -EINVAL; 154 + 155 + buf[0] = reg; 156 + 157 + for (i = size; i >= 1; --i) { 158 + buf[i] = value; 159 + value >>= 8; 160 + } 161 + 162 + ret = i2c_master_send(client, buf, size + 1); 163 + if (ret == size + 1) 164 + return 0; 165 + else if (ret < 0) 166 + return ret; 167 + else 168 + return -EIO; 169 + } 170 + 171 + static int tas5086_reg_read(void *context, unsigned int reg, 172 + unsigned int *value) 173 + { 174 + struct i2c_client *client = context; 175 + uint8_t send_buf, recv_buf[4]; 176 + struct i2c_msg msgs[2]; 177 + unsigned int size; 178 + unsigned int i; 179 + int ret; 180 + 181 + size = tas5086_register_size(&client->dev, reg); 182 + if (size == 0) 183 + return -EINVAL; 184 + 185 + send_buf = reg; 186 + 187 + msgs[0].addr = client->addr; 188 + msgs[0].len = sizeof(send_buf); 189 + msgs[0].buf = &send_buf; 190 + msgs[0].flags = 0; 191 + 192 + msgs[1].addr = client->addr; 193 + msgs[1].len = size; 194 + msgs[1].buf = recv_buf; 195 + msgs[1].flags = I2C_M_RD; 196 + 197 + ret = i2c_transfer(client->adapter, msgs, ARRAY_SIZE(msgs)); 198 + if (ret < 0) 199 + return ret; 200 + else if (ret != ARRAY_SIZE(msgs)) 201 + return -EIO; 202 + 203 + *value = 0; 204 + 205 + for (i = 0; i < size; i++) { 206 + *value <<= 8; 207 + *value |= recv_buf[i]; 208 + } 209 + 210 + return 0; 170 211 } 171 212 172 213 struct tas5086_private 
{ ··· 475 376 tas5086_get_deemph, tas5086_put_deemph), 476 377 }; 477 378 379 + /* Input mux controls */ 380 + static const char *tas5086_dapm_sdin_texts[] = 381 + { 382 + "SDIN1-L", "SDIN1-R", "SDIN2-L", "SDIN2-R", 383 + "SDIN3-L", "SDIN3-R", "Ground (0)", "nc" 384 + }; 385 + 386 + static const struct soc_enum tas5086_dapm_input_mux_enum[] = { 387 + SOC_ENUM_SINGLE(TAS5086_INPUT_MUX, 20, 8, tas5086_dapm_sdin_texts), 388 + SOC_ENUM_SINGLE(TAS5086_INPUT_MUX, 16, 8, tas5086_dapm_sdin_texts), 389 + SOC_ENUM_SINGLE(TAS5086_INPUT_MUX, 12, 8, tas5086_dapm_sdin_texts), 390 + SOC_ENUM_SINGLE(TAS5086_INPUT_MUX, 8, 8, tas5086_dapm_sdin_texts), 391 + SOC_ENUM_SINGLE(TAS5086_INPUT_MUX, 4, 8, tas5086_dapm_sdin_texts), 392 + SOC_ENUM_SINGLE(TAS5086_INPUT_MUX, 0, 8, tas5086_dapm_sdin_texts), 393 + }; 394 + 395 + static const struct snd_kcontrol_new tas5086_dapm_input_mux_controls[] = { 396 + SOC_DAPM_ENUM("Channel 1 input", tas5086_dapm_input_mux_enum[0]), 397 + SOC_DAPM_ENUM("Channel 2 input", tas5086_dapm_input_mux_enum[1]), 398 + SOC_DAPM_ENUM("Channel 3 input", tas5086_dapm_input_mux_enum[2]), 399 + SOC_DAPM_ENUM("Channel 4 input", tas5086_dapm_input_mux_enum[3]), 400 + SOC_DAPM_ENUM("Channel 5 input", tas5086_dapm_input_mux_enum[4]), 401 + SOC_DAPM_ENUM("Channel 6 input", tas5086_dapm_input_mux_enum[5]), 402 + }; 403 + 404 + /* Output mux controls */ 405 + static const char *tas5086_dapm_channel_texts[] = 406 + { "Channel 1 Mux", "Channel 2 Mux", "Channel 3 Mux", 407 + "Channel 4 Mux", "Channel 5 Mux", "Channel 6 Mux" }; 408 + 409 + static const struct soc_enum tas5086_dapm_output_mux_enum[] = { 410 + SOC_ENUM_SINGLE(TAS5086_PWM_OUTPUT_MUX, 20, 6, tas5086_dapm_channel_texts), 411 + SOC_ENUM_SINGLE(TAS5086_PWM_OUTPUT_MUX, 16, 6, tas5086_dapm_channel_texts), 412 + SOC_ENUM_SINGLE(TAS5086_PWM_OUTPUT_MUX, 12, 6, tas5086_dapm_channel_texts), 413 + SOC_ENUM_SINGLE(TAS5086_PWM_OUTPUT_MUX, 8, 6, tas5086_dapm_channel_texts), 414 + SOC_ENUM_SINGLE(TAS5086_PWM_OUTPUT_MUX, 4, 6, 
tas5086_dapm_channel_texts), 415 + SOC_ENUM_SINGLE(TAS5086_PWM_OUTPUT_MUX, 0, 6, tas5086_dapm_channel_texts), 416 + }; 417 + 418 + static const struct snd_kcontrol_new tas5086_dapm_output_mux_controls[] = { 419 + SOC_DAPM_ENUM("PWM1 Output", tas5086_dapm_output_mux_enum[0]), 420 + SOC_DAPM_ENUM("PWM2 Output", tas5086_dapm_output_mux_enum[1]), 421 + SOC_DAPM_ENUM("PWM3 Output", tas5086_dapm_output_mux_enum[2]), 422 + SOC_DAPM_ENUM("PWM4 Output", tas5086_dapm_output_mux_enum[3]), 423 + SOC_DAPM_ENUM("PWM5 Output", tas5086_dapm_output_mux_enum[4]), 424 + SOC_DAPM_ENUM("PWM6 Output", tas5086_dapm_output_mux_enum[5]), 425 + }; 426 + 427 + static const struct snd_soc_dapm_widget tas5086_dapm_widgets[] = { 428 + SND_SOC_DAPM_INPUT("SDIN1-L"), 429 + SND_SOC_DAPM_INPUT("SDIN1-R"), 430 + SND_SOC_DAPM_INPUT("SDIN2-L"), 431 + SND_SOC_DAPM_INPUT("SDIN2-R"), 432 + SND_SOC_DAPM_INPUT("SDIN3-L"), 433 + SND_SOC_DAPM_INPUT("SDIN3-R"), 434 + SND_SOC_DAPM_INPUT("SDIN4-L"), 435 + SND_SOC_DAPM_INPUT("SDIN4-R"), 436 + 437 + SND_SOC_DAPM_OUTPUT("PWM1"), 438 + SND_SOC_DAPM_OUTPUT("PWM2"), 439 + SND_SOC_DAPM_OUTPUT("PWM3"), 440 + SND_SOC_DAPM_OUTPUT("PWM4"), 441 + SND_SOC_DAPM_OUTPUT("PWM5"), 442 + SND_SOC_DAPM_OUTPUT("PWM6"), 443 + 444 + SND_SOC_DAPM_MUX("Channel 1 Mux", SND_SOC_NOPM, 0, 0, 445 + &tas5086_dapm_input_mux_controls[0]), 446 + SND_SOC_DAPM_MUX("Channel 2 Mux", SND_SOC_NOPM, 0, 0, 447 + &tas5086_dapm_input_mux_controls[1]), 448 + SND_SOC_DAPM_MUX("Channel 3 Mux", SND_SOC_NOPM, 0, 0, 449 + &tas5086_dapm_input_mux_controls[2]), 450 + SND_SOC_DAPM_MUX("Channel 4 Mux", SND_SOC_NOPM, 0, 0, 451 + &tas5086_dapm_input_mux_controls[3]), 452 + SND_SOC_DAPM_MUX("Channel 5 Mux", SND_SOC_NOPM, 0, 0, 453 + &tas5086_dapm_input_mux_controls[4]), 454 + SND_SOC_DAPM_MUX("Channel 6 Mux", SND_SOC_NOPM, 0, 0, 455 + &tas5086_dapm_input_mux_controls[5]), 456 + 457 + SND_SOC_DAPM_MUX("PWM1 Mux", SND_SOC_NOPM, 0, 0, 458 + &tas5086_dapm_output_mux_controls[0]), 459 + SND_SOC_DAPM_MUX("PWM2 Mux", 
SND_SOC_NOPM, 0, 0, 460 + &tas5086_dapm_output_mux_controls[1]), 461 + SND_SOC_DAPM_MUX("PWM3 Mux", SND_SOC_NOPM, 0, 0, 462 + &tas5086_dapm_output_mux_controls[2]), 463 + SND_SOC_DAPM_MUX("PWM4 Mux", SND_SOC_NOPM, 0, 0, 464 + &tas5086_dapm_output_mux_controls[3]), 465 + SND_SOC_DAPM_MUX("PWM5 Mux", SND_SOC_NOPM, 0, 0, 466 + &tas5086_dapm_output_mux_controls[4]), 467 + SND_SOC_DAPM_MUX("PWM6 Mux", SND_SOC_NOPM, 0, 0, 468 + &tas5086_dapm_output_mux_controls[5]), 469 + }; 470 + 471 + static const struct snd_soc_dapm_route tas5086_dapm_routes[] = { 472 + /* SDIN inputs -> channel muxes */ 473 + { "Channel 1 Mux", "SDIN1-L", "SDIN1-L" }, 474 + { "Channel 1 Mux", "SDIN1-R", "SDIN1-R" }, 475 + { "Channel 1 Mux", "SDIN2-L", "SDIN2-L" }, 476 + { "Channel 1 Mux", "SDIN2-R", "SDIN2-R" }, 477 + { "Channel 1 Mux", "SDIN3-L", "SDIN3-L" }, 478 + { "Channel 1 Mux", "SDIN3-R", "SDIN3-R" }, 479 + 480 + { "Channel 2 Mux", "SDIN1-L", "SDIN1-L" }, 481 + { "Channel 2 Mux", "SDIN1-R", "SDIN1-R" }, 482 + { "Channel 2 Mux", "SDIN2-L", "SDIN2-L" }, 483 + { "Channel 2 Mux", "SDIN2-R", "SDIN2-R" }, 484 + { "Channel 2 Mux", "SDIN3-L", "SDIN3-L" }, 485 + { "Channel 2 Mux", "SDIN3-R", "SDIN3-R" }, 486 + 487 + { "Channel 2 Mux", "SDIN1-L", "SDIN1-L" }, 488 + { "Channel 2 Mux", "SDIN1-R", "SDIN1-R" }, 489 + { "Channel 2 Mux", "SDIN2-L", "SDIN2-L" }, 490 + { "Channel 2 Mux", "SDIN2-R", "SDIN2-R" }, 491 + { "Channel 2 Mux", "SDIN3-L", "SDIN3-L" }, 492 + { "Channel 2 Mux", "SDIN3-R", "SDIN3-R" }, 493 + 494 + { "Channel 3 Mux", "SDIN1-L", "SDIN1-L" }, 495 + { "Channel 3 Mux", "SDIN1-R", "SDIN1-R" }, 496 + { "Channel 3 Mux", "SDIN2-L", "SDIN2-L" }, 497 + { "Channel 3 Mux", "SDIN2-R", "SDIN2-R" }, 498 + { "Channel 3 Mux", "SDIN3-L", "SDIN3-L" }, 499 + { "Channel 3 Mux", "SDIN3-R", "SDIN3-R" }, 500 + 501 + { "Channel 4 Mux", "SDIN1-L", "SDIN1-L" }, 502 + { "Channel 4 Mux", "SDIN1-R", "SDIN1-R" }, 503 + { "Channel 4 Mux", "SDIN2-L", "SDIN2-L" }, 504 + { "Channel 4 Mux", "SDIN2-R", "SDIN2-R" }, 505 + { 
"Channel 4 Mux", "SDIN3-L", "SDIN3-L" }, 506 + { "Channel 4 Mux", "SDIN3-R", "SDIN3-R" }, 507 + 508 + { "Channel 5 Mux", "SDIN1-L", "SDIN1-L" }, 509 + { "Channel 5 Mux", "SDIN1-R", "SDIN1-R" }, 510 + { "Channel 5 Mux", "SDIN2-L", "SDIN2-L" }, 511 + { "Channel 5 Mux", "SDIN2-R", "SDIN2-R" }, 512 + { "Channel 5 Mux", "SDIN3-L", "SDIN3-L" }, 513 + { "Channel 5 Mux", "SDIN3-R", "SDIN3-R" }, 514 + 515 + { "Channel 6 Mux", "SDIN1-L", "SDIN1-L" }, 516 + { "Channel 6 Mux", "SDIN1-R", "SDIN1-R" }, 517 + { "Channel 6 Mux", "SDIN2-L", "SDIN2-L" }, 518 + { "Channel 6 Mux", "SDIN2-R", "SDIN2-R" }, 519 + { "Channel 6 Mux", "SDIN3-L", "SDIN3-L" }, 520 + { "Channel 6 Mux", "SDIN3-R", "SDIN3-R" }, 521 + 522 + /* Channel muxes -> PWM muxes */ 523 + { "PWM1 Mux", "Channel 1 Mux", "Channel 1 Mux" }, 524 + { "PWM2 Mux", "Channel 1 Mux", "Channel 1 Mux" }, 525 + { "PWM3 Mux", "Channel 1 Mux", "Channel 1 Mux" }, 526 + { "PWM4 Mux", "Channel 1 Mux", "Channel 1 Mux" }, 527 + { "PWM5 Mux", "Channel 1 Mux", "Channel 1 Mux" }, 528 + { "PWM6 Mux", "Channel 1 Mux", "Channel 1 Mux" }, 529 + 530 + { "PWM1 Mux", "Channel 2 Mux", "Channel 2 Mux" }, 531 + { "PWM2 Mux", "Channel 2 Mux", "Channel 2 Mux" }, 532 + { "PWM3 Mux", "Channel 2 Mux", "Channel 2 Mux" }, 533 + { "PWM4 Mux", "Channel 2 Mux", "Channel 2 Mux" }, 534 + { "PWM5 Mux", "Channel 2 Mux", "Channel 2 Mux" }, 535 + { "PWM6 Mux", "Channel 2 Mux", "Channel 2 Mux" }, 536 + 537 + { "PWM1 Mux", "Channel 3 Mux", "Channel 3 Mux" }, 538 + { "PWM2 Mux", "Channel 3 Mux", "Channel 3 Mux" }, 539 + { "PWM3 Mux", "Channel 3 Mux", "Channel 3 Mux" }, 540 + { "PWM4 Mux", "Channel 3 Mux", "Channel 3 Mux" }, 541 + { "PWM5 Mux", "Channel 3 Mux", "Channel 3 Mux" }, 542 + { "PWM6 Mux", "Channel 3 Mux", "Channel 3 Mux" }, 543 + 544 + { "PWM1 Mux", "Channel 4 Mux", "Channel 4 Mux" }, 545 + { "PWM2 Mux", "Channel 4 Mux", "Channel 4 Mux" }, 546 + { "PWM3 Mux", "Channel 4 Mux", "Channel 4 Mux" }, 547 + { "PWM4 Mux", "Channel 4 Mux", "Channel 4 Mux" }, 548 + { "PWM5 
Mux", "Channel 4 Mux", "Channel 4 Mux" }, 549 + { "PWM6 Mux", "Channel 4 Mux", "Channel 4 Mux" }, 550 + 551 + { "PWM1 Mux", "Channel 5 Mux", "Channel 5 Mux" }, 552 + { "PWM2 Mux", "Channel 5 Mux", "Channel 5 Mux" }, 553 + { "PWM3 Mux", "Channel 5 Mux", "Channel 5 Mux" }, 554 + { "PWM4 Mux", "Channel 5 Mux", "Channel 5 Mux" }, 555 + { "PWM5 Mux", "Channel 5 Mux", "Channel 5 Mux" }, 556 + { "PWM6 Mux", "Channel 5 Mux", "Channel 5 Mux" }, 557 + 558 + { "PWM1 Mux", "Channel 6 Mux", "Channel 6 Mux" }, 559 + { "PWM2 Mux", "Channel 6 Mux", "Channel 6 Mux" }, 560 + { "PWM3 Mux", "Channel 6 Mux", "Channel 6 Mux" }, 561 + { "PWM4 Mux", "Channel 6 Mux", "Channel 6 Mux" }, 562 + { "PWM5 Mux", "Channel 6 Mux", "Channel 6 Mux" }, 563 + { "PWM6 Mux", "Channel 6 Mux", "Channel 6 Mux" }, 564 + 565 + /* The PWM muxes are directly connected to the PWM outputs */ 566 + { "PWM1", NULL, "PWM1 Mux" }, 567 + { "PWM2", NULL, "PWM2 Mux" }, 568 + { "PWM3", NULL, "PWM3 Mux" }, 569 + { "PWM4", NULL, "PWM4 Mux" }, 570 + { "PWM5", NULL, "PWM5 Mux" }, 571 + { "PWM6", NULL, "PWM6 Mux" }, 572 + 573 + }; 574 + 478 575 static const struct snd_soc_dai_ops tas5086_dai_ops = { 479 576 .hw_params = tas5086_hw_params, 480 577 .set_sysclk = tas5086_set_dai_sysclk, ··· 721 426 { 722 427 struct tas5086_private *priv = snd_soc_codec_get_drvdata(codec); 723 428 int charge_period = 1300000; /* hardware default is 1300 ms */ 429 + u8 pwm_start_mid_z = 0; 724 430 int i, ret; 725 431 726 432 if (of_match_device(of_match_ptr(tas5086_dt_ids), codec->dev)) { 727 433 struct device_node *of_node = codec->dev->of_node; 728 434 of_property_read_u32(of_node, "ti,charge-period", &charge_period); 435 + 436 + for (i = 0; i < 6; i++) { 437 + char name[25]; 438 + 439 + snprintf(name, sizeof(name), 440 + "ti,mid-z-channel-%d", i + 1); 441 + 442 + if (of_get_property(of_node, name, NULL) != NULL) 443 + pwm_start_mid_z |= 1 << i; 444 + } 729 445 } 446 + 447 + /* 448 + * If any of the channels is configured to start in Mid-Z mode, 
449 + * configure 'part 1' of the PWM starts to use Mid-Z, and tell 450 + * all configured mid-z channels to start under 'part 1'. 451 + */ 452 + if (pwm_start_mid_z) 453 + regmap_write(priv->regmap, TAS5086_PWM_START, 454 + TAS5086_PWM_START_MIDZ_FOR_START_1 | 455 + pwm_start_mid_z); 730 456 731 457 /* lookup and set split-capacitor charge period */ 732 458 if (charge_period == 0) { ··· 806 490 .resume = tas5086_soc_resume, 807 491 .controls = tas5086_controls, 808 492 .num_controls = ARRAY_SIZE(tas5086_controls), 493 + .dapm_widgets = tas5086_dapm_widgets, 494 + .num_dapm_widgets = ARRAY_SIZE(tas5086_dapm_widgets), 495 + .dapm_routes = tas5086_dapm_routes, 496 + .num_dapm_routes = ARRAY_SIZE(tas5086_dapm_routes), 809 497 }; 810 498 811 499 static const struct i2c_device_id tas5086_i2c_id[] = { ··· 820 500 821 501 static const struct regmap_config tas5086_regmap = { 822 502 .reg_bits = 8, 823 - .val_bits = 8, 824 - .max_register = ARRAY_SIZE(tas5086_reg_defaults), 503 + .val_bits = 32, 504 + .max_register = TAS5086_MAX_REGISTER, 825 505 .reg_defaults = tas5086_reg_defaults, 826 506 .num_reg_defaults = ARRAY_SIZE(tas5086_reg_defaults), 827 507 .cache_type = REGCACHE_RBTREE, 828 508 .volatile_reg = tas5086_volatile_reg, 829 509 .writeable_reg = tas5086_writeable_reg, 830 510 .readable_reg = tas5086_accessible_reg, 511 + .reg_read = tas5086_reg_read, 512 + .reg_write = tas5086_reg_write, 831 513 }; 832 514 833 515 static int tas5086_i2c_probe(struct i2c_client *i2c, ··· 844 522 if (!priv) 845 523 return -ENOMEM; 846 524 847 - priv->regmap = devm_regmap_init_i2c(i2c, &tas5086_regmap); 525 + priv->regmap = devm_regmap_init(dev, NULL, i2c, &tas5086_regmap); 848 526 if (IS_ERR(priv->regmap)) { 849 527 ret = PTR_ERR(priv->regmap); 850 528 dev_err(&i2c->dev, "Failed to create regmap: %d\n", ret);
+2 -4
sound/soc/codecs/tlv320aic3x.c
··· 128 128 }; 129 129 130 130 #define SOC_DAPM_SINGLE_AIC3X(xname, reg, shift, mask, invert) \ 131 - { .iface = SNDRV_CTL_ELEM_IFACE_MIXER, .name = xname, \ 132 - .info = snd_soc_info_volsw, \ 133 - .get = snd_soc_dapm_get_volsw, .put = snd_soc_dapm_put_volsw_aic3x, \ 134 - .private_value = SOC_SINGLE_VALUE(reg, shift, mask, invert) } 131 + SOC_SINGLE_EXT(xname, reg, shift, mask, invert, \ 132 + snd_soc_dapm_get_volsw, snd_soc_dapm_put_volsw_aic3x) 135 133 136 134 /* 137 135 * All input lines are connected when !0xf and disconnected with 0xf bit field,
+107 -2
sound/soc/codecs/twl6040.c
··· 38 38 39 39 #include "twl6040.h" 40 40 41 + enum twl6040_dai_id { 42 + TWL6040_DAI_LEGACY = 0, 43 + TWL6040_DAI_UL, 44 + TWL6040_DAI_DL1, 45 + TWL6040_DAI_DL2, 46 + TWL6040_DAI_VIB, 47 + }; 48 + 41 49 #define TWL6040_RATES SNDRV_PCM_RATE_8000_96000 42 50 #define TWL6040_FORMATS (SNDRV_PCM_FMTBIT_S32_LE) 43 51 ··· 75 67 int pll_power_mode; 76 68 int hs_power_mode; 77 69 int hs_power_mode_locked; 70 + bool dl1_unmuted; 71 + bool dl2_unmuted; 78 72 unsigned int clk_in; 79 73 unsigned int sysclk; 80 74 struct twl6040_jack_data hs_jack; ··· 230 220 return value; 231 221 } 232 222 223 + static bool twl6040_is_path_unmuted(struct snd_soc_codec *codec, 224 + unsigned int reg) 225 + { 226 + struct twl6040_data *priv = snd_soc_codec_get_drvdata(codec); 227 + 228 + switch (reg) { 229 + case TWL6040_REG_HSLCTL: 230 + case TWL6040_REG_HSRCTL: 231 + case TWL6040_REG_EARCTL: 232 + /* DL1 path */ 233 + return priv->dl1_unmuted; 234 + case TWL6040_REG_HFLCTL: 235 + case TWL6040_REG_HFRCTL: 236 + return priv->dl2_unmuted; 237 + default: 238 + return 1; 239 + }; 240 + } 241 + 233 242 /* 234 243 * write to the twl6040 register space 235 244 */ ··· 261 232 return -EIO; 262 233 263 234 twl6040_write_reg_cache(codec, reg, value); 264 - if (likely(reg < TWL6040_REG_SW_SHADOW)) 235 + if (likely(reg < TWL6040_REG_SW_SHADOW) && 236 + twl6040_is_path_unmuted(codec, reg)) 265 237 return twl6040_reg_write(twl6040, reg, value); 266 238 else 267 239 return 0; ··· 1056 1026 return 0; 1057 1027 } 1058 1028 1029 + static void twl6040_mute_path(struct snd_soc_codec *codec, enum twl6040_dai_id id, 1030 + int mute) 1031 + { 1032 + struct twl6040 *twl6040 = codec->control_data; 1033 + struct twl6040_data *priv = snd_soc_codec_get_drvdata(codec); 1034 + int hslctl, hsrctl, earctl; 1035 + int hflctl, hfrctl; 1036 + 1037 + switch (id) { 1038 + case TWL6040_DAI_DL1: 1039 + hslctl = twl6040_read_reg_cache(codec, TWL6040_REG_HSLCTL); 1040 + hsrctl = twl6040_read_reg_cache(codec, TWL6040_REG_HSRCTL); 1041 
+ earctl = twl6040_read_reg_cache(codec, TWL6040_REG_EARCTL); 1042 + 1043 + if (mute) { 1044 + /* Power down drivers and DACs */ 1045 + earctl &= ~0x01; 1046 + hslctl &= ~(TWL6040_HSDRVENA | TWL6040_HSDACENA); 1047 + hsrctl &= ~(TWL6040_HSDRVENA | TWL6040_HSDACENA); 1048 + 1049 + } 1050 + 1051 + twl6040_reg_write(twl6040, TWL6040_REG_EARCTL, earctl); 1052 + twl6040_reg_write(twl6040, TWL6040_REG_HSLCTL, hslctl); 1053 + twl6040_reg_write(twl6040, TWL6040_REG_HSRCTL, hsrctl); 1054 + priv->dl1_unmuted = !mute; 1055 + break; 1056 + case TWL6040_DAI_DL2: 1057 + hflctl = twl6040_read_reg_cache(codec, TWL6040_REG_HFLCTL); 1058 + hfrctl = twl6040_read_reg_cache(codec, TWL6040_REG_HFRCTL); 1059 + 1060 + if (mute) { 1061 + /* Power down drivers and DACs */ 1062 + hflctl &= ~(TWL6040_HFDACENA | TWL6040_HFPGAENA | 1063 + TWL6040_HFDRVENA); 1064 + hfrctl &= ~(TWL6040_HFDACENA | TWL6040_HFPGAENA | 1065 + TWL6040_HFDRVENA); 1066 + } 1067 + 1068 + twl6040_reg_write(twl6040, TWL6040_REG_HFLCTL, hflctl); 1069 + twl6040_reg_write(twl6040, TWL6040_REG_HFRCTL, hfrctl); 1070 + priv->dl2_unmuted = !mute; 1071 + break; 1072 + default: 1073 + break; 1074 + }; 1075 + } 1076 + 1077 + static int twl6040_digital_mute(struct snd_soc_dai *dai, int mute) 1078 + { 1079 + switch (dai->id) { 1080 + case TWL6040_DAI_LEGACY: 1081 + twl6040_mute_path(dai->codec, TWL6040_DAI_DL1, mute); 1082 + twl6040_mute_path(dai->codec, TWL6040_DAI_DL2, mute); 1083 + break; 1084 + case TWL6040_DAI_DL1: 1085 + case TWL6040_DAI_DL2: 1086 + twl6040_mute_path(dai->codec, dai->id, mute); 1087 + break; 1088 + default: 1089 + break; 1090 + } 1091 + 1092 + return 0; 1093 + } 1094 + 1059 1095 static const struct snd_soc_dai_ops twl6040_dai_ops = { 1060 1096 .startup = twl6040_startup, 1061 1097 .hw_params = twl6040_hw_params, 1062 1098 .prepare = twl6040_prepare, 1063 1099 .set_sysclk = twl6040_set_dai_sysclk, 1100 + .digital_mute = twl6040_digital_mute, 1064 1101 }; 1065 1102 1066 1103 static struct snd_soc_dai_driver 
twl6040_dai[] = { 1067 1104 { 1068 1105 .name = "twl6040-legacy", 1106 + .id = TWL6040_DAI_LEGACY, 1069 1107 .playback = { 1070 1108 .stream_name = "Legacy Playback", 1071 1109 .channels_min = 1, ··· 1152 1054 }, 1153 1055 { 1154 1056 .name = "twl6040-ul", 1057 + .id = TWL6040_DAI_UL, 1155 1058 .capture = { 1156 1059 .stream_name = "Capture", 1157 1060 .channels_min = 1, ··· 1164 1065 }, 1165 1066 { 1166 1067 .name = "twl6040-dl1", 1068 + .id = TWL6040_DAI_DL1, 1167 1069 .playback = { 1168 1070 .stream_name = "Headset Playback", 1169 1071 .channels_min = 1, ··· 1176 1076 }, 1177 1077 { 1178 1078 .name = "twl6040-dl2", 1079 + .id = TWL6040_DAI_DL2, 1179 1080 .playback = { 1180 1081 .stream_name = "Handsfree Playback", 1181 1082 .channels_min = 1, ··· 1188 1087 }, 1189 1088 { 1190 1089 .name = "twl6040-vib", 1090 + .id = TWL6040_DAI_VIB, 1191 1091 .playback = { 1192 1092 .stream_name = "Vibra Playback", 1193 1093 .channels_min = 1, ··· 1245 1143 1246 1144 mutex_init(&priv->mutex); 1247 1145 1248 - ret = devm_request_threaded_irq(codec->dev, priv->plug_irq, NULL, 1146 + ret = request_threaded_irq(priv->plug_irq, NULL, 1249 1147 twl6040_audio_handler, IRQF_NO_SUSPEND, 1250 1148 "twl6040_irq_plug", codec); 1251 1149 if (ret) { ··· 1261 1159 1262 1160 static int twl6040_remove(struct snd_soc_codec *codec) 1263 1161 { 1162 + struct twl6040_data *priv = snd_soc_codec_get_drvdata(codec); 1163 + 1164 + free_irq(priv->plug_irq, codec); 1264 1165 twl6040_set_bias_level(codec, SND_SOC_BIAS_OFF); 1265 1166 1266 1167 return 0;
+2 -7
sound/soc/codecs/wm8400.c
··· 143 143 } 144 144 145 145 #define WM8400_OUTPGA_SINGLE_R_TLV(xname, reg, shift, max, invert, tlv_array) \ 146 - { .iface = SNDRV_CTL_ELEM_IFACE_MIXER, .name = (xname), \ 147 - .access = SNDRV_CTL_ELEM_ACCESS_TLV_READ |\ 148 - SNDRV_CTL_ELEM_ACCESS_READWRITE,\ 149 - .tlv.p = (tlv_array), \ 150 - .info = snd_soc_info_volsw, \ 151 - .get = snd_soc_get_volsw, .put = wm8400_outpga_put_volsw_vu, \ 152 - .private_value = SOC_SINGLE_VALUE(reg, shift, max, invert) } 146 + SOC_SINGLE_EXT_TLV(xname, reg, shift, max, invert, \ 147 + snd_soc_get_volsw, wm8400_outpga_put_volsw_vu, tlv_array) 153 148 154 149 155 150 static const char *wm8400_digital_sidetone[] =
+2 -4
sound/soc/codecs/wm8903.c
··· 403 403 } 404 404 405 405 #define SOC_DAPM_SINGLE_W(xname, reg, shift, max, invert) \ 406 - { .iface = SNDRV_CTL_ELEM_IFACE_MIXER, .name = xname, \ 407 - .info = snd_soc_info_volsw, \ 408 - .get = snd_soc_dapm_get_volsw, .put = wm8903_class_w_put, \ 409 - .private_value = SOC_SINGLE_VALUE(reg, shift, max, invert) } 406 + SOC_SINGLE_EXT(xname, reg, shift, max, invert, \ 407 + snd_soc_dapm_get_volsw, wm8903_class_w_put) 410 408 411 409 412 410 static int wm8903_deemph[] = { 0, 32000, 44100, 48000 };
+2 -7
sound/soc/codecs/wm8904.c
··· 603 603 604 604 SOC_SINGLE("High Pass Filter Switch", WM8904_ADC_DIGITAL_0, 4, 1, 0), 605 605 SOC_ENUM("High Pass Filter Mode", hpf_mode), 606 - 607 - { .iface = SNDRV_CTL_ELEM_IFACE_MIXER, 608 - .name = "ADC 128x OSR Switch", 609 - .info = snd_soc_info_volsw, .get = snd_soc_get_volsw, 610 - .put = wm8904_adc_osr_put, 611 - .private_value = SOC_SINGLE_VALUE(WM8904_ANALOGUE_ADC_0, 0, 1, 0), 612 - }, 606 + SOC_SINGLE_EXT("ADC 128x OSR Switch", WM8904_ANALOGUE_ADC_0, 0, 1, 0, 607 + snd_soc_get_volsw, wm8904_adc_osr_put), 613 608 }; 614 609 615 610 static const char *drc_path_text[] = {
+3 -8
sound/soc/codecs/wm8990.c
··· 151 151 } 152 152 153 153 #define SOC_WM899X_OUTPGA_SINGLE_R_TLV(xname, reg, shift, max, invert,\ 154 - tlv_array) {\ 155 - .iface = SNDRV_CTL_ELEM_IFACE_MIXER, .name = (xname), \ 156 - .access = SNDRV_CTL_ELEM_ACCESS_TLV_READ |\ 157 - SNDRV_CTL_ELEM_ACCESS_READWRITE,\ 158 - .tlv.p = (tlv_array), \ 159 - .info = snd_soc_info_volsw, \ 160 - .get = snd_soc_get_volsw, .put = wm899x_outpga_put_volsw_vu, \ 161 - .private_value = SOC_SINGLE_VALUE(reg, shift, max, invert) } 154 + tlv_array) \ 155 + SOC_SINGLE_EXT_TLV(xname, reg, shift, max, invert, \ 156 + snd_soc_get_volsw, wm899x_outpga_put_volsw_vu, tlv_array) 162 157 163 158 164 159 static const char *wm8990_digital_sidetone[] =
+2 -7
sound/soc/codecs/wm8991.h
··· 822 822 823 823 #define SOC_WM899X_OUTPGA_SINGLE_R_TLV(xname, reg, shift, max, invert,\ 824 824 tlv_array) \ 825 - { .iface = SNDRV_CTL_ELEM_IFACE_MIXER, .name = (xname), \ 826 - .access = SNDRV_CTL_ELEM_ACCESS_TLV_READ |\ 827 - SNDRV_CTL_ELEM_ACCESS_READWRITE,\ 828 - .tlv.p = (tlv_array), \ 829 - .info = snd_soc_info_volsw, \ 830 - .get = snd_soc_get_volsw, .put = wm899x_outpga_put_volsw_vu, \ 831 - .private_value = SOC_SINGLE_VALUE(reg, shift, max, invert) } 825 + SOC_SINGLE_EXT_TLV(xname, reg, shift, max, invert, \ 826 + snd_soc_get_volsw, wm899x_outpga_put_volsw_vu, tlv_array) 832 827 833 828 #endif /* _WM8991_H */
+4 -8
sound/soc/codecs/wm8994.c
··· 290 290 static const DECLARE_TLV_DB_SCALE(mixin_boost_tlv, 0, 900, 0); 291 291 292 292 #define WM8994_DRC_SWITCH(xname, reg, shift) \ 293 - { .iface = SNDRV_CTL_ELEM_IFACE_MIXER, .name = xname, \ 294 - .info = snd_soc_info_volsw, .get = snd_soc_get_volsw,\ 295 - .put = wm8994_put_drc_sw, \ 296 - .private_value = SOC_SINGLE_VALUE(reg, shift, 1, 0) } 293 + SOC_SINGLE_EXT(xname, reg, shift, 1, 0, \ 294 + snd_soc_get_volsw, wm8994_put_drc_sw) 297 295 298 296 static int wm8994_put_drc_sw(struct snd_kcontrol *kcontrol, 299 297 struct snd_ctl_elem_value *ucontrol) ··· 1431 1433 }; 1432 1434 1433 1435 #define WM8994_CLASS_W_SWITCH(xname, reg, shift, max, invert) \ 1434 - { .iface = SNDRV_CTL_ELEM_IFACE_MIXER, .name = xname, \ 1435 - .info = snd_soc_info_volsw, \ 1436 - .get = snd_soc_dapm_get_volsw, .put = wm8994_put_class_w, \ 1437 - .private_value = SOC_SINGLE_VALUE(reg, shift, max, invert) } 1436 + SOC_SINGLE_EXT(xname, reg, shift, max, invert, \ 1437 + snd_soc_dapm_get_volsw, wm8994_put_class_w) 1438 1438 1439 1439 static int wm8994_put_class_w(struct snd_kcontrol *kcontrol, 1440 1440 struct snd_ctl_elem_value *ucontrol)
+2 -5
sound/soc/codecs/wm8995.h
··· 4237 4237 #define WM8995_SPK2_MUTE_SEQ1_WIDTH 8 /* SPK2_MUTE_SEQ1 - [7:0] */ 4238 4238 4239 4239 #define WM8995_CLASS_W_SWITCH(xname, reg, shift, max, invert) \ 4240 - { .iface = SNDRV_CTL_ELEM_IFACE_MIXER, .name = xname, \ 4241 - .info = snd_soc_info_volsw, \ 4242 - .get = snd_soc_dapm_get_volsw, .put = wm8995_put_class_w, \ 4243 - .private_value = SOC_SINGLE_VALUE(reg, shift, max, invert) \ 4244 - } 4240 + SOC_SINGLE_EXT(xname, reg, shift, max, invert, \ 4241 + snd_soc_dapm_get_volsw, wm8995_put_class_w) 4245 4242 4246 4243 struct wm8995_reg_access { 4247 4244 u16 read;
+7 -9
sound/soc/codecs/wm9705.c
··· 209 209 case AC97_RESET: 210 210 case AC97_VENDOR_ID1: 211 211 case AC97_VENDOR_ID2: 212 - return soc_ac97_ops.read(codec->ac97, reg); 212 + return soc_ac97_ops->read(codec->ac97, reg); 213 213 default: 214 214 reg = reg >> 1; 215 215 ··· 225 225 { 226 226 u16 *cache = codec->reg_cache; 227 227 228 - soc_ac97_ops.write(codec->ac97, reg, val); 228 + soc_ac97_ops->write(codec->ac97, reg, val); 229 229 reg = reg >> 1; 230 230 if (reg < (ARRAY_SIZE(wm9705_reg))) 231 231 cache[reg] = val; ··· 294 294 295 295 static int wm9705_reset(struct snd_soc_codec *codec) 296 296 { 297 - if (soc_ac97_ops.reset) { 298 - soc_ac97_ops.reset(codec->ac97); 297 + if (soc_ac97_ops->reset) { 298 + soc_ac97_ops->reset(codec->ac97); 299 299 if (ac97_read(codec, 0) == wm9705_reg[0]) 300 300 return 0; /* Success */ 301 301 } ··· 306 306 #ifdef CONFIG_PM 307 307 static int wm9705_soc_suspend(struct snd_soc_codec *codec) 308 308 { 309 - soc_ac97_ops.write(codec->ac97, AC97_POWERDOWN, 0xffff); 309 + soc_ac97_ops->write(codec->ac97, AC97_POWERDOWN, 0xffff); 310 310 311 311 return 0; 312 312 } ··· 323 323 } 324 324 325 325 for (i = 2; i < ARRAY_SIZE(wm9705_reg) << 1; i += 2) { 326 - soc_ac97_ops.write(codec->ac97, i, cache[i>>1]); 326 + soc_ac97_ops->write(codec->ac97, i, cache[i>>1]); 327 327 } 328 328 329 329 return 0; ··· 337 337 { 338 338 int ret = 0; 339 339 340 - printk(KERN_INFO "WM9705 SoC Audio Codec\n"); 341 - 342 - ret = snd_soc_new_ac97_codec(codec, &soc_ac97_ops, 0); 340 + ret = snd_soc_new_ac97_codec(codec, soc_ac97_ops, 0); 343 341 if (ret < 0) { 344 342 printk(KERN_ERR "wm9705: failed to register AC97 codec\n"); 345 343 return ret;
+9 -9
sound/soc/codecs/wm9712.c
··· 455 455 if (reg == AC97_RESET || reg == AC97_GPIO_STATUS || 456 456 reg == AC97_VENDOR_ID1 || reg == AC97_VENDOR_ID2 || 457 457 reg == AC97_REC_GAIN) 458 - return soc_ac97_ops.read(codec->ac97, reg); 458 + return soc_ac97_ops->read(codec->ac97, reg); 459 459 else { 460 460 reg = reg >> 1; 461 461 ··· 472 472 u16 *cache = codec->reg_cache; 473 473 474 474 if (reg < 0x7c) 475 - soc_ac97_ops.write(codec->ac97, reg, val); 475 + soc_ac97_ops->write(codec->ac97, reg, val); 476 476 reg = reg >> 1; 477 477 if (reg < (ARRAY_SIZE(wm9712_reg))) 478 478 cache[reg] = val; ··· 581 581 582 582 static int wm9712_reset(struct snd_soc_codec *codec, int try_warm) 583 583 { 584 - if (try_warm && soc_ac97_ops.warm_reset) { 585 - soc_ac97_ops.warm_reset(codec->ac97); 584 + if (try_warm && soc_ac97_ops->warm_reset) { 585 + soc_ac97_ops->warm_reset(codec->ac97); 586 586 if (ac97_read(codec, 0) == wm9712_reg[0]) 587 587 return 1; 588 588 } 589 589 590 - soc_ac97_ops.reset(codec->ac97); 591 - if (soc_ac97_ops.warm_reset) 592 - soc_ac97_ops.warm_reset(codec->ac97); 590 + soc_ac97_ops->reset(codec->ac97); 591 + if (soc_ac97_ops->warm_reset) 592 + soc_ac97_ops->warm_reset(codec->ac97); 593 593 if (ac97_read(codec, 0) != wm9712_reg[0]) 594 594 goto err; 595 595 return 0; ··· 624 624 if (i == AC97_INT_PAGING || i == AC97_POWERDOWN || 625 625 (i > 0x58 && i != 0x5c)) 626 626 continue; 627 - soc_ac97_ops.write(codec->ac97, i, cache[i>>1]); 627 + soc_ac97_ops->write(codec->ac97, i, cache[i>>1]); 628 628 } 629 629 } 630 630 ··· 635 635 { 636 636 int ret = 0; 637 637 638 - ret = snd_soc_new_ac97_codec(codec, &soc_ac97_ops, 0); 638 + ret = snd_soc_new_ac97_codec(codec, soc_ac97_ops, 0); 639 639 if (ret < 0) { 640 640 printk(KERN_ERR "wm9712: failed to register AC97 codec\n"); 641 641 return ret;
+9 -9
sound/soc/codecs/wm9713.c
··· 652 652 if (reg == AC97_RESET || reg == AC97_GPIO_STATUS || 653 653 reg == AC97_VENDOR_ID1 || reg == AC97_VENDOR_ID2 || 654 654 reg == AC97_CD) 655 - return soc_ac97_ops.read(codec->ac97, reg); 655 + return soc_ac97_ops->read(codec->ac97, reg); 656 656 else { 657 657 reg = reg >> 1; 658 658 ··· 668 668 { 669 669 u16 *cache = codec->reg_cache; 670 670 if (reg < 0x7c) 671 - soc_ac97_ops.write(codec->ac97, reg, val); 671 + soc_ac97_ops->write(codec->ac97, reg, val); 672 672 reg = reg >> 1; 673 673 if (reg < (ARRAY_SIZE(wm9713_reg))) 674 674 cache[reg] = val; ··· 1095 1095 1096 1096 int wm9713_reset(struct snd_soc_codec *codec, int try_warm) 1097 1097 { 1098 - if (try_warm && soc_ac97_ops.warm_reset) { 1099 - soc_ac97_ops.warm_reset(codec->ac97); 1098 + if (try_warm && soc_ac97_ops->warm_reset) { 1099 + soc_ac97_ops->warm_reset(codec->ac97); 1100 1100 if (ac97_read(codec, 0) == wm9713_reg[0]) 1101 1101 return 1; 1102 1102 } 1103 1103 1104 - soc_ac97_ops.reset(codec->ac97); 1105 - if (soc_ac97_ops.warm_reset) 1106 - soc_ac97_ops.warm_reset(codec->ac97); 1104 + soc_ac97_ops->reset(codec->ac97); 1105 + if (soc_ac97_ops->warm_reset) 1106 + soc_ac97_ops->warm_reset(codec->ac97); 1107 1107 if (ac97_read(codec, 0) != wm9713_reg[0]) 1108 1108 return -EIO; 1109 1109 return 0; ··· 1180 1180 if (i == AC97_POWERDOWN || i == AC97_EXTENDED_MID || 1181 1181 i == AC97_EXTENDED_MSTATUS || i > 0x66) 1182 1182 continue; 1183 - soc_ac97_ops.write(codec->ac97, i, cache[i>>1]); 1183 + soc_ac97_ops->write(codec->ac97, i, cache[i>>1]); 1184 1184 } 1185 1185 } 1186 1186 ··· 1197 1197 return -ENOMEM; 1198 1198 snd_soc_codec_set_drvdata(codec, wm9713); 1199 1199 1200 - ret = snd_soc_new_ac97_codec(codec, &soc_ac97_ops, 0); 1200 + ret = snd_soc_new_ac97_codec(codec, soc_ac97_ops, 0); 1201 1201 if (ret < 0) 1202 1202 goto codec_err; 1203 1203
+4 -6
sound/soc/codecs/wm_adsp.h
··· 61 61 }; 62 62 63 63 #define WM_ADSP1(wname, num) \ 64 - { .id = snd_soc_dapm_pga, .name = wname, .reg = SND_SOC_NOPM, \ 65 - .shift = num, .event = wm_adsp1_event, \ 66 - .event_flags = SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_PRE_PMD } 64 + SND_SOC_DAPM_PGA_E(wname, SND_SOC_NOPM, num, 0, NULL, 0, \ 65 + wm_adsp1_event, SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_PRE_PMD) 67 66 68 67 #define WM_ADSP2(wname, num) \ 69 - { .id = snd_soc_dapm_pga, .name = wname, .reg = SND_SOC_NOPM, \ 70 - .shift = num, .event = wm_adsp2_event, \ 71 - .event_flags = SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_PRE_PMD } 68 + SND_SOC_DAPM_PGA_E(wname, SND_SOC_NOPM, num, 0, NULL, 0, \ 69 + wm_adsp2_event, SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_PRE_PMD) 72 70 73 71 extern const struct snd_kcontrol_new wm_adsp1_fw_controls[]; 74 72 extern const struct snd_kcontrol_new wm_adsp2_fw_controls[];
+2 -4
sound/soc/codecs/wm_hubs.c
··· 693 693 EXPORT_SYMBOL_GPL(wm_hubs_update_class_w); 694 694 695 695 #define WM_HUBS_SINGLE_W(xname, reg, shift, max, invert) \ 696 - { .iface = SNDRV_CTL_ELEM_IFACE_MIXER, .name = xname, \ 697 - .info = snd_soc_info_volsw, \ 698 - .get = snd_soc_dapm_get_volsw, .put = class_w_put_volsw, \ 699 - .private_value = SOC_SINGLE_VALUE(reg, shift, max, invert) } 696 + SOC_SINGLE_EXT(xname, reg, shift, max, invert, \ 697 + snd_soc_dapm_get_volsw, class_w_put_volsw) 700 698 701 699 static int class_w_put_volsw(struct snd_kcontrol *kcontrol, 702 700 struct snd_ctl_elem_value *ucontrol)
+9 -2
sound/soc/fsl/imx-ssi.c
··· 501 501 imx_ssi_ac97_read(ac97, 0); 502 502 } 503 503 504 - struct snd_ac97_bus_ops soc_ac97_ops = { 504 + static struct snd_ac97_bus_ops imx_ssi_ac97_ops = { 505 505 .read = imx_ssi_ac97_read, 506 506 .write = imx_ssi_ac97_write, 507 507 .reset = imx_ssi_ac97_reset, 508 508 .warm_reset = imx_ssi_ac97_warm_reset 509 509 }; 510 - EXPORT_SYMBOL_GPL(soc_ac97_ops); 511 510 512 511 static int imx_ssi_probe(struct platform_device *pdev) 513 512 { ··· 582 583 583 584 platform_set_drvdata(pdev, ssi); 584 585 586 + ret = snd_soc_set_ac97_ops(&imx_ssi_ac97_ops); 587 + if (ret != 0) { 588 + dev_err(&pdev->dev, "Failed to set AC'97 ops: %d\n", ret); 589 + goto failed_register; 590 + } 591 + 585 592 ret = snd_soc_register_component(&pdev->dev, &imx_component, 586 593 dai, 1); 587 594 if (ret) { ··· 613 608 release_mem_region(res->start, resource_size(res)); 614 609 clk_disable_unprepare(ssi->clk); 615 610 failed_clk: 611 + snd_soc_set_ac97_ops(NULL); 616 612 617 613 return ret; 618 614 } ··· 633 627 634 628 release_mem_region(res->start, resource_size(res)); 635 629 clk_disable_unprepare(ssi->clk); 630 + snd_soc_set_ac97_ops(NULL); 636 631 637 632 return 0; 638 633 }
+8 -2
sound/soc/fsl/mpc5200_psc_ac97.c
··· 131 131 psc_ac97_warm_reset(ac97); 132 132 } 133 133 134 - struct snd_ac97_bus_ops soc_ac97_ops = { 134 + static struct snd_ac97_bus_ops psc_ac97_ops = { 135 135 .read = psc_ac97_read, 136 136 .write = psc_ac97_write, 137 137 .reset = psc_ac97_cold_reset, 138 138 .warm_reset = psc_ac97_warm_reset, 139 139 }; 140 - EXPORT_SYMBOL_GPL(soc_ac97_ops); 141 140 142 141 static int psc_ac97_hw_analog_params(struct snd_pcm_substream *substream, 143 142 struct snd_pcm_hw_params *params, ··· 289 290 if (rc != 0) 290 291 return rc; 291 292 293 + rc = snd_soc_set_ac97_ops(&psc_ac97_ops); 294 + if (rc != 0) { 295 + dev_err(&op->dev, "Failed to set AC'97 ops: %d\n", rc); 296 + return rc; 297 + } 298 + 292 299 rc = snd_soc_register_component(&op->dev, &psc_ac97_component, 293 300 psc_ac97_dai, ARRAY_SIZE(psc_ac97_dai)); 294 301 if (rc != 0) { ··· 323 318 { 324 319 mpc5200_audio_dma_destroy(op); 325 320 snd_soc_unregister_component(&op->dev); 326 321 snd_soc_set_ac97_ops(NULL); 327 322 return 0; 328 323 }
+10 -21
sound/soc/mid-x86/mfld_machine.c
··· 371 371 372 372 /* audio interrupt base of SRAM location where 373 373 * interrupts are stored by System FW */ 374 - mc_drv_ctx = kzalloc(sizeof(*mc_drv_ctx), GFP_ATOMIC); 374 + mc_drv_ctx = devm_kzalloc(&pdev->dev, sizeof(*mc_drv_ctx), GFP_ATOMIC); 375 375 if (!mc_drv_ctx) { 376 376 pr_err("allocation failed\n"); 377 377 return -ENOMEM; ··· 381 381 pdev, IORESOURCE_MEM, "IRQ_BASE"); 382 382 if (!irq_mem) { 383 383 pr_err("no mem resource given\n"); 384 - ret_val = -ENODEV; 385 - goto unalloc; 384 + return -ENODEV; 386 385 } 387 - mc_drv_ctx->int_base = ioremap_nocache(irq_mem->start, 388 - resource_size(irq_mem)); 386 + mc_drv_ctx->int_base = devm_ioremap_nocache(&pdev->dev, irq_mem->start, 387 + resource_size(irq_mem)); 389 388 if (!mc_drv_ctx->int_base) { 390 389 pr_err("Mapping of cache failed\n"); 391 - ret_val = -ENOMEM; 392 - goto unalloc; 390 + return -ENOMEM; 393 391 } 394 392 /* register for interrupt */ 395 - ret_val = request_threaded_irq(irq, snd_mfld_jack_intr_handler, 393 + ret_val = devm_request_threaded_irq(&pdev->dev, irq, 394 + snd_mfld_jack_intr_handler, 396 395 snd_mfld_jack_detection, 397 396 IRQF_SHARED, pdev->dev.driver->name, mc_drv_ctx); 398 397 if (ret_val) { 399 398 pr_err("cannot register IRQ\n"); 400 - goto unalloc; 399 + return ret_val; 401 400 } 402 401 /* register the soc card */ 403 402 snd_soc_card_mfld.dev = &pdev->dev; 404 403 ret_val = snd_soc_register_card(&snd_soc_card_mfld); 405 404 if (ret_val) { 406 405 pr_debug("snd_soc_register_card failed %d\n", ret_val); 407 - goto freeirq; 406 + return ret_val; 408 407 } 409 408 platform_set_drvdata(pdev, mc_drv_ctx); 410 409 pr_debug("successfully exited probe\n"); 411 - return ret_val; 412 - 413 - freeirq: 414 - free_irq(irq, mc_drv_ctx); 415 - unalloc: 416 - kfree(mc_drv_ctx); 417 - return ret_val; 410 + return 0; 418 411 } 419 412 420 413 static int snd_mfld_mc_remove(struct platform_device *pdev) 421 414 { 422 - struct mfld_mc_private *mc_drv_ctx = platform_get_drvdata(pdev); 
423 - 424 415 pr_debug("snd_mfld_mc_remove called\n"); 425 - free_irq(platform_get_irq(pdev, 0), mc_drv_ctx); 426 416 snd_soc_unregister_card(&snd_soc_card_mfld); 427 - kfree(mc_drv_ctx); 428 417 return 0; 429 418 } 430 419
+21 -39
sound/soc/nuc900/nuc900-ac97.c
··· 197 197 } 198 198 199 199 /* AC97 controller operations */ 200 - struct snd_ac97_bus_ops soc_ac97_ops = { 200 + static struct snd_ac97_bus_ops nuc900_ac97_ops = { 201 201 .read = nuc900_ac97_read, 202 202 .write = nuc900_ac97_write, 203 203 .reset = nuc900_ac97_cold_reset, 204 204 .warm_reset = nuc900_ac97_warm_reset, 205 - } 206 - EXPORT_SYMBOL_GPL(soc_ac97_ops); 205 + }; 207 206 208 207 static int nuc900_ac97_trigger(struct snd_pcm_substream *substream, 209 208 int cmd, struct snd_soc_dai *dai) ··· 325 326 if (nuc900_ac97_data) 326 327 return -EBUSY; 327 328 328 - nuc900_audio = kzalloc(sizeof(struct nuc900_audio), GFP_KERNEL); 329 + nuc900_audio = devm_kzalloc(&pdev->dev, sizeof(struct nuc900_audio), 330 + GFP_KERNEL); 329 331 if (!nuc900_audio) 330 332 return -ENOMEM; 331 333 332 334 spin_lock_init(&nuc900_audio->lock); 333 335 334 336 nuc900_audio->res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 335 - if (!nuc900_audio->res) { 336 - ret = -ENODEV; 337 - goto out0; 338 - } 337 + if (!nuc900_audio->res) 338 + return -ENODEV; 339 339 340 - if (!request_mem_region(nuc900_audio->res->start, 341 - resource_size(nuc900_audio->res), pdev->name)) { 342 - ret = -EBUSY; 343 - goto out0; 344 - } 340 + nuc900_audio->mmio = devm_ioremap_resource(&pdev->dev, 341 + nuc900_audio->res); 342 + if (IS_ERR(nuc900_audio->mmio)) 343 + return PTR_ERR(nuc900_audio->mmio); 345 344 346 - nuc900_audio->mmio = ioremap(nuc900_audio->res->start, 347 - resource_size(nuc900_audio->res)); 348 - if (!nuc900_audio->mmio) { 349 - ret = -ENOMEM; 350 - goto out1; 351 - } 352 - 353 - nuc900_audio->clk = clk_get(&pdev->dev, NULL); 345 + nuc900_audio->clk = devm_clk_get(&pdev->dev, NULL); 354 346 if (IS_ERR(nuc900_audio->clk)) { 355 347 ret = PTR_ERR(nuc900_audio->clk); 356 - goto out2; 348 + goto out; 357 349 } 358 350 359 351 nuc900_audio->irq_num = platform_get_irq(pdev, 0); 360 352 if (!nuc900_audio->irq_num) { 361 353 ret = -EBUSY; 362 354 goto out; 363 355 } 364 356 365 357 
nuc900_ac97_data = nuc900_audio; 366 358 359 + ret = snd_soc_set_ac97_ops(&nuc900_ac97_ops); 360 + if (ret) 361 + goto out; 362 + 367 363 ret = snd_soc_register_component(&pdev->dev, &nuc900_ac97_component, 368 364 &nuc900_ac97_dai, 1); 369 365 if (ret) 370 - goto out3; 366 + goto out; 371 367 372 368 /* enable ac97 multifunction pin */ 373 369 mfp_set_groupg(nuc900_audio->dev, NULL); 374 370 375 371 return 0; 376 372 377 - out3: 378 - clk_put(nuc900_audio->clk); 379 - out2: 380 - iounmap(nuc900_audio->mmio); 381 - out1: 382 - release_mem_region(nuc900_audio->res->start, 383 - resource_size(nuc900_audio->res)); 384 - out0: 385 - kfree(nuc900_audio); 373 + out: 374 + snd_soc_set_ac97_ops(NULL); 386 375 return ret; 387 376 } ··· 378 391 { 379 392 snd_soc_unregister_component(&pdev->dev); 380 393 381 - clk_put(nuc900_ac97_data->clk); 382 - iounmap(nuc900_ac97_data->mmio); 383 - release_mem_region(nuc900_ac97_data->res->start, 384 - resource_size(nuc900_ac97_data->res)); 385 - 386 - kfree(nuc900_ac97_data); 387 394 nuc900_ac97_data = NULL; 395 + snd_soc_set_ac97_ops(NULL); 388 396 389 397 return 0; 390 398 }
+6 -2
sound/soc/pxa/pxa2xx-ac97.c
··· 41 41 pxa2xx_ac97_finish_reset(ac97); 42 42 } 43 43 44 - struct snd_ac97_bus_ops soc_ac97_ops = { 44 + static struct snd_ac97_bus_ops pxa2xx_ac97_ops = { 45 45 .read = pxa2xx_ac97_read, 46 46 .write = pxa2xx_ac97_write, 47 47 .warm_reset = pxa2xx_ac97_warm_reset, 48 48 .reset = pxa2xx_ac97_cold_reset, 49 49 }; 50 - EXPORT_SYMBOL_GPL(soc_ac97_ops); 51 50 52 51 static struct pxa2xx_pcm_dma_params pxa2xx_ac97_pcm_stereo_out = { 53 52 .name = "AC97 PCM Stereo out", ··· 243 244 return -ENXIO; 244 245 } 245 246 247 + ret = snd_soc_set_ac97_ops(&pxa2xx_ac97_ops); 248 + if (ret != 0) 249 + return ret; 250 + 246 251 /* Punt most of the init to the SoC probe; we may need the machine 247 252 * driver to do interesting things with the clocking to get us up 248 253 * and running. ··· 258 255 static int pxa2xx_ac97_dev_remove(struct platform_device *pdev) 259 256 { 260 257 snd_soc_unregister_component(&pdev->dev); 258 + snd_soc_set_ac97_ops(NULL); 261 259 return 0; 262 260 } 263 261
+14 -28
sound/soc/samsung/ac97.c
··· 214 214 return IRQ_HANDLED; 215 215 } 216 216 217 - struct snd_ac97_bus_ops soc_ac97_ops = { 217 + static struct snd_ac97_bus_ops s3c_ac97_ops = { 218 218 .read = s3c_ac97_read, 219 219 .write = s3c_ac97_write, 220 220 .warm_reset = s3c_ac97_warm_reset, 221 221 .reset = s3c_ac97_cold_reset, 222 222 }; 223 - EXPORT_SYMBOL_GPL(soc_ac97_ops); 224 223 225 224 static int s3c_ac97_hw_params(struct snd_pcm_substream *substream, 226 225 struct snd_pcm_hw_params *params, ··· 416 417 return -ENXIO; 417 418 } 418 419 419 - if (!request_mem_region(mem_res->start, 420 - resource_size(mem_res), "ac97")) { 421 - dev_err(&pdev->dev, "Unable to request register region\n"); 422 - return -EBUSY; 423 - } 420 + s3c_ac97.regs = devm_ioremap_resource(&pdev->dev, mem_res); 421 + if (IS_ERR(s3c_ac97.regs)) 422 + return PTR_ERR(s3c_ac97.regs); 424 423 425 424 s3c_ac97_pcm_out.channel = dmatx_res->start; 426 425 s3c_ac97_pcm_out.dma_addr = mem_res->start + S3C_AC97_PCM_DATA; ··· 430 433 init_completion(&s3c_ac97.done); 431 434 mutex_init(&s3c_ac97.lock); 432 435 433 - s3c_ac97.regs = ioremap(mem_res->start, resource_size(mem_res)); 434 - if (s3c_ac97.regs == NULL) { 435 - dev_err(&pdev->dev, "Unable to ioremap register region\n"); 436 - ret = -ENXIO; 437 - goto err1; 438 - } 439 - 440 - s3c_ac97.ac97_clk = clk_get(&pdev->dev, "ac97"); 436 + s3c_ac97.ac97_clk = devm_clk_get(&pdev->dev, "ac97"); 441 437 if (IS_ERR(s3c_ac97.ac97_clk)) { 442 438 dev_err(&pdev->dev, "ac97 failed to get ac97_clock\n"); 443 439 ret = -ENODEV; ··· 448 458 0, "AC97", NULL); 449 459 if (ret < 0) { 450 460 dev_err(&pdev->dev, "ac97: interrupt request failed.\n"); 461 + goto err4; 462 + } 463 + 464 + ret = snd_soc_set_ac97_ops(&s3c_ac97_ops); 465 + if (ret != 0) { 466 + dev_err(&pdev->dev, "Failed to set AC'97 ops: %d\n", ret); 451 467 goto err4; 452 468 } 453 469 ··· 476 480 err4: 477 481 err3: 478 482 clk_disable_unprepare(s3c_ac97.ac97_clk); 479 - clk_put(s3c_ac97.ac97_clk); 480 483 err2: 481 - 
iounmap(s3c_ac97.regs); 482 - err1: 483 - release_mem_region(mem_res->start, resource_size(mem_res)); 484 - 484 + snd_soc_set_ac97_ops(NULL); 485 485 return ret; 486 486 } 487 487 488 488 static int s3c_ac97_remove(struct platform_device *pdev) 489 489 { 490 - struct resource *mem_res, *irq_res; 490 + struct resource *irq_res; 491 491 492 492 asoc_dma_platform_unregister(&pdev->dev); 493 493 snd_soc_unregister_component(&pdev->dev); ··· 493 501 free_irq(irq_res->start, NULL); 494 502 495 503 clk_disable_unprepare(s3c_ac97.ac97_clk); 496 - clk_put(s3c_ac97.ac97_clk); 497 - 498 - iounmap(s3c_ac97.regs); 499 - 500 - mem_res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 501 - if (mem_res) 502 - release_mem_region(mem_res->start, resource_size(mem_res)); 504 + snd_soc_set_ac97_ops(NULL); 503 505 504 506 return 0; 505 507 }
+6 -2
sound/soc/sh/hac.c
··· 227 227 hac_ac97_warmrst(ac97); 228 228 } 229 229 230 - struct snd_ac97_bus_ops soc_ac97_ops = { 230 + static struct snd_ac97_bus_ops hac_ac97_ops = { 231 231 .read = hac_ac97_read, 232 232 .write = hac_ac97_write, 233 233 .reset = hac_ac97_coldrst, 234 234 .warm_reset = hac_ac97_warmrst, 235 235 }; 236 - EXPORT_SYMBOL_GPL(soc_ac97_ops); 237 236 238 237 static int hac_hw_params(struct snd_pcm_substream *substream, 239 238 struct snd_pcm_hw_params *params, ··· 315 316 316 317 static int hac_soc_platform_probe(struct platform_device *pdev) 317 318 { 319 + int ret = snd_soc_set_ac97_ops(&hac_ac97_ops); 320 + if (ret != 0) 321 + return ret; 322 + 318 323 return snd_soc_register_component(&pdev->dev, &sh4_hac_component, 319 324 sh4_hac_dai, ARRAY_SIZE(sh4_hac_dai)); 320 325 } ··· 326 323 static int hac_soc_platform_remove(struct platform_device *pdev) 327 324 { 328 325 snd_soc_unregister_component(&pdev->dev); 326 + snd_soc_set_ac97_ops(NULL); 330 328 return 0; 331 329 }
+16
sound/soc/soc-core.c
··· 2080 2080 } 2081 2081 EXPORT_SYMBOL_GPL(snd_soc_new_ac97_codec); 2082 2082 2083 + struct snd_ac97_bus_ops *soc_ac97_ops; 2084 + 2085 + int snd_soc_set_ac97_ops(struct snd_ac97_bus_ops *ops) 2086 + { 2087 + if (ops == soc_ac97_ops) 2088 + return 0; 2089 + 2090 + if (soc_ac97_ops && ops) 2091 + return -EBUSY; 2092 + 2093 + soc_ac97_ops = ops; 2094 + 2095 + return 0; 2096 + } 2097 + EXPORT_SYMBOL_GPL(snd_soc_set_ac97_ops); 2098 + 2083 2099 /** 2084 2100 * snd_soc_free_ac97_codec - free AC97 codec device 2085 2101 * @codec: audio codec
+32 -34
sound/soc/tegra/tegra20_ac97.c
··· 142 142 } while (!time_after(jiffies, timeout)); 143 143 } 144 144 145 - struct snd_ac97_bus_ops soc_ac97_ops = { 145 + static struct snd_ac97_bus_ops tegra20_ac97_ops = { 146 146 .read = tegra20_ac97_codec_read, 147 147 .write = tegra20_ac97_codec_write, 148 148 .reset = tegra20_ac97_codec_reset, 149 149 .warm_reset = tegra20_ac97_codec_warm_reset, 150 150 }; 151 - EXPORT_SYMBOL_GPL(soc_ac97_ops); 152 151 153 152 static inline void tegra20_ac97_start_playback(struct tegra20_ac97 *ac97) 154 153 { ··· 326 327 } 327 328 dev_set_drvdata(&pdev->dev, ac97); 328 329 329 - ac97->clk_ac97 = clk_get(&pdev->dev, NULL); 330 + ac97->clk_ac97 = devm_clk_get(&pdev->dev, NULL); 330 331 if (IS_ERR(ac97->clk_ac97)) { 331 332 dev_err(&pdev->dev, "Can't retrieve ac97 clock\n"); 332 333 ret = PTR_ERR(ac97->clk_ac97); ··· 340 341 goto err_clk_put; 341 342 } 342 343 343 - memregion = devm_request_mem_region(&pdev->dev, mem->start, 344 - resource_size(mem), DRV_NAME); 345 - if (!memregion) { 346 - dev_err(&pdev->dev, "Memory region already claimed\n"); 347 - ret = -EBUSY; 348 - goto err_clk_put; 349 - } 350 - 351 - regs = devm_ioremap(&pdev->dev, mem->start, resource_size(mem)); 352 - if (!regs) { 353 - dev_err(&pdev->dev, "ioremap failed\n"); 354 - ret = -ENOMEM; 344 + regs = devm_ioremap_resource(&pdev->dev, mem); 345 + if (IS_ERR(regs)) { 346 + ret = PTR_ERR(regs); 347 + dev_err(&pdev->dev, "ioremap failed: %d\n", ret); 355 348 goto err_clk_put; 356 349 } 357 350 ··· 394 403 ac97->capture_dma_data.maxburst = 4; 395 404 ac97->capture_dma_data.slave_id = of_dma[0]; 396 405 397 - ret = snd_soc_register_component(&pdev->dev, &tegra20_ac97_component, 398 - &tegra20_ac97_dai, 1); 399 - if (ret) { 400 - dev_err(&pdev->dev, "Could not register DAI: %d\n", ret); 401 - ret = -ENOMEM; 402 - goto err_clk_put; 403 - } 404 - 405 - ret = tegra_pcm_platform_register(&pdev->dev); 406 - if (ret) { 407 - dev_err(&pdev->dev, "Could not register PCM: %d\n", ret); 408 - goto err_unregister_component; 
409 - } 410 - 411 406 ret = tegra_asoc_utils_init(&ac97->util_data, &pdev->dev); 412 407 if (ret) 413 - goto err_unregister_pcm; 408 + goto err_clk_put; 414 409 415 410 ret = tegra_asoc_utils_set_ac97_rate(&ac97->util_data); 416 411 if (ret) ··· 408 431 goto err_asoc_utils_fini; 409 432 } 410 433 434 + ret = snd_soc_set_ac97_ops(&tegra20_ac97_ops); 435 + if (ret) { 436 + dev_err(&pdev->dev, "Failed to set AC'97 ops: %d\n", ret); 437 + goto err_asoc_utils_fini; 438 + } 439 + 440 + ret = snd_soc_register_component(&pdev->dev, &tegra20_ac97_component, 441 + &tegra20_ac97_dai, 1); 442 + if (ret) { 443 + dev_err(&pdev->dev, "Could not register DAI: %d\n", ret); 444 + ret = -ENOMEM; 445 + goto err_asoc_utils_fini; 446 + } 447 + 448 + ret = tegra_pcm_platform_register(&pdev->dev); 449 + if (ret) { 450 + dev_err(&pdev->dev, "Could not register PCM: %d\n", ret); 451 + goto err_unregister_component; 452 + } 453 + 411 454 /* XXX: crufty ASoC AC97 API - only one AC97 codec allowed */ 412 455 workdata = ac97; 413 456 414 457 return 0; 415 458 416 - err_asoc_utils_fini: 417 - tegra_asoc_utils_fini(&ac97->util_data); 418 459 err_unregister_pcm: 419 460 tegra_pcm_platform_unregister(&pdev->dev); 420 461 err_unregister_component: 421 462 snd_soc_unregister_component(&pdev->dev); 463 + err_asoc_utils_fini: 464 + tegra_asoc_utils_fini(&ac97->util_data); 422 465 err_clk_put: 423 - clk_put(ac97->clk_ac97); 424 466 err: 467 + snd_soc_set_ac97_ops(NULL); 425 468 return ret; 426 469 } 427 470 ··· 455 458 tegra_asoc_utils_fini(&ac97->util_data); 456 459 457 460 clk_disable_unprepare(ac97->clk_ac97); 458 - clk_put(ac97->clk_ac97); 461 + 462 + snd_soc_set_ac97_ops(NULL); 459 463 460 464 return 0; 461 465 }
+9 -8
sound/soc/txx9/txx9aclc-ac97.c
··· 119 119 } 120 120 121 121 /* AC97 controller operations */ 122 - struct snd_ac97_bus_ops soc_ac97_ops = { 122 + static struct snd_ac97_bus_ops txx9aclc_ac97_ops = { 123 123 .read = txx9aclc_ac97_read, 124 124 .write = txx9aclc_ac97_write, 125 125 .reset = txx9aclc_ac97_cold_reset, 126 126 }; 127 - EXPORT_SYMBOL_GPL(soc_ac97_ops); 128 127 129 128 static irqreturn_t txx9aclc_ac97_irq(int irq, void *dev_id) 130 129 { ··· 187 188 if (!r) 188 189 return -EBUSY; 189 190 190 - if (!devm_request_mem_region(&pdev->dev, r->start, resource_size(r), 191 - dev_name(&pdev->dev))) 192 - return -EBUSY; 193 191 194 192 drvdata = devm_kzalloc(&pdev->dev, sizeof(*drvdata), GFP_KERNEL); 195 193 if (!drvdata) ··· 200 198 r->start >= TXX9_DIRECTMAP_BASE && 201 199 r->start < TXX9_DIRECTMAP_BASE + 0x400000) 202 200 drvdata->physbase |= 0xf00000000ull; 203 - drvdata->base = devm_ioremap(&pdev->dev, r->start, resource_size(r)); 204 - if (!drvdata->base) 205 - return -EBUSY; 201 + drvdata->base = devm_ioremap_resource(&pdev->dev, r); 202 + if (IS_ERR(drvdata->base)) 203 + return PTR_ERR(drvdata->base); 206 204 err = devm_request_irq(&pdev->dev, irq, txx9aclc_ac97_irq, 207 205 0, dev_name(&pdev->dev), drvdata); 206 + if (err < 0) 207 + return err; 208 + 209 + err = snd_soc_set_ac97_ops(&txx9aclc_ac97_ops); 208 210 if (err < 0) 209 211 return err; 210 212 ··· 216 216 static int txx9aclc_ac97_dev_remove(struct platform_device *pdev) 217 217 { 218 218 snd_soc_unregister_component(&pdev->dev); 219 + snd_soc_set_ac97_ops(NULL); 219 220 return 0; 220 221 }