Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'drm-qemu-20161121' of git://git.kraxel.org/linux into drm-next

drm/virtio: fix busid in a different way, allocate more vbufs.
drm/qxl: various bugfixes and cleanups,

* tag 'drm-qemu-20161121' of git://git.kraxel.org/linux: (224 commits)
drm/virtio: allocate some extra bufs
qxl: Allow resolution which are not multiple of 8
qxl: Don't notify userspace when monitors config is unchanged
qxl: Remove qxl_bo_init() return value
qxl: Call qxl_gem_{init, fini}
qxl: Add missing '\n' to qxl_io_log() call
qxl: Remove unused prototype
qxl: Mark some internal functions as static
Revert "drm: virtio: reinstate drm_virtio_set_busid()"
drm/virtio: fix busid regression
drm: re-export drm_dev_set_unique
Linux 4.9-rc5
gp8psk: Fix DVB frontend attach
gp8psk: fix gp8psk_usb_in_op() logic
dvb-usb: move data_mutex to struct dvb_usb_device
iio: maxim_thermocouple: detect invalid storage size in read()
aoe: fix crash in page count manipulation
lightnvm: invalid offset calculation for lba_shift
Kbuild: enable -Wmaybe-uninitialized warnings by default
pcmcia: fix return value of soc_pcmcia_regulator_set
...

+2004 -1309
+2 -2
Documentation/ABI/testing/sysfs-devices-system-ibm-rtl
···
-What:		state
+What:		/sys/devices/system/ibm_rtl/state
 Date:		Sep 2010
 KernelVersion:	2.6.37
 Contact:	Vernon Mauery <vernux@us.ibm.com>
···
 Users:		The ibm-prtm userspace daemon uses this interface.


-What:		version
+What:		/sys/devices/system/ibm_rtl/version
 Date:		Sep 2010
 KernelVersion:	2.6.37
 Contact:	Vernon Mauery <vernux@us.ibm.com>
+5
Documentation/devicetree/bindings/mmc/synopsys-dw-mshc.txt
···
   reset signal present internally in some host controller IC designs.
   See Documentation/devicetree/bindings/reset/reset.txt for details.

+* reset-names: request name for using "resets" property. Must be "reset".
+  (It will be used together with "resets" property.)
+
 * clocks: from common clock binding: handle to biu and ciu clocks for the
   bus interface unit clock and the card interface unit clock.

···
 		interrupts = <0 75 0>;
 		#address-cells = <1>;
 		#size-cells = <0>;
+		resets = <&rst 20>;
+		reset-names = "reset";
 	};

 [board specific internal DMA resources]
+8 -3
Documentation/devicetree/bindings/pci/rockchip-pcie.txt
···
    - "sys"
    - "legacy"
    - "client"
-- resets: Must contain five entries for each entry in reset-names.
+- resets: Must contain seven entries for each entry in reset-names.
   See ../reset/reset.txt for details.
 - reset-names: Must include the following names
    - "core"
    - "mgmt"
    - "mgmt-sticky"
    - "pipe"
+   - "pm"
+   - "aclk"
+   - "pclk"
 - pinctrl-names : The pin control state names
 - pinctrl-0: The "default" pinctrl state
 - #interrupt-cells: specifies the number of cells needed to encode an
···
 	reg = <0x0 0xf8000000 0x0 0x2000000>, <0x0 0xfd000000 0x0 0x1000000>;
 	reg-names = "axi-base", "apb-base";
 	resets = <&cru SRST_PCIE_CORE>, <&cru SRST_PCIE_MGMT>,
-		 <&cru SRST_PCIE_MGMT_STICKY>, <&cru SRST_PCIE_PIPE>;
-	reset-names = "core", "mgmt", "mgmt-sticky", "pipe";
+		 <&cru SRST_PCIE_MGMT_STICKY>, <&cru SRST_PCIE_PIPE>,
+		 <&cru SRST_PCIE_PM>, <&cru SRST_P_PCIE>, <&cru SRST_A_PCIE>;
+	reset-names = "core", "mgmt", "mgmt-sticky", "pipe",
+		      "pm", "pclk", "aclk";
 	phys = <&pcie_phy>;
 	phy-names = "pcie-phy";
 	pinctrl-names = "default";
+5 -5
Documentation/devicetree/bindings/pinctrl/st,stm32-pinctrl.txt
···
  - #size-cells	: The value of this property must be 1
  - ranges	: defines mapping between pin controller node (parent) to
    gpio-bank node (children).
- - interrupt-parent: phandle of the interrupt parent to which the external
-   GPIO interrupts are forwarded to.
- - st,syscfg: Should be phandle/offset pair. The phandle to the syscon node
-   which includes IRQ mux selection register, and the offset of the IRQ mux
-   selection register.
  - pins-are-numbered: Specify the subnodes are using numbered pinmux to
    specify pins.
···

 Optional properties:
  - reset:	  : Reference to the reset controller
+ - interrupt-parent: phandle of the interrupt parent to which the external
+   GPIO interrupts are forwarded to.
+ - st,syscfg: Should be phandle/offset pair. The phandle to the syscon node
+   which includes IRQ mux selection register, and the offset of the IRQ mux
+   selection register.

 Example:
 #include <dt-bindings/pinctrl/stm32f429-pinfunc.h>
-1
Documentation/filesystems/Locking
···
 	int (*flush) (struct file *);
 	int (*release) (struct inode *, struct file *);
 	int (*fsync) (struct file *, loff_t start, loff_t end, int datasync);
-	int (*aio_fsync) (struct kiocb *, int datasync);
 	int (*fasync) (int, struct file *, int);
 	int (*lock) (struct file *, int, struct file_lock *);
 	ssize_t (*readv) (struct file *, const struct iovec *, unsigned long,
-1
Documentation/filesystems/vfs.txt
···
 	int (*flush) (struct file *, fl_owner_t id);
 	int (*release) (struct inode *, struct file *);
 	int (*fsync) (struct file *, loff_t, loff_t, int datasync);
-	int (*aio_fsync) (struct kiocb *, int datasync);
 	int (*fasync) (int, struct file *, int);
 	int (*lock) (struct file *, int, struct file_lock *);
 	ssize_t (*sendpage) (struct file *, struct page *, int, size_t, loff_t *, int);
+1 -1
MAINTAINERS
···
 M:	Keith Busch <keith.busch@intel.com>
 L:	linux-pci@vger.kernel.org
 S:	Supported
-F:	arch/x86/pci/vmd.c
+F:	drivers/pci/host/vmd.c

 PCIE DRIVER FOR ST SPEAR13XX
 M:	Pratyush Anand <pratyush.anand@gmail.com>
+7 -5
Makefile
···
 VERSION = 4
 PATCHLEVEL = 9
 SUBLEVEL = 0
-EXTRAVERSION = -rc4
+EXTRAVERSION = -rc5
 NAME = Psychotic Stoned Sheep

 # *DOCUMENTATION*
···
 CFLAGS_KERNEL	=
 AFLAGS_KERNEL	=
 LDFLAGS_vmlinux =
-CFLAGS_GCOV	= -fprofile-arcs -ftest-coverage -fno-tree-loop-im
+CFLAGS_GCOV	= -fprofile-arcs -ftest-coverage -fno-tree-loop-im -Wno-maybe-uninitialized
 CFLAGS_KCOV	:= $(call cc-option,-fsanitize-coverage=trace-pc,)

···
 include arch/$(SRCARCH)/Makefile

 KBUILD_CFLAGS	+= $(call cc-option,-fno-delete-null-pointer-checks,)
-KBUILD_CFLAGS	+= $(call cc-disable-warning,maybe-uninitialized,)
 KBUILD_CFLAGS	+= $(call cc-disable-warning,frame-address,)

 ifdef CONFIG_LD_DEAD_CODE_DATA_ELIMINATION
···
 endif

 ifdef CONFIG_CC_OPTIMIZE_FOR_SIZE
-KBUILD_CFLAGS	+= -Os
+KBUILD_CFLAGS	+= -Os $(call cc-disable-warning,maybe-uninitialized,)
 else
 ifdef CONFIG_PROFILE_ALL_BRANCHES
-KBUILD_CFLAGS	+= -O2
+KBUILD_CFLAGS	+= -O2 $(call cc-disable-warning,maybe-uninitialized,)
 else
 KBUILD_CFLAGS	+= -O2
 endif
 endif
+
+KBUILD_CFLAGS	+= $(call cc-ifversion, -lt, 0409, \
+			$(call cc-disable-warning,maybe-uninitialized,))

 # Tell gcc to never replace conditional load with a non-conditional one
 KBUILD_CFLAGS	+= $(call cc-option,--param=allow-store-data-races=0)
+6 -1
arch/arc/Makefile
···

 cflags-$(atleast_gcc44)			+= -fsection-anchors

+cflags-$(CONFIG_ARC_HAS_LLSC)		+= -mlock
+cflags-$(CONFIG_ARC_HAS_SWAPE)		+= -mswape
+
 ifdef CONFIG_ISA_ARCV2

 ifndef CONFIG_ARC_HAS_LL64
···
 ifndef CONFIG_CC_OPTIMIZE_FOR_SIZE
 # Generic build system uses -O2, we want -O3
 # Note: No need to add to cflags-y as that happens anyways
-ARCH_CFLAGS += -O3
+#
+# Disable the false maybe-uninitialized warings gcc spits out at -O3
+ARCH_CFLAGS += -O3 $(call cc-disable-warning,maybe-uninitialized,)
 endif

 # small data is default for elf32 tool-chain. If not usable, disable it
+1 -1
arch/arc/boot/dts/axc001.dtsi
···
 			reg-io-width = <4>;
 		};

-		arcpmu0: pmu {
+		arcpct0: pct {
 			compatible = "snps,arc700-pct";
 		};
 	};
+1 -1
arch/arc/boot/dts/nsim_700.dts
···
 			};
 		};

-		arcpmu0: pmu {
+		arcpct0: pct {
 			compatible = "snps,arc700-pct";
 		};
 	};
+4
arch/arc/boot/dts/nsimosci.dts
···
 			reg = <0xf0003000 0x44>;
 			interrupts = <7>;
 		};
+
+		arcpct0: pct {
+			compatible = "snps,arc700-pct";
+		};
 	};
 };
+1
arch/arc/configs/nsim_700_defconfig
···
 CONFIG_INITRAMFS_SOURCE="../arc_initramfs/"
 CONFIG_KALLSYMS_ALL=y
 CONFIG_EMBEDDED=y
+CONFIG_PERF_EVENTS=y
 # CONFIG_SLUB_DEBUG is not set
 # CONFIG_COMPAT_BRK is not set
 CONFIG_KPROBES=y
+1
arch/arc/configs/nsim_hs_defconfig
···
 CONFIG_INITRAMFS_SOURCE="../../arc_initramfs_hs/"
 CONFIG_KALLSYMS_ALL=y
 CONFIG_EMBEDDED=y
+CONFIG_PERF_EVENTS=y
 # CONFIG_SLUB_DEBUG is not set
 # CONFIG_COMPAT_BRK is not set
 CONFIG_KPROBES=y
+1
arch/arc/configs/nsim_hs_smp_defconfig
···
 CONFIG_INITRAMFS_SOURCE="../arc_initramfs_hs/"
 CONFIG_KALLSYMS_ALL=y
 CONFIG_EMBEDDED=y
+CONFIG_PERF_EVENTS=y
 # CONFIG_SLUB_DEBUG is not set
 # CONFIG_COMPAT_BRK is not set
 CONFIG_KPROBES=y
+1
arch/arc/configs/nsimosci_defconfig
···
 CONFIG_INITRAMFS_SOURCE="../arc_initramfs/"
 CONFIG_KALLSYMS_ALL=y
 CONFIG_EMBEDDED=y
+CONFIG_PERF_EVENTS=y
 # CONFIG_SLUB_DEBUG is not set
 # CONFIG_COMPAT_BRK is not set
 CONFIG_KPROBES=y
+1
arch/arc/configs/nsimosci_hs_defconfig
···
 CONFIG_INITRAMFS_SOURCE="../arc_initramfs_hs/"
 CONFIG_KALLSYMS_ALL=y
 CONFIG_EMBEDDED=y
+CONFIG_PERF_EVENTS=y
 # CONFIG_SLUB_DEBUG is not set
 # CONFIG_COMPAT_BRK is not set
 CONFIG_KPROBES=y
+1 -2
arch/arc/configs/nsimosci_hs_smp_defconfig
···
 # CONFIG_PID_NS is not set
 CONFIG_BLK_DEV_INITRD=y
 CONFIG_INITRAMFS_SOURCE="../arc_initramfs_hs/"
+CONFIG_PERF_EVENTS=y
 # CONFIG_COMPAT_BRK is not set
 CONFIG_KPROBES=y
 CONFIG_MODULES=y
···
 # CONFIG_INET_XFRM_MODE_TRANSPORT is not set
 # CONFIG_INET_XFRM_MODE_TUNNEL is not set
 # CONFIG_INET_XFRM_MODE_BEET is not set
-# CONFIG_INET_LRO is not set
 # CONFIG_IPV6 is not set
 # CONFIG_WIRELESS is not set
 CONFIG_DEVTMPFS=y
···
 # CONFIG_HWMON is not set
 CONFIG_DRM=y
 CONFIG_DRM_ARCPGU=y
-CONFIG_FRAMEBUFFER_CONSOLE=y
 CONFIG_LOGO=y
 # CONFIG_HID is not set
 # CONFIG_USB_SUPPORT is not set
+2
arch/arc/include/asm/arcregs.h
···
 #define STATUS_AE_BIT		5	/* Exception active */
 #define STATUS_DE_BIT		6	/* PC is in delay slot */
 #define STATUS_U_BIT		7	/* User/Kernel mode */
+#define STATUS_Z_BIT		11
 #define STATUS_L_BIT		12	/* Loop inhibit */

 /* These masks correspond to the status word(STATUS_32) bits */
 #define STATUS_AE_MASK		(1<<STATUS_AE_BIT)
 #define STATUS_DE_MASK		(1<<STATUS_DE_BIT)
 #define STATUS_U_MASK		(1<<STATUS_U_BIT)
+#define STATUS_Z_MASK		(1<<STATUS_Z_BIT)
 #define STATUS_L_MASK		(1<<STATUS_L_BIT)

 /*
+2 -2
arch/arc/include/asm/smp.h
···
  * API expected BY platform smp code (FROM arch smp code)
  *
  * smp_ipi_irq_setup:
- *	Takes @cpu and @irq to which the arch-common ISR is hooked up
+ *	Takes @cpu and @hwirq to which the arch-common ISR is hooked up
  */
-extern int smp_ipi_irq_setup(int cpu, int irq);
+extern int smp_ipi_irq_setup(int cpu, irq_hw_number_t hwirq);

 /*
  * struct plat_smp_ops	- SMP callbacks provided by platform to ARC SMP
+2
arch/arc/kernel/devtree.c
···
 		arc_base_baud = 166666666;	/* Fixed 166.6MHz clk (TB10x) */
 	else if (of_flat_dt_is_compatible(dt_root, "snps,arc-sdp"))
 		arc_base_baud = 33333333;	/* Fixed 33MHz clk (AXS10x) */
+	else if (of_flat_dt_is_compatible(dt_root, "ezchip,arc-nps"))
+		arc_base_baud = 800000000;	/* Fixed 800MHz clk (NPS) */
 	else
 		arc_base_baud = 50000000;	/* Fixed default 50MHz */
 }
+20 -12
arch/arc/kernel/mcip.c
···
 {
 	unsigned long flags;
 	cpumask_t online;
+	unsigned int destination_bits;
+	unsigned int distribution_mode;

 	/* errout if no online cpu per @cpumask */
 	if (!cpumask_and(&online, cpumask, cpu_online_mask))
···
 	raw_spin_lock_irqsave(&mcip_lock, flags);

-	idu_set_dest(data->hwirq, cpumask_bits(&online)[0]);
-	idu_set_mode(data->hwirq, IDU_M_TRIG_LEVEL, IDU_M_DISTRI_RR);
+	destination_bits = cpumask_bits(&online)[0];
+	idu_set_dest(data->hwirq, destination_bits);
+
+	if (ffs(destination_bits) == fls(destination_bits))
+		distribution_mode = IDU_M_DISTRI_DEST;
+	else
+		distribution_mode = IDU_M_DISTRI_RR;
+
+	idu_set_mode(data->hwirq, IDU_M_TRIG_LEVEL, distribution_mode);

 	raw_spin_unlock_irqrestore(&mcip_lock, flags);
···
 };

-static int idu_first_irq;
+static irq_hw_number_t idu_first_hwirq;

 static void idu_cascade_isr(struct irq_desc *desc)
 {
-	struct irq_domain *domain = irq_desc_get_handler_data(desc);
-	unsigned int core_irq = irq_desc_get_irq(desc);
-	unsigned int idu_irq;
+	struct irq_domain *idu_domain = irq_desc_get_handler_data(desc);
+	irq_hw_number_t core_hwirq = irqd_to_hwirq(irq_desc_get_irq_data(desc));
+	irq_hw_number_t idu_hwirq = core_hwirq - idu_first_hwirq;

-	idu_irq = core_irq - idu_first_irq;
-	generic_handle_irq(irq_find_mapping(domain, idu_irq));
+	generic_handle_irq(irq_find_mapping(idu_domain, idu_hwirq));
 }

 static int idu_irq_map(struct irq_domain *d, unsigned int virq, irq_hw_number_t hwirq)
···
 	struct irq_domain *domain;
 	/* Read IDU BCR to confirm nr_irqs */
 	int nr_irqs = of_irq_count(intc);
-	int i, irq;
+	int i, virq;
 	struct mcip_bcr mp;

 	READ_BCR(ARC_REG_MCIP_BCR, mp);
···
 		 * however we need it to get the parent virq and set IDU handler
 		 * as first level isr
 		 */
-		irq = irq_of_parse_and_map(intc, i);
+		virq = irq_of_parse_and_map(intc, i);
 		if (!i)
-			idu_first_irq = irq;
+			idu_first_hwirq = irqd_to_hwirq(irq_get_irq_data(virq));

-		irq_set_chained_handler_and_data(irq, idu_cascade_isr, domain);
+		irq_set_chained_handler_and_data(virq, idu_cascade_isr, domain);
 	}

 	__mcip_cmd(CMD_IDU_ENABLE, 0);
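In the affinity change for arch/arc/kernel/mcip.c above, `ffs(x) == fls(x)` selects direct-destination mode only when the online affinity mask has exactly one bit set, i.e. a single target CPU. A standalone sketch of that predicate, assuming nothing kernel-specific (the helpers `ffs_u32`, `fls_u32`, and `is_single_target` are illustrative names, not kernel API):

```c
#include <assert.h>

/* Hand-rolled ffs/fls so the sketch has no platform dependencies. */
static int ffs_u32(unsigned int x)
{
	int n = 1;

	if (!x)
		return 0;
	while (!(x & 1)) {
		x >>= 1;
		n++;
	}
	return n;
}

static int fls_u32(unsigned int x)
{
	int n = 0;

	while (x) {
		x >>= 1;
		n++;
	}
	return n;
}

/*
 * Mirrors the kernel's test: lowest and highest set bit coincide
 * exactly when one bit is set, i.e. one CPU is targeted.
 */
static int is_single_target(unsigned int destination_bits)
{
	return destination_bits &&
	       ffs_u32(destination_bits) == fls_u32(destination_bits);
}
```

The same single-bit test is often written `x && !(x & (x - 1))`; the `ffs == fls` form simply reuses helpers the kernel already has.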
+11 -9
arch/arc/kernel/process.c
···

 SYSCALL_DEFINE3(arc_usr_cmpxchg, int *, uaddr, int, expected, int, new)
 {
-	int uval;
-	int ret;
+	struct pt_regs *regs = current_pt_regs();
+	int uval = -EFAULT;

 	/*
 	 * This is only for old cores lacking LLOCK/SCOND, which by defintion
···
 	 */
 	WARN_ON_ONCE(IS_ENABLED(CONFIG_SMP));

+	/* Z indicates to userspace if operation succeded */
+	regs->status32 &= ~STATUS_Z_MASK;
+
 	if (!access_ok(VERIFY_WRITE, uaddr, sizeof(int)))
 		return -EFAULT;

 	preempt_disable();

-	ret = __get_user(uval, uaddr);
-	if (ret)
+	if (__get_user(uval, uaddr))
 		goto done;

-	if (uval != expected)
-		ret = -EAGAIN;
-	else
-		ret = __put_user(new, uaddr);
+	if (uval == expected) {
+		if (!__put_user(new, uaddr))
+			regs->status32 |= STATUS_Z_MASK;
+	}

 done:
 	preempt_enable();

-	return ret;
+	return uval;
 }

 void arch_cpu_idle(void)
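The rework above changes the syscall's contract: the return value is now always the old value read from `*uaddr`, and success is reported out of band via the STATUS32.Z flag. A host-side model of that contract (the names `usr_cmpxchg_model` and `z_flag` are illustrative stand-ins, not kernel API):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Model of the reworked arc_usr_cmpxchg semantics: return the value
 * previously at *uaddr, and set *z_flag (standing in for STATUS32.Z)
 * only when the swap actually happened.
 */
static int usr_cmpxchg_model(int *uaddr, int expected, int new_val,
			     bool *z_flag)
{
	int uval = *uaddr;

	*z_flag = false;		/* assume failure */
	if (uval == expected) {
		*uaddr = new_val;
		*z_flag = true;		/* swap done, report success */
	}
	return uval;			/* always the old value */
}
```

Returning the old value unconditionally lets userspace retry a failed compare-and-swap without a second read of the location.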
+15 -8
arch/arc/kernel/smp.c
···
 #include <linux/atomic.h>
 #include <linux/cpumask.h>
 #include <linux/reboot.h>
+#include <linux/irqdomain.h>
 #include <asm/processor.h>
 #include <asm/setup.h>
 #include <asm/mach_desc.h>
···
 	int i;

 	/*
-	 * Initialise the present map, which describes the set of CPUs
-	 * actually populated at the present time.
+	 * if platform didn't set the present map already, do it now
+	 * boot cpu is set to present already by init/main.c
 	 */
-	for (i = 0; i < max_cpus; i++)
-		set_cpu_present(i, true);
+	if (num_present_cpus() <= 1) {
+		for (i = 0; i < max_cpus; i++)
+			set_cpu_present(i, true);
+	}
 }

 void __init smp_cpus_done(unsigned int max_cpus)
···
  */
 static DEFINE_PER_CPU(int, ipi_dev);

-int smp_ipi_irq_setup(int cpu, int irq)
+int smp_ipi_irq_setup(int cpu, irq_hw_number_t hwirq)
 {
 	int *dev = per_cpu_ptr(&ipi_dev, cpu);
+	unsigned int virq = irq_find_mapping(NULL, hwirq);
+
+	if (!virq)
+		panic("Cannot find virq for root domain and hwirq=%lu", hwirq);

 	/* Boot cpu calls request, all call enable */
 	if (!cpu) {
 		int rc;

-		rc = request_percpu_irq(irq, do_IPI, "IPI Interrupt", dev);
+		rc = request_percpu_irq(virq, do_IPI, "IPI Interrupt", dev);
 		if (rc)
-			panic("Percpu IRQ request failed for %d\n", irq);
+			panic("Percpu IRQ request failed for %u\n", virq);
 	}

-	enable_percpu_irq(irq, 0);
+	enable_percpu_irq(virq, 0);

 	return 0;
 }
+11 -8
arch/arc/kernel/time.c
···
 		cycle_t full;
 	} stamp;

-	__asm__ __volatile(
-	"1:				\n"
-	"	lr	%0, [AUX_RTC_LOW]	\n"
-	"	lr	%1, [AUX_RTC_HIGH]	\n"
-	"	lr	%2, [AUX_RTC_CTRL]	\n"
-	"	bbit0.nt %2, 31, 1b	\n"
-	: "=r" (stamp.low), "=r" (stamp.high), "=r" (status));
+	/*
+	 * hardware has an internal state machine which tracks readout of
+	 * low/high and updates the CTRL.status if
+	 *  - interrupt/exception taken between the two reads
+	 *  - high increments after low has been read
+	 */
+	do {
+		stamp.low = read_aux_reg(AUX_RTC_LOW);
+		stamp.high = read_aux_reg(AUX_RTC_HIGH);
+		status = read_aux_reg(AUX_RTC_CTRL);
+	} while (!(status & _BITUL(31)));

 	return stamp.full;
 }
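The retry loop above guards against the classic split 64-bit read hazard: when a counter is exposed as two 32-bit registers, the high half can tick between the two reads, yielding a torn value. A portable model of the same pattern using the common high/low/high read sequence instead of a hardware status bit (the fake device and all names here are illustrative, not the ARC RTC interface):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Fake device: a 64-bit counter read as two 32-bit halves. It bumps
 * the high word after a chosen number of register reads, to force
 * exactly the torn-read case the retry loop must catch.
 */
struct fake_counter {
	uint64_t value;
	int reads;
	int tick_after;	/* bump high word once this many reads occurred */
};

static uint32_t reg_read(struct fake_counter *c, int high)
{
	uint32_t v = high ? (uint32_t)(c->value >> 32) : (uint32_t)c->value;

	if (++c->reads == c->tick_after)
		c->value += 1ULL << 32;	/* high half advances mid-sequence */
	return v;
}

/* high/low/high sequence: retry while the high word moved under us. */
static uint64_t read_counter64(struct fake_counter *c)
{
	uint32_t hi, lo, hi2;

	do {
		hi  = reg_read(c, 1);
		lo  = reg_read(c, 0);
		hi2 = reg_read(c, 1);
	} while (hi != hi2);
	return ((uint64_t)hi << 32) | lo;
}
```

The ARC hardware replaces the third read with a status bit checked once, but the invariant is the same: accept the pair only if the snapshot was consistent.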
+26
arch/arc/mm/dma.c
···
 	__free_pages(page, get_order(size));
 }

+static int arc_dma_mmap(struct device *dev, struct vm_area_struct *vma,
+			void *cpu_addr, dma_addr_t dma_addr, size_t size,
+			unsigned long attrs)
+{
+	unsigned long user_count = vma_pages(vma);
+	unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
+	unsigned long pfn = __phys_to_pfn(plat_dma_to_phys(dev, dma_addr));
+	unsigned long off = vma->vm_pgoff;
+	int ret = -ENXIO;
+
+	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+
+	if (dma_mmap_from_coherent(dev, vma, cpu_addr, size, &ret))
+		return ret;
+
+	if (off < count && user_count <= (count - off)) {
+		ret = remap_pfn_range(vma, vma->vm_start,
+				      pfn + off,
+				      user_count << PAGE_SHIFT,
+				      vma->vm_page_prot);
+	}
+
+	return ret;
+}
+
 /*
  * streaming DMA Mapping API...
  * CPU accesses page via normal paddr, thus needs to explicitly made
···
 struct dma_map_ops arc_dma_ops = {
 	.alloc			= arc_dma_alloc,
 	.free			= arc_dma_free,
+	.mmap			= arc_dma_mmap,
 	.map_page		= arc_dma_map_page,
 	.map_sg			= arc_dma_map_sg,
 	.sync_single_for_device	= arc_dma_sync_single_for_device,
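The bounds check in the new `arc_dma_mmap` above (`off < count && user_count <= (count - off)`) ensures a user mapping of `user_count` pages starting at page offset `off` fits entirely inside the `count`-page DMA buffer, with no overflow from computing `off + user_count`. A minimal sketch of just that predicate (the function name is illustrative, not kernel code):

```c
#include <assert.h>

/*
 * A request of user_count pages at page offset off fits in a buffer
 * of count pages iff off is inside the buffer and the remainder
 * (count - off) can hold it. Writing it this way avoids overflow in
 * an off + user_count sum.
 */
static int mmap_request_fits(unsigned long off, unsigned long user_count,
			     unsigned long count)
{
	return off < count && user_count <= (count - off);
}
```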
-6
arch/arc/plat-eznps/smp.c
···
 	mtm_enable_core(cpu);
 }

-static void eznps_ipi_clear(int irq)
-{
-	write_aux_reg(CTOP_AUX_IACK, 1 << irq);
-}
-
 struct plat_smp_ops plat_smp_ops = {
 	.info		= smp_cpuinfo_buf,
 	.init_early_smp	= eznps_init_cpumasks,
 	.cpu_kick	= eznps_smp_wakeup_cpu,
 	.ipi_send	= eznps_ipi_send,
 	.init_per_cpu	= eznps_init_per_cpu,
-	.ipi_clear	= eznps_ipi_clear,
 };
+1
arch/arm/include/asm/kvm_asm.h
···
 extern void __kvm_flush_vm_context(void);
 extern void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);
 extern void __kvm_tlb_flush_vmid(struct kvm *kvm);
+extern void __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu);

 extern int __kvm_vcpu_run(struct kvm_vcpu *vcpu);
+3
arch/arm/include/asm/kvm_host.h
···
 	/* VTTBR value associated with below pgd and vmid */
 	u64    vttbr;

+	/* The last vcpu id that ran on each physical CPU */
+	int __percpu *last_vcpu_ran;
+
 	/* Timer */
 	struct arch_timer_kvm	timer;
+1
arch/arm/include/asm/kvm_hyp.h
···
 #define ICIALLUIS		__ACCESS_CP15(c7, 0, c1, 0)
 #define ATS1CPR			__ACCESS_CP15(c7, 0, c8, 0)
 #define TLBIALLIS		__ACCESS_CP15(c8, 0, c3, 0)
+#define TLBIALL			__ACCESS_CP15(c8, 0, c7, 0)
 #define TLBIALLNSNHIS		__ACCESS_CP15(c8, 4, c3, 4)
 #define PRRR			__ACCESS_CP15(c10, 0, c2, 0)
 #define NMRR			__ACCESS_CP15(c10, 0, c2, 1)
+26 -1
arch/arm/kvm/arm.c
···
  */
 int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 {
-	int ret = 0;
+	int ret, cpu;

 	if (type)
 		return -EINVAL;
+
+	kvm->arch.last_vcpu_ran = alloc_percpu(typeof(*kvm->arch.last_vcpu_ran));
+	if (!kvm->arch.last_vcpu_ran)
+		return -ENOMEM;
+
+	for_each_possible_cpu(cpu)
+		*per_cpu_ptr(kvm->arch.last_vcpu_ran, cpu) = -1;

 	ret = kvm_alloc_stage2_pgd(kvm);
 	if (ret)
···
 out_free_stage2_pgd:
 	kvm_free_stage2_pgd(kvm);
 out_fail_alloc:
+	free_percpu(kvm->arch.last_vcpu_ran);
+	kvm->arch.last_vcpu_ran = NULL;
 	return ret;
 }
···
 void kvm_arch_destroy_vm(struct kvm *kvm)
 {
 	int i;
+
+	free_percpu(kvm->arch.last_vcpu_ran);
+	kvm->arch.last_vcpu_ran = NULL;

 	for (i = 0; i < KVM_MAX_VCPUS; ++i) {
 		if (kvm->vcpus[i]) {
···
 void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 {
+	int *last_ran;
+
+	last_ran = this_cpu_ptr(vcpu->kvm->arch.last_vcpu_ran);
+
+	/*
+	 * We might get preempted before the vCPU actually runs, but
+	 * over-invalidation doesn't affect correctness.
+	 */
+	if (*last_ran != vcpu->vcpu_id) {
+		kvm_call_hyp(__kvm_tlb_flush_local_vmid, vcpu);
+		*last_ran = vcpu->vcpu_id;
+	}
+
 	vcpu->cpu = cpu;
 	vcpu->arch.host_cpu_context = this_cpu_ptr(kvm_host_cpu_state);
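The load hook above flushes the local TLB only when a different vCPU of the same VM last ran on this physical CPU, since two vCPUs share a VMID and could otherwise hit each other's stale translations. A minimal model of that bookkeeping (plain C; `NR_PCPUS` and the function name are illustrative):

```c
#include <assert.h>

#define NR_PCPUS 4

/*
 * One slot per physical CPU, initialised to -1 as in the diff, so the
 * first vCPU ever loaded on a CPU also triggers a flush.
 */
static int last_vcpu_ran[NR_PCPUS] = { -1, -1, -1, -1 };

/*
 * Returns 1 when loading @vcpu_id on @pcpu must invalidate that CPU's
 * TLB entries for this VM (a different vCPU ran there since), and
 * records @vcpu_id as the new last-ran vCPU either way.
 */
static int vcpu_load_needs_flush(int pcpu, int vcpu_id)
{
	int need_flush = (last_vcpu_ran[pcpu] != vcpu_id);

	last_vcpu_ran[pcpu] = vcpu_id;
	return need_flush;
}
```

As the kernel comment notes, recording before the vCPU actually runs can only cause extra flushes, never missed ones, so correctness is preserved under preemption.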
+15
arch/arm/kvm/hyp/tlb.c
···
 	__kvm_tlb_flush_vmid(kvm);
 }

+void __hyp_text __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu)
+{
+	struct kvm *kvm = kern_hyp_va(kern_hyp_va(vcpu)->kvm);
+
+	/* Switch to requested VMID */
+	write_sysreg(kvm->arch.vttbr, VTTBR);
+	isb();
+
+	write_sysreg(0, TLBIALL);
+	dsb(nsh);
+	isb();
+
+	write_sysreg(0, VTTBR);
+}
+
 void __hyp_text __kvm_flush_vm_context(void)
 {
 	write_sysreg(0, TLBIALLNSNHIS);
+5 -2
arch/arm64/boot/dts/rockchip/rk3399.dtsi
···
 			ranges = <0x83000000 0x0 0xfa000000 0x0 0xfa000000 0x0 0x600000
 				  0x81000000 0x0 0xfa600000 0x0 0xfa600000 0x0 0x100000>;
 			resets = <&cru SRST_PCIE_CORE>, <&cru SRST_PCIE_MGMT>,
-				 <&cru SRST_PCIE_MGMT_STICKY>, <&cru SRST_PCIE_PIPE>;
-			reset-names = "core", "mgmt", "mgmt-sticky", "pipe";
+				 <&cru SRST_PCIE_MGMT_STICKY>, <&cru SRST_PCIE_PIPE>,
+				 <&cru SRST_PCIE_PM>, <&cru SRST_P_PCIE>,
+				 <&cru SRST_A_PCIE>;
+			reset-names = "core", "mgmt", "mgmt-sticky", "pipe",
+				      "pm", "pclk", "aclk";
 			status = "disabled";

 			pcie0_intc: interrupt-controller {
+1 -1
arch/arm64/include/asm/alternative.h
···
 #ifndef __ASM_ALTERNATIVE_H
 #define __ASM_ALTERNATIVE_H

-#include <asm/cpufeature.h>
+#include <asm/cpucaps.h>
 #include <asm/insn.h>

 #ifndef __ASSEMBLY__
+40
arch/arm64/include/asm/cpucaps.h
···
+/*
+ * arch/arm64/include/asm/cpucaps.h
+ *
+ * Copyright (C) 2016 ARM Ltd.
+ *
+ * This program is free software: you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef __ASM_CPUCAPS_H
+#define __ASM_CPUCAPS_H
+
+#define ARM64_WORKAROUND_CLEAN_CACHE		0
+#define ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE	1
+#define ARM64_WORKAROUND_845719			2
+#define ARM64_HAS_SYSREG_GIC_CPUIF		3
+#define ARM64_HAS_PAN				4
+#define ARM64_HAS_LSE_ATOMICS			5
+#define ARM64_WORKAROUND_CAVIUM_23154		6
+#define ARM64_WORKAROUND_834220			7
+#define ARM64_HAS_NO_HW_PREFETCH		8
+#define ARM64_HAS_UAO				9
+#define ARM64_ALT_PAN_NOT_UAO			10
+#define ARM64_HAS_VIRT_HOST_EXTN		11
+#define ARM64_WORKAROUND_CAVIUM_27456		12
+#define ARM64_HAS_32BIT_EL0			13
+#define ARM64_HYP_OFFSET_LOW			14
+#define ARM64_MISMATCHED_CACHE_LINE_SIZE	15
+
+#define ARM64_NCAPS				16
+
+#endif /* __ASM_CPUCAPS_H */
+1 -19
arch/arm64/include/asm/cpufeature.h
···

 #include <linux/jump_label.h>

+#include <asm/cpucaps.h>
 #include <asm/hwcap.h>
 #include <asm/sysreg.h>

···

 #define MAX_CPU_FEATURES	(8 * sizeof(elf_hwcap))
 #define cpu_feature(x)		ilog2(HWCAP_ ## x)
-
-#define ARM64_WORKAROUND_CLEAN_CACHE		0
-#define ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE	1
-#define ARM64_WORKAROUND_845719			2
-#define ARM64_HAS_SYSREG_GIC_CPUIF		3
-#define ARM64_HAS_PAN				4
-#define ARM64_HAS_LSE_ATOMICS			5
-#define ARM64_WORKAROUND_CAVIUM_23154		6
-#define ARM64_WORKAROUND_834220			7
-#define ARM64_HAS_NO_HW_PREFETCH		8
-#define ARM64_HAS_UAO				9
-#define ARM64_ALT_PAN_NOT_UAO			10
-#define ARM64_HAS_VIRT_HOST_EXTN		11
-#define ARM64_WORKAROUND_CAVIUM_27456		12
-#define ARM64_HAS_32BIT_EL0			13
-#define ARM64_HYP_OFFSET_LOW			14
-#define ARM64_MISMATCHED_CACHE_LINE_SIZE	15
-
-#define ARM64_NCAPS				16

 #ifndef __ASSEMBLY__
+1
arch/arm64/include/asm/kvm_asm.h
···
 extern void __kvm_flush_vm_context(void);
 extern void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);
 extern void __kvm_tlb_flush_vmid(struct kvm *kvm);
+extern void __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu);

 extern int __kvm_vcpu_run(struct kvm_vcpu *vcpu);
+3
arch/arm64/include/asm/kvm_host.h
···
 	/* VTTBR value associated with above pgd and vmid */
 	u64    vttbr;

+	/* The last vcpu id that ran on each physical CPU */
+	int __percpu *last_vcpu_ran;
+
 	/* The maximum number of vCPUs depends on the used GIC model */
 	int max_vcpus;
+1 -1
arch/arm64/include/asm/kvm_mmu.h
···
 	return v;
 }

-#define kern_hyp_va(v) (typeof(v))(__kern_hyp_va((unsigned long)(v)))
+#define kern_hyp_va(v) 	((typeof(v))(__kern_hyp_va((unsigned long)(v))))

 /*
  * We currently only support a 40bit IPA.
-1
arch/arm64/include/asm/lse.h
···

 #include <linux/stringify.h>
 #include <asm/alternative.h>
-#include <asm/cpufeature.h>

 #ifdef __ASSEMBLER__
+15
arch/arm64/kvm/hyp/tlb.c
···
 	write_sysreg(0, vttbr_el2);
 }

+void __hyp_text __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu)
+{
+	struct kvm *kvm = kern_hyp_va(kern_hyp_va(vcpu)->kvm);
+
+	/* Switch to requested VMID */
+	write_sysreg(kvm->arch.vttbr, vttbr_el2);
+	isb();
+
+	asm volatile("tlbi vmalle1" : : );
+	dsb(nsh);
+	isb();
+
+	write_sysreg(0, vttbr_el2);
+}
+
 void __hyp_text __kvm_flush_vm_context(void)
 {
 	dsb(ishst);
+1
arch/nios2/kernel/time.c
···
 		ret = nios2_clocksource_init(timer);
 		break;
 	default:
+		ret = 0;
 		break;
 	}
+2
arch/openrisc/include/asm/cache.h
···
  * they shouldn't be hard-coded!
  */

+#define __ro_after_init __read_mostly
+
 #define L1_CACHE_BYTES 16
 #define L1_CACHE_SHIFT 4
+3 -3
arch/s390/hypfs/hypfs_diag.c
···
 static int diag224_get_name_table(void)
 {
 	/* memory must be below 2GB */
-	diag224_cpu_names = kmalloc(PAGE_SIZE, GFP_KERNEL | GFP_DMA);
+	diag224_cpu_names = (char *) __get_free_page(GFP_KERNEL | GFP_DMA);
 	if (!diag224_cpu_names)
 		return -ENOMEM;
 	if (diag224(diag224_cpu_names)) {
-		kfree(diag224_cpu_names);
+		free_page((unsigned long) diag224_cpu_names);
 		return -EOPNOTSUPP;
 	}
 	EBCASC(diag224_cpu_names + 16, (*diag224_cpu_names + 1) * 16);
···

 static void diag224_delete_name_table(void)
 {
-	kfree(diag224_cpu_names);
+	free_page((unsigned long) diag224_cpu_names);
 }

 static int diag224_idx2name(int index, char *name)
+2
arch/s390/kernel/vmlinux.lds.S
···

 	. = ALIGN(PAGE_SIZE);
 	__start_ro_after_init = .;
+	__start_data_ro_after_init = .;
 	.data..ro_after_init : {
 		*(.data..ro_after_init)
 	}
+	__end_data_ro_after_init = .;
 	EXCEPTION_TABLE(16)
 	. = ALIGN(PAGE_SIZE);
 	__end_ro_after_init = .;
+1 -1
arch/s390/pci/pci_dma.c
···
 	dma_addr_t dma_addr_base, dma_addr;
 	int flags = ZPCI_PTE_VALID;
 	struct scatterlist *s;
-	unsigned long pa;
+	unsigned long pa = 0;
 	int ret;

 	size = PAGE_ALIGN(size);
+2 -2
arch/x86/crypto/aesni-intel_glue.c
···
 	unsigned long auth_tag_len = crypto_aead_authsize(tfm);
 	u8 iv[16] __attribute__ ((__aligned__(AESNI_ALIGN)));
 	struct scatter_walk src_sg_walk;
-	struct scatter_walk dst_sg_walk;
+	struct scatter_walk dst_sg_walk = {};
 	unsigned int i;

 	/* Assuming we are supporting rfc4106 64-bit extended */
···
 	u8 iv[16] __attribute__ ((__aligned__(AESNI_ALIGN)));
 	u8 authTag[16];
 	struct scatter_walk src_sg_walk;
-	struct scatter_walk dst_sg_walk;
+	struct scatter_walk dst_sg_walk = {};
 	unsigned int i;

 	if (unlikely(req->assoclen != 16 && req->assoclen != 20))
+4 -1
arch/x86/kernel/apm_32.c
···

 	if (apm_info.get_power_status_broken)
 		return APM_32_UNSUPPORTED;
-	if (apm_bios_call(&call))
+	if (apm_bios_call(&call)) {
+		if (!call.err)
+			return APM_NO_ERROR;
 		return call.err;
+	}
 	*status = call.ebx;
 	*bat = call.ecx;
 	if (apm_info.get_power_status_swabinminutes) {
+2 -8
drivers/acpi/acpi_apd.c
···
 	int ret;

 	if (!dev_desc) {
-		pdev = acpi_create_platform_device(adev);
+		pdev = acpi_create_platform_device(adev, NULL);
 		return IS_ERR_OR_NULL(pdev) ? PTR_ERR(pdev) : 1;
 	}
···
 		goto err_out;
 	}

-	if (dev_desc->properties) {
-		ret = device_add_properties(&adev->dev, dev_desc->properties);
-		if (ret)
-			goto err_out;
-	}
-
 	adev->driver_data = pdata;
-	pdev = acpi_create_platform_device(adev);
+	pdev = acpi_create_platform_device(adev, dev_desc->properties);
 	if (!IS_ERR_OR_NULL(pdev))
 		return 1;
+2 -8
drivers/acpi/acpi_lpss.c
···

 	dev_desc = (const struct lpss_device_desc *)id->driver_data;
 	if (!dev_desc) {
-		pdev = acpi_create_platform_device(adev);
+		pdev = acpi_create_platform_device(adev, NULL);
 		return IS_ERR_OR_NULL(pdev) ? PTR_ERR(pdev) : 1;
 	}
 	pdata = kzalloc(sizeof(*pdata), GFP_KERNEL);
···
 		goto err_out;
 	}

-	if (dev_desc->properties) {
-		ret = device_add_properties(&adev->dev, dev_desc->properties);
-		if (ret)
-			goto err_out;
-	}
-
 	adev->driver_data = pdata;
-	pdev = acpi_create_platform_device(adev);
+	pdev = acpi_create_platform_device(adev, dev_desc->properties);
 	if (!IS_ERR_OR_NULL(pdev)) {
 		return 1;
 	}
+4 -1
drivers/acpi/acpi_platform.c
··· 50 50 /** 51 51 * acpi_create_platform_device - Create platform device for ACPI device node 52 52 * @adev: ACPI device node to create a platform device for. 53 + * @properties: Optional collection of build-in properties. 53 54 * 54 55 * Check if the given @adev can be represented as a platform device and, if 55 56 * that's the case, create and register a platform device, populate its common ··· 58 57 * 59 58 * Name of the platform device will be the same as @adev's. 60 59 */ 61 - struct platform_device *acpi_create_platform_device(struct acpi_device *adev) 60 + struct platform_device *acpi_create_platform_device(struct acpi_device *adev, 61 + struct property_entry *properties) 62 62 { 63 63 struct platform_device *pdev = NULL; 64 64 struct platform_device_info pdevinfo; ··· 108 106 pdevinfo.res = resources; 109 107 pdevinfo.num_res = count; 110 108 pdevinfo.fwnode = acpi_fwnode_handle(adev); 109 + pdevinfo.properties = properties; 111 110 112 111 if (acpi_dma_supported(adev)) 113 112 pdevinfo.dma_mask = DMA_BIT_MASK(32);
+2 -2
drivers/acpi/dptf/int340x_thermal.c
··· 34 34 const struct acpi_device_id *id) 35 35 { 36 36 if (IS_ENABLED(CONFIG_INT340X_THERMAL)) 37 - acpi_create_platform_device(adev); 37 + acpi_create_platform_device(adev, NULL); 38 38 /* Intel SoC DTS thermal driver needs INT3401 to set IRQ descriptor */ 39 39 else if (IS_ENABLED(CONFIG_INTEL_SOC_DTS_THERMAL) && 40 40 id->driver_data == INT3401_DEVICE) 41 - acpi_create_platform_device(adev); 41 + acpi_create_platform_device(adev, NULL); 42 42 return 1; 43 43 } 44 44
+1 -1
drivers/acpi/scan.c
··· 1734 1734 &is_spi_i2c_slave); 1735 1735 acpi_dev_free_resource_list(&resource_list); 1736 1736 if (!is_spi_i2c_slave) { 1737 - acpi_create_platform_device(device); 1737 + acpi_create_platform_device(device, NULL); 1738 1738 acpi_device_set_enumerated(device); 1739 1739 } else { 1740 1740 blocking_notifier_call_chain(&acpi_reconfig_chain,
+3 -2
drivers/base/dd.c
··· 324 324 { 325 325 int ret = -EPROBE_DEFER; 326 326 int local_trigger_count = atomic_read(&deferred_trigger_count); 327 - bool test_remove = IS_ENABLED(CONFIG_DEBUG_TEST_DRIVER_REMOVE); 327 + bool test_remove = IS_ENABLED(CONFIG_DEBUG_TEST_DRIVER_REMOVE) && 328 + !drv->suppress_bind_attrs; 328 329 329 330 if (defer_all_probes) { 330 331 /* ··· 384 383 if (test_remove) { 385 384 test_remove = false; 386 385 387 - if (dev->bus && dev->bus->remove) 386 + if (dev->bus->remove) 388 387 dev->bus->remove(dev); 389 388 else if (drv->remove) 390 389 drv->remove(dev);
+4 -4
drivers/base/power/main.c
··· 1027 1027 TRACE_DEVICE(dev); 1028 1028 TRACE_SUSPEND(0); 1029 1029 1030 + dpm_wait_for_children(dev, async); 1031 + 1030 1032 if (async_error) 1031 1033 goto Complete; 1032 1034 ··· 1039 1037 1040 1038 if (dev->power.syscore || dev->power.direct_complete) 1041 1039 goto Complete; 1042 - 1043 - dpm_wait_for_children(dev, async); 1044 1040 1045 1041 if (dev->pm_domain) { 1046 1042 info = "noirq power domain "; ··· 1174 1174 1175 1175 __pm_runtime_disable(dev, false); 1176 1176 1177 + dpm_wait_for_children(dev, async); 1178 + 1177 1179 if (async_error) 1178 1180 goto Complete; 1179 1181 ··· 1186 1184 1187 1185 if (dev->power.syscore || dev->power.direct_complete) 1188 1186 goto Complete; 1189 - 1190 - dpm_wait_for_children(dev, async); 1191 1187 1192 1188 if (dev->pm_domain) { 1193 1189 info = "late power domain ";
-41
drivers/block/aoe/aoecmd.c
··· 853 853 return n; 854 854 } 855 855 856 - /* This can be removed if we are certain that no users of the block 857 - * layer will ever use zero-count pages in bios. Otherwise we have to 858 - * protect against the put_page sometimes done by the network layer. 859 - * 860 - * See http://oss.sgi.com/archives/xfs/2007-01/msg00594.html for 861 - * discussion. 862 - * 863 - * We cannot use get_page in the workaround, because it insists on a 864 - * positive page count as a precondition. So we use _refcount directly. 865 - */ 866 - static void 867 - bio_pageinc(struct bio *bio) 868 - { 869 - struct bio_vec bv; 870 - struct page *page; 871 - struct bvec_iter iter; 872 - 873 - bio_for_each_segment(bv, bio, iter) { 874 - /* Non-zero page count for non-head members of 875 - * compound pages is no longer allowed by the kernel. 876 - */ 877 - page = compound_head(bv.bv_page); 878 - page_ref_inc(page); 879 - } 880 - } 881 - 882 - static void 883 - bio_pagedec(struct bio *bio) 884 - { 885 - struct page *page; 886 - struct bio_vec bv; 887 - struct bvec_iter iter; 888 - 889 - bio_for_each_segment(bv, bio, iter) { 890 - page = compound_head(bv.bv_page); 891 - page_ref_dec(page); 892 - } 893 - } 894 - 895 856 static void 896 857 bufinit(struct buf *buf, struct request *rq, struct bio *bio) 897 858 { ··· 860 899 buf->rq = rq; 861 900 buf->bio = bio; 862 901 buf->iter = bio->bi_iter; 863 - bio_pageinc(bio); 864 902 } 865 903 866 904 static struct buf * ··· 1087 1127 if (buf == d->ip.buf) 1088 1128 d->ip.buf = NULL; 1089 1129 rq = buf->rq; 1090 - bio_pagedec(buf->bio); 1091 1130 mempool_free(buf, d->bufpool); 1092 1131 n = (unsigned long) rq->special; 1093 1132 rq->special = (void *) --n;
+1 -1
drivers/block/drbd/drbd_main.c
··· 1871 1871 drbd_update_congested(connection); 1872 1872 } 1873 1873 do { 1874 - rv = kernel_sendmsg(sock, &msg, &iov, 1, size); 1874 + rv = kernel_sendmsg(sock, &msg, &iov, 1, iov.iov_len); 1875 1875 if (rv == -EAGAIN) { 1876 1876 if (we_should_drop_the_connection(connection, sock)) 1877 1877 break;
+1 -1
drivers/block/nbd.c
··· 599 599 return -EINVAL; 600 600 601 601 sreq = blk_mq_alloc_request(bdev_get_queue(bdev), WRITE, 0); 602 - if (!sreq) 602 + if (IS_ERR(sreq)) 603 603 return -ENOMEM; 604 604 605 605 mutex_unlock(&nbd->tx_lock);
-3
drivers/char/ppdev.c
··· 748 748 } 749 749 750 750 if (pp->pdev) { 751 - const char *name = pp->pdev->name; 752 - 753 751 parport_unregister_device(pp->pdev); 754 - kfree(name); 755 752 pp->pdev = NULL; 756 753 pr_debug(CHRDEV "%x: unregistered pardevice\n", minor); 757 754 }
+8 -5
drivers/clk/clk-qoriq.c
··· 700 700 struct mux_hwclock *hwc, 701 701 const struct clk_ops *ops, 702 702 unsigned long min_rate, 703 + unsigned long max_rate, 703 704 unsigned long pct80_rate, 704 705 const char *fmt, int idx) 705 706 { ··· 728 727 rate > pct80_rate) 729 728 continue; 730 729 if (rate < min_rate) 730 + continue; 731 + if (rate > max_rate) 731 732 continue; 732 733 733 734 parent_names[j] = div->name; ··· 762 759 struct mux_hwclock *hwc; 763 760 const struct clockgen_pll_div *div; 764 761 unsigned long plat_rate, min_rate; 765 - u64 pct80_rate; 762 + u64 max_rate, pct80_rate; 766 763 u32 clksel; 767 764 768 765 hwc = kzalloc(sizeof(*hwc), GFP_KERNEL); ··· 790 787 return NULL; 791 788 } 792 789 793 - pct80_rate = clk_get_rate(div->clk); 794 - pct80_rate *= 8; 790 + max_rate = clk_get_rate(div->clk); 791 + pct80_rate = max_rate * 8; 795 792 do_div(pct80_rate, 10); 796 793 797 794 plat_rate = clk_get_rate(cg->pll[PLATFORM_PLL].div[PLL_DIV1].clk); ··· 801 798 else 802 799 min_rate = plat_rate / 2; 803 800 804 - return create_mux_common(cg, hwc, &cmux_ops, min_rate, 801 + return create_mux_common(cg, hwc, &cmux_ops, min_rate, max_rate, 805 802 pct80_rate, "cg-cmux%d", idx); 806 803 } 807 804 ··· 816 813 hwc->reg = cg->regs + 0x20 * idx + 0x10; 817 814 hwc->info = cg->info.hwaccel[idx]; 818 815 819 - return create_mux_common(cg, hwc, &hwaccel_ops, 0, 0, 816 + return create_mux_common(cg, hwc, &hwaccel_ops, 0, ULONG_MAX, 0, 820 817 "cg-hwaccel%d", idx); 821 818 } 822 819
+4 -6
drivers/clk/clk-xgene.c
··· 463 463 struct xgene_clk *pclk = to_xgene_clk(hw); 464 464 unsigned long flags = 0; 465 465 u32 data; 466 - phys_addr_t reg; 467 466 468 467 if (pclk->lock) 469 468 spin_lock_irqsave(pclk->lock, flags); 470 469 471 470 if (pclk->param.csr_reg != NULL) { 472 471 pr_debug("%s clock enabled\n", clk_hw_get_name(hw)); 473 - reg = __pa(pclk->param.csr_reg); 474 472 /* First enable the clock */ 475 473 data = xgene_clk_read(pclk->param.csr_reg + 476 474 pclk->param.reg_clk_offset); 477 475 data |= pclk->param.reg_clk_mask; 478 476 xgene_clk_write(data, pclk->param.csr_reg + 479 477 pclk->param.reg_clk_offset); 480 - pr_debug("%s clock PADDR base %pa clk offset 0x%08X mask 0x%08X value 0x%08X\n", 481 - clk_hw_get_name(hw), &reg, 478 + pr_debug("%s clk offset 0x%08X mask 0x%08X value 0x%08X\n", 479 + clk_hw_get_name(hw), 482 480 pclk->param.reg_clk_offset, pclk->param.reg_clk_mask, 483 481 data); 484 482 ··· 486 488 data &= ~pclk->param.reg_csr_mask; 487 489 xgene_clk_write(data, pclk->param.csr_reg + 488 490 pclk->param.reg_csr_offset); 489 - pr_debug("%s CSR RESET PADDR base %pa csr offset 0x%08X mask 0x%08X value 0x%08X\n", 490 - clk_hw_get_name(hw), &reg, 491 + pr_debug("%s csr offset 0x%08X mask 0x%08X value 0x%08X\n", 492 + clk_hw_get_name(hw), 491 493 pclk->param.reg_csr_offset, pclk->param.reg_csr_mask, 492 494 data); 493 495 }
+6 -2
drivers/clk/imx/clk-pllv3.c
··· 223 223 temp64 *= mfn; 224 224 do_div(temp64, mfd); 225 225 226 - return (parent_rate * div) + (u32)temp64; 226 + return parent_rate * div + (unsigned long)temp64; 227 227 } 228 228 229 229 static long clk_pllv3_av_round_rate(struct clk_hw *hw, unsigned long rate, ··· 247 247 do_div(temp64, parent_rate); 248 248 mfn = temp64; 249 249 250 - return parent_rate * div + parent_rate * mfn / mfd; 250 + temp64 = (u64)parent_rate; 251 + temp64 *= mfn; 252 + do_div(temp64, mfd); 253 + 254 + return parent_rate * div + (unsigned long)temp64; 251 255 } 252 256 253 257 static int clk_pllv3_av_set_rate(struct clk_hw *hw, unsigned long rate,
+1 -1
drivers/clk/mmp/clk-of-mmp2.c
··· 313 313 } 314 314 315 315 pxa_unit->apmu_base = of_iomap(np, 1); 316 - if (!pxa_unit->mpmu_base) { 316 + if (!pxa_unit->apmu_base) { 317 317 pr_err("failed to map apmu registers\n"); 318 318 return; 319 319 }
+1 -1
drivers/clk/mmp/clk-of-pxa168.c
··· 262 262 } 263 263 264 264 pxa_unit->apmu_base = of_iomap(np, 1); 265 - if (!pxa_unit->mpmu_base) { 265 + if (!pxa_unit->apmu_base) { 266 266 pr_err("failed to map apmu registers\n"); 267 267 return; 268 268 }
+2 -2
drivers/clk/mmp/clk-of-pxa910.c
··· 282 282 } 283 283 284 284 pxa_unit->apmu_base = of_iomap(np, 1); 285 - if (!pxa_unit->mpmu_base) { 285 + if (!pxa_unit->apmu_base) { 286 286 pr_err("failed to map apmu registers\n"); 287 287 return; 288 288 } ··· 294 294 } 295 295 296 296 pxa_unit->apbcp_base = of_iomap(np, 3); 297 - if (!pxa_unit->mpmu_base) { 297 + if (!pxa_unit->apbcp_base) { 298 298 pr_err("failed to map apbcp registers\n"); 299 299 return; 300 300 }
+1 -4
drivers/clk/rockchip/clk-ddr.c
··· 144 144 ddrclk->ddr_flag = ddr_flag; 145 145 146 146 clk = clk_register(NULL, &ddrclk->hw); 147 - if (IS_ERR(clk)) { 148 - pr_err("%s: could not register ddrclk %s\n", __func__, name); 147 + if (IS_ERR(clk)) 149 148 kfree(ddrclk); 150 - return NULL; 151 - } 152 149 153 150 return clk; 154 151 }
+14 -8
drivers/clk/samsung/clk-exynos-clkout.c
··· 132 132 pr_err("%s: failed to register clkout clock\n", __func__); 133 133 } 134 134 135 + /* 136 + * We use CLK_OF_DECLARE_DRIVER initialization method to avoid setting 137 + * the OF_POPULATED flag on the pmu device tree node, so later the 138 + * Exynos PMU platform device can be properly probed with PMU driver. 139 + */ 140 + 135 141 static void __init exynos4_clkout_init(struct device_node *node) 136 142 { 137 143 exynos_clkout_init(node, EXYNOS4_CLKOUT_MUX_MASK); 138 144 } 139 - CLK_OF_DECLARE(exynos4210_clkout, "samsung,exynos4210-pmu", 145 + CLK_OF_DECLARE_DRIVER(exynos4210_clkout, "samsung,exynos4210-pmu", 140 146 exynos4_clkout_init); 141 - CLK_OF_DECLARE(exynos4212_clkout, "samsung,exynos4212-pmu", 147 + CLK_OF_DECLARE_DRIVER(exynos4212_clkout, "samsung,exynos4212-pmu", 142 148 exynos4_clkout_init); 143 - CLK_OF_DECLARE(exynos4412_clkout, "samsung,exynos4412-pmu", 149 + CLK_OF_DECLARE_DRIVER(exynos4412_clkout, "samsung,exynos4412-pmu", 144 150 exynos4_clkout_init); 145 - CLK_OF_DECLARE(exynos3250_clkout, "samsung,exynos3250-pmu", 151 + CLK_OF_DECLARE_DRIVER(exynos3250_clkout, "samsung,exynos3250-pmu", 146 152 exynos4_clkout_init); 147 153 148 154 static void __init exynos5_clkout_init(struct device_node *node) 149 155 { 150 156 exynos_clkout_init(node, EXYNOS5_CLKOUT_MUX_MASK); 151 157 } 152 - CLK_OF_DECLARE(exynos5250_clkout, "samsung,exynos5250-pmu", 158 + CLK_OF_DECLARE_DRIVER(exynos5250_clkout, "samsung,exynos5250-pmu", 153 159 exynos5_clkout_init); 154 - CLK_OF_DECLARE(exynos5410_clkout, "samsung,exynos5410-pmu", 160 + CLK_OF_DECLARE_DRIVER(exynos5410_clkout, "samsung,exynos5410-pmu", 155 161 exynos5_clkout_init); 156 - CLK_OF_DECLARE(exynos5420_clkout, "samsung,exynos5420-pmu", 162 + CLK_OF_DECLARE_DRIVER(exynos5420_clkout, "samsung,exynos5420-pmu", 157 163 exynos5_clkout_init); 158 - CLK_OF_DECLARE(exynos5433_clkout, "samsung,exynos5433-pmu", 164 + CLK_OF_DECLARE_DRIVER(exynos5433_clkout, "samsung,exynos5433-pmu", 159 165 exynos5_clkout_init);
+4 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_acp.c
··· 395 395 { 396 396 int i, ret; 397 397 struct device *dev; 398 - 399 398 struct amdgpu_device *adev = (struct amdgpu_device *)handle; 399 + 400 + /* return early if no ACP */ 401 + if (!adev->acp.acp_genpd) 402 + return 0; 400 403 401 404 for (i = 0; i < ACP_DEVS ; i++) { 402 405 dev = get_mfd_cell_dev(adev->acp.acp_cell[i].name, i);
+11 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c
··· 809 809 if (!adev->pm.fw) { 810 810 switch (adev->asic_type) { 811 811 case CHIP_TOPAZ: 812 - strcpy(fw_name, "amdgpu/topaz_smc.bin"); 812 + if (((adev->pdev->device == 0x6900) && (adev->pdev->revision == 0x81)) || 813 + ((adev->pdev->device == 0x6900) && (adev->pdev->revision == 0x83)) || 814 + ((adev->pdev->device == 0x6907) && (adev->pdev->revision == 0x87))) 815 + strcpy(fw_name, "amdgpu/topaz_k_smc.bin"); 816 + else 817 + strcpy(fw_name, "amdgpu/topaz_smc.bin"); 813 818 break; 814 819 case CHIP_TONGA: 815 - strcpy(fw_name, "amdgpu/tonga_smc.bin"); 820 + if (((adev->pdev->device == 0x6939) && (adev->pdev->revision == 0xf1)) || 821 + ((adev->pdev->device == 0x6938) && (adev->pdev->revision == 0xf1))) 822 + strcpy(fw_name, "amdgpu/tonga_k_smc.bin"); 823 + else 824 + strcpy(fw_name, "amdgpu/tonga_smc.bin"); 816 825 break; 817 826 case CHIP_FIJI: 818 827 strcpy(fw_name, "amdgpu/fiji_smc.bin");
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
··· 769 769 { 770 770 struct amdgpu_connector *amdgpu_connector = to_amdgpu_connector(connector); 771 771 772 - if (amdgpu_connector->ddc_bus->has_aux) { 772 + if (amdgpu_connector->ddc_bus && amdgpu_connector->ddc_bus->has_aux) { 773 773 drm_dp_aux_unregister(&amdgpu_connector->ddc_bus->aux); 774 774 amdgpu_connector->ddc_bus->has_aux = false; 775 775 }
+24 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
··· 742 742 743 743 static int __init amdgpu_init(void) 744 744 { 745 - amdgpu_sync_init(); 746 - amdgpu_fence_slab_init(); 745 + int r; 746 + 747 + r = amdgpu_sync_init(); 748 + if (r) 749 + goto error_sync; 750 + 751 + r = amdgpu_fence_slab_init(); 752 + if (r) 753 + goto error_fence; 754 + 755 + r = amd_sched_fence_slab_init(); 756 + if (r) 757 + goto error_sched; 758 + 747 759 if (vgacon_text_force()) { 748 760 DRM_ERROR("VGACON disables amdgpu kernel modesetting.\n"); 749 761 return -EINVAL; ··· 767 755 amdgpu_register_atpx_handler(); 768 756 /* let modprobe override vga console setting */ 769 757 return drm_pci_init(driver, pdriver); 758 + 759 + error_sched: 760 + amdgpu_fence_slab_fini(); 761 + 762 + error_fence: 763 + amdgpu_sync_fini(); 764 + 765 + error_sync: 766 + return r; 770 767 } 771 768 772 769 static void __exit amdgpu_exit(void) ··· 784 763 drm_pci_exit(driver, pdriver); 785 764 amdgpu_unregister_atpx_handler(); 786 765 amdgpu_sync_fini(); 766 + amd_sched_fence_slab_fini(); 787 767 amdgpu_fence_slab_fini(); 788 768 } 789 769
+2
drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
··· 99 99 100 100 if ((amdgpu_runtime_pm != 0) && 101 101 amdgpu_has_atpx() && 102 + (amdgpu_is_atpx_hybrid() || 103 + amdgpu_has_atpx_dgpu_power_cntl()) && 102 104 ((flags & AMD_IS_APU) == 0)) 103 105 flags |= AMD_IS_PX; 104 106
+2
drivers/gpu/drm/amd/amdgpu/vi.c
··· 80 80 #include "dce_virtual.h" 81 81 82 82 MODULE_FIRMWARE("amdgpu/topaz_smc.bin"); 83 + MODULE_FIRMWARE("amdgpu/topaz_k_smc.bin"); 83 84 MODULE_FIRMWARE("amdgpu/tonga_smc.bin"); 85 + MODULE_FIRMWARE("amdgpu/tonga_k_smc.bin"); 84 86 MODULE_FIRMWARE("amdgpu/fiji_smc.bin"); 85 87 MODULE_FIRMWARE("amdgpu/polaris10_smc.bin"); 86 88 MODULE_FIRMWARE("amdgpu/polaris10_smc_sk.bin");
+1 -1
drivers/gpu/drm/amd/powerplay/hwmgr/hardwaremanager.c
··· 272 272 PHM_FUNC_CHECK(hwmgr); 273 273 274 274 if (hwmgr->hwmgr_func->check_smc_update_required_for_display_configuration == NULL) 275 - return -EINVAL; 275 + return false; 276 276 277 277 return hwmgr->hwmgr_func->check_smc_update_required_for_display_configuration(hwmgr); 278 278 }
+4 -2
drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c
··· 710 710 uint32_t vol; 711 711 int ret = 0; 712 712 713 - if (hwmgr->chip_id < CHIP_POLARIS10) { 714 - atomctrl_get_voltage_evv_on_sclk(hwmgr, voltage_type, sclk, id, voltage); 713 + if (hwmgr->chip_id < CHIP_TONGA) { 714 + ret = atomctrl_get_voltage_evv(hwmgr, id, voltage); 715 + } else if (hwmgr->chip_id < CHIP_POLARIS10) { 716 + ret = atomctrl_get_voltage_evv_on_sclk(hwmgr, voltage_type, sclk, id, voltage); 715 717 if (*voltage >= 2000 || *voltage == 0) 716 718 *voltage = 1150; 717 719 } else {
+45 -25
drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
··· 1474 1474 struct phm_ppt_v1_clock_voltage_dependency_table *sclk_table = NULL; 1475 1475 1476 1476 1477 - if (table_info == NULL) 1478 - return -EINVAL; 1479 - 1480 - sclk_table = table_info->vdd_dep_on_sclk; 1481 - 1482 1477 for (i = 0; i < SMU7_MAX_LEAKAGE_COUNT; i++) { 1483 1478 vv_id = ATOM_VIRTUAL_VOLTAGE_ID0 + i; 1484 1479 1485 1480 if (data->vdd_gfx_control == SMU7_VOLTAGE_CONTROL_BY_SVID2) { 1486 - if (0 == phm_get_sclk_for_voltage_evv(hwmgr, 1481 + if ((hwmgr->pp_table_version == PP_TABLE_V1) 1482 + && !phm_get_sclk_for_voltage_evv(hwmgr, 1487 1483 table_info->vddgfx_lookup_table, vv_id, &sclk)) { 1488 1484 if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps, 1489 1485 PHM_PlatformCaps_ClockStretcher)) { 1486 + if (table_info == NULL) 1487 + return -EINVAL; 1488 + sclk_table = table_info->vdd_dep_on_sclk; 1489 + 1490 1490 for (j = 1; j < sclk_table->count; j++) { 1491 1491 if (sclk_table->entries[j].clk == sclk && 1492 1492 sclk_table->entries[j].cks_enable == 0) { ··· 1512 1512 } 1513 1513 } 1514 1514 } else { 1515 - 1516 1515 if ((hwmgr->pp_table_version == PP_TABLE_V0) 1517 1516 || !phm_get_sclk_for_voltage_evv(hwmgr, 1518 1517 table_info->vddc_lookup_table, vv_id, &sclk)) { 1519 1518 if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps, 1520 1519 PHM_PlatformCaps_ClockStretcher)) { 1520 + if (table_info == NULL) 1521 + return -EINVAL; 1522 + sclk_table = table_info->vdd_dep_on_sclk; 1523 + 1521 1524 for (j = 1; j < sclk_table->count; j++) { 1522 1525 if (sclk_table->entries[j].clk == sclk && 1523 1526 sclk_table->entries[j].cks_enable == 0) { ··· 2150 2147 struct smu7_hwmgr *data = (struct smu7_hwmgr *)(hwmgr->backend); 2151 2148 2152 2149 if (tab) { 2150 + vddc = tab->vddc; 2153 2151 smu7_patch_ppt_v0_with_vdd_leakage(hwmgr, &vddc, 2154 2152 &data->vddc_leakage); 2155 2153 tab->vddc = vddc; 2154 + vddci = tab->vddci; 2156 2155 smu7_patch_ppt_v0_with_vdd_leakage(hwmgr, &vddci, 2157 2156 &data->vddci_leakage); 2158 2157 tab->vddci = vddci; ··· 4252 4247 { 4253 4248 struct phm_ppt_v1_information *table_info = 4254 4249 (struct phm_ppt_v1_information *)hwmgr->pptable; 4255 - struct phm_ppt_v1_clock_voltage_dependency_table *dep_sclk_table; 4250 + struct phm_ppt_v1_clock_voltage_dependency_table *dep_sclk_table = NULL; 4251 + struct phm_clock_voltage_dependency_table *sclk_table; 4256 4252 int i; 4257 4253 4258 - if (table_info == NULL) 4259 - return -EINVAL; 4260 - 4261 - dep_sclk_table = table_info->vdd_dep_on_sclk; 4262 - 4263 - for (i = 0; i < dep_sclk_table->count; i++) { 4264 - clocks->clock[i] = dep_sclk_table->entries[i].clk; 4265 - clocks->count++; 4254 + if (hwmgr->pp_table_version == PP_TABLE_V1) { 4255 + if (table_info == NULL || table_info->vdd_dep_on_sclk == NULL) 4256 + return -EINVAL; 4257 + dep_sclk_table = table_info->vdd_dep_on_sclk; 4258 + for (i = 0; i < dep_sclk_table->count; i++) { 4259 + clocks->clock[i] = dep_sclk_table->entries[i].clk; 4260 + clocks->count++; 4261 + } 4262 + } else if (hwmgr->pp_table_version == PP_TABLE_V0) { 4263 + sclk_table = hwmgr->dyn_state.vddc_dependency_on_sclk; 4264 + for (i = 0; i < sclk_table->count; i++) { 4265 + clocks->clock[i] = sclk_table->entries[i].clk; 4266 + clocks->count++; 4267 + } 4266 4268 } 4269 + 4267 4270 return 0; 4268 4271 } 4269 4272 ··· 4293 4280 (struct phm_ppt_v1_information *)hwmgr->pptable; 4294 4281 struct phm_ppt_v1_clock_voltage_dependency_table *dep_mclk_table; 4295 4282 int i; 4283 + struct phm_clock_voltage_dependency_table *mclk_table; 4296 4284 4297 - if (table_info == NULL) 4298 - return -EINVAL; 4299 - 4300 - dep_mclk_table = table_info->vdd_dep_on_mclk; 4301 - 4302 - for (i = 0; i < dep_mclk_table->count; i++) { 4303 - clocks->clock[i] = dep_mclk_table->entries[i].clk; 4304 - clocks->latency[i] = smu7_get_mem_latency(hwmgr, 4285 + if (hwmgr->pp_table_version == PP_TABLE_V1) { 4286 + if (table_info == NULL) 4287 + return -EINVAL; 4288 + dep_mclk_table = table_info->vdd_dep_on_mclk; 4289 + for (i = 0; i < dep_mclk_table->count; i++) { 4290 + clocks->clock[i] = dep_mclk_table->entries[i].clk; 4291 + clocks->latency[i] = smu7_get_mem_latency(hwmgr, 4305 4292 dep_mclk_table->entries[i].clk); 4306 - clocks->count++; 4293 + clocks->count++; 4294 + } 4295 + } else if (hwmgr->pp_table_version == PP_TABLE_V0) { 4296 + mclk_table = hwmgr->dyn_state.vddc_dependency_on_mclk; 4297 + for (i = 0; i < mclk_table->count; i++) { 4298 + clocks->clock[i] = mclk_table->entries[i].clk; 4299 + clocks->count++; 4300 + } 4307 4301 } 4308 4302 return 0; 4309 4303 }
+3 -3
drivers/gpu/drm/amd/powerplay/hwmgr/smu7_thermal.c
··· 30 30 struct phm_fan_speed_info *fan_speed_info) 31 31 { 32 32 if (hwmgr->thermal_controller.fanInfo.bNoFan) 33 - return 0; 33 + return -ENODEV; 34 34 35 35 fan_speed_info->supports_percent_read = true; 36 36 fan_speed_info->supports_percent_write = true; ··· 60 60 uint64_t tmp64; 61 61 62 62 if (hwmgr->thermal_controller.fanInfo.bNoFan) 63 - return 0; 63 + return -ENODEV; 64 64 65 65 duty100 = PHM_READ_VFPF_INDIRECT_FIELD(hwmgr->device, CGS_IND_REG__SMC, 66 66 CG_FDO_CTRL1, FMAX_DUTY100); ··· 89 89 if (hwmgr->thermal_controller.fanInfo.bNoFan || 90 90 (hwmgr->thermal_controller.fanInfo. 91 91 ucTachometerPulsesPerRevolution == 0)) 92 - return 0; 92 + return -ENODEV; 93 93 94 94 tach_period = PHM_READ_VFPF_INDIRECT_FIELD(hwmgr->device, CGS_IND_REG__SMC, 95 95 CG_TACH_STATUS, TACH_PERIOD);
-13
drivers/gpu/drm/amd/scheduler/gpu_scheduler.c
··· 34 34 static void amd_sched_wakeup(struct amd_gpu_scheduler *sched); 35 35 static void amd_sched_process_job(struct dma_fence *f, struct dma_fence_cb *cb); 36 36 37 - struct kmem_cache *sched_fence_slab; 38 - atomic_t sched_fence_slab_ref = ATOMIC_INIT(0); 39 - 40 37 /* Initialize a given run queue struct */ 41 38 static void amd_sched_rq_init(struct amd_sched_rq *rq) 42 39 { ··· 616 619 INIT_LIST_HEAD(&sched->ring_mirror_list); 617 620 spin_lock_init(&sched->job_list_lock); 618 621 atomic_set(&sched->hw_rq_count, 0); 619 - if (atomic_inc_return(&sched_fence_slab_ref) == 1) { 620 - sched_fence_slab = kmem_cache_create( 621 - "amd_sched_fence", sizeof(struct amd_sched_fence), 0, 622 - SLAB_HWCACHE_ALIGN, NULL); 623 - if (!sched_fence_slab) 624 - return -ENOMEM; 625 - } 626 622 627 623 /* Each scheduler will run on a seperate kernel thread */ 628 624 sched->thread = kthread_run(amd_sched_main, sched, sched->name); ··· 636 646 { 637 647 if (sched->thread) 638 648 kthread_stop(sched->thread); 639 - rcu_barrier(); 640 - if (atomic_dec_and_test(&sched_fence_slab_ref)) 641 - kmem_cache_destroy(sched_fence_slab); 642 649 }
+3 -3
drivers/gpu/drm/amd/scheduler/gpu_scheduler.h
··· 30 30 struct amd_gpu_scheduler; 31 31 struct amd_sched_rq; 32 32 33 - extern struct kmem_cache *sched_fence_slab; 34 - extern atomic_t sched_fence_slab_ref; 35 - 36 33 /** 37 34 * A scheduler entity is a wrapper around a job queue or a group 38 35 * of other entities. Entities take turns emitting jobs from their ··· 141 144 void amd_sched_entity_fini(struct amd_gpu_scheduler *sched, 142 145 struct amd_sched_entity *entity); 143 146 void amd_sched_entity_push_job(struct amd_sched_job *sched_job); 147 + 148 + int amd_sched_fence_slab_init(void); 149 + void amd_sched_fence_slab_fini(void); 144 150 145 151 struct amd_sched_fence *amd_sched_fence_create( 146 152 struct amd_sched_entity *s_entity, void *owner);
+19
drivers/gpu/drm/amd/scheduler/sched_fence.c
··· 27 27 #include <drm/drmP.h> 28 28 #include "gpu_scheduler.h" 29 29 30 + static struct kmem_cache *sched_fence_slab; 31 + 32 + int amd_sched_fence_slab_init(void) 33 + { 34 + sched_fence_slab = kmem_cache_create( 35 + "amd_sched_fence", sizeof(struct amd_sched_fence), 0, 36 + SLAB_HWCACHE_ALIGN, NULL); 37 + if (!sched_fence_slab) 38 + return -ENOMEM; 39 + 40 + return 0; 41 + } 42 + 43 + void amd_sched_fence_slab_fini(void) 44 + { 45 + rcu_barrier(); 46 + kmem_cache_destroy(sched_fence_slab); 47 + } 48 + 30 49 struct amd_sched_fence *amd_sched_fence_create(struct amd_sched_entity *entity, 31 50 void *owner) 32 51 {
+24 -14
drivers/gpu/drm/drm_drv.c
··· 303 303 * callbacks implemented by the driver. The driver then needs to initialize all 304 304 * the various subsystems for the drm device like memory management, vblank 305 305 * handling, modesetting support and intial output configuration plus obviously 306 - * initialize all the corresponding hardware bits. Finally when everything is up 307 - * and running and ready for userspace the device instance can be published 308 - * using drm_dev_register(). 306 + * initialize all the corresponding hardware bits. An important part of this is 307 + * also calling drm_dev_set_unique() to set the userspace-visible unique name of 308 + * this device instance. Finally when everything is up and running and ready for 309 + * userspace the device instance can be published using drm_dev_register(). 309 310 * 310 311 * There is also deprecated support for initalizing device instances using 311 312 * bus-specific helpers and the ->load() callback. But due to ··· 327 326 * it would be easy to add). Drivers can store driver-private data in the 328 327 * dev_priv field of &drm_device. 329 328 */ 330 - 331 329 - static int drm_dev_set_unique(struct drm_device *dev, const char *name) 332 - { 333 - if (!name) 334 - return -EINVAL; 335 - 336 - kfree(dev->unique); 337 - dev->unique = kstrdup(name, GFP_KERNEL); 338 - 339 - return dev->unique ? 0 : -ENOMEM; 340 - } 341 330 342 331 /** 343 332 * drm_put_dev - Unregister and release a DRM device ··· 749 759 drm_minor_unregister(dev, DRM_MINOR_CONTROL); 750 760 } 751 761 EXPORT_SYMBOL(drm_dev_unregister); 762 + 763 + /** 764 + * drm_dev_set_unique - Set the unique name of a DRM device 765 + * @dev: device of which to set the unique name 766 + * @name: unique name 767 + * 768 + * Sets the unique name of a DRM device using the specified string. Drivers 769 + * can use this at driver probe time if the unique name of the devices they 770 + * drive is static. 771 + * 772 + * Return: 0 on success or a negative error code on failure. 773 + */ 774 + int drm_dev_set_unique(struct drm_device *dev, const char *name) 775 + { 776 + kfree(dev->unique); 777 + dev->unique = kstrdup(name, GFP_KERNEL); 778 + 779 + return dev->unique ? 0 : -ENOMEM; 780 + } 781 + EXPORT_SYMBOL(drm_dev_set_unique); 752 782 753 783 /* 754 784 * DRM Core
+6 -3
drivers/gpu/drm/imx/ipuv3-crtc.c
··· 68 68 69 69 ipu_dc_disable_channel(ipu_crtc->dc); 70 70 ipu_di_disable(ipu_crtc->di); 71 + /* 72 + * Planes must be disabled before DC clock is removed, as otherwise the 73 + * attached IDMACs will be left in undefined state, possibly hanging 74 + * the IPU or even system. 75 + */ 76 + drm_atomic_helper_disable_planes_on_crtc(old_crtc_state, false); 71 77 ipu_dc_disable(ipu); 72 78 73 79 spin_lock_irq(&crtc->dev->event_lock); ··· 82 76 crtc->state->event = NULL; 83 77 } 84 78 spin_unlock_irq(&crtc->dev->event_lock); 85 - 86 - /* always disable planes on the CRTC */ 87 - drm_atomic_helper_disable_planes_on_crtc(old_crtc_state, true); 88 79 89 80 drm_crtc_vblank_off(crtc); 90 81 }
+12 -2
drivers/gpu/drm/msm/dsi/dsi_host.c
··· 139 139 140 140 u32 err_work_state; 141 141 struct work_struct err_work; 142 + struct work_struct hpd_work; 142 143 struct workqueue_struct *workqueue; 143 144 144 145 /* DSI 6G TX buffer*/ ··· 1295 1294 wmb(); /* make sure dsi controller enabled again */ 1296 1295 } 1297 1296 1297 + static void dsi_hpd_worker(struct work_struct *work) 1298 + { 1299 + struct msm_dsi_host *msm_host = 1300 + container_of(work, struct msm_dsi_host, hpd_work); 1301 + 1302 + drm_helper_hpd_irq_event(msm_host->dev); 1303 + } 1304 + 1298 1305 static void dsi_err_worker(struct work_struct *work) 1299 1306 { 1300 1307 struct msm_dsi_host *msm_host = ··· 1489 1480 1490 1481 DBG("id=%d", msm_host->id); 1491 1482 if (msm_host->dev) 1492 - drm_helper_hpd_irq_event(msm_host->dev); 1483 + queue_work(msm_host->workqueue, &msm_host->hpd_work); 1493 1484 1494 1485 return 0; 1495 1486 } ··· 1503 1494 1504 1495 DBG("id=%d", msm_host->id); 1505 1496 if (msm_host->dev) 1506 - drm_helper_hpd_irq_event(msm_host->dev); 1497 + queue_work(msm_host->workqueue, &msm_host->hpd_work); 1507 1498 1508 1499 return 0; 1509 1500 } ··· 1757 1748 /* setup workqueue */ 1758 1749 msm_host->workqueue = alloc_ordered_workqueue("dsi_drm_work", 0); 1759 1750 INIT_WORK(&msm_host->err_work, dsi_err_worker); 1751 + INIT_WORK(&msm_host->hpd_work, dsi_hpd_worker); 1760 1752 1761 1753 msm_dsi->host = &msm_host->base; 1762 1754 msm_dsi->id = msm_host->id;
+1
drivers/gpu/drm/msm/dsi/pll/dsi_pll_28nm.c
··· 521 521 .parent_names = (const char *[]){ "xo" }, 522 522 .num_parents = 1, 523 523 .name = vco_name, 524 + .flags = CLK_IGNORE_UNUSED, 524 525 .ops = &clk_ops_dsi_pll_28nm_vco, 525 526 }; 526 527 struct device *dev = &pll_28nm->pdev->dev;
+1
drivers/gpu/drm/msm/dsi/pll/dsi_pll_28nm_8960.c
··· 412 412 struct clk_init_data vco_init = { 413 413 .parent_names = (const char *[]){ "pxo" }, 414 414 .num_parents = 1, 415 + .flags = CLK_IGNORE_UNUSED, 415 416 .ops = &clk_ops_dsi_pll_28nm_vco, 416 417 }; 417 418 struct device *dev = &pll_28nm->pdev->dev;
+1
drivers/gpu/drm/msm/hdmi/hdmi_phy_8996.c
··· 702 702 .ops = &hdmi_8996_pll_ops, 703 703 .parent_names = hdmi_pll_parents, 704 704 .num_parents = ARRAY_SIZE(hdmi_pll_parents), 705 + .flags = CLK_IGNORE_UNUSED, 705 706 }; 706 707 707 708 int msm_hdmi_pll_8996_init(struct platform_device *pdev)
+1
drivers/gpu/drm/msm/hdmi/hdmi_pll_8960.c
··· 424 424 .ops = &hdmi_pll_ops, 425 425 .parent_names = hdmi_pll_parents, 426 426 .num_parents = ARRAY_SIZE(hdmi_pll_parents), 427 + .flags = CLK_IGNORE_UNUSED, 427 428 }; 428 429 429 430 int msm_hdmi_pll_8960_init(struct platform_device *pdev)
+2 -2
drivers/gpu/drm/msm/mdp/mdp5/mdp5_cfg.c
··· 272 272 .count = 2, 273 273 .base = { 0x14000, 0x16000 }, 274 274 .caps = MDP_PIPE_CAP_HFLIP | MDP_PIPE_CAP_VFLIP | 275 - MDP_PIPE_CAP_SCALE | MDP_PIPE_CAP_DECIMATION, 275 + MDP_PIPE_CAP_DECIMATION, 276 276 }, 277 277 .pipe_dma = { 278 278 .count = 1, ··· 282 282 .lm = { 283 283 .count = 2, /* LM0 and LM3 */ 284 284 .base = { 0x44000, 0x47000 }, 285 - .nb_stages = 5, 285 + .nb_stages = 8, 286 286 .max_width = 2048, 287 287 .max_height = 0xFFFF, 288 288 },
+28 -18
drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c
··· 223 223 plane_cnt++; 224 224 } 225 225 226 - /* 227 - * If there is no base layer, enable border color. 228 - * Although it's not possbile in current blend logic, 229 - * put it here as a reminder. 230 - */ 231 - if (!pstates[STAGE_BASE] && plane_cnt) { 226 + if (!pstates[STAGE_BASE]) { 232 227 ctl_blend_flags |= MDP5_CTL_BLEND_OP_FLAG_BORDER_OUT; 233 228 DBG("Border Color is enabled"); 234 229 } ··· 360 365 return pa->state->zpos - pb->state->zpos; 361 366 } 362 367 368 + /* is there a helper for this? */ 369 + static bool is_fullscreen(struct drm_crtc_state *cstate, 370 + struct drm_plane_state *pstate) 371 + { 372 + return (pstate->crtc_x <= 0) && (pstate->crtc_y <= 0) && 373 + ((pstate->crtc_x + pstate->crtc_w) >= cstate->mode.hdisplay) && 374 + ((pstate->crtc_y + pstate->crtc_h) >= cstate->mode.vdisplay); 375 + } 376 + 363 377 static int mdp5_crtc_atomic_check(struct drm_crtc *crtc, 364 378 struct drm_crtc_state *state) 365 379 { ··· 379 375 struct plane_state pstates[STAGE_MAX + 1]; 380 376 const struct mdp5_cfg_hw *hw_cfg; 381 377 const struct drm_plane_state *pstate; 382 - int cnt = 0, i; 378 + int cnt = 0, base = 0, i; 383 379 384 380 DBG("%s: check", mdp5_crtc->name); 385 381 386 - /* verify that there are not too many planes attached to crtc 387 - * and that we don't have conflicting mixer stages: 388 - */ 389 - hw_cfg = mdp5_cfg_get_hw_config(mdp5_kms->cfg); 390 382 drm_atomic_crtc_state_for_each_plane_state(plane, pstate, state) { 391 - if (cnt >= (hw_cfg->lm.nb_stages)) { 392 - dev_err(dev->dev, "too many planes!\n"); 393 - return -EINVAL; 394 - } 395 - 396 - 397 383 pstates[cnt].plane = plane; 398 384 pstates[cnt].state = to_mdp5_plane_state(pstate); 399 385 ··· 393 399 /* assign a stage based on sorted zpos property */ 394 400 sort(pstates, cnt, sizeof(pstates[0]), pstate_cmp, NULL); 395 401 402 + /* if the bottom-most layer is not fullscreen, we need to use 403 + * it for solid-color: 404 + */ 405 + if ((cnt > 0) && !is_fullscreen(state, &pstates[0].state->base)) 406 + base++; 407 + 408 + /* verify that there are not too many planes attached to crtc 409 + * and that we don't have conflicting mixer stages: 410 + */ 411 + hw_cfg = mdp5_cfg_get_hw_config(mdp5_kms->cfg); 412 + 413 + if ((cnt + base) >= hw_cfg->lm.nb_stages) { 414 + dev_err(dev->dev, "too many planes!\n"); 415 + return -EINVAL; 416 + } 417 + 396 418 for (i = 0; i < cnt; i++) { 397 - pstates[i].state->stage = STAGE_BASE + i; 419 + pstates[i].state->stage = STAGE_BASE + i + base; 398 420 DBG("%s: assign pipe %s on stage=%d", mdp5_crtc->name, 399 421 pipe2name(mdp5_plane_pipe(pstates[i].plane)), 400 422 pstates[i].state->stage);
+3 -6
drivers/gpu/drm/msm/mdp/mdp5/mdp5_plane.c
··· 307 307 format = to_mdp_format(msm_framebuffer_format(state->fb)); 308 308 if (MDP_FORMAT_IS_YUV(format) && 309 309 !pipe_supports_yuv(mdp5_plane->caps)) { 310 - dev_err(plane->dev->dev, 311 - "Pipe doesn't support YUV\n"); 310 + DBG("Pipe doesn't support YUV\n"); 312 311 313 312 return -EINVAL; 314 313 } ··· 315 316 if (!(mdp5_plane->caps & MDP_PIPE_CAP_SCALE) && 316 317 (((state->src_w >> 16) != state->crtc_w) || 317 318 ((state->src_h >> 16) != state->crtc_h))) { 318 - dev_err(plane->dev->dev, 319 - "Pipe doesn't support scaling (%dx%d -> %dx%d)\n", 319 + DBG("Pipe doesn't support scaling (%dx%d -> %dx%d)\n", 320 320 state->src_w >> 16, state->src_h >> 16, 321 321 state->crtc_w, state->crtc_h); 322 322 ··· 331 333 332 334 if ((vflip && !(mdp5_plane->caps & MDP_PIPE_CAP_VFLIP)) || 333 335 (hflip && !(mdp5_plane->caps & MDP_PIPE_CAP_HFLIP))) { 334 - dev_err(plane->dev->dev, 335 - "Pipe doesn't support flip\n"); 336 + DBG("Pipe doesn't support flip\n"); 336 337 337 338 return -EINVAL; 338 339 }
+1 -1
drivers/gpu/drm/msm/msm_drv.c
··· 234 234 flush_workqueue(priv->atomic_wq); 235 235 destroy_workqueue(priv->atomic_wq); 236 236 237 - if (kms) 237 + if (kms && kms->funcs) 238 238 kms->funcs->destroy(kms); 239 239 240 240 if (gpu) {
+5 -2
drivers/gpu/drm/msm/msm_gem_shrinker.c
··· 163 163 void msm_gem_shrinker_cleanup(struct drm_device *dev) 164 164 { 165 165 struct msm_drm_private *priv = dev->dev_private; 166 - WARN_ON(unregister_vmap_purge_notifier(&priv->vmap_notifier)); 167 - unregister_shrinker(&priv->shrinker); 166 + 167 + if (priv->shrinker.nr_deferred) { 168 + WARN_ON(unregister_vmap_purge_notifier(&priv->vmap_notifier)); 169 + unregister_shrinker(&priv->shrinker); 170 + } 168 171 }
+1 -1
drivers/gpu/drm/qxl/qxl_cmd.c
··· 578 578 return 0; 579 579 } 580 580 581 - int qxl_update_surface(struct qxl_device *qdev, struct qxl_bo *surf) 581 + static int qxl_update_surface(struct qxl_device *qdev, struct qxl_bo *surf) 582 582 { 583 583 struct qxl_rect rect; 584 584 int ret;
+57 -12
drivers/gpu/drm/qxl/qxl_display.c
··· 36 36 return head->width && head->height; 37 37 } 38 38 39 - void qxl_alloc_client_monitors_config(struct qxl_device *qdev, unsigned count) 39 + static void qxl_alloc_client_monitors_config(struct qxl_device *qdev, unsigned count) 40 40 { 41 41 if (qdev->client_monitors_config && 42 42 count > qdev->client_monitors_config->count) { ··· 57 57 qdev->client_monitors_config->count = count; 58 58 } 59 59 60 + enum { 61 + MONITORS_CONFIG_MODIFIED, 62 + MONITORS_CONFIG_UNCHANGED, 63 + MONITORS_CONFIG_BAD_CRC, 64 + }; 65 + 60 66 static int qxl_display_copy_rom_client_monitors_config(struct qxl_device *qdev) 61 67 { 62 68 int i; 63 69 int num_monitors; 64 70 uint32_t crc; 71 + int status = MONITORS_CONFIG_UNCHANGED; 65 72 66 73 num_monitors = qdev->rom->client_monitors_config.count; 67 74 crc = crc32(0, (const uint8_t *)&qdev->rom->client_monitors_config, ··· 77 70 qxl_io_log(qdev, "crc mismatch: have %X (%zd) != %X\n", crc, 78 71 sizeof(qdev->rom->client_monitors_config), 79 72 qdev->rom->client_monitors_config_crc); 80 - return 1; 73 + return MONITORS_CONFIG_BAD_CRC; 81 74 } 82 75 if (num_monitors > qdev->monitors_config->max_allowed) { 83 76 DRM_DEBUG_KMS("client monitors list will be truncated: %d < %d\n", ··· 85 78 num_monitors = qdev->monitors_config->max_allowed; 86 79 } else { 87 80 num_monitors = qdev->rom->client_monitors_config.count; 81 + } 82 + if (qdev->client_monitors_config 83 + && (num_monitors != qdev->client_monitors_config->count)) { 84 + status = MONITORS_CONFIG_MODIFIED; 88 85 } 89 86 qxl_alloc_client_monitors_config(qdev, num_monitors); 90 87 /* we copy max from the client but it isn't used */ ··· 99 88 &qdev->rom->client_monitors_config.heads[i]; 100 89 struct qxl_head *client_head = 101 90 &qdev->client_monitors_config->heads[i]; 102 - client_head->x = c_rect->left; 103 - client_head->y = c_rect->top; 104 - client_head->width = c_rect->right - c_rect->left; 105 - client_head->height = c_rect->bottom - c_rect->top; 106 - client_head->surface_id = 0; 107 - client_head->id = i; 108 - client_head->flags = 0; 91 + if (client_head->x != c_rect->left) { 92 + client_head->x = c_rect->left; 93 + status = MONITORS_CONFIG_MODIFIED; 94 + } 95 + if (client_head->y != c_rect->top) { 96 + client_head->y = c_rect->top; 97 + status = MONITORS_CONFIG_MODIFIED; 98 + } 99 + if (client_head->width != c_rect->right - c_rect->left) { 100 + client_head->width = c_rect->right - c_rect->left; 101 + status = MONITORS_CONFIG_MODIFIED; 102 + } 103 + if (client_head->height != c_rect->bottom - c_rect->top) { 104 + client_head->height = c_rect->bottom - c_rect->top; 105 + status = MONITORS_CONFIG_MODIFIED; 106 + } 107 + if (client_head->surface_id != 0) { 108 + client_head->surface_id = 0; 109 + status = MONITORS_CONFIG_MODIFIED; 110 + } 111 + if (client_head->id != i) { 112 + client_head->id = i; 113 + status = MONITORS_CONFIG_MODIFIED; 114 + } 115 + if (client_head->flags != 0) { 116 + client_head->flags = 0; 117 + status = MONITORS_CONFIG_MODIFIED; 118 + } 109 119 DRM_DEBUG_KMS("read %dx%d+%d+%d\n", client_head->width, client_head->height, 110 120 client_head->x, client_head->y); 111 121 } 112 - return 0; 122 + 123 + return status; 113 124 } 114 125 115 126 static void qxl_update_offset_props(struct qxl_device *qdev) ··· 157 124 { 158 125 159 126 struct drm_device *dev = qdev->ddev; 160 - while (qxl_display_copy_rom_client_monitors_config(qdev)) { 127 + int status; 128 + 129 + status = qxl_display_copy_rom_client_monitors_config(qdev); 130 + while (status == MONITORS_CONFIG_BAD_CRC) { 161 131 qxl_io_log(qdev, "failed crc check for client_monitors_config," 162 132 " retrying\n"); 133 + status = qxl_display_copy_rom_client_monitors_config(qdev); 134 + } 135 + if (status == MONITORS_CONFIG_UNCHANGED) { 136 + qxl_io_log(qdev, "config unchanged\n"); 137 + DRM_DEBUG("ignoring unchanged client monitors config"); 138 + return; 163 139 } 164 140 165 141 drm_modeset_lock_all(dev); ··· 199 157 mode = drm_cvt_mode(dev, head->width, head->height, 60, false, false, 200 158 false); 201 159 mode->type |= DRM_MODE_TYPE_PREFERRED; 160 + mode->hdisplay = head->width; 161 + mode->vdisplay = head->height; 162 + drm_mode_set_name(mode); 202 163 *pwidth = head->width; 203 164 *pheight = head->height; 204 165 drm_mode_probed_add(connector, mode); ··· 652 607 return true; 653 608 } 654 609 655 - void 610 + static void 656 611 qxl_send_monitors_config(struct qxl_device *qdev) 657 612 { 658 613 int i;
+1 -7
drivers/gpu/drm/qxl/qxl_drv.h
··· 395 395 struct drm_gem_object *obj, 396 396 const struct drm_framebuffer_funcs *funcs); 397 397 void qxl_display_read_client_monitors_config(struct qxl_device *qdev); 398 - void qxl_send_monitors_config(struct qxl_device *qdev); 399 398 int qxl_create_monitors_object(struct qxl_device *qdev); 400 399 int qxl_destroy_monitors_object(struct qxl_device *qdev); 401 400 402 - /* used by qxl_debugfs only */ 403 - void qxl_crtc_set_from_monitors_config(struct qxl_device *qdev); 404 - void qxl_alloc_client_monitors_config(struct qxl_device *qdev, unsigned count); 405 - 406 401 /* qxl_gem.c */ 407 - int qxl_gem_init(struct qxl_device *qdev); 402 + void qxl_gem_init(struct qxl_device *qdev); 408 403 void qxl_gem_fini(struct qxl_device *qdev); 409 404 int qxl_gem_object_create(struct qxl_device *qdev, int size, 410 405 int alignment, int initial_domain, ··· 569 574 struct qxl_drv_surface * 570 575 qxl_surface_lookup(struct drm_device *dev, int surface_id); 571 576 void qxl_surface_evict(struct qxl_device *qdev, struct qxl_bo *surf, bool freeing); 572 - int qxl_update_surface(struct qxl_device *qdev, struct qxl_bo *surf); 573 577 574 578 #endif
+1 -1
drivers/gpu/drm/qxl/qxl_fb.c
··· 191 191 /* 192 192 * we are using a shadow draw buffer, at qdev->surface0_shadow 193 193 */ 194 - qxl_io_log(qdev, "dirty x[%d, %d], y[%d, %d]", clips->x1, clips->x2, 194 + qxl_io_log(qdev, "dirty x[%d, %d], y[%d, %d]\n", clips->x1, clips->x2, 195 195 clips->y1, clips->y2); 196 196 image->dx = clips->x1; 197 197 image->dy = clips->y1;
+1 -2
drivers/gpu/drm/qxl/qxl_gem.c
··· 111 111 { 112 112 } 113 113 114 - int qxl_gem_init(struct qxl_device *qdev) 114 + void qxl_gem_init(struct qxl_device *qdev) 115 115 { 116 116 INIT_LIST_HEAD(&qdev->gem.objects); 117 - return 0; 118 117 } 119 118 120 119 void qxl_gem_fini(struct qxl_device *qdev)
+2 -1
drivers/gpu/drm/qxl/qxl_kms.c
··· 131 131 mutex_init(&qdev->update_area_mutex); 132 132 mutex_init(&qdev->release_mutex); 133 133 mutex_init(&qdev->surf_evict_mutex); 134 - INIT_LIST_HEAD(&qdev->gem.objects); 134 + qxl_gem_init(qdev); 135 135 136 136 qdev->rom_base = pci_resource_start(pdev, 2); 137 137 qdev->rom_size = pci_resource_len(pdev, 2); ··· 273 273 qxl_ring_free(qdev->command_ring); 274 274 qxl_ring_free(qdev->cursor_ring); 275 275 qxl_ring_free(qdev->release_ring); 276 + qxl_gem_fini(qdev); 276 277 qxl_bo_fini(qdev); 277 278 io_mapping_free(qdev->surface_mapping); 278 279 io_mapping_free(qdev->vram_mapping);
+1 -1
drivers/gpu/drm/radeon/radeon_connectors.c
··· 931 931 { 932 932 struct radeon_connector *radeon_connector = to_radeon_connector(connector); 933 933 934 - if (radeon_connector->ddc_bus->has_aux) { 934 + if (radeon_connector->ddc_bus && radeon_connector->ddc_bus->has_aux) { 935 935 drm_dp_aux_unregister(&radeon_connector->ddc_bus->aux); 936 936 radeon_connector->ddc_bus->has_aux = false; 937 937 }
+13
drivers/gpu/drm/radeon/radeon_device.c
··· 104 104 "LAST", 105 105 }; 106 106 107 + #if defined(CONFIG_VGA_SWITCHEROO) 108 + bool radeon_has_atpx_dgpu_power_cntl(void); 109 + bool radeon_is_atpx_hybrid(void); 110 + #else 111 + static inline bool radeon_has_atpx_dgpu_power_cntl(void) { return false; } 112 + static inline bool radeon_is_atpx_hybrid(void) { return false; } 113 + #endif 114 + 107 115 #define RADEON_PX_QUIRK_DISABLE_PX (1 << 0) 108 116 #define RADEON_PX_QUIRK_LONG_WAKEUP (1 << 1) 109 117 ··· 167 159 } 168 160 169 161 if (rdev->px_quirk_flags & RADEON_PX_QUIRK_DISABLE_PX) 162 + rdev->flags &= ~RADEON_IS_PX; 163 + 164 + /* disable PX is the system doesn't support dGPU power control or hybrid gfx */ 165 + if (!radeon_is_atpx_hybrid() && 166 + !radeon_has_atpx_dgpu_power_cntl()) 170 167 rdev->flags &= ~RADEON_IS_PX; 171 168 } 172 169
+11 -5
drivers/gpu/drm/udl/udl_main.c
··· 98 98 static int udl_select_std_channel(struct udl_device *udl) 99 99 { 100 100 int ret; 101 - u8 set_def_chn[] = {0x57, 0xCD, 0xDC, 0xA7, 102 - 0x1C, 0x88, 0x5E, 0x15, 103 - 0x60, 0xFE, 0xC6, 0x97, 104 - 0x16, 0x3D, 0x47, 0xF2}; 101 + static const u8 set_def_chn[] = {0x57, 0xCD, 0xDC, 0xA7, 102 + 0x1C, 0x88, 0x5E, 0x15, 103 + 0x60, 0xFE, 0xC6, 0x97, 104 + 0x16, 0x3D, 0x47, 0xF2}; 105 + void *sendbuf; 106 + 107 + sendbuf = kmemdup(set_def_chn, sizeof(set_def_chn), GFP_KERNEL); 108 + if (!sendbuf) 109 + return -ENOMEM; 105 110 106 111 ret = usb_control_msg(udl->udev, 107 112 usb_sndctrlpipe(udl->udev, 0), 108 113 NR_USB_REQUEST_CHANNEL, 109 114 (USB_DIR_OUT | USB_TYPE_VENDOR), 0, 0, 110 - set_def_chn, sizeof(set_def_chn), 115 + sendbuf, sizeof(set_def_chn), 111 116 USB_CTRL_SET_TIMEOUT); 117 + kfree(sendbuf); 112 118 return ret < 0 ? ret : 0; 113 119 } 114 120
+11 -12
drivers/gpu/drm/virtio/virtgpu_drm_bus.c
··· 28 28 29 29 #include "virtgpu_drv.h" 30 30 31 - int drm_virtio_set_busid(struct drm_device *dev, struct drm_master *master) 32 - { 33 - struct pci_dev *pdev = dev->pdev; 34 - 35 - if (pdev) { 36 - return drm_pci_set_busid(dev, master); 37 - } 38 - return 0; 39 - } 40 - 41 31 static void virtio_pci_kick_out_firmware_fb(struct pci_dev *pci_dev) 42 32 { 43 33 struct apertures_struct *ap; ··· 61 71 62 72 if (strcmp(vdev->dev.parent->bus->name, "pci") == 0) { 63 73 struct pci_dev *pdev = to_pci_dev(vdev->dev.parent); 74 + const char *pname = dev_name(&pdev->dev); 64 75 bool vga = (pdev->class >> 8) == PCI_CLASS_DISPLAY_VGA; 76 + char unique[20]; 65 77 66 - DRM_INFO("pci: %s detected\n", 67 - vga ? "virtio-vga" : "virtio-gpu-pci"); 78 + DRM_INFO("pci: %s detected at %s\n", 79 + vga ? "virtio-vga" : "virtio-gpu-pci", 80 + pname); 68 81 dev->pdev = pdev; 69 82 if (vga) 70 83 virtio_pci_kick_out_firmware_fb(pdev); 84 + 85 + snprintf(unique, sizeof(unique), "pci:%s", pname); 86 + ret = drm_dev_set_unique(dev, unique); 87 + if (ret) 88 + goto err_free; 89 + 71 90 } 72 91 73 92 ret = drm_dev_register(dev, 0);
-1
drivers/gpu/drm/virtio/virtgpu_drv.c
··· 115 115 116 116 static struct drm_driver driver = { 117 117 .driver_features = DRIVER_MODESET | DRIVER_GEM | DRIVER_PRIME | DRIVER_RENDER | DRIVER_ATOMIC, 118 - .set_busid = drm_virtio_set_busid, 119 118 .load = virtio_gpu_driver_load, 120 119 .unload = virtio_gpu_driver_unload, 121 120 .open = virtio_gpu_driver_open,
-1
drivers/gpu/drm/virtio/virtgpu_drv.h
··· 49 49 #define DRIVER_PATCHLEVEL 1 50 50 51 51 /* virtgpu_drm_bus.c */ 52 - int drm_virtio_set_busid(struct drm_device *dev, struct drm_master *master); 53 52 int drm_virtio_init(struct drm_driver *driver, struct virtio_device *vdev); 54 53 55 54 struct virtio_gpu_object {
+1 -1
drivers/gpu/drm/virtio/virtgpu_vq.c
··· 75 75 int virtio_gpu_alloc_vbufs(struct virtio_gpu_device *vgdev) 76 76 { 77 77 struct virtio_gpu_vbuffer *vbuf; 78 - int i, size, count = 0; 78 + int i, size, count = 16; 79 79 void *ptr; 80 80 81 81 INIT_LIST_HEAD(&vgdev->free_vbufs);
+1
drivers/hid/hid-ids.h
··· 179 179 #define USB_DEVICE_ID_ATEN_4PORTKVM 0x2205 180 180 #define USB_DEVICE_ID_ATEN_4PORTKVMC 0x2208 181 181 #define USB_DEVICE_ID_ATEN_CS682 0x2213 182 + #define USB_DEVICE_ID_ATEN_CS692 0x8021 182 183 183 184 #define USB_VENDOR_ID_ATMEL 0x03eb 184 185 #define USB_DEVICE_ID_ATMEL_MULTITOUCH 0x211c
+3 -3
drivers/hid/hid-sensor-custom.c
··· 292 292 bool input = false; 293 293 int value = 0; 294 294 295 - if (sscanf(attr->attr.name, "feature-%d-%x-%s", &index, &usage, 295 + if (sscanf(attr->attr.name, "feature-%x-%x-%s", &index, &usage, 296 296 name) == 3) { 297 297 feature = true; 298 298 field_index = index + sensor_inst->input_field_count; 299 - } else if (sscanf(attr->attr.name, "input-%d-%x-%s", &index, &usage, 299 + } else if (sscanf(attr->attr.name, "input-%x-%x-%s", &index, &usage, 300 300 name) == 3) { 301 301 input = true; 302 302 field_index = index; ··· 398 398 char name[HID_CUSTOM_NAME_LENGTH]; 399 399 int value; 400 400 401 - if (sscanf(attr->attr.name, "feature-%d-%x-%s", &index, &usage, 401 + if (sscanf(attr->attr.name, "feature-%x-%x-%s", &index, &usage, 402 402 name) == 3) { 403 403 field_index = index + sensor_inst->input_field_count; 404 404 } else
+14 -1
drivers/hid/hid-sensor-hub.c
··· 251 251 struct sensor_hub_data *data = hid_get_drvdata(hsdev->hdev); 252 252 int report_size; 253 253 int ret = 0; 254 + u8 *val_ptr; 255 + int buffer_index = 0; 256 + int i; 254 257 255 258 mutex_lock(&data->mutex); 256 259 report = sensor_hub_report(report_id, hsdev->hdev, HID_FEATURE_REPORT); ··· 274 271 goto done_proc; 275 272 } 276 273 ret = min(report_size, buffer_size); 277 - memcpy(buffer, report->field[field_index]->value, ret); 274 + 275 + val_ptr = (u8 *)report->field[field_index]->value; 276 + for (i = 0; i < report->field[field_index]->report_count; ++i) { 277 + if (buffer_index >= ret) 278 + break; 279 + 280 + memcpy(&((u8 *)buffer)[buffer_index], val_ptr, 281 + report->field[field_index]->report_size / 8); 282 + val_ptr += sizeof(__s32); 283 + buffer_index += (report->field[field_index]->report_size / 8); 284 + } 278 285 279 286 done_proc: 280 287 mutex_unlock(&data->mutex);
+73 -29
drivers/hid/intel-ish-hid/ipc/ipc.c
··· 638 638 } 639 639 640 640 /** 641 + * ish_disable_dma() - disable dma communication between host and ISHFW 642 + * @dev: ishtp device pointer 643 + * 644 + * Clear the dma enable bit and wait for dma inactive. 645 + * 646 + * Return: 0 for success else error code. 647 + */ 648 + static int ish_disable_dma(struct ishtp_device *dev) 649 + { 650 + unsigned int dma_delay; 651 + 652 + /* Clear the dma enable bit */ 653 + ish_reg_write(dev, IPC_REG_ISH_RMP2, 0); 654 + 655 + /* wait for dma inactive */ 656 + for (dma_delay = 0; dma_delay < MAX_DMA_DELAY && 657 + _ish_read_fw_sts_reg(dev) & (IPC_ISH_IN_DMA); 658 + dma_delay += 5) 659 + mdelay(5); 660 + 661 + if (dma_delay >= MAX_DMA_DELAY) { 662 + dev_err(dev->devc, 663 + "Wait for DMA inactive timeout\n"); 664 + return -EBUSY; 665 + } 666 + 667 + return 0; 668 + } 669 + 670 + /** 671 + * ish_wakeup() - wakeup ishfw from waiting-for-host state 672 + * @dev: ishtp device pointer 673 + * 674 + * Set the dma enable bit and send a void message to FW, 675 + * it wil wakeup FW from waiting-for-host state. 676 + */ 677 + static void ish_wakeup(struct ishtp_device *dev) 678 + { 679 + /* Set dma enable bit */ 680 + ish_reg_write(dev, IPC_REG_ISH_RMP2, IPC_RMP2_DMA_ENABLED); 681 + 682 + /* 683 + * Send 0 IPC message so that ISH FW wakes up if it was already 684 + * asleep. 685 + */ 686 + ish_reg_write(dev, IPC_REG_HOST2ISH_DRBL, IPC_DRBL_BUSY_BIT); 687 + 688 + /* Flush writes to doorbell and REMAP2 */ 689 + ish_reg_read(dev, IPC_REG_ISH_HOST_FWSTS); 690 + } 691 + 692 + /** 641 693 * _ish_hw_reset() - HW reset 642 694 * @dev: ishtp device pointer 643 695 * ··· 701 649 { 702 650 struct pci_dev *pdev = dev->pdev; 703 651 int rv; 704 - unsigned int dma_delay; 705 652 uint16_t csr; 706 653 707 654 if (!pdev) ··· 715 664 return -EINVAL; 716 665 } 717 666 718 - /* Now trigger reset to FW */ 719 - ish_reg_write(dev, IPC_REG_ISH_RMP2, 0); 720 - 721 - for (dma_delay = 0; dma_delay < MAX_DMA_DELAY && 722 - _ish_read_fw_sts_reg(dev) & (IPC_ISH_IN_DMA); 723 - dma_delay += 5) 724 - mdelay(5); 725 - 726 - if (dma_delay >= MAX_DMA_DELAY) { 667 + /* Disable dma communication between FW and host */ 668 + if (ish_disable_dma(dev)) { 727 669 dev_err(&pdev->dev, 728 670 "Can't reset - stuck with DMA in-progress\n"); 729 671 return -EBUSY; ··· 734 690 csr |= PCI_D0; 735 691 pci_write_config_word(pdev, pdev->pm_cap + PCI_PM_CTRL, csr); 736 692 737 - ish_reg_write(dev, IPC_REG_ISH_RMP2, IPC_RMP2_DMA_ENABLED); 738 - 739 - /* 740 - * Send 0 IPC message so that ISH FW wakes up if it was already 741 - * asleep 742 - */ 743 - ish_reg_write(dev, IPC_REG_HOST2ISH_DRBL, IPC_DRBL_BUSY_BIT); 744 - 745 - /* Flush writes to doorbell and REMAP2 */ 746 - ish_reg_read(dev, IPC_REG_ISH_HOST_FWSTS); 693 + /* Now we can enable ISH DMA operation and wakeup ISHFW */ 694 + ish_wakeup(dev); 747 695 748 696 return 0; 749 697 } ··· 794 758 int ish_hw_start(struct ishtp_device *dev) 795 759 { 796 760 ish_set_host_rdy(dev); 797 - /* After that we can enable ISH DMA operation */ 798 - ish_reg_write(dev, IPC_REG_ISH_RMP2, IPC_RMP2_DMA_ENABLED); 799 761 800 - /* 801 - * Send 0 IPC message so that ISH FW wakes up if it was already 802 - * asleep 803 - */ 804 - ish_reg_write(dev, IPC_REG_HOST2ISH_DRBL, IPC_DRBL_BUSY_BIT); 805 - /* Flush write to doorbell */ 806 - ish_reg_read(dev, IPC_REG_ISH_HOST_FWSTS); 762 + /* After that we can enable ISH DMA operation and wakeup ISHFW */ 763 + ish_wakeup(dev); 807 764 808 765 set_host_ready(dev); 809 766 ··· 905 876 */ 906 877 void ish_device_disable(struct ishtp_device *dev) 907 878 { 879 + struct pci_dev *pdev = dev->pdev; 880 + 881 + if (!pdev) 882 + return; 883 + 884 + /* Disable dma communication between FW and host */ 885 + if (ish_disable_dma(dev)) { 886 + dev_err(&pdev->dev, 887 + "Can't reset - stuck with DMA in-progress\n"); 888 + return; 889 + } 890 + 891 + /* Put ISH to D3hot state for power saving */ 892 + pci_set_power_state(pdev, PCI_D3hot); 893 + 908 894 dev->dev_state = ISHTP_DEV_DISABLED; 909 895 ish_clr_host_rdy(dev); 910 896 }
+3 -3
drivers/hid/intel-ish-hid/ipc/pci-ish.c
··· 146 146 pdev->dev_flags |= PCI_DEV_FLAGS_NO_D3; 147 147 148 148 /* request and enable interrupt */ 149 - ret = request_irq(pdev->irq, ish_irq_handler, IRQF_NO_SUSPEND, 149 + ret = request_irq(pdev->irq, ish_irq_handler, IRQF_SHARED, 150 150 KBUILD_MODNAME, dev); 151 151 if (ret) { 152 152 dev_err(&pdev->dev, "ISH: request IRQ failure (%d)\n", ··· 202 202 kfree(ishtp_dev); 203 203 } 204 204 205 + #ifdef CONFIG_PM 205 206 static struct device *ish_resume_device; 206 207 207 208 /** ··· 294 293 return 0; 295 294 } 296 295 297 - #ifdef CONFIG_PM 298 296 static const struct dev_pm_ops ish_pm_ops = { 299 297 .suspend = ish_suspend, 300 298 .resume = ish_resume, ··· 301 301 #define ISHTP_ISH_PM_OPS (&ish_pm_ops) 302 302 #else 303 303 #define ISHTP_ISH_PM_OPS NULL 304 - #endif 304 + #endif /* CONFIG_PM */ 305 305 306 306 static struct pci_driver ish_driver = { 307 307 .name = KBUILD_MODNAME,
+1
drivers/hid/usbhid/hid-quirks.c
··· 63 63 { USB_VENDOR_ID_ATEN, USB_DEVICE_ID_ATEN_4PORTKVM, HID_QUIRK_NOGET }, 64 64 { USB_VENDOR_ID_ATEN, USB_DEVICE_ID_ATEN_4PORTKVMC, HID_QUIRK_NOGET }, 65 65 { USB_VENDOR_ID_ATEN, USB_DEVICE_ID_ATEN_CS682, HID_QUIRK_NOGET }, 66 + { USB_VENDOR_ID_ATEN, USB_DEVICE_ID_ATEN_CS692, HID_QUIRK_NOGET }, 66 67 { USB_VENDOR_ID_CH, USB_DEVICE_ID_CH_FIGHTERSTICK, HID_QUIRK_NOGET }, 67 68 { USB_VENDOR_ID_CH, USB_DEVICE_ID_CH_COMBATSTICK, HID_QUIRK_NOGET }, 68 69 { USB_VENDOR_ID_CH, USB_DEVICE_ID_CH_FLIGHT_SIM_ECLIPSE_YOKE, HID_QUIRK_NOGET },
+1 -1
drivers/hv/vmbus_drv.c
··· 961 961 { 962 962 int ret = 0; 963 963 964 - dev_set_name(&child_device_obj->device, "vmbus-%pUl", 964 + dev_set_name(&child_device_obj->device, "%pUl", 965 965 child_device_obj->channel->offermsg.offer.if_instance.b); 966 966 967 967 child_device_obj->device.bus = &hv_bus;
+4 -2
drivers/hwmon/hwmon.c
··· 536 536 537 537 hwdev->groups = devm_kcalloc(dev, ngroups, sizeof(*groups), 538 538 GFP_KERNEL); 539 - if (!hwdev->groups) 540 - return ERR_PTR(-ENOMEM); 539 + if (!hwdev->groups) { 540 + err = -ENOMEM; 541 + goto free_hwmon; 542 + } 541 543 542 544 attrs = __hwmon_create_attrs(dev, drvdata, chip); 543 545 if (IS_ERR(attrs)) {
+8 -4
drivers/iio/accel/st_accel_core.c
··· 743 743 744 744 return IIO_VAL_INT; 745 745 case IIO_CHAN_INFO_SCALE: 746 - *val = 0; 747 - *val2 = adata->current_fullscale->gain; 746 + *val = adata->current_fullscale->gain / 1000000; 747 + *val2 = adata->current_fullscale->gain % 1000000; 748 748 return IIO_VAL_INT_PLUS_MICRO; 749 749 case IIO_CHAN_INFO_SAMP_FREQ: 750 750 *val = adata->odr; ··· 763 763 int err; 764 764 765 765 switch (mask) { 766 - case IIO_CHAN_INFO_SCALE: 767 - err = st_sensors_set_fullscale_by_gain(indio_dev, val2); 766 + case IIO_CHAN_INFO_SCALE: { 767 + int gain; 768 + 769 + gain = val * 1000000 + val2; 770 + err = st_sensors_set_fullscale_by_gain(indio_dev, gain); 768 771 break; 772 + } 769 773 case IIO_CHAN_INFO_SAMP_FREQ: 770 774 if (val2) 771 775 return -EINVAL;
+28 -28
drivers/iio/common/hid-sensors/hid-sensor-attributes.c
··· 30 30 u32 usage_id; 31 31 int unit; /* 0 for default others from HID sensor spec */ 32 32 int scale_val0; /* scale, whole number */ 33 - int scale_val1; /* scale, fraction in micros */ 33 + int scale_val1; /* scale, fraction in nanos */ 34 34 } unit_conversion[] = { 35 - {HID_USAGE_SENSOR_ACCEL_3D, 0, 9, 806650}, 35 + {HID_USAGE_SENSOR_ACCEL_3D, 0, 9, 806650000}, 36 36 {HID_USAGE_SENSOR_ACCEL_3D, 37 37 HID_USAGE_SENSOR_UNITS_METERS_PER_SEC_SQRD, 1, 0}, 38 38 {HID_USAGE_SENSOR_ACCEL_3D, 39 - HID_USAGE_SENSOR_UNITS_G, 9, 806650}, 39 + HID_USAGE_SENSOR_UNITS_G, 9, 806650000}, 40 40 41 - {HID_USAGE_SENSOR_GYRO_3D, 0, 0, 17453}, 41 + {HID_USAGE_SENSOR_GYRO_3D, 0, 0, 17453293}, 42 42 {HID_USAGE_SENSOR_GYRO_3D, 43 43 HID_USAGE_SENSOR_UNITS_RADIANS_PER_SECOND, 1, 0}, 44 44 {HID_USAGE_SENSOR_GYRO_3D, 45 - HID_USAGE_SENSOR_UNITS_DEGREES_PER_SECOND, 0, 17453}, 45 + HID_USAGE_SENSOR_UNITS_DEGREES_PER_SECOND, 0, 17453293}, 46 46 47 - {HID_USAGE_SENSOR_COMPASS_3D, 0, 0, 1000}, 47 + {HID_USAGE_SENSOR_COMPASS_3D, 0, 0, 1000000}, 48 48 {HID_USAGE_SENSOR_COMPASS_3D, HID_USAGE_SENSOR_UNITS_GAUSS, 1, 0}, 49 49 50 - {HID_USAGE_SENSOR_INCLINOMETER_3D, 0, 0, 17453}, 50 + {HID_USAGE_SENSOR_INCLINOMETER_3D, 0, 0, 17453293}, 51 51 {HID_USAGE_SENSOR_INCLINOMETER_3D, 52 - HID_USAGE_SENSOR_UNITS_DEGREES, 0, 17453}, 52 + HID_USAGE_SENSOR_UNITS_DEGREES, 0, 17453293}, 53 53 {HID_USAGE_SENSOR_INCLINOMETER_3D, 54 54 HID_USAGE_SENSOR_UNITS_RADIANS, 1, 0}, 55 55 ··· 57 57 {HID_USAGE_SENSOR_ALS, HID_USAGE_SENSOR_UNITS_LUX, 1, 0}, 58 58 59 59 {HID_USAGE_SENSOR_PRESSURE, 0, 100, 0}, 60 - {HID_USAGE_SENSOR_PRESSURE, HID_USAGE_SENSOR_UNITS_PASCAL, 0, 1000}, 60 + {HID_USAGE_SENSOR_PRESSURE, HID_USAGE_SENSOR_UNITS_PASCAL, 0, 1000000}, 61 61 }; 62 62 63 63 static int pow_10(unsigned power) ··· 266 266 /* 267 267 * This fuction applies the unit exponent to the scale. 268 268 * For example: 269 - * 9.806650 ->exp:2-> val0[980]val1[665000] 270 - * 9.000806 ->exp:2-> val0[900]val1[80600] 271 - * 0.174535 ->exp:2-> val0[17]val1[453500] 272 - * 1.001745 ->exp:0-> val0[1]val1[1745] 273 - * 1.001745 ->exp:2-> val0[100]val1[174500] 274 - * 1.001745 ->exp:4-> val0[10017]val1[450000] 275 - * 9.806650 ->exp:-2-> val0[0]val1[98066] 269 + * 9.806650000 ->exp:2-> val0[980]val1[665000000] 270 + * 9.000806000 ->exp:2-> val0[900]val1[80600000] 271 + * 0.174535293 ->exp:2-> val0[17]val1[453529300] 272 + * 1.001745329 ->exp:0-> val0[1]val1[1745329] 273 + * 1.001745329 ->exp:2-> val0[100]val1[174532900] 274 + * 1.001745329 ->exp:4-> val0[10017]val1[453290000] 275 + * 9.806650000 ->exp:-2-> val0[0]val1[98066500] 276 276 */ 277 - static void adjust_exponent_micro(int *val0, int *val1, int scale0, 277 + static void adjust_exponent_nano(int *val0, int *val1, int scale0, 278 278 int scale1, int exp) 279 279 { 280 280 int i; ··· 285 285 if (exp > 0) { 286 286 *val0 = scale0 * pow_10(exp); 287 287 res = 0; 288 - if (exp > 6) { 288 + if (exp > 9) { 289 289 *val1 = 0; 290 290 return; 291 291 } 292 292 for (i = 0; i < exp; ++i) { 293 - x = scale1 / pow_10(5 - i); 293 + x = scale1 / pow_10(8 - i); 294 294 res += (pow_10(exp - 1 - i) * x); 295 - scale1 = scale1 % pow_10(5 - i); 295 + scale1 = scale1 % pow_10(8 - i); 296 296 } 297 297 *val0 += res; 298 298 *val1 = scale1 * pow_10(exp); 299 299 } else if (exp < 0) { 300 300 exp = abs(exp); 301 - if (exp > 6) { 301 + if (exp > 9) { 302 302 *val0 = *val1 = 0; 303 303 return; 304 304 } 305 305 *val0 = scale0 / pow_10(exp); 306 306 rem = scale0 % pow_10(exp); 307 307 res = 0; 308 - for (i = 0; i < (6 - exp); ++i) { 309 - x = scale1 / pow_10(5 - i); 310 - res += (pow_10(5 - exp - i) * x); 311 - scale1 = scale1 % pow_10(5 - i); 308 + for (i = 0; i < (9 - exp); ++i) { 309 + x = scale1 / pow_10(8 - i); 310 + res += (pow_10(8 - exp - i) * x); 311 + scale1 = scale1 % pow_10(8 - i); 312 312 } 313 - *val1 = rem * pow_10(6 - exp) + res; 313 + *val1 = rem * pow_10(9 - exp) + res; 314 314 } else { 315 315 *val0 = scale0; 316 316 *val1 = scale1; ··· 332 332 unit_conversion[i].unit == attr_info->units) { 333 333 exp = hid_sensor_convert_exponent( 334 334 attr_info->unit_expo); 335 - adjust_exponent_micro(val0, val1, 335 + adjust_exponent_nano(val0, val1, 336 336 unit_conversion[i].scale_val0, 337 337 unit_conversion[i].scale_val1, exp); 338 338 break; 339 339 } 340 340 } 341 341 342 - return IIO_VAL_INT_PLUS_MICRO; 342 + return IIO_VAL_INT_PLUS_NANO; 343 343 } 344 344 EXPORT_SYMBOL(hid_sensor_format_scale); 345 345
+5 -3
drivers/iio/common/st_sensors/st_sensors_core.c
··· 612 612 ssize_t st_sensors_sysfs_scale_avail(struct device *dev, 613 613 struct device_attribute *attr, char *buf) 614 614 { 615 - int i, len = 0; 615 + int i, len = 0, q, r; 616 616 struct iio_dev *indio_dev = dev_get_drvdata(dev); 617 617 struct st_sensor_data *sdata = iio_priv(indio_dev); 618 618 ··· 621 621 if (sdata->sensor_settings->fs.fs_avl[i].num == 0) 622 622 break; 623 623 624 - len += scnprintf(buf + len, PAGE_SIZE - len, "0.%06u ", 625 - sdata->sensor_settings->fs.fs_avl[i].gain); 624 + q = sdata->sensor_settings->fs.fs_avl[i].gain / 1000000; 625 + r = sdata->sensor_settings->fs.fs_avl[i].gain % 1000000; 626 + 627 + len += scnprintf(buf + len, PAGE_SIZE - len, "%u.%06u ", q, r); 626 628 } 627 629 mutex_unlock(&indio_dev->mlock); 628 630 buf[len - 1] = '\n';
+1
drivers/iio/orientation/hid-sensor-rotation.c
··· 335 335 .id_table = hid_dev_rot_ids, 336 336 .driver = { 337 337 .name = KBUILD_MODNAME, 338 + .pm = &hid_sensor_pm_ops, 338 339 }, 339 340 .probe = hid_dev_rot_probe, 340 341 .remove = hid_dev_rot_remove,
+2
drivers/iio/temperature/maxim_thermocouple.c
··· 136 136 ret = spi_read(data->spi, (void *)&buf32, storage_bytes); 137 137 *val = be32_to_cpu(buf32); 138 138 break; 139 + default: 140 + ret = -EINVAL; 139 141 } 140 142 141 143 if (ret)
+28 -26
drivers/infiniband/core/cma.c
··· 1094 1094 } 1095 1095 } 1096 1096 1097 - static void cma_save_ip4_info(struct sockaddr *src_addr, 1098 - struct sockaddr *dst_addr, 1097 + static void cma_save_ip4_info(struct sockaddr_in *src_addr, 1098 + struct sockaddr_in *dst_addr, 1099 1099 struct cma_hdr *hdr, 1100 1100 __be16 local_port) 1101 1101 { 1102 - struct sockaddr_in *ip4; 1103 - 1104 1102 if (src_addr) { 1105 - ip4 = (struct sockaddr_in *)src_addr; 1106 - ip4->sin_family = AF_INET; 1107 - ip4->sin_addr.s_addr = hdr->dst_addr.ip4.addr; 1108 - ip4->sin_port = local_port; 1103 + *src_addr = (struct sockaddr_in) { 1104 + .sin_family = AF_INET, 1105 + .sin_addr.s_addr = hdr->dst_addr.ip4.addr, 1106 + .sin_port = local_port, 1107 + }; 1109 1108 } 1110 1109 1111 1110 if (dst_addr) { 1112 - ip4 = (struct sockaddr_in *)dst_addr; 1113 - ip4->sin_family = AF_INET; 1114 - ip4->sin_addr.s_addr = hdr->src_addr.ip4.addr; 1115 - ip4->sin_port = hdr->port; 1111 + *dst_addr = (struct sockaddr_in) { 1112 + .sin_family = AF_INET, 1113 + .sin_addr.s_addr = hdr->src_addr.ip4.addr, 1114 + .sin_port = hdr->port, 1115 + }; 1116 1116 } 1117 1117 } 1118 1118 1119 - static void cma_save_ip6_info(struct sockaddr *src_addr, 1120 - struct sockaddr *dst_addr, 1119 + static void cma_save_ip6_info(struct sockaddr_in6 *src_addr, 1120 + struct sockaddr_in6 *dst_addr, 1121 1121 struct cma_hdr *hdr, 1122 1122 __be16 local_port) 1123 1123 { 1124 - struct sockaddr_in6 *ip6; 1125 - 1126 1124 if (src_addr) { 1127 - ip6 = (struct sockaddr_in6 *)src_addr; 1128 - ip6->sin6_family = AF_INET6; 1129 - ip6->sin6_addr = hdr->dst_addr.ip6; 1130 - ip6->sin6_port = local_port; 1125 + *src_addr = (struct sockaddr_in6) { 1126 + .sin6_family = AF_INET6, 1127 + .sin6_addr = hdr->dst_addr.ip6, 1128 + .sin6_port = local_port, 1129 + }; 1131 1130 } 1132 1131 1133 1132 if (dst_addr) { 1134 - ip6 = (struct sockaddr_in6 *)dst_addr; 1135 - ip6->sin6_family = AF_INET6; 1136 - ip6->sin6_addr = hdr->src_addr.ip6; 1137 - ip6->sin6_port = hdr->port; 1133 + *dst_addr = (struct sockaddr_in6) { 1134 + .sin6_family = AF_INET6, 1135 + .sin6_addr = hdr->src_addr.ip6, 1136 + .sin6_port = hdr->port, 1137 + }; 1138 1138 } 1139 1139 } 1140 1140 ··· 1159 1159 1160 1160 switch (cma_get_ip_ver(hdr)) { 1161 1161 case 4: 1162 - cma_save_ip4_info(src_addr, dst_addr, hdr, port); 1162 + cma_save_ip4_info((struct sockaddr_in *)src_addr, 1163 + (struct sockaddr_in *)dst_addr, hdr, port); 1163 1164 break; 1164 1165 case 6: 1165 - cma_save_ip6_info(src_addr, dst_addr, hdr, port); 1166 + cma_save_ip6_info((struct sockaddr_in6 *)src_addr, 1167 + (struct sockaddr_in6 *)dst_addr, hdr, port); 1166 1168 break; 1167 1169 default: 1168 1170 return -EAFNOSUPPORT;
+17 -8
drivers/iommu/arm-smmu-v3.c
··· 2636 2636 /* And we're up. Go go go! */ 2637 2637 of_iommu_set_ops(dev->of_node, &arm_smmu_ops); 2638 2638 #ifdef CONFIG_PCI 2639 - pci_request_acs(); 2640 - ret = bus_set_iommu(&pci_bus_type, &arm_smmu_ops); 2641 - if (ret) 2642 - return ret; 2639 + if (pci_bus_type.iommu_ops != &arm_smmu_ops) { 2640 + pci_request_acs(); 2641 + ret = bus_set_iommu(&pci_bus_type, &arm_smmu_ops); 2642 + if (ret) 2643 + return ret; 2644 + } 2643 2645 #endif 2644 2646 #ifdef CONFIG_ARM_AMBA 2645 - ret = bus_set_iommu(&amba_bustype, &arm_smmu_ops); 2646 - if (ret) 2647 - return ret; 2647 + if (amba_bustype.iommu_ops != &arm_smmu_ops) { 2648 + ret = bus_set_iommu(&amba_bustype, &arm_smmu_ops); 2649 + if (ret) 2650 + return ret; 2651 + } 2648 2652 #endif 2649 - return bus_set_iommu(&platform_bus_type, &arm_smmu_ops); 2653 + if (platform_bus_type.iommu_ops != &arm_smmu_ops) { 2654 + ret = bus_set_iommu(&platform_bus_type, &arm_smmu_ops); 2655 + if (ret) 2656 + return ret; 2657 + } 2658 + return 0; 2650 2659 } 2651 2660 2652 2661 static int arm_smmu_device_remove(struct platform_device *pdev)
+14 -2
drivers/iommu/arm-smmu.c
··· 324 324 #define INVALID_SMENDX -1 325 325 #define __fwspec_cfg(fw) ((struct arm_smmu_master_cfg *)fw->iommu_priv) 326 326 #define fwspec_smmu(fw) (__fwspec_cfg(fw)->smmu) 327 + #define fwspec_smendx(fw, i) \ 328 + (i >= fw->num_ids ? INVALID_SMENDX : __fwspec_cfg(fw)->smendx[i]) 327 329 #define for_each_cfg_sme(fw, i, idx) \ 328 - for (i = 0; idx = __fwspec_cfg(fw)->smendx[i], i < fw->num_ids; ++i) 330 + for (i = 0; idx = fwspec_smendx(fw, i), i < fw->num_ids; ++i) 329 331 330 332 struct arm_smmu_device { 331 333 struct device *dev; ··· 1230 1228 return -ENXIO; 1231 1229 } 1232 1230 1231 + /* 1232 + * FIXME: The arch/arm DMA API code tries to attach devices to its own 1233 + * domains between of_xlate() and add_device() - we have no way to cope 1234 + * with that, so until ARM gets converted to rely on groups and default 1235 + * domains, just say no (but more politely than by dereferencing NULL). 1236 + * This should be at least a WARN_ON once that's sorted. 1237 + */ 1238 + if (!fwspec->iommu_priv) 1239 + return -ENODEV; 1240 + 1233 1241 smmu = fwspec_smmu(fwspec); 1234 1242 /* Ensure that the domain is finalised */ 1235 1243 ret = arm_smmu_init_domain_context(domain, smmu); ··· 1402 1390 fwspec = dev->iommu_fwspec; 1403 1391 if (ret) 1404 1392 goto out_free; 1405 - } else if (fwspec) { 1393 + } else if (fwspec && fwspec->ops == &arm_smmu_ops) { 1406 1394 smmu = arm_smmu_get_by_node(to_of_node(fwspec->iommu_fwnode)); 1407 1395 } else { 1408 1396 return -ENODEV;
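The for_each_cfg_sme() fix above closes an off-by-one: the loop condition evaluates idx before testing i < fw->num_ids, so the terminating pass used to index smendx[num_ids], one past the populated slots. Routing the lookup through fwspec_smendx() returns INVALID_SMENDX for an out-of-range i instead of reading past the array. A simplified sketch of the same macro shape (the struct below is a toy, not the kernel's iommu_fwspec):

```c
#include <assert.h>

#define INVALID_SMENDX (-1)
#define MAX_IDS 4

/* Simplified stand-in for iommu_fwspec + arm_smmu_master_cfg. */
struct fwspec {
	int num_ids;
	int smendx[MAX_IDS];
};

/* The loop test below evaluates idx *before* checking i < num_ids,
 * so the final pass would otherwise index smendx[num_ids]; the guard
 * yields INVALID_SMENDX instead of an out-of-bounds read. */
#define fwspec_smendx(fw, i) \
	((i) >= (fw)->num_ids ? INVALID_SMENDX : (fw)->smendx[i])

#define for_each_cfg_sme(fw, i, idx) \
	for (i = 0; idx = fwspec_smendx(fw, i), i < (fw)->num_ids; ++i)

static int sum_smendx(const struct fwspec *fw)
{
	int i, idx, sum = 0;

	for_each_cfg_sme(fw, i, idx)
		sum += idx;
	return sum;
}
```

The iteration still visits exactly num_ids entries; only the doomed final lookup is neutralized.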
+12 -2
drivers/iommu/intel-iommu.c
··· 1711 1711 if (!iommu->domains || !iommu->domain_ids) 1712 1712 return; 1713 1713 1714 + again: 1714 1715 spin_lock_irqsave(&device_domain_lock, flags); 1715 1716 list_for_each_entry_safe(info, tmp, &device_domain_list, global) { 1716 1717 struct dmar_domain *domain; ··· 1724 1723 1725 1724 domain = info->domain; 1726 1725 1727 - dmar_remove_one_dev_info(domain, info->dev); 1726 + __dmar_remove_one_dev_info(info); 1728 1727 1729 - if (!domain_type_is_vm_or_si(domain)) 1728 + if (!domain_type_is_vm_or_si(domain)) { 1729 + /* 1730 + * The domain_exit() function can't be called under 1731 + * device_domain_lock, as it takes this lock itself. 1732 + * So release the lock here and re-run the loop 1733 + * afterwards. 1734 + */ 1735 + spin_unlock_irqrestore(&device_domain_lock, flags); 1730 1736 domain_exit(domain); 1737 + goto again; 1738 + } 1731 1739 } 1732 1740 spin_unlock_irqrestore(&device_domain_lock, flags); 1733 1741
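The intel-iommu hunk above is the classic drop-lock-and-rescan pattern: domain_exit() takes device_domain_lock itself, so it cannot be called while that lock is held; instead the lock is released, the teardown runs, and the scan restarts from the top because the list may have changed meanwhile. A dependency-free sketch, where a toy non-recursive "lock" asserts on double acquisition (exactly the deadlock the real code avoids); all names here are invented:

```c
#include <assert.h>

/* Fake non-recursive lock: taking it twice trips the assert, the
 * analogue of the deadlock domain_exit() would hit if invoked under
 * device_domain_lock. */
static int locked;
static void lock(void)   { assert(!locked); locked = 1; }
static void unlock(void) { assert(locked);  locked = 0; }

static int live[4] = { 1, 1, 1, 1 };
static int exits;

static void domain_exit(int i)		/* takes the lock internally */
{
	lock();
	live[i] = 0;
	exits++;
	unlock();
}

static void free_all_domains(void)
{
again:
	lock();
	for (int i = 0; i < 4; i++) {
		if (!live[i])
			continue;
		/* Can't call domain_exit() under the lock it takes
		 * itself: drop it, do the teardown, then rescan from
		 * the top since the list may have changed meanwhile. */
		unlock();
		domain_exit(i);
		goto again;
	}
	unlock();
}
```

The rescan is O(n^2) in the worst case, which is acceptable for a rarely-run teardown path and far preferable to a deadlock.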
+5
drivers/media/dvb-frontends/Kconfig
··· 513 513 depends on DVB_CORE 514 514 default DVB_AS102 515 515 516 + config DVB_GP8PSK_FE 517 + tristate 518 + depends on DVB_CORE 519 + default DVB_USB_GP8PSK 520 + 516 521 comment "DVB-C (cable) frontends" 517 522 depends on DVB_CORE 518 523
+1
drivers/media/dvb-frontends/Makefile
··· 121 121 obj-$(CONFIG_DVB_M88RS2000) += m88rs2000.o 122 122 obj-$(CONFIG_DVB_AF9033) += af9033.o 123 123 obj-$(CONFIG_DVB_AS102_FE) += as102_fe.o 124 + obj-$(CONFIG_DVB_GP8PSK_FE) += gp8psk-fe.o 124 125 obj-$(CONFIG_DVB_TC90522) += tc90522.o 125 126 obj-$(CONFIG_DVB_HORUS3A) += horus3a.o 126 127 obj-$(CONFIG_DVB_ASCOT2E) += ascot2e.o
+82
drivers/media/dvb-frontends/gp8psk-fe.h
··· 1 + /* 2 + * gp8psk_fe driver 3 + * 4 + * This program is free software; you can redistribute it and/or modify 5 + * it under the terms of the GNU General Public License as published by 6 + * the Free Software Foundation; either version 2, or (at your option) 7 + * any later version. 8 + * 9 + * This program is distributed in the hope that it will be useful, 10 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 11 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 12 + * GNU General Public License for more details. 13 + */ 14 + 15 + #ifndef GP8PSK_FE_H 16 + #define GP8PSK_FE_H 17 + 18 + #include <linux/types.h> 19 + 20 + /* gp8psk commands */ 21 + 22 + #define GET_8PSK_CONFIG 0x80 /* in */ 23 + #define SET_8PSK_CONFIG 0x81 24 + #define I2C_WRITE 0x83 25 + #define I2C_READ 0x84 26 + #define ARM_TRANSFER 0x85 27 + #define TUNE_8PSK 0x86 28 + #define GET_SIGNAL_STRENGTH 0x87 /* in */ 29 + #define LOAD_BCM4500 0x88 30 + #define BOOT_8PSK 0x89 /* in */ 31 + #define START_INTERSIL 0x8A /* in */ 32 + #define SET_LNB_VOLTAGE 0x8B 33 + #define SET_22KHZ_TONE 0x8C 34 + #define SEND_DISEQC_COMMAND 0x8D 35 + #define SET_DVB_MODE 0x8E 36 + #define SET_DN_SWITCH 0x8F 37 + #define GET_SIGNAL_LOCK 0x90 /* in */ 38 + #define GET_FW_VERS 0x92 39 + #define GET_SERIAL_NUMBER 0x93 /* in */ 40 + #define USE_EXTRA_VOLT 0x94 41 + #define GET_FPGA_VERS 0x95 42 + #define CW3K_INIT 0x9d 43 + 44 + /* PSK_configuration bits */ 45 + #define bm8pskStarted 0x01 46 + #define bm8pskFW_Loaded 0x02 47 + #define bmIntersilOn 0x04 48 + #define bmDVBmode 0x08 49 + #define bm22kHz 0x10 50 + #define bmSEL18V 0x20 51 + #define bmDCtuned 0x40 52 + #define bmArmed 0x80 53 + 54 + /* Satellite modulation modes */ 55 + #define ADV_MOD_DVB_QPSK 0 /* DVB-S QPSK */ 56 + #define ADV_MOD_TURBO_QPSK 1 /* Turbo QPSK */ 57 + #define ADV_MOD_TURBO_8PSK 2 /* Turbo 8PSK (also used for Trellis 8PSK) */ 58 + #define ADV_MOD_TURBO_16QAM 3 /* Turbo 16QAM (also used for Trellis 8PSK) */ 59 + 60 + #define ADV_MOD_DCII_C_QPSK 4 /* Digicipher II Combo */ 61 + #define ADV_MOD_DCII_I_QPSK 5 /* Digicipher II I-stream */ 62 + #define ADV_MOD_DCII_Q_QPSK 6 /* Digicipher II Q-stream */ 63 + #define ADV_MOD_DCII_C_OQPSK 7 /* Digicipher II offset QPSK */ 64 + #define ADV_MOD_DSS_QPSK 8 /* DSS (DIRECTV) QPSK */ 65 + #define ADV_MOD_DVB_BPSK 9 /* DVB-S BPSK */ 66 + 67 + /* firmware revision id's */ 68 + #define GP8PSK_FW_REV1 0x020604 69 + #define GP8PSK_FW_REV2 0x020704 70 + #define GP8PSK_FW_VERS(_fw_vers) \ 71 + ((_fw_vers)[2]<<0x10 | (_fw_vers)[1]<<0x08 | (_fw_vers)[0]) 72 + 73 + struct gp8psk_fe_ops { 74 + int (*in)(void *priv, u8 req, u16 value, u16 index, u8 *b, int blen); 75 + int (*out)(void *priv, u8 req, u16 value, u16 index, u8 *b, int blen); 76 + int (*reload)(void *priv); 77 + }; 78 + 79 + struct dvb_frontend *gp8psk_fe_attach(const struct gp8psk_fe_ops *ops, 80 + void *priv, bool is_rev1); 81 + 82 + #endif
+1 -1
drivers/media/i2c/ir-kbd-i2c.c
··· 118 118 *protocol = RC_TYPE_RC6_MCE; 119 119 dev &= 0x7f; 120 120 dprintk(1, "ir hauppauge (rc6-mce): t%d vendor=%d dev=%d code=%d\n", 121 - toggle, vendor, dev, code); 121 + *ptoggle, vendor, dev, code); 122 122 } else { 123 123 *ptoggle = 0; 124 124 *protocol = RC_TYPE_RC6_6A_32;
+1 -1
drivers/media/usb/dvb-usb/Makefile
··· 8 8 dvb-usb-vp702x-objs := vp702x.o vp702x-fe.o 9 9 obj-$(CONFIG_DVB_USB_VP702X) += dvb-usb-vp702x.o 10 10 11 - dvb-usb-gp8psk-objs := gp8psk.o gp8psk-fe.o 11 + dvb-usb-gp8psk-objs := gp8psk.o 12 12 obj-$(CONFIG_DVB_USB_GP8PSK) += dvb-usb-gp8psk.o 13 13 14 14 dvb-usb-dtt200u-objs := dtt200u.o dtt200u-fe.o
+10 -23
drivers/media/usb/dvb-usb/af9005.c
··· 53 53 u8 sequence; 54 54 int led_state; 55 55 unsigned char data[256]; 56 - struct mutex data_mutex; 57 56 }; 58 57 59 58 static int af9005_generic_read_write(struct dvb_usb_device *d, u16 reg, ··· 71 72 return -EINVAL; 72 73 } 73 74 74 - mutex_lock(&st->data_mutex); 75 + mutex_lock(&d->data_mutex); 75 76 st->data[0] = 14; /* rest of buffer length low */ 76 77 st->data[1] = 0; /* rest of buffer length high */ 77 78 ··· 139 140 values[i] = st->data[8 + i]; 140 141 141 142 ret: 142 - mutex_unlock(&st->data_mutex); 143 + mutex_unlock(&d->data_mutex); 143 144 return ret; 144 145 145 146 } ··· 480 481 } 481 482 packet_len = wlen + 5; 482 483 483 - mutex_lock(&st->data_mutex); 484 + mutex_lock(&d->data_mutex); 484 485 485 486 st->data[0] = (u8) (packet_len & 0xff); 486 487 st->data[1] = (u8) ((packet_len & 0xff00) >> 8); ··· 511 512 rbuf[i] = st->data[i + 7]; 512 513 } 513 514 514 - mutex_unlock(&st->data_mutex); 515 + mutex_unlock(&d->data_mutex); 515 516 return ret; 516 517 } 517 518 ··· 522 523 u8 seq; 523 524 int ret, i; 524 525 525 - mutex_lock(&st->data_mutex); 526 + mutex_lock(&d->data_mutex); 526 527 527 528 memset(st->data, 0, sizeof(st->data)); 528 529 ··· 558 559 for (i = 0; i < len; i++) 559 560 values[i] = st->data[6 + i]; 560 561 } 561 - mutex_unlock(&st->data_mutex); 562 + mutex_unlock(&d->data_mutex); 562 563 563 564 return ret; 564 565 } ··· 846 847 return 0; 847 848 } 848 849 849 - mutex_lock(&st->data_mutex); 850 + mutex_lock(&d->data_mutex); 850 851 851 852 /* deb_info("rc_query\n"); */ 852 853 st->data[0] = 3; /* rest of packet length low */ ··· 889 890 } 890 891 891 892 ret: 892 - mutex_unlock(&st->data_mutex); 893 + mutex_unlock(&d->data_mutex); 893 894 return ret; 894 895 } 895 896 ··· 1003 1004 static int af9005_usb_probe(struct usb_interface *intf, 1004 1005 const struct usb_device_id *id) 1005 1006 { 1006 - struct dvb_usb_device *d; 1007 - struct af9005_device_state *st; 1008 - int ret; 1009 - 1010 - ret = dvb_usb_device_init(intf, &af9005_properties, 1011 - THIS_MODULE, &d, adapter_nr); 1012 - 1013 - if (ret < 0) 1014 - return ret; 1015 - 1016 - st = d->priv; 1017 - mutex_init(&st->data_mutex); 1018 - 1019 - return 0; 1007 + return dvb_usb_device_init(intf, &af9005_properties, 1008 + THIS_MODULE, NULL, adapter_nr); 1020 1009 } 1021 1010 1022 1011 enum af9005_usb_table_entry {
+10 -23
drivers/media/usb/dvb-usb/cinergyT2-core.c
··· 42 42 struct cinergyt2_state { 43 43 u8 rc_counter; 44 44 unsigned char data[64]; 45 - struct mutex data_mutex; 46 45 }; 47 46 48 47 /* We are missing a release hook with usb_device data */ ··· 55 56 struct cinergyt2_state *st = d->priv; 56 57 int ret; 57 58 58 - mutex_lock(&st->data_mutex); 59 + mutex_lock(&d->data_mutex); 59 60 st->data[0] = CINERGYT2_EP1_CONTROL_STREAM_TRANSFER; 60 61 st->data[1] = enable ? 1 : 0; 61 62 62 63 ret = dvb_usb_generic_rw(d, st->data, 2, st->data, 64, 0); 63 - mutex_unlock(&st->data_mutex); 64 + mutex_unlock(&d->data_mutex); 64 65 65 66 return ret; 66 67 } ··· 70 71 struct cinergyt2_state *st = d->priv; 71 72 int ret; 72 73 73 - mutex_lock(&st->data_mutex); 74 + mutex_lock(&d->data_mutex); 74 75 st->data[0] = CINERGYT2_EP1_SLEEP_MODE; 75 76 st->data[1] = enable ? 0 : 1; 76 77 77 78 ret = dvb_usb_generic_rw(d, st->data, 2, st->data, 3, 0); 78 - mutex_unlock(&st->data_mutex); 79 + mutex_unlock(&d->data_mutex); 79 80 80 81 return ret; 81 82 } ··· 88 89 89 90 adap->fe_adap[0].fe = cinergyt2_fe_attach(adap->dev); 90 91 91 - mutex_lock(&st->data_mutex); 92 + mutex_lock(&d->data_mutex); 92 93 st->data[0] = CINERGYT2_EP1_GET_FIRMWARE_VERSION; 93 94 94 95 ret = dvb_usb_generic_rw(d, st->data, 1, st->data, 3, 0); ··· 96 97 deb_rc("cinergyt2_power_ctrl() Failed to retrieve sleep " 97 98 "state info\n"); 98 99 } 99 - mutex_unlock(&st->data_mutex); 100 + mutex_unlock(&d->data_mutex); 100 101 101 102 /* Copy this pointer as we are gonna need it in the release phase */ 102 103 cinergyt2_usb_device = adap->dev; ··· 165 166 166 167 *state = REMOTE_NO_KEY_PRESSED; 167 168 168 - mutex_lock(&st->data_mutex); 169 + mutex_lock(&d->data_mutex); 169 170 st->data[0] = CINERGYT2_EP1_GET_RC_EVENTS; 170 171 171 172 ret = dvb_usb_generic_rw(d, st->data, 1, st->data, 5, 0); ··· 201 202 } 202 203 203 204 ret: 204 - mutex_unlock(&st->data_mutex); 205 + mutex_unlock(&d->data_mutex); 205 206 return ret; 206 207 } 207 208 208 209 static int cinergyt2_usb_probe(struct usb_interface *intf, 209 210 const struct usb_device_id *id) 210 211 { 211 - struct dvb_usb_device *d; 212 - struct cinergyt2_state *st; 213 - int ret; 214 - 215 - ret = dvb_usb_device_init(intf, &cinergyt2_properties, 216 - THIS_MODULE, &d, adapter_nr); 217 - if (ret < 0) 218 - return ret; 219 - 220 - st = d->priv; 221 - mutex_init(&st->data_mutex); 222 - 223 - return 0; 212 + return dvb_usb_device_init(intf, &cinergyt2_properties, 213 + THIS_MODULE, NULL, adapter_nr); 224 214 } 225 - 226 215 227 216 static struct usb_device_id cinergyt2_usb_table[] = { 228 217 { USB_DEVICE(USB_VID_TERRATEC, 0x0038) },
+16 -23
drivers/media/usb/dvb-usb/cxusb.c
··· 68 68 69 69 wo = (rbuf == NULL || rlen == 0); /* write-only */ 70 70 71 - mutex_lock(&st->data_mutex); 71 + mutex_lock(&d->data_mutex); 72 72 st->data[0] = cmd; 73 73 memcpy(&st->data[1], wbuf, wlen); 74 74 if (wo) ··· 77 77 ret = dvb_usb_generic_rw(d, st->data, 1 + wlen, 78 78 rbuf, rlen, 0); 79 79 80 - mutex_unlock(&st->data_mutex); 80 + mutex_unlock(&d->data_mutex); 81 81 return ret; 82 82 } 83 83 ··· 1461 1461 static int cxusb_probe(struct usb_interface *intf, 1462 1462 const struct usb_device_id *id) 1463 1463 { 1464 - struct dvb_usb_device *d; 1465 - struct cxusb_state *st; 1466 - 1467 1464 if (0 == dvb_usb_device_init(intf, &cxusb_medion_properties, 1468 - THIS_MODULE, &d, adapter_nr) || 1465 + THIS_MODULE, NULL, adapter_nr) || 1469 1466 0 == dvb_usb_device_init(intf, &cxusb_bluebird_lgh064f_properties, 1470 - THIS_MODULE, &d, adapter_nr) || 1467 + THIS_MODULE, NULL, adapter_nr) || 1471 1468 0 == dvb_usb_device_init(intf, &cxusb_bluebird_dee1601_properties, 1472 - THIS_MODULE, &d, adapter_nr) || 1469 + THIS_MODULE, NULL, adapter_nr) || 1473 1470 0 == dvb_usb_device_init(intf, &cxusb_bluebird_lgz201_properties, 1474 - THIS_MODULE, &d, adapter_nr) || 1471 + THIS_MODULE, NULL, adapter_nr) || 1475 1472 0 == dvb_usb_device_init(intf, &cxusb_bluebird_dtt7579_properties, 1476 - THIS_MODULE, &d, adapter_nr) || 1473 + THIS_MODULE, NULL, adapter_nr) || 1477 1474 0 == dvb_usb_device_init(intf, &cxusb_bluebird_dualdig4_properties, 1478 - THIS_MODULE, &d, adapter_nr) || 1475 + THIS_MODULE, NULL, adapter_nr) || 1479 1476 0 == dvb_usb_device_init(intf, &cxusb_bluebird_nano2_properties, 1480 - THIS_MODULE, &d, adapter_nr) || 1477 + THIS_MODULE, NULL, adapter_nr) || 1481 1478 0 == dvb_usb_device_init(intf, 1482 1479 &cxusb_bluebird_nano2_needsfirmware_properties, 1483 - THIS_MODULE, &d, adapter_nr) || 1480 + THIS_MODULE, NULL, adapter_nr) || 1484 1481 0 == dvb_usb_device_init(intf, &cxusb_aver_a868r_properties, 1485 - THIS_MODULE, &d, adapter_nr) || 1482 + THIS_MODULE, NULL, adapter_nr) || 1486 1483 0 == dvb_usb_device_init(intf, 1487 1484 &cxusb_bluebird_dualdig4_rev2_properties, 1488 - THIS_MODULE, &d, adapter_nr) || 1485 + THIS_MODULE, NULL, adapter_nr) || 1489 1486 0 == dvb_usb_device_init(intf, &cxusb_d680_dmb_properties, 1490 - THIS_MODULE, &d, adapter_nr) || 1487 + THIS_MODULE, NULL, adapter_nr) || 1491 1488 0 == dvb_usb_device_init(intf, &cxusb_mygica_d689_properties, 1492 - THIS_MODULE, &d, adapter_nr) || 1489 + THIS_MODULE, NULL, adapter_nr) || 1493 1490 0 == dvb_usb_device_init(intf, &cxusb_mygica_t230_properties, 1494 - THIS_MODULE, &d, adapter_nr) || 1495 - 0) { 1496 - st = d->priv; 1497 - mutex_init(&st->data_mutex); 1498 - 1491 + THIS_MODULE, NULL, adapter_nr) || 1492 + 0) 1499 1493 return 0; 1500 - } 1501 1494 1502 1495 return -EINVAL; 1503 1496 }
-1
drivers/media/usb/dvb-usb/cxusb.h
··· 37 37 struct i2c_client *i2c_client_tuner; 38 38 39 39 unsigned char data[MAX_XFER_SIZE]; 40 - struct mutex data_mutex; 41 40 }; 42 41 43 42 #endif
+3 -2
drivers/media/usb/dvb-usb/dib0700_core.c
··· 704 704 struct dvb_usb_device *d = purb->context; 705 705 struct dib0700_rc_response *poll_reply; 706 706 enum rc_type protocol; 707 - u32 uninitialized_var(keycode); 707 + u32 keycode; 708 708 u8 toggle; 709 709 710 710 deb_info("%s()\n", __func__); ··· 745 745 poll_reply->nec.data == 0x00 && 746 746 poll_reply->nec.not_data == 0xff) { 747 747 poll_reply->data_state = 2; 748 - break; 748 + rc_repeat(d->rc_dev); 749 + goto resubmit; 749 750 } 750 751 751 752 if ((poll_reply->nec.data ^ poll_reply->nec.not_data) != 0xff) {
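The dib0700 hunk above changes how a NEC repeat reply is handled: a data/not_data pair of 0x00/0xff is the "key held" marker, so the driver now reports rc_repeat() and resubmits the URB instead of dropping the event. A toy classifier sketching the two checks involved (the enum and function are invented for illustration, not the driver's code path):

```c
#include <assert.h>
#include <stdint.h>

enum nec_event { NEC_INVALID, NEC_REPEAT, NEC_KEY };

/* Classify a (simplified) NEC poll reply: data/not_data of 0x00/0xff
 * is the dedicated repeat marker; otherwise the two bytes must be
 * bitwise complements for the frame to be valid. */
static enum nec_event nec_classify(uint8_t data, uint8_t not_data)
{
	if (data == 0x00 && not_data == 0xff)
		return NEC_REPEAT;	/* report repeat, keep polling */
	if ((data ^ not_data) != 0xff)
		return NEC_INVALID;	/* corrupt frame, drop it */
	return NEC_KEY;			/* scancode byte is 'data' */
}
```

Before the fix, the repeat case fell through and the poll was abandoned, so held keys stopped repeating.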
+17 -23
drivers/media/usb/dvb-usb/dtt200u.c
··· 22 22 23 23 struct dtt200u_state { 24 24 unsigned char data[80]; 25 - struct mutex data_mutex; 26 25 }; 27 26 28 27 static int dtt200u_power_ctrl(struct dvb_usb_device *d, int onoff) ··· 29 30 struct dtt200u_state *st = d->priv; 30 31 int ret = 0; 31 32 32 - mutex_lock(&st->data_mutex); 33 + mutex_lock(&d->data_mutex); 33 34 34 35 st->data[0] = SET_INIT; 35 36 36 37 if (onoff) 37 38 ret = dvb_usb_generic_write(d, st->data, 2); 38 39 39 - mutex_unlock(&st->data_mutex); 40 + mutex_unlock(&d->data_mutex); 40 41 return ret; 41 42 } 42 43 43 44 static int dtt200u_streaming_ctrl(struct dvb_usb_adapter *adap, int onoff) 44 45 { 45 - struct dtt200u_state *st = adap->dev->priv; 46 + struct dvb_usb_device *d = adap->dev; 47 + struct dtt200u_state *st = d->priv; 46 48 int ret; 47 49 48 - mutex_lock(&st->data_mutex); 50 + mutex_lock(&d->data_mutex); 49 51 st->data[0] = SET_STREAMING; 50 52 st->data[1] = onoff; 51 53 ··· 61 61 ret = dvb_usb_generic_write(adap->dev, st->data, 1); 62 62 63 63 ret: 64 - mutex_unlock(&st->data_mutex); 64 + mutex_unlock(&d->data_mutex); 65 65 66 66 return ret; 67 67 } 68 68 69 69 static int dtt200u_pid_filter(struct dvb_usb_adapter *adap, int index, u16 pid, int onoff) 70 70 { 71 - struct dtt200u_state *st = adap->dev->priv; 71 + struct dvb_usb_device *d = adap->dev; 72 + struct dtt200u_state *st = d->priv; 72 73 int ret; 73 74 74 75 pid = onoff ? pid : 0; 75 76 76 - mutex_lock(&st->data_mutex); 77 + mutex_lock(&d->data_mutex); 77 78 st->data[0] = SET_PID_FILTER; 78 79 st->data[1] = index; 79 80 st->data[2] = pid & 0xff; 80 81 st->data[3] = (pid >> 8) & 0x1f; 81 82 82 83 ret = dvb_usb_generic_write(adap->dev, st->data, 4); 83 - mutex_unlock(&st->data_mutex); 84 + mutex_unlock(&d->data_mutex); 84 85 85 86 return ret; 86 87 } ··· 92 91 u32 scancode; 93 92 int ret; 94 93 95 - mutex_lock(&st->data_mutex); 94 + mutex_lock(&d->data_mutex); 96 95 st->data[0] = GET_RC_CODE; 97 96 98 97 ret = dvb_usb_generic_rw(d, st->data, 1, st->data, 5, 0); ··· 127 126 deb_info("st->data: %*ph\n", 5, st->data); 128 127 129 128 ret: 130 - mutex_unlock(&st->data_mutex); 129 + mutex_unlock(&d->data_mutex); 131 130 return ret; 132 131 } ··· 146 145 static int dtt200u_usb_probe(struct usb_interface *intf, 147 146 const struct usb_device_id *id) 148 147 { 149 - struct dvb_usb_device *d; 150 - struct dtt200u_state *st; 151 - 152 148 if (0 == dvb_usb_device_init(intf, &dtt200u_properties, 153 - THIS_MODULE, &d, adapter_nr) || 149 + THIS_MODULE, NULL, adapter_nr) || 154 150 0 == dvb_usb_device_init(intf, &wt220u_properties, 155 - THIS_MODULE, &d, adapter_nr) || 151 + THIS_MODULE, NULL, adapter_nr) || 156 152 0 == dvb_usb_device_init(intf, &wt220u_fc_properties, 157 - THIS_MODULE, &d, adapter_nr) || 153 + THIS_MODULE, NULL, adapter_nr) || 158 154 0 == dvb_usb_device_init(intf, &wt220u_zl0353_properties, 159 - THIS_MODULE, &d, adapter_nr) || 155 + THIS_MODULE, NULL, adapter_nr) || 160 156 0 == dvb_usb_device_init(intf, &wt220u_miglia_properties, 161 - THIS_MODULE, &d, adapter_nr)) { 162 - st = d->priv; 163 - mutex_init(&st->data_mutex); 164 - 157 + THIS_MODULE, NULL, adapter_nr)) 165 158 return 0; 166 - } 167 159 168 160 return -ENODEV; 169 161 }
+1
drivers/media/usb/dvb-usb/dvb-usb-init.c
··· 142 142 { 143 143 int ret = 0; 144 144 145 + mutex_init(&d->data_mutex); 145 146 mutex_init(&d->usb_mutex); 146 147 mutex_init(&d->i2c_mutex); 147 148
+7 -2
drivers/media/usb/dvb-usb/dvb-usb.h
··· 404 404 * Powered is in/decremented for each call to modify the state. 405 405 * @udev: pointer to the device's struct usb_device. 406 406 * 407 - * @usb_mutex: semaphore of USB control messages (reading needs two messages) 408 - * @i2c_mutex: semaphore for i2c-transfers 407 + * @data_mutex: mutex to protect the data structure used to store URB data 408 + * @usb_mutex: mutex of USB control messages (reading needs two messages). 409 + * Please notice that this mutex is used internally at the generic 410 + * URB control functions. So, drivers using dvb_usb_generic_rw() and 411 + * derivated functions should not lock it internally. 412 + * @i2c_mutex: mutex for i2c-transfers 409 413 * 410 414 * @i2c_adap: device's i2c_adapter if it uses I2CoverUSB 411 415 * ··· 437 433 int powered; 438 434 439 435 /* locking */ 436 + struct mutex data_mutex; 440 437 struct mutex usb_mutex; 441 438 442 439 /* i2c */
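The dvb-usb.h and dvb-usb-init.c hunks above centralize data_mutex: it moves from each driver's private state into struct dvb_usb_device and is initialized once by the core, so no driver can forget mutex_init() and every URB buffer access goes through the same lock. A sketch of the shape of that refactor, where 'mutex' is a toy flag rather than the kernel's struct mutex and all names are stand-ins:

```c
#include <assert.h>

/* The mutex lives in the shared device struct and is initialized once
 * by the core, instead of each driver keeping (and having to remember
 * to init) its own copy. */
struct dvb_usb_device {
	int data_mutex;		/* 0 = unlocked, 1 = held */
	void *priv;		/* driver state, e.g. a cxusb-like struct */
};

struct driver_state {
	unsigned char data[64];	/* URB buffer guarded by d->data_mutex */
};

static void mutex_lock(int *m)   { assert(*m == 0); *m = 1; }
static void mutex_unlock(int *m) { assert(*m == 1); *m = 0; }

/* Core-side one-time init, as dvb_usb_init() now does. */
static int dvb_usb_init_common(struct dvb_usb_device *d,
			       struct driver_state *st)
{
	d->data_mutex = 0;
	d->priv = st;
	return 0;
}

/* Driver-side access: lock the *device* mutex, not a private one. */
static int send_cmd(struct dvb_usb_device *d, unsigned char cmd)
{
	struct driver_state *st = d->priv;

	mutex_lock(&d->data_mutex);
	st->data[0] = cmd;	/* safe: no concurrent writer */
	mutex_unlock(&d->data_mutex);
	return 0;
}
```

This is why the af9005, cinergyT2, cxusb, and dtt200u probes above shrink to a bare dvb_usb_device_init() call: the per-driver mutex_init() boilerplate is gone.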
+80 -55
drivers/media/usb/dvb-usb/gp8psk-fe.c drivers/media/dvb-frontends/gp8psk-fe.c
··· 14 14 * 15 15 * see Documentation/dvb/README.dvb-usb for more information 16 16 */ 17 - #include "gp8psk.h" 17 + 18 + #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 19 + 20 + #include "gp8psk-fe.h" 21 + #include "dvb_frontend.h" 22 + 23 + static int debug; 24 + module_param(debug, int, 0644); 25 + MODULE_PARM_DESC(debug, "Turn on/off debugging (default:off)."); 26 + 27 + #define dprintk(fmt, arg...) do { \ 28 + if (debug) \ 29 + printk(KERN_DEBUG pr_fmt("%s: " fmt), \ 30 + __func__, ##arg); \ 31 + } while (0) 18 32 19 33 struct gp8psk_fe_state { 20 34 struct dvb_frontend fe; 21 - struct dvb_usb_device *d; 35 + void *priv; 36 + const struct gp8psk_fe_ops *ops; 37 + bool is_rev1; 22 38 u8 lock; 23 39 u16 snr; 24 40 unsigned long next_status_check; ··· 45 29 { 46 30 struct gp8psk_fe_state *st = fe->demodulator_priv; 47 31 u8 status; 48 - gp8psk_usb_in_op(st->d, GET_8PSK_CONFIG, 0, 0, &status, 1); 32 + 33 + st->ops->in(st->priv, GET_8PSK_CONFIG, 0, 0, &status, 1); 49 34 return status & bmDCtuned; 50 35 } 51 36 52 37 static int gp8psk_set_tuner_mode(struct dvb_frontend *fe, int mode) 53 38 { 54 - struct gp8psk_fe_state *state = fe->demodulator_priv; 55 - return gp8psk_usb_out_op(state->d, SET_8PSK_CONFIG, mode, 0, NULL, 0); 39 + struct gp8psk_fe_state *st = fe->demodulator_priv; 40 + 41 + return st->ops->out(st->priv, SET_8PSK_CONFIG, mode, 0, NULL, 0); 56 42 } 57 43 58 44 static int gp8psk_fe_update_status(struct gp8psk_fe_state *st) 59 45 { 60 46 u8 buf[6]; 61 47 if (time_after(jiffies,st->next_status_check)) { 62 - gp8psk_usb_in_op(st->d, GET_SIGNAL_LOCK, 0,0,&st->lock,1); 63 - gp8psk_usb_in_op(st->d, GET_SIGNAL_STRENGTH, 0,0,buf,6); 48 + st->ops->in(st->priv, GET_SIGNAL_LOCK, 0, 0, &st->lock, 1); 49 + st->ops->in(st->priv, GET_SIGNAL_STRENGTH, 0, 0, buf, 6); 64 50 st->snr = (buf[1]) << 8 | buf[0]; 65 51 st->next_status_check = jiffies + (st->status_check_interval*HZ)/1000; 66 52 } ··· 134 116 135 117 static int gp8psk_fe_set_frontend(struct dvb_frontend *fe) 136 118 { 137 - struct gp8psk_fe_state *state = fe->demodulator_priv; 119 + struct gp8psk_fe_state *st = fe->demodulator_priv; 138 120 struct dtv_frontend_properties *c = &fe->dtv_property_cache; 139 121 u8 cmd[10]; 140 122 u32 freq = c->frequency * 1000; 141 - int gp_product_id = le16_to_cpu(state->d->udev->descriptor.idProduct); 142 123 143 - deb_fe("%s()\n", __func__); 124 + dprintk("%s()\n", __func__); 144 125 145 126 cmd[4] = freq & 0xff; 146 127 cmd[5] = (freq >> 8) & 0xff; ··· 153 136 switch (c->delivery_system) { 154 137 case SYS_DVBS: 155 138 if (c->modulation != QPSK) { 156 - deb_fe("%s: unsupported modulation selected (%d)\n", 139 + dprintk("%s: unsupported modulation selected (%d)\n", 157 140 __func__, c->modulation); 158 141 return -EOPNOTSUPP; 159 142 } 160 143 c->fec_inner = FEC_AUTO; 161 144 break; 162 145 case SYS_DVBS2: /* kept for backwards compatibility */ 163 - deb_fe("%s: DVB-S2 delivery system selected\n", __func__); 146 + dprintk("%s: DVB-S2 delivery system selected\n", __func__); 164 147 break; 165 148 case SYS_TURBO: 166 - deb_fe("%s: Turbo-FEC delivery system selected\n", __func__); 149 + dprintk("%s: Turbo-FEC delivery system selected\n", __func__); 167 150 break; 168 151 169 152 default: 170 - deb_fe("%s: unsupported delivery system selected (%d)\n", 153 + dprintk("%s: unsupported delivery system selected (%d)\n", 171 154 __func__, c->delivery_system); 172 155 return -EOPNOTSUPP; 173 156 } ··· 178 161 cmd[3] = (c->symbol_rate >> 24) & 0xff; 179 162 switch (c->modulation) { 180 163 case QPSK: 181 - if (gp_product_id == USB_PID_GENPIX_8PSK_REV_1_WARM) 164 + if (st->is_rev1) 182 165 if (gp8psk_tuned_to_DCII(fe)) 183 - gp8psk_bcm4500_reload(state->d); 166 + st->ops->reload(st->priv); 184 167 switch (c->fec_inner) { 185 168 case FEC_1_2: 186 169 cmd[9] = 0; break; ··· 224 207 cmd[9] = 0; 225 208 break; 226 209 default: /* Unknown modulation */ 227 - deb_fe("%s: unsupported modulation selected (%d)\n", 210 + dprintk("%s: unsupported modulation selected (%d)\n", 228 211 __func__, c->modulation); 229 212 return -EOPNOTSUPP; 230 213 } 231 214 232 - if (gp_product_id == USB_PID_GENPIX_8PSK_REV_1_WARM) 215 + if (st->is_rev1) 233 216 gp8psk_set_tuner_mode(fe, 0); 234 - gp8psk_usb_out_op(state->d, TUNE_8PSK, 0, 0, cmd, 10); 217 + st->ops->out(st->priv, TUNE_8PSK, 0, 0, cmd, 10); 235 218 236 - state->lock = 0; 237 - state->next_status_check = jiffies; 238 - state->status_check_interval = 200; 219 + st->lock = 0; 220 + st->next_status_check = jiffies; 221 + st->status_check_interval = 200; 239 222 240 223 return 0; 241 224 } ··· 245 228 { 246 229 struct gp8psk_fe_state *st = fe->demodulator_priv; 247 230 248 - deb_fe("%s\n",__func__); 231 + dprintk("%s\n", __func__); 249 232 250 - if (gp8psk_usb_out_op(st->d,SEND_DISEQC_COMMAND, m->msg[0], 0, 233 + if (st->ops->out(st->priv, SEND_DISEQC_COMMAND, m->msg[0], 0, 251 234 m->msg, m->msg_len)) { 252 235 return -EINVAL; 253 236 } ··· 260 243 struct gp8psk_fe_state *st = fe->demodulator_priv; 261 244 u8 cmd; 262 245 263 - deb_fe("%s\n",__func__); 246 + dprintk("%s\n", __func__); 264 247 265 248 /* These commands are certainly wrong */ 266 249 cmd = (burst == SEC_MINI_A) ? 0x00 : 0x01; 267 250 268 - if (gp8psk_usb_out_op(st->d,SEND_DISEQC_COMMAND, cmd, 0, 251 + if (st->ops->out(st->priv, SEND_DISEQC_COMMAND, cmd, 0, 269 252 &cmd, 0)) { 270 253 return -EINVAL; 271 254 } ··· 275 258 static int gp8psk_fe_set_tone(struct dvb_frontend *fe, 276 259 enum fe_sec_tone_mode tone) 277 260 { 278 - struct gp8psk_fe_state* state = fe->demodulator_priv; 261 + struct gp8psk_fe_state *st = fe->demodulator_priv; 279 262 280 - if (gp8psk_usb_out_op(state->d,SET_22KHZ_TONE, 281 - (tone == SEC_TONE_ON), 0, NULL, 0)) { 263 + if (st->ops->out(st->priv, SET_22KHZ_TONE, 264 + (tone == SEC_TONE_ON), 0, NULL, 0)) { 282 265 return -EINVAL; 283 266 } 284 267 return 0; ··· 287 270 static int gp8psk_fe_set_voltage(struct dvb_frontend *fe, 288 271 enum fe_sec_voltage voltage) 289 272 { 290 - struct gp8psk_fe_state* state = fe->demodulator_priv; 273 + struct gp8psk_fe_state *st = fe->demodulator_priv; 291 274 292 - if (gp8psk_usb_out_op(state->d,SET_LNB_VOLTAGE, 275 + if (st->ops->out(st->priv, SET_LNB_VOLTAGE, 293 276 voltage == SEC_VOLTAGE_18, 0, NULL, 0)) { 294 277 return -EINVAL; 295 278 } ··· 298 281 299 282 static int gp8psk_fe_enable_high_lnb_voltage(struct dvb_frontend* fe, long onoff) 300 283 { 301 - struct gp8psk_fe_state* state = fe->demodulator_priv; 302 - return gp8psk_usb_out_op(state->d, USE_EXTRA_VOLT, onoff, 0,NULL,0); 284 + struct gp8psk_fe_state *st = fe->demodulator_priv; 285 + 286 + return st->ops->out(st->priv, USE_EXTRA_VOLT, onoff, 0, NULL, 0); 303 287 } 304 288 305 289 static int gp8psk_fe_send_legacy_dish_cmd (struct dvb_frontend* fe, unsigned long sw_cmd) 306 290 { 307 - struct gp8psk_fe_state* state = fe->demodulator_priv; 291 + struct gp8psk_fe_state *st = fe->demodulator_priv; 308 292 u8 cmd = sw_cmd & 0x7f; 309 293 310 - if (gp8psk_usb_out_op(state->d,SET_DN_SWITCH, cmd, 0, 311 - NULL, 0)) { 294 + if (st->ops->out(st->priv, SET_DN_SWITCH, cmd, 0, NULL, 0)) 312 295 return -EINVAL; 313 - } 314 - if (gp8psk_usb_out_op(state->d,SET_LNB_VOLTAGE, !!(sw_cmd & 0x80), 315 - 0, NULL, 0)) { 296 + 297 + if (st->ops->out(st->priv, SET_LNB_VOLTAGE, !!(sw_cmd & 0x80), 298 + 0, NULL, 0)) 316 299 return -EINVAL; 317 - } 318 300 319 301 return 0; 320 302 } 321 303 322 304 static void gp8psk_fe_release(struct dvb_frontend* fe) 323 305 { 324 - struct gp8psk_fe_state *state = fe->demodulator_priv; 325 - kfree(state); 306 + struct gp8psk_fe_state *st = fe->demodulator_priv; 307 + 308 + kfree(st); 326 309 } 327 310 328 311 static struct dvb_frontend_ops gp8psk_fe_ops; 329 312 330 - struct dvb_frontend * gp8psk_fe_attach(struct dvb_usb_device *d) 313 + struct dvb_frontend *gp8psk_fe_attach(const struct gp8psk_fe_ops *ops, 314 + void *priv, bool is_rev1) 331 315 { 332 - struct gp8psk_fe_state *s = kzalloc(sizeof(struct gp8psk_fe_state), GFP_KERNEL); 333 - if (s == NULL) 334 - goto error; 316 + struct gp8psk_fe_state *st; 335 317 336 - s->d = d; 337 - memcpy(&s->fe.ops, &gp8psk_fe_ops, sizeof(struct dvb_frontend_ops)); 338 - s->fe.demodulator_priv = s; 318 + if (!ops || !ops->in || !ops->out || !ops->reload) { 319 + pr_err("Error! gp8psk-fe ops not defined.\n"); 320 + return NULL; 321 + } 339 322 340 - goto success; 341 - error: 342 - return NULL; 343 - success: 344 - return &s->fe; 323 + st = kzalloc(sizeof(struct gp8psk_fe_state), GFP_KERNEL); 324 + if (!st) 325 + return NULL; 326 + 327 + memcpy(&st->fe.ops, &gp8psk_fe_ops, sizeof(struct dvb_frontend_ops)); 328 + st->fe.demodulator_priv = st; 329 + st->ops = ops; 330 + st->priv = priv; 331 + st->is_rev1 = is_rev1; 332 + 333 + pr_info("Frontend %sattached\n", is_rev1 ? "revision 1 " : ""); 334 + 335 + return &st->fe; 345 336 } 346 - 337 + EXPORT_SYMBOL_GPL(gp8psk_fe_attach); 347 338 348 339 static struct dvb_frontend_ops gp8psk_fe_ops = { 349 340 .delsys = { SYS_DVBS },
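The gp8psk-fe move above is a textbook decoupling: the frontend no longer calls USB helpers directly but goes through a struct of function pointers (gp8psk_fe_ops) plus an opaque priv cookie, and the attach routine validates the vtable up front. That is what lets the file migrate from dvb-usb into dvb-frontends. A self-contained sketch of the pattern, with simplified signatures (the kernel's ops also carry value/index arguments):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* The frontend sees only this vtable + an opaque cookie, mirroring
 * struct gp8psk_fe_ops; the transport behind it stays invisible. */
struct fe_ops {
	int (*in)(void *priv, uint8_t req, uint8_t *b, int blen);
	int (*out)(void *priv, uint8_t req, uint8_t *b, int blen);
};

struct frontend {
	const struct fe_ops *ops;
	void *priv;
};

/* attach validates the vtable up front, as gp8psk_fe_attach() does. */
static int fe_attach(struct frontend *fe, const struct fe_ops *ops,
		     void *priv)
{
	if (!ops || !ops->in || !ops->out)
		return -1;
	fe->ops = ops;
	fe->priv = priv;
	return 0;
}

/* A fake "USB" backend providing the callbacks. */
static int fake_in(void *priv, uint8_t req, uint8_t *b, int blen)
{
	(void)req; (void)blen;
	*b = *(uint8_t *)priv;	/* pretend we read a status register */
	return 0;
}

static int fake_out(void *priv, uint8_t req, uint8_t *b, int blen)
{
	(void)b; (void)blen;
	*(uint8_t *)priv = req;	/* pretend we wrote a command */
	return 0;
}
```

Any transport that can fill in the three (here two) callbacks can reuse the frontend unchanged, which is exactly how gp8psk.c wires itself up below.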
+78 -33
drivers/media/usb/dvb-usb/gp8psk.c
··· 15 15 * see Documentation/dvb/README.dvb-usb for more information 16 16 */ 17 17 #include "gp8psk.h" 18 + #include "gp8psk-fe.h" 18 19 19 20 /* debug */ 20 21 static char bcm4500_firmware[] = "dvb-usb-gp8psk-02.fw"; ··· 29 28 unsigned char data[80]; 30 29 }; 31 30 32 - static int gp8psk_get_fw_version(struct dvb_usb_device *d, u8 *fw_vers) 33 - { 34 - return (gp8psk_usb_in_op(d, GET_FW_VERS, 0, 0, fw_vers, 6)); 35 - } 36 - 37 - static int gp8psk_get_fpga_version(struct dvb_usb_device *d, u8 *fpga_vers) 38 - { 39 - return (gp8psk_usb_in_op(d, GET_FPGA_VERS, 0, 0, fpga_vers, 1)); 40 - } 41 - 42 - static void gp8psk_info(struct dvb_usb_device *d) 43 - { 44 - u8 fpga_vers, fw_vers[6]; 45 - 46 - if (!gp8psk_get_fw_version(d, fw_vers)) 47 - info("FW Version = %i.%02i.%i (0x%x) Build %4i/%02i/%02i", 48 - fw_vers[2], fw_vers[1], fw_vers[0], GP8PSK_FW_VERS(fw_vers), 49 - 2000 + fw_vers[5], fw_vers[4], fw_vers[3]); 50 - else 51 - info("failed to get FW version"); 52 - 53 - if (!gp8psk_get_fpga_version(d, &fpga_vers)) 54 - info("FPGA Version = %i", fpga_vers); 55 - else 56 - info("failed to get FPGA version"); 57 - } 58 - 59 - int gp8psk_usb_in_op(struct dvb_usb_device *d, u8 req, u16 value, u16 index, u8 *b, int blen) 31 + static int gp8psk_usb_in_op(struct dvb_usb_device *d, u8 req, u16 value, 32 + u16 index, u8 *b, int blen) 60 33 { 61 34 struct gp8psk_state *st = d->priv; 62 35 int ret = 0,try = 0; ··· 42 67 return ret; 43 68 44 69 while (ret >= 0 && ret != blen && try < 3) { 45 - memcpy(st->data, b, blen); 46 70 ret = usb_control_msg(d->udev, 47 71 usb_rcvctrlpipe(d->udev,0), 48 72 req, ··· 55 81 if (ret < 0 || ret != blen) { 56 82 warn("usb in %d operation failed.", req); 57 83 ret = -EIO; 58 - } else 84 + } else { 59 85 ret = 0; 86 + memcpy(b, st->data, blen); 87 + } 60 88 61 89 deb_xfer("in: req. %x, val: %x, ind: %x, buffer: ",req,value,index); 62 90 debug_dump(b,blen,deb_xfer); ··· 68 92 return ret; 69 93 } 70 94 71 - int gp8psk_usb_out_op(struct dvb_usb_device *d, u8 req, u16 value, 95 + static int gp8psk_usb_out_op(struct dvb_usb_device *d, u8 req, u16 value, 72 96 u16 index, u8 *b, int blen) 73 97 { 74 98 struct gp8psk_state *st = d->priv; ··· 97 121 mutex_unlock(&d->usb_mutex); 98 122 99 123 return ret; 124 + } 125 + 126 + 127 + static int gp8psk_get_fw_version(struct dvb_usb_device *d, u8 *fw_vers) 128 + { 129 + return gp8psk_usb_in_op(d, GET_FW_VERS, 0, 0, fw_vers, 6); 130 + } 131 + 132 + static int gp8psk_get_fpga_version(struct dvb_usb_device *d, u8 *fpga_vers) 133 + { 134 + return gp8psk_usb_in_op(d, GET_FPGA_VERS, 0, 0, fpga_vers, 1); 135 + } 136 + 137 + static void gp8psk_info(struct dvb_usb_device *d) 138 + { 139 + u8 fpga_vers, fw_vers[6]; 140 + 141 + if (!gp8psk_get_fw_version(d, fw_vers)) 142 + info("FW Version = %i.%02i.%i (0x%x) Build %4i/%02i/%02i", 143 + fw_vers[2], fw_vers[1], fw_vers[0], GP8PSK_FW_VERS(fw_vers), 144 + 2000 + fw_vers[5], fw_vers[4], fw_vers[3]); 145 + else 146 + info("failed to get FW version"); 147 + 148 + if (!gp8psk_get_fpga_version(d, &fpga_vers)) 149 + info("FPGA Version = %i", fpga_vers); 150 + else 151 + info("failed to get FPGA version"); 100 152 } 101 153 102 154 static int gp8psk_load_bcm4500fw(struct dvb_usb_device *d) ··· 229 225 return 0; 230 226 } 231 227 232 - int gp8psk_bcm4500_reload(struct dvb_usb_device *d) 228 + static int gp8psk_bcm4500_reload(struct dvb_usb_device *d) 233 229 { 234 230 u8 buf; 235 231 int gp_product_id = le16_to_cpu(d->udev->descriptor.idProduct); 232 + 233 + deb_xfer("reloading firmware\n"); 234 + 236 235 /* Turn off 8psk power */ 237 236 if (gp8psk_usb_in_op(d, BOOT_8PSK, 0, 0, &buf, 1)) 238 237 return -EINVAL; ··· 254 247 return gp8psk_usb_out_op(adap->dev, ARM_TRANSFER, onoff, 0 , NULL, 0); 255 248 } 256 249 250 + /* Callbacks for gp8psk-fe.c */ 251 + 252 + static int gp8psk_fe_in(void *priv, u8 req, u16 value, 253 + u16 index, u8 *b, int blen) 254 + { 255 + struct dvb_usb_device *d = priv; 256 + 257 + return gp8psk_usb_in_op(d, req, value, index, b, blen); 258 + } 259 + 260 + static int gp8psk_fe_out(void *priv, u8 req, u16 value, 261 + u16 index, u8 *b, int blen) 262 + { 263 + struct dvb_usb_device *d = priv; 264 + 265 + return gp8psk_usb_out_op(d, req, value, index, b, blen); 266 + } 267 + 268 + static int gp8psk_fe_reload(void *priv) 269 + { 270 + struct dvb_usb_device *d = priv; 271 + 272 + return gp8psk_bcm4500_reload(d); 273 + } 274 + 275 + const struct gp8psk_fe_ops gp8psk_fe_ops = { 276 + .in = gp8psk_fe_in, 277 + .out = gp8psk_fe_out, 278 + .reload = gp8psk_fe_reload, 279 + }; 280 + 257 281 static int gp8psk_frontend_attach(struct dvb_usb_adapter *adap) 258 282 { 259 - adap->fe_adap[0].fe = gp8psk_fe_attach(adap->dev); 283 + struct dvb_usb_device *d = adap->dev; 284 + int id = le16_to_cpu(d->udev->descriptor.idProduct); 285 + int is_rev1; 286 + 287 + is_rev1 = (id == USB_PID_GENPIX_8PSK_REV_1_WARM) ? true : false; 288 + 289 + adap->fe_adap[0].fe = dvb_attach(gp8psk_fe_attach, 290 + &gp8psk_fe_ops, d, is_rev1); 260 291 return 0; 261 292 } 262 293
-63
drivers/media/usb/dvb-usb/gp8psk.h
··· 24 24 #define deb_info(args...) dprintk(dvb_usb_gp8psk_debug,0x01,args) 25 25 #define deb_xfer(args...) dprintk(dvb_usb_gp8psk_debug,0x02,args) 26 26 #define deb_rc(args...) dprintk(dvb_usb_gp8psk_debug,0x04,args) 27 - #define deb_fe(args...) dprintk(dvb_usb_gp8psk_debug,0x08,args) 28 - 29 - /* Twinhan Vendor requests */ 30 - #define TH_COMMAND_IN 0xC0 31 - #define TH_COMMAND_OUT 0xC1 32 - 33 - /* gp8psk commands */ 34 - 35 - #define GET_8PSK_CONFIG 0x80 /* in */ 36 - #define SET_8PSK_CONFIG 0x81 37 - #define I2C_WRITE 0x83 38 - #define I2C_READ 0x84 39 - #define ARM_TRANSFER 0x85 40 - #define TUNE_8PSK 0x86 41 - #define GET_SIGNAL_STRENGTH 0x87 /* in */ 42 - #define LOAD_BCM4500 0x88 43 - #define BOOT_8PSK 0x89 /* in */ 44 - #define START_INTERSIL 0x8A /* in */ 45 - #define SET_LNB_VOLTAGE 0x8B 46 - #define SET_22KHZ_TONE 0x8C 47 - #define SEND_DISEQC_COMMAND 0x8D 48 - #define SET_DVB_MODE 0x8E 49 - #define SET_DN_SWITCH 0x8F 50 - #define GET_SIGNAL_LOCK 0x90 /* in */ 51 - #define GET_FW_VERS 0x92 52 - #define GET_SERIAL_NUMBER 0x93 /* in */ 53 - #define USE_EXTRA_VOLT 0x94 54 - #define GET_FPGA_VERS 0x95 55 - #define CW3K_INIT 0x9d 56 - 57 - /* PSK_configuration bits */ 58 - #define bm8pskStarted 0x01 59 - #define bm8pskFW_Loaded 0x02 60 - #define bmIntersilOn 0x04 61 - #define bmDVBmode 0x08 62 - #define bm22kHz 0x10 63 - #define bmSEL18V 0x20 64 - #define bmDCtuned 0x40 65 - #define bmArmed 0x80 66 - 67 - /* Satellite modulation modes */ 68 - #define ADV_MOD_DVB_QPSK 0 /* DVB-S QPSK */ 69 - #define ADV_MOD_TURBO_QPSK 1 /* Turbo QPSK */ 70 - #define ADV_MOD_TURBO_8PSK 2 /* Turbo 8PSK (also used for Trellis 8PSK) */ 71 - #define ADV_MOD_TURBO_16QAM 3 /* Turbo 16QAM (also used for Trellis 8PSK) */ 72 - 73 - #define ADV_MOD_DCII_C_QPSK 4 /* Digicipher II Combo */ 74 - #define ADV_MOD_DCII_I_QPSK 5 /* Digicipher II I-stream */ 75 - #define ADV_MOD_DCII_Q_QPSK 6 /* Digicipher II Q-stream */ 76 - #define ADV_MOD_DCII_C_OQPSK 7 /* Digicipher II offset QPSK */ 77 - 
#define ADV_MOD_DSS_QPSK 8 /* DSS (DIRECTV) QPSK */ 78 - #define ADV_MOD_DVB_BPSK 9 /* DVB-S BPSK */ 79 27 80 28 #define GET_USB_SPEED 0x07 81 29 ··· 33 85 #define VENDOR_STRING_READ 0x0C 34 86 #define PRODUCT_STRING_READ 0x0D 35 87 #define FW_BCD_VERSION_READ 0x14 36 - 37 - /* firmware revision id's */ 38 - #define GP8PSK_FW_REV1 0x020604 39 - #define GP8PSK_FW_REV2 0x020704 40 - #define GP8PSK_FW_VERS(_fw_vers) ((_fw_vers)[2]<<0x10 | (_fw_vers)[1]<<0x08 | (_fw_vers)[0]) 41 - 42 - extern struct dvb_frontend * gp8psk_fe_attach(struct dvb_usb_device *d); 43 - extern int gp8psk_usb_in_op(struct dvb_usb_device *d, u8 req, u16 value, u16 index, u8 *b, int blen); 44 - extern int gp8psk_usb_out_op(struct dvb_usb_device *d, u8 req, u16 value, 45 - u16 index, u8 *b, int blen); 46 - extern int gp8psk_bcm4500_reload(struct dvb_usb_device *d); 47 88 48 89 #endif
+1 -1
drivers/misc/mei/bus-fixup.c
··· 178 178 179 179 ret = 0; 180 180 bytes_recv = __mei_cl_recv(cl, (u8 *)reply, if_version_length); 181 - if (bytes_recv < 0 || bytes_recv < sizeof(struct mei_nfc_reply)) { 181 + if (bytes_recv < if_version_length) { 182 182 dev_err(bus->dev, "Could not read IF version\n"); 183 183 ret = -EIO; 184 184 goto err;
+4 -4
drivers/mmc/card/mmc_test.c
··· 2347 2347 struct mmc_test_req *rq = mmc_test_req_alloc(); 2348 2348 struct mmc_host *host = test->card->host; 2349 2349 struct mmc_test_area *t = &test->area; 2350 - struct mmc_async_req areq; 2350 + struct mmc_test_async_req test_areq = { .test = test }; 2351 2351 struct mmc_request *mrq; 2352 2352 unsigned long timeout; 2353 2353 bool expired = false; ··· 2363 2363 mrq->sbc = &rq->sbc; 2364 2364 mrq->cap_cmd_during_tfr = true; 2365 2365 2366 - areq.mrq = mrq; 2367 - areq.err_check = mmc_test_check_result_async; 2366 + test_areq.areq.mrq = mrq; 2367 + test_areq.areq.err_check = mmc_test_check_result_async; 2368 2368 2369 2369 mmc_test_prepare_mrq(test, mrq, t->sg, t->sg_len, dev_addr, t->blocks, 2370 2370 512, write); ··· 2378 2378 2379 2379 /* Start ongoing data request */ 2380 2380 if (use_areq) { 2381 - mmc_start_req(host, &areq, &ret); 2381 + mmc_start_req(host, &test_areq.areq, &ret); 2382 2382 if (ret) 2383 2383 goto out_free; 2384 2384 } else {
+3
drivers/mmc/core/mmc.c
··· 26 26 #include "mmc_ops.h" 27 27 #include "sd_ops.h" 28 28 29 + #define DEFAULT_CMD6_TIMEOUT_MS 500 30 + 29 31 static const unsigned int tran_exp[] = { 30 32 10000, 100000, 1000000, 10000000, 31 33 0, 0, 0, 0 ··· 573 571 card->erased_byte = 0x0; 574 572 575 573 /* eMMC v4.5 or later */ 574 + card->ext_csd.generic_cmd6_time = DEFAULT_CMD6_TIMEOUT_MS; 576 575 if (card->ext_csd.rev >= 6) { 577 576 card->ext_csd.feature_support |= MMC_DISCARD_FEATURE; 578 577
+1 -1
drivers/mmc/host/dw_mmc.c
··· 2940 2940 return ERR_PTR(-ENOMEM); 2941 2941 2942 2942 /* find reset controller when exist */ 2943 - pdata->rstc = devm_reset_control_get_optional(dev, NULL); 2943 + pdata->rstc = devm_reset_control_get_optional(dev, "reset"); 2944 2944 if (IS_ERR(pdata->rstc)) { 2945 2945 if (PTR_ERR(pdata->rstc) == -EPROBE_DEFER) 2946 2946 return ERR_PTR(-EPROBE_DEFER);
+2 -2
drivers/mmc/host/mxs-mmc.c
··· 661 661 662 662 platform_set_drvdata(pdev, mmc); 663 663 664 + spin_lock_init(&host->lock); 665 + 664 666 ret = devm_request_irq(&pdev->dev, irq_err, mxs_mmc_irq_handler, 0, 665 667 dev_name(&pdev->dev), host); 666 668 if (ret) 667 669 goto out_free_dma; 668 - 669 - spin_lock_init(&host->lock); 670 670 671 671 ret = mmc_add_host(mmc); 672 672 if (ret)
+26 -10
drivers/mmc/host/sdhci.c
··· 2086 2086 2087 2087 if (!host->tuning_done) { 2088 2088 pr_info(DRIVER_NAME ": Timeout waiting for Buffer Read Ready interrupt during tuning procedure, falling back to fixed sampling clock\n"); 2089 + 2090 + sdhci_do_reset(host, SDHCI_RESET_CMD); 2091 + sdhci_do_reset(host, SDHCI_RESET_DATA); 2092 + 2089 2093 ctrl = sdhci_readw(host, SDHCI_HOST_CONTROL2); 2090 2094 ctrl &= ~SDHCI_CTRL_TUNED_CLK; 2091 2095 ctrl &= ~SDHCI_CTRL_EXEC_TUNING; ··· 2290 2286 2291 2287 for (i = 0; i < SDHCI_MAX_MRQS; i++) { 2292 2288 mrq = host->mrqs_done[i]; 2293 - if (mrq) { 2294 - host->mrqs_done[i] = NULL; 2289 + if (mrq) 2295 2290 break; 2296 - } 2297 2291 } 2298 2292 2299 2293 if (!mrq) { ··· 2322 2320 * upon error conditions. 2323 2321 */ 2324 2322 if (sdhci_needs_reset(host, mrq)) { 2323 + /* 2324 + * Do not finish until command and data lines are available for 2325 + * reset. Note there can only be one other mrq, so it cannot 2326 + * also be in mrqs_done, otherwise host->cmd and host->data_cmd 2327 + * would both be null. 2328 + */ 2329 + if (host->cmd || host->data_cmd) { 2330 + spin_unlock_irqrestore(&host->lock, flags); 2331 + return true; 2332 + } 2333 + 2325 2334 /* Some controllers need this kick or reset won't work here */ 2326 2335 if (host->quirks & SDHCI_QUIRK_CLOCK_BEFORE_RESET) 2327 2336 /* This is to force an update */ ··· 2340 2327 2341 2328 /* Spec says we should do both at the same time, but Ricoh 2342 2329 controllers do not like that. 
*/ 2343 - if (!host->cmd) 2344 - sdhci_do_reset(host, SDHCI_RESET_CMD); 2345 - if (!host->data_cmd) 2346 - sdhci_do_reset(host, SDHCI_RESET_DATA); 2330 + sdhci_do_reset(host, SDHCI_RESET_CMD); 2331 + sdhci_do_reset(host, SDHCI_RESET_DATA); 2347 2332 2348 2333 host->pending_reset = false; 2349 2334 } 2350 2335 2351 2336 if (!sdhci_has_requests(host)) 2352 2337 sdhci_led_deactivate(host); 2338 + 2339 + host->mrqs_done[i] = NULL; 2353 2340 2354 2341 mmiowb(); 2355 2342 spin_unlock_irqrestore(&host->lock, flags); ··· 2525 2512 if (!host->data) { 2526 2513 struct mmc_command *data_cmd = host->data_cmd; 2527 2514 2528 - if (data_cmd) 2529 - host->data_cmd = NULL; 2530 - 2531 2515 /* 2532 2516 * The "data complete" interrupt is also used to 2533 2517 * indicate that a busy state has ended. See comment ··· 2532 2522 */ 2533 2523 if (data_cmd && (data_cmd->flags & MMC_RSP_BUSY)) { 2534 2524 if (intmask & SDHCI_INT_DATA_TIMEOUT) { 2525 + host->data_cmd = NULL; 2535 2526 data_cmd->error = -ETIMEDOUT; 2536 2527 sdhci_finish_mrq(host, data_cmd->mrq); 2537 2528 return; 2538 2529 } 2539 2530 if (intmask & SDHCI_INT_DATA_END) { 2531 + host->data_cmd = NULL; 2540 2532 /* 2541 2533 * Some cards handle busy-end interrupt 2542 2534 * before the command completed, so make ··· 2923 2911 sdhci_enable_preset_value(host, true); 2924 2912 spin_unlock_irqrestore(&host->lock, flags); 2925 2913 } 2914 + 2915 + if ((mmc->caps2 & MMC_CAP2_HS400_ES) && 2916 + mmc->ops->hs400_enhanced_strobe) 2917 + mmc->ops->hs400_enhanced_strobe(mmc, &mmc->ios); 2926 2918 2927 2919 spin_lock_irqsave(&host->lock, flags); 2928 2920
+1 -1
drivers/nfc/mei_phy.c
··· 133 133 return -ENOMEM; 134 134 135 135 bytes_recv = mei_cldev_recv(phy->cldev, (u8 *)reply, if_version_length); 136 - if (bytes_recv < 0 || bytes_recv < sizeof(struct mei_nfc_reply)) { 136 + if (bytes_recv < 0 || bytes_recv < if_version_length) { 137 137 pr_err("Could not read IF version\n"); 138 138 r = -EIO; 139 139 goto err;
+1 -1
drivers/nvme/host/lightnvm.c
··· 612 612 613 613 ret = nvm_register(dev); 614 614 615 - ns->lba_shift = ilog2(dev->sec_size) - 9; 615 + ns->lba_shift = ilog2(dev->sec_size); 616 616 617 617 if (sysfs_create_group(&dev->dev.kobj, attrs)) 618 618 pr_warn("%s: failed to create sysfs group for identification\n",
-2
drivers/of/base.c
··· 2077 2077 name = of_get_property(of_aliases, "stdout", NULL); 2078 2078 if (name) 2079 2079 of_stdout = of_find_node_opts_by_path(name, &of_stdout_options); 2080 - if (of_stdout) 2081 - console_set_by_of(); 2082 2080 } 2083 2081 2084 2082 if (!of_aliases)
+62
drivers/pci/host/pcie-rockchip.c
··· 190 190 struct reset_control *mgmt_rst; 191 191 struct reset_control *mgmt_sticky_rst; 192 192 struct reset_control *pipe_rst; 193 + struct reset_control *pm_rst; 194 + struct reset_control *aclk_rst; 195 + struct reset_control *pclk_rst; 193 196 struct clk *aclk_pcie; 194 197 struct clk *aclk_perf_pcie; 195 198 struct clk *hclk_pcie; ··· 410 407 unsigned long timeout; 411 408 412 409 gpiod_set_value(rockchip->ep_gpio, 0); 410 + 411 + err = reset_control_assert(rockchip->aclk_rst); 412 + if (err) { 413 + dev_err(dev, "assert aclk_rst err %d\n", err); 414 + return err; 415 + } 416 + 417 + err = reset_control_assert(rockchip->pclk_rst); 418 + if (err) { 419 + dev_err(dev, "assert pclk_rst err %d\n", err); 420 + return err; 421 + } 422 + 423 + err = reset_control_assert(rockchip->pm_rst); 424 + if (err) { 425 + dev_err(dev, "assert pm_rst err %d\n", err); 426 + return err; 427 + } 428 + 429 + udelay(10); 430 + 431 + err = reset_control_deassert(rockchip->pm_rst); 432 + if (err) { 433 + dev_err(dev, "deassert pm_rst err %d\n", err); 434 + return err; 435 + } 436 + 437 + err = reset_control_deassert(rockchip->aclk_rst); 438 + if (err) { 439 + dev_err(dev, "deassert mgmt_sticky_rst err %d\n", err); 440 + return err; 441 + } 442 + 443 + err = reset_control_deassert(rockchip->pclk_rst); 444 + if (err) { 445 + dev_err(dev, "deassert mgmt_sticky_rst err %d\n", err); 446 + return err; 447 + } 413 448 414 449 err = phy_init(rockchip->phy); 415 450 if (err < 0) { ··· 820 779 if (PTR_ERR(rockchip->pipe_rst) != -EPROBE_DEFER) 821 780 dev_err(dev, "missing pipe reset property in node\n"); 822 781 return PTR_ERR(rockchip->pipe_rst); 782 + } 783 + 784 + rockchip->pm_rst = devm_reset_control_get(dev, "pm"); 785 + if (IS_ERR(rockchip->pm_rst)) { 786 + if (PTR_ERR(rockchip->pm_rst) != -EPROBE_DEFER) 787 + dev_err(dev, "missing pm reset property in node\n"); 788 + return PTR_ERR(rockchip->pm_rst); 789 + } 790 + 791 + rockchip->pclk_rst = devm_reset_control_get(dev, "pclk"); 792 + if 
(IS_ERR(rockchip->pclk_rst)) { 793 + if (PTR_ERR(rockchip->pclk_rst) != -EPROBE_DEFER) 794 + dev_err(dev, "missing pclk reset property in node\n"); 795 + return PTR_ERR(rockchip->pclk_rst); 796 + } 797 + 798 + rockchip->aclk_rst = devm_reset_control_get(dev, "aclk"); 799 + if (IS_ERR(rockchip->aclk_rst)) { 800 + if (PTR_ERR(rockchip->aclk_rst) != -EPROBE_DEFER) 801 + dev_err(dev, "missing aclk reset property in node\n"); 802 + return PTR_ERR(rockchip->aclk_rst); 823 803 } 824 804 825 805 rockchip->ep_gpio = devm_gpiod_get(dev, "ep", GPIOD_OUT_HIGH);
+8
drivers/pci/setup-res.c
··· 121 121 return -EINVAL; 122 122 } 123 123 124 + /* 125 + * If we have a shadow copy in RAM, the PCI device doesn't respond 126 + * to the shadow range, so we don't need to claim it, and upstream 127 + * bridges don't need to route the range to the device. 128 + */ 129 + if (res->flags & IORESOURCE_ROM_SHADOW) 130 + return 0; 131 + 124 132 root = pci_find_parent_resource(dev, res); 125 133 if (!root) { 126 134 dev_info(&dev->dev, "can't claim BAR %d %pR: no compatible bridge window\n",
+1 -1
drivers/pcmcia/soc_common.c
··· 107 107 108 108 ret = regulator_enable(r->reg); 109 109 } else { 110 - regulator_disable(r->reg); 110 + ret = regulator_disable(r->reg); 111 111 } 112 112 if (ret == 0) 113 113 r->on = on;
+3 -2
drivers/phy/phy-da8xx-usb.c
··· 198 198 } else { 199 199 int ret; 200 200 201 - ret = phy_create_lookup(d_phy->usb11_phy, "usb-phy", "ohci.0"); 201 + ret = phy_create_lookup(d_phy->usb11_phy, "usb-phy", 202 + "ohci-da8xx"); 202 203 if (ret) 203 204 dev_warn(dev, "Failed to create usb11 phy lookup\n"); 204 205 ret = phy_create_lookup(d_phy->usb20_phy, "usb-phy", ··· 217 216 218 217 if (!pdev->dev.of_node) { 219 218 phy_remove_lookup(d_phy->usb20_phy, "usb-phy", "musb-da8xx"); 220 - phy_remove_lookup(d_phy->usb11_phy, "usb-phy", "ohci.0"); 219 + phy_remove_lookup(d_phy->usb11_phy, "usb-phy", "ohci-da8xx"); 221 220 } 222 221 223 222 return 0;
+1 -12
drivers/phy/phy-rockchip-pcie.c
··· 249 249 static int rockchip_pcie_phy_exit(struct phy *phy) 250 250 { 251 251 struct rockchip_pcie_phy *rk_phy = phy_get_drvdata(phy); 252 - int err = 0; 253 252 254 253 clk_disable_unprepare(rk_phy->clk_pciephy_ref); 255 254 256 - err = reset_control_deassert(rk_phy->phy_rst); 257 - if (err) { 258 - dev_err(&phy->dev, "deassert phy_rst err %d\n", err); 259 - goto err_reset; 260 - } 261 - 262 - return err; 263 - 264 - err_reset: 265 - clk_prepare_enable(rk_phy->clk_pciephy_ref); 266 - return err; 255 + return 0; 267 256 } 268 257 269 258 static const struct phy_ops ops = {
+1 -1
drivers/phy/phy-sun4i-usb.c
··· 264 264 return ret; 265 265 } 266 266 267 - if (data->cfg->enable_pmu_unk1) { 267 + if (phy->pmu && data->cfg->enable_pmu_unk1) { 268 268 val = readl(phy->pmu + REG_PMU_UNK1); 269 269 writel(val & ~2, phy->pmu + REG_PMU_UNK1); 270 270 }
+1 -1
drivers/pinctrl/aspeed/pinctrl-aspeed-g5.c
··· 26 26 27 27 #define ASPEED_G5_NR_PINS 228 28 28 29 - #define COND1 SIG_DESC_BIT(SCU90, 6, 0) 29 + #define COND1 { SCU90, BIT(6), 0, 0 } 30 30 #define COND2 { SCU94, GENMASK(1, 0), 0, 0 } 31 31 32 32 #define B14 0
+1 -1
drivers/pinctrl/bcm/pinctrl-iproc-gpio.c
··· 844 844 845 845 static int __init iproc_gpio_init(void) 846 846 { 847 - return platform_driver_probe(&iproc_gpio_driver, iproc_gpio_probe); 847 + return platform_driver_register(&iproc_gpio_driver); 848 848 } 849 849 arch_initcall_sync(iproc_gpio_init);
+1 -1
drivers/pinctrl/bcm/pinctrl-nsp-gpio.c
··· 741 741 742 742 static int __init nsp_gpio_init(void) 743 743 { 744 - return platform_driver_probe(&nsp_gpio_driver, nsp_gpio_probe); 744 + return platform_driver_register(&nsp_gpio_driver); 745 745 } 746 746 arch_initcall_sync(nsp_gpio_init);
+1
drivers/pinctrl/freescale/pinctrl-imx.c
··· 687 687 if (!info->functions) 688 688 return -ENOMEM; 689 689 690 + info->group_index = 0; 690 691 if (flat_funcs) { 691 692 info->ngroups = of_get_child_count(np); 692 693 } else {
+14 -3
drivers/pinctrl/intel/pinctrl-cherryview.c
··· 1652 1652 } 1653 1653 1654 1654 #ifdef CONFIG_PM_SLEEP 1655 - static int chv_pinctrl_suspend(struct device *dev) 1655 + static int chv_pinctrl_suspend_noirq(struct device *dev) 1656 1656 { 1657 1657 struct platform_device *pdev = to_platform_device(dev); 1658 1658 struct chv_pinctrl *pctrl = platform_get_drvdata(pdev); 1659 + unsigned long flags; 1659 1660 int i; 1661 + 1662 + raw_spin_lock_irqsave(&chv_lock, flags); 1660 1663 1661 1664 pctrl->saved_intmask = readl(pctrl->regs + CHV_INTMASK); 1662 1665 ··· 1681 1678 ctx->padctrl1 = readl(reg); 1682 1679 } 1683 1680 1681 + raw_spin_unlock_irqrestore(&chv_lock, flags); 1682 + 1684 1683 return 0; 1685 1684 } 1686 1685 1687 - static int chv_pinctrl_resume(struct device *dev) 1686 + static int chv_pinctrl_resume_noirq(struct device *dev) 1688 1687 { 1689 1688 struct platform_device *pdev = to_platform_device(dev); 1690 1689 struct chv_pinctrl *pctrl = platform_get_drvdata(pdev); 1690 + unsigned long flags; 1691 1691 int i; 1692 + 1693 + raw_spin_lock_irqsave(&chv_lock, flags); 1692 1694 1693 1695 /* 1694 1696 * Mask all interrupts before restoring per-pin configuration ··· 1739 1731 chv_writel(0xffff, pctrl->regs + CHV_INTSTAT); 1740 1732 chv_writel(pctrl->saved_intmask, pctrl->regs + CHV_INTMASK); 1741 1733 1734 + raw_spin_unlock_irqrestore(&chv_lock, flags); 1735 + 1742 1736 return 0; 1743 1737 } 1744 1738 #endif 1745 1739 1746 1740 static const struct dev_pm_ops chv_pinctrl_pm_ops = { 1747 - SET_LATE_SYSTEM_SLEEP_PM_OPS(chv_pinctrl_suspend, chv_pinctrl_resume) 1741 + SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(chv_pinctrl_suspend_noirq, 1742 + chv_pinctrl_resume_noirq) 1748 1743 }; 1749 1744 1750 1745 static const struct acpi_device_id chv_pinctrl_acpi_match[] = {
+1 -1
drivers/pinctrl/pinctrl-st.c
··· 1512 1512 if (info->irqmux_base || gpio_irq > 0) { 1513 1513 err = gpiochip_irqchip_add(&bank->gpio_chip, &st_gpio_irqchip, 1514 1514 0, handle_simple_irq, 1515 - IRQ_TYPE_LEVEL_LOW); 1515 + IRQ_TYPE_NONE); 1516 1516 if (err) { 1517 1517 gpiochip_remove(&bank->gpio_chip); 1518 1518 dev_info(dev, "could not add irqchip\n");
+5 -3
drivers/pinctrl/stm32/pinctrl-stm32.c
··· 1092 1092 return -EINVAL; 1093 1093 } 1094 1094 1095 - ret = stm32_pctrl_dt_setup_irq(pdev, pctl); 1096 - if (ret) 1097 - return ret; 1095 + if (of_find_property(np, "interrupt-parent", NULL)) { 1096 + ret = stm32_pctrl_dt_setup_irq(pdev, pctl); 1097 + if (ret) 1098 + return ret; 1099 + } 1098 1100 1099 1101 for_each_child_of_node(np, child) 1100 1102 if (of_property_read_bool(child, "gpio-controller"))
+7
drivers/platform/x86/ideapad-laptop.c
··· 934 934 }, 935 935 }, 936 936 { 937 + .ident = "Lenovo Yoga 900", 938 + .matches = { 939 + DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), 940 + DMI_MATCH(DMI_BOARD_NAME, "VIUU4"), 941 + }, 942 + }, 943 + { 937 944 .ident = "Lenovo YOGA 910-13IKB", 938 945 .matches = { 939 946 DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+1 -1
drivers/platform/x86/intel-hid.c
··· 264 264 return AE_OK; 265 265 266 266 if (acpi_match_device_ids(dev, ids) == 0) 267 - if (acpi_create_platform_device(dev)) 267 + if (acpi_create_platform_device(dev, NULL)) 268 268 dev_info(&dev->dev, 269 269 "intel-hid: created platform device\n"); 270 270
+1 -1
drivers/platform/x86/intel-vbtn.c
··· 164 164 return AE_OK; 165 165 166 166 if (acpi_match_device_ids(dev, ids) == 0) 167 - if (acpi_create_platform_device(dev)) 167 + if (acpi_create_platform_device(dev, NULL)) 168 168 dev_info(&dev->dev, 169 169 "intel-vbtn: created platform device\n"); 170 170
+19 -7
drivers/platform/x86/toshiba-wmi.c
··· 24 24 #include <linux/acpi.h> 25 25 #include <linux/input.h> 26 26 #include <linux/input/sparse-keymap.h> 27 + #include <linux/dmi.h> 27 28 28 29 MODULE_AUTHOR("Azael Avalos"); 29 30 MODULE_DESCRIPTION("Toshiba WMI Hotkey Driver"); 30 31 MODULE_LICENSE("GPL"); 31 32 32 - #define TOSHIBA_WMI_EVENT_GUID "59142400-C6A3-40FA-BADB-8A2652834100" 33 + #define WMI_EVENT_GUID "59142400-C6A3-40FA-BADB-8A2652834100" 33 34 34 - MODULE_ALIAS("wmi:"TOSHIBA_WMI_EVENT_GUID); 35 + MODULE_ALIAS("wmi:"WMI_EVENT_GUID); 35 36 36 37 static struct input_dev *toshiba_wmi_input_dev; 37 38 ··· 64 63 kfree(response.pointer); 65 64 } 66 65 66 + static struct dmi_system_id toshiba_wmi_dmi_table[] __initdata = { 67 + { 68 + .ident = "Toshiba laptop", 69 + .matches = { 70 + DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"), 71 + }, 72 + }, 73 + {} 74 + }; 75 + 67 76 static int __init toshiba_wmi_input_setup(void) 68 77 { 69 78 acpi_status status; ··· 92 81 if (err) 93 82 goto err_free_dev; 94 83 95 - status = wmi_install_notify_handler(TOSHIBA_WMI_EVENT_GUID, 84 + status = wmi_install_notify_handler(WMI_EVENT_GUID, 96 85 toshiba_wmi_notify, NULL); 97 86 if (ACPI_FAILURE(status)) { 98 87 err = -EIO; ··· 106 95 return 0; 107 96 108 97 err_remove_notifier: 109 - wmi_remove_notify_handler(TOSHIBA_WMI_EVENT_GUID); 98 + wmi_remove_notify_handler(WMI_EVENT_GUID); 110 99 err_free_keymap: 111 100 sparse_keymap_free(toshiba_wmi_input_dev); 112 101 err_free_dev: ··· 116 105 117 106 static void toshiba_wmi_input_destroy(void) 118 107 { 119 - wmi_remove_notify_handler(TOSHIBA_WMI_EVENT_GUID); 108 + wmi_remove_notify_handler(WMI_EVENT_GUID); 120 109 sparse_keymap_free(toshiba_wmi_input_dev); 121 110 input_unregister_device(toshiba_wmi_input_dev); 122 111 } ··· 125 114 { 126 115 int ret; 127 116 128 - if (!wmi_has_guid(TOSHIBA_WMI_EVENT_GUID)) 117 + if (!wmi_has_guid(WMI_EVENT_GUID) || 118 + !dmi_check_system(toshiba_wmi_dmi_table)) 129 119 return -ENODEV; 130 120 131 121 ret = toshiba_wmi_input_setup(); ··· 142 130 
143 131 static void __exit toshiba_wmi_exit(void) 144 132 { 145 - if (wmi_has_guid(TOSHIBA_WMI_EVENT_GUID)) 133 + if (wmi_has_guid(WMI_EVENT_GUID)) 146 134 toshiba_wmi_input_destroy(); 147 135 } 148 136
+2 -1
drivers/scsi/cxgbi/libcxgbi.c
··· 2081 2081 /* never reached the xmit task callout */ 2082 2082 if (tdata->skb) 2083 2083 __kfree_skb(tdata->skb); 2084 - memset(tdata, 0, sizeof(*tdata)); 2085 2084 2086 2085 task_release_itt(task, task->hdr_itt); 2086 + memset(tdata, 0, sizeof(*tdata)); 2087 + 2087 2088 iscsi_tcp_cleanup_task(task); 2088 2089 } 2089 2090 EXPORT_SYMBOL_GPL(cxgbi_cleanup_task);
+4 -1
drivers/scsi/device_handler/scsi_dh_alua.c
··· 793 793 WARN_ON(pg->flags & ALUA_PG_RUN_RTPG); 794 794 WARN_ON(pg->flags & ALUA_PG_RUN_STPG); 795 795 spin_unlock_irqrestore(&pg->lock, flags); 796 + kref_put(&pg->kref, release_port_group); 796 797 return; 797 798 } 798 799 if (pg->flags & ALUA_SYNC_STPG) ··· 891 890 /* Do not queue if the worker is already running */ 892 891 if (!(pg->flags & ALUA_PG_RUNNING)) { 893 892 kref_get(&pg->kref); 893 + sdev = NULL; 894 894 start_queue = 1; 895 895 } 896 896 } ··· 903 901 if (start_queue && 904 902 !queue_delayed_work(alua_wq, &pg->rtpg_work, 905 903 msecs_to_jiffies(ALUA_RTPG_DELAY_MSECS))) { 906 - scsi_device_put(sdev); 904 + if (sdev) 905 + scsi_device_put(sdev); 907 906 kref_put(&pg->kref, release_port_group); 908 907 } 909 908 }
+1 -1
drivers/scsi/megaraid/megaraid_sas.h
··· 2233 2233 }; 2234 2234 2235 2235 #define MEGASAS_IS_LOGICAL(scp) \ 2236 - (scp->device->channel < MEGASAS_MAX_PD_CHANNELS) ? 0 : 1 2236 + ((scp->device->channel < MEGASAS_MAX_PD_CHANNELS) ? 0 : 1) 2237 2237 2238 2238 #define MEGASAS_DEV_INDEX(scp) \ 2239 2239 (((scp->device->channel % 2) * MEGASAS_MAX_DEV_PER_CHANNEL) + \
+2 -2
drivers/scsi/mpt3sas/mpt3sas_scsih.c
··· 1273 1273 sas_target_priv_data->handle = raid_device->handle; 1274 1274 sas_target_priv_data->sas_address = raid_device->wwid; 1275 1275 sas_target_priv_data->flags |= MPT_TARGET_FLAGS_VOLUME; 1276 - sas_target_priv_data->raid_device = raid_device; 1277 1276 if (ioc->is_warpdrive) 1278 - raid_device->starget = starget; 1277 + sas_target_priv_data->raid_device = raid_device; 1278 + raid_device->starget = starget; 1279 1279 } 1280 1280 spin_unlock_irqrestore(&ioc->raid_device_lock, flags); 1281 1281 return 0;
+16
drivers/scsi/qla2xxx/qla_os.c
··· 707 707 srb_t *sp; 708 708 int rval; 709 709 710 + if (unlikely(test_bit(UNLOADING, &base_vha->dpc_flags))) { 711 + cmd->result = DID_NO_CONNECT << 16; 712 + goto qc24_fail_command; 713 + } 714 + 710 715 if (ha->flags.eeh_busy) { 711 716 if (ha->flags.pci_channel_io_perm_failure) { 712 717 ql_dbg(ql_dbg_aer, vha, 0x9010, ··· 1456 1451 for (cnt = 1; cnt < req->num_outstanding_cmds; cnt++) { 1457 1452 sp = req->outstanding_cmds[cnt]; 1458 1453 if (sp) { 1454 + /* Get a reference to the sp and drop the lock. 1455 + * The reference ensures this sp->done() call 1456 + * - and not the call in qla2xxx_eh_abort() - 1457 + * ends the SCSI command (with result 'res'). 1458 + */ 1459 + sp_get(sp); 1460 + spin_unlock_irqrestore(&ha->hardware_lock, flags); 1461 + qla2xxx_eh_abort(GET_CMD_SP(sp)); 1462 + spin_lock_irqsave(&ha->hardware_lock, flags); 1459 1463 req->outstanding_cmds[cnt] = NULL; 1460 1464 sp->done(vha, sp, res); 1461 1465 } ··· 2355 2341 { 2356 2342 scsi_qla_host_t *vha = shost_priv(shost); 2357 2343 2344 + if (test_bit(UNLOADING, &vha->dpc_flags)) 2345 + return 1; 2358 2346 if (!vha->host) 2359 2347 return 1; 2360 2348 if (time > vha->hw->loop_reset_delay * HZ)
+3 -2
drivers/scsi/vmw_pvscsi.c
··· 793 793 unsigned long flags; 794 794 int result = SUCCESS; 795 795 DECLARE_COMPLETION_ONSTACK(abort_cmp); 796 + int done; 796 797 797 798 scmd_printk(KERN_DEBUG, cmd, "task abort on host %u, %p\n", 798 799 adapter->host->host_no, cmd); ··· 825 824 pvscsi_abort_cmd(adapter, ctx); 826 825 spin_unlock_irqrestore(&adapter->hw_lock, flags); 827 826 /* Wait for 2 secs for the completion. */ 828 - wait_for_completion_timeout(&abort_cmp, msecs_to_jiffies(2000)); 827 + done = wait_for_completion_timeout(&abort_cmp, msecs_to_jiffies(2000)); 829 828 spin_lock_irqsave(&adapter->hw_lock, flags); 830 829 831 - if (!completion_done(&abort_cmp)) { 830 + if (!done) { 832 831 /* 833 832 * Failed to abort the command, unmark the fact that it 834 833 * was requested to be aborted.
+1 -1
drivers/scsi/vmw_pvscsi.h
··· 26 26 27 27 #include <linux/types.h> 28 28 29 - #define PVSCSI_DRIVER_VERSION_STRING "1.0.6.0-k" 29 + #define PVSCSI_DRIVER_VERSION_STRING "1.0.7.0-k" 30 30 31 31 #define PVSCSI_MAX_NUM_SG_ENTRIES_PER_SEGMENT 128 32 32
+2 -1
drivers/staging/comedi/drivers/ni_tio.c
··· 207 207 * clock period is specified by user with prescaling 208 208 * already taken into account. 209 209 */ 210 - return counter->clock_period_ps; 210 + *period_ps = counter->clock_period_ps; 211 + return 0; 211 212 } 212 213 213 214 switch (generic_clock_source & NI_GPCT_PRESCALE_MODE_CLOCK_SRC_MASK) {
+1
drivers/staging/greybus/arche-platform.c
··· 186 186 exit: 187 187 spin_unlock_irqrestore(&arche_pdata->wake_lock, flags); 188 188 mutex_unlock(&arche_pdata->platform_state_mutex); 189 + put_device(&pdev->dev); 189 190 of_node_put(np); 190 191 return ret; 191 192 }
+10 -7
drivers/staging/iio/impedance-analyzer/ad5933.c
··· 655 655 __be16 buf[2]; 656 656 int val[2]; 657 657 unsigned char status; 658 + int ret; 658 659 659 660 mutex_lock(&indio_dev->mlock); 660 661 if (st->state == AD5933_CTRL_INIT_START_FREQ) { ··· 663 662 ad5933_cmd(st, AD5933_CTRL_START_SWEEP); 664 663 st->state = AD5933_CTRL_START_SWEEP; 665 664 schedule_delayed_work(&st->work, st->poll_time_jiffies); 666 - mutex_unlock(&indio_dev->mlock); 667 - return; 665 + goto out; 668 666 } 669 667 670 - ad5933_i2c_read(st->client, AD5933_REG_STATUS, 1, &status); 668 + ret = ad5933_i2c_read(st->client, AD5933_REG_STATUS, 1, &status); 669 + if (ret) 670 + goto out; 671 671 672 672 if (status & AD5933_STAT_DATA_VALID) { 673 673 int scan_count = bitmap_weight(indio_dev->active_scan_mask, 674 674 indio_dev->masklength); 675 - ad5933_i2c_read(st->client, 675 + ret = ad5933_i2c_read(st->client, 676 676 test_bit(1, indio_dev->active_scan_mask) ? 677 677 AD5933_REG_REAL_DATA : AD5933_REG_IMAG_DATA, 678 678 scan_count * 2, (u8 *)buf); 679 + if (ret) 680 + goto out; 679 681 680 682 if (scan_count == 2) { 681 683 val[0] = be16_to_cpu(buf[0]); ··· 690 686 } else { 691 687 /* no data available - try again later */ 692 688 schedule_delayed_work(&st->work, st->poll_time_jiffies); 693 - mutex_unlock(&indio_dev->mlock); 694 - return; 689 + goto out; 695 690 } 696 691 697 692 if (status & AD5933_STAT_SWEEP_DONE) { ··· 703 700 ad5933_cmd(st, AD5933_CTRL_INC_FREQ); 704 701 schedule_delayed_work(&st->work, st->poll_time_jiffies); 705 702 } 706 - 703 + out: 707 704 mutex_unlock(&indio_dev->mlock); 708 705 } 709 706
+2 -6
drivers/staging/nvec/nvec_ps2.c
··· 106 106 { 107 107 struct nvec_chip *nvec = dev_get_drvdata(pdev->dev.parent); 108 108 struct serio *ser_dev; 109 - char mouse_reset[] = { NVEC_PS2, SEND_COMMAND, PSMOUSE_RST, 3 }; 110 109 111 - ser_dev = devm_kzalloc(&pdev->dev, sizeof(struct serio), GFP_KERNEL); 110 + ser_dev = kzalloc(sizeof(struct serio), GFP_KERNEL); 112 111 if (!ser_dev) 113 112 return -ENOMEM; 114 113 115 - ser_dev->id.type = SERIO_PS_PSTHRU; 114 + ser_dev->id.type = SERIO_8042; 116 115 ser_dev->write = ps2_sendcommand; 117 116 ser_dev->start = ps2_startstreaming; 118 117 ser_dev->stop = ps2_stopstreaming; ··· 125 126 nvec_register_notifier(nvec, &ps2_dev.notifier, 0); 126 127 127 128 serio_register_port(ser_dev); 128 - 129 - /* mouse reset */ 130 - nvec_write_async(nvec, mouse_reset, sizeof(mouse_reset)); 131 129 132 130 return 0; 133 131 }
+4 -4
drivers/staging/sm750fb/ddk750_reg.h
··· 601 601 602 602 #define PANEL_PLANE_TL 0x08001C 603 603 #define PANEL_PLANE_TL_TOP_SHIFT 16 604 - #define PANEL_PLANE_TL_TOP_MASK (0xeff << 16) 605 - #define PANEL_PLANE_TL_LEFT_MASK 0xeff 604 + #define PANEL_PLANE_TL_TOP_MASK (0x7ff << 16) 605 + #define PANEL_PLANE_TL_LEFT_MASK 0x7ff 606 606 607 607 #define PANEL_PLANE_BR 0x080020 608 608 #define PANEL_PLANE_BR_BOTTOM_SHIFT 16 609 - #define PANEL_PLANE_BR_BOTTOM_MASK (0xeff << 16) 610 - #define PANEL_PLANE_BR_RIGHT_MASK 0xeff 609 + #define PANEL_PLANE_BR_BOTTOM_MASK (0x7ff << 16) 610 + #define PANEL_PLANE_BR_RIGHT_MASK 0x7ff 611 611 612 612 #define PANEL_HORIZONTAL_TOTAL 0x080024 613 613 #define PANEL_HORIZONTAL_TOTAL_TOTAL_SHIFT 16
+2 -2
drivers/usb/class/cdc-acm.c
··· 932 932 DECLARE_WAITQUEUE(wait, current); 933 933 struct async_icount old, new; 934 934 935 - if (arg & (TIOCM_DSR | TIOCM_RI | TIOCM_CD)) 936 - return -EINVAL; 937 935 do { 938 936 spin_lock_irq(&acm->read_lock); 939 937 old = acm->oldcount; ··· 1158 1160 1159 1161 if (quirks == IGNORE_DEVICE) 1160 1162 return -ENODEV; 1163 + 1164 + memset(&h, 0x00, sizeof(struct usb_cdc_parsed_header)); 1161 1165 1162 1166 num_rx_buf = (quirks == SINGLE_RX_URB) ? 1 : ACM_NR; 1163 1167
+2 -3
drivers/usb/dwc3/core.c
··· 769 769 return 0; 770 770 771 771 err4: 772 - phy_power_off(dwc->usb2_generic_phy); 772 + phy_power_off(dwc->usb3_generic_phy); 773 773 774 774 err3: 775 - phy_power_off(dwc->usb3_generic_phy); 775 + phy_power_off(dwc->usb2_generic_phy); 776 776 777 777 err2: 778 778 usb_phy_set_suspend(dwc->usb2_phy, 1); 779 779 usb_phy_set_suspend(dwc->usb3_phy, 1); 780 - dwc3_core_exit(dwc); 781 780 782 781 err1: 783 782 usb_phy_shutdown(dwc->usb2_phy);
+1
drivers/usb/dwc3/dwc3-st.c
··· 31 31 #include <linux/slab.h> 32 32 #include <linux/regmap.h> 33 33 #include <linux/reset.h> 34 + #include <linux/pinctrl/consumer.h> 34 35 #include <linux/usb/of.h> 35 36 36 37 #include "core.h"
-8
drivers/usb/gadget/function/u_ether.c
··· 588 588 589 589 req->length = length; 590 590 591 - /* throttle high/super speed IRQ rate back slightly */ 592 - if (gadget_is_dualspeed(dev->gadget)) 593 - req->no_interrupt = (((dev->gadget->speed == USB_SPEED_HIGH || 594 - dev->gadget->speed == USB_SPEED_SUPER)) && 595 - !list_empty(&dev->tx_reqs)) 596 - ? ((atomic_read(&dev->tx_qlen) % dev->qmult) != 0) 597 - : 0; 598 - 599 591 retval = usb_ep_queue(in, req, GFP_ATOMIC); 600 592 switch (retval) { 601 593 default:
+8
drivers/usb/host/pci-quirks.c
··· 995 995 } 996 996 val = readl(base + ext_cap_offset); 997 997 998 + /* Auto handoff never worked for these devices. Force it and continue */ 999 + if ((pdev->vendor == PCI_VENDOR_ID_TI && pdev->device == 0x8241) || 1000 + (pdev->vendor == PCI_VENDOR_ID_RENESAS 1001 + && pdev->device == 0x0014)) { 1002 + val = (val | XHCI_HC_OS_OWNED) & ~XHCI_HC_BIOS_OWNED; 1003 + writel(val, base + ext_cap_offset); 1004 + } 1005 + 998 1006 /* If the BIOS owns the HC, signal that the OS wants it, and wait */ 999 1007 if (val & XHCI_HC_BIOS_OWNED) { 1000 1008 writel(val | XHCI_HC_OS_OWNED, base + ext_cap_offset);
+2 -1
drivers/usb/musb/da8xx.c
··· 479 479 480 480 glue->phy = devm_phy_get(&pdev->dev, "usb-phy"); 481 481 if (IS_ERR(glue->phy)) { 482 - dev_err(&pdev->dev, "failed to get phy\n"); 482 + if (PTR_ERR(glue->phy) != -EPROBE_DEFER) 483 + dev_err(&pdev->dev, "failed to get phy\n"); 483 484 return PTR_ERR(glue->phy); 484 485 } 485 486
-5
drivers/usb/musb/musb_core.c
··· 2114 2114 musb->io.ep_offset = musb_flat_ep_offset; 2115 2115 musb->io.ep_select = musb_flat_ep_select; 2116 2116 } 2117 - /* And override them with platform specific ops if specified. */ 2118 - if (musb->ops->ep_offset) 2119 - musb->io.ep_offset = musb->ops->ep_offset; 2120 - if (musb->ops->ep_select) 2121 - musb->io.ep_select = musb->ops->ep_select; 2122 2117 2123 2118 /* At least tusb6010 has its own offsets */ 2124 2119 if (musb->ops->ep_offset)
+13 -3
drivers/uwb/lc-rc.c
··· 56 56 struct uwb_rc *rc = NULL; 57 57 58 58 dev = class_find_device(&uwb_rc_class, NULL, &index, uwb_rc_index_match); 59 - if (dev) 59 + if (dev) { 60 60 rc = dev_get_drvdata(dev); 61 + put_device(dev); 62 + } 63 + 61 64 return rc; 62 65 } 63 66 ··· 470 467 if (dev) { 471 468 rc = dev_get_drvdata(dev); 472 469 __uwb_rc_get(rc); 470 + put_device(dev); 473 471 } 472 + 474 473 return rc; 475 474 } 476 475 EXPORT_SYMBOL_GPL(__uwb_rc_try_get); ··· 525 520 526 521 dev = class_find_device(&uwb_rc_class, NULL, grandpa_dev, 527 522 find_rc_grandpa); 528 - if (dev) 523 + if (dev) { 529 524 rc = dev_get_drvdata(dev); 525 + put_device(dev); 526 + } 527 + 530 528 return rc; 531 529 } 532 530 EXPORT_SYMBOL_GPL(uwb_rc_get_by_grandpa); ··· 561 553 struct uwb_rc *rc = NULL; 562 554 563 555 dev = class_find_device(&uwb_rc_class, NULL, addr, find_rc_dev); 564 - if (dev) 556 + if (dev) { 565 557 rc = dev_get_drvdata(dev); 558 + put_device(dev); 559 + } 566 560 567 561 return rc; 568 562 }
+2
drivers/uwb/pal.c
··· 97 97 98 98 dev = class_find_device(&uwb_rc_class, NULL, target_rc, find_rc); 99 99 100 + put_device(dev); 101 + 100 102 return (dev != NULL); 101 103 } 102 104
+114 -103
fs/aio.c
··· 1078 1078 unsigned tail, pos, head; 1079 1079 unsigned long flags; 1080 1080 1081 + if (kiocb->ki_flags & IOCB_WRITE) { 1082 + struct file *file = kiocb->ki_filp; 1083 + 1084 + /* 1085 + * Tell lockdep we inherited freeze protection from submission 1086 + * thread. 1087 + */ 1088 + __sb_writers_acquired(file_inode(file)->i_sb, SB_FREEZE_WRITE); 1089 + file_end_write(file); 1090 + } 1091 + 1081 1092 /* 1082 1093 * Special case handling for sync iocbs: 1083 1094 * - events go directly into the iocb for fast handling ··· 1403 1392 return -EINVAL; 1404 1393 } 1405 1394 1406 - typedef ssize_t (rw_iter_op)(struct kiocb *, struct iov_iter *); 1407 - 1408 - static int aio_setup_vectored_rw(int rw, char __user *buf, size_t len, 1409 - struct iovec **iovec, 1410 - bool compat, 1411 - struct iov_iter *iter) 1395 + static int aio_setup_rw(int rw, struct iocb *iocb, struct iovec **iovec, 1396 + bool vectored, bool compat, struct iov_iter *iter) 1412 1397 { 1398 + void __user *buf = (void __user *)(uintptr_t)iocb->aio_buf; 1399 + size_t len = iocb->aio_nbytes; 1400 + 1401 + if (!vectored) { 1402 + ssize_t ret = import_single_range(rw, buf, len, *iovec, iter); 1403 + *iovec = NULL; 1404 + return ret; 1405 + } 1413 1406 #ifdef CONFIG_COMPAT 1414 1407 if (compat) 1415 - return compat_import_iovec(rw, 1416 - (struct compat_iovec __user *)buf, 1417 - len, UIO_FASTIOV, iovec, iter); 1408 + return compat_import_iovec(rw, buf, len, UIO_FASTIOV, iovec, 1409 + iter); 1418 1410 #endif 1419 - return import_iovec(rw, (struct iovec __user *)buf, 1420 - len, UIO_FASTIOV, iovec, iter); 1411 + return import_iovec(rw, buf, len, UIO_FASTIOV, iovec, iter); 1421 1412 } 1422 1413 1423 - /* 1424 - * aio_run_iocb: 1425 - * Performs the initial checks and io submission. 
1426 - */ 1427 - static ssize_t aio_run_iocb(struct kiocb *req, unsigned opcode, 1428 - char __user *buf, size_t len, bool compat) 1414 + static inline ssize_t aio_ret(struct kiocb *req, ssize_t ret) 1429 1415 { 1430 - struct file *file = req->ki_filp; 1431 - ssize_t ret; 1432 - int rw; 1433 - fmode_t mode; 1434 - rw_iter_op *iter_op; 1435 - struct iovec inline_vecs[UIO_FASTIOV], *iovec = inline_vecs; 1436 - struct iov_iter iter; 1437 - 1438 - switch (opcode) { 1439 - case IOCB_CMD_PREAD: 1440 - case IOCB_CMD_PREADV: 1441 - mode = FMODE_READ; 1442 - rw = READ; 1443 - iter_op = file->f_op->read_iter; 1444 - goto rw_common; 1445 - 1446 - case IOCB_CMD_PWRITE: 1447 - case IOCB_CMD_PWRITEV: 1448 - mode = FMODE_WRITE; 1449 - rw = WRITE; 1450 - iter_op = file->f_op->write_iter; 1451 - goto rw_common; 1452 - rw_common: 1453 - if (unlikely(!(file->f_mode & mode))) 1454 - return -EBADF; 1455 - 1456 - if (!iter_op) 1457 - return -EINVAL; 1458 - 1459 - if (opcode == IOCB_CMD_PREADV || opcode == IOCB_CMD_PWRITEV) 1460 - ret = aio_setup_vectored_rw(rw, buf, len, 1461 - &iovec, compat, &iter); 1462 - else { 1463 - ret = import_single_range(rw, buf, len, iovec, &iter); 1464 - iovec = NULL; 1465 - } 1466 - if (!ret) 1467 - ret = rw_verify_area(rw, file, &req->ki_pos, 1468 - iov_iter_count(&iter)); 1469 - if (ret < 0) { 1470 - kfree(iovec); 1471 - return ret; 1472 - } 1473 - 1474 - if (rw == WRITE) 1475 - file_start_write(file); 1476 - 1477 - ret = iter_op(req, &iter); 1478 - 1479 - if (rw == WRITE) 1480 - file_end_write(file); 1481 - kfree(iovec); 1482 - break; 1483 - 1484 - case IOCB_CMD_FDSYNC: 1485 - if (!file->f_op->aio_fsync) 1486 - return -EINVAL; 1487 - 1488 - ret = file->f_op->aio_fsync(req, 1); 1489 - break; 1490 - 1491 - case IOCB_CMD_FSYNC: 1492 - if (!file->f_op->aio_fsync) 1493 - return -EINVAL; 1494 - 1495 - ret = file->f_op->aio_fsync(req, 0); 1496 - break; 1497 - 1498 - default: 1499 - pr_debug("EINVAL: no operation provided\n"); 1500 - return -EINVAL; 1501 - } 
1502 - 1503 - if (ret != -EIOCBQUEUED) { 1416 + switch (ret) { 1417 + case -EIOCBQUEUED: 1418 + return ret; 1419 + case -ERESTARTSYS: 1420 + case -ERESTARTNOINTR: 1421 + case -ERESTARTNOHAND: 1422 + case -ERESTART_RESTARTBLOCK: 1504 1423 /* 1505 1424 * There's no easy way to restart the syscall since other AIO's 1506 1425 * may be already running. Just fail this IO with EINTR. 1507 1426 */ 1508 - if (unlikely(ret == -ERESTARTSYS || ret == -ERESTARTNOINTR || 1509 - ret == -ERESTARTNOHAND || 1510 - ret == -ERESTART_RESTARTBLOCK)) 1511 - ret = -EINTR; 1427 + ret = -EINTR; 1428 + /*FALLTHRU*/ 1429 + default: 1512 1430 aio_complete(req, ret, 0); 1431 + return 0; 1513 1432 } 1433 + } 1514 1434 1515 - return 0; 1435 + static ssize_t aio_read(struct kiocb *req, struct iocb *iocb, bool vectored, 1436 + bool compat) 1437 + { 1438 + struct file *file = req->ki_filp; 1439 + struct iovec inline_vecs[UIO_FASTIOV], *iovec = inline_vecs; 1440 + struct iov_iter iter; 1441 + ssize_t ret; 1442 + 1443 + if (unlikely(!(file->f_mode & FMODE_READ))) 1444 + return -EBADF; 1445 + if (unlikely(!file->f_op->read_iter)) 1446 + return -EINVAL; 1447 + 1448 + ret = aio_setup_rw(READ, iocb, &iovec, vectored, compat, &iter); 1449 + if (ret) 1450 + return ret; 1451 + ret = rw_verify_area(READ, file, &req->ki_pos, iov_iter_count(&iter)); 1452 + if (!ret) 1453 + ret = aio_ret(req, file->f_op->read_iter(req, &iter)); 1454 + kfree(iovec); 1455 + return ret; 1456 + } 1457 + 1458 + static ssize_t aio_write(struct kiocb *req, struct iocb *iocb, bool vectored, 1459 + bool compat) 1460 + { 1461 + struct file *file = req->ki_filp; 1462 + struct iovec inline_vecs[UIO_FASTIOV], *iovec = inline_vecs; 1463 + struct iov_iter iter; 1464 + ssize_t ret; 1465 + 1466 + if (unlikely(!(file->f_mode & FMODE_WRITE))) 1467 + return -EBADF; 1468 + if (unlikely(!file->f_op->write_iter)) 1469 + return -EINVAL; 1470 + 1471 + ret = aio_setup_rw(WRITE, iocb, &iovec, vectored, compat, &iter); 1472 + if (ret) 1473 + return ret; 
1474 + ret = rw_verify_area(WRITE, file, &req->ki_pos, iov_iter_count(&iter));
1475 + if (!ret) {
1476 + req->ki_flags |= IOCB_WRITE;
1477 + file_start_write(file);
1478 + ret = aio_ret(req, file->f_op->write_iter(req, &iter));
1479 + /*
1480 + * We release freeze protection in aio_complete(). Fool lockdep
1481 + * by telling it the lock got released so that it doesn't
1482 + * complain about held lock when we return to userspace.
1483 + */
1484 + __sb_writers_release(file_inode(file)->i_sb, SB_FREEZE_WRITE);
1485 + }
1486 + kfree(iovec);
1487 + return ret;
1516 1488 }
1517 1489
1518 1490 static int io_submit_one(struct kioctx *ctx, struct iocb __user *user_iocb,
1519 1491 struct iocb *iocb, bool compat)
1520 1492 {
1521 1493 struct aio_kiocb *req;
1494 + struct file *file;
1522 1495 ssize_t ret;
1523 1496
1524 1497 /* enforce forwards compatibility on users */
··· 1525 1530 if (unlikely(!req))
1526 1531 return -EAGAIN;
1527 1532
1528 - req->common.ki_filp = fget(iocb->aio_fildes);
1533 + req->common.ki_filp = file = fget(iocb->aio_fildes);
1529 1534 if (unlikely(!req->common.ki_filp)) {
1530 1535 ret = -EBADF;
1531 1536 goto out_put_req;
··· 1560 1565 req->ki_user_iocb = user_iocb;
1561 1566 req->ki_user_data = iocb->aio_data;
1562 1567
1563 - ret = aio_run_iocb(&req->common, iocb->aio_lio_opcode,
1564 - (char __user *)(unsigned long)iocb->aio_buf,
1565 - iocb->aio_nbytes,
1566 - compat);
1567 - if (ret)
1568 - goto out_put_req;
1568 + get_file(file);
1569 + switch (iocb->aio_lio_opcode) {
1570 + case IOCB_CMD_PREAD:
1571 + ret = aio_read(&req->common, iocb, false, compat);
1572 + break;
1573 + case IOCB_CMD_PWRITE:
1574 + ret = aio_write(&req->common, iocb, false, compat);
1575 + break;
1576 + case IOCB_CMD_PREADV:
1577 + ret = aio_read(&req->common, iocb, true, compat);
1578 + break;
1579 + case IOCB_CMD_PWRITEV:
1580 + ret = aio_write(&req->common, iocb, true, compat);
1581 + break;
1582 + default:
1583 + pr_debug("invalid aio operation %d\n", iocb->aio_lio_opcode);
1584 + ret = -EINVAL;
1585 + break;
1586 + }
1587 + fput(file);
1569 1588
1589 + if (ret && ret != -EIOCBQUEUED)
1590 + goto out_put_req;
1570 1591 return 0;
1571 1592 out_put_req:
1572 1593 put_reqs_available(ctx, 1);
-1
fs/ceph/file.c
··· 1770 1770 .fsync = ceph_fsync, 1771 1771 .lock = ceph_lock, 1772 1772 .flock = ceph_flock, 1773 - .splice_read = generic_file_splice_read, 1774 1773 .splice_write = iter_file_splice_write, 1775 1774 .unlocked_ioctl = ceph_ioctl, 1776 1775 .compat_ioctl = ceph_ioctl,
+3
fs/coredump.c
··· 1 1 #include <linux/slab.h> 2 2 #include <linux/file.h> 3 3 #include <linux/fdtable.h> 4 + #include <linux/freezer.h> 4 5 #include <linux/mm.h> 5 6 #include <linux/stat.h> 6 7 #include <linux/fcntl.h> ··· 424 423 if (core_waiters > 0) { 425 424 struct core_thread *ptr; 426 425 426 + freezer_do_not_count(); 427 427 wait_for_completion(&core_state->startup); 428 + freezer_count(); 428 429 /* 429 430 * Wait for all the threads to become inactive, so that 430 431 * all the thread context (extended register state, like
+2 -1
fs/nfs/client.c
··· 314 314 /* Match the full socket address */ 315 315 if (!rpc_cmp_addr_port(sap, clap)) 316 316 /* Match all xprt_switch full socket addresses */ 317 - if (!rpc_clnt_xprt_switch_has_addr(clp->cl_rpcclient, 317 + if (IS_ERR(clp->cl_rpcclient) || 318 + !rpc_clnt_xprt_switch_has_addr(clp->cl_rpcclient, 318 319 sap)) 319 320 continue; 320 321
+1 -1
fs/nfs/namespace.c
··· 98 98 return end; 99 99 } 100 100 namelen = strlen(base); 101 - if (flags & NFS_PATH_CANONICAL) { 101 + if (*end == '/') { 102 102 /* Strip off excess slashes in base string */ 103 103 while (namelen > 0 && base[namelen - 1] == '/') 104 104 namelen--;
+7 -5
fs/nfs/nfs4session.c
··· 178 178 __must_hold(&tbl->slot_tbl_lock) 179 179 { 180 180 struct nfs4_slot *slot; 181 + int ret; 181 182 182 183 slot = nfs4_lookup_slot(tbl, slotid); 183 - if (IS_ERR(slot)) 184 - return PTR_ERR(slot); 185 - *seq_nr = slot->seq_nr; 186 - return 0; 184 + ret = PTR_ERR_OR_ZERO(slot); 185 + if (!ret) 186 + *seq_nr = slot->seq_nr; 187 + 188 + return ret; 187 189 } 188 190 189 191 /* ··· 198 196 static bool nfs4_slot_seqid_in_use(struct nfs4_slot_table *tbl, 199 197 u32 slotid, u32 seq_nr) 200 198 { 201 - u32 cur_seq; 199 + u32 cur_seq = 0; 202 200 bool ret = false; 203 201 204 202 spin_lock(&tbl->slot_tbl_lock);
+2
fs/nfs/pnfs.c
··· 146 146 u32 id; 147 147 int i; 148 148 149 + if (fsinfo->nlayouttypes == 0) 150 + goto out_no_driver; 149 151 if (!(server->nfs_client->cl_exchange_flags & 150 152 (EXCHGID4_FLAG_USE_NON_PNFS | EXCHGID4_FLAG_USE_PNFS_MDS))) { 151 153 printk(KERN_ERR "NFS: %s: cl_exchange_flags 0x%x\n",
-2
fs/ntfs/dir.c
··· 1544 1544 .iterate = ntfs_readdir, /* Read directory contents. */ 1545 1545 #ifdef NTFS_RW 1546 1546 .fsync = ntfs_dir_fsync, /* Sync a directory to disk. */ 1547 - /*.aio_fsync = ,*/ /* Sync all outstanding async 1548 - i/o operations on a kiocb. */ 1549 1547 #endif /* NTFS_RW */ 1550 1548 /*.ioctl = ,*/ /* Perform function on the 1551 1549 mounted filesystem. */
+1 -1
fs/ocfs2/dir.c
··· 3699 3699 static int ocfs2_dx_dir_rebalance_credits(struct ocfs2_super *osb, 3700 3700 struct ocfs2_dx_root_block *dx_root) 3701 3701 { 3702 - int credits = ocfs2_clusters_to_blocks(osb->sb, 2); 3702 + int credits = ocfs2_clusters_to_blocks(osb->sb, 3); 3703 3703 3704 3704 credits += ocfs2_calc_extend_credits(osb->sb, &dx_root->dr_list); 3705 3705 credits += ocfs2_quota_trans_credits(osb->sb);
+64 -83
fs/orangefs/orangefs-debugfs.c
··· 141 141 */ 142 142 static DEFINE_MUTEX(orangefs_debug_lock); 143 143 144 + /* Used to protect data in ORANGEFS_KMOD_DEBUG_HELP_FILE */ 145 + static DEFINE_MUTEX(orangefs_help_file_lock); 146 + 144 147 /* 145 148 * initialize kmod debug operations, create orangefs debugfs dir and 146 149 * ORANGEFS_KMOD_DEBUG_HELP_FILE. ··· 292 289 293 290 gossip_debug(GOSSIP_DEBUGFS_DEBUG, "help_start: start\n"); 294 291 292 + mutex_lock(&orangefs_help_file_lock); 293 + 295 294 if (*pos == 0) 296 295 payload = m->private; 297 296 ··· 310 305 static void help_stop(struct seq_file *m, void *p) 311 306 { 312 307 gossip_debug(GOSSIP_DEBUGFS_DEBUG, "help_stop: start\n"); 308 + mutex_unlock(&orangefs_help_file_lock); 313 309 } 314 310 315 311 static int help_show(struct seq_file *m, void *v) ··· 616 610 * /sys/kernel/debug/orangefs/debug-help can be catted to 617 611 * see all the available kernel and client debug keywords. 618 612 * 619 - * When the kernel boots, we have no idea what keywords the 613 + * When orangefs.ko initializes, we have no idea what keywords the 620 614 * client supports, nor their associated masks. 621 615 * 622 - * We pass through this function once at boot and stamp a 616 + * We pass through this function once at module-load and stamp a 623 617 * boilerplate "we don't know" message for the client in the 624 618 * debug-help file. We pass through here again when the client 625 619 * starts and then we can fill out the debug-help file fully. 626 620 * 627 621 * The client might be restarted any number of times between 628 - * reboots, we only build the debug-help file the first time. 622 + * module reloads, we only build the debug-help file the first time. 
629 623 */
630 624 int orangefs_prepare_debugfs_help_string(int at_boot)
631 625 {
632 - int rc = -EINVAL;
633 - int i;
634 - int byte_count = 0;
635 626 char *client_title = "Client Debug Keywords:\n";
636 627 char *kernel_title = "Kernel Debug Keywords:\n";
628 + size_t string_size = DEBUG_HELP_STRING_SIZE;
629 + size_t result_size;
630 + size_t i;
631 + char *new;
632 + int rc = -EINVAL;
637 633
638 634 gossip_debug(GOSSIP_UTILS_DEBUG, "%s: start\n", __func__);
639 635
640 - if (at_boot) {
641 - byte_count += strlen(HELP_STRING_UNINITIALIZED);
636 + if (at_boot)
642 637 client_title = HELP_STRING_UNINITIALIZED;
643 - } else {
644 - /*
638 +
639 + /* build a new debug_help_string. */
640 + new = kzalloc(DEBUG_HELP_STRING_SIZE, GFP_KERNEL);
641 + if (!new) {
642 + rc = -ENOMEM;
643 + goto out;
644 + }
645 +
646 + /*
647 + * strlcat(dst, src, size) will append at most
648 + * "size - strlen(dst) - 1" bytes of src onto dst,
649 + * null terminating the result, and return the total
650 + * length of the string it tried to create.
651 + *
652 + * We'll just plow through here building our new debug
653 + * help string and let strlcat take care of assuring that
654 + * dst doesn't overflow.
655 + */
656 + strlcat(new, client_title, string_size);
657 +
658 + if (!at_boot) {
659 +
660 + /*
645 661 * fill the client keyword/mask array and remember
646 662 * how many elements there were.
647 663 */
··· 672 644 if (cdm_element_count <= 0)
673 645 goto out;
674 646
675 - /* Count the bytes destined for debug_help_string. */
676 - byte_count += strlen(client_title);
677 -
678 647 for (i = 0; i < cdm_element_count; i++) {
679 - byte_count += strlen(cdm_array[i].keyword + 2);
680 - if (byte_count >= DEBUG_HELP_STRING_SIZE) {
681 - pr_info("%s: overflow 1!\n", __func__);
682 - goto out;
683 - }
648 + strlcat(new, "\t", string_size);
649 + strlcat(new, cdm_array[i].keyword, string_size);
650 + strlcat(new, "\n", string_size);
684 651 }
685 -
686 - gossip_debug(GOSSIP_UTILS_DEBUG,
687 - "%s: cdm_element_count:%d:\n",
688 - __func__,
689 - cdm_element_count);
690 652 }
691 653
692 - byte_count += strlen(kernel_title);
654 + strlcat(new, "\n", string_size);
655 + strlcat(new, kernel_title, string_size);
656 +
693 657 for (i = 0; i < num_kmod_keyword_mask_map; i++) {
694 - byte_count +=
695 - strlen(s_kmod_keyword_mask_map[i].keyword + 2);
696 - if (byte_count >= DEBUG_HELP_STRING_SIZE) {
697 - pr_info("%s: overflow 2!\n", __func__);
698 - goto out;
699 - }
658 + strlcat(new, "\t", string_size);
659 + strlcat(new, s_kmod_keyword_mask_map[i].keyword, string_size);
660 + result_size = strlcat(new, "\n", string_size);
700 661 }
701 662
702 - /* build debug_help_string. */
703 - debug_help_string = kzalloc(DEBUG_HELP_STRING_SIZE, GFP_KERNEL);
704 - if (!debug_help_string) {
705 - rc = -ENOMEM;
663 + /* See if we tried to put too many bytes into "new"... */
664 + if (result_size >= string_size) {
665 + kfree(new);
706 666 goto out;
707 667 }
708 668
709 - strcat(debug_help_string, client_title);
710 -
711 - if (!at_boot) {
712 - for (i = 0; i < cdm_element_count; i++) {
713 - strcat(debug_help_string, "\t");
714 - strcat(debug_help_string, cdm_array[i].keyword);
715 - strcat(debug_help_string, "\n");
716 - }
717 - }
718 -
719 - strcat(debug_help_string, "\n");
720 - strcat(debug_help_string, kernel_title);
721 -
722 - for (i = 0; i < num_kmod_keyword_mask_map; i++) {
723 - strcat(debug_help_string, "\t");
724 - strcat(debug_help_string, s_kmod_keyword_mask_map[i].keyword);
725 - strcat(debug_help_string, "\n");
669 + if (at_boot) {
670 + debug_help_string = new;
671 + } else {
672 + mutex_lock(&orangefs_help_file_lock);
673 + memset(debug_help_string, 0, DEBUG_HELP_STRING_SIZE);
674 + strlcat(debug_help_string, new, string_size);
675 + mutex_unlock(&orangefs_help_file_lock);
726 676 }
727 677
728 678 rc = 0;
729 679
730 - out:
731 -
732 - return rc;
680 + out: return rc;
733 681
734 682 }
735 683
··· 963 959 ret = copy_from_user(&client_debug_array_string,
964 960 (void __user *)arg,
965 961 ORANGEFS_MAX_DEBUG_STRING_LEN);
966 - if (ret != 0)
962 +
963 + if (ret != 0) {
964 + pr_info("%s: CLIENT_STRING: copy_from_user failed\n",
965 + __func__);
967 966 return -EIO;
968 + }
969 967
970 968 /*
971 969 * The real client-core makes an effort to ensure
··· 983 975 client_debug_array_string[ORANGEFS_MAX_DEBUG_STRING_LEN - 1] =
984 976 '\0';
985 977
986 - if (ret != 0) {
987 - pr_info("%s: CLIENT_STRING: copy_from_user failed\n",
988 - __func__);
989 - return -EIO;
990 - }
991 -
992 978 pr_info("%s: client debug array string has been received.\n",
993 979 __func__);
994 980
995 981 if (!help_string_initialized) {
996 982
997 - /* Free the "we don't know yet" default string... */
998 - kfree(debug_help_string);
999 -
1000 - /* build a proper debug help string */
983 + /* Build a proper debug help string. */
1001 984 if (orangefs_prepare_debugfs_help_string(0)) {
1002 985 gossip_err("%s: no debug help string \n",
1003 986 __func__);
1004 987 return -EIO;
1005 988 }
1006 989
1007 - /* Replace the boilerplate boot-time debug-help file. */
1008 - debugfs_remove(help_file_dentry);
1009 -
1010 - help_file_dentry =
1011 - debugfs_create_file(
1012 - ORANGEFS_KMOD_DEBUG_HELP_FILE,
1013 - 0444,
1014 - debug_dir,
1015 - debug_help_string,
1016 - &debug_help_fops);
1017 -
1018 - if (!help_file_dentry) {
1019 - gossip_err("%s: debugfs_create_file failed for"
1020 - " :%s:!\n",
1021 - __func__,
1022 - ORANGEFS_KMOD_DEBUG_HELP_FILE);
1023 - return -EIO;
1024 - }
1025 990 }
1026 991
1027 992 debug_mask_to_string(&client_debug_mask, 1);
+4 -2
fs/orangefs/orangefs-mod.c
··· 124 124 * unknown at boot time. 125 125 * 126 126 * orangefs_prepare_debugfs_help_string will be used again 127 - * later to rebuild the debug-help file after the client starts 127 + * later to rebuild the debug-help-string after the client starts 128 128 * and passes along the needed info. The argument signifies 129 129 * which time orangefs_prepare_debugfs_help_string is being 130 130 * called. ··· 152 152 153 153 ret = register_filesystem(&orangefs_fs_type); 154 154 if (ret == 0) { 155 - pr_info("orangefs: module version %s loaded\n", ORANGEFS_VERSION); 155 + pr_info("%s: module version %s loaded\n", 156 + __func__, 157 + ORANGEFS_VERSION); 156 158 ret = 0; 157 159 goto out; 158 160 }
-5
fs/splice.c
··· 299 299 { 300 300 struct iov_iter to; 301 301 struct kiocb kiocb; 302 - loff_t isize; 303 302 int idx, ret; 304 - 305 - isize = i_size_read(in->f_mapping->host); 306 - if (unlikely(*ppos >= isize)) 307 - return 0; 308 303 309 304 iov_iter_pipe(&to, ITER_PIPE | READ, pipe, len); 310 305 idx = to.idx;
+5 -12
fs/xfs/libxfs/xfs_defer.c
··· 199 199 struct xfs_defer_pending *dfp; 200 200 201 201 list_for_each_entry(dfp, &dop->dop_intake, dfp_list) { 202 - trace_xfs_defer_intake_work(tp->t_mountp, dfp); 203 202 dfp->dfp_intent = dfp->dfp_type->create_intent(tp, 204 203 dfp->dfp_count); 204 + trace_xfs_defer_intake_work(tp->t_mountp, dfp); 205 205 list_sort(tp->t_mountp, &dfp->dfp_work, 206 206 dfp->dfp_type->diff_items); 207 207 list_for_each(li, &dfp->dfp_work) ··· 221 221 struct xfs_defer_pending *dfp; 222 222 223 223 trace_xfs_defer_trans_abort(tp->t_mountp, dop); 224 - /* 225 - * If the transaction was committed, drop the intent reference 226 - * since we're bailing out of here. The other reference is 227 - * dropped when the intent hits the AIL. If the transaction 228 - * was not committed, the intent is freed by the intent item 229 - * unlock handler on abort. 230 - */ 231 - if (!dop->dop_committed) 232 - return; 233 224 234 - /* Abort intent items. */ 225 + /* Abort intent items that don't have a done item. */ 235 226 list_for_each_entry(dfp, &dop->dop_pending, dfp_list) { 236 227 trace_xfs_defer_pending_abort(tp->t_mountp, dfp); 237 - if (!dfp->dfp_done) 228 + if (dfp->dfp_intent && !dfp->dfp_done) { 238 229 dfp->dfp_type->abort_intent(dfp->dfp_intent); 230 + dfp->dfp_intent = NULL; 231 + } 239 232 } 240 233 241 234 /* Shut down FS. */
+2 -2
include/asm-generic/percpu.h
··· 118 118 #define this_cpu_generic_read(pcp) \ 119 119 ({ \ 120 120 typeof(pcp) __ret; \ 121 - preempt_disable(); \ 121 + preempt_disable_notrace(); \ 122 122 __ret = raw_cpu_generic_read(pcp); \ 123 - preempt_enable(); \ 123 + preempt_enable_notrace(); \ 124 124 __ret; \ 125 125 }) 126 126
+3
include/asm-generic/sections.h
··· 14 14 * [_sdata, _edata]: contains .data.* sections, may also contain .rodata.* 15 15 * and/or .init.* sections. 16 16 * [__start_rodata, __end_rodata]: contains .rodata.* sections 17 + * [__start_data_ro_after_init, __end_data_ro_after_init]: 18 + * contains data.ro_after_init section 17 19 * [__init_begin, __init_end]: contains .init.* sections, but .init.text.* 18 20 * may be out of this range on some architectures. 19 21 * [_sinittext, _einittext]: contains .init.text.* sections ··· 33 31 extern char __bss_start[], __bss_stop[]; 34 32 extern char __init_begin[], __init_end[]; 35 33 extern char _sinittext[], _einittext[]; 34 + extern char __start_data_ro_after_init[], __end_data_ro_after_init[]; 36 35 extern char _end[]; 37 36 extern char __per_cpu_load[], __per_cpu_start[], __per_cpu_end[]; 38 37 extern char __kprobes_text_start[], __kprobes_text_end[];
+4 -1
include/asm-generic/vmlinux.lds.h
··· 259 259 * own by defining an empty RO_AFTER_INIT_DATA. 260 260 */ 261 261 #ifndef RO_AFTER_INIT_DATA 262 - #define RO_AFTER_INIT_DATA *(.data..ro_after_init) 262 + #define RO_AFTER_INIT_DATA \ 263 + __start_data_ro_after_init = .; \ 264 + *(.data..ro_after_init) \ 265 + __end_data_ro_after_init = .; 263 266 #endif 264 267 265 268 /*
-1
include/drm/drmP.h
··· 779 779 extern void drm_sysfs_hotplug_event(struct drm_device *dev); 780 780 781 781 782 - 783 782 /*@}*/ 784 783 785 784 /* PCI section */
+3
include/drm/drm_drv.h
··· 427 427 void drm_put_dev(struct drm_device *dev); 428 428 void drm_unplug_dev(struct drm_device *dev); 429 429 430 + int drm_dev_set_unique(struct drm_device *dev, const char *name); 431 + 432 + 430 433 #endif
+2 -1
include/linux/acpi.h
··· 555 555 int acpi_device_modalias(struct device *, char *, int); 556 556 void acpi_walk_dep_device_list(acpi_handle handle); 557 557 558 - struct platform_device *acpi_create_platform_device(struct acpi_device *); 558 + struct platform_device *acpi_create_platform_device(struct acpi_device *, 559 + struct property_entry *); 559 560 #define ACPI_PTR(_ptr) (_ptr) 560 561 561 562 static inline void acpi_device_set_enumerated(struct acpi_device *adev)
+2
include/linux/ceph/osd_client.h
··· 258 258 struct ceph_entity_addr addr; 259 259 }; 260 260 261 + #define CEPH_LINGER_ID_START 0xffff000000000000ULL 262 + 261 263 struct ceph_osd_client { 262 264 struct ceph_client *client; 263 265
-6
include/linux/console.h
··· 173 173 #endif 174 174 extern bool console_suspend_enabled; 175 175 176 - #ifdef CONFIG_OF 177 - extern void console_set_by_of(void); 178 - #else 179 - static inline void console_set_by_of(void) {} 180 - #endif 181 - 182 176 /* Suspend and resume console messages over PM events */ 183 177 extern void suspend_console(void); 184 178 extern void resume_console(void);
+3 -2
include/linux/frontswap.h
··· 106 106 107 107 static inline void frontswap_init(unsigned type, unsigned long *map) 108 108 { 109 - if (frontswap_enabled()) 110 - __frontswap_init(type, map); 109 + #ifdef CONFIG_FRONTSWAP 110 + __frontswap_init(type, map); 111 + #endif 111 112 } 112 113 113 114 #endif /* _LINUX_FRONTSWAP_H */
+1 -1
include/linux/fs.h
··· 321 321 #define IOCB_HIPRI (1 << 3) 322 322 #define IOCB_DSYNC (1 << 4) 323 323 #define IOCB_SYNC (1 << 5) 324 + #define IOCB_WRITE (1 << 6) 324 325 325 326 struct kiocb { 326 327 struct file *ki_filp; ··· 1710 1709 int (*flush) (struct file *, fl_owner_t id); 1711 1710 int (*release) (struct inode *, struct file *); 1712 1711 int (*fsync) (struct file *, loff_t, loff_t, int datasync); 1713 - int (*aio_fsync) (struct kiocb *, int datasync); 1714 1712 int (*fasync) (int, struct file *, int); 1715 1713 int (*lock) (struct file *, int, struct file_lock *); 1716 1714 ssize_t (*sendpage) (struct file *, struct page *, int, size_t, loff_t *, int);
+7
include/linux/phy/phy.h
··· 253 253 return -ENOSYS; 254 254 } 255 255 256 + static inline int phy_reset(struct phy *phy) 257 + { 258 + if (!phy) 259 + return 0; 260 + return -ENOSYS; 261 + } 262 + 256 263 static inline int phy_get_bus_width(struct phy *phy) 257 264 { 258 265 return -ENOSYS;
-6
include/uapi/sound/asoc.h
··· 18 18 #include <linux/types.h> 19 19 #include <sound/asound.h> 20 20 21 - #ifndef __KERNEL__ 22 - #error This API is an early revision and not enabled in the current 23 - #error kernel release, it will be enabled in a future kernel version 24 - #error with incompatible changes to what is here. 25 - #endif 26 - 27 21 /* 28 22 * Maximum number of channels topology kcontrol can represent. 29 23 */
+3 -1
kernel/power/suspend_test.c
··· 203 203 204 204 /* RTCs have initialized by now too ... can we use one? */ 205 205 dev = class_find_device(rtc_class, NULL, NULL, has_wakealarm); 206 - if (dev) 206 + if (dev) { 207 207 rtc = rtc_class_open(dev_name(dev)); 208 + put_device(dev); 209 + } 208 210 if (!rtc) { 209 211 printk(warn_no_rtc); 210 212 return 0;
+1 -12
kernel/printk/printk.c
··· 253 253 int console_set_on_cmdline; 254 254 EXPORT_SYMBOL(console_set_on_cmdline); 255 255 256 - #ifdef CONFIG_OF 257 - static bool of_specified_console; 258 - 259 - void console_set_by_of(void) 260 - { 261 - of_specified_console = true; 262 - } 263 - #else 264 - # define of_specified_console false 265 - #endif 266 - 267 256 /* Flag: console code may call schedule() */ 268 257 static int console_may_schedule; 269 258 ··· 2646 2657 * didn't select a console we take the first one 2647 2658 * that registers here. 2648 2659 */ 2649 - if (preferred_console < 0 && !of_specified_console) { 2660 + if (preferred_console < 0) { 2650 2661 if (newcon->index < 0) 2651 2662 newcon->index = 0; 2652 2663 if (newcon->setup == NULL ||
+2
lib/stackdepot.c
··· 192 192 trace->entries = stack->entries; 193 193 trace->skip = 0; 194 194 } 195 + EXPORT_SYMBOL_GPL(depot_fetch_stack); 195 196 196 197 /** 197 198 * depot_save_stack - save stack in a stack depot. ··· 284 283 fast_exit: 285 284 return retval; 286 285 } 286 + EXPORT_SYMBOL_GPL(depot_save_stack);
+3
mm/cma.c
··· 385 385 bitmap_maxno = cma_bitmap_maxno(cma); 386 386 bitmap_count = cma_bitmap_pages_to_bits(cma, count); 387 387 388 + if (bitmap_count > bitmap_maxno) 389 + return NULL; 390 + 388 391 for (;;) { 389 392 mutex_lock(&cma->lock); 390 393 bitmap_no = bitmap_find_next_zero_area_off(cma->bitmap,
+3
mm/filemap.c
··· 1732 1732 if (inode->i_blkbits == PAGE_SHIFT || 1733 1733 !mapping->a_ops->is_partially_uptodate) 1734 1734 goto page_not_up_to_date; 1735 + /* pipes can't handle partially uptodate pages */ 1736 + if (unlikely(iter->type & ITER_PIPE)) 1737 + goto page_not_up_to_date; 1735 1738 if (!trylock_page(page)) 1736 1739 goto page_not_up_to_date; 1737 1740 /* Did it get truncated before we got the lock? */
+66
mm/hugetlb.c
··· 1826 1826 * is not the case is if a reserve map was changed between calls. It 1827 1827 * is the responsibility of the caller to notice the difference and 1828 1828 * take appropriate action. 1829 + * 1830 + * vma_add_reservation is used in error paths where a reservation must 1831 + * be restored when a newly allocated huge page must be freed. It is 1832 + * to be called after calling vma_needs_reservation to determine if a 1833 + * reservation exists. 1829 1834 */ 1830 1835 enum vma_resv_mode { 1831 1836 VMA_NEEDS_RESV, 1832 1837 VMA_COMMIT_RESV, 1833 1838 VMA_END_RESV, 1839 + VMA_ADD_RESV, 1834 1840 }; 1835 1841 static long __vma_reservation_common(struct hstate *h, 1836 1842 struct vm_area_struct *vma, unsigned long addr, ··· 1861 1855 case VMA_END_RESV: 1862 1856 region_abort(resv, idx, idx + 1); 1863 1857 ret = 0; 1858 + break; 1859 + case VMA_ADD_RESV: 1860 + if (vma->vm_flags & VM_MAYSHARE) 1861 + ret = region_add(resv, idx, idx + 1); 1862 + else { 1863 + region_abort(resv, idx, idx + 1); 1864 + ret = region_del(resv, idx, idx + 1); 1865 + } 1864 1866 break; 1865 1867 default: 1866 1868 BUG(); ··· 1915 1901 struct vm_area_struct *vma, unsigned long addr) 1916 1902 { 1917 1903 (void)__vma_reservation_common(h, vma, addr, VMA_END_RESV); 1904 + } 1905 + 1906 + static long vma_add_reservation(struct hstate *h, 1907 + struct vm_area_struct *vma, unsigned long addr) 1908 + { 1909 + return __vma_reservation_common(h, vma, addr, VMA_ADD_RESV); 1910 + } 1911 + 1912 + /* 1913 + * This routine is called to restore a reservation on error paths. In the 1914 + * specific error paths, a huge page was allocated (via alloc_huge_page) 1915 + * and is about to be freed. If a reservation for the page existed, 1916 + * alloc_huge_page would have consumed the reservation and set PagePrivate 1917 + * in the newly allocated page. When the page is freed via free_huge_page, 1918 + * the global reservation count will be incremented if PagePrivate is set. 
1919 + * However, free_huge_page can not adjust the reserve map. Adjust the 1920 + * reserve map here to be consistent with global reserve count adjustments 1921 + * to be made by free_huge_page. 1922 + */ 1923 + static void restore_reserve_on_error(struct hstate *h, 1924 + struct vm_area_struct *vma, unsigned long address, 1925 + struct page *page) 1926 + { 1927 + if (unlikely(PagePrivate(page))) { 1928 + long rc = vma_needs_reservation(h, vma, address); 1929 + 1930 + if (unlikely(rc < 0)) { 1931 + /* 1932 + * Rare out of memory condition in reserve map 1933 + * manipulation. Clear PagePrivate so that 1934 + * global reserve count will not be incremented 1935 + * by free_huge_page. This will make it appear 1936 + * as though the reservation for this page was 1937 + * consumed. This may prevent the task from 1938 + * faulting in the page at a later time. This 1939 + * is better than inconsistent global huge page 1940 + * accounting of reserve counts. 1941 + */ 1942 + ClearPagePrivate(page); 1943 + } else if (rc) { 1944 + rc = vma_add_reservation(h, vma, address); 1945 + if (unlikely(rc < 0)) 1946 + /* 1947 + * See above comment about rare out of 1948 + * memory condition. 1949 + */ 1950 + ClearPagePrivate(page); 1951 + } else 1952 + vma_end_reservation(h, vma, address); 1953 + } 1918 1954 } 1919 1955 1920 1956 struct page *alloc_huge_page(struct vm_area_struct *vma, ··· 3562 3498 spin_unlock(ptl); 3563 3499 mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end); 3564 3500 out_release_all: 3501 + restore_reserve_on_error(h, vma, address, new_page); 3565 3502 put_page(new_page); 3566 3503 out_release_old: 3567 3504 put_page(old_page); ··· 3745 3680 spin_unlock(ptl); 3746 3681 backout_unlocked: 3747 3682 unlock_page(page); 3683 + restore_reserve_on_error(h, vma, address, page); 3748 3684 put_page(page); 3749 3685 goto out; 3750 3686 }
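The comment above explains why the error path must keep the reserve map and the global reserve count consistent. A minimal userspace toy model of that invariant (hypothetical names, not the kernel API; it collapses the `rc > 0` and `rc == 0` branches into "map repaired") might look like:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the bookkeeping described above: 'page_private' stands in
 * for PagePrivate, 'resv_count' for the global reserve count that
 * free_huge_page adjusts. Names and structure are illustrative only. */
struct toy_page { bool page_private; };

static long resv_count;

/* Mimics free_huge_page's accounting: re-credit the reserve only when
 * the page still carries the reservation-consumed marker. */
static void toy_free_huge_page(struct toy_page *p)
{
    if (p->page_private)
        resv_count++;
}

/* Mimics restore_reserve_on_error's rare-failure branch: when the
 * reserve map cannot be repaired (rc < 0), clear the marker so the
 * global count stays consistent with the map, at the cost of a
 * "lost" reservation for this page. */
static void toy_restore_reserve_on_error(struct toy_page *p, long rc)
{
    if (p->page_private && rc < 0)
        p->page_private = false;
}
```

The point of the sketch is the asymmetry: a failed map repair silently forfeits the reservation rather than letting the count and the map disagree.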
+1
mm/kmemleak.c
··· 1414 1414 /* data/bss scanning */ 1415 1415 scan_large_block(_sdata, _edata); 1416 1416 scan_large_block(__bss_start, __bss_stop); 1417 + scan_large_block(__start_data_ro_after_init, __end_data_ro_after_init); 1417 1418 1418 1419 #ifdef CONFIG_SMP 1419 1420 /* per-cpu sections scanning */
+5 -7
mm/memory-failure.c
··· 1112 1112 } 1113 1113 1114 1114 if (!PageHuge(p) && PageTransHuge(hpage)) { 1115 - lock_page(hpage); 1116 - if (!PageAnon(hpage) || unlikely(split_huge_page(hpage))) { 1117 - unlock_page(hpage); 1118 - if (!PageAnon(hpage)) 1115 + lock_page(p); 1116 + if (!PageAnon(p) || unlikely(split_huge_page(p))) { 1117 + unlock_page(p); 1118 + if (!PageAnon(p)) 1119 1119 pr_err("Memory failure: %#lx: non anonymous thp\n", 1120 1120 pfn); 1121 1121 else ··· 1126 1126 put_hwpoison_page(p); 1127 1127 return -EBUSY; 1128 1128 } 1129 - unlock_page(hpage); 1130 - get_hwpoison_page(p); 1131 - put_hwpoison_page(hpage); 1129 + unlock_page(p); 1132 1130 VM_BUG_ON_PAGE(!page_count(p), p); 1133 1131 hpage = compound_head(p); 1134 1132 }
+1 -1
mm/page_alloc.c
··· 3658 3658 /* Make sure we know about allocations which stall for too long */ 3659 3659 if (time_after(jiffies, alloc_start + stall_timeout)) { 3660 3660 warn_alloc(gfp_mask, 3661 - "page alloction stalls for %ums, order:%u\n", 3661 + "page allocation stalls for %ums, order:%u", 3662 3662 jiffies_to_msecs(jiffies-alloc_start), order); 3663 3663 stall_timeout += 10 * HZ; 3664 3664 }
+2
mm/shmem.c
··· 1483 1483 copy_highpage(newpage, oldpage); 1484 1484 flush_dcache_page(newpage); 1485 1485 1486 + __SetPageLocked(newpage); 1487 + __SetPageSwapBacked(newpage); 1486 1488 SetPageUptodate(newpage); 1487 1489 set_page_private(newpage, swap_index); 1488 1490 SetPageSwapCache(newpage);
+2 -2
mm/slab_common.c
··· 533 533 534 534 s = create_cache(cache_name, root_cache->object_size, 535 535 root_cache->size, root_cache->align, 536 - root_cache->flags, root_cache->ctor, 537 - memcg, root_cache); 536 + root_cache->flags & CACHE_CREATE_MASK, 537 + root_cache->ctor, memcg, root_cache); 538 538 /* 539 539 * If we could not create a memcg cache, do not complain, because 540 540 * that's not critical at all as we can always proceed with the root
+2
mm/swapfile.c
··· 2224 2224 swab32s(&swap_header->info.version); 2225 2225 swab32s(&swap_header->info.last_page); 2226 2226 swab32s(&swap_header->info.nr_badpages); 2227 + if (swap_header->info.nr_badpages > MAX_SWAP_BADPAGES) 2228 + return 0; 2227 2229 for (i = 0; i < swap_header->info.nr_badpages; i++) 2228 2230 swab32s(&swap_header->info.badpages[i]); 2229 2231 }
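The swapfile.c hunk above byte-swaps `nr_badpages` and then rejects the header before the swapped value is used as a loop bound. A userspace sketch of that pattern (the `toy_` names and `TOY_MAX` limit are stand-ins, not kernel symbols):

```c
#include <assert.h>
#include <stdint.h>

/* TOY_MAX stands in for MAX_SWAP_BADPAGES; the value is arbitrary. */
#define TOY_MAX 637

/* Equivalent of swab32(): reverse the byte order of a 32-bit value. */
static uint32_t toy_swab32(uint32_t x)
{
    return (x >> 24) | ((x >> 8) & 0xff00) |
           ((x << 8) & 0xff0000) | (x << 24);
}

/* Swap first, then range-check before trusting the count, mirroring
 * the 'return 0' the hunk adds: returns 1 if acceptable, 0 if the
 * count would overrun the badpages array. */
static int toy_check_nr_badpages(uint32_t raw)
{
    uint32_t n = toy_swab32(raw);
    return n <= TOY_MAX;
}
```

Without the check, a corrupt or malicious header could drive the subsequent `for` loop far past the end of `badpages[]`.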
+2 -1
net/ceph/ceph_fs.c
··· 34 34 fl->stripe_count = le32_to_cpu(legacy->fl_stripe_count); 35 35 fl->object_size = le32_to_cpu(legacy->fl_object_size); 36 36 fl->pool_id = le32_to_cpu(legacy->fl_pg_pool); 37 - if (fl->pool_id == 0) 37 + if (fl->pool_id == 0 && fl->stripe_unit == 0 && 38 + fl->stripe_count == 0 && fl->object_size == 0) 38 39 fl->pool_id = -1; 39 40 } 40 41 EXPORT_SYMBOL(ceph_file_layout_from_legacy);
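The ceph_fs.c hunk tightens the "no pool" test: only a fully zeroed legacy layout maps to `pool_id = -1`, so a real layout that happens to use pool 0 is no longer clobbered. A simplified model (field names trimmed, not the actual `ceph_file_layout`):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-in for the decoded legacy layout. */
struct toy_layout {
    uint32_t stripe_unit;
    uint32_t stripe_count;
    uint32_t object_size;
    int64_t  pool_id;
};

/* Only an entirely zeroed legacy layout denotes "no pool"; pool 0
 * with real striping parameters is left alone. */
static void toy_fixup_legacy(struct toy_layout *fl)
{
    if (fl->pool_id == 0 && fl->stripe_unit == 0 &&
        fl->stripe_count == 0 && fl->object_size == 0)
        fl->pool_id = -1;
}
```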
+1
net/ceph/osd_client.c
··· 4094 4094 osd_init(&osdc->homeless_osd); 4095 4095 osdc->homeless_osd.o_osdc = osdc; 4096 4096 osdc->homeless_osd.o_osd = CEPH_HOMELESS_OSD; 4097 + osdc->last_linger_id = CEPH_LINGER_ID_START; 4097 4098 osdc->linger_requests = RB_ROOT; 4098 4099 osdc->map_checks = RB_ROOT; 4099 4100 osdc->linger_map_checks = RB_ROOT;
+5 -2
net/sunrpc/clnt.c
··· 2753 2753 2754 2754 void rpc_clnt_xprt_switch_put(struct rpc_clnt *clnt) 2755 2755 { 2756 + rcu_read_lock(); 2756 2757 xprt_switch_put(rcu_dereference(clnt->cl_xpi.xpi_xpswitch)); 2758 + rcu_read_unlock(); 2757 2759 } 2758 2760 EXPORT_SYMBOL_GPL(rpc_clnt_xprt_switch_put); 2759 2761 2760 2762 void rpc_clnt_xprt_switch_add_xprt(struct rpc_clnt *clnt, struct rpc_xprt *xprt) 2761 2763 { 2764 + rcu_read_lock(); 2762 2765 rpc_xprt_switch_add_xprt(rcu_dereference(clnt->cl_xpi.xpi_xpswitch), 2763 2766 xprt); 2767 + rcu_read_unlock(); 2764 2768 } 2765 2769 EXPORT_SYMBOL_GPL(rpc_clnt_xprt_switch_add_xprt); 2766 2770 ··· 2774 2770 struct rpc_xprt_switch *xps; 2775 2771 bool ret; 2776 2772 2777 - xps = rcu_dereference(clnt->cl_xpi.xpi_xpswitch); 2778 - 2779 2773 rcu_read_lock(); 2774 + xps = rcu_dereference(clnt->cl_xpi.xpi_xpswitch); 2780 2775 ret = rpc_xprt_switch_has_addr(xps, sap); 2781 2776 rcu_read_unlock(); 2782 2777 return ret;
+22 -15
net/sunrpc/xprtrdma/frwr_ops.c
··· 44 44 * being done. 45 45 * 46 46 * When the underlying transport disconnects, MRs are left in one of 47 - * three states: 47 + * four states: 48 48 * 49 49 * INVALID: The MR was not in use before the QP entered ERROR state. 50 - * (Or, the LOCAL_INV WR has not completed or flushed yet). 51 - * 52 - * STALE: The MR was being registered or unregistered when the QP 53 - * entered ERROR state, and the pending WR was flushed. 54 50 * 55 51 * VALID: The MR was registered before the QP entered ERROR state. 56 52 * 57 - * When frwr_op_map encounters STALE and VALID MRs, they are recovered 58 - * with ib_dereg_mr and then are re-initialized. Beause MR recovery 53 + * FLUSHED_FR: The MR was being registered when the QP entered ERROR 54 + * state, and the pending WR was flushed. 55 + * 56 + * FLUSHED_LI: The MR was being invalidated when the QP entered ERROR 57 + * state, and the pending WR was flushed. 58 + * 59 + * When frwr_op_map encounters FLUSHED and VALID MRs, they are recovered 60 + * with ib_dereg_mr and then are re-initialized. Because MR recovery 59 61 * allocates fresh resources, it is deferred to a workqueue, and the 60 62 * recovered MRs are placed back on the rb_mws list when recovery is 61 63 * complete. 
frwr_op_map allocates another MR for the current RPC while ··· 179 177 static void 180 178 frwr_op_recover_mr(struct rpcrdma_mw *mw) 181 179 { 180 + enum rpcrdma_frmr_state state = mw->frmr.fr_state; 182 181 struct rpcrdma_xprt *r_xprt = mw->mw_xprt; 183 182 struct rpcrdma_ia *ia = &r_xprt->rx_ia; 184 183 int rc; 185 184 186 185 rc = __frwr_reset_mr(ia, mw); 187 - ib_dma_unmap_sg(ia->ri_device, mw->mw_sg, mw->mw_nents, mw->mw_dir); 186 + if (state != FRMR_FLUSHED_LI) 187 + ib_dma_unmap_sg(ia->ri_device, 188 + mw->mw_sg, mw->mw_nents, mw->mw_dir); 188 189 if (rc) 189 190 goto out_release; 190 191 ··· 267 262 } 268 263 269 264 static void 270 - __frwr_sendcompletion_flush(struct ib_wc *wc, struct rpcrdma_frmr *frmr, 271 - const char *wr) 265 + __frwr_sendcompletion_flush(struct ib_wc *wc, const char *wr) 272 266 { 273 - frmr->fr_state = FRMR_IS_STALE; 274 267 if (wc->status != IB_WC_WR_FLUSH_ERR) 275 268 pr_err("rpcrdma: %s: %s (%u/0x%x)\n", 276 269 wr, ib_wc_status_msg(wc->status), ··· 291 288 if (wc->status != IB_WC_SUCCESS) { 292 289 cqe = wc->wr_cqe; 293 290 frmr = container_of(cqe, struct rpcrdma_frmr, fr_cqe); 294 - __frwr_sendcompletion_flush(wc, frmr, "fastreg"); 291 + frmr->fr_state = FRMR_FLUSHED_FR; 292 + __frwr_sendcompletion_flush(wc, "fastreg"); 295 293 } 296 294 } 297 295 ··· 312 308 if (wc->status != IB_WC_SUCCESS) { 313 309 cqe = wc->wr_cqe; 314 310 frmr = container_of(cqe, struct rpcrdma_frmr, fr_cqe); 315 - __frwr_sendcompletion_flush(wc, frmr, "localinv"); 311 + frmr->fr_state = FRMR_FLUSHED_LI; 312 + __frwr_sendcompletion_flush(wc, "localinv"); 316 313 } 317 314 } 318 315 ··· 333 328 /* WARNING: Only wr_cqe and status are reliable at this point */ 334 329 cqe = wc->wr_cqe; 335 330 frmr = container_of(cqe, struct rpcrdma_frmr, fr_cqe); 336 - if (wc->status != IB_WC_SUCCESS) 337 - __frwr_sendcompletion_flush(wc, frmr, "localinv"); 331 + if (wc->status != IB_WC_SUCCESS) { 332 + frmr->fr_state = FRMR_FLUSHED_LI; 333 + __frwr_sendcompletion_flush(wc, 
"localinv"); 334 + } 338 335 complete(&frmr->fr_linv_done); 339 336 } 340 337
+2 -1
net/sunrpc/xprtrdma/xprt_rdma.h
··· 216 216 enum rpcrdma_frmr_state { 217 217 FRMR_IS_INVALID, /* ready to be used */ 218 218 FRMR_IS_VALID, /* in use */ 219 - FRMR_IS_STALE, /* failed completion */ 219 + FRMR_FLUSHED_FR, /* flushed FASTREG WR */ 220 + FRMR_FLUSHED_LI, /* flushed LOCALINV WR */ 220 221 }; 221 222 222 223 struct rpcrdma_frmr {
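The frwr_ops.c and xprt_rdma.h hunks split the old catch-all `FRMR_IS_STALE` into `FRMR_FLUSHED_FR` and `FRMR_FLUSHED_LI` so recovery can tell which flushed WR left the MR behind, and skip the redundant DMA unmap for a flushed LOCAL_INV. A toy model of that dispatch (hypothetical names; `unmap_calls` counts `ib_dma_unmap_sg` stand-in calls):

```c
#include <assert.h>

/* Mirrors the enum the hunk introduces, with toy names. */
enum toy_frmr_state {
    TOY_INVALID,
    TOY_VALID,
    TOY_FLUSHED_FR,   /* flushed FASTREG WR */
    TOY_FLUSHED_LI,   /* flushed LOCALINV WR */
};

static int unmap_calls;

/* Recovery unmaps the sg list for every state except a flushed
 * LOCAL_INV, whose buffers were already unmapped on the earlier
 * invalidate path: unmapping twice there would be a bug. */
static void toy_recover_mr(enum toy_frmr_state state)
{
    if (state != TOY_FLUSHED_LI)
        unmap_calls++;
}
```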
+1
scripts/Makefile.extrawarn
··· 36 36 warning-2 += $(call cc-option, -Wlogical-op) 37 37 warning-2 += $(call cc-option, -Wmissing-field-initializers) 38 38 warning-2 += $(call cc-option, -Wsign-compare) 39 + warning-2 += $(call cc-option, -Wmaybe-uninitialized) 39 40 40 41 warning-3 := -Wbad-function-cast 41 42 warning-3 += -Wcast-qual
+4
scripts/Makefile.ubsan
··· 17 17 ifdef CONFIG_UBSAN_NULL 18 18 CFLAGS_UBSAN += $(call cc-option, -fsanitize=null) 19 19 endif 20 + 21 + # -fsanitize=* options makes GCC less smart than usual and 22 + # increase number of 'maybe-uninitialized false-positives 23 + CFLAGS_UBSAN += $(call cc-option, -Wno-maybe-uninitialized) 20 24 endif
+3
scripts/bloat-o-meter
··· 8 8 # of the GNU General Public License, incorporated herein by reference. 9 9 10 10 import sys, os, re 11 + from signal import signal, SIGPIPE, SIG_DFL 12 + 13 + signal(SIGPIPE, SIG_DFL) 11 14 12 15 if len(sys.argv) != 3: 13 16 sys.stderr.write("usage: %s file1 file2\n" % sys.argv[0])
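The bloat-o-meter hunk restores SIGPIPE's default disposition so a pipeline like `bloat-o-meter a b | head` exits quietly instead of raising IOError when `head` closes the pipe. This C sketch shows the flip side of that choice: with SIGPIPE ignored (as the Python interpreter does by default), writing to a pipe with no reader fails with EPIPE rather than killing the process.

```c
#include <assert.h>
#include <errno.h>
#include <signal.h>
#include <unistd.h>

/* Write one byte into a pipe whose read end is already closed.
 * Returns EPIPE when the write fails that way, 0 or -1 otherwise. */
static int toy_write_to_closed_pipe(void)
{
    int fds[2];

    if (pipe(fds) != 0)
        return -1;
    signal(SIGPIPE, SIG_IGN);   /* survive the broken pipe */
    close(fds[0]);              /* no reader any more */
    if (write(fds[1], "x", 1) < 0 && errno == EPIPE) {
        close(fds[1]);
        return EPIPE;           /* what Python surfaces as IOError */
    }
    close(fds[1]);
    return 0;
}
```

Resetting the handler to SIG_DFL, as the script does, makes the broken-pipe signal fatal again, which is the conventional behavior for a filter in a shell pipeline.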
+8 -1
sound/core/info.c
··· 325 325 size_t next; 326 326 int err = 0; 327 327 328 + if (!entry->c.text.write) 329 + return -EIO; 328 330 pos = *offset; 329 331 if (!valid_pos(pos, count)) 330 332 return -EIO; 331 333 next = pos + count; 334 + /* don't handle too large text inputs */ 335 + if (next > 16 * 1024) 336 + return -EIO; 332 337 mutex_lock(&entry->access); 333 338 buf = data->wbuffer; 334 339 if (!buf) { ··· 371 366 struct snd_info_private_data *data = seq->private; 372 367 struct snd_info_entry *entry = data->entry; 373 368 374 - if (entry->c.text.read) { 369 + if (!entry->c.text.read) { 370 + return -EIO; 371 + } else { 375 372 data->rbuffer->buffer = (char *)seq; /* XXX hack! */ 376 373 entry->c.text.read(entry, data->rbuffer); 377 374 }
+4 -4
sound/soc/codecs/cs4270.c
··· 148 148 }; 149 149 150 150 static const struct snd_soc_dapm_route cs4270_dapm_routes[] = { 151 - { "Capture", NULL, "AINA" }, 152 - { "Capture", NULL, "AINB" }, 151 + { "Capture", NULL, "AINL" }, 152 + { "Capture", NULL, "AINR" }, 153 153 154 - { "AOUTA", NULL, "Playback" }, 155 - { "AOUTB", NULL, "Playback" }, 154 + { "AOUTL", NULL, "Playback" }, 155 + { "AOUTR", NULL, "Playback" }, 156 156 }; 157 157 158 158 /**
+2 -1
sound/soc/codecs/da7219.c
··· 880 880 SND_SOC_DAPM_PRE_PMU | SND_SOC_DAPM_POST_PMD), 881 881 882 882 /* DAI */ 883 - SND_SOC_DAPM_AIF_OUT("DAIOUT", "Capture", 0, SND_SOC_NOPM, 0, 0), 883 + SND_SOC_DAPM_AIF_OUT("DAIOUT", "Capture", 0, DA7219_DAI_TDM_CTRL, 884 + DA7219_DAI_OE_SHIFT, DA7219_NO_INVERT), 884 885 SND_SOC_DAPM_AIF_IN("DAIIN", "Playback", 0, SND_SOC_NOPM, 0, 0), 885 886 886 887 /* Output Muxes */
+6 -1
sound/soc/codecs/hdmi-codec.c
··· 364 364 struct of_phandle_args *args, 365 365 const char **dai_name) 366 366 { 367 - int id = args->args[0]; 367 + int id; 368 + 369 + if (args->args_count) 370 + id = args->args[0]; 371 + else 372 + id = 0; 368 373 369 374 if (id < ARRAY_SIZE(hdmi_dai_name)) { 370 375 *dai_name = hdmi_dai_name[id];
+5
sound/soc/codecs/rt298.c
··· 249 249 snd_soc_dapm_force_enable_pin(dapm, "LDO1"); 250 250 snd_soc_dapm_sync(dapm); 251 251 252 + regmap_update_bits(rt298->regmap, 253 + RT298_POWER_CTRL1, 0x1001, 0); 254 + regmap_update_bits(rt298->regmap, 255 + RT298_POWER_CTRL2, 0x4, 0x4); 256 + 252 257 regmap_write(rt298->regmap, RT298_SET_MIC1, 0x24); 253 258 msleep(50); 254 259
+2 -2
sound/soc/codecs/rt5663.c
··· 1547 1547 msleep(sleep_time[i]); 1548 1548 val = snd_soc_read(codec, RT5663_EM_JACK_TYPE_2) & 1549 1549 0x0003; 1550 + dev_dbg(codec->dev, "%s: MX-00e7 val=%x sleep %d\n", 1551 + __func__, val, sleep_time[i]); 1550 1552 i++; 1551 1553 if (val == 0x1 || val == 0x2 || val == 0x3) 1552 1554 break; 1553 - dev_dbg(codec->dev, "%s: MX-00e7 val=%x sleep %d\n", 1554 - __func__, val, sleep_time[i]); 1555 1555 } 1556 1556 dev_dbg(codec->dev, "%s val = %d\n", __func__, val); 1557 1557 switch (val) {
+1 -1
sound/soc/codecs/sti-sas.c
··· 424 424 static const struct regmap_config stih407_sas_regmap = { 425 425 .reg_bits = 32, 426 426 .val_bits = 32, 427 - 427 + .fast_io = true, 428 428 .max_register = STIH407_AUDIO_DAC_CTRL, 429 429 .reg_defaults = stih407_sas_reg_defaults, 430 430 .num_reg_defaults = ARRAY_SIZE(stih407_sas_reg_defaults),
+12 -25
sound/soc/codecs/tas571x.c
··· 341 341 return ret; 342 342 } 343 343 } 344 - 345 - gpiod_set_value(priv->pdn_gpio, 0); 346 - usleep_range(5000, 6000); 347 - 348 - regcache_cache_only(priv->regmap, false); 349 - ret = regcache_sync(priv->regmap); 350 - if (ret) 351 - return ret; 352 344 } 353 345 break; 354 346 case SND_SOC_BIAS_OFF: 355 - regcache_cache_only(priv->regmap, true); 356 - gpiod_set_value(priv->pdn_gpio, 1); 357 - 358 347 if (!IS_ERR(priv->mclk)) 359 348 clk_disable_unprepare(priv->mclk); 360 349 break; ··· 390 401 TAS571X_SOFT_MUTE_REG, 391 402 TAS571X_SOFT_MUTE_CH1_SHIFT, TAS571X_SOFT_MUTE_CH2_SHIFT, 392 403 1, 1), 393 - 394 - SOC_DOUBLE_R_RANGE("CH1 Mixer Volume", 395 - TAS5717_CH1_LEFT_CH_MIX_REG, 396 - TAS5717_CH1_RIGHT_CH_MIX_REG, 397 - 16, 0, 0x80, 0), 398 - 399 - SOC_DOUBLE_R_RANGE("CH2 Mixer Volume", 400 - TAS5717_CH2_LEFT_CH_MIX_REG, 401 - TAS5717_CH2_RIGHT_CH_MIX_REG, 402 - 16, 0, 0x80, 0), 403 404 }; 404 405 405 406 static const struct regmap_range tas571x_readonly_regs_range[] = { ··· 466 487 TAS571X_SOFT_MUTE_REG, 467 488 TAS571X_SOFT_MUTE_CH1_SHIFT, TAS571X_SOFT_MUTE_CH2_SHIFT, 468 489 1, 1), 490 + 491 + SOC_DOUBLE_R_RANGE("CH1 Mixer Volume", 492 + TAS5717_CH1_LEFT_CH_MIX_REG, 493 + TAS5717_CH1_RIGHT_CH_MIX_REG, 494 + 16, 0, 0x80, 0), 495 + 496 + SOC_DOUBLE_R_RANGE("CH2 Mixer Volume", 497 + TAS5717_CH2_LEFT_CH_MIX_REG, 498 + TAS5717_CH2_RIGHT_CH_MIX_REG, 499 + 16, 0, 0x80, 0), 469 500 470 501 /* 471 502 * The biquads are named according to the register names. 
··· 736 747 /* pulse the active low reset line for ~100us */ 737 748 usleep_range(100, 200); 738 749 gpiod_set_value(priv->reset_gpio, 0); 739 - usleep_range(12000, 20000); 750 + usleep_range(13500, 20000); 740 751 } 741 752 742 753 ret = regmap_write(priv->regmap, TAS571X_OSC_TRIM_REG, 0); 743 754 if (ret) 744 755 return ret; 745 756 757 + usleep_range(50000, 60000); 746 758 747 759 memcpy(&priv->codec_driver, &tas571x_codec, sizeof(priv->codec_driver)); 748 760 priv->codec_driver.component_driver.controls = priv->chip->controls; ··· 759 769 if (ret) 760 770 return ret; 761 771 } 762 - 763 - regcache_cache_only(priv->regmap, true); 764 - gpiod_set_value(priv->pdn_gpio, 1); 765 772 766 773 return snd_soc_register_codec(&client->dev, &priv->codec_driver, 767 774 &tas571x_dai, 1);
+1 -2
sound/soc/intel/Kconfig
··· 47 47 48 48 config SND_SOC_INTEL_HASWELL 49 49 tristate 50 + select SND_SOC_INTEL_SST_FIRMWARE 50 51 51 52 config SND_SOC_INTEL_BAYTRAIL 52 53 tristate ··· 57 56 depends on X86_INTEL_LPSS && I2C && I2C_DESIGNWARE_PLATFORM 58 57 depends on DW_DMAC_CORE 59 58 select SND_SOC_INTEL_SST 60 - select SND_SOC_INTEL_SST_FIRMWARE 61 59 select SND_SOC_INTEL_HASWELL 62 60 select SND_SOC_RT5640 63 61 help ··· 138 138 I2C_DESIGNWARE_PLATFORM 139 139 depends on DW_DMAC_CORE 140 140 select SND_SOC_INTEL_SST 141 - select SND_SOC_INTEL_SST_FIRMWARE 142 141 select SND_SOC_INTEL_HASWELL 143 142 select SND_SOC_RT286 144 143 help
+1
sound/soc/intel/atom/sst/sst_acpi.c
··· 416 416 DMI_MATCH(DMI_PRODUCT_NAME, "Surface 3"), 417 417 }, 418 418 }, 419 + { } 419 420 }; 420 421 421 422
+2 -2
sound/soc/intel/boards/bxt_da7219_max98357a.c
··· 130 130 */ 131 131 ret = snd_soc_card_jack_new(rtd->card, "Headset Jack", 132 132 SND_JACK_HEADSET | SND_JACK_BTN_0 | SND_JACK_BTN_1 | 133 - SND_JACK_BTN_2 | SND_JACK_BTN_3, &broxton_headset, 134 - NULL, 0); 133 + SND_JACK_BTN_2 | SND_JACK_BTN_3 | SND_JACK_LINEOUT, 134 + &broxton_headset, NULL, 0); 135 135 if (ret) { 136 136 dev_err(rtd->dev, "Headset Jack creation failed: %d\n", ret); 137 137 return ret;
+5 -3
sound/soc/intel/skylake/skl.c
··· 674 674 675 675 if (skl->nhlt == NULL) { 676 676 err = -ENODEV; 677 - goto out_free; 677 + goto out_display_power_off; 678 678 } 679 679 680 680 skl_nhlt_update_topology_bin(skl); ··· 746 746 skl_machine_device_unregister(skl); 747 747 out_nhlt_free: 748 748 skl_nhlt_free(skl->nhlt); 749 + out_display_power_off: 750 + if (IS_ENABLED(CONFIG_SND_SOC_HDAC_HDMI)) 751 + snd_hdac_display_power(bus, false); 749 752 out_free: 750 753 skl->init_failed = 1; 751 754 skl_free(ebus); ··· 788 785 789 786 release_firmware(skl->tplg); 790 787 791 - if (pci_dev_run_wake(pci)) 792 - pm_runtime_get_noresume(&pci->dev); 788 + pm_runtime_get_noresume(&pci->dev); 793 789 794 790 /* codec removal, invoke bus_device_remove */ 795 791 snd_hdac_ext_bus_device_remove(ebus);
+1 -1
sound/soc/pxa/Kconfig
··· 208 208 209 209 config SND_MMP_SOC_BROWNSTONE 210 210 tristate "SoC Audio support for Marvell Brownstone" 211 - depends on SND_MMP_SOC && MACH_BROWNSTONE 211 + depends on SND_MMP_SOC && MACH_BROWNSTONE && I2C 212 212 select SND_MMP_SOC_SSPA 213 213 select MFD_WM8994 214 214 select SND_SOC_WM8994
+3
sound/soc/qcom/lpass-cpu.c
··· 586 586 return 0; 587 587 } 588 588 EXPORT_SYMBOL_GPL(asoc_qcom_lpass_cpu_platform_remove); 589 + 590 + MODULE_DESCRIPTION("QTi LPASS CPU Driver"); 591 + MODULE_LICENSE("GPL v2");
+79 -86
sound/soc/qcom/lpass-platform.c
··· 61 61 { 62 62 struct snd_pcm_runtime *runtime = substream->runtime; 63 63 struct snd_soc_pcm_runtime *soc_runtime = substream->private_data; 64 - int ret; 64 + struct snd_soc_dai *cpu_dai = soc_runtime->cpu_dai; 65 + struct lpass_data *drvdata = 66 + snd_soc_platform_get_drvdata(soc_runtime->platform); 67 + struct lpass_variant *v = drvdata->variant; 68 + int ret, dma_ch, dir = substream->stream; 69 + struct lpass_pcm_data *data; 70 + 71 + data = devm_kzalloc(soc_runtime->dev, sizeof(*data), GFP_KERNEL); 72 + if (!data) 73 + return -ENOMEM; 74 + 75 + data->i2s_port = cpu_dai->driver->id; 76 + runtime->private_data = data; 77 + 78 + if (v->alloc_dma_channel) 79 + dma_ch = v->alloc_dma_channel(drvdata, dir); 80 + if (dma_ch < 0) 81 + return dma_ch; 82 + 83 + drvdata->substream[dma_ch] = substream; 84 + 85 + ret = regmap_write(drvdata->lpaif_map, 86 + LPAIF_DMACTL_REG(v, dma_ch, dir), 0); 87 + if (ret) { 88 + dev_err(soc_runtime->dev, 89 + "%s() error writing to rdmactl reg: %d\n", 90 + __func__, ret); 91 + return ret; 92 + } 93 + 94 + if (dir == SNDRV_PCM_STREAM_PLAYBACK) 95 + data->rdma_ch = dma_ch; 96 + else 97 + data->wrdma_ch = dma_ch; 65 98 66 99 snd_soc_set_runtime_hwparams(substream, &lpass_platform_pcm_hardware); 67 100 ··· 113 80 return 0; 114 81 } 115 82 83 + static int lpass_platform_pcmops_close(struct snd_pcm_substream *substream) 84 + { 85 + struct snd_pcm_runtime *runtime = substream->runtime; 86 + struct snd_soc_pcm_runtime *soc_runtime = substream->private_data; 87 + struct lpass_data *drvdata = 88 + snd_soc_platform_get_drvdata(soc_runtime->platform); 89 + struct lpass_variant *v = drvdata->variant; 90 + struct lpass_pcm_data *data; 91 + int dma_ch, dir = substream->stream; 92 + 93 + data = runtime->private_data; 94 + v = drvdata->variant; 95 + 96 + if (dir == SNDRV_PCM_STREAM_PLAYBACK) 97 + dma_ch = data->rdma_ch; 98 + else 99 + dma_ch = data->wrdma_ch; 100 + 101 + drvdata->substream[dma_ch] = NULL; 102 + 103 + if (v->free_dma_channel) 104 + 
v->free_dma_channel(drvdata, dma_ch); 105 + 106 + return 0; 107 + } 108 + 116 109 static int lpass_platform_pcmops_hw_params(struct snd_pcm_substream *substream, 117 110 struct snd_pcm_hw_params *params) 118 111 { 119 112 struct snd_soc_pcm_runtime *soc_runtime = substream->private_data; 120 113 struct lpass_data *drvdata = 121 114 snd_soc_platform_get_drvdata(soc_runtime->platform); 122 - struct lpass_pcm_data *pcm_data = drvdata->private_data; 115 + struct snd_pcm_runtime *rt = substream->runtime; 116 + struct lpass_pcm_data *pcm_data = rt->private_data; 123 117 struct lpass_variant *v = drvdata->variant; 124 118 snd_pcm_format_t format = params_format(params); 125 119 unsigned int channels = params_channels(params); ··· 239 179 struct snd_soc_pcm_runtime *soc_runtime = substream->private_data; 240 180 struct lpass_data *drvdata = 241 181 snd_soc_platform_get_drvdata(soc_runtime->platform); 242 - struct lpass_pcm_data *pcm_data = drvdata->private_data; 182 + struct snd_pcm_runtime *rt = substream->runtime; 183 + struct lpass_pcm_data *pcm_data = rt->private_data; 243 184 struct lpass_variant *v = drvdata->variant; 244 185 unsigned int reg; 245 186 int ret; ··· 264 203 struct snd_soc_pcm_runtime *soc_runtime = substream->private_data; 265 204 struct lpass_data *drvdata = 266 205 snd_soc_platform_get_drvdata(soc_runtime->platform); 267 - struct lpass_pcm_data *pcm_data = drvdata->private_data; 206 + struct snd_pcm_runtime *rt = substream->runtime; 207 + struct lpass_pcm_data *pcm_data = rt->private_data; 268 208 struct lpass_variant *v = drvdata->variant; 269 209 int ret, ch, dir = substream->stream; 270 210 ··· 319 257 struct snd_soc_pcm_runtime *soc_runtime = substream->private_data; 320 258 struct lpass_data *drvdata = 321 259 snd_soc_platform_get_drvdata(soc_runtime->platform); 322 - struct lpass_pcm_data *pcm_data = drvdata->private_data; 260 + struct snd_pcm_runtime *rt = substream->runtime; 261 + struct lpass_pcm_data *pcm_data = rt->private_data; 323 262 
struct lpass_variant *v = drvdata->variant; 324 263 int ret, ch, dir = substream->stream; 325 264 ··· 396 333 struct snd_soc_pcm_runtime *soc_runtime = substream->private_data; 397 334 struct lpass_data *drvdata = 398 335 snd_soc_platform_get_drvdata(soc_runtime->platform); 399 - struct lpass_pcm_data *pcm_data = drvdata->private_data; 336 + struct snd_pcm_runtime *rt = substream->runtime; 337 + struct lpass_pcm_data *pcm_data = rt->private_data; 400 338 struct lpass_variant *v = drvdata->variant; 401 339 unsigned int base_addr, curr_addr; 402 340 int ret, ch, dir = substream->stream; ··· 438 374 439 375 static const struct snd_pcm_ops lpass_platform_pcm_ops = { 440 376 .open = lpass_platform_pcmops_open, 377 + .close = lpass_platform_pcmops_close, 441 378 .ioctl = snd_pcm_lib_ioctl, 442 379 .hw_params = lpass_platform_pcmops_hw_params, 443 380 .hw_free = lpass_platform_pcmops_hw_free, ··· 535 470 { 536 471 struct snd_pcm *pcm = soc_runtime->pcm; 537 472 struct snd_pcm_substream *psubstream, *csubstream; 538 - struct snd_soc_dai *cpu_dai = soc_runtime->cpu_dai; 539 - struct lpass_data *drvdata = 540 - snd_soc_platform_get_drvdata(soc_runtime->platform); 541 - struct lpass_variant *v = drvdata->variant; 542 473 int ret = -EINVAL; 543 - struct lpass_pcm_data *data; 544 474 size_t size = lpass_platform_pcm_hardware.buffer_bytes_max; 545 - 546 - data = devm_kzalloc(soc_runtime->dev, sizeof(*data), GFP_KERNEL); 547 - if (!data) 548 - return -ENOMEM; 549 - 550 - data->i2s_port = cpu_dai->driver->id; 551 - drvdata->private_data = data; 552 475 553 476 psubstream = pcm->streams[SNDRV_PCM_STREAM_PLAYBACK].substream; 554 477 if (psubstream) { 555 - if (v->alloc_dma_channel) 556 - data->rdma_ch = v->alloc_dma_channel(drvdata, 557 - SNDRV_PCM_STREAM_PLAYBACK); 558 - 559 - if (data->rdma_ch < 0) 560 - return data->rdma_ch; 561 - 562 - drvdata->substream[data->rdma_ch] = psubstream; 563 - 564 478 ret = snd_dma_alloc_pages(SNDRV_DMA_TYPE_DEV, 565 479 soc_runtime->platform->dev, 
566 480 size, &psubstream->dma_buffer); 567 - if (ret) 568 - goto playback_alloc_err; 569 - 570 - ret = regmap_write(drvdata->lpaif_map, 571 - LPAIF_RDMACTL_REG(v, data->rdma_ch), 0); 572 481 if (ret) { 573 - dev_err(soc_runtime->dev, 574 - "%s() error writing to rdmactl reg: %d\n", 575 - __func__, ret); 576 - goto capture_alloc_err; 482 + dev_err(soc_runtime->dev, "Cannot allocate buffer(s)\n"); 483 + return ret; 577 484 } 578 485 } 579 486 580 487 csubstream = pcm->streams[SNDRV_PCM_STREAM_CAPTURE].substream; 581 488 if (csubstream) { 582 - if (v->alloc_dma_channel) 583 - data->wrdma_ch = v->alloc_dma_channel(drvdata, 584 - SNDRV_PCM_STREAM_CAPTURE); 585 - 586 - if (data->wrdma_ch < 0) { 587 - ret = data->wrdma_ch; 588 - goto capture_alloc_err; 589 - } 590 - 591 - drvdata->substream[data->wrdma_ch] = csubstream; 592 - 593 489 ret = snd_dma_alloc_pages(SNDRV_DMA_TYPE_DEV, 594 490 soc_runtime->platform->dev, 595 491 size, &csubstream->dma_buffer); 596 - if (ret) 597 - goto capture_alloc_err; 598 - 599 - ret = regmap_write(drvdata->lpaif_map, 600 - LPAIF_WRDMACTL_REG(v, data->wrdma_ch), 0); 601 492 if (ret) { 602 - dev_err(soc_runtime->dev, 603 - "%s() error writing to wrdmactl reg: %d\n", 604 - __func__, ret); 605 - goto capture_reg_err; 493 + dev_err(soc_runtime->dev, "Cannot allocate buffer(s)\n"); 494 + if (psubstream) 495 + snd_dma_free_pages(&psubstream->dma_buffer); 496 + return ret; 606 497 } 498 + 607 499 } 608 500 609 501 return 0; 610 - 611 - capture_reg_err: 612 - if (csubstream) 613 - snd_dma_free_pages(&csubstream->dma_buffer); 614 - 615 - capture_alloc_err: 616 - if (psubstream) 617 - snd_dma_free_pages(&psubstream->dma_buffer); 618 - 619 - playback_alloc_err: 620 - dev_err(soc_runtime->dev, "Cannot allocate buffer(s)\n"); 621 - 622 - return ret; 623 502 } 624 503 625 504 static void lpass_platform_pcm_free(struct snd_pcm *pcm) 626 505 { 627 - struct snd_soc_pcm_runtime *rt; 628 - struct lpass_data *drvdata; 629 - struct lpass_pcm_data *data; 630 - 
struct lpass_variant *v; 631 506 struct snd_pcm_substream *substream; 632 - int ch, i; 507 + int i; 633 508 634 509 for (i = 0; i < ARRAY_SIZE(pcm->streams); i++) { 635 510 substream = pcm->streams[i].substream; 636 511 if (substream) { 637 - rt = substream->private_data; 638 - drvdata = snd_soc_platform_get_drvdata(rt->platform); 639 - data = drvdata->private_data; 640 - 641 - ch = (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) 642 - ? data->rdma_ch 643 - : data->wrdma_ch; 644 - v = drvdata->variant; 645 - drvdata->substream[ch] = NULL; 646 - if (v->free_dma_channel) 647 - v->free_dma_channel(drvdata, ch); 648 - 649 512 snd_dma_free_pages(&substream->dma_buffer); 650 513 substream->dma_buffer.area = NULL; 651 514 substream->dma_buffer.addr = 0;
-1
sound/soc/qcom/lpass.h
··· 59 59 struct clk *pcnoc_mport_clk; 60 60 struct clk *pcnoc_sway_clk; 61 61 62 - void *private_data; 63 62 }; 64 63 65 64 /* Vairant data per each SOC */
+5 -5
sound/soc/samsung/ac97.c
··· 383 383 goto err4; 384 384 } 385 385 386 - ret = devm_snd_soc_register_component(&pdev->dev, &s3c_ac97_component, 387 - s3c_ac97_dai, ARRAY_SIZE(s3c_ac97_dai)); 388 - if (ret) 389 - goto err5; 390 - 391 386 ret = samsung_asoc_dma_platform_register(&pdev->dev, 392 387 ac97_pdata->dma_filter, 393 388 NULL, NULL); ··· 390 395 dev_err(&pdev->dev, "failed to get register DMA: %d\n", ret); 391 396 goto err5; 392 397 } 398 + 399 + ret = devm_snd_soc_register_component(&pdev->dev, &s3c_ac97_component, 400 + s3c_ac97_dai, ARRAY_SIZE(s3c_ac97_dai)); 401 + if (ret) 402 + goto err5; 393 403 394 404 return 0; 395 405 err5:
+10 -9
sound/soc/samsung/i2s.c
··· 1237 1237 dev_err(&pdev->dev, "Unable to get drvdata\n"); 1238 1238 return -EFAULT; 1239 1239 } 1240 - ret = devm_snd_soc_register_component(&sec_dai->pdev->dev, 1241 - &samsung_i2s_component, 1242 - &sec_dai->i2s_dai_drv, 1); 1240 + ret = samsung_asoc_dma_platform_register(&pdev->dev, 1241 + sec_dai->filter, "tx-sec", NULL); 1243 1242 if (ret != 0) 1244 1243 return ret; 1245 1244 1246 - return samsung_asoc_dma_platform_register(&pdev->dev, 1247 - sec_dai->filter, "tx-sec", NULL); 1245 + return devm_snd_soc_register_component(&sec_dai->pdev->dev, 1246 + &samsung_i2s_component, 1247 + &sec_dai->i2s_dai_drv, 1); 1248 1248 } 1249 1249 1250 1250 pri_dai = i2s_alloc_dai(pdev, false); ··· 1314 1314 if (quirks & QUIRK_PRI_6CHAN) 1315 1315 pri_dai->i2s_dai_drv.playback.channels_max = 6; 1316 1316 1317 + ret = samsung_asoc_dma_platform_register(&pdev->dev, pri_dai->filter, 1318 + NULL, NULL); 1319 + if (ret < 0) 1320 + goto err_disable_clk; 1321 + 1317 1322 if (quirks & QUIRK_SEC_DAI) { 1318 1323 sec_dai = i2s_alloc_dai(pdev, true); 1319 1324 if (!sec_dai) { ··· 1358 1353 if (ret < 0) 1359 1354 goto err_free_dai; 1360 1355 1361 - ret = samsung_asoc_dma_platform_register(&pdev->dev, pri_dai->filter, 1362 - NULL, NULL); 1363 - if (ret < 0) 1364 - goto err_free_dai; 1365 1356 1366 1357 pm_runtime_enable(&pdev->dev); 1367 1358
+11 -10
sound/soc/samsung/pcm.c
··· 565 565 pcm->dma_capture = &s3c_pcm_stereo_in[pdev->id]; 566 566 pcm->dma_playback = &s3c_pcm_stereo_out[pdev->id]; 567 567 568 - pm_runtime_enable(&pdev->dev); 569 - 570 - ret = devm_snd_soc_register_component(&pdev->dev, &s3c_pcm_component, 571 - &s3c_pcm_dai[pdev->id], 1); 572 - if (ret != 0) { 573 - dev_err(&pdev->dev, "failed to get register DAI: %d\n", ret); 574 - goto err5; 575 - } 576 - 577 568 ret = samsung_asoc_dma_platform_register(&pdev->dev, filter, 578 569 NULL, NULL); 579 570 if (ret) { ··· 572 581 goto err5; 573 582 } 574 583 575 - return 0; 584 + pm_runtime_enable(&pdev->dev); 576 585 586 + ret = devm_snd_soc_register_component(&pdev->dev, &s3c_pcm_component, 587 + &s3c_pcm_dai[pdev->id], 1); 588 + if (ret != 0) { 589 + dev_err(&pdev->dev, "failed to get register DAI: %d\n", ret); 590 + goto err6; 591 + } 592 + 593 + return 0; 594 + err6: 595 + pm_runtime_disable(&pdev->dev); 577 596 err5: 578 597 clk_disable_unprepare(pcm->pclk); 579 598 err4:
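The samsung/pcm.c hunk reorders probe so registration happens after the DMA platform is up, and adds an `err6` label that disables runtime PM on the new failure path. That is the standard acquire-in-order, release-in-reverse goto pattern; a toy model (hypothetical names, booleans standing in for the clk and runtime-PM state):

```c
#include <assert.h>
#include <stdbool.h>

static bool clk_on;
static bool pm_on;

/* Resources are acquired in order; a failure after pm_runtime_enable()
 * must unwind through both labels so nothing stays enabled. */
static int toy_probe(bool fail_late)
{
    clk_on = true;              /* clk_prepare_enable() stand-in */
    pm_on = true;               /* pm_runtime_enable() stand-in */
    if (fail_late)
        goto err_pm;            /* e.g. component registration failed */
    return 0;

err_pm:
    pm_on = false;              /* pm_runtime_disable() */
    clk_on = false;             /* clk_disable_unprepare() */
    return -1;
}
```

The bug the hunk fixes is exactly the missing `err6`-style label: enabling runtime PM before a step that can fail, without a matching disable on that step's error path.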
+9 -9
sound/soc/samsung/s3c2412-i2s.c
··· 168 168 s3c2412_i2s_pcm_stereo_in.addr = res->start + S3C2412_IISRXD; 169 169 s3c2412_i2s_pcm_stereo_in.filter_data = pdata->dma_capture; 170 170 171 - ret = s3c_i2sv2_register_component(&pdev->dev, -1, 172 - &s3c2412_i2s_component, 173 - &s3c2412_i2s_dai); 174 - if (ret) { 175 - pr_err("failed to register the dai\n"); 176 - return ret; 177 - } 178 - 179 171 ret = samsung_asoc_dma_platform_register(&pdev->dev, 180 172 pdata->dma_filter, 181 173 NULL, NULL); 182 - if (ret) 174 + if (ret) { 183 175 pr_err("failed to register the DMA: %d\n", ret); 176 + return ret; 177 + } 178 + 179 + ret = s3c_i2sv2_register_component(&pdev->dev, -1, 180 + &s3c2412_i2s_component, 181 + &s3c2412_i2s_dai); 182 + if (ret) 183 + pr_err("failed to register the dai\n"); 184 184 185 185 return ret; 186 186 }
+8 -8
sound/soc/samsung/s3c24xx-i2s.c
··· 474 474 s3c24xx_i2s_pcm_stereo_in.addr = res->start + S3C2410_IISFIFO; 475 475 s3c24xx_i2s_pcm_stereo_in.filter_data = pdata->dma_capture; 476 476 477 - ret = devm_snd_soc_register_component(&pdev->dev, 478 - &s3c24xx_i2s_component, &s3c24xx_i2s_dai, 1); 479 - if (ret) { 480 - pr_err("failed to register the dai\n"); 481 - return ret; 482 - } 483 - 484 477 ret = samsung_asoc_dma_platform_register(&pdev->dev, 485 478 pdata->dma_filter, 486 479 NULL, NULL); 487 - if (ret) 480 + if (ret) { 488 481 pr_err("failed to register the dma: %d\n", ret); 482 + return ret; 483 + } 484 + 485 + ret = devm_snd_soc_register_component(&pdev->dev, 486 + &s3c24xx_i2s_component, &s3c24xx_i2s_dai, 1); 487 + if (ret) 488 + pr_err("failed to register the dai\n"); 489 489 490 490 return ret; 491 491 }
+9 -10
sound/soc/samsung/spdif.c
··· 416 416 goto err3; 417 417 } 418 418 419 - dev_set_drvdata(&pdev->dev, spdif); 420 - 421 - ret = devm_snd_soc_register_component(&pdev->dev, 422 - &samsung_spdif_component, &samsung_spdif_dai, 1); 423 - if (ret != 0) { 424 - dev_err(&pdev->dev, "fail to register dai\n"); 425 - goto err4; 426 - } 427 - 428 419 spdif_stereo_out.addr_width = 2; 429 420 spdif_stereo_out.addr = mem_res->start + DATA_OUTBUF; 430 421 filter = NULL; ··· 423 432 spdif_stereo_out.filter_data = spdif_pdata->dma_playback; 424 433 filter = spdif_pdata->dma_filter; 425 434 } 426 - 427 435 spdif->dma_playback = &spdif_stereo_out; 428 436 429 437 ret = samsung_asoc_dma_platform_register(&pdev->dev, filter, 430 438 NULL, NULL); 431 439 if (ret) { 432 440 dev_err(&pdev->dev, "failed to register DMA: %d\n", ret); 441 + goto err4; 442 + } 443 + 444 + dev_set_drvdata(&pdev->dev, spdif); 445 + 446 + ret = devm_snd_soc_register_component(&pdev->dev, 447 + &samsung_spdif_component, &samsung_spdif_dai, 1); 448 + if (ret != 0) { 449 + dev_err(&pdev->dev, "fail to register dai\n"); 433 450 goto err4; 434 451 } 435 452
+5 -1
sound/soc/sti/uniperif_player.c
··· 614 614 iec958->status[3] = ucontrol->value.iec958.status[3]; 615 615 mutex_unlock(&player->ctrl_lock); 616 616 617 - uni_player_set_channel_status(player, NULL); 617 + if (player->substream && player->substream->runtime) 618 + uni_player_set_channel_status(player, 619 + player->substream->runtime); 620 + else 621 + uni_player_set_channel_status(player, NULL); 618 622 619 623 return 0; 620 624 }
+10 -9
sound/soc/sunxi/sun4i-codec.c
··· 765 765 766 766 card = devm_kzalloc(dev, sizeof(*card), GFP_KERNEL); 767 767 if (!card) 768 - return NULL; 768 + return ERR_PTR(-ENOMEM); 769 769 770 770 card->dai_link = sun4i_codec_create_link(dev, &card->num_links); 771 771 if (!card->dai_link) 772 - return NULL; 772 + return ERR_PTR(-ENOMEM); 773 773 774 774 card->dev = dev; 775 775 card->name = "sun4i-codec"; ··· 829 829 return PTR_ERR(scodec->clk_module); 830 830 } 831 831 832 - /* Enable the bus clock */ 833 - if (clk_prepare_enable(scodec->clk_apb)) { 834 - dev_err(&pdev->dev, "Failed to enable the APB clock\n"); 835 - return -EINVAL; 836 - } 837 - 838 832 scodec->gpio_pa = devm_gpiod_get_optional(&pdev->dev, "allwinner,pa", 839 833 GPIOD_OUT_LOW); 840 834 if (IS_ERR(scodec->gpio_pa)) { ··· 836 842 if (ret != -EPROBE_DEFER) 837 843 dev_err(&pdev->dev, "Failed to get pa gpio: %d\n", ret); 838 844 return ret; 845 + } 846 + 847 + /* Enable the bus clock */ 848 + if (clk_prepare_enable(scodec->clk_apb)) { 849 + dev_err(&pdev->dev, "Failed to enable the APB clock\n"); 850 + return -EINVAL; 839 851 } 840 852 841 853 /* DMA configuration for TX FIFO */ ··· 876 876 } 877 877 878 878 card = sun4i_codec_create_card(&pdev->dev); 879 - if (!card) { 879 + if (IS_ERR(card)) { 880 + ret = PTR_ERR(card); 880 881 dev_err(&pdev->dev, "Failed to create our card\n"); 881 882 goto err_unregister_codec; 882 883 }
+2 -5
tools/power/cpupower/utils/cpufreq-set.c
··· 296 296 struct cpufreq_affected_cpus *cpus; 297 297 298 298 if (!bitmask_isbitset(cpus_chosen, cpu) || 299 - cpupower_is_cpu_online(cpu)) 299 + cpupower_is_cpu_online(cpu) != 1) 300 300 continue; 301 301 302 302 cpus = cpufreq_get_related_cpus(cpu); ··· 316 316 cpu <= bitmask_last(cpus_chosen); cpu++) { 317 317 318 318 if (!bitmask_isbitset(cpus_chosen, cpu) || 319 - cpupower_is_cpu_online(cpu)) 320 - continue; 321 - 322 - if (cpupower_is_cpu_online(cpu) != 1) 319 + cpupower_is_cpu_online(cpu) != 1) 323 320 continue; 324 321 325 322 printf(_("Setting cpu: %d\n"), cpu);
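The cpupower hunk fixes a tri-state return being treated as a boolean: `cpupower_is_cpu_online()` returns 1 for online, 0 for offline, and negative on error, so the old truthiness check skipped online CPUs and let errors through. A small sketch of the corrected `!= 1` check (the `is_cpu_online` model below is hypothetical, standing in for the sysfs-backed helper):

```c
/* Hypothetical model of cpupower_is_cpu_online():
 * 1 = online, 0 = offline, negative = error reading sysfs. */
static int is_cpu_online(int cpu)
{
    if (cpu < 0)
        return -1;      /* pretend sysfs read failed */
    return cpu % 2;     /* pretend odd CPUs are online */
}

/* The fixed predicate: skip unless the helper positively reports
 * "online".  Offline (0) and error (<0) are both skipped, where the
 * old boolean test would have skipped online CPUs instead. */
static int should_skip(int cpu)
{
    return is_cpu_online(cpu) != 1;
}
```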
+27 -14
virt/kvm/arm/vgic/vgic-mmio.c
··· 453 453 return container_of(dev, struct vgic_io_device, dev); 454 454 } 455 455 456 - static bool check_region(const struct vgic_register_region *region, 456 + static bool check_region(const struct kvm *kvm, 457 + const struct vgic_register_region *region, 457 458 gpa_t addr, int len) 458 459 { 459 - if ((region->access_flags & VGIC_ACCESS_8bit) && len == 1) 460 - return true; 461 - if ((region->access_flags & VGIC_ACCESS_32bit) && 462 - len == sizeof(u32) && !(addr & 3)) 463 - return true; 464 - if ((region->access_flags & VGIC_ACCESS_64bit) && 465 - len == sizeof(u64) && !(addr & 7)) 466 - return true; 460 + int flags, nr_irqs = kvm->arch.vgic.nr_spis + VGIC_NR_PRIVATE_IRQS; 461 + 462 + switch (len) { 463 + case sizeof(u8): 464 + flags = VGIC_ACCESS_8bit; 465 + break; 466 + case sizeof(u32): 467 + flags = VGIC_ACCESS_32bit; 468 + break; 469 + case sizeof(u64): 470 + flags = VGIC_ACCESS_64bit; 471 + break; 472 + default: 473 + return false; 474 + } 475 + 476 + if ((region->access_flags & flags) && IS_ALIGNED(addr, len)) { 477 + if (!region->bits_per_irq) 478 + return true; 479 + 480 + /* Do we access a non-allocated IRQ? */ 481 + return VGIC_ADDR_TO_INTID(addr, region->bits_per_irq) < nr_irqs; 482 + } 467 483 468 484 return false; 469 485 } ··· 493 477 494 478 region = vgic_find_mmio_region(iodev->regions, iodev->nr_regions, 495 479 addr - iodev->base_addr); 496 - if (!region || !check_region(region, addr, len)) { 480 + if (!region || !check_region(vcpu->kvm, region, addr, len)) { 497 481 memset(val, 0, len); 498 482 return 0; 499 483 } ··· 526 510 527 511 region = vgic_find_mmio_region(iodev->regions, iodev->nr_regions, 528 512 addr - iodev->base_addr); 529 - if (!region) 530 - return 0; 531 - 532 - if (!check_region(region, addr, len)) 513 + if (!region || !check_region(vcpu->kvm, region, addr, len)) 533 514 return 0; 534 515 535 516 switch (iodev->iodev_type) {
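The `check_region()` rewrite above replaces three open-coded tests with a single switch that maps the access length to the required flag, followed by an alignment check and a bound on the INTID. A standalone sketch of the flag-and-alignment part (the `bits_per_irq`/`nr_irqs` range check is elided here for brevity):

```c
#include <stdbool.h>
#include <stdint.h>

#define VGIC_ACCESS_8bit  (1 << 0)
#define VGIC_ACCESS_32bit (1 << 1)
#define VGIC_ACCESS_64bit (1 << 2)

/* Same shape as the kernel's IS_ALIGNED(): len is a power of two. */
#define IS_ALIGNED(x, a) (((x) & ((a) - 1)) == 0)

/* Derive the required access flag from the length, then require the
 * region to support that width and the address to be naturally
 * aligned, as in the patched check_region(). */
static bool access_ok(unsigned int region_flags, uint64_t addr, int len)
{
    int flags;

    switch (len) {
    case 1:  flags = VGIC_ACCESS_8bit;  break;
    case 4:  flags = VGIC_ACCESS_32bit; break;
    case 8:  flags = VGIC_ACCESS_64bit; break;
    default: return false;
    }

    return (region_flags & flags) && IS_ALIGNED(addr, len);
}
```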
+7 -7
virt/kvm/arm/vgic/vgic-mmio.h
··· 50 50 #define VGIC_ADDR_IRQ_MASK(bits) (((bits) * 1024 / 8) - 1) 51 51 52 52 /* 53 - * (addr & mask) gives us the byte offset for the INT ID, so we want to 54 - * divide this with 'bytes per irq' to get the INT ID, which is given 55 - * by '(bits) / 8'. But we do this with fixed-point-arithmetic and 56 - * take advantage of the fact that division by a fraction equals 57 - * multiplication with the inverted fraction, and scale up both the 58 - * numerator and denominator with 8 to support at most 64 bits per IRQ: 53 + * (addr & mask) gives us the _byte_ offset for the INT ID. 54 + * We multiply this by 8 the get the _bit_ offset, then divide this by 55 + * the number of bits to learn the actual INT ID. 56 + * But instead of a division (which requires a "long long div" implementation), 57 + * we shift by the binary logarithm of <bits>. 58 + * This assumes that <bits> is a power of two. 59 59 */ 60 60 #define VGIC_ADDR_TO_INTID(addr, bits) (((addr) & VGIC_ADDR_IRQ_MASK(bits)) * \ 61 - 64 / (bits) / 8) 61 + 8 >> ilog2(bits)) 62 62 63 63 /* 64 64 * Some VGIC registers store per-IRQ information, with a different number
+12
virt/kvm/arm/vgic/vgic.c
··· 273 273 * no more work for us to do. 274 274 */ 275 275 spin_unlock(&irq->irq_lock); 276 + 277 + /* 278 + * We have to kick the VCPU here, because we could be 279 + * queueing an edge-triggered interrupt for which we 280 + * get no EOI maintenance interrupt. In that case, 281 + * while the IRQ is already on the VCPU's AP list, the 282 + * VCPU could have EOI'ed the original interrupt and 283 + * won't see this one until it exits for some other 284 + * reason. 285 + */ 286 + if (vcpu) 287 + kvm_vcpu_kick(vcpu); 276 288 return false; 277 289 } 278 290